Progress Monitoring: Response to Intervention Procedures School Psychologists Commonly Use to Monitor Students’ Progress
Abstract: Since the reauthorization of the Individuals with Disabilities Education Act (IDEA) in 2004, increased pressure to identify students, intervene early, and use research-based programs has made Response to Intervention (RTI) the method of choice. This study provides empirical data on the procedures commonly used to monitor student progress within an RTI program. A survey was mailed to a random sample of 140 members of the New York Association of School Psychologists (NYASP). The survey included questions on demographic information, implementation procedures, and the practice of student progress monitoring. The data were analyzed with descriptive and inferential statistics. The analysis indicated that teachers typically use evidence-based progress monitoring procedures, such as DIBELS and CBM, to monitor students’ progress within an RTI model.
Progress Monitoring: Response to Intervention Procedures School Psychologists Commonly Use to Monitor Students’ Progress
The reauthorization of the Individuals with Disabilities Education Act (IDEA) in 2004 brought significant changes to special education. Specifically, the revisions call for a change in the procedure for determining a specific learning disability (Prasse, 2006). Thus, after years of practice, the discrepancy model for determining a child’s eligibility has given way to a Response to Intervention (RTI) approach. Although the new law does not require RTI, it strongly advises the use of a problem-solving delivery system and specifically mentions RTI as one such approach (Prasse, 2006). This, coupled with increased pressure to identify students, intervene early, and use research-based programs, has made RTI the method of choice (U.S. Department of Education, Office of Special Education and Rehabilitative Services [OSERS], 2002). From the No Child Left Behind Act (NCLB), to the President’s Commission on Excellence in Special Education report (OSERS, 2002), to IDEA 2004, numerous recommendations propose the use of a problem-solving model. Moreover, not only will effective instruction and progress monitoring be required before recommending special education services, but the recommendations also promote educational accountability (Barnett, Daly III, Jones, & Lentz Jr., 2004). With educational accountability in play, teachers will be required to employ evidence-based instruction and interventions and to provide documented evidence that students are receiving this instruction. For teachers, progress monitoring procedures will become essential to evaluating and modifying their instructional methods. Yet there is little discussion of how teachers and schools are monitoring student progress within the RTI method.
Accordingly, the purpose of this study was to determine which procedures are commonly used to monitor students’ progress when implementing RTI.
For the purposes of this research, a school psychologist is defined as a certified psychologist currently working in the school system. Response to Intervention is defined as a multilayered prevention system that determines a student’s response to scientific, research-based intervention(s) (Canter, 2006; Fuchs & Fuchs, 2006). Finally, progress monitoring procedures are operationally defined as evidence-based procedures used to monitor a student’s progress in order to evaluate an intervention or instruction, such as Curriculum Based Measurement and/or the Dynamic Indicators of Basic Early Literacy Skills.
Typically, RTI is described as a cyclical approach within a problem-solving model that focuses on improving student performance (Canter, 2006; Linan-Thompson, Vaughn, Prater, & Cirino, 2006). Most problem-solving models currently in use favor a three-tiered structure (Compton, Fuchs, Fuchs, & Bryant, 2006). The primary and secondary tiers aim to prevent problems through the use of effective interventions (Reschly, 2005). More specifically, during Tier 1, or primary prevention, all students receive evidence-based instruction in regular general education classrooms (Compton et al., 2006; Fuchs & Fuchs, 2006; Reschly, 2005). At the Tier 1 level, progress monitoring and standard or benchmark testing is recommended at least three times per year (Vaughn, 2003). When students do not respond to Tier 1 instruction, or respond but still fall behind grade expectations, they move to Tier 2. During Tier 2, students receive more intensive, systematic, evidence-based instruction and intervention within a small group (Compton et al., 2006; Fuchs & Fuchs, 2006; Reschly, 2005). Although Vaughn (2003) suggests progress monitoring twice a month during the secondary tier, no specific progress monitoring procedures or tools were mentioned. In some problem-solving models, Tier 3 refers to a special education evaluation, referral, or placement (Compton et al., 2006). Reschly (2005) states that advocates of four-tiered models define the fourth tier as special education services and describe the third tier as yet another level of intervention. In this case, Tier 3 is described as intense, individualized instruction provided by a specialist, delivered more frequently and for a longer duration (Reschly, 2005; Vaughn, 2003). There is some debate as to how many tiers would be ideal within a school-based problem-solving model (Reschly, 2005).
However, regardless of the number of tiers within the model, it is important to recognize that progress monitoring occurs regularly throughout each tier.
Progress monitoring, or intervention responsiveness assessment (IRA), should use standardized procedures so that it can be implemented and understood by school personnel across states and districts (Fuchs, 2003). It should also include a specific set of measurements, such as the time and content of measurement; set criteria to determine responsiveness, meaning that only certain criteria (e.g., increased grades, percentage of progress) will be used to determine whether a student responded to the intervention; and an operational definition of the nature of the intervention (Fuchs, 2003). Since most research identifies Curriculum Based Measurement (CBM) and/or the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) as evidence-based progress monitoring procedures, it is essential to review them briefly.
CBM was originally developed to improve special education teachers’ instruction by collecting repeated measurement data to evaluate the effectiveness of that instruction (Deno, 2003). Deno (2003) defines CBM as “a specific set of standard procedures” characterized by reliable and valid data. These data are collected through direct observational methods, in frequent and brief administrations, to sample a specific level of performance (Deno, 2003). CBM can be used to improve programs, predict performance, enhance teachers’ instructional planning, develop norms for decision making, increase student and teacher awareness of goals, screen and identify students at risk for academic failure, reduce bias in assessment, assess English Language Learners (ELL), and more (Deno, 2003).
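To make the repeated-measurement idea concrete, the sketch below computes a student’s weekly rate of improvement from a series of brief probe scores; fitting a slope to repeated scores is one common way such data are used to judge whether instruction is working. The probe scores and the `cbm_slope` helper are hypothetical illustrations, not data or procedures from this study.

```python
# Illustrative sketch (hypothetical data, not from the study): estimating a
# student's rate of improvement from repeated, brief CBM-style probes by
# fitting an ordinary least-squares slope (average score gain per week).
from statistics import mean

def cbm_slope(weeks, scores):
    """Least-squares slope of scores over weeks: average gain per week."""
    w_bar, s_bar = mean(weeks), mean(scores)
    num = sum((w - w_bar) * (s - s_bar) for w, s in zip(weeks, scores))
    den = sum((w - w_bar) ** 2 for w in weeks)
    return num / den

# Hypothetical words-correct-per-minute scores over six weekly probes.
weeks = [1, 2, 3, 4, 5, 6]
scores = [22, 25, 24, 28, 31, 33]
print(round(cbm_slope(weeks, scores), 2))  # average weekly gain
```

A flat or negative slope across several probes would suggest the current instruction or intervention needs to be modified.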
DIBELS is an evidence-based progress monitoring procedure often associated with RTI. DIBELS was designed to monitor a student’s growth in early literacy skills, to identify students who need additional instructional support, and to evaluate the effectiveness of instruction and interventions (Good III, Gruba, & Kaminski, 2002). It is a practical progress monitoring procedure because it is reliable, valid, brief, well organized, easy to administer, and can be used repeatedly (Good et al., 2002). One of the major themes consistent across IDEA, NCLB, and the President’s Commission report is the need for preventive methods. Since literacy develops during infancy and the quality of literacy development contributes to later school success, it is imperative to ensure children are acquiring these skills long before they enter school (Good et al., 2002). As a result, it is important to recognize that DIBELS can serve not only as a progress monitoring procedure but as a preventive measure as well.
For school psychologists, RTI procedures have important implications (Canter, 2006). As mentioned above, the 2004 revisions of IDEA address the procedures for determining a Specific Learning Disability (SLD) (Prasse, 2006; Canter, 2006). Thus, after years of practice, the discrepancy model for determining a child’s eligibility has given way to an RTI approach (Prasse, 2006; Canter, 2006; Bradley, Danielson, & Doolittle, 2005; Mastropieri & Scruggs, 2005). Although the new law does not require RTI, it strongly advises the use of a problem-solving delivery system and specifically mentions RTI as one such approach (IDEA, 2004; Prasse, 2006). The President’s Commission report (2002) strongly suggests incorporating an RTI approach with continuous progress monitoring, but it never recommends which progress monitoring procedures should be used, who should monitor progress, when progress monitoring should occur, or for how long. These factors contribute a great deal to a student’s response to instruction and interventions. Determining the response to intervention is important not only for deciding where students belong within the tiers, but also for providing additional instruction or modifications to current interventions (Linan-Thompson et al., 2006). It is also worth noting that IDEA, NCLB, and the President’s Commission report are written as broad frameworks, probably so that states can interpret them and establish more specific guidelines.
In addition, research on RTI indicates that some public schools have already implemented the RTI model (Canter, 2006). Learning how these schools monitor students’ progress is the first step in the success of an RTI approach. For school psychologists, the information collected will not only help provide more effective instruction for students, but will also help identify struggling students early in their academic careers. In light of the above discussion, this study answered the following research questions:
- How do school psychologists monitor, or observe the monitoring of, students’ progress when using a Response to Intervention approach?
- How often does progress monitoring occur?
- Who mainly monitors progress?
- What type of RTI training have teachers received?
Based on current trends, it was hypothesized that the monitoring of progress is based on student end products rather than evidence-based evaluation procedures. These end products are an accumulation of students’ homework, classwork, and tests. Because they are not evidence-based evaluation procedures, using end products may or may not effectively assess the child’s progress, instruction, and intervention. It was further hypothesized that teachers are primarily responsible for progress monitoring. Finally, it was hypothesized that because of teachers’ time restrictions, schedules, and/or lack of training, progress monitoring procedures may not be completed in a timely manner.
Method
Sample
The participants consisted of 140 certified school psychologists who were currently working in the school system. The participants were randomly recruited from the New York Association of School Psychologists (NYASP) membership directory.
The NYASP population is estimated at approximately 1,500 school psychologists in New York State and surrounding areas. A proportionate sampling technique was used to select the sample, so that individual subjects were selected on an equitable basis. Based on this population size, an economical sample sufficient to ensure valid results was estimated at approximately 230 returned surveys. However, only 140 surveys were mailed. Of the 140 surveys mailed, 33 were completed and returned.
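The random-selection step described above can be sketched as follows. This is an illustrative assumption of how such a draw might be scripted, not the procedure the investigators actually used, and the directory entries are placeholders rather than actual NYASP data.

```python
# Hypothetical sketch of drawing a mailing sample from a membership
# directory without replacement, so each member has an equal chance of
# selection. The directory entries are placeholders, not real members.
import random

def draw_sample(directory, n, seed=None):
    """Select n members at random, without replacement."""
    rng = random.Random(seed)  # seeded for a reproducible draw
    return rng.sample(directory, n)

directory = [f"member_{i}" for i in range(1500)]  # population of ~1,500
sample = draw_sample(directory, 140, seed=42)     # 140 surveys mailed
print(len(sample), len(set(sample)))              # 140 unique recipients
```

Seeding the generator is optional; it simply makes the draw repeatable for documentation purposes.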
Instrument
A cover letter was designed to obtain consent and inform participants of the nature of the study. The cover letter specifically stated that the confidentiality of the participants would be fully maintained and that no names, personal information, or individual data would be disclosed at any time. The cover letter not only informed subjects of the voluntary nature of the survey, but also discussed the relevance of the research. In addition, it notified participants that completing the survey confirmed their consent.
The data were collected in the form of a field survey, chosen because it was a convenient and anonymous way to obtain information about progress monitoring in the schools. The survey directly questioned school psychologists on the use of RTI and the types of progress monitoring procedures currently used in their school(s). The survey items covered respondents’ demographic information, such as gender, ethnic background, years practicing school psychology, and level of education, as well as the type of RTI implementation in the school and the practice of student progress monitoring. The survey included open-ended, restricted, and partially open-ended items, and was designed in a visually pleasing format with related items presented together. All respondents received identical surveys along with the cover letter. The items were reviewed to ensure they were clear, well written, and appropriate for the sample. Please refer to Appendix A for a copy of the cover letter and Appendix B for a copy of the survey.
Procedure(s)
An email was sent to NYASP requesting its membership directory. NYASP replied requesting a copy of the abstract, cover letter, and survey; once supplied with this information, NYASP emailed its membership directory to the project investigator as an Excel spreadsheet. Subjects were randomly selected from the spreadsheet, and mailing labels were created using the Microsoft Word mail merge wizard. The surveys were placed in addressed envelopes and mailed directly to the potential participants along with the cover letter and self-addressed, stamped return envelopes.
Results
Data Analysis
The research design in this study was a survey. SPSS (version 11.0) was used to conduct the statistical analyses. The data were analyzed first with descriptive statistics (e.g., frequencies, percentages, means, and graphs). Further analysis used inferential statistics (e.g., chi-square tests) to examine aspects of student progress monitoring (e.g., who conducts the monitoring, the procedures used to monitor student progress, etc.). The criterion for statistical significance for all comparisons was set at p < .05. There was no significant difference in how often progress monitoring occurred, χ²(4, N = 10) = .910, p > .05.
The amount of RTI training that teachers and special education teachers received showed several significant differences, including national/regional conferences, courses, self-study, information group training, website information, and not applicable, χ²(1, N = 33), p = .0001. The remaining analyses on the types of training teachers received were not statistically significant (p > .05).
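As an illustration of the kind of chi-square comparison reported above, the sketch below computes a goodness-of-fit statistic against equal expected frequencies using only plain Python. The counts are hypothetical and do not reproduce the study’s SPSS analyses.

```python
# Illustrative chi-square goodness-of-fit test with hypothetical counts
# (e.g., who monitors progress: teacher, special education teacher,
# trained teacher's assistant). Not the study's raw SPSS data.

def chi_square_gof(observed):
    """Chi-square statistic against equal expected counts; df = k - 1."""
    k = len(observed)
    expected = sum(observed) / k  # equal expected frequency per category
    stat = sum((o - expected) ** 2 / expected for o in observed)
    return stat, k - 1

stat, df = chi_square_gof([10, 4, 2])
print(round(stat, 2), df)  # chi-square of about 6.5 with df = 2
# For df = 2 the .05 critical value is 5.99, so counts this uneven would
# be judged significantly different from an even split.
```

The same statistic generalizes to tests of independence by deriving expected counts from row and column totals rather than an even split.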
Discussion
It was hypothesized that the monitoring of progress would be based on student end products rather than evidence-based evaluation procedures. However, the data indicated that students’ progress was monitored through evidence-based evaluation procedures, with DIBELS used approximately 63.6% of the time and CBM approximately 54.5% (Table 2). Students’ end products were used only 27.3% of the time. A significant difference was found for both dynamic assessment and portfolio assessment, which were the least likely to be used as progress monitoring procedures. Since progress is being monitored with evidence-based procedures, it is likely assessing the child’s progress, instruction, and intervention effectively.
With respect to the research question of who mainly monitors progress, it was hypothesized that teachers are primarily responsible for progress monitoring. Although no significant differences were found among the types of personnel who monitored progress, the frequency analysis indicated that progress was monitored 62.5% of the time by teachers, 25% of the time by special education teachers, and 12.5% of the time by a trained teacher’s assistant (Table 4).
It was also hypothesized that, since teachers lack training in RTI, progress monitoring procedures may not be completed in a timely manner. The frequency data supported this hypothesis in that 42.4% of teachers did not receive any RTI training. Data analysis suggested that the monitoring of progress is completed based on the tier (Table 3). Since the teachers who are monitoring progress lack training in RTI and the tiers, it is hard to tell from the data whether this monitoring is being done properly, or at all.
Conclusions
The information collected in this study provides valuable insight into the progress monitoring procedures used in RTI. This is especially important to school psychologists now that the discrepancy model for determining a child’s eligibility has given way to an RTI approach. IDEA, NCLB, and the President’s Commission report all strongly advise the use of a problem-solving delivery system. The RTI model is becoming increasingly popular, well represented in the literature, and widely put into practice. It is therefore important to have current research on RTI that either supports its use or exposes any major problems or flaws.
Since the RTI model is growing rapidly, school psychologists will likely need to expand their roles in the areas of intervention and related assessments. This may simply mean educating teachers and staff on RTI and the importance of adequate progress monitoring procedures, or it may mean taking an active role in the progress monitoring itself. Essentially, the role of school psychologists will evolve with whatever model is needed within the school system.
The data from this study should be interpreted with caution given some weaknesses, such as the small sample size (n = 33). The appropriate sample size was estimated at approximately 230 subjects; since only 140 surveys were mailed, the sample was much smaller than this. Because the subject pool was drawn from a random sample of NYASP members, it was limited to school psychologists who were located in New York and were members of NYASP. These limitations make it difficult to generalize the results, and the study may have yielded different results if the population had not been limited to New York. In addition, there was weak control of extraneous variables, such as location or whether all 140 surveys mailed were actually received by certified school psychologists. The list obtained from NYASP included students and retirees who should not have answered the survey. These extraneous variables may have been impossible to amend but may also have significantly altered the data collected.
This study provided a review of progress monitoring within the RTI model. Future research should examine the use of RTI across the United States with an adequate sample of the population. In addition, information on the specific type and amount of progress monitoring within each tier of RTI is necessary to determine its efficiency. Since progress monitoring procedures are not standardized or regulated, future research is needed to provide this information so that the data collected through progress monitoring can be implemented and understood by all school personnel across states and districts.
References
Barnett, D., Daly III, E., Jones, K., & Lentz Jr., F. (2004). Response to Intervention: Empirically based special service decisions from single-case designs of increasing and decreasing intensity. The Journal of Special Education, 38, 66-79.
Bordens, K. & Abbott, B. (2006). Choosing and Using Participants and Subjects: Pragmatic and ethical considerations. In Research and Design Methods: A process approach (6th ed.) New York: McGraw-Hill
Bradley, R., Danielson, L., & Doolittle, J. (2005). Response to Intervention. Journal of Learning Disabilities, 38, 485-486.
Canter, A. (2006). Problem Solving and RTI: New roles for school psychologists. Communiqué, 34. Retrieved November 18, 2006, from http://www.nasponline.org/publications/cq/cq345rti.aspx
Compton, D., Fuchs, D., Fuchs, L., & Bryant, J. (2006). Selecting At-Risk Readers in First Grade for Early Intervention: A two-year longitudinal study of decision rules and procedures. Journal of Educational Psychology, 98, 394-409.
Deno, S. (2003). Developments in Curriculum Based Measurements. The Journal of Special Education, 37, 184-192.
Fagan, T. & Wise, P. (2000). Introduction to the field of school psychology. In School Psychology: Past, present, and future (2nd ed., pp. 1-22). Bethesda, MD: National Association of School Psychologists.
Fuchs, L. & Fuchs, D. (2006). A framework for building capacity for responsiveness to intervention. School Psychology Review, 35, 621-626.
Fuchs, L. & Fuchs, D. (2006). What is Scientifically-Based Research on Progress Monitoring? National Center on Student Progress Monitoring. Retrieved December 7, 2006, from http://www.osepideasthatwork.org/toolkit/pdf/ScientificallyBasedResearch.pdf
Fuchs, L. (2003). Assessing Intervention Responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice, 18, 172-186.
Good, R., Gruba, J., & Kaminski, R. (2002). Best Practices in Using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an Outcomes Driven Model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (Vol. 1, pp. 699-720). Washington, DC: NASP.
Individuals with Disabilities Education Improvement Act of 2004, 20 U.S.C. § 1400 et seq. Retrieved November 27, 2006, from http://www.ed.gov/legislation/FedRegister/proprule/2004-4/122904a.pdf
Linan-Thompson, S., Vaughn, S., Prater, K., & Cirino, P. (2006). The Response to Intervention of English Language Learners at Risk for Reading Problems. Journal of Learning Disabilities, 39, 390-398.
Mastropieri, M. & Scruggs, T. (2005). Feasibility and Consequences of Response to Intervention: Examination of the issues and scientific evidence as a model for the identification of individuals with learning disabilities. Journal of Learning Disabilities, 38, 525-531.
No Child Left Behind Act of 2001, P.L. 107-110, 115 Stat. 1425. Retrieved November 20, 2006, from http://www.ed.gov/policy/elsec/leg/esea02/107-110.pdf
Prasse, D. P. (2006). Legal Supports for Problem-Solving Systems. Remedial and Special Education, 27, 7-15.
President’s Commission on Excellence in Special Education. (2002). A new era: Revitalizing special education for children and their families. U.S. Department of Education, Office of Special Education and Rehabilitative Services. Retrieved November 25, 2006, from http://www.ed.gov/inits/commissionsboards/whspecialeducation/reports/images/Pres_Rep.pdf
Reschly, D. (2005). Learning Disabilities Identification: Primary intervention, secondary intervention, and then what? Journal of Learning Disabilities, 38, 510-515.
Vaughn, S. (2003, December). How many tiers are needed for response to intervention to achieve acceptable prevention outcomes? Presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.