Literacy Assessments within an MTSS Framework
Within a Multi-Tiered System of Support (MTSS) framework, decision making begins with assessment. Through the assessments administered, problem-solving teams can gauge the overall “health” of a school by determining how many students are on target to meet grade-level expectations versus how many are not. This can help schools determine which elements of the core curriculum need to be supplemented or adjusted. Additional assessments can then be used to examine the performance of grade levels, classrooms, and individual students. Multiple forms of assessment make up a comprehensive assessment system within a school, but the three most commonly used to make decisions as part of the problem-solving process are universal screeners, diagnostic assessments, and progress monitoring.
Universal Screeners
A universal screener is designed to determine whether a student is at risk for reading difficulty. Screeners are not designed to be diagnostic or necessarily prescriptive. They are typically brief, taking one to three minutes to administer, and can be given individually or in a group depending upon the assessment. Screeners do not necessarily assess all subcomponents of literacy but tend to assess skills that are highly correlated with later reading success (International Dyslexia Association, 2019). These assessments give an overview of which students may need literacy interventions. Universal screeners are typically given to all students two to three times per year and have standardized norms.
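To make the screening step concrete, here is a minimal Python sketch of how a team might flag students against a benchmark cut score. The cut score, measure, and student scores are hypothetical placeholders, not values from any published screener.

```python
# Minimal sketch: flagging at-risk students from universal screener results.
# The benchmark cut score and all student scores below are hypothetical.

BENCHMARK_CUT = 40  # hypothetical benchmark cut score for this screening window

screener_scores = {
    "Student A": 52,
    "Student B": 38,
    "Student C": 41,
    "Student D": 25,
}

# Students scoring below the cut are flagged for follow-up assessment.
at_risk = {name: s for name, s in screener_scores.items() if s < BENCHMARK_CUT}

on_track_pct = 100 * (len(screener_scores) - len(at_risk)) / len(screener_scores)
print(f"On track for grade-level expectations: {on_track_pct:.0f}%")
for name, score in at_risk.items():
    print(f"{name}: {score} (below cut of {BENCHMARK_CUT}) -> flag for diagnostics")
```

Aggregating the flags this way is also how a problem-solving team gets the school-level “health” view described above, one grade level or classroom at a time.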
When thinking about universal screeners, it’s important to remember that a screener tells us who needs help, but not exactly where the underlying issue begins. Screeners are designed to tell us who may need additional assessment or instruction so that issues can be identified and remediated early (Peterson, 2010). Think about when you go to the doctor each year for your annual physical. The nurse and the doctor will check your weight and blood pressure against standardized norms. They will also do a quick physical exam to look for any symptoms that may indicate another issue. This is like a universal screener: the doctor is looking for general indicators of poor health, just as a teacher uses a universal screener to determine whether there are any red flags indicating that a student is not developing the literacy skills needed to be successful. Under the Read to Achieve Act in North Carolina, all students in grades K-3 are assessed with the state’s adopted universal literacy screener three times per year.
Universal screeners typically assess skills that correlate strongly with future reading success. In correlational studies, the strength of a relationship is expressed as a correlation coefficient, r, on a scale from 0 to 1, where 0 indicates no correlation and 1 indicates a perfect correlation (Snow, Burns, & Griffin, 1998). North Carolina previously used mClass Reading 3D as its universal screener for grades K-3 and last year used iStation. The chart below shows correlating predictors of future reading success and which assessments from each of these two screeners measure each skill. Additional research studies indicate that Word Identification Fluency (WIF) remains one of the strongest predictors of later reading success in grades K through 2 (Compton, Fuchs, Fuchs, & Bryant, 2006; Fuchs, Fuchs, & Compton, 2004).
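As a concrete illustration of the correlation coefficient described above, the short Python sketch below computes r by hand for a handful of made-up score pairs. Real predictor studies use far larger samples; these six pairs exist only to show the arithmetic.

```python
# Computing Pearson's correlation coefficient r for hypothetical data:
# fall screener scores paired with end-of-year reading outcomes.
import math

screener = [12, 20, 25, 31, 40, 47]   # hypothetical fall screener scores
outcome  = [35, 48, 55, 60, 72, 80]   # hypothetical end-of-year reading scores

n = len(screener)
mean_x = sum(screener) / n
mean_y = sum(outcome) / n

# r = covariance of the two score sets divided by the product of their spreads
cov  = sum((x - mean_x) * (y - mean_y) for x, y in zip(screener, outcome))
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in screener))
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in outcome))

r = cov / (sd_x * sd_y)
print(f"r = {r:.2f}")  # values near 1 indicate a strong predictor of later success
```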
One note of caution for educators and administrators: there is often a tendency to teach to the test, especially when an assessment is considered high stakes (Pearson, 2006). In North Carolina, student growth on these universal screeners throughout the year was used to determine teachers’ EVAAS scores in grades K-2, which were then tied to Standard 6 on the teacher evaluation. This kind of high-stakes environment can lead teachers to teach the specific skill tested through repetition and practice of the skill and assessment. While this can raise student performance on the assessment, it rarely translates into overall growth in reading proficiency (Pressley, Hilden, & Shankland, 2005; Samuels, 2007). By teaching to the one skill, educators overlook the fact that other skills are foundational to the skill being assessed and thereby neglect instruction in the prerequisite skills needed to improve overall reading ability. Universal screeners should be used in conjunction with other assessments, not only to identify at-risk students but to pinpoint specific skill needs for intervention.
Diagnostic Assessments
Diagnostic assessments are intended to provide more in-depth information about specific areas of weakness and can help identify root causes. They are typically administered one-on-one to individual students showing sustained needs and consist of literacy-specific subtests. Universal screeners are a starting point; diagnostic assessments help educators delve more deeply into the specific issues a student is struggling with. Diagnostics help determine where intensive interventions for a student should begin.
There are informal diagnostics that require little to no formal training to administer and formal diagnostics that require extensive training to be used and understood. Examples of informal diagnostics include the CORE Phonics Survey, the Quick Phonics Screener (QPS), and the Phonological Awareness Skills Test (PAST). These do not require formal training to administer to students. Examples of commonly used formal diagnostics include the Wechsler Individual Achievement Test (WIAT) and the Woodcock-Johnson Tests of Achievement. These two assessments require formal training and are typically given by a certified psychologist. In my experience, they aren’t used until a student is going through the formal testing process to identify a potential disability.
Diagnostic assessments are either criterion referenced or norm referenced. Norm referenced means that a student’s individual scores are compared to the performance of students in a normative sample reference group. Scores from the normative sample are placed on a bell curve to show where the majority of students performed (Munger, 2020). The tested individual’s scores are then compared to the scores of the normative sample; if they fall into the range in which the majority of the normative sample fell, they are considered “average.” The WIAT and the Woodcock-Johnson are both norm-referenced assessments. Criterion-referenced assessments compare student performance on individually defined skills to set goals for mastery (Reading Rockets, 2020). The CORE Phonics Survey is one example of a criterion-referenced assessment.
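To illustrate norm-referenced scoring, the minimal sketch below converts a standard score to a z-score and percentile against assumed norms of mean 100 and standard deviation 15, the scale commonly used by tests like the WIAT and Woodcock-Johnson. The student score is hypothetical.

```python
# Minimal sketch of norm-referenced scoring: locate one student's score
# on the normative sample's bell curve. Norms and score are assumptions.
from statistics import NormalDist

norm_mean, norm_sd = 100, 15       # typical standard-score norms (assumed)
student_score = 92                  # hypothetical student result

# z-score: how many standard deviations the student sits from the mean
z = (student_score - norm_mean) / norm_sd
percentile = NormalDist().cdf(z) * 100

print(f"z = {z:.2f}, percentile = {percentile:.0f}")
# Scores within one SD of the mean (85-115 on this scale) fall in the
# "average" range, where the majority of the normative sample performed.
```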
Thinking back to our medical comparison: if your doctor notices that your blood pressure is high at your annual exam, he’s going to ask additional questions about related problems you may be experiencing and look for additional symptoms. He may also order additional tests to get a clearer picture of the cause of the problem in order to develop an appropriate plan of intervention. This is what we as educators are doing when we use diagnostic assessments with students showing risk indicators.
Progress Monitoring
Progress monitoring is used to determine the effectiveness of an intervention by measuring the student’s rate of growth toward a specific reading goal. While an intervention may focus on a specific skill area, it’s important to keep in mind that the ultimate goal is to improve overall reading ability. Oral Reading Fluency has long been considered an accurate measure of a student’s overall growth in reading because it requires multiple cognitive processes to work in tandem (Fuchs, Fuchs, Hosp, & Jenkins, 2001).
Progress monitoring begins with establishing a baseline for student performance, which should include at least two to three data points. When progress monitoring with grade-level passages, select passages the student can read with about 90% accuracy (Jenkins, Hudson, & Lee, 2007). Grade-level reading passages are available from DIBELS, Easy CBM, and AIMS Web, and some core reading programs (e.g., Open Court) include passages for this purpose; just be sure that all passages used come from the same source. Baseline data points are then used, along with the number of weeks the intervention will be conducted, to establish a goal using normed rates of improvement. Baseline data, the established goal, and the weeks of intervention are used to create a graph, and progress monitoring data should be plotted on that graph as it is collected. Ideally, progress monitoring should be conducted weekly.
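The goal-setting arithmetic looks like this in a minimal Python sketch. The normed rate of improvement used here (1.5 words correct per minute per week) is a hypothetical placeholder for the norms published with your chosen measure, and the baseline scores are made up.

```python
# Minimal sketch of goal setting from baseline data and a normed rate of
# improvement (ROI). All numbers below are hypothetical placeholders.

baseline_scores = [42, 45, 44]   # three baseline data points, in words
                                 # correct per minute (WCPM)
weeks_of_intervention = 9
normed_roi = 1.5                 # hypothetical WCPM gained per week

# Average the baseline points (some teams use the median instead).
baseline = sum(baseline_scores) / len(baseline_scores)
goal = baseline + normed_roi * weeks_of_intervention

print(f"Baseline: {baseline:.1f} WCPM")
print(f"Goal after {weeks_of_intervention} weeks: {goal:.1f} WCPM")
```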
As students work through interventions on specific skills, the teacher should collect data on how the student is mastering each skill. However, the goal is for the student to make progress in overall reading, because the skills need to transfer; this is why grade-level passages are recommended for progress monitoring. It can take approximately nine weeks for gains in overall grade-level reading to start to be seen (Fuchs, Compton, Fuchs, & Bryant, 2006). Progress monitoring points should be graphed along with the baseline data points and the goal. A line drawn from the baseline data points to the goal becomes the aim line, and progress monitoring points are then compared to the aim line to determine the overall effectiveness of the intervention.
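Continuing the sketch above, comparing each week’s score to the corresponding point on the aim line might look like this; the weekly scores are hypothetical.

```python
# Minimal sketch: compare weekly progress monitoring scores to the aim
# line running from the baseline to the goal. Scores are hypothetical.

baseline, goal, weeks = 43.7, 57.2, 9   # values from the goal-setting sketch
weekly_scores = [45, 46, 49, 50, 52]    # hypothetical scores for weeks 1-5

slope = (goal - baseline) / weeks       # expected gain per week

for week, score in enumerate(weekly_scores, start=1):
    expected = baseline + slope * week  # point on the aim line for this week
    status = "above" if score >= expected else "below"
    print(f"Week {week}: {score} WCPM ({status} aim line of {expected:.1f})")

# A common decision rule: several consecutive points below the aim line
# signal that the intervention may need to be adjusted.
```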
Think back to our medical analogy one more time. Once the doctor has the results of the diagnostic assessments and has developed a plan of care, he will also want to follow up to see how the plan is working. Based on the results of follow-up assessments and appointments, the doctor will adjust the plan of care as needed. As educators, we use progress monitoring to determine whether what we are doing is getting the desired results. If not, we make adjustments, just as the doctor adjusts our plan of care based on our response to it.
Final Thoughts
A comprehensive assessment system is a critical component of any successful Multi-Tiered System of Support framework. When making decisions, multiple sources of data should be collected and analyzed. Universal screeners are the first step, helping us look at the overall health of the school by grade level and classroom. They also help identify students who are at risk for reading failure if instruction isn’t adjusted to meet their needs. This is where diagnostic assessments come in: once at-risk students have been identified, diagnostic assessments help pinpoint specific skill deficits to be addressed through interventions. Problem-solving teams should use the data collected, along with any other relevant data, to create an intervention plan to close gaps for identified students. This data should also be used to develop performance goals, which are then progress monitored for a period of time to determine whether the gaps are closing. It’s important to keep in mind that all assessments utilized should be valid and reliable and that multiple data points should be used to get a full picture of students’ needs. Ultimately, our goal is to close gaps for students who are at risk as quickly and as early as possible so those students can perform on grade level. Without assessments, we cannot adequately identify and meet our students’ needs.
Works Cited
Compton, D. L., Fuchs, D., Fuchs, L. S., & Bryant, J. D. (2006). Selecting at-risk readers in first grade for early intervention: A two-year longitudinal study of decision rules and procedures. Journal of Educational Psychology, 98, 394–409.
Fuchs, D., Compton, D. L., Fuchs, L. S., & Bryant, J. D. (2006, February). The prevention and identification of reading disability. Paper presented at the Pacific Coast Research Conference, San Diego, CA.
Fuchs, L. S., Fuchs, D., & Compton, D. L. (2004). Monitoring early reading development in first grade: Word identification fluency versus nonsense word fluency. Exceptional Children, 71, 7–21.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.
International Dyslexia Association. (2019). Universal Screening: K-2 Reading. International Dyslexia Association Fact Sheet. Retrieved from https://dyslexiaida.org/universal-screening-k-2-reading/.
Jenkins, J., Hudson, R., & Lee, S. (2007, Spring). Using CBM-Reading assessments to monitor progress. Perspectives on Language and Literacy, 33(2). Retrieved July 9, 2020, from the RTI Action Network: http://www.rtinetwork.org/essential/assessment/progress/usingcbm.
Munger, K. (2020). Chapter 5: Types of literacy assessment: Principles, procedures, and applications. In K. Hinchman (Ed.), Steps to Success: Crossing the Bridge between Literacy Research and Applications. Retrieved July 8, 2020, from https://milnepublishing.geneseo.edu/steps-to-success/chapter/5-types-of-literacy-assessment-principles-procedures-and-applications/.
Pearson, P. D. (2006). Foreword. In K. S. Goodman (Ed.), The truth about DIBELS: What it is, what it does. Portsmouth, NH: Heinemann.
Peterson, A. (2010). Essential Components of RTI: Screening [Webinar]. National Center on Response to Intervention. Retrieved from https://rti4success.org/video/essential-components-rti-screening-0.
Pressley, M., Hilden, K., & Shankland, R. (2005). An evaluation of end-grade-3 Dynamic Indicators of Basic Early Literacy Skills (DIBELS): Speed reading without comprehension, predicting little (Technical Report). East Lansing, MI: Literacy Achievement Research Center.
Reading Rockets. (2020). Assessment: In Depth. Reading 101: A Guide to Teaching Reading and Writing. Retrieved July 8, 2020 from https://www.readingrockets.org/teaching/reading101-course/modules/assessment/assessment-depth.
Samuels, S. J. (2007). The DIBELS tests: Is speed of barking at print what we mean by reading fluency? Reading Research Quarterly, 42, 563–566.
Scanlon, D., & Vellutino, F. (1996). Prerequisite skills, early instruction, and success in first grade reading: Selected results from a longitudinal study. Mental Retardation and Developmental Disabilities Research Reviews, 2, 54–63.
Scarborough, H. (1998). Predicting the future achievement of second graders with reading disabilities: Contributions of phonemic awareness, verbal memory, rapid naming, and IQ. Annals of Dyslexia, 48, 115–136. https://doi.org/10.1007/s11881-998-0006-5
Snow, C., Burns, M., & Griffin, P. (1998). Chapter 4: Predictors of success and failure in reading. In Preventing Reading Difficulties in Young Children. Washington, DC: National Academy of Sciences, National Research Council, Commission on Behavioral and Social Sciences and Education.