Reports

Publication prices current through September 30, 2013

Topics

Accessible Reading Assessments

Accommodations for State Reading Assessments: Policies Across the Nation
By M. L. Thurlow and J. M. Larson
A report summarizing accommodation policies for the content area of reading. Because of the increase in the use of computer-based assessments, the authors of this report also examined whether reading assessment policies varied as a function of test format. Two research questions are addressed in this report: (1) What did state accommodation policies specify for individual accommodations on state reading assessments? (2) Did different formats (computer vs. paper and pencil) have different accommodations policies for state reading assessments? This analysis of states’ accommodation policies for their reading assessments indicates the importance of understanding the intended constructs to be assessed by the state reading assessment. Policies indicate that there are different perspectives on the constructs, and that further explication of the content to be assessed would be beneficial. Published by the Institute’s Partnership for Accessible Reading Assessment. (2011) • Cost: Free, available only on the Web

Accessible Reading Assessments for Students with Disabilities: The Role of Cognitive, Grammatical, Lexical, and Textual/Visual Features
By J. Abedi, S. Leon, J. Kao, R. Bayley, N. Ewers, J. Herman, and K. Mundhenk
A study examining the characteristics of reading test items that may differentially impede the performance of students with disabilities. By examining the relationship between select item features and performance, the study seeks to inform strategies for increasing the accessibility of reading assessments for individuals from this group. The results of this study can help the assessment community in two ways. First, by elaborating on some test accessibility features, this report may serve as a guideline for those who are involved in test development and the instruction and assessment of students with disabilities. Second, and more importantly, this report provides methodology for examining other features that may have a major impact on assessment outcomes for students with disabilities. Published by the Institute’s Partnership for Accessible Reading Assessment. (2010) • Cost: Free, available only on the Web

Studying Less-Accurately Measured Students
By R. E. Moen, K. K. Liu, M. L. Thurlow, A. J. Lekwa, S. B. Scullin, and K. E. Hausmann
A report describing a small-scale study that closely examined a limited number of students to shed light on several questions. The study provided a preliminary look at the possibility of using teacher judgment to identify students most at risk of being inaccurately measured by typical annual large-scale reading tests, and examined some of the characteristics of such students. Findings from the study indicate that teacher judgment could be useful as one part of the procedure for identifying less-accurately measured students. The report includes anecdotal descriptions of students that suggest how various cognitive and affective factors, such as slow processing and levels of engagement and anxiety, alone or in interaction, can distort the test performance of some students. Published by the Institute’s Partnership for Accessible Reading Assessment. (2010) • Cost: Free, available only on the Web

Cognitive and Achievement Differences Between Students with Divergent Reading and Oral Comprehension Skills: Implications for Accessible Reading Assessment Research
By K. McGrew, R. Moen, and M. Thurlow
A study examining the question, “Are there any analyses of the Woodcock-Johnson III Battery (WJ III) data that might shed light on learner characteristics that differentiate students whose measured reading performance is below what might be considered their optimal/predicted reading performance?” The WJ III norm sample spans preschool through late adulthood and includes a diverse array of individually administered cognitive and achievement tests. Although the data set is not ideally designed for studying LAMR and MAMR (Less and More Accurately Measured Readers) effects, the results of an analysis of the WJ III data are potentially informative for research and development efforts focused on large-scale accessible reading assessment programs. Published by the Institute’s Partnership for Accessible Reading Assessment. (2010) • Cost: Free, available only on the Web

Examination of a Reading Pen as a Partial Auditory Accommodation for Reading Assessment
By M. Thurlow, R. Moen, A. Lekwa, and S. Scullin
The goal of the study described in this report was to evaluate the effectiveness of a reading pen as a partial auditory accommodation for large-scale reading assessments when the pen is used only to pronounce words on demand. Four research questions were addressed: (1) To what extent does use of the reading pen on a standardized test of reading affect scores? (2) To what extent is there a different effect for students with disabilities and students without disabilities? (3) Regardless of disability status, to what extent does use of the reading pen affect the scores of students with adequate fluency? (4) What are students’ perceptions of the helpfulness of the reading pen? Published by the Institute’s Partnership for Accessible Reading Assessment. (2010) • Cost: Free, available only on the Web

Accessibility Principles for Reading Assessments
By M. Thurlow, C. Laitusis, D. Dillon, L. Cook, R. Moen, J. Abedi, and D. O’Brien
The National Accessible Reading Assessment Projects (NARAP), of which the Institute’s Partnership for Accessible Reading Assessment is a member, have been conducting research to identify ways to increase the accessibility of reading assessments. This document is the culmination of one of NARAP’s goals: to develop evidence-based principles for making large-scale assessments of reading proficiency more accessible for students who have disabilities that affect reading, while maintaining a high level of validity for all students taking the assessments. Some of the principles clarify and underscore the importance of well-accepted and widely used practices in designing reading assessments. Other principles have been developed from theory to respond to the needs of specific groups of students. The principles are to be viewed as a whole, representing a coherent and integrated approach to accessibility. They provide a vision of accessible reading assessments. This document was written primarily for personnel in state assessment offices and for test developers of regular large-scale reading assessments used for accountability purposes. Other audiences may also find the document of interest and useful for other types of assessments. Published by NARAP. (2009) • Cost: Free, available only on the Web

Disabilities and Reading: Understanding the Effects of Disabilities and Their Relationship to Reading Instruction and Assessment
By M. Thurlow, R. Moen, K. Liu, S. Scullin, K. Hausmann, and V. Shyyan
A report intended to provide enough common ground on the issues surrounding reading and students with various disabilities to facilitate discussion of accessible reading assessment. The information in this report was obtained through a broad review of literature and Web sites of national agencies and organizations, along with input and feedback from professionals in the disability areas. It is not intended to be a comprehensive research review of disabilities or reading-related issues, but nevertheless should prove useful for understanding the effects of disabilities and their relationship to reading. Seven disabilities are discussed in the order of their prevalence: specific learning disabilities, speech or language impairments, intellectual disabilities, emotional/behavioral disabilities, autism, deaf or hard of hearing, and visual impairments. Although these disabilities do not comprise all of the possible disability categories or even the most common disabilities, they do represent those often considered most challenging for reading assessment. This report provides: (1) an overview of the characteristics of students with each disability, (2) a description of common approaches to reading instruction for students with each disability, and (3) assessment approaches and issues that surround the assessment of reading for students with each disability. Published by the Institute’s Partnership for Accessible Reading Assessment. (2009) • Cost: Free, available only on the Web

Exploring Factors That Affect the Accessibility of Reading Comprehension Assessments for Students With Disabilities: A Study of Segmented Text
By J. Abedi, J. Kao, S. Leon, L. Sullivan, J. Herman, R. Pope, V. Nambiar, and A. Mastergeorge
A report from a study seeking to experimentally examine factors affecting the accessibility of assessments for students with disabilities. This study focused on reading comprehension assessments since (1) reading is one of the primary areas of the NCLB Title I accountability requirements, and (2) reading is the underlying ability for understanding instruction and assessment in all other content areas. In a randomized field trial, a reading comprehension assessment designed to be potentially more accessible for students with disabilities was administered to groups of students, including students with disabilities. Three long reading comprehension passages from existing state assessments were broken down into more manageable segments, with corresponding questions placed immediately after each segment. The results of the segmenting study indicated that: (1) segmenting did not affect the reading performance of students without disabilities, suggesting that it does not compromise the validity of the reading assessment; (2) segmenting did not affect the reading performance of students with disabilities; (3) the segmented version had higher reliability for students with disabilities without affecting the reliability for students without disabilities; and (4) no trends were observed for student motivation, general emotions, or moods with respect to the segmented assessment in either the disability or non-disability group. Published by the Institute’s Partnership for Accessible Reading Assessment. (2009) • Cost: Free, available only on the Web

Examining DIF, DDF, and Omit Rate by Discrete Disability Categories
By K. Kato, R. Moen, and M. Thurlow
A paper describing a study of differential item functioning (DIF), differential distractor functioning (DDF), and differential omission frequency (DOF) for one third-grade and one fifth-grade statewide reading test for three disability groupings: students with speech/language impairments, learning disabilities, and emotional/behavioral disorders. The study found substantive DIF/DDF only for students with learning disabilities, not for students with speech/language impairments or emotional/behavioral disorders. Furthermore, examination of response characteristic curve graphs showed that DIF/DDF did not necessarily indicate an item had statistical bias against students with learning disabilities. Rather, low-performing students with learning disabilities, more often than low-performing students without disabilities, appeared to choose wrong answers randomly rather than selecting the most appealing wrong answers. The researchers concluded that there was no evidence of test bias for students with disabilities in the state reading tests examined in the study. Published by the Institute’s Partnership for Accessible Reading Assessment. (2007) • Cost: Free, available only on the Web

What Do State Reading Test Specifications Specify?
By C. Johnstone, R. Moen, M. Thurlow, D. Matchett, K. Hausmann, and S. Scullin
A paper examining state assessment blueprints or test specifications for state reading assessments. The No Child Left Behind Act requires all states to assess student reading, but each state is responsible for selecting what will be tested and how in its large-scale statewide assessments. As part of this process, states develop standards with which both instruction and assessments are expected to align. State standards for reading vary by definition and focus from state to state. The authors of this paper looked at (1) themes related to the purposes and constructs of assessments, (2) how those themes related to state standards, (3) the number of items assigned to particular constructs, and (4) the types of items typically found in statewide assessments. Published by the Institute’s Partnership for Accessible Reading Assessment. (2007) • Cost: Free, available only on the Web

Examining Differential Distractor Functioning in Reading Assessments for Students With Disabilities
By J. Abedi, S. Leon, and J. Kao
A paper examining the incorrect response choices, or distractors, of students with disabilities in standardized reading assessments. Differential distractor functioning (DDF) analysis differs from differential item functioning (DIF) analysis, which treats all answers alike and examines all wrong answers against the correct answer. DDF analysis, in contrast, examines only the wrong answers. If different groups, such as students with disabilities and students without disabilities, preferred different incorrect responses to an item, then the item could mean something different to the different groups. The authors found items showing DDF for students with disabilities in grade 9, but not for grade 3. Results also suggest that items showing DDF were more likely to be located in the second half of the assessments rather than the first half. Additionally, results suggest that in items showing DDF, students with disabilities were less likely to choose the most common distractor than their non-disabled peers. Results of this study can shed light on potential factors affecting the accessibility of reading assessments for students with disabilities. Published by the Institute’s Partnership for Accessible Reading Assessment. (2007) • Cost: Free, available only on the Web
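
As a rough illustration of the distinction described in the entry above (and not drawn from the report itself), the following minimal Python sketch contrasts a DIF-style check, which compares correct-versus-incorrect responding across groups within ability strata, with a DDF-style check, which looks only at which distractors students chose when they answered incorrectly. The simulated data, group labels, Mantel-Haenszel statistic, and chi-square test here are illustrative assumptions, not the authors’ actual procedures.

    # Illustrative sketch only: simulated multiple-choice data and generic
    # DIF/DDF-style checks; not the procedures used in the report.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)
    n = 400
    group = rng.integers(0, 2, n)        # 0 = reference group, 1 = focal group
    ability = rng.normal(0.0, 1.0, n)    # stand-in for overall reading score
    p_correct = 1.0 / (1.0 + np.exp(-(ability - 0.2 * group)))
    correct = (rng.random(n) < p_correct).astype(int)
    # Among incorrect answers, each student is assumed to pick one of three distractors.
    distractor = np.where(correct == 1, "-", rng.choice(list("ABC"), n))

    # DIF-style check: within ability strata, are the odds of answering correctly
    # the same for both groups? (Mantel-Haenszel common odds ratio; ~1 means no DIF.)
    strata = np.digitize(ability, np.quantile(ability, [0.25, 0.5, 0.75]))
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal, incorrect
        t = a + b + c + d
        num += a * d / t
        den += b * c / t
    print("Mantel-Haenszel odds ratio:", num / den)

    # DDF-style check: looking only at wrong answers, do the two groups prefer
    # different distractors? Chi-square test on the group-by-distractor table.
    wrong = correct == 0
    table = np.array([[np.sum((group == g) & wrong & (distractor == ch)) for ch in "ABC"]
                      for g in (0, 1)])
    chi2, p, _, _ = chi2_contingency(table)
    print("DDF chi-square p-value:", p)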

Examining Differential Item Functioning in Reading Assessments for Students With Disabilities
By J. Abedi, S. Leon, and J. Kao
A paper examining group differences between students with disabilities and students without disabilities using differential item functioning (DIF) analyses in a high-stakes reading assessment. Results indicated that for grade 9, many items exhibited DIF, and these items were more likely to be located in the second half of the assessment subscales. After accounting for reading ability, students with disabilities consistently under-performed, relative to their non-disabled peers, on items located in the second half compared with items located in the first half. These results were seen in grade 9 data from two different states but not in grade 3 data. The study has several limitations related to its data: there was no access to information about the testing accommodations that students with disabilities might have received, and no information about their types of disabilities. Results of this study can shed light on potential factors affecting the accessibility of reading assessments for students with disabilities, in an ultimate effort to provide assessment tools that are conceptually and psychometrically sound for all students. Published by the Institute’s Partnership for Accessible Reading Assessment. (2007) • Cost: Free, available only on the Web

State Accommodations Policies: Implications for the Assessment of Reading
By S. Lazarus, M. Thurlow, K. Eisenbraun, K. Lail, D. Matchett, and M. Quenemoen
A report presenting the results of an analysis of the accommodations that are included in state accommodations policies and guidelines. The purpose of the analysis was to learn more about 10 accommodations that may have specific implications for the assessment of reading: audio-video equipment, Braille, large print, proctor/scribe, read-aloud directions, read-aloud questions, repeat/re-read/clarify directions, sign interpret directions, sign interpret questions, and sign responses to sign language interpreter. Much controversy has surrounded the use of some accommodations on statewide assessments used for accountability purposes. This report examines the variation across states for this group of accommodations. Published by the Institute’s Partnership for Accessible Reading Assessment. (2006) • Cost: Free, available only on the Web

Progress Monitoring

The following Technical Reports are available from the Institute's Research Institute on Progress Monitoring (RIPM).

Examining Technical Features of Progress Monitoring Measures Across Grade Levels in Writing (RIPM Technical Report 38) (2010)

Technical Characteristics of General Outcome Measures (GOMs) in Mathematics for Students with Significant Cognitive Disabilities (RIPM Technical Report 37) (2010)

Monitoring Mathematics Progress in Middle School (RIPM Technical Report 36) (2009)

Teacher Use Study: Surveys from Rural Districts (RIPM Technical Report 35) (2009)

Teacher Use Study: Surveys from Urban District (RIPM Technical Report 34) (2009)

Exploring the Use of Early Numeracy Indicators for Progress Monitoring: 2008-2009 (RIPM Technical Report 33) (2009)

Teacher Use Study: Progress Monitoring With and Without Diagnostic Feedback (RIPM Technical Report 32) (2009)

Teacher Use Study: Reading Aloud vs. Maze Selection (RIPM Technical Report 31) (2009)

Teachers’ Understanding of Curriculum-Based Measurement Progress Monitoring Data (RIPM Technical Report 30) (2011)

Exploring the Use of Early Numeracy Indicators for Monitoring Progress in Two Intervention Contexts: 2007-08 (RIPM Technical Report 29) (2009)

Exploring the Use of Early Numeracy Indicators for Monitoring Progress in Two Intervention Contexts (RIPM Technical Report 28) (2009)

Study of General Outcome Measurement (GOMs) in Reading for Students with Significant Cognitive Disabilities: Year 1 (RIPM Technical Report 27) (2008)

Examining the Long-Term Predictive Validity of the Early Numeracy Indicators for Predicting Success on a High Stakes Mathematics Test (RIPM Technical Report 26) (2009)

Monitoring Progress of Beginning Writers: Technical Features of the Slope (RIPM Technical Report 25) (2009)

Iowa Early Numeracy Indicator Screening Data: Iowa 2008-2009 (RIPM Technical Report 24) (2009)

Iowa Early Numeracy Indicator Screening Data: Iowa 2007-2008 (RIPM Technical Report 23) (2009)

Iowa Early Numeracy Indicator Screening Data: Iowa 2006-2007 (RIPM Technical Report 22) (2009)

A Replication of Static Use of Six Brief Middle School Mathematics Measures (RIPM Technical Report 21) (2009)

Characteristics of Reading Aloud, Word Identification, and Maze Selection as Growth Measures: Relationship between Growth and Criterion Measures (RIPM Technical Report 20) (2009)

Reading Aloud, Word Identification, and Maze Selection as Growth Measures: A Comparison of Slopes Derived from Different Data Collection Schedules (RIPM Technical Report 19) (2009)

Characteristics of Reading Aloud, Word Identification, and Maze Selection as Growth Measures: Identifying the Number of Data Points Needed to Obtain Consistency in Slopes (RIPM Technical Report 18) (2009)

Characteristics of Reading Aloud, Word Identification, and Maze Selection as Growth Measures: Consistency of Standard Error of Estimate, Standard Error of Slope, and Confidence Intervals (RIPM Technical Report 17) (2009)

Technical Features of Beginning Writing Measures (RIPM Technical Report 16) (2008)

Reliability, Criterion Validity, and Changes in Performance Across Three Points in Time: Exploring Progress Monitoring Measures for Middle School Mathematics (RIPM Technical Report 15) (2008)

Establishing Technically Adequate Measures of Progress in Early Mathematics (RIPM Technical Report 14) (2008)

Technical Adequacy of Early Numeracy Indicators: Exploring Growth at Three Points in Time (RIPM Technical Report 13) (2008)

General Outcome Measures for Students with Significant Cognitive Disabilities: Pilot Study (RIPM Technical Report 12) (2007)

Assessing Written Expression for Students Who Are Deaf or Hard of Hearing: Curriculum Based Measurement (RIPM Technical Report 11) (2008)

Comparison of Different Scoring Procedures for the CBM Maze Selection Measure (RIPM Technical Report 10) (2009)

Silent Reading Fluency Test: Reliability, Validity, and Sensitivity to Growth for Students Who Are Deaf and Hard of Hearing at the Elementary, Middle School, and High School Levels (RIPM Technical Report 9) (2008)

Technical Features of Narrative vs. Expository and Handwritten vs. Word-Processed Measures of Written Expression for Secondary-Level Students (RIPM Technical Report 8) (2010)

Technical Features of New and Existing CBM Writing Measures Within and Across Grades (RIPM Technical Report 7) (2007)

Identifying Indicators of Early Mathematics Proficiency in Kindergarten and Grade 1 (RIPM Technical Report 6) (2005)

Developing Measures for Monitoring Progress in Elementary Grade Mathematics: An Investigation of Desirable Characteristics (RIPM Technical Report 5) (2009)

MBSP Concepts & Applications: Comparison of Desirable Characteristics for a Grade Level and Cross-Grade Common Measure (RIPM Technical Report 4) (2008)

MBSP Computation: Comparison of Desirable Characteristics for a Grade Level and Cross-Grade Common Measure (RIPM Technical Report 3) (2008)

Seamless and Flexible Progress Monitoring: Age and Skill Level Extensions in Math, Basic Facts (RIPM Technical Report 2) (2009)

Seamless and Flexible Progress Monitoring: Age and Skill Level Extensions in Reading (RIPM Technical Report 1) (2009)
