ERIC Number: ED504298
Record Type: Non-Journal
Publication Date: 2008-Sep
Pages: 38
Abstractor: ERIC
Reference Count: 51
A Brief History of Alternate Assessments Based on Alternate Achievement Standards. Synthesis Report 68
Quenemoen, Rachel
National Center on Educational Outcomes, University of Minnesota
This synthesis report provides a historical look back over the past 15 years of alternate assessment, from the early 1990s through the mid 2000s, as reported by state directors of special education on the National Center on Educational Outcomes (NCEO) state surveys, and augmented by other research and policy reports published by NCEO and related organizations during that time frame. It is meant to be a resource for state and federal policymakers and staff, researchers, test companies, and the public, to help explain where we have come from and where we may be going in the challenging work of alternate assessment for students with significant cognitive disabilities. The work of the National Alternate Assessment Center and related projects and centers has focused on a validity framework as a heuristic for state practice, and that work is described here. The report ends with four recommendations to guide state practices at this point. Because of the number of uncertainties still in play, we need: (1) Transparency. We need to know what varying practices and targets yield for student outcomes, and the only way to build that knowledge base is to ensure that assessment development, implementation, and results are transparent and open to scrutiny. (2) Integrity. Building on the need for transparency is the need for integrity. The amount of flexibility needed to ensure that all students can demonstrate what they know and can do is greater in alternate assessments for this group of students than in assessments of more typical student populations. Flexibility can mask issues of teaching and learning unless it is carefully structured and controlled. Similarly, standardization as a solution risks reducing the integrity of the assessment results when the methods do not match the population being assessed and how that population demonstrates competence in the academic domains. (3) Validity Studies. Building on the issues of transparency and integrity, we have an obligation to monitor carefully the effects of alternate assessments over time, and to ensure that the claims we make for the use of the results are defensible. (4) Planned Improvement over Time. In building a validity argument, we study whether the interpretations and uses of the test are defensible, and whether the consequences that are hoped for and those that are to be avoided are in fact falling into their respective places. An important part of validity studies is the ongoing day-to-day oversight of assessment development, implementation, and use of testing results; high-quality data collection and continuous improvement based on the data are absolutely necessary for these assessments. (Contains 4 tables and 2 figures.) [This report is an adaptation of a paper first presented at the Maryland Assessment Research Center for Education Success (MARCES) conference (College Park, Maryland, October 2007).]
National Center on Educational Outcomes. University of Minnesota, 350 Elliott Hall, 75 East River Road, Minneapolis, MN 55455. Tel: 612-626-1530; Fax: 612-624-0879.
Publication Type: Information Analyses; Reports - Evaluative
Education Level: Elementary Secondary Education
Audience: Community; Researchers; Policymakers
Language: English
Sponsor: N/A
Authoring Institution: National Center on Educational Outcomes; Council of Chief State School Officers; National Association of State Directors of Special Education (NASDSE)