50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15th, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 43 results
Peer reviewed
Direct link
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
Peer reviewed
Direct link
Solano-Flores, Guillermo – Applied Measurement in Education, 2014
This article addresses validity and fairness in the testing of English language learners (ELLs)--students in the United States who are developing English as a second language. It discusses limitations of current approaches to examining the linguistic features of items and their effect on the performance of ELL students. The article submits that…
Descriptors: English Language Learners, Test Items, Probability, Test Bias
Peer reviewed
Direct link
Chia, Magda Y. – Applied Measurement in Education, 2014
The Smarter Balanced Assessment Consortium (Smarter Balanced) serves over 19 million primary, middle, and high school students from across 26 states and affiliates (Smarter Balanced, n.d.). As one of the two Race to the Top (RTT)-funded assessment consortia, Smarter Balanced is responsible for developing formative, interim, and summative…
Descriptors: State Standards, Academic Standards, Educational Assessment, English Language Learners
Peer reviewed
Direct link
Eklöf, Hanna; Pavešic, Barbara Japelj; Grønmo, Liv Sissel – Applied Measurement in Education, 2014
The purpose of the study was to measure students' reported test-taking effort and the relationship between reported effort and performance on the Trends in International Mathematics and Science Study (TIMSS) Advanced mathematics test. This was done in three countries participating in TIMSS Advanced 2008 (Sweden, Norway, and Slovenia), and the…
Descriptors: Mathematics Tests, Cross Cultural Studies, Foreign Countries, Correlation
Peer reviewed
Direct link
Hansen, Mary A.; Lyon, Steven R.; Heh, Peter; Zigmond, Naomi – Applied Measurement in Education, 2013
Large-scale assessment programs, including alternate assessments based on alternate achievement standards (AA-AAS), must provide evidence of technical quality and validity. This study provides information about the technical quality of one AA-AAS by evaluating the standard setting for the science component. The assessment was designed to have…
Descriptors: Alternative Assessment, Science Tests, Standard Setting, Test Validity
Peer reviewed
Direct link
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Peer reviewed
Direct link
Shen, Winny; Sackett, Paul R.; Kuncel, Nathan R.; Beatty, Adam S.; Rigdon, Jana L.; Kiger, Thomas B. – Applied Measurement in Education, 2012
Previous research has demonstrated that cognitive test validities are generalizable and predictive of academic performance across situations. However, even after accounting for statistical artifacts (e.g., sampling error, range restriction, criterion reliability), substantial variability often remains around estimates of cognitive test-performance…
Descriptors: College Entrance Examinations, Standardized Tests, Test Validity, Institutional Characteristics
Peer reviewed
Direct link
Rogers, W. Todd; Lin, Jie; Rinaldi, Christia M. – Applied Measurement in Education, 2011
The evidence gathered in the present study supports the use of the simultaneous development of test items for different languages. The simultaneous approach used in the present study involved writing an item in one language (e.g., French) and, before moving to the development of a second item, translating the item into the second language (e.g.,…
Descriptors: Test Items, Item Analysis, Achievement Tests, French
Peer reviewed
Direct link
Hendrickson, Amy; Huff, Kristen; Luecht, Richard – Applied Measurement in Education, 2010
Evidence-centered assessment design (ECD) explicates a transparent evidentiary argument to warrant the inferences we make from student test performance. This article describes how the vehicles for gathering student evidence--task models and test specifications--are developed. Task models, which are the basis for item development, flow directly…
Descriptors: Evidence, Test Construction, Measurement, Classification
Peer reviewed
Direct link
Brennan, Robert L. – Applied Measurement in Education, 2010
This paper provides an overview of evidence-centered assessment design (ECD) and some general information about the Advanced Placement (AP[R]) Program. Then the papers in this special issue are discussed, as they relate to the use of ECD in the revision of various AP tests. This paper concludes with some observations about the need to validate…
Descriptors: Advanced Placement Programs, Equivalency Tests, Evidence, Test Construction
Peer reviewed
Direct link
Ewing, Maureen; Packman, Sheryl; Hamen, Cynthia; Thurber, Allison Clark – Applied Measurement in Education, 2010
In the last few years, the Advanced Placement (AP) Program[R] has used evidence-centered assessment design (ECD) to articulate the knowledge, skills, and abilities to be taught in the course and measured on the summative exam for four science courses, three history courses, and six world language courses; its application to calculus and English…
Descriptors: Advanced Placement Programs, Equivalency Tests, Evidence, Test Construction
Peer reviewed
Direct link
Huff, Kristen; Steinberg, Linda; Matts, Thomas – Applied Measurement in Education, 2010
The cornerstone of evidence-centered assessment design (ECD) is an evidentiary argument that requires that each target of measurement (e.g., learning goal) for an assessment be expressed as a "claim" to be made about an examinee that is relevant to the specific purpose and audience(s) for the assessment. The "observable evidence" required to…
Descriptors: Advanced Placement Programs, Equivalency Tests, Evidence, Test Construction
Peer reviewed
Direct link
Taylor, Catherine S.; Lee, Yoonsun – Applied Measurement in Education, 2010
Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…
Descriptors: Measures (Individuals), Item Response Theory, Robustness (Statistics), Item Analysis
Peer reviewed
Direct link
Noell, Jay; Ginsburg, Alan – Applied Measurement in Education, 2009
The report, "Evaluation of the National Assessment of Educational Progress", provides a number of recommendations for addressing validity concerns about NAEP. This article identifies actions that could be taken by the Congress, the National Center for Education Statistics, and the National Assessment Governing Board--which share responsibility for…
Descriptors: National Competency Tests, Federal Government, Public Agencies, Test Validity
Peer reviewed
Direct link
Sireci, Stephen G.; Hauger, Jeffrey B.; Wells, Craig S.; Shea, Christine; Zenisky, April L. – Applied Measurement in Education, 2009
The National Assessment Governing Board used a new method to set achievement level standards on the 2005 Grade 12 NAEP Math test. In this article, we summarize our independent evaluation of the process used to set these standards. The evaluation data included observations of the standard-setting meeting, observations of advisory committee meetings…
Descriptors: Advisory Committees, Mathematics Tests, Standard Setting, National Competency Tests