50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing 121 to 135 of 520 results
Peer reviewed
Ito, Kyoko; Sykes, Robert C.; Yao, Lihua – Applied Measurement in Education, 2008
Reading and Mathematics tests of multiple-choice items for grades Kindergarten through 9 were vertically scaled using the three-parameter logistic model and two different scaling procedures: concurrent and separate by grade groups. Item parameters were estimated using Markov chain Monte Carlo methodology while fixing the grade 4 population…
Descriptors: Grades (Scholastic), Markov Processes, Mathematics Tests, Item Response Theory
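The three-parameter logistic (3PL) model named in the abstract above has a standard item response function; a minimal sketch of it (the function name and parameter labels here are illustrative, not from the study):

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL IRT model:
    theta = examinee ability, a = item discrimination,
    b = item difficulty, c = lower asymptote (guessing)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty (theta == b), the logistic term is 0.5,
# so the probability is midway between c and 1: c + (1 - c) / 2.
```

Vertical scaling places item parameters from adjacent grades onto this common ability metric, which is why the choice of concurrent versus separate-by-grade calibration matters.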
Peer reviewed
Keng, Leslie; McClarty, Katie Larsen; Davis, Laurie Laughlin – Applied Measurement in Education, 2008
This article describes a comparative study conducted at the item level for paper and online administrations of a statewide high stakes assessment. The goal was to identify characteristics of items that may have contributed to mode effects. Item-level analyses compared two modes of the Texas Assessment of Knowledge and Skills (TAKS) for up to four…
Descriptors: Computer Assisted Testing, Geometric Concepts, Grade 8, Comparative Analysis
Peer reviewed
Hardison, Chaitra M.; Sackett, Paul R. – Applied Measurement in Education, 2008
Despite the growing use of writing assessments in standardized tests, little is known about coaching effects on writing assessments. Therefore, this study tested the effects of short-term coaching on standardized writing tests, and the transfer of those effects to other writing genres. College freshmen were randomly assigned to either training…
Descriptors: Control Groups, Group Membership, College Freshmen, Writing Tests
Peer reviewed
Willse, John T.; Goodman, Joshua T.; Allen, Nancy; Klaric, John – Applied Measurement in Education, 2008
The current research demonstrates the effectiveness of using structural equation modeling (SEM) for the investigation of subgroup differences with sparse data designs where not every student takes every item. Simulations were conducted that reflected missing data structures like those encountered in large survey assessment programs (e.g., National…
Descriptors: Structural Equation Models, Simulation, Item Response Theory, Factor Analysis
Peer reviewed
Ayala, Carlos C.; Shavelson, Richard J.; Ruiz-Primo, Maria Araceli; Brandon, Paul R.; Yin, Yue; Furtak, Erin Marie; Young, Donald B.; Tomita, Miki K. – Applied Measurement in Education, 2008
The idea that formative assessments embedded in a curriculum could help guide teachers toward better instructional practices that lead to greater student learning has taken center stage in science assessment research. In order to embed formative assessments in a curriculum, curriculum developers and assessment specialists should collaborate to…
Descriptors: Student Evaluation, Formative Evaluation, Teaching Methods, Alignment (Education)
Peer reviewed
Brandon, Paul R.; Young, Donald B.; Shavelson, Richard J.; Jones, Rachael; Ayala, Carlos C.; Ruiz-Primo, Maria Araceli; Yin, Yue; Tomita, Miki K.; Furtak, Erin Marie – Applied Measurement in Education, 2008
Our project to embed formative student assessments in the Foundational Approaches in Science Teaching curriculum required a close collaboration between curriculum developers at the Curriculum Research & Development Group (CRDG) and assessment developers at the Stanford Educational Assessment Laboratory (SEAL). This was a new endeavor for each…
Descriptors: Curriculum Research, Program Effectiveness, Formative Evaluation, Cooperative Planning
Peer reviewed
Furtak, Erin Marie; Ruiz-Primo, Maria Araceli; Shemwell, Jonathan T.; Ayala, Carlos C.; Brandon, Paul R.; Shavelson, Richard J.; Yin, Yue – Applied Measurement in Education, 2008
Given the current emphasis on conducting high-quality experimental studies, it is becoming increasingly important for researchers to accompany their studies with evaluations of the "fidelity of implementation" of the experimental treatments. This article compares the form and extent of an experimental treatment to student learning. The study…
Descriptors: Formative Evaluation, Academic Achievement, Physical Sciences, Science Teachers
Peer reviewed
Shavelson, Richard J.; Young, Donald B.; Ayala, Carlos C.; Brandon, Paul R.; Furtak, Erin Marie; Ruiz-Primo, Maria Araceli; Tomita, Miki K.; Yin, Yue – Applied Measurement in Education, 2008
Assessment of and for learning has occupied center stage in education reform, especially with the advent of the No Child Left Behind Federal legislation. This study examined the formative function of assessment--assessment for learning--recognizing that such assessment needs to be aligned, at least in part, with the summative function of…
Descriptors: Federal Legislation, Formative Evaluation, Program Effectiveness, Educational Change
Peer reviewed
Yin, Yue; Shavelson, Richard J.; Ayala, Carlos C.; Ruiz-Primo, Maria Araceli; Brandon, Paul R.; Furtak, Erin Marie; Tomita, Miki K.; Young, Donald B. – Applied Measurement in Education, 2008
Formative assessment was hypothesized to have a beneficial impact on students' science achievement and conceptual change, either directly or indirectly by enhancing motivation. We designed and embedded formative assessments within an inquiry science unit. Twelve middle-school science teachers with their students were randomly assigned either to…
Descriptors: Classroom Techniques, Experimental Groups, Control Groups, Formative Evaluation
Peer reviewed
Sass, D. A.; Schmitt, T. A.; Walker, C. M. – Applied Measurement in Education, 2008
Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…
Descriptors: Difficulty Level, Item Response Theory, Test Items, Computation
Peer reviewed
D'Agostino, Jerome V.; Welsh, Megan E.; Cimetta, Adriana D.; Falco, Lia D.; Smith, Shannon; VanWinkle, Waverely Hester; Powers, Sonya J. – Applied Measurement in Education, 2008
Central to the standards-based assessment validation process is an examination of the alignment between state standards and test items. Several alignment analysis systems have emerged recently, but most rely on either traditional rating or matching techniques. Few, if any, analyses have been reported on the degree of consistency between the two…
Descriptors: Test Items, Student Evaluation, State Standards, Evaluation Methods
Peer reviewed
Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick – Applied Measurement in Education, 2008
A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, test items tap into only one latent trait. This assumption can be assessed several ways, using nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…
Descriptors: Test Items, Factor Analysis, Item Response Theory, Comparative Analysis
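The item conditional covariances underlying DETECT, mentioned in the abstract above, can be sketched as follows: covary each item pair after conditioning on the examinee's score over the remaining items (the "rest score"). This is an illustrative sketch of that conditioning step, not the published DETECT software:

```python
import numpy as np

def conditional_covariance(responses, i, j):
    """Weighted average covariance of items i and j, conditioning on
    the rest score (total over all items except i and j).
    `responses` is an (examinees x items) 0/1 matrix."""
    rest = responses.sum(axis=1) - responses[:, i] - responses[:, j]
    covs, weights = [], []
    for s in np.unique(rest):
        group = responses[rest == s]
        if len(group) > 1:  # covariance needs at least two examinees
            covs.append(np.cov(group[:, i], group[:, j])[0, 1])
            weights.append(len(group))
    return float(np.average(covs, weights=weights))
```

Under unidimensionality these conditional covariances hover near zero; systematically positive values within clusters of items signal a secondary dimension.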
Peer reviewed
Wells, Craig S.; Bolt, Daniel M. – Applied Measurement in Education, 2008
Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…
Descriptors: Test Length, Test Items, Monte Carlo Methods, Nonparametric Statistics
Peer reviewed
Eckhout, Teresa J.; Plake, Barbara S.; Smith, Dawn L.; Larsen, Ann – Applied Measurement in Education, 2007
The No Child Left Behind Act of 2001 allows states to assess students with "significant cognitive disabilities" on "alternative content standards" for determining adequate yearly progress. Alternative standards that align with regular content standards can allow for a continuum of performance expectations from very basic to approaching grade…
Descriptors: Program Effectiveness, Federal Legislation, Educational Improvement, Academic Standards
Peer reviewed
Porter, Andrew C.; Smithson, John; Blank, Rolf; Zeidner, Timothy – Applied Measurement in Education, 2007
With the exception of the procedures developed by Porter and colleagues (Porter, 2002), other methods of defining and measuring alignment are essentially limited to alignment between tests and standards. Porter's procedures have been generalized to investigating the alignment between content standards, tests, textbooks, and even classroom…
Descriptors: Teaching Methods, Computer Uses in Education, Instructional Innovation, Guidance Programs