Publication Date
| Date Range | Results |
|---|---|
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 4 |
| Since 2006 (last 10 years) | 13 |
| Since 1996 (last 20 years) | 21 |
Source
| Source | Results |
|---|---|
| Educational Measurement: Issues and Practice | 112 |
Author
| Author | Results |
|---|---|
| Mehrens, William A. | 5 |
| Popham, W. James | 5 |
| Linn, Robert L. | 4 |
| Madaus, George F. | 3 |
| Messick, Samuel | 3 |
| Coffman, William E. | 2 |
| Gardner, Eric F. | 2 |
| Hoover, H. D. | 2 |
| Jaeger, Richard M. | 2 |
| Koretz, Daniel | 2 |
Publication Type
| Publication Type | Results |
|---|---|
| Journal Articles | 112 |
| Opinion Papers | 112 |
| Reports - Evaluative | 22 |
| Speeches/Meeting Papers | 9 |
| Reports - Descriptive | 8 |
| Information Analyses | 5 |
| Reports - Research | 1 |
Education Level
| Education Level | Results |
|---|---|
| Elementary Secondary Education | 4 |
Audience
| Audience | Results |
|---|---|
| Researchers | 6 |
Showing 1 to 15 of 112 results
Pommerich, Mary – Educational Measurement: Issues and Practice, 2012
Neil Dorans has made a career of advocating for the examinee. He continues to do so in his NCME career award address, providing a thought-provoking commentary on some current trends in educational measurement that could affect the integrity of test scores. Concerns expressed in the address call attention to a conundrum that faces…
Descriptors: Testing, Scores, Measurement, Test Construction
Mislevy, Robert J. – Educational Measurement: Issues and Practice, 2012
This article presents the author's observations on Neil Dorans's NCME Career Award Address: "The Contestant Perspective on Taking Tests: Emanations from the Statue within." He calls attention to some points that Dr. Dorans made in his address, and offers his thoughts in response.
Descriptors: Testing, Test Reliability, Psychometrics, Scores
Kingston, Neal; Nash, Brooke – Educational Measurement: Issues and Practice, 2012
In their critique of Kingston and Nash (2011), Briggs, Ruiz-Primo, Furtak, Shepard, and Yin (2012) make several major points. First, Kingston and Nash's conclusions about the state of research on the efficacy of formative assessment are similar to those of other researchers, "including some of the authors." Second, their research may be unique in that they…
Descriptors: Formative Evaluation, Meta Analysis, Effect Size, Research Methodology
Briggs, Derek C.; Ruiz-Primo, Maria Araceli; Furtak, Erin; Shepard, Lorrie; Yin, Yue – Educational Measurement: Issues and Practice, 2012
In a recent article published in "EM:IP," Kingston and Nash report on the results of a meta-analysis on the efficacy of formative assessment. They conclude that the average effect of formative assessment on student achievement is about 0.20 SD units. This would seem to dispel the myth that effects between 0.40 and 0.70 can be attributed to…
Descriptors: Academic Achievement, Outcome Measures, Meta Analysis, Inferences
Shepard, Lorrie A. – Educational Measurement: Issues and Practice, 2009
In many school districts, the pressure to raise test scores has created overnight celebrity status for formative assessment. Its powers to raise student achievement have been touted, however, without attending to the research on which these claims were based. Sociocultural learning theory provides theoretical grounding for understanding how…
Descriptors: Learning Theories, Validity, Student Evaluation, Evaluation Methods
La Marca, Paul M. – Educational Measurement: Issues and Practice, 2006
Although it is assumed that student cognition contributes to student performance on achievement tests, it may be that current testing models lack the degree of specification necessary to warrant such inferences. With test score interpretations as the referent, the authors in this special issue address the role of student cognition in learning and…
Descriptors: Federal Legislation, Measurement, Testing, Achievement Tests
Lohman, David F.; Nichols, Paul – Educational Measurement: Issues and Practice, 2006
In this commentary, we summarize some of the main themes of the NRC report and note ways in which the papers by Mislevy and Haertel, Gorin, and Abedi and Gandara address the Panel's recommendations. We then briefly review and offer reflections on each paper. We see much to applaud here and also in the broader effort to build bridges between the…
Descriptors: Psychometrics, Student Evaluation, Measurement, Teacher Education
Schulz, E. Matthew – Educational Measurement: Issues and Practice, 2006
A look at real data shows that Reckase's psychometric theory for standard setting is not applicable to bookmark and that his simulations cannot explain actual differences between methods. It is suggested that exclusively test-centered, criterion-referenced approaches are too idealized and that a psychophysics paradigm and a theory of group…
Descriptors: Psychometrics, Group Behavior, Standard Setting, Simulation
Linn, Robert L. – Educational Measurement: Issues and Practice, 2006
The question of what it means to follow the "Standards" is discussed. It is argued that the "Standards" consists of statements of general principles, and that interpretation for specific applications requires professional judgment. As a result, disagreements among professionals on the applicability of particular standards to specific situations…
Descriptors: Standards, Accountability, Educational Testing, Context Effect
Reckase, Mark D. – Educational Measurement: Issues and Practice, 2006
Schulz (2006) provides a different perspective on standard setting than that provided in Reckase (2006). He also suggests a modification to the bookmark procedure and some alternative models for errors in panelists' judgments than those provided by Reckase. This article provides a response to some of the points made by Schulz and reports some…
Descriptors: Evaluation Methods, Standard Setting, Reader Response, Regression (Statistics)
Koretz, Daniel – Educational Measurement: Issues and Practice, 2006
The goal of the Standards for Educational and Psychological Testing is to improve testing practices, but their impact on practice appears spotty. Self-regulation clearly fails in some instances. The establishment of an external agency to oversee testing practices and adherence to the Standards would face substantial hurdles, and the ambiguity of…
Descriptors: Program Implementation, Educational Testing, Psychological Testing, Standard Setting
LaDuca, Tony – Educational Measurement: Issues and Practice, 2006
In the Spring 2005 issue, Wang, Schnipke, and Witt provided an informative description of the task inventory approach that centered on four functions of job analysis. The discussion included persuasive arguments for making systematic connections between tasks and KSAs. But several other facets of the discussion were much less persuasive. This…
Descriptors: Criticism, Task Analysis, Job Analysis, Persuasive Discourse
Wang, Ning; Witt, Elizabeth A.; Schnipke, Deborah – Educational Measurement: Issues and Practice, 2006
"In his commentary to our paper on the use of knowledge, skill, and ability statements in developing credentialing examinations (Wang, Schnipke, & Witt, 2005 )," Dr. LaDuca set forth his concerns while commending our paper for providing helpful insights into the importance of careful delineation of KSAs. We believe that there is little substantive…
Descriptors: Criticism, Job Analysis, Licensing Examinations (Professions), Certification
Hess, Frederick M. – Educational Measurement: Issues and Practice, 2005
Since 2001, considerations of school reform have been dominated by performance-based accountability. No Child Left Behind (NCLB) has changed the way policymakers and educators talk about education, look at educational performance, and think about educational challenges. Nonetheless, NCLB and the state accountability systems it has spawned have…
Descriptors: Federal Legislation, Accountability, Educational Policy, Educational Change
Lane, Suzanne – Educational Measurement: Issues and Practice, 2004
The validity of high-stakes assessments and accountability systems is discussed in relation to the requirements of No Child Left Behind (NCLB). The extent to which content standards and assessments are cognitively rich, the challenges in setting performance standards, and the impact of high-stakes assessments on instruction and student learning…
Descriptors: Federal Legislation, High Stakes Tests, Critical Thinking, Accountability