50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 19 results
Peer reviewed
Direct link
Timms, Mike – Measurement: Interdisciplinary Research and Perspectives, 2014
In his commentary on "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games" by Almond et al., Mike Timms writes that his own research has involved the use of embedded assessments using simulations in interactive learning environments, and the Evidence Centered Design (ECD) approach has provided a solid…
Descriptors: Task Analysis, Models, Educational Assessment, Simulation
Peer reviewed
Direct link
Walker, A. Adrienne; Engelhard, George, Jr. – Measurement: Interdisciplinary Research and Perspectives, 2014
"Game-Based Assessments: A Promising Way to Create Idiographic Perspectives" (Adrienne Walker and George Engelhard) comments on: "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games" by Russell G. Almond, Yoon Jeon Kim, Gertrudes Velasquez, and Valerie J. Shute. Here, Walker and Engelhard write…
Descriptors: Educational Games, Task Analysis, Models, Educational Assessment
Peer reviewed
Direct link
McClarty, Katie Larsen – Measurement: Interdisciplinary Research and Perspectives, 2013
The construct map is a promising tool for organizing the data that standard-setting panelists interpret. The challenge in applying construct maps to standard-setting procedures will be the judicious selection of data to include within this organizing framework. Therefore, this commentary focuses on decisions about what to include in the construct map.…
Descriptors: Standard Setting (Scoring), Maps, Validity, Evidence
Peer reviewed
Direct link
Mislevy, Robert J. – Measurement: Interdisciplinary Research and Perspectives, 2013
Measurement is a semantic frame, a constellation of relationships and concepts that correspond to recurring patterns in human activity, highlighting typical roles, processes, and viewpoints (e.g., the "commercial event") but not others. One uses semantic frames to reason about unique and complex situations--sometimes intuitively, sometimes…
Descriptors: Educational Assessment, Measurement, Feedback (Response), Evidence
Peer reviewed
Direct link
Murphy, Kevin R. – Measurement: Interdisciplinary Research and Perspectives, 2012
As Paul Newton so ably demonstrates, the concept of validity is both important and problematic. Over the last several decades, a consensus definition of validity has emerged; the current edition of "Standards for Educational and Psychological Testing" notes, "Validity refers to the degree to which evidence and theory support the interpretations of…
Descriptors: Evidence, Validity, Educational Testing, Psychological Testing
Peer reviewed
Direct link
Mislevy, Robert J. – Measurement: Interdisciplinary Research and Perspectives, 2012
Paul E. Newton's "Clarifying the Consensus Definition of Validity" addresses the single most important, yet stubbornly protean, value in educational and psychological assessment. "Standards for Educational and Psychological Testing" (American Educational Research Association, American Psychological Association, & National Council on Measurement in…
Descriptors: Evidence, Validity, Educational Testing, Psychological Evaluation
Peer reviewed
Direct link
Mattern, Krista D.; Kobrin, Jennifer L.; Camara, Wayne J. – Measurement: Interdisciplinary Research and Perspectives, 2012
As researchers at a testing organization concerned with the appropriate uses and validity evidence for our assessments, we provide an applied perspective related to the issues raised in the focus article. Newton's proposal for elaborating the consensus definition of validity is offered with the intention to reduce the risks of inadequate…
Descriptors: Evidence, Validity, Tests, Testing
Peer reviewed
Direct link
Newton, Paul E. – Measurement: Interdisciplinary Research and Perspectives, 2012
The 1999 "Standards for Educational and Psychological Testing" defines validity as the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests. Although quite explicit, there are ways in which this definition lacks precision, consistency, and clarity. The history of validity has taught us…
Descriptors: Evidence, Validity, Educational Testing, Risk
Peer reviewed
Direct link
Samuelsen, Karen – Measurement: Interdisciplinary Research and Perspectives, 2012
The notion that there is often no clear distinction between factorial and typological models (von Davier, Naemi, & Roberts, this issue) is sound. As von Davier et al. state, theory often indicates a preference between these models; however, the statistical criteria by which these are delineated offer much less clarity. In many ways the procedure…
Descriptors: Models, Statistical Analysis, Classification, Factor Structure
Peer reviewed
Direct link
Waltman, Ludo; Costas, Rodrigo; van Eck, Nees Jan – Measurement: Interdisciplinary Research and Perspectives, 2012
The literature on bibliometric indices for assessing scholarly impact, in particular the "h" index (Hirsch, 2005) and its many variants, is extensive, but nevertheless Ruscio and colleagues (this issue) succeed in making a valuable contribution. They have made the effort of collecting publication and citation data for no less than 1,750…
Descriptors: Evidence, Citations (References), Periodicals, Measurement
Peer reviewed
Direct link
Barrett, Paul – Measurement: Interdisciplinary Research and Perspectives, 2011
The article by Stephen Humphry (this issue) is a technical tour de force. At one level, the author marvels at the ingenuity and sophisticated logic and argument on display. This is impressive work and thinking whichever way one looks at it. However, after twice re-reading the manuscript, the same question arises in the author's mind: What exactly…
Descriptors: Social Sciences, Measurement, Statistical Analysis, Item Response Theory
Peer reviewed
Direct link
Black, Paul; Wilson, Mark; Yao, Shih-Ying – Measurement: Interdisciplinary Research and Perspectives, 2011
The overall aim of this article is to analyze the relationships between the roles of assessment in pedagogy, the interactions between curriculum assessment and pedagogy, and the study of pupils' progression in learning. It is argued that well-grounded evidence of pupils' progressions in learning is crucial to the work of teachers, so that a method…
Descriptors: Evidence, Learning Strategies, Program Effectiveness, Grade 8
Peer reviewed
Direct link
Rose, L. Todd; Fischer, Kurt W. – Measurement: Interdisciplinary Research and Perspectives, 2011
The focus article by Coburn and Turner (this issue) seeks to provide a comprehensive framework for understanding data use in the context of data-use interventions. This commentary focuses on what the authors see as a glaring omission in what is otherwise a valuable framework: the issue of "useful data." It is their contention that the usefulness…
Descriptors: Decision Making, Data, Data Analysis, Data Interpretation
Peer reviewed
Direct link
Frey, Andreas; Carstensen, Claus H. – Measurement: Interdisciplinary Research and Perspectives, 2009
On a general level, the objective of diagnostic classification models (DCMs) lies in a classification of individuals regarding multiple latent skills. In this article, the authors show that this objective can be achieved by multidimensional adaptive testing (MAT) as well. The authors discuss whether or not the restricted applicability of DCMs can…
Descriptors: Adaptive Testing, Test Items, Classification, Psychometrics
Peer reviewed
Direct link
Tatsuoka, Curtis – Measurement: Interdisciplinary Research and Perspectives, 2009
In this commentary, the author addresses what is referred to as the deterministic input, noisy "and" gate (DINA) model. The author mentions concerns with how this model has been formulated and presented. In particular, the author points out that there is a lack of recognition of the confounding of profiles that generally arises and then discusses…
Descriptors: Test Items, Classification, Psychometrics, Item Response Theory
Previous Page | Next Page »
Pages: 1 | 2