Showing 1 to 15 of 140 results
Peer reviewed
Leventhal, Brian C.; Grabovsky, Irina – Educational Measurement: Issues and Practice, 2020
Standard setting is arguably one of the most subjective techniques in test development and psychometrics. The decisions made when scores are compared to standards, however, are arguably the most consequential outcomes of testing. Providing licensure to practice in a profession has high-stakes consequences for the public. Denying graduation or forcing…
Descriptors: Standard Setting (Scoring), Weighted Scores, Test Construction, Psychometrics
Peer reviewed
Lewis, Daniel; Cook, Robert – Educational Measurement: Issues and Practice, 2020
In this paper we assert that the practice of principled assessment design renders traditional standard-setting methodology redundant at best and contradictory at worst. We describe the rationale for, and methodological details of, Embedded Standard Setting (ESS; previously Engineered Cut Scores; Lewis, 2016), an approach to establish performance…
Descriptors: Standard Setting, Evaluation, Cutting Scores, Performance Based Assessment
Peer reviewed
Jones, Andrew T.; Kopp, Jason P.; Ong, Thai Q. – Educational Measurement: Issues and Practice, 2020
Studies investigating invariance have often been limited to measurement or prediction invariance. Selection invariance, wherein the use of test scores for classification results in equivalent classification accuracy between groups, has received comparatively little attention in the psychometric literature. Previous research suggests that some form…
Descriptors: Test Construction, Test Bias, Classification, Accuracy
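A worked example may help make selection invariance concrete. The following minimal Python sketch (hypothetical data and an invented cut score, not taken from the article) compares classification accuracy between two groups when the same cut score is applied to both:

import numpy as np

def classification_accuracy(true_status, scores, cut):
    # Proportion of examinees whose pass/fail classification matches their true status.
    predicted = scores >= cut
    return np.mean(predicted == true_status)

# Hypothetical data: identical score-generating process for both groups,
# so classification accuracy should be approximately invariant.
rng = np.random.default_rng(0)
cut = 60
for group in ("A", "B"):
    true_status = rng.random(1000) < 0.5          # true master / non-master
    scores = np.where(true_status,
                      rng.normal(65, 10, 1000),   # masters score higher on average
                      rng.normal(55, 10, 1000))
    print(f"Group {group}: accuracy = {classification_accuracy(true_status, scores, cut):.3f}")

Selection invariance, in the sense studied here, concerns whether such accuracies are equivalent across groups even when measurement or prediction invariance holds.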
Peer reviewed
Arslan, Burcu; Jiang, Yang; Keehner, Madeleine; Gong, Tao; Katz, Irvin R.; Yan, Fred – Educational Measurement: Issues and Practice, 2020
Computer-based educational assessments often include items that involve drag-and-drop responses. There are different ways that drag-and-drop items can be laid out and different choices that test developers can make when designing these items. Currently, these decisions are based on experts' professional judgments and design constraints, rather…
Descriptors: Test Items, Computer Assisted Testing, Test Format, Decision Making
Peer reviewed
Wilkerson, Judy R. – Educational Measurement: Issues and Practice, 2020
Validity and reliability are a major focus in teacher education accreditation by the Council for Accreditation of Educator Preparation (CAEP). CAEP requires the use of "accepted research standards," but many faculty and administrators are unsure how to meet this requirement. The "Standards for Educational and Psychological Testing"…
Descriptors: Test Construction, Test Validity, Test Reliability, Teacher Education Programs
Peer reviewed
Welch, Catherine J.; Dunbar, Stephen B. – Educational Measurement: Issues and Practice, 2020
The use of assessment results to inform school accountability relies on the assumption that the test design appropriately represents the content and cognitive emphasis reflected in the state's standards. Since the passage of the Every Student Succeeds Act and the certification of accountability assessments through federal peer review practices,…
Descriptors: Accountability, Test Construction, State Standards, Content Validity
Peer reviewed
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
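For orientation, the classical side of the item analysis covered in this module typically involves two per-item statistics: difficulty (the proportion answering correctly) and discrimination (the corrected item-total correlation). A minimal Python sketch follows; the response matrix and function name are illustrative, not from the module:

import numpy as np

def ctt_item_analysis(responses):
    # responses: examinees x items matrix of 0/1 scores.
    difficulty = responses.mean(axis=0)            # p-value per item
    total = responses.sum(axis=1)
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]             # total score excluding item j
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

# Hypothetical 0/1 response matrix: 5 examinees x 4 items.
resp = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1],
                 [0, 1, 0, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 1]])
difficulty, discrimination = ctt_item_analysis(resp)
print("difficulty:", difficulty)
print("discrimination:", discrimination)

IRT item analysis replaces these sample statistics with parameters of a response model fit to the data.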
Peer reviewed
Bradshaw, Laine; Levy, Roy – Educational Measurement: Issues and Practice, 2019
Although much research has been conducted on the psychometric properties of cognitive diagnostic models, they are only recently being used in operational settings to provide results to examinees and other stakeholders. Using this newer class of models in practice comes with a fresh challenge for diagnostic assessment developers: effectively…
Descriptors: Data Interpretation, Probability, Classification, Diagnostic Tests
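The results referred to here are typically posterior probabilities of attribute mastery, obtained via Bayes' rule. As a hedged illustration (the two-class structure and all numbers are invented, not from the article):

# Hypothetical diagnostic classification for a single attribute.
prior = {"master": 0.5, "non_master": 0.5}
# Invented likelihoods of the observed response pattern under each class.
likelihood = {"master": 0.30, "non_master": 0.05}

evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}
print(posterior)   # {'master': ~0.857, 'non_master': ~0.143}

The communication challenge the authors address is how to report such probabilities so that examinees and other stakeholders interpret them correctly.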
Peer reviewed
Attali, Yigal – Educational Measurement: Issues and Practice, 2019
Rater training is an important part of developing and conducting large-scale constructed-response assessments. As part of this process, candidate raters have to pass a certification test to confirm that they are able to score consistently and accurately before they begin scoring operationally. Moreover, many assessment programs require raters to…
Descriptors: Evaluators, Certification, High Stakes Tests, Scoring
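Certification tests of this kind commonly score candidate raters against expert-assigned benchmark scores. The sketch below is a hypothetical check, not the article's procedure; the agreement statistics are standard, but the data and thresholds are invented:

import numpy as np

def agreement_rates(rater, expert):
    rater, expert = np.asarray(rater), np.asarray(expert)
    exact = np.mean(rater == expert)                 # identical scores
    adjacent = np.mean(np.abs(rater - expert) <= 1)  # within one score point
    return exact, adjacent

candidate = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]   # candidate rater's scores
benchmark = [3, 4, 3, 5, 2, 4, 1, 3, 5, 2]   # expert consensus scores
exact, adjacent = agreement_rates(candidate, benchmark)
print(f"exact = {exact:.2f}, adjacent = {adjacent:.2f}")
print("certified:", exact >= 0.70 and adjacent >= 0.95)  # invented thresholds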
Peer reviewed
Russell, Mike; Ludlow, Larry; O'Dwyer, Laura – Educational Measurement: Issues and Practice, 2019
The field of educational measurement has evolved considerably since the first doctoral programs were established. In response, programs have typically tacked on courses that address newly developed theories, methods, tools, and techniques. As our review of current programs evidences, this approach produces artificial distinctions among topics and…
Descriptors: Educational Testing, Specialists, Doctoral Programs, Program Evaluation
Peer reviewed
Mislevy, Robert J.; Oliveri, Maria Elena – Educational Measurement: Issues and Practice, 2019
In this digital ITEMS module, Dr. Robert [Bob] Mislevy and Dr. Maria Elena Oliveri introduce and illustrate a sociocognitive perspective on educational measurement, which focuses on a variety of design and implementation considerations for creating fair and valid assessments for learners from diverse populations with diverse sociocultural…
Descriptors: Educational Testing, Reliability, Test Validity, Test Reliability
Peer reviewed
Wang, Jue; Engelhard, George, Jr. – Educational Measurement: Issues and Practice, 2019
In this digital ITEMS module, Dr. Jue Wang and Dr. George Engelhard Jr. describe the Rasch measurement framework for the construction and evaluation of new measures and scales. From a theoretical perspective, they discuss the historical and philosophical perspectives on measurement with a focus on Rasch's concept of specific objectivity and…
Descriptors: Item Response Theory, Evaluation Methods, Measurement, Goodness of Fit
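For readers new to the framework, the dichotomous Rasch model expresses the probability of a correct response from person n on item i in terms of ability \theta_n and item difficulty b_i (standard notation, not quoted from the module):

P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

On the log-odds scale, the comparison of two persons reduces to \theta_1 - \theta_2 regardless of which item is used; this item-free (and, symmetrically, person-free) comparison is the specific objectivity property the authors emphasize.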
Peer reviewed
Fidler, James R.; Risk, Nicole M. – Educational Measurement: Issues and Practice, 2019
Credentialing examination developers rely on task (job) analyses for establishing inventories of task and knowledge areas in which competency is required for safe and successful practice in target occupations. There are many ways in which task-related information may be gathered from practitioner ratings, each with its own advantage and…
Descriptors: Job Analysis, Scaling, Licensing Examinations (Professions), Test Construction
Peer reviewed
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Measurement: Issues and Practice, 2019
The current study investigated how item formats and their inherent affordances influence test-takers' cognition under uncertainty. Adult participants solved content-equivalent math items in multiple-selection multiple-choice and four alternative grid formats. The results indicated that participants' affirmative response tendency (i.e., judge the…
Descriptors: Affordances, Test Items, Test Format, Test Wiseness
Peer reviewed
Johnson, Evelyn S.; Crawford, Angela; Moylan, Laura A.; Zheng, Yuzhu – Educational Measurement: Issues and Practice, 2018
The evidence-centered design framework was used to create a special education teacher observation system, Recognizing Effective Special Education Teachers. Extensive reviews of research informed the domain analysis and modeling stages, and led to the conceptual framework in which effective special education teaching is operationalized as the…
Descriptors: Evidence Based Practice, Special Education Teachers, Observation, Disabilities