Showing 1 to 15 of 1,217 results
Peer reviewed
Fox, Jean-Paul; Marianti, Sukaesi – Journal of Educational Measurement, 2017
Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person-fit tests…
Descriptors: Accuracy, Reaction Time, Statistics, Test Items
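To give a feel for the kind of joint accuracy/response-time data such models describe, the sketch below simulates responses under a generic hierarchical setup (Rasch accuracy plus lognormal response times). It is only an illustrative assumption for context, not the person-fit methodology of the article above; all sizes, parameter values, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes -- not taken from the article.
n_persons, n_items = 500, 20

theta = rng.normal(0, 1, n_persons)      # ability
tau   = rng.normal(0, 0.5, n_persons)    # working speed
b     = rng.normal(0, 1, n_items)        # item difficulty
beta  = rng.normal(3.5, 0.3, n_items)    # item time intensity (log seconds)
alpha = rng.uniform(1.5, 2.5, n_items)   # item time discrimination

# Accuracy: Rasch model, P(X = 1) = logistic(theta - b)
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
X = rng.binomial(1, p)

# Response time: lognormal with mean beta_i - tau_j and sd 1/alpha_i
log_t = rng.normal(beta[None, :] - tau[:, None], 1 / alpha[None, :])
T = np.exp(log_t)
```

Person-fit analysis of the kind the article proposes would then flag response or response-time patterns that are unlikely under the fitted joint model.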
Peer reviewed
Sinharay, Sandip; Duong, Minh Q.; Wood, Scott W. – Journal of Educational Measurement, 2017
As noted by Fremer and Olson, analysis of answer changes is often used to investigate testing irregularities because the analysis is readily performed and has proven its value in practice. Researchers such as Belov, Sinharay and Johnson, van der Linden and Jeon, van der Linden and Lewis, and Wollack, Cohen, and Eckerly have suggested several…
Descriptors: Identification, Statistics, Change, Tests
Peer reviewed
Kang, Hyeon-Ah; Zhang, Susu; Chang, Hua-Hua – Journal of Educational Measurement, 2017
The development of cognitive diagnostic-computerized adaptive testing (CD-CAT) has provided a new perspective for gaining information about examinees' mastery on a set of cognitive attributes. This study proposes a new item selection method within the framework of dual-objective CD-CAT that simultaneously addresses examinees' attribute mastery…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Test Items
Peer reviewed
Debeer, Dries; Ali, Usama S.; van Rijn, Peter W. – Journal of Educational Measurement, 2017
Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) are commonly used as statistical…
Descriptors: Test Format, Test Construction, Statistical Analysis, Comparative Analysis
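As background on the test information function (TIF) mentioned above: under the two-parameter logistic model, item information is I_i(θ) = a_i² P_i(θ)[1 − P_i(θ)], and the TIF is the sum over items. The sketch below shows this standard IRT computation; it is a generic illustration with hypothetical item parameters, not the article's assembly procedure.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def test_information(theta, a, b):
    """Test information function: sum of 2PL item informations at each theta."""
    p = p_2pl(theta[:, None], a[None, :], b[None, :])
    return (a[None, :] ** 2 * p * (1 - p)).sum(axis=1)

theta_grid = np.linspace(-3, 3, 61)
a = np.array([1.0, 1.4, 0.8, 1.2])   # hypothetical discriminations
b = np.array([-1.0, 0.0, 0.5, 1.5])  # hypothetical difficulties
tif = test_information(theta_grid, a, b)
```

A parallel form, in the TIF sense, would be assembled so that its information curve matches the reference form's curve across the θ range.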
Peer reviewed
Moses, Tim; Kim, YoungKoung – Journal of Educational Measurement, 2017
The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…
Descriptors: Error of Measurement, Scores, Comparative Analysis, Item Response Theory
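The traditional arcsine transformation referred to above is the classical variance-stabilizing transformation for proportion-correct scores: the variance of arcsin(√(X/n)) is approximately 1/(4n) regardless of the true proportion, so the CSEM is roughly constant on the transformed scale. The sketch below shows only this classical case, not the newer transformations the article compares; the test length and variable names are hypothetical.

```python
import numpy as np

def arcsine_scale(raw, n_items):
    """Traditional arcsine transformation of a number-correct score.

    Var[arcsin(sqrt(X/n))] is approximately 1/(4n) for all true proportions,
    so the conditional SEM is nearly constant across the transformed scale.
    """
    return np.arcsin(np.sqrt(np.asarray(raw) / n_items))

n = 40
raw_scores = np.arange(0, n + 1)
transformed = arcsine_scale(raw_scores, n)
approx_csem = 1 / (2 * np.sqrt(n))   # ~constant CSEM on the transformed scale
```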
Peer reviewed
Kim, Hyung Jin; Brennan, Robert L.; Lee, Won-Chan – Journal of Educational Measurement, 2017
In equating, when common items are internal and scoring is conducted in terms of the number of correct items, some pairs of total scores ("X") and common-item scores ("V") can never be observed in a bivariate distribution of "X" and "V"; these pairs are called "structural zeros." This simulation…
Descriptors: Test Items, Equated Scores, Comparative Analysis, Methods
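The structural zeros described above arise mechanically: with an internal anchor, the common-item score V cannot exceed the total score X, and X − V cannot exceed the number of non-common items. The sketch below simply enumerates which (X, V) cells are possible under hypothetical test lengths; it is an illustration of the constraint, not the article's simulation design.

```python
import numpy as np

n_total, n_common = 40, 10          # hypothetical internal-anchor design
n_unique = n_total - n_common

# possible[x, v] is True only if the pair (X = x, V = v) can occur:
# V cannot exceed X, and the non-anchor part X - V cannot exceed the
# number of non-common items. All remaining cells are structural zeros.
X = np.arange(n_total + 1)[:, None]
V = np.arange(n_common + 1)[None, :]
possible = (V <= X) & ((X - V) <= n_unique)
n_structural_zeros = possible.size - np.count_nonzero(possible)
```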
Peer reviewed
Scoular, Claire; Care, Esther; Hesse, Friedrich W. – Journal of Educational Measurement, 2017
Collaborative problem solving is a complex skill set that draws on social and cognitive factors. The construct remains in its infancy due to lack of empirical evidence that can be drawn upon for validation. The differences and similarities between two large-scale initiatives that reflect this state of the art, in terms of underlying assumptions…
Descriptors: Problem Solving, Automation, Evaluation Methods, Cooperative Learning
Peer reviewed
Rosen, Yigal – Journal of Educational Measurement, 2017
In order to understand potential applications of collaborative problem-solving (CPS) assessment tasks, it is necessary to examine empirically the multifaceted student performance that may be distributed across collaboration methods and purposes of the assessment. Ideally, each student should be matched with various types of group members and must…
Descriptors: Problem Solving, Pilot Projects, Electronic Learning, Cooperation
Peer reviewed
Andrews, Jessica J.; Kerr, Deirdre; Mislevy, Robert J.; von Davier, Alina; Hao, Jiangang; Liu, Lei – Journal of Educational Measurement, 2017
Simulations and games offer interactive tasks that can elicit rich data, providing evidence of complex skills that are difficult to measure with more conventional items and tests. However, one notable challenge in using such technologies is making sense of the data generated in order to make claims about individuals or groups. This article…
Descriptors: Simulation, Interaction, Research Methodology, Cooperative Learning
Peer reviewed
Halpin, Peter F.; von Davier, Alina A.; Hao, Jiangang; Liu, Lei – Journal of Educational Measurement, 2017
This article addresses performance assessments that involve collaboration among students. We apply the Hawkes process to infer whether the actions of one student are associated with increased probability of further actions by his/her partner(s) in the near future. This leads to an intuitive notion of engagement among collaborators, and we consider…
Descriptors: Performance Based Assessment, Student Evaluation, Cooperative Learning, Inferences
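For context on the Hawkes process mentioned above: in its standard univariate form with an exponential kernel, the conditional intensity is λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)), so each past event temporarily raises the rate of further events. The sketch below evaluates this intensity for some hypothetical action timestamps; it illustrates the general process, not the article's estimation or engagement measures.

```python
import numpy as np

def hawkes_intensity(t, event_times, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    past = np.asarray(event_times, dtype=float)
    past = past[past < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Hypothetical timestamps (seconds) of one partner's recent actions
partner_actions = [2.0, 5.5, 6.0, 9.0]
rate_now = hawkes_intensity(10.0, partner_actions, mu=0.2, alpha=0.8, beta=1.0)
```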
Peer reviewed
Wilson, Mark; Gochyyev, Perman; Scalise, Kathleen – Journal of Educational Measurement, 2017
This article summarizes assessment of cognitive skills through collaborative tasks, using field test results from the Assessment and Teaching of 21st Century Skills (ATC21S) project. This project, sponsored by Cisco, Intel, and Microsoft, aims to help educators around the world enable students with the skills to succeed in future career and…
Descriptors: Cognitive Ability, Thinking Skills, Evaluation Methods, Educational Assessment
Peer reviewed
Herborn, Katharina; Mustafic, Maida; Greiff, Samuel – Journal of Educational Measurement, 2017
Collaborative problem solving (CPS) assessment is a new academic research field with a number of educational implications. In 2015, the Programme for International Student Assessment (PISA) assessed CPS with a computer-simulated human-agent (H-A) approach that claimed to measure 12 individual CPS skills for the first time. After reviewing the…
Descriptors: Cooperative Learning, Problem Solving, Computer Simulation, Evaluation Methods
Peer reviewed
Olsen, Jennifer; Aleven, Vincent; Rummel, Nikol – Journal of Educational Measurement, 2017
Within educational data mining, many statistical models capture the learning of students working individually. However, not much work has been done to extend these statistical models of individual learning to a collaborative setting, despite the effectiveness of collaborative learning activities. We extend a widely used model (the additive factors…
Descriptors: Mathematical Models, Information Retrieval, Data Analysis, Educational Research
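As background on the additive factors model (AFM) the abstract extends: in its standard single-student form, the log-odds of a correct step is the student's proficiency plus, for each skill tagged to the step in the Q-matrix, a skill easiness term and a learning-rate term times the number of prior practice opportunities. The sketch below shows that standard form only, not the collaborative extension proposed in the article; all values are hypothetical.

```python
import numpy as np

def afm_logit(theta_j, q_row, beta, gamma, opportunities):
    """Additive factors model logit for one student on one step:
    theta_j + sum_k q_k * (beta_k + gamma_k * T_jk)."""
    return theta_j + np.sum(q_row * (beta + gamma * opportunities))

q_row = np.array([1, 0, 1])           # skills tagged to this step (Q-matrix row)
beta  = np.array([-0.5, 0.2, 0.1])    # skill easiness parameters
gamma = np.array([0.15, 0.05, 0.10])  # learning rate per practice opportunity
T_jk  = np.array([3, 0, 1])           # prior opportunities on each skill
logit = afm_logit(0.3, q_row, beta, gamma, T_jk)
p_correct = 1 / (1 + np.exp(-logit))
```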
Peer reviewed
Barrett, Michelle D.; van der Linden, Wim J. – Journal of Educational Measurement, 2017
Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…
Descriptors: Item Response Theory, Error of Measurement, Programming, Evaluation Methods
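One familiar example of a linking function is the mean/sigma method, which rescales item-difficulty estimates from one calibration onto the scale of another using the common items. The sketch below shows that generic method with hypothetical values; it is context for the abstract above, not the article's analysis of how estimation error propagates into the linking constants.

```python
import numpy as np

def mean_sigma_linking(b_old, b_new):
    """Mean/sigma linking constants A, B such that A * b_old + B is on the
    scale of b_new, computed from common-item difficulty estimates."""
    A = np.std(b_new, ddof=1) / np.std(b_old, ddof=1)
    B = np.mean(b_new) - A * np.mean(b_old)
    return A, B

# Hypothetical common-item difficulties from two separate calibrations
b_form_x = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_form_y = np.array([-1.0, -0.2, 0.3, 1.1, 1.8])
A, B = mean_sigma_linking(b_form_x, b_form_y)
theta_on_y_scale = A * 0.5 + B   # rescale an ability estimate of 0.5
```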
Peer reviewed
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis