Showing 1 to 15 of 600 results
Peer reviewed
Kong, Xiaojing; Davis, Laurie Laughlin; McBride, Yuanyuan; Morrison, Kristin – Applied Measurement in Education, 2018
Item response time data were used in investigating the differences in student test-taking behavior between two device conditions: computer and tablet. Analyses were conducted to address the questions of whether or not the device condition had a differential impact on rapid guessing and solution behaviors (with response time effort used as an…
Descriptors: Educational Technology, Technology Uses in Education, Computers, Handheld Devices
Peer reviewed
Dadey, Nathan; Lyons, Susan; DePascale, Charles – Applied Measurement in Education, 2018
Evidence of comparability is generally needed whenever there are variations in the conditions of an assessment administration, including variations introduced by the administration of an assessment on multiple digital devices (e.g., tablet, laptop, desktop). This article is meant to provide a comprehensive examination of issues relevant to the…
Descriptors: Evaluation Methods, Computer Assisted Testing, Educational Technology, Technology Uses in Education
Peer reviewed
Wyse, Adam E. – Applied Measurement in Education, 2018
This article discusses regression effects that are commonly observed in Angoff ratings where panelists tend to think that hard items are easier than they are and easy items are more difficult than they are in comparison to estimated item difficulties. Analyses of data from two credentialing exams illustrate these regression effects and the…
Descriptors: Regression (Statistics), Test Items, Difficulty Level, Licensing Examinations (Professions)
Peer reviewed
George, Ann Cathrice; Robitzsch, Alexander – Applied Measurement in Education, 2018
This article presents a new perspective on measuring gender differences in the large-scale assessment study Trends in International Mathematics and Science Study (TIMSS). The suggested empirical model is directly based on the theoretical competence model of the domain mathematics and thus includes the interaction between content and cognitive sub-competencies.…
Descriptors: Achievement Tests, Elementary Secondary Education, Mathematics Achievement, Mathematics Tests
Peer reviewed
Lee, HyeSun – Applied Measurement in Education, 2018
The current simulation study examined the effects of Item Parameter Drift (IPD) occurring in a short scale on parameter estimates in multilevel models where scores from a scale were employed as a time-varying predictor to account for outcome scores. Five factors, including three decisions about IPD, were considered for simulation conditions. It…
Descriptors: Test Items, Hierarchical Linear Modeling, Predictor Variables, Scores
Peer reviewed
Oliveri, Maria Elena; Lawless, Rene; Robin, Frederic; Bridgeman, Brent – Applied Measurement in Education, 2018
We analyzed a pool of items from an admissions test for differential item functioning (DIF) for groups based on age, socioeconomic status, citizenship, or English language status using Mantel-Haenszel and item response theory. DIF items were systematically examined to identify possible sources of DIF by item type, content, and wording. DIF was…
Descriptors: Test Bias, Comparative Analysis, Item Banks, Item Response Theory
Peer reviewed
Furtak, Erin Marie; Circi, Ruhan; Heredia, Sara C. – Applied Measurement in Education, 2018
This article describes a 4-year study of experienced high school biology teachers' participation in a five-step professional development experience in which they iteratively studied student ideas with the support of a set of learning progressions, designed formative assessment activities, practiced using those activities with their students,…
Descriptors: Skill Development, Behavioral Objectives, Science Teachers, Biology
Peer reviewed
Gotwals, Amelia Wenk – Applied Measurement in Education, 2018
In this commentary, I consider the three empirical studies in this special issue based on two main aspects: (a) the nature of the learning progressions and (b) what formative assessment practice(s) were investigated. Specifically, I describe differences among the learning progressions in terms of scope and grain size. I also identify three…
Descriptors: Skill Development, Behavioral Objectives, Formative Evaluation, Evaluation Methods
Peer reviewed
Covitt, Beth A.; Gunckel, Kristin L.; Caplan, Bess; Syswerda, Sara – Applied Measurement in Education, 2018
While learning progressions (LPs) hold promise as instructional tools, researchers are still in the early stages of understanding how teachers use LPs in formative assessment practices. We report on a study that assessed teachers' proficiency in using an LP for student ideas about hydrologic systems. Research questions were: (a) what were teachers'…
Descriptors: Skill Development, Behavioral Objectives, Formative Evaluation, Student Evaluation
Peer reviewed
Shepard, Lorrie A. – Applied Measurement in Education, 2018
This article addresses the teaching and learning side of the learning progressions literature, calling out for measurement specialists the knowledge most needed when collaborating with subject-matter experts in the development of learning progressions. Learning progressions are one of the strongest instantiations of principles from "Knowing…
Descriptors: Skill Development, Behavioral Objectives, Student Evaluation, Formative Evaluation
Peer reviewed
von Aufschnaiter, Claudia; Alonzo, Alicia C. – Applied Measurement in Education, 2018
Establishing nuanced interpretations of student thinking is central to formative assessment but difficult, especially for preservice teachers. Learning progressions (LPs) have been proposed as a framework for promoting interpretations of students' thinking; however, research is needed to investigate whether and how an LP can be used to support…
Descriptors: Formative Evaluation, Preservice Teachers, Physics, Science Instruction
Peer reviewed
Alonzo, Alicia C. – Applied Measurement in Education, 2018
Learning progressions--particularly as defined and operationalized in science education--have significant potential to inform teachers' formative assessment practices. In this overview article, I lay out an argument for this potential, starting from definitions for "formative assessment practices" and "learning progressions"…
Descriptors: Skill Development, Behavioral Objectives, Science Education, Formative Evaluation
Peer reviewed
Yannakoudakis, Helen; Andersen, Øistein E.; Geranpayeh, Ardeshir; Briscoe, Ted; Nicholls, Diane – Applied Measurement in Education, 2018
There are a number of challenges in the development of an automated writing placement model for non-native English learners, among them the fact that exams encompassing the full range of language proficiency exhibited at different stages of learning are hard to design. However, acquisition of appropriate training data that are relevant to the…
Descriptors: Automation, Data Processing, Student Placement, English Language Learners
Peer reviewed
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines
Peer reviewed
Raczynski, Kevin; Cohen, Allan – Applied Measurement in Education, 2018
The literature on Automated Essay Scoring (AES) systems has provided useful validation frameworks for any assessment that includes AES scoring. Furthermore, evidence for the scoring fidelity of AES systems is accumulating. Yet questions remain when appraising the scoring performance of AES systems. These questions include: (a) which essays are…
Descriptors: Essay Tests, Test Scoring Machines, Test Validity, Evaluators