Showing 1 to 15 of 46 results from Educational and Psychological Measurement
Peer reviewed
Marland, Joshua; Harrick, Matthew; Sireci, Stephen G. – Educational and Psychological Measurement, 2020
Student assessment nonparticipation (or opt out) has increased substantially in K-12 schools in states across the country. This increase in opt out has the potential to impact achievement and growth (or value-added) measures used for educator and institutional accountability. In this simulation study, we investigated the extent to which…
Descriptors: Value Added Models, Teacher Effectiveness, Teacher Evaluation, Elementary Secondary Education
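As a rough illustration of the issue the Marland, Harrick, and Sireci entry simulates, the Python sketch below shows how nonrandom opt out can bias a simple mean-growth measure. The opt-out rule, effect sizes, and sample size are invented for illustration and are not the authors' design.

import numpy as np

rng = np.random.default_rng(0)
n = 500                                              # students in one school
prior = rng.normal(0, 1, n)                          # prior-year achievement
growth = 0.3 - 0.2 * prior + rng.normal(0, 0.4, n)   # growth related to prior status

# Nonrandom opt out: higher-achieving students are more likely to skip the test,
# so the tested sample over-represents students with larger expected growth.
p_opt_out = 1 / (1 + np.exp(-(prior - 1.0)))
tested = rng.random(n) > p_opt_out

print(f"true mean growth:     {growth.mean():.3f}")
print(f"observed mean growth: {growth[tested].mean():.3f} "
      f"({tested.mean():.0%} of students tested)")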
Peer reviewed
Aytürk, Ezgi; Cham, Heining; Jennings, Patricia A.; Brown, Joshua L. – Educational and Psychological Measurement, 2020
Methods to handle ordered-categorical indicators in latent variable interactions have been developed, yet they have not been widely applied. This article compares the performance of two popular latent variable interaction modeling approaches in handling ordered-categorical indicators: unconstrained product indicator (UPI) and latent moderated…
Descriptors: Evaluation Methods, Grade 3, Grade 4, Grade 5
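For the Aytürk, Cham, Jennings, and Brown entry: the unconstrained product indicator (UPI) approach builds indicators of the latent interaction by multiplying mean-centered first-order indicators, while the latent moderated structural equations approach estimates the interaction without product terms. The sketch below shows only the product-indicator construction step on invented data; the column names and matched-pair scheme are placeholders, and the model itself would still be fit in SEM software.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Toy data: three indicators each for latent constructs X and M.
df = pd.DataFrame(rng.normal(size=(200, 6)),
                  columns=["x1", "x2", "x3", "m1", "m2", "m3"])

centered = df - df.mean()          # mean-center before forming products

# Matched-pair products serve as indicators of the latent interaction X*M.
for xi, mi in [("x1", "m1"), ("x2", "m2"), ("x3", "m3")]:
    df[f"{xi}_{mi}"] = centered[xi] * centered[mi]

print(df.head())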
Peer reviewed
McGrath, Kathleen V.; Leighton, Elizabeth A.; Ene, Mihaela; DiStefano, Christine; Monrad, Diane M. – Educational and Psychological Measurement, 2020
Survey research frequently involves the collection of data from multiple informants. Results, however, are usually analyzed by informant group, potentially ignoring important relationships across groups. When the same construct(s) are measured, integrative data analysis (IDA) allows pooling of data from multiple sources into one data set to…
Descriptors: Educational Environment, Meta Analysis, Student Attitudes, Teacher Attitudes
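The pooling step behind integrative data analysis in the McGrath et al. entry amounts to stacking the informant groups into one data set with an informant indicator that later models can condition on. A minimal, hypothetical version (variable and column names are invented):

import pandas as pd

# Toy stand-ins for student and teacher climate surveys measuring the same construct.
students = pd.DataFrame({"safety": [3, 4, 2], "respect": [4, 4, 3]})
teachers = pd.DataFrame({"safety": [4, 5], "respect": [3, 5]})

students["informant"] = "student"
teachers["informant"] = "teacher"

# One pooled data set; the informant flag supports tests of measurement
# equivalence and of relationships that cross informant groups.
pooled = pd.concat([students, teachers], ignore_index=True)
print(pooled.groupby("informant").size())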
Peer reviewed
Briggs, Derek C.; Alzen, Jessica L. – Educational and Psychological Measurement, 2019
Observation protocol scores are commonly used as status measures to support inferences about teacher practices. When multiple observations are collected for the same teacher over the course of a year, some portion of a teacher's score on each occasion may be attributable to the rater, lesson, and the time of year of the observation. All three of…
Descriptors: Observation, Inferences, Generalizability Theory, Scores
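The Briggs and Alzen entry partitions observation-protocol score variance among teacher, rater, lesson, and occasion facets. As a generic generalizability-theory sketch (two facets shown for brevity; not necessarily the authors' exact design), a fully crossed teacher (t) by rater (r) by occasion (o) design decomposes observed-score variance and yields a generalizability coefficient for a score averaged over n_r raters and n_o occasions:

\sigma^2(X_{tro}) = \sigma^2_t + \sigma^2_r + \sigma^2_o + \sigma^2_{tr} + \sigma^2_{to} + \sigma^2_{ro} + \sigma^2_{tro,e}

E\rho^2 = \frac{\sigma^2_t}{\sigma^2_t + \sigma^2_{tr}/n_r + \sigma^2_{to}/n_o + \sigma^2_{tro,e}/(n_r n_o)}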
Peer reviewed
Cao, Chunhua; Kim, Eun Sook; Chen, Yi-Hsin; Ferron, John; Stark, Stephen – Educational and Psychological Measurement, 2019
In multilevel multiple-indicator multiple-cause (MIMIC) models, covariates can interact at the within level, at the between level, or across levels. This study examines the performance of multilevel MIMIC models in estimating and detecting the interaction effect of two covariates through a simulation and provides an empirical demonstration of…
Descriptors: Hierarchical Linear Modeling, Structural Equation Models, Computation, Identification
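For the Cao et al. entry, the single-level version of a MIMIC model with a covariate interaction has the structure below (notation mine); the multilevel case studied in the article decomposes the covariates and their product into within- and between-level parts.

y_{ik} = \lambda_k \eta_i + \varepsilon_{ik}, \qquad
\eta_i = \gamma_1 x_{1i} + \gamma_2 x_{2i} + \gamma_3\, x_{1i} x_{2i} + \zeta_i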
Biancarosa, Gina; Kennedy, Patrick C.; Carlson, Sarah E.; Yoon, HyeonJin; Seipel, Ben; Liu, Bowen; Davison, Mark L. – Educational and Psychological Measurement, 2019
Prior research suggests that subscores from a single achievement test seldom add value over a single total score. Such scores typically correspond to subcontent areas in the total content domain, but content subdomains might not provide a sound basis for subscores. Using scores on an inferential reading comprehension test from 625 third, fourth,…
Descriptors: Scores, Scoring, Achievement Tests, Grade 3
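One common way to operationalize "added value" in the sense of the Biancarosa et al. entry is Haberman's proportional-reduction-in-mean-squared-error criterion: a subscore adds value only if the observed subscore predicts the true subscore better than the observed total score does. Stated loosely (this is the general criterion, not necessarily the authors' exact analysis):

\text{PRMSE}(\text{observed subscore}) > \text{PRMSE}(\text{observed total score}),

where the left-hand side equals the subscore's reliability.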
Peer reviewed
Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2017
This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated…
Descriptors: Psychometrics, Test Items, Item Response Theory, Hypothesis Testing
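In the notation of a unidimensional IRT model, the SIPD definition in the Huggins-Manley entry can be written loosely as a time-by-subpopulation shift in an anchor item's parameters, e.g., for difficulty (symbols mine, not the author's):

b_{j}^{(t_2,\, g)} = b_{j}^{(t_1)} + \delta_{jg}, \qquad \delta_{jg} \neq \delta_{jg'} \ \text{for some subpopulations } g \neq g'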
Lockwood, J. R.; Castellano, Katherine E. – Educational and Psychological Measurement, 2017
Student Growth Percentiles (SGPs) increasingly are being used in the United States for inferences about student achievement growth and educator effectiveness. Emerging research has indicated that SGPs estimated from observed test scores have large measurement errors. As such, little is known about "true" SGPs, which are defined in terms…
Descriptors: Item Response Theory, Correlation, Student Characteristics, Academic Achievement
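An SGP, as used in the Lockwood and Castellano entry, is the percentile of a student's current score within the conditional distribution of current scores given prior scores. Operational SGPs are typically estimated with B-spline quantile regression (e.g., the R SGP package); the Python sketch below is a deliberately simplified linear version on invented data, meant only to show the conditional-percentile idea.

import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
n = 1000
prior = rng.normal(0, 1, n)                      # prior-year score
current = 0.8 * prior + rng.normal(0, 0.6, n)    # current-year score

X = sm.add_constant(prior)
percentiles = np.arange(1, 100)

# Fitted conditional quantiles of the current score given the prior score.
fitted = np.column_stack([
    QuantReg(current, X).fit(q=p / 100).predict(X) for p in percentiles
])

# A student's SGP is (roughly) the count of fitted conditional percentiles
# lying at or below the observed current score.
sgp = (fitted <= current[:, None]).sum(axis=1)
print(sgp[:10])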
Peer reviewed
Sideridis, Georgios D. – Educational and Psychological Measurement, 2016
The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…
Descriptors: Learning Disabilities, Test Validity, Measures (Individuals), Hierarchical Linear Modeling
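For reference on the Sideridis entry, the Rasch model gives the probability of a correct response from person ability θ and item difficulty b; the studies ask whether estimates of b shift once test-taking anxiety and motivation are modeled (how those covariates enter the model is not reproduced here).

P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}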
Peer reviewed
Huang, Francis L.; Cornell, Dewey G. – Educational and Psychological Measurement, 2016
Bullying among youth is recognized as a serious student problem, especially in middle school. The most common approach to measuring bullying is through student self-report surveys that ask questions about different types of bullying victimization. Although prior studies have shown that question-order effects may influence participant responses, no…
Descriptors: Victims of Crime, Bullying, Middle School Students, Measures (Individuals)
Peer reviewed
Cheng, Ying; Shao, Can; Lathrop, Quinn N. – Educational and Psychological Measurement, 2016
Due to its flexibility, the multiple-indicator, multiple-causes (MIMIC) model has become an increasingly popular method for the detection of differential item functioning (DIF). In this article, we propose the mediated MIMIC model method to uncover the underlying mechanism of DIF. This method extends the usual MIMIC model by including one variable…
Descriptors: Test Bias, Models, Simulation, Sample Size
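In the standard MIMIC approach to DIF referenced in the Cheng, Shao, and Lathrop entry, the grouping covariate affects the latent trait and, when uniform DIF is present, also has a direct effect on an item; the mediated extension adds a variable that carries part of that direct effect. A generic sketch of the standard model (notation mine):

\eta_i = \gamma z_i + \zeta_i, \qquad
y^{*}_{ik} = \lambda_k \eta_i + \beta_k z_i + \varepsilon_{ik},

with β_k ≠ 0 indicating uniform DIF on item k.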
Peer reviewed
Li, Feiming; Cohen, Allan; Bottge, Brian; Templin, Jonathan – Educational and Psychological Measurement, 2016
Latent transition analysis (LTA) was initially developed to provide a means of measuring change in dynamic latent variables. In this article, we illustrate the use of a cognitive diagnostic model, the DINA model, as the measurement model in an LTA, thereby demonstrating a means of analyzing change in cognitive skills over time. An example is…
Descriptors: Statistical Analysis, Change, Thinking Skills, Measurement
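The DINA measurement model used in the Li, Cohen, Bottge, and Templin entry has the standard item response function below, with slip (s_j) and guessing (g_j) parameters, attribute-mastery indicators α_ik, and Q-matrix entries q_jk:

P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}}, \qquad
\eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}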
Peer reviewed
Konstantopoulos, Spyros; Li, Wei; Miller, Shazia R.; van der Ploeg, Arie – Educational and Psychological Measurement, 2016
We use data from a large-scale experiment conducted in Indiana in 2009-2010 to examine the impact of two interim assessment programs (mCLASS and Acuity) across the mathematics and reading achievement distributions. Specifically, we focus on whether the use of interim assessments has a particularly strong effect on improving outcomes for low…
Descriptors: Educational Assessment, Mathematics Achievement, Reading Achievement, Regression (Statistics)
Peer reviewed
Attali, Yigal; Laitusis, Cara; Stone, Elizabeth – Educational and Psychological Measurement, 2016
There are many reasons to believe that open-ended (OE) and multiple-choice (MC) items place different cognitive demands on students. However, empirical evidence that supports this view is lacking. In this study, we investigated the reactions of test takers to an interactive assessment with immediate feedback and answer-revision opportunities for…
Descriptors: Test Items, Questioning Techniques, Differences, Student Reaction
Peer reviewed
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
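The bi-factor idea in the Wang, Chen, and Jin entry is that every item loads on the substantive general factor while the negatively worded items additionally load on an orthogonal wording (method) factor, which reverse coding alone does not remove. A generic loading structure (not the authors' exact parameterization):

y^{*}_{ik} = \lambda_k \theta_i + \varepsilon_{ik} \quad \text{(positively worded item } k\text{)}, \qquad
y^{*}_{ik} = \lambda_k \theta_i + \delta_k \gamma_i + \varepsilon_{ik} \quad \text{(negatively worded item } k\text{)}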