Showing 1 to 15 of 51 results
Peer reviewed
Keller, Bryan; Tipton, Elizabeth – Journal of Educational and Behavioral Statistics, 2016
In this article, we review four software packages for implementing propensity score analysis in R: "Matching," "MatchIt," "PSAgraphics," and "twang." After briefly discussing essential elements for propensity score analysis, we apply each package to a data set from the Early Childhood Longitudinal Study in order to estimate the…
Descriptors: Computer Software, Probability, Statistical Analysis, Longitudinal Studies
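The review above covers R packages; as a language-agnostic sketch of the underlying procedure (NumPy only, synthetic data, all names hypothetical, not the reviewed packages' APIs), here is a minimal propensity score workflow: logistic-regression scores followed by nearest-neighbor matching on the score.

```python
import numpy as np

def propensity_scores(X, t, lr=0.1, steps=2000):
    """Estimate P(treated | X) with a simple logistic regression fit by gradient ascent."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (t - p) / len(t)       # log-likelihood gradient step
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def nn_match(scores, t):
    """For each treated unit, index of the control with the closest score."""
    treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
    return {i: control[np.argmin(np.abs(scores[control] - scores[i]))] for i in treated}

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
t = (rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)  # confounded treatment
ps = propensity_scores(X, t)
pairs = nn_match(ps, t)
```

Real analyses would add balance checks and caliper restrictions, which the reviewed packages provide.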
Peer reviewed
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
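The authors' chance-constrained model is not reproduced here; as a loose, assumed simplification of the idea of guarding against uncertain item parameter estimates, this sketch scores items by a lower confidence bound on their information rather than the point estimate.

```python
import numpy as np

def chance_constrained_select(mean_info, sd_info, n_items, z=1.645):
    """Greedy surrogate: rank items by mean - z*sd, a conservative score that
    penalizes items whose information estimates are highly uncertain."""
    lcb = mean_info - z * sd_info
    return np.argsort(lcb)[::-1][:n_items]

mean_info = np.array([0.9, 0.8, 0.7, 0.6, 0.5])   # hypothetical information estimates
sd_info   = np.array([0.5, 0.1, 0.1, 0.3, 0.05])  # hypothetical estimation uncertainty
picked = chance_constrained_select(mean_info, sd_info, 2)
```

Note that the item with the highest point estimate (index 0) is passed over because its estimate is too uncertain, which is the behavior chance constraints are meant to produce.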
Peer reviewed
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2014
A method for medical screening is adapted to differential item functioning (DIF). Its essential elements are explicit declarations of the level of DIF that is acceptable and of the loss function that quantifies the consequences of the two kinds of inappropriate classification of an item. Instead of a single level and a single function, sets of…
Descriptors: Test Items, Test Bias, Simulation, Hypothesis Testing
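Longford's loss-function screening method is not reproduced here; for context on what "classifying an item's DIF level" means, this sketch uses the standard Mantel-Haenszel statistic with the ETS delta categories (a different, classical technique, simplified to the point estimate only).

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio over score strata.
    Each stratum: (ref_correct, ref_wrong, foc_correct, foc_wrong)."""
    num = sum(rc * fw / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    den = sum(rw * fc / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    return num / den

def ets_category(alpha, small=1.0, large=1.5):
    """Classify DIF size on the ETS delta scale (simplified: no significance test)."""
    delta = -2.35 * math.log(alpha)
    if abs(delta) < small:
        return "A"                      # negligible DIF
    return "B" if abs(delta) <= large else "C"

strata = [(40, 10, 20, 30), (30, 20, 15, 35)]  # hypothetical 2x2 tables
alpha = mh_odds_ratio(strata)
```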
Peer reviewed
Loeb, Susanna; Christian, Michael S.; Hough, Heather; Meyer, Robert H.; Rice, Andrew B.; West, Martin R. – Journal of Educational and Behavioral Statistics, 2019
Measures of school-level growth in student outcomes are common tools for assessing the impacts of schools. The vast majority of these measures use standardized tests as the outcome of interest, even though emerging evidence demonstrates the importance of social-emotional learning (SEL). In this article, we present results from using the first…
Descriptors: Social Development, Emotional Development, Student Surveys, Institutional Characteristics
Peer reviewed
Mark Wilson – Journal of Educational and Behavioral Statistics, 2024
This article introduces a new framework for relating educational assessments to teacher uses in the classroom. It articulates three levels of assessment: macro (use of standardized tests), meso (externally developed items), and micro (on-the-fly in the classroom). The first level is the usual context for educational…
Descriptors: Educational Assessment, Measurement, Standardized Tests, Test Items
Peer reviewed
Yamaguchi, Kazuhiro – Journal of Educational and Behavioral Statistics, 2023
Understanding whether different types of students master various attributes can aid future learning remediation. In this study, two-level diagnostic classification models (DCMs) were developed to represent the probabilistic relationship between external latent classes and attribute mastery patterns. Furthermore, variational Bayesian (VB)…
Descriptors: Bayesian Statistics, Classification, Statistical Inference, Sampling
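The article's two-level model and variational machinery are not reproduced here; as a minimal illustration of the probabilistic link between latent mastery patterns and responses, this sketch computes an exact Bayes posterior over attribute patterns under a DINA-type model (exact enumeration, where the article develops VB approximations; all numbers hypothetical).

```python
import numpy as np

def posterior_classes(responses, patterns, q, slip, guess, prior):
    """Posterior over attribute-mastery patterns for one examinee under a
    conjunctive (DINA-type) model: item j is 'masterable' iff every attribute
    required by q[j] is present in the pattern."""
    post = np.array(prior, dtype=float)
    for k, alpha in enumerate(patterns):
        for j, x in enumerate(responses):
            eta = all(alpha[a] == 1 for a in range(len(alpha)) if q[j][a] == 1)
            p = (1 - slip[j]) if eta else guess[j]
            post[k] *= p if x == 1 else (1 - p)
    return post / post.sum()

patterns = [(0, 0), (1, 0), (0, 1), (1, 1)]    # all mastery patterns, 2 attributes
q = [(1, 0), (0, 1), (1, 1)]                   # Q-matrix rows for 3 items
slip, guess = [0.1, 0.1, 0.1], [0.2, 0.2, 0.2]
post = posterior_classes([1, 1, 1], patterns, q, slip, guess, [0.25] * 4)
```

An examinee answering all three items correctly is, as expected, most likely in the full-mastery class (1, 1).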
Peer reviewed
Li, Xiao; Xu, Hanchen; Zhang, Jinming; Chang, Hua-hua – Journal of Educational and Behavioral Statistics, 2023
The adaptive learning problem concerns how to create an individualized learning plan (also referred to as a learning policy) that chooses the most appropriate learning materials based on a learner's latent traits. In this article, we study an important yet less-addressed adaptive learning problem--one that assumes continuous latent traits.…
Descriptors: Learning Processes, Models, Algorithms, Individualized Instruction
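The learning-policy framework itself is beyond a snippet, but one classical ingredient of selecting materials under a continuous latent trait is maximizing Fisher information at the current ability estimate; here is a hedged sketch for 2PL items (hypothetical parameters, not the authors' algorithm).

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def next_item(theta, a, b, administered):
    """Pick the unadministered item with maximal information at theta."""
    info = fisher_info_2pl(theta, a, b)
    info[list(administered)] = -np.inf   # never re-administer
    return int(np.argmax(info))

a = np.array([1.0, 1.5, 0.8])    # discriminations
b = np.array([-1.0, 0.0, 1.0])   # difficulties
choice = next_item(0.0, a, b, administered={0})
```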
Peer reviewed
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity, not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Peer reviewed
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
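The ISLP model's full specification is not reproduced here; as a toy version of the central idea (item-specific learning parameters, grafted onto a plain Rasch probability; all values hypothetical), practicing an item adds that item's own gain to current ability.

```python
import math

def p_correct(theta, b, attempted, delta):
    """Rasch-type success probability where each previously practiced item j
    contributes its own learning gain delta[j] to current ability."""
    ability = theta + sum(delta[j] for j in attempted)
    return 1.0 / (1.0 + math.exp(-(ability - b)))

delta = {0: 0.3, 1: 0.5}                      # item-specific learning gains
p_before = p_correct(0.0, 0.0, [], delta)     # no practice yet
p_after = p_correct(0.0, 0.0, [0, 1], delta)  # after practicing items 0 and 1
```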
Peer reviewed
Zachary K. Collier; Minji Kong; Olushola Soyoye; Kamal Chawla; Ann M. Aviles; Yasser Payne – Journal of Educational and Behavioral Statistics, 2024
Asymmetric Likert-type items in research studies can present several challenges in data analysis, particularly concerning missing data. These items are often characterized by a skewed scale, where either there is no neutral response option or there are unequal numbers of possible positive and negative responses. The use of conventional techniques, such…
Descriptors: Likert Scales, Test Items, Item Analysis, Evaluation Methods
Peer reviewed
Bergner, Yoav; von Davier, Alina A. – Journal of Educational and Behavioral Statistics, 2019
This article reviews how National Assessment of Educational Progress (NAEP) has come to collect and analyze data about cognitive and behavioral processes (process data) in the transition to digital assessment technologies over the past two decades. An ordered five-level structure is proposed for describing the uses of process data. The levels in…
Descriptors: National Competency Tests, Data Collection, Data Analysis, Cognitive Processes
Reardon, Sean F.; Kalogrides, Demetra; Ho, Andrew D. – Journal of Educational and Behavioral Statistics, 2021
Linking score scales across different tests is considered speculative and fraught, even at the aggregate level. We introduce and illustrate validation methods for aggregate linkages, using the challenge of linking U.S. school district average test scores across states as a motivating example. We show that aggregate linkages can be validated both…
Descriptors: Equated Scores, Validity, Methods, School Districts
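The validation methods are the article's contribution; the linkage being validated can be illustrated with the simplest case, linear linking of a district mean from a state scale onto a common scale by matching state-level means and SDs (all numbers hypothetical).

```python
def linear_link(x, state_mean, state_sd, target_mean, target_sd):
    """Place a district mean x from a state test scale onto a common scale
    by matching the state's mean and SD on both scales (linear linking)."""
    return target_mean + target_sd * (x - state_mean) / state_sd

# A district scoring one state SD above its state mean lands one SD above
# the common-scale mean.
linked = linear_link(260.0, state_mean=250.0, state_sd=10.0,
                     target_mean=500.0, target_sd=40.0)
```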
Peer reviewed
Culpepper, Steven Andrew; Chen, Yinghan – Journal of Educational and Behavioral Statistics, 2019
Exploratory cognitive diagnosis models (CDMs) estimate the Q matrix, which is a binary matrix that indicates the attributes needed for affirmative responses to each item. Estimation of Q is an important next step for improving classifications and broadening application of CDMs. Prior research primarily focused on an exploratory version of the…
Descriptors: Cognitive Measurement, Models, Bayesian Statistics, Computation
Peer reviewed
Nájera, Pablo; Abad, Francisco J.; Chiu, Chia-Yi; Sorrel, Miguel A. – Journal of Educational and Behavioral Statistics, 2023
The nonparametric classification (NPC) method has been proven to be a suitable procedure for cognitive diagnostic assessments at a classroom level. However, its nonparametric nature precludes obtaining a model likelihood, hindering the exploration of crucial psychometric aspects such as model fit or reliability. Reporting the reliability and…
Descriptors: Models, Diagnostic Tests, Psychometrics, Cognitive Measurement
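The NPC rule itself is simple enough to sketch: classify an examinee into the attribute pattern whose ideal response vector is closest (in Hamming distance) to the observed responses. This is a minimal version assuming a conjunctive (DINA-type) ideal-response rule, with hypothetical Q-matrix and data.

```python
import numpy as np

def npc_classify(x, Q, patterns):
    """Nonparametric classification: return the attribute pattern whose ideal
    response vector has minimal Hamming distance to the observed responses x."""
    best, best_d = None, np.inf
    for alpha in patterns:
        # ideal response: item answerable iff all required attributes mastered
        eta = np.all(np.asarray(alpha) >= Q, axis=1).astype(int)
        d = int(np.sum(np.asarray(x) != eta))
        if d < best_d:
            best, best_d = alpha, d
    return best

Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])  # 4 items, 2 attributes
patterns = [(0, 0), (1, 0), (0, 1), (1, 1)]
label = npc_classify([1, 0, 0, 1], Q, patterns)
```

Ties go to the first minimal pattern; the article's concern is precisely that distances like these yield no likelihood from which to derive fit or reliability indices.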
Peer reviewed
Hao, Jiangang; Ho, Tin Kam – Journal of Educational and Behavioral Statistics, 2019
Machine learning is a popular topic in data analysis and modeling. Many different machine learning algorithms have been developed and implemented in a variety of programming languages over the past 20 years. In this article, we first provide an overview of machine learning and clarify its difference from statistical inference. Then, we review…
Descriptors: Artificial Intelligence, Statistical Inference, Data Analysis, Programming Languages