Showing 1 to 15 of 2,021 results
Peer reviewed
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type can be scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
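For context, two common raw-scoring rules for a multiple-response item can be sketched as below. These are generic illustrations of dichotomous versus partial-credit scoring, not necessarily the methods the article evaluates; the item and key are hypothetical.

```python
# Illustrative raw-score rules for a multiple-response item.

def dichotomous_score(selected, key):
    """All-or-nothing: 1 only if the selection matches the key exactly."""
    return 1 if set(selected) == set(key) else 0

def partial_credit_score(selected, key, n_options):
    """+1 per option classified correctly (selected-and-keyed or
    unselected-and-unkeyed), scaled to the 0..1 range."""
    selected, key = set(selected), set(key)
    correct = sum((opt in selected) == (opt in key) for opt in range(n_options))
    return correct / n_options

key = {0, 2}          # options A and C are keyed (hypothetical item)
response = {0, 1, 2}  # examinee also selected B
print(dichotomous_score(response, key))        # 0
print(partial_credit_score(response, key, 5))  # 0.8
```

The two rules can rank examinees differently, which is why the choice of raw-score method matters for polytomous calibration.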
Peer reviewed
Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2022
This study offers an approach to testing for differential item functioning (DIF) in a recently developed measurement framework, referred to as "D"-scoring method (DSM). Under the proposed approach, called "P-Z" method of testing for DIF, the item response functions of two groups (reference and focal) are compared by…
Descriptors: Test Bias, Methods, Test Items, Scoring
Peer reviewed
Sim, Mikyung; Kim, Su-Young; Suh, Youngsuk – Educational and Psychological Measurement, 2022
Mediation models have been widely used in many disciplines to better understand the underlying processes between independent and dependent variables. Despite their popularity and importance, the appropriate sample sizes for estimating those models are not well known. Although several approaches (such as Monte Carlo methods) exist, applied…
Descriptors: Sample Size, Statistical Analysis, Predictor Variables, Path Analysis
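The Monte Carlo approach the abstract mentions can be sketched generically: simulate data from a simple X → M → Y mediation model at a candidate sample size, test the indirect effect in each replication, and take the rejection rate as estimated power. This sketch uses a Sobel z-test and assumed effect sizes purely for illustration; it is not the authors' procedure.

```python
# Monte Carlo power estimate for the indirect effect in X -> M -> Y mediation.
import numpy as np

def ols(X, y):
    """OLS with intercept; returns coefficient vector and standard errors."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    sigma2 = resid @ resid / (len(y) - X1.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X1.T @ X1)))
    return beta, se

def mediation_power(n, a=0.39, b=0.39, reps=500, seed=1):
    """Estimated power of the Sobel test for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)          # a path
        y = b * m + rng.normal(size=n)          # b path
        beta_m, se_m = ols(x[:, None], m)                # m ~ x
        beta_y, se_y = ols(np.column_stack([m, x]), y)   # y ~ m + x
        ah, sa = beta_m[1], se_m[1]
        bh, sb = beta_y[1], se_y[1]
        z = (ah * bh) / np.sqrt(ah**2 * sb**2 + bh**2 * sa**2)
        hits += abs(z) > 1.96
    return hits / reps

print(mediation_power(100, reps=200))
```

Scanning `n` until the estimated power reaches a target (say, .80) yields the required sample size under the assumed effect sizes.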
Peer reviewed
Rios, Joseph A. – Educational and Psychological Measurement, 2022
The presence of rapid guessing (RG) presents a challenge to practitioners in obtaining accurate estimates of measurement properties and examinee ability. In response to this concern, researchers have utilized response times as a proxy of RG and have attempted to improve parameter estimation accuracy by filtering RG responses using popular scoring…
Descriptors: Guessing (Tests), Classification, Accuracy, Computation
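The response-time filtering the abstract refers to is often implemented as a simple threshold rule: responses faster than some cutoff are flagged as rapid guesses and rescored. The sketch below shows one common variant (rescoring flagged responses as incorrect); the threshold value is illustrative, and the article evaluates the consequences of such filtering rather than prescribing this exact rule.

```python
# Threshold-based rapid-guessing (RG) filter using response times as a proxy.

def filter_rapid_guesses(responses, times, threshold=3.0):
    """Return scored responses with sub-threshold (rapid) ones set to 0."""
    return [0 if t < threshold else r for r, t in zip(responses, times)]

# Item scores and response times in seconds (hypothetical examinee):
scores = filter_rapid_guesses([1, 1, 0, 1], [12.4, 1.8, 9.0, 2.5])
print(scores)  # [1, 0, 0, 0]
```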
Peer reviewed
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
Peer reviewed
Falk, Carl F.; Feuerstahler, Leah M. – Educational and Psychological Measurement, 2022
Large-scale assessments often use a computer adaptive test (CAT) for selection of items and for scoring respondents. Such tests often assume a parametric form for the relationship between item responses and the underlying construct. Although semi- and nonparametric response functions could be used, there is scant research on their performance in a…
Descriptors: Item Response Theory, Adaptive Testing, Computer Assisted Testing, Nonparametric Statistics
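The parametric baseline the abstract contrasts against can be illustrated with standard maximum-information item selection under the 2PL model, a textbook CAT rule. This is generic background, not the semi- or nonparametric approach the article studies; the item bank below is hypothetical.

```python
# Maximum-information CAT item selection under the 2PL model.
from math import exp

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, item_bank, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *item_bank[i]))

bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.1), (1.0, 1.2)]  # (a, b) pairs
print(next_item(0.0, bank, {2}))  # most informative remaining item at theta = 0
```

With semi- or nonparametric response functions, the same selection logic applies but the information function must be computed from the estimated curve rather than from a closed-form expression.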
Peer reviewed
Peng, Luyao; Sinharay, Sandip – Educational and Psychological Measurement, 2022
Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended the index of Wollack et al. (2015) to suggest three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of…
Descriptors: Cheating, Identification, Statistical Analysis, Testing
Peer reviewed
Wang, Yan; Kim, Eunsook; Yi, Zhiyao – Educational and Psychological Measurement, 2022
Latent profile analysis (LPA) identifies heterogeneous subgroups based on continuous indicators that represent different dimensions. It is a common practice to measure each dimension using items, create composite or factor scores for each dimension, and use these scores as indicators of profiles in LPA. In this case, measurement models for…
Descriptors: Robustness (Statistics), Profiles, Statistical Analysis, Classification
Peer reviewed
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
Peer reviewed
Liu, Ren; Liu, Haiyan; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2022
Assessments with a large number of small, similar, or often repetitive tasks are used in educational, neurocognitive, and psychological contexts. For example, respondents are asked to recognize numbers or letters from a large pool, and the number of correct answers is a count variable. In 1960, George Rasch developed the Rasch…
Descriptors: Classification, Models, Statistical Distributions, Scores
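The classical model the abstract refers to, the Rasch Poisson counts model (1960), treats the observed count as Poisson-distributed with a rate that factors into a person and an item parameter. The sketch below states that probability mass function; parameter names and values are generic, and the article's own extension is not shown.

```python
# Rasch Poisson counts model: X ~ Poisson(lambda), lambda = theta * epsilon,
# where theta is person ability and epsilon is item "easiness".
from math import exp, factorial

def rasch_poisson_pmf(x, theta, epsilon):
    """P(X = x) under the Rasch Poisson counts model."""
    lam = theta * epsilon
    return exp(-lam) * lam**x / factorial(x)

# An able person (theta = 2) on an easy task (epsilon = 3): rate of 6 counts.
probs = [rasch_poisson_pmf(x, 2.0, 3.0) for x in range(30)]
print(sum(probs))  # ~1.0: the pmf sums to one over the count range
```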
Peer reviewed
van Dijk, Wilhelmina; Schatschneider, Christopher; Al Otaiba, Stephanie; Hart, Sara A. – Educational and Psychological Measurement, 2022
Complex research questions often need large samples to obtain accurate estimates of parameters and adequate power. Combining extant data sets into a large, pooled data set is one way this can be accomplished without expending resources. Measurement invariance (MI) modeling is an established approach to ensure participant scores are on the same…
Descriptors: Sample Size, Data Analysis, Goodness of Fit, Measurement
Peer reviewed
Calatrava, Maria; de Irala, Jokin; Osorio, Alfonso; Benítez, Edgar; Lopez-del Burgo, Cristina – Educational and Psychological Measurement, 2022
Anonymous questionnaires are frequently used in research with adolescents in order to obtain sincere answers about sensitive topics. Most longitudinal studies include self-generated identification codes (SGICs) to match information. Typical elements include a combination of letters and digits from personal data. However, these data may make the…
Descriptors: Privacy, Questionnaires, Coding, Adolescents
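A self-generated identification code of the kind the abstract describes can be sketched as follows: each respondent reconstructs the same code at every wave from stable personal facts, so longitudinal records can be matched without collecting names. The specific elements below are hypothetical examples; the abstract's point is precisely that such elements can make respondents re-identifiable.

```python
# A hypothetical self-generated identification code (SGIC) built from
# stable personal elements (illustrative choice of elements only).

def sgic(mother_initial, birth_month, n_siblings, town_initial):
    """Concatenate letters and digits into a reproducible code."""
    return f"{mother_initial.upper()}{birth_month:02d}{n_siblings}{town_initial.upper()}"

# Same inputs at wave 1 and wave 2 yield the same code, enabling matching:
print(sgic("m", 7, 2, "b"))  # M072B
```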
Peer reviewed
Goretzko, David – Educational and Psychological Measurement, 2022
Determining the number of factors in exploratory factor analysis is arguably the most crucial decision a researcher faces when conducting the analysis. While several simulation studies exist that compare various so-called factor retention criteria under different data conditions, little is known about the impact of missing data on this process.…
Descriptors: Factor Analysis, Research Problems, Data, Prediction
Peer reviewed
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Educational and Psychological Measurement, 2022
Multilevel structural equation modeling (MSEM) allows researchers to model latent factor structures at multiple levels simultaneously by decomposing within- and between-group variation. Yet the extent to which the sampling ratio (i.e., proportion of cases sampled from each group) influences the results of MSEM models remains unknown. This article…
Descriptors: Structural Equation Models, Factor Structure, Statistical Bias, Error of Measurement
Peer reviewed
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items