Showing 1 to 15 of 266 results
Peer reviewed
Jin, Kuan-Yu; Eckes, Thomas – Educational and Psychological Measurement, 2022
Performance assessments heavily rely on human ratings. These ratings are typically subject to various forms of error and bias, threatening the assessment outcomes' validity and fairness. Differential rater functioning (DRF) is a special kind of threat to fairness manifesting itself in unwanted interactions between raters and performance- or…
Descriptors: Performance Based Assessment, Rating Scales, Test Bias, Student Evaluation
Peer reviewed
Lee, Sooyong; Han, Suhwa; Choi, Seung W. – Educational and Psychological Measurement, 2022
Response data containing an excessive number of zeros are referred to as zero-inflated data. When differential item functioning (DIF) detection is of interest, zero-inflation can attenuate DIF effects in the total sample and lead to underdetection of DIF items. The current study presents a DIF detection procedure for response data with excess…
Descriptors: Test Bias, Monte Carlo Methods, Simulation, Models
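The attenuation mechanism described in this abstract can be illustrated with a toy simulation (a simplified sketch, not the authors' detection procedure; the sample size, DIF magnitude, and zero-inflation rate below are invented):

```python
import math
import random

random.seed(7)
N = 20000

def p_correct(theta, b):
    """Rasch-type probability of a nonzero (correct) response."""
    return 1 / (1 + math.exp(-(theta - b)))

# Both groups share the same ability distribution; the item is harder
# for the focal group by 0.5 logits (uniform DIF).
thetas = [random.gauss(0, 1) for _ in range(N)]
y_ref = [random.random() < p_correct(t, 0.0) for t in thetas]
y_foc = [random.random() < p_correct(t, 0.5) for t in thetas]

dif_clean = sum(y_ref) / N - sum(y_foc) / N

# Zero-inflation: ~30% of respondents produce a structural zero
# regardless of ability, which dilutes the observable group gap.
zi = [random.random() < 0.30 for _ in range(N)]
y_ref_zi = [0 if z else y for z, y in zip(zi, y_ref)]
y_foc_zi = [0 if z else y for z, y in zip(zi, y_foc)]
dif_zi = sum(y_ref_zi) / N - sum(y_foc_zi) / N

print(f"Group gap without zero-inflation: {dif_clean:.3f}")
print(f"Group gap with 30% zero-inflation: {dif_zi:.3f}")
```

The structural zeros shrink the observed gap toward zero, which is why a DIF test run on the total sample can underdetect the item.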
Peer reviewed
Schulte, Niklas; Holling, Heinz; Bürkner, Paul-Christian – Educational and Psychological Measurement, 2021
Forced-choice questionnaires can prevent faking and other response biases typically associated with rating scales. However, the derived trait scores are often unreliable and ipsative, making interindividual comparisons in high-stakes situations impossible. Several studies suggest that these problems vanish if the number of measured traits is high.…
Descriptors: Questionnaires, Measurement Techniques, Test Format, Scoring
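The ipsativity problem noted in this abstract is easy to demonstrate (a toy sketch assuming classical scoring of equal-keyed blocks, with one point per block going to the chosen trait; the trait labels and block count are invented):

```python
import random

random.seed(11)
TRAITS = ["A", "B", "C"]
N_BLOCKS = 12

def classical_fc_scores():
    """Score one respondent under classical forced-choice scoring:
    each block awards one point to the single trait whose statement
    the respondent endorses (choices simulated at random here)."""
    scores = {t: 0 for t in TRAITS}
    for _ in range(N_BLOCKS):
        scores[random.choice(TRAITS)] += 1
    return scores

respondents = [classical_fc_scores() for _ in range(3)]
for s in respondents:
    # Ipsativity: every respondent's trait scores sum to the same
    # constant, so the scores carry only within-person information
    # and cannot support interindividual comparisons.
    print(s, "sum =", sum(s.values()))
```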
Peer reviewed
Goretzko, David; Heumann, Christian; Bühner, Markus – Educational and Psychological Measurement, 2020
Exploratory factor analysis is a statistical method commonly used in psychological research to investigate latent variables and to develop questionnaires. Although such self-report questionnaires are prone to missing values, there is not much literature on this topic with regard to exploratory factor analysis--and especially the process of factor…
Descriptors: Factor Analysis, Data Analysis, Research Methodology, Psychological Studies
Peer reviewed
Liu, Yue; Cheng, Ying; Liu, Hongyun – Educational and Psychological Measurement, 2020
The responses of non-effortful test-takers may have serious consequences, as non-effortful responses can impair model calibration and latent trait inferences. This article introduces a mixture model, using both response accuracy and response time information, to help differentiate non-effortful from effortful individuals and to improve item…
Descriptors: Item Response Theory, Test Wiseness, Response Style (Tests), Reaction Time
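A much simpler cousin of the mixture-model idea, a fixed response-time cutoff for flagging rapid guessing, illustrates why response times are informative (all quantities below are invented for the simulation; the article's actual model estimates class membership rather than applying a cutoff):

```python
import random

random.seed(3)
N = 4000

# Simulate effortful responders (70%) who answer a moderately easy item
# correctly 70% of the time at slower speeds, and rapid guessers (30%)
# who respond near chance (25%) in under a few seconds.
records = []
for _ in range(N):
    if random.random() < 0.7:
        rt = random.uniform(5, 30)      # effortful: slower responses
        correct = random.random() < 0.70
    else:
        rt = random.uniform(0.5, 3)     # rapid guessing: very fast
        correct = random.random() < 0.25
    records.append((rt, correct))

THRESHOLD = 4.0  # seconds; a hypothetical rapid-guessing cutoff

all_p = sum(c for _, c in records) / N
effortful = [(rt, c) for rt, c in records if rt >= THRESHOLD]
eff_p = sum(c for _, c in effortful) / len(effortful)

print(f"Item p-value, all responses:       {all_p:.3f}")
print(f"Item p-value, effortful responses: {eff_p:.3f}")
```

Mixing in rapid guesses biases the item's difficulty estimate downward; filtering (or modeling) them recovers a p-value closer to the effortful population's true value.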
Peer reviewed
Dueber, David M.; Love, Abigail M. A.; Toland, Michael D.; Turner, Trisha A. – Educational and Psychological Measurement, 2019
One of the most frequently cited methodological issues concerns the response format, which is traditionally a single-response Likert format. Therefore, our study aims to elucidate and illustrate an alternative response format and analytic technique, Thurstonian item response theory (IRT), for analyzing data from surveys using an alternate response…
Descriptors: Item Response Theory, Surveys, Measurement Techniques, Psychometrics
Peer reviewed
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
Peer reviewed
Engelhard, George, Jr.; Rabbitt, Matthew P.; Engelhard, Emily M. – Educational and Psychological Measurement, 2018
This study focuses on model-data fit with a particular emphasis on household-level fit within the context of measuring household food insecurity. Household fit indices are used to examine the psychometric quality of household-level measures of food insecurity. In the United States, measures of food insecurity are commonly obtained from the U.S.…
Descriptors: Food, Hunger, Psychometrics, Low Income Groups
Peer reviewed
Cain, Meghan K.; Zhang, Zhiyong; Bergeman, C. S. – Educational and Psychological Measurement, 2018
This article serves as a practical guide to mediation design and analysis by evaluating the ability of mediation models to detect a significant mediation effect using limited data. The cross-sectional mediation model, which has been shown to be biased when the mediation is happening over time, is compared with longitudinal mediation models:…
Descriptors: Mediation Theory, Case Studies, Longitudinal Studies, Measurement Techniques
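The product-of-coefficients logic shared by these mediation models can be sketched in a few lines (a cross-sectional toy example with invented path values; the b path is estimated via Frisch-Waugh residualization rather than a multiple-regression routine):

```python
import random

random.seed(1)
n = 5000

def slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Generate data with known paths: X -> M (a = 0.5), M -> Y (b = 0.4),
# plus a direct effect X -> Y (c' = 0.2).
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [0.2 * xi + 0.4 * mi + random.gauss(0, 1) for xi, mi in zip(x, m)]

a = slope(x, m)                      # path X -> M

# Frisch-Waugh: the partial slope b equals the slope of the
# Y-on-X residuals against the M-on-X residuals.
res_m = [mi - a * xi for mi, xi in zip(m, x)]
c_total = slope(x, y)                # total effect of X on Y
res_y = [yi - c_total * xi for yi, xi in zip(y, x)]
b = slope(res_m, res_y)              # path M -> Y, controlling for X

indirect = a * b                     # mediated effect, ~0.5 * 0.4 = 0.2
print(f"a = {a:.3f}, b = {b:.3f}, indirect effect = {indirect:.3f}")
```

The bias the article examines arises when a mediation process unfolding over time is estimated from a single cross-section like this one, which is why the longitudinal alternatives are compared.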
Peer reviewed
Lin, Yin; Brown, Anna – Educational and Psychological Measurement, 2017
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
Descriptors: Personality Measures, Measurement Techniques, Context Effect, Test Items
Peer reviewed
Li, Wei; Konstantopoulos, Spyros – Educational and Psychological Measurement, 2017
Field experiments in education frequently assign entire groups, such as schools, to treatment or control conditions. These experiments sometimes incorporate a longitudinal component in which, for example, students are followed over time to assess differences in the average rate of linear change or the rate of acceleration. In this study, we provide methods…
Descriptors: Educational Experiments, Field Studies, Models, Randomized Controlled Trials
Peer reviewed
Huang, Francis L.; Cornell, Dewey G. – Educational and Psychological Measurement, 2016
Bullying among youth is recognized as a serious student problem, especially in middle school. The most common approach to measuring bullying is through student self-report surveys that ask questions about different types of bullying victimization. Although prior studies have shown that question-order effects may influence participant responses, no…
Descriptors: Victims of Crime, Bullying, Middle School Students, Measures (Individuals)
Peer reviewed
Andrich, David – Educational and Psychological Measurement, 2016
This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…
Descriptors: Statistics, Item Response Theory, Rating Scales, Mathematical Models
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Tong, Bing – Educational and Psychological Measurement, 2016
A latent variable modeling procedure is discussed that can be used to test if two or more homogeneous multicomponent instruments with distinct components are measuring the same underlying construct. The method is widely applicable in scale construction and development research and can also be of special interest in construct validation studies.…
Descriptors: Models, Statistical Analysis, Measurement Techniques, Factor Analysis