Showing 1 to 15 of 791 results
Peer reviewed
Liu, Xiaowen; Jane Rogers, H. – Educational and Psychological Measurement, 2022
Test fairness is critical to the validity of group comparisons involving gender, ethnicity, culture, or treatment conditions. Detection of differential item functioning (DIF) is one component of efforts to ensure test fairness. The current study compared four treatments for items that have been identified as showing DIF: deleting, ignoring,…
Descriptors: Item Analysis, Comparative Analysis, Culture Fair Tests, Test Validity
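The four treatments compared in the abstract above presuppose that DIF items have already been flagged; the snippet does not say how flagging was done. As background only, here is a minimal sketch of the widely used Mantel-Haenszel DIF screen for a single dichotomous item. The input names (item, group, total) and the "ref"/"focal" labels are illustrative assumptions, not the authors' procedure.

    # Minimal sketch of Mantel-Haenszel DIF screening for one dichotomous item.
    import numpy as np

    def mantel_haenszel_dif(item, group, total):
        """item: 0/1 responses; group: 'ref'/'focal' labels; total: matching total scores."""
        item, total, group = np.asarray(item), np.asarray(total), np.asarray(group)
        num, den = 0.0, 0.0
        for s in np.unique(total):                            # stratify on the matching variable
            m = total == s
            a = np.sum(m & (group == "ref") & (item == 1))    # reference group, correct
            b = np.sum(m & (group == "ref") & (item == 0))    # reference group, incorrect
            c = np.sum(m & (group == "focal") & (item == 1))  # focal group, correct
            d = np.sum(m & (group == "focal") & (item == 0))  # focal group, incorrect
            n = a + b + c + d
            if n == 0:
                continue
            num += a * d / n
            den += b * c / n
        alpha_mh = num / den                  # common odds ratio across score strata
        return -2.35 * np.log(alpha_mh)       # ETS delta metric; values near 0 suggest little DIF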
Peer reviewed
Menold, Natalja; Raykov, Tenko – Educational and Psychological Measurement, 2022
The possible dependency of criterion validity on item formulation in a multicomponent measuring instrument is examined. The discussion is concerned with evaluation of the differences in criterion validity between two or more groups (populations/subpopulations) that have been administered instruments with items having differently formulated item…
Descriptors: Test Items, Measures (Individuals), Test Validity, Difficulty Level
Peer reviewed
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
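The abstract above refers to preprocessing procedures for detecting insufficient effort responding without listing them. As background, here is a minimal sketch of two screens commonly used for that purpose: a longstring index and a per-item response-time cutoff. The function names, data layout, and the cutoffs max_run=10 and min_rt=2.0 are illustrative assumptions, not the article's recommendations.

    # Minimal sketch of two common IER screens: longstring runs and fast responding.
    import numpy as np

    def longstring(responses):
        """Length of the longest run of identical consecutive answers for one respondent."""
        longest, run = 1, 1
        for prev, cur in zip(responses, responses[1:]):
            run = run + 1 if cur == prev else 1
            longest = max(longest, run)
        return longest

    def flag_ier(response_matrix, rt_seconds, max_run=10, min_rt=2.0):
        """Flag respondents with a long identical-answer run or a very fast median item time."""
        runs = np.array([longstring(row) for row in response_matrix])
        fast = np.median(np.asarray(rt_seconds), axis=1) < min_rt
        return (runs >= max_run) | fast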
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019
This note discusses the merits of coefficient alpha and the conditions under which they hold, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain circumstances. The article highlights…
Descriptors: Test Validity, Test Reliability, Test Items, Correlation
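Because the note above centers on coefficient alpha, a minimal sketch of how alpha is computed from raw data may help ground the discussion; the function and variable names are illustrative and not tied to the article.

    # Coefficient (Cronbach's) alpha from a respondents-by-items data matrix.
    import numpy as np

    def cronbach_alpha(data):
        """data: 2-D array, rows = respondents, columns = items."""
        data = np.asarray(data, dtype=float)
        k = data.shape[1]                            # number of items
        item_vars = data.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = data.sum(axis=1).var(ddof=1)     # variance of the total score
        return (k / (k - 1)) * (1 - item_vars / total_var)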
Peer reviewed
Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2018
The maximal reliability of a congeneric measure is achieved by weighting item scores to form the optimal linear combination as the total score; it is never lower than the composite reliability of the measure when measurement errors are uncorrelated. The statistical method that renders maximal reliability would also lead to maximal criterion…
Descriptors: Test Reliability, Test Validity, Comparative Analysis, Attitude Measures
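To make the contrast in the abstract above concrete, here is a minimal sketch comparing the composite reliability of a unit-weighted total score with the maximal reliability of an optimally weighted total for a congeneric measure with uncorrelated errors (optimal weights are proportional to loading divided by error variance). The loadings and error variances are made-up illustrative values, not data from the article.

    # Composite vs. maximal reliability for a congeneric measure (factor variance fixed at 1).
    import numpy as np

    loadings = np.array([0.7, 0.6, 0.8, 0.5])      # hypothetical factor loadings
    errors   = np.array([0.51, 0.64, 0.36, 0.75])  # hypothetical error variances

    omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())  # unit-weighted composite
    q = np.sum(loadings ** 2 / errors)
    rho_max = q / (1 + q)                          # optimally weighted (maximal) reliability

    print(f"composite = {omega:.3f}, maximal = {rho_max:.3f}")
    # With uncorrelated errors, maximal reliability is never lower than the composite value.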
Peer reviewed
Miller, Jeff – Educational and Psychological Measurement, 2017
Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
Descriptors: Hypothesis Testing, Testing Problems, Test Validity, Relevance (Education)
Peer reviewed
Hamby, Tyler; Taylor, Wyn – Educational and Psychological Measurement, 2016
This study examined the predictors and psychometric outcomes of survey satisficing, wherein respondents provide quick, "good enough" answers (satisficing) rather than carefully considered answers (optimizing). We administered surveys to university students and respondents--half of whom held college degrees--from a for-pay survey website,…
Descriptors: Surveys, Test Reliability, Test Validity, Comparative Analysis
Peer reviewed
Stanley, Leanne M.; Edwards, Michael C. – Educational and Psychological Measurement, 2016
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Descriptors: Test Reliability, Goodness of Fit, Scores, Patients
Peer reviewed
Sideridis, Georgios D. – Educational and Psychological Measurement, 2016
The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…
Descriptors: Learning Disabilities, Test Validity, Measures (Individuals), Hierarchical Linear Modeling
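The studies above work with item difficulties from the Rasch model. As a reference point, a minimal sketch of the Rasch item response function (the model whose difficulty parameters are at issue) follows; the estimation machinery the studies actually used is not shown here.

    # Rasch model: probability of a correct response given ability theta and item difficulty b.
    import numpy as np

    def rasch_prob(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    # An examinee one logit above the item's difficulty answers correctly about 73% of the time.
    print(rasch_prob(theta=0.5, b=-0.5))   # ~0.731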
Peer reviewed
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Peer reviewed
Paulhus, Delroy L.; Dubois, Patrick J. – Educational and Psychological Measurement, 2014
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
Descriptors: Educational Assessment, Knowledge Level, Accuracy, Cognitive Ability
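The abstract above describes the overclaiming technique as applying signal detection analysis to claims of knowing real items versus nonexistent foils. A minimal sketch of signal detection indices of that general kind follows; the scoring formulas, the 0.5 correction, and the variable names are illustrative assumptions rather than the authors' exact OC-accuracy and OC-bias indices.

    # Signal detection sketch: accuracy from discriminating real items from foils,
    # bias from the overall tendency to claim familiarity.
    import numpy as np
    from scipy.stats import norm

    def overclaiming_indices(claims_real, claims_foil):
        """claims_real / claims_foil: 0/1 arrays of 'I know this' responses to real and foil items."""
        hit_rate = (claims_real.sum() + 0.5) / (len(claims_real) + 1.0)   # correction avoids 0/1 rates
        fa_rate  = (claims_foil.sum() + 0.5) / (len(claims_foil) + 1.0)
        accuracy = norm.ppf(hit_rate) - norm.ppf(fa_rate)         # d'-type knowledge accuracy
        bias     = -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2  # c-type self-enhancement bias
        return accuracy, bias

    acc, bias = overclaiming_indices(np.array([1, 1, 0, 1, 1]), np.array([0, 1, 0, 0, 0]))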
Peer reviewed
Thomas, Troy D.; Abts, Koen; Vander Weyden, Patrick – Educational and Psychological Measurement, 2014
This article investigates the effect of the rural-urban divide on mean response styles (RSs) and their relationships with the sociodemographic characteristics of the respondents. It uses the Representative Indicator Response Style Means and Covariance Structure (RIRSMACS) method and data from Guyana--a developing country in the Caribbean. The…
Descriptors: Rural Urban Differences, Response Style (Tests), Demography, Social Characteristics
Peer reviewed
Keeley, Jared W.; English, Taylor; Irons, Jessica; Henslee, Amber M. – Educational and Psychological Measurement, 2013
Many measurement biases affect student evaluations of instruction (SEIs). However, two have been relatively understudied: halo effects and ceiling/floor effects. This study examined these effects in two ways. To examine the halo effect, using a videotaped lecture, we manipulated specific teacher behaviors to be "good" or "bad"…
Descriptors: Robustness (Statistics), Test Bias, Course Evaluation, Student Evaluation of Teacher Performance
Peer reviewed
Medhanie, Amanuel G.; Dupuis, Danielle N.; LeBeau, Brandon; Harwell, Michael R.; Post, Thomas R. – Educational and Psychological Measurement, 2012
The first college mathematics course a student enrolls in is often affected by performance on a college mathematics placement test. Yet validity evidence of mathematics placement tests remains limited, even for nationally standardized placement tests, and when it is available usually consists of examining a student's subsequent performance in…
Descriptors: College Mathematics, Student Placement, Mathematics Tests, Test Validity
Peer reviewed
Lakin, Joni M.; Lai, Emily R. – Educational and Psychological Measurement, 2012
For educators seeking to differentiate instruction, cognitive ability tests sampling multiple content domains, including verbal, quantitative, and nonverbal reasoning, provide superior information about student strengths and weaknesses compared with unidimensional reasoning measures. However, these ability tests have not been fully evaluated with…
Descriptors: Aptitude Tests, Nonverbal Ability, Cognitive Ability, Verbal Ability