Showing 1 to 15 of 258 results
Peer reviewed
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
Peer reviewed
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM), which is gaining attention in the field of educational and psychological measurement and large-scale assessments. In a previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…
Descriptors: Item Response Theory, Scoring, True Scores, Scaling
Peer reviewed
Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
Equating of psychometric scales and tests is frequently required and conducted in educational, behavioral, and clinical research. Construct comparability or equivalence between measuring instruments is a necessary condition for making decisions about linking and equating resulting scores. This article is concerned with a widely applicable method…
Descriptors: Evaluation Methods, Psychometrics, Screening Tests, Dementia
Peer reviewed
Kara, Yusuf; Kamata, Akihito; Potgieter, Cornelis; Nese, Joseph F. T. – Educational and Psychological Measurement, 2020
Oral reading fluency (ORF), used by teachers and school districts across the country to screen and progress monitor at-risk readers, has been documented as a good indicator of reading comprehension and overall reading competence. In traditional ORF administration, students are given one minute to read a grade-level passage, after which the…
Descriptors: Oral Reading, Reading Fluency, Reading Rate, Accuracy
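The abstract above describes traditional ORF administration: a timed one-minute reading whose score is typically reported as words correct per minute (WCPM). A minimal sketch of that conventional calculation (not the authors' latent-variable model; the function name and inputs are illustrative assumptions):

```python
def words_correct_per_minute(words_attempted: int, errors: int, seconds: float) -> float:
    """Conventional ORF score: words read correctly, scaled to a one-minute rate.

    words_attempted: total words the student read in the timed passage
    errors: words read incorrectly or omitted
    seconds: elapsed reading time (60 for a standard one-minute probe)
    """
    correct = words_attempted - errors
    return correct * 60.0 / seconds

# e.g., 110 words attempted with 10 errors in exactly one minute -> 100.0 WCPM
```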
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019
This note discusses the merits of coefficient alpha and the conditions under which they hold, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain empirical circumstances. The article highlights…
Descriptors: Test Validity, Test Reliability, Test Items, Correlation
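For readers unfamiliar with the statistic under discussion, coefficient alpha is computed from the item variances and the variance of the total score. A minimal sketch of the standard formula (a generic implementation, not code from the article):

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Coefficient (Cronbach's) alpha for an (n_examinees, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of examinee total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

When all items are perfectly correlated, alpha reaches its maximum of 1; uncorrelated items drive it toward 0, which is why alpha is often read as a lower bound on reliability under certain assumptions.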
Peer reviewed
Dueber, David M.; Love, Abigail M. A.; Toland, Michael D.; Turner, Trisha A. – Educational and Psychological Measurement, 2019
One of the most frequently cited methodological issues concerns the response format, which is traditionally a single-response Likert format. Therefore, our study aims to elucidate and illustrate an alternative response format and analytic technique, Thurstonian item response theory (IRT), for analyzing data from surveys using an alternate response…
Descriptors: Item Response Theory, Surveys, Measurement Techniques, Psychometrics
Peer reviewed
Xia, Yan; Green, Samuel B.; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2019
Past research suggests revised parallel analysis (R-PA) tends to yield relatively accurate results in determining the number of factors in exploratory factor analysis. R-PA can be interpreted as a series of hypothesis tests. At each step in the series, a null hypothesis is tested that an additional factor accounts for zero common variance among…
Descriptors: Effect Size, Factor Analysis, Hypothesis Testing, Psychometrics
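The revised parallel analysis (R-PA) studied above builds on Horn's original parallel analysis, which retains a factor only when its observed eigenvalue exceeds the corresponding eigenvalue from random data of the same dimensions. A minimal sketch of that baseline procedure, assuming a complete-data matrix (this illustrates traditional PA, not the revised hypothesis-testing variant examined in the article):

```python
import numpy as np

def parallel_analysis(data, n_sims: int = 100, seed: int = 0):
    """Horn-style parallel analysis on a (n_examinees, p_variables) matrix.

    Returns (n_retained, observed_eigenvalues, mean_random_eigenvalues),
    each eigenvalue array sorted in descending order.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # eigenvalues of the observed correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for s in range(n_sims):
        rand = rng.standard_normal((n, p))  # uncorrelated reference data
        sims[s] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    ref = sims.mean(axis=0)
    # retain factors whose observed eigenvalue exceeds the random reference
    return int(np.sum(obs > ref)), obs, ref
```

Replacing the mean of the simulated eigenvalues with an upper percentile (e.g., the 95th) is a common, more conservative variant.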
Peer reviewed
Dimitrov, Dimiter M.; Luo, Yong – Educational and Psychological Measurement, 2019
An approach to scoring tests with binary items, referred to as D-scoring method, was previously developed as a classical analog to basic models in item response theory (IRT) for binary items. As some tests include polytomous items, this study offers an approach to D-scoring of such items and parallels the results with those obtained under the…
Descriptors: Scoring, Test Items, Item Response Theory, Psychometrics
Peer reviewed
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that controls for situations in which the method effects stem from two method sources, one of which functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Dimitrov, Dimiter M.; Li, Tatyana – Educational and Psychological Measurement, 2018
This article extends the procedure outlined in the article by Raykov, Marcoulides, and Tong for testing congruence of latent constructs to the setting of binary items and clustering effects. In this widely used setting in contemporary educational and psychological research, the method can be used to examine if two or more homogeneous…
Descriptors: Tests, Psychometrics, Test Items, Construct Validity
Peer reviewed
Liu, Ren; Huggins-Manley, Anne Corinne; Bulut, Okan – Educational and Psychological Measurement, 2018
Developing a diagnostic tool within the diagnostic measurement framework is the optimal approach to obtain multidimensional and classification-based feedback on examinees. However, end users may seek to obtain diagnostic feedback from existing item responses to assessments that have been designed under either the classical test theory or item…
Descriptors: Models, Item Response Theory, Psychometrics, Test Construction
Peer reviewed
Preston, Kathleen Suzanne Johnson; Gottfried, Allen W.; Park, Jonathan J.; Manapat, Patrick Don; Gottfried, Adele Eskeles; Oliver, Pamella H. – Educational and Psychological Measurement, 2018
Measurement invariance is a prerequisite when comparing different groups of individuals or when studying a group of individuals across time; it ensures that the same construct is assessed without measurement artifacts. This investigation applied a novel approach of simultaneous parameter linking to cross-sectional and longitudinal measures of…
Descriptors: Longitudinal Studies, Family Relationship, Measurement, Measures (Individuals)
Peer reviewed
Engelhard, George, Jr.; Rabbitt, Matthew P.; Engelhard, Emily M. – Educational and Psychological Measurement, 2018
This study focuses on model-data fit with a particular emphasis on household-level fit within the context of measuring household food insecurity. Household fit indices are used to examine the psychometric quality of household-level measures of food insecurity. In the United States, measures of food insecurity are commonly obtained from the U.S.…
Descriptors: Food, Hunger, Psychometrics, Low Income Groups
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
Peer reviewed
Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2017
This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated…
Descriptors: Psychometrics, Test Items, Item Response Theory, Hypothesis Testing