Showing 106 to 120 of 3,822 results
Peer reviewed
Sideridis, Georgios D.; Tsaousis, Ioannis; Alamri, Abeer A. – Educational and Psychological Measurement, 2020
The present study applies the Bayesian structural equation modeling (BSEM) methodology for establishing approximate measurement invariance (A-MI) to data from a national examination in Saudi Arabia, as an alternative when strong invariance criteria are not met. We illustrate how to account for the absence of…
Descriptors: Bayesian Statistics, Structural Equation Models, Foreign Countries, Error of Measurement
Peer reviewed
Fujimoto, Ken A.; Neugebauer, Sabina R. – Educational and Psychological Measurement, 2020
Although item response theory (IRT) models such as the bifactor, two-tier, and between-item-dimensionality IRT models have been devised to confirm complex dimensional structures in educational and psychological data, they can be challenging to use in practice. The reason is that these models are multidimensional IRT (MIRT) models and thus are…
Descriptors: Bayesian Statistics, Item Response Theory, Sample Size, Factor Structure
Peer reviewed
Feuerstahler, Leah M.; Waller, Niels; MacDonald, Angus, III – Educational and Psychological Measurement, 2020
Although item response models have grown in popularity in many areas of educational and psychological assessment, there are relatively few applications of these models in experimental psychopathology. In this article, we explore the use of item response models in the context of a computerized cognitive task designed to assess visual working memory…
Descriptors: Item Response Theory, Psychopathology, Intelligence Tests, Psychological Evaluation
Peer reviewed
Kim, Jinho; Wilson, Mark – Educational and Psychological Measurement, 2020
This study investigates polytomous item explanatory item response theory models under the multivariate generalized linear mixed modeling framework, using the linear logistic test model approach. Building on the original ideas of the many-facet Rasch model and the linear partial credit model, a polytomous Rasch model is extended to the item…
Descriptors: Item Response Theory, Test Items, Models, Responses
Peer reviewed
Goretzko, David; Heumann, Christian; Bühner, Markus – Educational and Psychological Measurement, 2020
Exploratory factor analysis is a statistical method commonly used in psychological research to investigate latent variables and to develop questionnaires. Although such self-report questionnaires are prone to missing values, there is not much literature on this topic with regard to exploratory factor analysis--and especially the process of factor…
Descriptors: Factor Analysis, Data Analysis, Research Methodology, Psychological Studies
Peer reviewed
Liu, Yue; Cheng, Ying; Liu, Hongyun – Educational and Psychological Measurement, 2020
The responses of non-effortful test-takers may have serious consequences, as they can impair model calibration and latent trait inferences. This article introduces a mixture model, using both response accuracy and response time information, to help differentiate non-effortful from effortful individuals and to improve item…
Descriptors: Item Response Theory, Test Wiseness, Response Style (Tests), Reaction Time
Peer reviewed
Walker, Cindy M.; Göçer Sahin, Sakine – Educational and Psychological Measurement, 2020
The purpose of this study was to investigate a new way of evaluating interrater reliability that allows one to determine whether two raters differ with respect to their ratings on a polytomous rating scale or constructed-response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and compared…
Descriptors: Test Bias, Interrater Reliability, Responses, Correlation
Peer reviewed
Mansolf, Maxwell; Vreeker, Annabel; Reise, Steven P.; Freimer, Nelson B.; Glahn, David C.; Gur, Raquel E.; Moore, Tyler M.; Pato, Carlos N.; Pato, Michele T.; Palotie, Aarno; Holm, Minna; Suvisaari, Jaana; Partonen, Timo; Kieseppä, Tuula; Paunio, Tiina; Boks, Marco; Kahn, René; Ophoff, Roel A.; Bearden, Carrie E.; Loohuis, Loes Olde; Teshiba, Terri; deGeorge, Daniella; Bilder, Robert M. – Educational and Psychological Measurement, 2020
Large-scale studies spanning diverse project sites, populations, languages, and measurements are increasingly important for relating psychological to biological variables. National and international consortia are already collecting and executing mega-analyses on data aggregated from individuals, with different measures on each person. In this…
Descriptors: Item Response Theory, Data Analysis, Measurement, Validity
Peer reviewed
Jang, Yoonsun; Cohen, Allan S. – Educational and Psychological Measurement, 2020
A nonconverged Markov chain can potentially lead to invalid inferences about model parameters. The purpose of this study was to assess the effect of a nonconverged Markov chain on the estimation of parameters for mixture item response theory models using a Markov chain Monte Carlo algorithm. A simulation study was conducted to investigate the…
Descriptors: Markov Processes, Item Response Theory, Accuracy, Inferences
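The convergence check that motivates the Jang and Cohen abstract is commonly operationalized with the Gelman-Rubin potential scale reduction factor (R-hat). A minimal sketch in plain Python follows; this is an illustrative implementation of the standard diagnostic, not the authors' own code, and real MCMC software typically uses a refined split-chain variant.

```python
from math import sqrt
from statistics import mean, variance

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n.

    Values near 1.0 suggest the chains have mixed; values well above
    1.0 flag a nonconverged chain whose draws should not be trusted
    for parameter inference.
    """
    m, n = len(chains), len(chains[0])
    chain_means = [mean(c) for c in chains]
    grand = mean(chain_means)
    # Between-chain variance of the chain means, scaled by chain length.
    b = n * sum((cm - grand) ** 2 for cm in chain_means) / (m - 1)
    # Within-chain variance: average of the per-chain sample variances.
    w = mean(variance(c) for c in chains)
    # Pooled posterior-variance estimate, then the scale reduction factor.
    var_hat = (n - 1) / n * w + b / n
    return sqrt(var_hat / w)
```

Chains sampling the same distribution give R-hat near 1, while chains stuck in different regions give a large value, which is exactly the situation whose effect on mixture IRT parameter estimates the study investigates.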
Peer reviewed
Olvera Astivia, Oscar Lorenzo; Kroc, Edward; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
Simulation studies concerning the distributional assumptions of coefficient alpha have produced contradictory results. To provide a more principled theoretical framework, this article relies on the Fréchet-Hoeffding bounds to show that the distributions of the items play a role in the estimation of correlations and covariances. More specifically, these bounds…
Descriptors: Test Items, Test Reliability, Computation, Correlation
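The bounds the Olvera Astivia et al. abstract invokes can be made concrete in the simplest case of two binary items: the Fréchet-Hoeffding bounds on the joint probability translate directly into an attainable range for the inter-item correlation. The sketch below is an illustration of that textbook special case, not code from the article.

```python
from math import sqrt

def phi_bounds(p, q):
    """Attainable range of the Pearson (phi) correlation between two
    Bernoulli items with success probabilities p and q.

    The Frechet-Hoeffding bounds constrain the joint probability
    P(X=1, Y=1) to [max(0, p+q-1), min(p, q)], which bounds the
    covariance p11 - p*q and hence the correlation.
    """
    denom = sqrt(p * (1 - p) * q * (1 - q))
    lo = (max(0.0, p + q - 1) - p * q) / denom
    hi = (min(p, q) - p * q) / denom
    return lo, hi
```

With equal marginals (p = q = 0.5) the full range [-1, 1] is attainable, but with skewed marginals such as p = 0.9, q = 0.1 the maximum correlation is far below 1 — the kind of marginal-distribution effect on correlations, and therefore on alpha, that the article formalizes.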
Peer reviewed
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – Educational and Psychological Measurement, 2020
This study compares automated methods to develop short forms of psychometric scales. Obtaining a short form that has both adequate internal structure and strong validity with respect to relationships with other variables is difficult with traditional methods of short-form development. Metaheuristic algorithms can select items for short forms while…
Descriptors: Test Construction, Automation, Heuristics, Mathematics
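The item-selection problem behind the Raborn et al. abstract can be illustrated with a deliberately simple baseline: greedy forward selection of items that maximizes coefficient alpha of the short form. This is a hypothetical sketch of the task, not one of the metaheuristic algorithms the study compares (which also optimize external validity criteria).

```python
from itertools import combinations

def cronbach_alpha(S, idx):
    """Coefficient alpha for the items in idx, given covariance matrix S."""
    k = len(idx)
    total_var = sum(S[i][j] for i in idx for j in idx)  # variance of the sum score
    item_var = sum(S[i][i] for i in idx)
    return k / (k - 1) * (1 - item_var / total_var)

def greedy_short_form(S, length):
    """Forward-select `length` items to (locally) maximize alpha."""
    n = len(S)
    # Seed with the best pair, then grow one item at a time.
    chosen = list(max(combinations(range(n), 2),
                      key=lambda pair: cronbach_alpha(S, pair)))
    while len(chosen) < length:
        rest = [i for i in range(n) if i not in chosen]
        chosen.append(max(rest, key=lambda i: cronbach_alpha(S, chosen + [i])))
    return sorted(chosen)
```

Greedy selection of this kind can stall in local optima, which is one reason metaheuristics (genetic algorithms, ant colony optimization, simulated annealing) are attractive for short-form construction.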
Kara, Yusuf; Kamata, Akihito; Potgieter, Cornelis; Nese, Joseph F. T. – Educational and Psychological Measurement, 2020
Oral reading fluency (ORF), used by teachers and school districts across the country to screen and progress monitor at-risk readers, has been documented as a good indicator of reading comprehension and overall reading competence. In traditional ORF administration, students are given one minute to read a grade-level passage, after which the…
Descriptors: Oral Reading, Reading Fluency, Reading Rate, Accuracy
Peer reviewed
Yang, Lihong; Reckase, Mark D. – Educational and Psychological Measurement, 2020
The present study extended the "p"-optimality method to the multistage computerized adaptive test (MST) context in developing optimal item pools to support different MST panel designs under different test configurations. Using the Rasch model, simulated optimal item pools were generated with and without practical constraints of exposure…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Peer reviewed
Marland, Joshua; Harrick, Matthew; Sireci, Stephen G. – Educational and Psychological Measurement, 2020
Student assessment nonparticipation (or opt out) has increased substantially in K-12 schools in states across the country. This increase in opt out has the potential to impact achievement and growth (or value-added) measures used for educator and institutional accountability. In this simulation study, we investigated the extent to which…
Descriptors: Value Added Models, Teacher Effectiveness, Teacher Evaluation, Elementary Secondary Education
Peer reviewed
LaVoie, Noelle; Parker, James; Legree, Peter J.; Ardison, Sharon; Kilcullen, Robert N. – Educational and Psychological Measurement, 2020
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure…
Descriptors: Semantics, Evaluators, Essays, Scoring
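The scoring stage of an LSA-based approach like the one the LaVoie et al. abstract describes ultimately reduces to vector similarity: a new response is compared with pre-scored anchor responses in a semantic space. The sketch below shows only that cosine-scoring stage with hypothetical function names; full LSA would first build the vectors via an SVD of a large term-document matrix, which is omitted here.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def score_response(response_vec, anchor_vecs, anchor_scores):
    """Score an open-ended response as the similarity-weighted average
    of pre-scored anchor responses (a hypothetical scoring rule)."""
    sims = [max(cosine(response_vec, a), 0.0) for a in anchor_vecs]
    total = sum(sims)
    if total == 0:
        # No overlap with any anchor: fall back to the mean anchor score.
        return sum(anchor_scores) / len(anchor_scores)
    return sum(s * sc for s, sc in zip(sims, anchor_scores)) / total
```

A response whose vector matches a high-scoring anchor inherits a high score, while one lying between anchors receives an intermediate value — the basic mechanism that lets such systems score short, open-ended answers.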