Showing 1 to 15 of 248 results
Peer reviewed
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
Peer reviewed
Thompson, Yutian T.; Song, Hairong; Shi, Dexin; Liu, Zhengkui – Educational and Psychological Measurement, 2021
Conventional approaches for selecting a reference indicator (RI) could lead to misleading results in testing for measurement invariance (MI). Several newer quantitative methods have been available for more rigorous RI selection. However, it is still unknown how well these methods perform in terms of correctly identifying a truly invariant item to…
Descriptors: Measurement, Statistical Analysis, Selection, Comparative Analysis
Peer reviewed
Weigl, Klemens; Forstner, Thomas – Educational and Psychological Measurement, 2021
Paper-based visual analogue scale (VAS) items were developed 100 years ago. Although they gained great popularity in clinical and medical research for assessing pain, they have been scarcely applied in other areas of psychological research for several decades. However, since the beginning of digitization, VAS have attracted growing interest among…
Descriptors: Test Construction, Visual Measures, Gender Differences, Foreign Countries
Peer reviewed
Mansolf, Maxwell; Vreeker, Annabel; Reise, Steven P.; Freimer, Nelson B.; Glahn, David C.; Gur, Raquel E.; Moore, Tyler M.; Pato, Carlos N.; Pato, Michele T.; Palotie, Aarno; Holm, Minna; Suvisaari, Jaana; Partonen, Timo; Kieseppä, Tuula; Paunio, Tiina; Boks, Marco; Kahn, René; Ophoff, Roel A.; Bearden, Carrie E.; Loohuis, Loes Olde; Teshiba, Terri; deGeorge, Daniella; Bilder, Robert M. – Educational and Psychological Measurement, 2020
Large-scale studies spanning diverse project sites, populations, languages, and measurements are increasingly important to relate psychological to biological variables. National and international consortia already are collecting and executing mega-analyses on aggregated data from individuals, with different measures on each person. In this…
Descriptors: Item Response Theory, Data Analysis, Measurement, Validity
Peer reviewed
Beauducel, André; Kersting, Martin – Educational and Psychological Measurement, 2020
We investigated by means of a simulation study how well methods for factor rotation can identify a two-facet simple structure. Samples were generated from orthogonal and oblique two-facet population factor models with 4 (2 factors per facet) to 12 factors (6 factors per facet). Samples drawn from orthogonal populations were submitted to factor…
Descriptors: Factor Structure, Factor Analysis, Sample Size, Intelligence
Peer reviewed
Sideridis, Georgios D.; Tsaousis, Ioannis; Alamri, Abeer A. – Educational and Psychological Measurement, 2020
The main aim of the present study is to use the Bayesian structural equation modeling (BSEM) methodology for establishing approximate measurement invariance (A-MI), using data from a national examination in Saudi Arabia, as an alternative when strong invariance criteria are not met. Instead, we illustrate how to account for the absence of…
Descriptors: Bayesian Statistics, Structural Equation Models, Foreign Countries, Error of Measurement
Peer reviewed
Huang, Hung-Yu – Educational and Psychological Measurement, 2020
In educational assessments and achievement tests, test developers and administrators commonly assume that test-takers attempt all test items with full effort and leave no blank responses or unplanned missing values. However, aberrant response behavior--such as performance decline, dropping out beyond a certain point, and skipping certain items…
Descriptors: Item Response Theory, Response Style (Tests), Test Items, Statistical Analysis
Peer reviewed
Ranger, Jochen; Kuhn, Jörg Tobias; Ortner, Tuulia M. – Educational and Psychological Measurement, 2020
The hierarchical model of van der Linden is the most popular model for responses and response times in tests. It is composed of two separate submodels--one for the responses and one for the response times--that are joined at a higher level. The submodel for the response times is based on the lognormal distribution. The lognormal distribution is a…
Descriptors: Reaction Time, Tests, Statistical Distributions, Models
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
Peer reviewed
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-Sadaawi, Abdullah – Educational and Psychological Measurement, 2019
The purpose of the present study was to apply the methodology developed by Raykov for modeling item-specific variance to the measurement of internal consistency reliability with longitudinal data. Participants were a randomly selected sample of 500 individuals who took a professional qualifications test in Saudi Arabia over four different…
Descriptors: Test Reliability, Test Items, Longitudinal Studies, Foreign Countries
Peer reviewed
Sachse, Karoline A.; Mahler, Nicole; Pohl, Steffi – Educational and Psychological Measurement, 2019
Mechanisms causing item nonresponses in large-scale assessments are often said to be nonignorable. Parameter estimates can be biased if nonignorable missing data mechanisms are not adequately modeled. In trend analyses, it is plausible for the missing data mechanism and the percentage of missing values to change over time. In this article, we…
Descriptors: International Assessment, Response Style (Tests), Achievement Tests, Foreign Countries
Peer reviewed
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
Peer reviewed
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Dimitrov, Dimiter M.; Li, Tatyana – Educational and Psychological Measurement, 2018
This article extends the procedure outlined in the article by Raykov, Marcoulides, and Tong for testing congruence of latent constructs to the setting of binary items and clustering effects. In this widely used setting in contemporary educational and psychological research, the method can be used to examine if two or more homogeneous…
Descriptors: Tests, Psychometrics, Test Items, Construct Validity
Peer reviewed
Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2018
The maximal reliability of a congeneric measure is achieved by weighting item scores to form the optimal linear combination as the total score; it is never lower than the composite reliability of the measure when measurement errors are uncorrelated. The statistical method that renders maximal reliability would also lead to maximal criterion…
Descriptors: Test Reliability, Test Validity, Comparative Analysis, Attitude Measures