50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 16 to 30 of 3,486 results
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2014
This research note contributes to the discussion of methods that can be used to identify useful auxiliary variables for analyses of incomplete data sets. A latent variable approach is discussed, which is helpful in finding auxiliary variables with the property that if included in subsequent maximum likelihood analyses they may enhance considerably…
Descriptors: Data Analysis, Identification, Maximum Likelihood Statistics, Statistical Analysis
Peer reviewed
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
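The Mantel-Haenszel test mentioned in this abstract compares item performance for a reference and a focal group within strata matched on total score. A minimal sketch of the standard statistic (the example tables are invented for illustration, not taken from the study):

```python
import math

def mantel_haenszel_odds_ratio(tables):
    """Common odds ratio across K matched-score strata.

    Each table is (a, b, c, d):
      a = reference group correct,  b = reference group incorrect,
      c = focal group correct,      d = focal group incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def mh_d_dif(tables):
    """ETS delta-scale transform; |values| beyond 1.5 are
    conventionally flagged as large DIF."""
    return -2.35 * math.log(mantel_haenszel_odds_ratio(tables))

# Two score strata; a ratio near 1 (delta near 0) indicates little DIF.
tables = [(40, 10, 35, 15), (20, 30, 18, 32)]
print(mantel_haenszel_odds_ratio(tables), mh_d_dif(tables))
```

As the abstract notes, this procedure only detects DIF across the *observed* grouping variable; the authors' contribution concerns latent sources of DIF, which this conventional statistic cannot separate.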
Peer reviewed
Attali, Yigal – Educational and Psychological Measurement, 2014
This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank-orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Descriptors: Responses, Item Response Theory, Scores, Rating Scales
Peer reviewed
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas – Educational and Psychological Measurement, 2014
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
Descriptors: Sampling, Test Items, Effect Size, Scaling
Peer reviewed
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, it is still debated how to measure and control for response styles such as extreme responding. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
Peer reviewed
Lin, Pei-Ying; Lin, Yu-Cheng – Educational and Psychological Measurement, 2014
This exploratory study investigated potential sources of setting accommodation resulting in differential item functioning (DIF) on math and reading assessments for examinees with varied learning characteristics. The examinees were those who participated in large-scale assessments and were tested in either standardized or accommodated testing…
Descriptors: Test Bias, Multivariate Analysis, Testing Accommodations, Mathematics Tests
Peer reviewed
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack – Educational and Psychological Measurement, 2014
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Descriptors: Structural Equation Models, Brain Hemisphere Functions, Simulation, Models
Peer reviewed
Wiley, Edward W.; Shavelson, Richard J.; Kurpius, Amy A. – Educational and Psychological Measurement, 2014
The name "SAT" has become synonymous with college admissions testing; it has been dubbed "the gold standard." Numerous studies on its reliability and predictive validity show that the SAT predicts college performance beyond high school grade point average. Surprisingly, studies of the factorial structure of the current version…
Descriptors: College Readiness, College Admission, College Entrance Examinations, Factor Analysis
Peer reviewed
Padilla, Miguel A.; Veprinsky, Anna – Educational and Psychological Measurement, 2014
Correlation attenuation due to measurement error and a corresponding correction, the deattenuated correlation, have been known for over a century. Nevertheless, the deattenuated correlation remains underutilized. A few studies in recent years have investigated factors affecting the deattenuated correlation, and a couple of them provide alternative…
Descriptors: Correlation, Sampling, Statistical Inference, Computation
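The century-old correction for attenuation this abstract refers to has a simple closed form: the observed correlation divided by the square root of the product of the two reliabilities. A minimal sketch (the numeric values are illustrative, not from the study):

```python
import math

def deattenuated_correlation(r_xy: float, r_xx: float, r_yy: float) -> float:
    """Correct an observed correlation for measurement error.

    r_xy: observed correlation between measures X and Y
    r_xx, r_yy: reliability estimates (e.g., coefficient alpha) of X and Y
    """
    if not (0 < r_xx <= 1 and 0 < r_yy <= 1):
        raise ValueError("reliabilities must lie in (0, 1]")
    return r_xy / math.sqrt(r_xx * r_yy)

# An observed correlation of .30 between scales with reliabilities
# .70 and .80 implies a true-score correlation of about .40.
print(round(deattenuated_correlation(0.30, 0.70, 0.80), 2))
```

Because the correction divides by estimated reliabilities, sampling error in those estimates propagates into the corrected value, which is why inference for the deattenuated correlation remains an active topic.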
Peer reviewed
Liu, Min; Hancock, Gregory R. – Educational and Psychological Measurement, 2014
Growth mixture modeling has gained much attention in applied and methodological social science research recently, but the selection of the number of latent classes for such models remains a challenging issue, especially when the assumption of proper model specification is violated. The current simulation study compared the performance of a linear…
Descriptors: Models, Classification, Simulation, Comparative Analysis
Peer reviewed
Ferrando, Pere J. – Educational and Psychological Measurement, 2014
Item response theory (IRT) models allow model-data fit to be assessed at the individual level by using person-fit indices. This assessment is also feasible when IRT is used to model test-retest data. However, person-fit developments for this type of modeling are virtually nonexistent. This article proposes a general person-fit approach for…
Descriptors: Item Response Theory, Goodness of Fit, Statistical Analysis, Likert Scales
Peer reviewed
Okumura, Taichi – Educational and Psychological Measurement, 2014
This study examined the empirical differences between the tendency to omit items and reading ability by applying tree-based item response (IRTree) models to the Japanese data of the Programme for International Student Assessment (PISA) held in 2009. For this purpose, existing IRTree models were expanded to contain predictors and to handle…
Descriptors: Foreign Countries, Item Response Theory, Test Items, Reading Ability
Peer reviewed
Huggins, Anne Corinne – Educational and Psychological Measurement, 2014
Invariant relationships in the internal mechanisms of estimating achievement scores on educational tests serve as the basis for concluding that a particular test is fair with respect to statistical bias concerns. Equating invariance and differential item functioning are both concerned with invariant relationships yet are treated separately in the…
Descriptors: Test Bias, Test Items, Equated Scores, Achievement Tests
Peer reviewed
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically a longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores
Peer reviewed
He, Wei; Diao, Qi; Hauser, Carl – Educational and Psychological Measurement, 2014
This study compared four item-selection procedures developed for use with severely constrained computerized adaptive tests (CATs). Severely constrained CATs refer to those adaptive tests that seek to meet a complex set of constraints that are often not conclusive to each other (i.e., an item may contribute to the satisfaction of several…
Descriptors: Comparative Analysis, Test Items, Selection, Computer Assisted Testing