Publication Date

In 2022: 3
Since 2021: 9
Since 2018 (last 5 years): 23
Since 2013 (last 10 years): 36
Since 2003 (last 20 years): 75

Descriptor

Test Reliability: 511
Test Validity: 272
Test Construction: 116
Higher Education: 108
Factor Structure: 74
Correlation: 66
Factor Analysis: 63
Item Analysis: 56
Rating Scales: 55
Test Items: 53
Scores: 52

Source

Educational and Psychological Measurement: 511

Education Level

Higher Education: 13
Postsecondary Education: 7
Elementary Education: 4
Middle Schools: 4
Secondary Education: 4
Grade 4: 3
Junior High Schools: 3
Grade 3: 2
High Schools: 2
Intermediate Grades: 2
Early Childhood Education: 1

Location

Australia: 7
Canada: 6
Taiwan: 3
Germany: 2
Jordan: 2
Netherlands: 2
Norway: 2
Philippines: 2
Saudi Arabia: 2
Belgium: 1
Brazil: 1


Kroc, Edward; Olvera Astivia, Oscar L. – Educational and Psychological Measurement, 2022

Setting cutoff scores is one of the most common practices when using scales to aid in classification. This process is usually done univariately, with each optimal cutoff value decided sequentially, subscale by subscale. While it is widely known that this process necessarily reduces the probability of "passing" such a test,…

Descriptors: Multivariate Analysis, Cutting Scores, Classification, Measurement
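As a concrete illustration of the point this abstract raises (a hypothetical simulation, not the authors' method): requiring examinees to clear a cutoff on every subscale pushes the joint pass rate below any single subscale's pass rate, and under independence it is the product of the marginal rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 3 independent subscales, each with a univariate
# cutoff that about 80% of examinees would pass in isolation.
n, k = 100_000, 3
scores = rng.normal(size=(n, k))            # standard-normal subscale scores
cutoff = np.quantile(scores, 0.20, axis=0)  # per-subscale cutoffs (~80% pass each)

pass_each = scores > cutoff                 # univariate pass/fail per subscale
pass_all = pass_each.all(axis=1)            # must clear every cutoff

print(pass_each.mean(axis=0))  # ~0.80 per subscale
print(pass_all.mean())         # ~0.80**3 = 0.512 under independence
```

With correlated subscales the joint pass rate sits between the product of the marginals and the smallest marginal, which is why a multivariate treatment of cutoffs matters.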

Liu, Xiaowen; Jane Rogers, H. – Educational and Psychological Measurement, 2022

Test fairness is critical to the validity of group comparisons involving gender, ethnicities, culture, or treatment conditions. Detection of differential item functioning (DIF) is one component of efforts to ensure test fairness. The current study compared four treatments for items that have been identified as showing DIF: deleting, ignoring,…

Descriptors: Item Analysis, Comparative Analysis, Culture Fair Tests, Test Validity

Brennan, Robert L.; Kim, Stella Y.; Lee, Won-Chan – Educational and Psychological Measurement, 2022

This article extends multivariate generalizability theory (MGT) to tests with different random-effects designs for each level of a fixed facet. There are numerous situations in which the design of a test and the resulting data structure are not definable by a single design. One example is mixed-format tests that are composed of multiple-choice and…

Descriptors: Multivariate Analysis, Generalizability Theory, Multiple Choice Tests, Test Construction

Ellis, Jules L. – Educational and Psychological Measurement, 2021

This study develops a theoretical model for the costs of an exam as a function of its duration. Two kinds of costs are distinguished: (1) the costs of measurement errors and (2) the costs of the measurement itself. Both costs are expressed in time of the student. Based on a classical test theory model, enriched with assumptions on the context, the costs…

Descriptors: Test Length, Models, Error of Measurement, Measurement
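A minimal sketch of the trade-off this abstract describes, under assumptions of my own (Spearman-Brown reliability growth, a linear time cost, and illustrative constants `rho1` and `c`), not the authors' actual model: lengthening the exam costs student time but shrinks the error penalty, so total cost has an interior minimum.

```python
import numpy as np

rho1 = 0.15   # assumed reliability of a one-minute exam slice (illustrative)
c = 60.0      # assumed cost, in minutes, attached to full unreliability

def reliability(t):
    # Spearman-Brown: reliability of an exam t times the base length
    return t * rho1 / (1 + (t - 1) * rho1)

def total_cost(t):
    # time spent testing + error penalty that shrinks as reliability grows
    return t + c * (1 - reliability(t))

durations = np.arange(1, 121)          # candidate durations in minutes
costs = total_cost(durations)
best = durations[np.argmin(costs)]
print(best, round(costs[best - 1], 2))
```

The optimal duration here balances the marginal minute of testing against the marginal reduction in error cost; different `rho1` and `c` shift the minimum.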

Schulte, Niklas; Holling, Heinz; Bürkner, Paul-Christian – Educational and Psychological Measurement, 2021

Forced-choice questionnaires can prevent faking and other response biases typically associated with rating scales. However, the derived trait scores are often unreliable and ipsative, making interindividual comparisons in high-stakes situations impossible. Several studies suggest that these problems vanish if the number of measured traits is high.…

Descriptors: Questionnaires, Measurement Techniques, Test Format, Scoring
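The ipsativity problem mentioned in this abstract can be shown with a toy example (row-centered scores standing in for fully ipsative forced-choice scoring; not the authors' data or method): when every respondent's trait scores sum to the same constant, the trait covariance matrix is singular.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fully ipsative data: forced-choice scoring distributes a fixed
# total across traits, so every row sums to the same constant. Row-centering
# mimics that constraint here.
n, k = 500, 4
raw = rng.normal(size=(n, k))
ipsative = raw - raw.mean(axis=1, keepdims=True)  # row sums become 0

print(np.allclose(ipsative.sum(axis=1), 0))  # True: constant row sums

# Consequence: the covariance matrix is rank-deficient, so standard
# interindividual comparisons and factor analyses break down.
cov = np.cov(ipsative, rowvar=False)
print(abs(np.linalg.det(cov)) < 1e-8)  # True: singular covariance matrix
```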

Jiang, Zhehan; Shi, Dexin; Distefano, Christine – Educational and Psychological Measurement, 2021

The costs of an objective structured clinical examination (OSCE) are of concern to health profession educators globally. As OSCEs are usually designed under a generalizability theory (G-theory) framework, this article proposes a machine-learning-based approach to optimize the costs while maintaining the minimum required generalizability…

Descriptors: Artificial Intelligence, Generalizability Theory, Objective Tests, Foreign Countries

Foster, Robert C. – Educational and Psychological Measurement, 2021

This article presents some equivalent forms of the common Kuder-Richardson Formula 21 and 20 estimators for nondichotomous data belonging to certain other exponential families, such as Poisson count data, exponential data, or geometric counts of trials until failure. Using the generalized framework of Foster (2020), an equation for the reliability…

Descriptors: Test Reliability, Data, Computation, Mathematical Formulas
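For reference, the classical KR-20 and KR-21 estimators for dichotomous data that this article generalizes can be sketched as follows (the Rasch-like data-generating model is illustrative only, not from the article):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dichotomous (0/1) responses: 200 examinees, 10 items,
# generated from a one-factor logistic model so there is real reliability.
n, k = 200, 10
ability = rng.normal(size=(n, 1))
difficulty = rng.normal(size=k)
X = (rng.random((n, k)) < 1 / (1 + np.exp(difficulty - ability))).astype(float)

total = X.sum(axis=1)
var_total = total.var(ddof=1)

# KR-20 uses each item's proportion-correct p_j ...
p = X.mean(axis=0)
kr20 = k / (k - 1) * (1 - (p * (1 - p)).sum() / var_total)

# ... while KR-21 approximates it from the mean total score alone,
# implicitly treating all items as equally difficult.
m = total.mean()
kr21 = k / (k - 1) * (1 - m * (k - m) / (k * var_total))

print(round(kr20, 3), round(kr21, 3))  # KR-21 <= KR-20 when difficulties vary
```

The article's contribution is extending these formulas beyond Bernoulli items to other exponential families (Poisson, exponential, geometric).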

Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2021

The population discrepancy between unstandardized and standardized reliability of homogeneous multicomponent measuring instruments is examined. Within a latent variable modeling framework, it is shown that the standardized reliability coefficient for unidimensional scales can be markedly higher than the corresponding unstandardized reliability…

Descriptors: Test Reliability, Computation, Measures (Individuals), Research Problems
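The unstandardized-versus-standardized gap can be reproduced in a toy unidimensional example (illustrative loadings and error variances, not the authors' derivation): alpha computed from the covariance matrix diverges from alpha computed from the correlation matrix when item variances differ widely.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical unidimensional data: one common factor with equal loadings
# but very unequal error variances, so item variances differ a lot.
n, k = 2000, 5
factor = rng.normal(size=(n, 1))
err_sd = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
X = factor + rng.normal(size=(n, k)) * err_sd

def alpha(C):
    """Coefficient alpha from a covariance (or correlation) matrix."""
    m = C.shape[0]
    return m / (m - 1) * (1 - np.trace(C) / C.sum())

unstandardized = alpha(np.cov(X, rowvar=False))      # raw covariances
standardized = alpha(np.corrcoef(X, rowvar=False))   # correlations

print(round(unstandardized, 3), round(standardized, 3))
```

Here the standardized coefficient is markedly higher, matching the discrepancy the abstract describes.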

Wyse, Adam E. – Educational and Psychological Measurement, 2021

An essential question when computing test-retest and alternate forms reliability coefficients is how many days there should be between tests. This article uses data from reading and math computerized adaptive tests to explore how the number of days between tests impacts alternate forms reliability coefficients. Results suggest that the highest…

Descriptors: Computer Assisted Testing, Adaptive Testing, Test Reliability, Reading Tests
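A hypothetical sketch of why the interval matters (illustrative drift and error parameters, not the article's data): an alternate-forms coefficient is just the correlation between two administrations, and if the true trait drifts between testings, longer gaps deflate the coefficient.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5000
true_t1 = rng.normal(size=n)  # true trait at first administration

def alt_forms_r(days, drift_per_day=0.02):
    # True trait drifts between administrations (e.g., students learn);
    # each form adds its own measurement error.
    true_t2 = true_t1 + rng.normal(size=n) * np.sqrt(drift_per_day * days)
    form_a = true_t1 + rng.normal(size=n) * 0.5
    form_b = true_t2 + rng.normal(size=n) * 0.5
    return np.corrcoef(form_a, form_b)[0, 1]

for days in (1, 30, 180):
    print(days, round(alt_forms_r(days), 3))  # coefficient shrinks with the gap
```

In practice very short gaps can also inflate coefficients through memory effects, which is part of why the choice of interval is an empirical question.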

Using Differential Item Functioning to Test for Interrater Reliability in Constructed Response Items

Walker, Cindy M.; Göçer Sahin, Sakine – Educational and Psychological Measurement, 2020

The purpose of this study was to investigate a new way of evaluating interrater reliability that can allow one to determine if two raters differ with respect to their rating on a polytomous rating scale or constructed response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and compared…

Descriptors: Test Bias, Interrater Reliability, Responses, Correlation

Olvera Astivia, Oscar Lorenzo; Kroc, Edward; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020

Simulations concerning the distributional assumptions of coefficient alpha are contradictory. To provide a more principled theoretical framework, this article relies on the Fréchet-Hoeffding bounds to show that the distribution of the items plays a role in the estimation of correlations and covariances. More specifically, these bounds…

Descriptors: Test Items, Test Reliability, Computation, Correlation
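The Fréchet-Hoeffding bound for Bernoulli margins makes the abstract's point concrete: two dichotomous items with mismatched difficulties cannot correlate perfectly, which in turn caps covariance-based reliability estimates. This is a standard result, worked here purely for illustration.

```python
import math

# Fréchet-Hoeffding upper bound for two Bernoulli items with success
# probabilities p1 and p2: max cov = min(p1, p2) - p1 * p2, so the maximum
# attainable correlation depends on the item distributions themselves.
def max_corr_bernoulli(p1, p2):
    cov_max = min(p1, p2) - p1 * p2
    return cov_max / math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))

print(round(max_corr_bernoulli(0.5, 0.5), 3))  # 1.0: matched difficulties
print(round(max_corr_bernoulli(0.9, 0.5), 3))  # 0.333: capped well below 1
```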

Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – Educational and Psychological Measurement, 2020

This study compares automated methods to develop short forms of psychometric scales. Obtaining a short form that has both adequate internal structure and strong validity with respect to relationships with other variables is difficult with traditional methods of short-form development. Metaheuristic algorithms can select items for short forms while…

Descriptors: Test Construction, Automation, Heuristics, Mathematics

Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020

Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…

Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
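One widely used preprocessing screen for insufficient effort responding is the longstring index: the longest run of identical consecutive responses in a respondent's answer vector (an illustrative sketch; cutoff thresholds in practice depend on the instrument).

```python
# Longstring index: longest run of identical consecutive responses.
def longstring(responses):
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

attentive = [3, 4, 2, 5, 3, 3, 4, 1, 2, 5]
careless = [3, 3, 3, 3, 3, 3, 3, 2, 4, 3]   # straight-lining pattern

print(longstring(attentive), longstring(careless))  # 2 7
```

A large longstring value flags straight-lining; it is usually combined with other indices (response time, psychometric antonyms) rather than used alone.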

Raykov, Tenko; Marcoulides, George A.; Harrison, Michael; Menold, Natalja – Educational and Psychological Measurement, 2019

This note confronts the common use of a single coefficient alpha as an index informing about reliability of a multicomponent measurement instrument in a heterogeneous population. Two or more alpha coefficients could instead be meaningfully associated with a given instrument in finite mixture settings, and this may be increasingly more likely the…

Descriptors: Statistical Analysis, Test Reliability, Measures (Individuals), Computation
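The multiple-alphas point can be illustrated with a two-class mixture (hypothetical classes and loadings, not the authors' example): a single alpha computed on the pooled sample masks two very different class-specific alphas.

```python
import numpy as np

rng = np.random.default_rng(5)

def cronbach_alpha(X):
    """Coefficient alpha from raw item scores."""
    C = np.cov(X, rowvar=False)
    k = C.shape[0]
    return k / (k - 1) * (1 - np.trace(C) / C.sum())

# Hypothetical two-class mixture: the scale is highly reliable in class 1
# but close to uncorrelated noise in class 2.
n, k = 1500, 6
class1 = rng.normal(size=(n, 1)) + rng.normal(size=(n, k)) * 0.5
class2 = 0.1 * rng.normal(size=(n, 1)) + rng.normal(size=(n, k))

pooled = np.vstack([class1, class2])
print(round(cronbach_alpha(class1), 2),   # high within class 1
      round(cronbach_alpha(class2), 2),   # near zero within class 2
      round(cronbach_alpha(pooled), 2))   # a single, misleading middle value
```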

Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019

This note discusses the merits of coefficient alpha, and the conditions under which they hold, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain empirical circumstances. The article highlights…

Descriptors: Test Validity, Test Reliability, Test Items, Correlation