Showing 1 to 15 of 27 results
Peer reviewed
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur: a task is only briefly skimmed rather than read and engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
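A common baseline approach to the problem this entry describes (not necessarily the authors' method) is to flag a response as a rapid guess when its response time falls below a threshold. A minimal sketch, with a hypothetical threshold:

```python
# Illustrative sketch: flag rapid guesses with a fixed response-time
# threshold. The 3-second cutoff is a hypothetical example value.

def flag_rapid_guesses(response_times, threshold=3.0):
    """Return True for each response time (seconds) below the threshold,
    i.e., each response treated as a rapid guess."""
    return [t < threshold for t in response_times]

times = [1.2, 14.8, 2.9, 30.5, 0.8]
print(flag_rapid_guesses(times))  # [True, False, True, False, True]
```

In practice, thresholds are often set per item (e.g., from the response-time distribution) rather than globally.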
Peer reviewed
Cooperman, Allison W.; Weiss, David J.; Wang, Chun – Educational and Psychological Measurement, 2022
Adaptive measurement of change (AMC) is a psychometric method for measuring intra-individual change on one or more latent traits across testing occasions. Three hypothesis tests--a Z test, likelihood ratio test, and score ratio index--have demonstrated desirable statistical properties in this context, including low false positive rates and high…
Descriptors: Error of Measurement, Psychometrics, Hypothesis Testing, Simulation
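One common form of a Z test for intra-individual change (a sketch of the general idea, not necessarily the authors' exact statistic) compares two latent-trait estimates using their standard errors:

```python
# Illustrative sketch: Z statistic for change between two ability
# estimates across testing occasions, assuming independent errors.
import math

def change_z(theta1, se1, theta2, se2):
    """Z statistic for the difference between two trait estimates."""
    return (theta2 - theta1) / math.sqrt(se1**2 + se2**2)

z = change_z(0.0, 0.3, 0.9, 0.3)  # positive z indicates growth
```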
Peer reviewed
Robie, Chet; Meade, Adam W.; Risavy, Stephen D.; Rasheed, Sabah – Educational and Psychological Measurement, 2022
The effects of different response option orders on survey responses have been studied extensively. The typical research design involves examining the differences in response characteristics between conditions with the same item stems and response option orders that differ in valence--either incrementally arranged (e.g., strongly disagree to…
Descriptors: Likert Scales, Psychometrics, Surveys, Responses
Peer reviewed
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM) that is gaining attention in the field of educational and psychological measurement and large-scale assessments. In a previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…
Descriptors: Item Response Theory, Scoring, True Scores, Scaling
Peer reviewed
Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2017
This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated…
Descriptors: Psychometrics, Test Items, Item Response Theory, Hypothesis Testing
Peer reviewed
Stanley, Leanne M.; Edwards, Michael C. – Educational and Psychological Measurement, 2016
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Descriptors: Test Reliability, Goodness of Fit, Scores, Patients
Peer reviewed
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Peer reviewed
Stone, Gregory Ethan; Koskey, Kristin L. K.; Sondergeld, Toni A. – Educational and Psychological Measurement, 2011
Typical validation studies on standard setting models, most notably the Angoff and modified Angoff models, have ignored construct development, a critical aspect associated with all conceptualizations of measurement processes. Stone compared the Angoff and objective standard setting (OSS) models and found that Angoff failed to define a legitimate…
Descriptors: Cutting Scores, Standard Setting (Scoring), Models, Construct Validity
Peer reviewed
Leite, Walter L.; Svinicki, Marilla; Shi, Yuying – Educational and Psychological Measurement, 2010
The authors examined the dimensionality of the VARK learning styles inventory. The VARK measures four perceptual preferences: visual (V), aural (A), read/write (R), and kinesthetic (K). VARK questions can be viewed as testlets because respondents can select multiple items within a question. The correlations between items within testlets are a type…
Descriptors: Multitrait Multimethod Techniques, Construct Validity, Reliability, Factor Analysis
Peer reviewed
Donnellan, M. Brent – Educational and Psychological Measurement, 2008
The properties of the achievement goal inventories developed by Grant and Dweck (2003) and Elliot and McGregor (2001) were evaluated in two studies with a total of 780 participants. A four-factor specification for the Grant and Dweck inventory did not closely replicate results published in their original report. In contrast, the structure of the…
Descriptors: Academic Achievement, Psychometrics, Program Validation, Achievement Rating
Peer reviewed
Kim, Do-Hong; Huynh, Huynh – Educational and Psychological Measurement, 2008
The current study compared student performance between paper-and-pencil testing (PPT) and computer-based testing (CBT) on a large-scale statewide end-of-course English examination. Analyses were conducted at both the item and test levels. The overall results suggest that scores obtained from PPT and CBT were comparable. However, at the content…
Descriptors: Reading Comprehension, Computer Assisted Testing, Factor Analysis, Comparative Testing
Peer reviewed
Prevatt, Frances; Petscher, Yaacov; Proctor, Briley E.; Hurst, Abigail; Adams, Katharine – Educational and Psychological Measurement, 2006
Two competing structural models for the revised Learning and Study Strategies Inventory (LASSI) were examined. The test developers promote a model related to three uncorrelated components of strategic learning: skill, will, and self-regulation. Other investigators have shown empirical support for a three-factor correlated model characterized by…
Descriptors: College Students, Structural Equation Models, Learning Strategies, Factor Analysis
Peer reviewed
Nevo, Barukh; And Others – Educational and Psychological Measurement, 1975
A two-phase FORTRAN IV program called ITANA-III for an IBM 1130 computer is described that permits computation of psychometric characteristics of multiple-choice examinations including test statistics (phase I) and item statistics (phase II). Consisting of 280 statements, the program can handle up to 200 items with not more than 9 alternatives…
Descriptors: Computer Programs, Input Output, Item Analysis, Multiple Choice Tests
Peer reviewed
Rogers, Paul W. – Educational and Psychological Measurement, 1978
Two procedures for the display of item analysis statistics are described. One procedure allows for investigation of difficulty; the second plots item difficulty against item discrimination. (Author/JKS)
Descriptors: Difficulty Level, Graphs, Guidelines, Item Analysis
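The two statistics such displays typically pair are classical item difficulty (proportion correct) and item discrimination (point-biserial correlation between item score and total score). A minimal sketch of computing both (function names are hypothetical, not from the article):

```python
# Illustrative sketch: classical item difficulty and discrimination
# from a matrix of dichotomous (0/1) item responses.
from statistics import mean, pstdev

def item_statistics(responses):
    """responses: list of examinee rows, each a list of 0/1 item scores.
    Returns a (difficulty, discrimination) pair per item."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    stats = []
    for j in range(n_items):
        item = [row[j] for row in responses]
        difficulty = mean(item)  # proportion of examinees answering correctly
        sd_item, sd_tot = pstdev(item), pstdev(totals)
        if sd_item == 0 or sd_tot == 0:
            stats.append((difficulty, 0.0))  # no variance: undefined, report 0
            continue
        # point-biserial = cov(item, total) / (sd_item * sd_total)
        cov = mean(x * y for x, y in zip(item, totals)) - mean(item) * mean(totals)
        stats.append((difficulty, cov / (sd_item * sd_tot)))
    return stats
```

Plotting difficulty against discrimination, as the second procedure described here does, then amounts to scattering these pairs.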
Peer reviewed
Andrich, David – Educational and Psychological Measurement, 1978
A generalization of the Rasch model is employed to quantify statements on a scale in the Thurstone tradition, as well as to measure a person's attitudes in the Likert tradition. An illustration of the technique is provided. (JKS)
Descriptors: Attitude Measures, Item Analysis, Measurement Techniques, Psychometrics
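For reference, the dichotomous Rasch model that this entry's rating-scale formulation generalizes gives the probability of endorsement as a logistic function of the difference between person parameter theta and item parameter b:

```python
# The dichotomous Rasch model:
# P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))
import math

def rasch_probability(theta, b):
    """Probability of a positive response for person theta and item b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_probability(0.0, 0.0))  # 0.5 when person and item parameters match
```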