Showing 1 to 15 of 251 results
Peer reviewed
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
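As background for the IRTree approach referenced above: a response tree decomposes each Likert response into sequential pseudo-items, typically a direction node and an extremity node, and the mixture extension lets latent classes differ in the process generating those nodes. A minimal sketch of one common pseudo-item coding for a 4-point scale follows; the coding and names are illustrative, not the authors' exact specification.

    # Pseudo-item coding for a 4-point Likert item
    # (1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree).
    # Node 1: direction of the response (0 = disagree side, 1 = agree side).
    # Node 2: extremity (1 = an extreme category was chosen).
    # Illustrative only; not the authors' model specification.

    def to_pseudo_items(response):
        if response not in (1, 2, 3, 4):
            raise ValueError("response must be in 1..4")
        direction = 1 if response >= 3 else 0
        extremity = 1 if response in (1, 4) else 0
        return direction, extremity

    for r in (1, 2, 3, 4):
        print(r, to_pseudo_items(r))  # 1 -> (0, 1), 2 -> (0, 0), 3 -> (1, 0), 4 -> (1, 1)

Each pseudo-item is then fitted with its own item response model; the extremity node is where extreme response style is located.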
Peer reviewed
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
Peer reviewed
Matlock Cole, Ki Lynn; Turner, Ronna C.; Gitchel, W. Dent – Educational and Psychological Measurement, 2018
The generalized partial credit model (GPCM) is often used for polytomous data; however, the nominal response model (NRM) allows for the investigation of how adjacent categories may discriminate differently when items are positively or negatively worded. Ten items from three different self-reported scales were used (anxiety, depression, and…
Descriptors: Item Response Theory, Anxiety, Depression (Psychology), Self Evaluation (Individuals)
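The contrast drawn in the entry above comes down to the category-slope structure of the two models. In standard notation (ours, not the authors'), for an item j with response categories k = 0, ..., m_j:

    % Generalized partial credit model: one slope a_j per item
    P(X_j = k \mid \theta) =
      \frac{\exp \sum_{v=1}^{k} a_j (\theta - b_{jv})}
           {\sum_{c=0}^{m_j} \exp \sum_{v=1}^{c} a_j (\theta - b_{jv})},
      \qquad \text{with } \textstyle\sum_{v=1}^{0} (\cdot) \equiv 0.

    % Nominal response model: a free slope and intercept per category
    P(X_j = k \mid \theta) =
      \frac{\exp (a_{jk}\theta + c_{jk})}
           {\sum_{c=0}^{m_j} \exp (a_{jc}\theta + c_{jc})}.

Because the NRM slopes a_{jk} are free across categories, adjacent categories can discriminate differently, which is what permits the comparison of positively and negatively worded items described in the abstract; the GPCM constrains that slope to a single a_j.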
Peer reviewed
Jia, Yuane; Konold, Timothy R.; Cornell, Dewey; Huang, Francis – Educational and Psychological Measurement, 2018
Self-report surveys are widely used to measure adolescent risk behavior and academic adjustment, with results having an impact on national policy, assessment of school quality, and evaluation of school interventions. However, data obtained from self-reports can be distorted when adolescents intentionally provide inaccurate or careless responses.…
Descriptors: Surveys, Self Disclosure (Individuals), Adolescents, High School Students
Peer reviewed
Wind, Stefanie A.; Jones, Eli – Educational and Psychological Measurement, 2018
Previous research includes frequent admonitions regarding the importance of establishing connectivity in data collection designs prior to the application of Rasch models. However, details regarding the influence of characteristics of the linking sets used to establish connections among facets, such as locations on the latent variable, model-data…
Descriptors: Data Collection, Goodness of Fit, Computation, Networks
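The connectivity requirement noted above can be screened before Rasch estimation by treating facet elements (say, raters and items in the linking set) as nodes of a graph and each scored rater-item pair as an edge; a common scale is only possible when the graph has a single component. A minimal union-find sketch with made-up data, not the authors' procedure:

    # Check whether a rater-by-item scoring design is connected,
    # i.e., whether all raters and items can be placed on one scale.
    # Data and names are illustrative.

    def is_connected(pairs):
        """pairs: iterable of (rater, item) tuples that were actually scored."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        for rater, item in pairs:
            union(("rater", rater), ("item", item))

        roots = {find(node) for node in list(parent)}
        return len(roots) <= 1

    design = [("R1", "I1"), ("R1", "I2"), ("R2", "I2"), ("R3", "I3")]
    print(is_connected(design))  # False: R3 and I3 are not linked to the rest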
Peer reviewed
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Peer reviewed
Andrich, David – Educational and Psychological Measurement, 2016
This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…
Descriptors: Statistics, Item Response Theory, Rating Scales, Mathematical Models
Peer reviewed
Attali, Yigal – Educational and Psychological Measurement, 2014
This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Descriptors: Responses, Item Response Theory, Scores, Rating Scales
Peer reviewed
Jin, Kuan-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
Extreme response style (ERS) is a systematic tendency for a person to endorse extreme options (e.g., strongly disagree, strongly agree) on Likert-type or rating-scale items. In this study, we develop a new class of item response theory (IRT) models to account for ERS so that the target latent trait is free from the response style and the tendency…
Descriptors: Item Response Theory, Research Methodology, Bayesian Statistics, Response Style (Tests)
Peer reviewed
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, it is still debated how to measure and control for response styles such as extreme responding. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
Peer reviewed
Thomas, Troy D.; Abts, Koen; Vander Weyden, Patrick – Educational and Psychological Measurement, 2014
This article investigates the effect of the rural-urban divide on mean response styles (RSs) and their relationships with the sociodemographic characteristics of the respondents. It uses the Representative Indicator Response Style Means and Covariance Structure (RIRSMACS) method and data from Guyana--a developing country in the Caribbean. The…
Descriptors: Rural Urban Differences, Response Style (Tests), Demography, Social Characteristics
Peer reviewed
Raykov, Tenko; Pohl, Steffi – Educational and Psychological Measurement, 2013
A procedure for examining essential unidimensionality in multicomponent measuring instruments is discussed. The method is based on an application of latent variable modeling and is concerned with the extent to which a common factor for all components of a given scale accounts for their correlations. The approach provides point and interval…
Descriptors: Measures (Individuals), Statistical Analysis, Factor Structure, Correlation
Peer reviewed
Keeley, Jared W.; English, Taylor; Irons, Jessica; Henslee, Amber M. – Educational and Psychological Measurement, 2013
Many measurement biases affect student evaluations of instruction (SEIs). However, two have been relatively understudied: halo effects and ceiling/floor effects. This study examined these effects in two ways. To examine the halo effect, using a videotaped lecture, we manipulated specific teacher behaviors to be "good" or "bad"…
Descriptors: Robustness (Statistics), Test Bias, Course Evaluation, Student Evaluation of Teacher Performance
Peer reviewed
Zhu, Xiaowen; Stone, Clement A. – Educational and Psychological Measurement, 2012
This study examined the relative effectiveness of Bayesian model comparison methods in selecting an appropriate graded response (GR) model for performance assessment applications. Three popular methods were considered: deviance information criterion (DIC), conditional predictive ordinate (CPO), and posterior predictive model checking (PPMC). Using…
Descriptors: Bayesian Statistics, Item Response Theory, Comparative Analysis, Models
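For reference, two of the three indices named above have compact definitions. Writing the deviance as D(\theta) = -2 \log L(\theta \mid y):

    \mathrm{DIC} = \bar{D} + p_D, \qquad p_D = \bar{D} - D(\bar{\theta}),

    \mathrm{CPO}_i = f\!\left(y_i \mid y_{-i}\right), \qquad
    \mathrm{LPML} = \sum_i \log \mathrm{CPO}_i,

where \bar{D} is the posterior mean deviance, \bar{\theta} the posterior mean, and y_{-i} the data with observation i removed; smaller DIC and larger LPML favor a model. PPMC instead compares realized and posterior-predictive discrepancy measures, so it has no single closed-form index.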
Peer reviewed
Brown, Anna; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2011
Multidimensional forced-choice formats can significantly reduce the impact of numerous response biases typically associated with rating scales. However, if scored with classical methodology, these questionnaires produce ipsative data, which lead to distorted scale relationships and make comparisons between individuals problematic. This research…
Descriptors: Item Response Theory, Models, Questionnaires, Measurement Techniques
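To make the scoring issue above concrete: Thurstonian IRT is estimated on the binary outcomes of the pairwise comparisons implied by each forced-choice block, rather than on ipsative scale scores. A sketch of that recoding step (data layout hypothetical; model fitting not shown):

    # Recode one forced-choice block (ranked items) into pairwise binary
    # outcomes, the data layout used by Thurstonian IRT models. Illustrative only.
    from itertools import combinations

    def block_to_binary(ranking):
        """ranking: dict mapping item name -> rank (1 = most preferred)."""
        items = sorted(ranking)
        return {
            (i, k): int(ranking[i] < ranking[k])  # 1 if i preferred over k
            for i, k in combinations(items, 2)
        }

    print(block_to_binary({"A": 2, "B": 1, "C": 3}))
    # {('A', 'B'): 0, ('A', 'C'): 1, ('B', 'C'): 1}

Each binary outcome is then modeled as a function of the latent traits underlying the two items compared, which is what restores normative, between-person comparability of the scores.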