50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 232 results
Peer reviewed
Direct link
Sliter, Katherine A.; Zickar, Michael J. – Educational and Psychological Measurement, 2014
This study compared the functioning of positively and negatively worded personality items using item response theory. In Study 1, word pairs from the Goldberg Adjective Checklist were analyzed using the Graded Response Model. Across subscales, negatively worded items produced comparatively higher difficulty and lower discrimination parameters than…
Descriptors: Item Response Theory, Psychometrics, Personality Measures, Test Items
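The Graded Response Model mentioned in this abstract has a simple closed form that can be sketched directly. The following is a minimal illustration (not the authors' code), with invented parameter values chosen only to mimic the reported pattern: the negatively worded item gets lower discrimination and higher difficulty thresholds.

```python
import math

def grm_category_probs(theta, a, bs):
    """Graded Response Model: probability of each response category for
    ability theta, discrimination a, and ordered difficulty thresholds bs.
    Category probabilities are differences of adjacent boundary curves."""
    # Boundary curves P(X >= k), padded with P(X >= 0) = 1 and P(X > max) = 0.
    star = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in bs] + [0.0]
    return [star[k] - star[k + 1] for k in range(len(star) - 1)]

# Illustrative parameters: a positively worded item (higher a, lower bs)
# versus a negatively worded item (lower a, higher bs).
pos = grm_category_probs(0.0, a=2.0, bs=[-1.0, 0.0, 1.0])
neg = grm_category_probs(0.0, a=1.0, bs=[-0.5, 0.5, 1.5])
```

Each call returns a probability distribution over the four response categories; the flatter distribution for the negatively worded item reflects its weaker discrimination.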
Peer reviewed
Direct link
Lake, Christopher J.; Withrow, Scott; Zickar, Michael J.; Wood, Nicole L.; Dalal, Dev K.; Bochinski, Joseph – Educational and Psychological Measurement, 2013
Adapting the original latitude of acceptance concept to Likert-type surveys, response latitudes are defined as the range of graded response options a person is willing to endorse. Response latitudes were expected to relate to attitude involvement such that high involvement was linked to narrow latitudes (the result of selective, careful…
Descriptors: Item Response Theory, Likert Scales, Attitude Measures, Surveys
Peer reviewed
Direct link
Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng – Educational and Psychological Measurement, 2012
Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…
Descriptors: Reliability, Factor Analysis, Psychometrics, Item Response Theory
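The unweighted sum-score omega referenced in this abstract has a well-known closed form under a unifactor model: the squared sum of loadings over the squared sum of loadings plus the sum of error variances. A minimal sketch with illustrative (invented) loadings and uniquenesses:

```python
def coefficient_omega(loadings, uniquenesses):
    """Sum-score reliability omega under a unifactor model:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    true_var = sum(loadings) ** 2
    return true_var / (true_var + sum(uniquenesses))

# Hypothetical standardized loadings and their uniquenesses (1 - loading^2).
loadings = [0.7, 0.8, 0.6, 0.75]
uniquenesses = [1 - l ** 2 for l in loadings]
omega = coefficient_omega(loadings, uniquenesses)
```

Higher, more homogeneous loadings push omega toward 1; large uniquenesses pull it down.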
Peer reviewed
Direct link
Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo – Educational and Psychological Measurement, 2012
A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
Descriptors: Monte Carlo Methods, Factor Structure, Data Analysis, Psychometrics
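Parallel analysis, as described, compares observed eigenvalues against those of random data with the same dimensions and retains only factors whose eigenvalues exceed the random baseline. A toy two-variable sketch of that baseline (illustrative only, not the authors' revised procedure; for two standardized variables the correlation matrix eigenvalues are simply 1 ± r):

```python
import random
import statistics

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def pa_threshold_2vars(n_obs, n_sims=200, seed=1):
    """Mean largest eigenvalue (1 + |r|) of correlation matrices of random
    2-variable normal data: the parallel-analysis retention threshold for
    the first factor at this sample size."""
    rng = random.Random(seed)
    tops = []
    for _ in range(n_sims):
        x = [rng.gauss(0, 1) for _ in range(n_obs)]
        y = [rng.gauss(0, 1) for _ in range(n_obs)]
        tops.append(1 + abs(corr(x, y)))
    return statistics.fmean(tops)

threshold = pa_threshold_2vars(100)
# Retain the first factor only if its observed eigenvalue exceeds `threshold`.
```

With real data, the same comparison is run for every eigenvalue against its rank-matched random counterpart.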
Peer reviewed
Direct link
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on the decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
Peer reviewed
Direct link
Meijer, Rob R.; Egberink, Iris J. L. – Educational and Psychological Measurement, 2012
In recent studies, different methods were proposed to investigate invariant item ordering (IIO), but practical IIO research is an unexploited field in questionnaire construction and evaluation. In the present study, the authors explored the usefulness of different IIO methods to analyze personality scales and clinical scales. From the authors'…
Descriptors: Test Items, Personality Measures, Questionnaires, Item Response Theory
Peer reviewed
Direct link
Prati, Gabriele – Educational and Psychological Measurement, 2012
The study aimed to develop the Homophobic Bullying Scale and to investigate its psychometric properties. The items of the Homophobic Bullying Scale were created to measure high school students' bullying behaviors motivated by homophobia, including verbal bullying, relational bullying, physical bullying, property bullying, sexual harassment, and…
Descriptors: Factor Analysis, Validity, Measures (Individuals), Bullying
Peer reviewed
Direct link
Nilsson, Johanna E.; Marszalek, Jacob M.; Linnemeyer, Rachel M.; Bahner, Angela D.; Misialek, Leah Hanson – Educational and Psychological Measurement, 2011
This article describes the development and the initial psychometric evaluation of the Social Issues Advocacy Scale in two studies. In the first study, an exploratory factor analysis (n = 278) revealed a four-factor scale, accounting for 71.4% of the variance, measuring different aspects of social issue advocacy: Political and Social Advocacy,…
Descriptors: Social Problems, Life Satisfaction, Test Validity, Measures (Individuals)
Peer reviewed
Direct link
Cheng, Ying-Yao; Chen, Li-Ming; Liu, Kun-Shia; Chen, Yi-Ling – Educational and Psychological Measurement, 2011
The study aims to develop three school bullying scales--the Bully Scale, the Victim Scale, and the Witness Scale--to assess secondary school students' bullying behaviors, including physical bullying, verbal bullying, relational bullying, and cyber bullying. The items of the three scales were developed from viewpoints of bullies, victims, and…
Descriptors: Bullying, School Safety, Measures (Individuals), Psychometrics
Peer reviewed
Direct link
Brown, Allison R.; Finney, Sara J.; France, Megan K. – Educational and Psychological Measurement, 2011
The Hong Psychological Reactance Scale (HPRS) purports to measure reactance: a motivational state experienced when a behavioral freedom is threatened with elimination. To date, five studies have examined the psychometric properties of the HPRS, but reached different conclusions regarding its factor structure. The current study further investigated…
Descriptors: Measures (Individuals), Motivation, Psychometrics, Factor Structure
Peer reviewed
Direct link
Hardin, Andrew; Marcoulides, George A. – Educational and Psychological Measurement, 2011
The recent flurry of articles on formative measurement, particularly in the information systems literature, appears to be symptomatic of a much larger problem. Despite significant objections by methodological experts, these articles continue to deliver a predominately pro formative measurement message to researchers who rapidly incorporate these…
Descriptors: Measurement, Theories, Statistical Analysis, Psychometrics
Peer reviewed
Direct link
Schroeders, Ulrich; Wilhelm, Oliver – Educational and Psychological Measurement, 2011
Whether an ability test delivered on either paper or computer provides the same information is an important question in applied psychometrics. Besides the validity, it is also the fairness of a measure that is at stake if the test medium affects performance. This study provides a comprehensive review of existing equivalence research in the field…
Descriptors: Reading Comprehension, Listening Comprehension, English (Second Language), Language Tests
Peer reviewed
Direct link
Raykov, Tenko; Patelis, Thanos; Marcoulides, George A. – Educational and Psychological Measurement, 2011
A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…
Descriptors: Models, Psychometrics, Evaluation Methods, Evaluation Research
Peer reviewed
Direct link
Stone, Gregory Ethan; Koskey, Kristin L. K.; Sondergeld, Toni A. – Educational and Psychological Measurement, 2011
Typical validation studies on standard setting models, most notably the Angoff and modified Angoff models, have ignored construct development, a critical aspect associated with all conceptualizations of measurement processes. Stone compared the Angoff and objective standard setting (OSS) models and found that Angoff failed to define a legitimate…
Descriptors: Cutting Scores, Standard Setting (Scoring), Models, Construct Validity
Peer reviewed
Direct link
Engelhard, George, Jr. – Educational and Psychological Measurement, 2011
The purpose of this study is to describe a new approach for evaluating the judgments of standard-setting panelists within the context of the bookmark procedure. The bookmark procedure is widely used for setting performance standards on high-stakes assessments. A many-faceted Rasch (MFR) model is proposed for evaluating the bookmark judgments of…
Descriptors: Educational Assessment, Performance Based Assessment, Grade 3, Evaluation Methods