50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 86 results
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Computation, Test Reliability
Peer reviewed
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, it is still debated how to measure and control for response styles such as extreme responding. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
Peer reviewed
Wiley, Edward W.; Shavelson, Richard J.; Kurpius, Amy A. – Educational and Psychological Measurement, 2014
The name "SAT" has become synonymous with college admissions testing; it has been dubbed "the gold standard." Numerous studies on its reliability and predictive validity show that the SAT predicts college performance beyond high school grade point average. Surprisingly, studies of the factorial structure of the current version…
Descriptors: College Readiness, College Admission, College Entrance Examinations, Factor Analysis
Peer reviewed
Sliter, Katherine A.; Zickar, Michael J. – Educational and Psychological Measurement, 2014
This study compared the functioning of positively and negatively worded personality items using item response theory. In Study 1, word pairs from the Goldberg Adjective Checklist were analyzed using the Graded Response Model. Across subscales, negatively worded items produced comparatively higher difficulty and lower discrimination parameters than…
Descriptors: Item Response Theory, Psychometrics, Personality Measures, Test Items
Peer reviewed
Jones, W. Paul – Educational and Psychological Measurement, 2014
A study in a university clinic/laboratory investigated adaptive Bayesian scaling as a supplement to interpretation of scores on the Mini-IPIP. A "probability of belonging" in categories of low, medium, or high on each of the Big Five traits was calculated after each item response and continued until all items had been used or until a…
Descriptors: Personality Traits, Personality Measures, Bayesian Statistics, Clinics
Peer reviewed
Paulhus, Delroy L.; Dubois, Patrick J. – Educational and Psychological Measurement, 2014
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
Descriptors: Educational Assessment, Knowledge Level, Accuracy, Cognitive Ability
Peer reviewed
Keeley, Jared W.; English, Taylor; Irons, Jessica; Henslee, Amber M. – Educational and Psychological Measurement, 2013
Many measurement biases affect student evaluations of instruction (SEIs). However, two have been relatively understudied: halo effects and ceiling/floor effects. This study examined these effects in two ways. To examine the halo effect, using a videotaped lecture, we manipulated specific teacher behaviors to be "good" or "bad"…
Descriptors: Robustness (Statistics), Test Bias, Course Evaluation, Student Evaluation of Teacher Performance
Peer reviewed
Kaliski, Pamela K.; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna L.; Plake, Barbara S.; Reshetar, Rosemary A. – Educational and Psychological Measurement, 2013
The many-faceted Rasch (MFR) model has been used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR model for examining the quality of ratings obtained from a standard…
Descriptors: Item Response Theory, Models, Standard Setting (Scoring), Science Tests
Peer reviewed
Shaw, Emily J.; Marini, Jessica P.; Mattern, Krista D. – Educational and Psychological Measurement, 2013
The current study evaluated the relationship between various operationalizations of the Advanced Placement[R] (AP) exam and course information with first-year grade point average (FYGPA) in college to better understand the role of AP in college admission decisions. In particular, the incremental validity of the different AP variables, above…
Descriptors: Advanced Placement Programs, Grade Point Average, College Freshmen, College Admission
Peer reviewed
Ferrando, Pere J.; Anguiano-Carrasco, Cristina – Educational and Psychological Measurement, 2013
This article proposes a two-stage procedure aimed at identifying faking in personality tests. The procedure, which can be considered as an extension and refinement of previous item response theory (IRT)-based proposals, combines the information provided by a structural equation model (SEM) in the first stage with that provided by an IRT-based…
Descriptors: Structural Equation Models, Item Response Theory, Personality Measures, Deception
Peer reviewed
Lake, Christopher J.; Withrow, Scott; Zickar, Michael J.; Wood, Nicole L.; Dalal, Dev K.; Bochinski, Joseph – Educational and Psychological Measurement, 2013
Adapting the original latitude of acceptance concept to Likert-type surveys, response latitudes are defined as the range of graded response options a person is willing to endorse. Response latitudes were expected to relate to attitude involvement such that high involvement was linked to narrow latitudes (the result of selective, careful…
Descriptors: Item Response Theory, Likert Scales, Attitude Measures, Surveys
Peer reviewed
Albano, Anthony D.; Rodriguez, Michael C. – Educational and Psychological Measurement, 2013
Although a substantial amount of research has been conducted on differential item functioning in testing, studies have focused on detecting differential item functioning rather than on explaining how or why it may occur. Some recent work has explored sources of differential functioning using explanatory and multilevel item response models. This…
Descriptors: Test Bias, Hierarchical Linear Modeling, Gender Differences, Educational Opportunities
Peer reviewed
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2013
The usefulness of the l[subscript z] person-fit index was investigated with achievement test data from 20 exams given to more than 3,200 college students. Results for three methods of estimating θ showed that the distributions of l[subscript z] were not consistent with its theoretical distribution, resulting in general overfit to the item response…
Descriptors: Achievement Tests, College Students, Goodness of Fit, Item Response Theory
Peer reviewed
Dunn, Karee E.; Lo, Wen-Juo; Mulvenon, Sean W.; Sutcliffe, Rachel – Educational and Psychological Measurement, 2012
The Motivated Strategies for Learning Questionnaire (MSLQ) has dominated self-regulated learning research since the early 1990s. In this study, the two MSLQ subscales specifically designed to assess self-regulation--Metacognitive Self-Regulation subscale and Effort Regulation subscale--were examined. Results indicated that the structure of the two…
Descriptors: Questionnaires, Self Control, Learning Strategies, Metacognition
Peer reviewed
Davison, Mark L.; Semmes, Robert; Huang, Lan; Close, Catherine N. – Educational and Psychological Measurement, 2012
Data from 181 college students were used to assess whether math reasoning item response times in computerized testing can provide valid and reliable measures of a speed dimension. The alternate forms reliability of the speed dimension was .85. A two-dimensional structural equation model suggests that the speed dimension is related to the accuracy…
Descriptors: Computer Assisted Testing, Reaction Time, Reliability, Validity