50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing all 10 results
Peer reviewed
Direct link
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist on the item parameters before and after item review. The purpose of this study was to examine the answer changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests
Peer reviewed
Direct link
Zumbach, Joerg; Funke, Joachim – Practical Assessment, Research & Evaluation, 2014
In two subsequent experiments, the influence of mood on academic course evaluation is examined. By means of facial feedback, either a positive or a negative mood was induced while students were completing a course evaluation questionnaire during lectures. Results from both studies reveal that a positive mood leads to better ratings of different…
Descriptors: Course Evaluation, Psychological Patterns, Student Attitudes, Feedback (Response)
Peer reviewed
Direct link
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2014
The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
Descriptors: Sample Size, Equivalency Tests, Simulation, Error of Measurement
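The equivalence-testing logic this abstract describes, declaring groups comparable when the mean difference falls significantly inside a pre-specified bound, is commonly implemented as two one-sided tests (TOST). A minimal sketch follows; the function name, the toy data, and the large-sample z approximation are illustrative assumptions, not the authors' procedure:

```python
import math
from statistics import mean, variance

def tost_equivalence(x, y, bound, alpha=0.05):
    # Two one-sided tests (TOST): equivalence is claimed only when the
    # observed mean difference is significantly greater than -bound AND
    # significantly less than +bound. Large-sample z approximation.
    diff = mean(x) - mean(y)
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    norm_cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    p_lower = 1 - norm_cdf((diff + bound) / se)  # H0: true diff <= -bound
    p_upper = norm_cdf((diff - bound) / se)      # H0: true diff >= +bound
    p = max(p_lower, p_upper)                    # TOST p-value
    return diff, p, p < alpha

# Example: two similar groups with an equivalence bound of 2 points
x = [50.1, 49.8, 50.5, 50.2, 49.9, 50.3, 50.0, 49.7, 50.4, 50.1] * 5
y = [50.0, 50.2, 49.9, 50.3, 50.1, 49.8, 50.2, 50.0, 49.9, 50.1] * 5
diff, p, equivalent = tost_equivalence(x, y, bound=2.0)
```

Note the asymmetry with ordinary significance testing: a non-significant t-test does not establish equivalence, whereas TOST requires positive evidence that the difference lies inside the bound.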
Peer reviewed
Direct link
Kennelly, Brendan; Flannery, Darragh; Considine, John; Doherty, Edel; Hynes, Stephen – Practical Assessment, Research & Evaluation, 2014
This paper outlines how a discrete choice experiment (DCE) can be used to learn more about how students are willing to trade off various features of assignments such as the nature and timing of feedback and the method used to submit assignments. A DCE identifies plausible levels of the key attributes of a good or service and then presents the…
Descriptors: Foreign Countries, Preferences, Assignments, Feedback (Response)
Peer reviewed
Direct link
Carleton, R. Nicholas; Thibodeau, Michel A.; Osborne, Jason W.; Asmundson, Gordon J. G. – Practical Assessment, Research & Evaluation, 2012
The present study was designed to test for item order effects by measuring four distinct constructs that contribute substantively to anxiety-related psychopathology (i.e., anxiety sensitivity, fear of negative evaluation, injury/illness sensitivity, and intolerance of uncertainty). Participants (n = 999; 71% women) were randomly assigned to…
Descriptors: Anxiety, Test Items, Serial Ordering, Measures (Individuals)
Peer reviewed
Direct link
Peer, Eyal; Gamliel, Eyal – Practical Assessment, Research & Evaluation, 2011
When respondents answer paper-and-pencil (PP) questionnaires, they sometimes modify their responses to correspond to previously answered items. As a result, this response bias might artificially inflate the reliability of PP questionnaires. We compared the internal consistency of PP questionnaires to computerized questionnaires that presented a…
Descriptors: Response Style (Tests), Questionnaires, Reliability, Undergraduate Students
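The internal-consistency comparison in this abstract is conventionally based on Cronbach's alpha, which rises as item responses covary; the authors' concern is that carry-over between adjacent paper-and-pencil items inflates that covariation. A minimal sketch of the computation (the function name and toy data are assumptions for illustration):

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item, each with one
    # entry per respondent. Alpha = k/(k-1) * (1 - sum(item var)/var(totals)).
    k = len(items)
    sum_item_var = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Perfectly covarying items yield alpha = 1.0
items = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
alpha = cronbach_alpha(items)
```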
Peer reviewed
Direct link
Bleske-Rechek, April; Michels, Kelsey – Practical Assessment, Research & Evaluation, 2010
Since its inception in 1999, the RateMyProfessors.com (RMP.com) website has grown in popularity and, with that, notoriety. In this research we tested three assumptions about the website: (1) Students use RMP.com to either rant or rave; (2) Students who post on RMP.com are different from students who do not post; and (3) Students reward easiness by…
Descriptors: Student Motivation, Student Evaluation of Teacher Performance, Undergraduate Students, College Faculty
Peer reviewed
Direct link
Reynolds-Keefer, Laura – Practical Assessment, Research & Evaluation, 2010
In Andrade and Du (2005), the authors discuss the ways in which students perceive and use rubrics to support learning in the classroom. To further examine the impact of rubrics on student learning, this study explored how rubrics affected students' learning, as well as whether using rubrics influenced the likelihood that they would use…
Descriptors: Scoring Rubrics, Preservice Teacher Education, Preservice Teachers, Undergraduate Students
Peer reviewed
Cassady, Jerrell C. – Practical Assessment, Research & Evaluation, 2001
Studied the accuracy and trends of deviation noted in undergraduates' self-reported Scholastic Assessment Test (SAT) and grade point average (GPA) values. Results for 89 undergraduates show that students had highly reliable ratings of cumulative GPA, but the overall accuracies of self-reported SAT scores were considerably lower than the accuracy…
Descriptors: Error of Measurement, Grade Point Average, Higher Education, Trend Analysis
Peer reviewed
Cassady, Jerrell C. – Practical Assessment, Research & Evaluation, 2001
Studied the stability of test anxiety over time by examining the level of reported cognitive test anxiety at three points in an academic semester. Results for 64 undergraduates show that it is practical to collect test anxiety data at times other than when a test is being completed. It does not seem necessary to collect test anxiety data prior to…
Descriptors: Cognitive Tests, Data Collection, Higher Education, Reliability