50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 16 results
Peer reviewed
Direct link
Beaujean, A. Alexander – Practical Assessment, Research & Evaluation, 2014
A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
Descriptors: Regression (Statistics), Sample Size, Sampling, Monte Carlo Methods
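The simulation-based approach to sample-size planning that this abstract describes can be sketched roughly as follows. This is a minimal illustration for a single predictor, not the paper's method: the slope, error distribution, and significance cutoff are assumptions chosen for the example.

```python
import numpy as np

def regression_power(n, beta=0.3, sims=2000, seed=0):
    """Estimate power to detect a single regression slope by simulation.

    Illustrative assumptions: one standard-normal predictor, unit-variance
    normal errors, and a two-sided test at alpha = .05 (1.96 cutoff).
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)              # true slope = beta
        b = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # OLS slope
        a = y.mean() - b * x.mean()                    # OLS intercept
        resid = y - (a + b * x)
        se = np.sqrt(resid.var(ddof=2) / (n * np.var(x)))
        hits += abs(b / se) > 1.96
    return hits / sims
```

Increasing `n` until the simulated power clears a target (say .80) replaces a closed-form formula whose assumptions the collected data may not meet.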
Peer reviewed
Direct link
Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D. – Practical Assessment, Research & Evaluation, 2012
This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
Descriptors: Likert Scales, Rating Scales, Reliability, Computation
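The conventional, Pearson-based Cronbach's alpha that the abstract contrasts with ordinal alternatives can be computed as below. This is a sketch of the conventional coefficient only; the paper's recommended ordinal alpha substitutes a polychoric correlation matrix, whose estimation requires a dedicated routine not shown here.

```python
import numpy as np

def cronbach_alpha(items):
    """Conventional (Pearson-based) Cronbach's alpha.

    `items` is an (n_respondents, k_items) array of item responses.
    For ordinal (Likert-type) data this estimate can understate
    reliability; an ordinal alpha based on polychoric correlations
    is the alternative the paper discusses.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```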
Peer reviewed
Direct link
Carleton, R. Nicholas; Thibodeau, Michel A.; Osborne, Jason W.; Asmundson, Gordon J. G. – Practical Assessment, Research & Evaluation, 2012
The present study was designed to test for item order effects by measuring four distinct constructs that contribute substantively to anxiety-related psychopathology (i.e., anxiety sensitivity, fear of negative evaluation, injury/illness sensitivity, and intolerance of uncertainty). Participants (n = 999; 71% women) were randomly assigned to…
Descriptors: Anxiety, Test Items, Serial Ordering, Measures (Individuals)
Peer reviewed
Direct link
Osborne, Jason W.; Fitzpatrick, David C. – Practical Assessment, Research & Evaluation, 2012
Exploratory Factor Analysis (EFA) is a powerful and commonly-used tool for investigating the underlying variable structure of a psychometric instrument. However, there is much controversy in the social sciences with regard to the techniques used in EFA (Ford, MacCallum, & Tait, 1986; Henson & Roberts, 2006) and the reliability of the outcome.…
Descriptors: Factor Analysis, Replication (Evaluation), Reliability, Factor Structure
Peer reviewed
Direct link
Schafer, William D.; Lissitz, Robert W.; Zhu, Xiaoshu; Zhang, Yuan; Hou, Xiaodong; Li, Ying – Practical Assessment, Research & Evaluation, 2012
Interest in Student Growth Modeling (SGM) and Value Added Modeling (VAM) arises from educators concerned with measuring the effectiveness of teaching and other school activities through changes in student performance as a companion and perhaps even an alternative to status. Several formal statistical models have been proposed for year-to-year…
Descriptors: Teacher Evaluation, Teacher Effectiveness, School Effectiveness, Academic Achievement
Peer reviewed
Direct link
Sanders, Shane; Walia, Bhavneet; Potter, Joel; Linna, Kenneth W. – Practical Assessment, Research & Evaluation, 2011
Online instructional ratings are taken by many with a grain of salt. This study analyzes the ability of said ratings to estimate the official (university-administered) instructional ratings of the same respective university instructors. Given self-selection among raters, we further test whether more online ratings of instructors lead to better…
Descriptors: Prediction, Student Evaluation of Teacher Performance, Teacher Evaluation, Web Sites
Peer reviewed
Direct link
Peer, Eyal; Gamliel, Eyal – Practical Assessment, Research & Evaluation, 2011
When respondents answer paper-and-pencil (PP) questionnaires, they sometimes modify their responses to correspond to previously answered items. As a result, this response bias might artificially inflate the reliability of PP questionnaires. We compared the internal consistency of PP questionnaires to computerized questionnaires that presented a…
Descriptors: Response Style (Tests), Questionnaires, Reliability, Undergraduate Students
Peer reviewed
Direct link
Schafer, William D.; Coverdale, Bradley J.; Luxenberg, Harlan; Jin, Ying – Practical Assessment, Research & Evaluation, 2011
There are relatively few examples of quantitative approaches to quality control in educational assessment and accountability contexts. Among the several techniques that are used in other fields, Shewhart charts have been found in a few instances to be applicable in educational settings. This paper describes Shewhart charts and gives examples of how…
Descriptors: Charts, Quality Control, Educational Assessment, Statistical Analysis
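The core of a Shewhart chart can be sketched as below: set a center line and 3-sigma control limits from baseline data, then flag observations that fall outside them. This is a minimal individuals-chart variant for illustration; the paper's educational examples may use other chart types.

```python
import numpy as np

def shewhart_limits(baseline):
    """Center line and 3-sigma control limits from baseline observations."""
    baseline = np.asarray(baseline, dtype=float)
    center = baseline.mean()
    sigma = baseline.std(ddof=1)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(values, lcl, ucl):
    """Indices of observations falling outside the control limits."""
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]
```

A point outside the limits signals a result unlikely to arise from ordinary variation, prompting a closer look at that assessment cycle.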
Peer reviewed
Direct link
Lovorn, Michael G.; Rezaei, Ali Reza – Practical Assessment, Research & Evaluation, 2011
Recent studies report that the use of rubrics may not improve the reliability of assessment if raters are not well trained on how to design and employ them effectively. The intent of this two-phase study was to test if training pre-service and new in-service teachers in the construction, use, and evaluation of rubrics would improve the reliability…
Descriptors: Scoring Rubrics, Training, Preservice Teacher Education, Inservice Teacher Education
Peer reviewed
Direct link
Brimi, Hunter M. – Practical Assessment, Research & Evaluation, 2011
This research replicates the work of Starch and Elliot (1912) by examining the reliability of the grading by English teachers in a single school district. Ninety high school teachers graded the same student paper following professional development sessions in which they were trained to use NWREL's "6+1 Traits of Writing." These participants had…
Descriptors: Grading, Reliability, Secondary School Teachers, English Teachers
Peer reviewed
Direct link
Bleske-Rechek, April; Fritsch, Amber – Practical Assessment, Research & Evaluation, 2011
At the same time as some faculty committees and corporations are appealing to the use of online ratings from RateMyProfessors.com to inform promotion decisions and nationwide university rankings, others are derogating the site as an unreliable source of idiosyncratic student ratings and commentary. In this paper we describe a study designed to…
Descriptors: Student Evaluation of Teacher Performance, College Faculty, College Students, Web Sites
Peer reviewed
Osborne, Jason W.; Waters, Elaine – Practical Assessment, Research & Evaluation, 2002
Discusses assumptions of multiple regression that are not robust to violation: linearity, reliability of measurement, homoscedasticity, and normality. Stresses the importance of checking assumptions. (SLD)
Descriptors: Error of Measurement, Regression (Statistics), Reliability
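The assumption checking that Osborne and Waters stress can be sketched with simple residual diagnostics. This is an illustrative minimal check, not the paper's procedure: the two indicators below (correlation of absolute residuals with the predictor, and residual skewness) are rough screens for heteroscedasticity and non-normality in simple linear regression.

```python
import numpy as np

def residual_diagnostics(x, y):
    """Quick residual checks for simple linear regression.

    Returns the residuals plus two rough indicators: the correlation
    between |residual| and x (large values hint at heteroscedasticity)
    and the skewness of the residuals (large values hint at
    non-normality).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # OLS slope
    a = y.mean() - b * x.mean()                    # OLS intercept
    resid = y - (a + b * x)
    hetero = np.corrcoef(np.abs(resid), x)[0, 1]
    skew = ((resid - resid.mean()) ** 3).mean() / resid.std() ** 3
    return resid, hetero, skew
```

In practice these numeric screens complement, rather than replace, visual inspection of residual plots.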
Peer reviewed
Download full text (PDF on ERIC)
Rudner, Lawrence M., Ed.; Schafer, William D., Ed. – Practical Assessment, Research & Evaluation, 2001
This document consists of papers published in the electronic journal "Practical Assessment, Research & Evaluation" during 2000-2001: (1) "Advantages of Hierarchical Linear Modeling" (Jason W. Osborne); (2) "Prediction in Multiple Regression" (Jason W. Osborne); (3) "Scoring Rubrics: What, When, and How?" (Barbara M. Moskal); (4) "Organizational…
Descriptors: Educational Assessment, Educational Research, Elementary Secondary Education, Evaluation Methods
Peer reviewed
Simon, Marielle; Forgette-Giroux, Renee – Practical Assessment, Research & Evaluation, 2001
Presents a generic rubric to assess postsecondary academic skills, describes its preliminary application in a university setting, and discusses related issues from a research point of view. The rubric was used with four graduate and two undergraduate classes (n=approximately 100 students). Interrater and intrarater aspects of reliability were…
Descriptors: Academic Achievement, College Students, Higher Education, Reliability
Peer reviewed
Cassady, Jerrell C. – Practical Assessment, Research & Evaluation, 2001
Studied the stability of test anxiety over time by examining the level of reported cognitive test anxiety at three points in an academic semester. Results for 64 undergraduates show that it is practical to collect test anxiety data at times other than when a test is being completed. It does not seem necessary to collect test anxiety data prior to…
Descriptors: Cognitive Tests, Data Collection, Higher Education, Reliability