50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing all 12 results
Peer reviewed
Baglin, James – Practical Assessment, Research & Evaluation, 2014
Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many…
Descriptors: Factor Analysis, Data, Likert Scales, Computer Software
Peer reviewed
de Winter, J. C. F. – Practical Assessment, Research & Evaluation, 2013
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
Descriptors: Sample Size, Statistical Analysis, Hypothesis Testing, Simulation
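As an illustrative sketch of the technique the abstract discusses (not the author's code; the data below are hypothetical), a two-sample Student's t statistic at the paper's "extremely small" N of 5 per group can be computed directly:

```python
from math import sqrt

def two_sample_t(a, b):
    """Student's two-sample t statistic with pooled variance
    (equal-variance assumption); df = len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased (n - 1) sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = sqrt(pooled * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2

# Two illustrative groups at N = 5 each
t, df = two_sample_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
print(t, df)  # t = -1.0 on 8 degrees of freedom
```

The statistic itself is well defined at any N ≥ 2 per group; the paper's question is whether its p-values remain trustworthy at such sizes.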
Peer reviewed
Beavers, Amy S.; Lounsbury, John W.; Richards, Jennifer K.; Huck, Schuyler W.; Skolits, Gary J.; Esquivel, Shelley L. – Practical Assessment, Research & Evaluation, 2013
The uses and methodology of factor analysis are widely debated and discussed, especially the issues of rotational use, methods of confirmatory factor analysis, and adequate sample size. The variety of perspectives and often conflicting opinions can lead to confusion among researchers about best practices for using factor analysis. The focus of the…
Descriptors: Factor Analysis, Educational Research, Best Practices, Sample Size
Peer reviewed
Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D. – Practical Assessment, Research & Evaluation, 2012
This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
Descriptors: Likert Scales, Rating Scales, Reliability, Computation
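For context on the baseline the paper contrasts with ordinal reliability coefficients, a minimal sketch of conventional Cronbach's alpha computed from raw item scores (hypothetical data; the paper's ordinal alpha instead uses a polychoric correlation matrix):

```python
def cronbach_alpha(items):
    """Conventional Cronbach's alpha from raw item scores.
    items: list of k item-score lists, all of the same length n.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly consistent Likert-type items -> alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

The paper's argument is that applying this Pearson-based formula to ordinal item responses can understate reliability, motivating the ordinal alternative.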
Peer reviewed
Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim – Practical Assessment, Research & Evaluation, 2012
Multiple regression (MR) analyses are commonly employed in social science fields. Interpretation of results, however, often reflects an overreliance on beta weights, resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…
Descriptors: Multiple Regression Analysis, Predictor Variables, Measurement, Correlation
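As a sketch of one alternative to beta weights that such articles typically discuss (the correlations below are hypothetical, not from the article): for two standardized predictors, the betas and the structure coefficients (each predictor's correlation with the predicted scores) have closed forms in terms of the correlations.

```python
from math import sqrt

def two_predictor_importance(r_y1, r_y2, r_12):
    """Standardized beta weights and structure coefficients for a
    two-predictor regression, from the predictor-criterion
    correlations (r_y1, r_y2) and inter-predictor correlation r_12."""
    beta1 = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2)
    beta2 = (r_y2 - r_y1 * r_12) / (1 - r_12 ** 2)
    r_squared = beta1 * r_y1 + beta2 * r_y2
    big_r = sqrt(r_squared)
    # Structure coefficient: predictor's correlation with y-hat
    s1, s2 = r_y1 / big_r, r_y2 / big_r
    return (beta1, beta2), (s1, s2), r_squared

# Illustrative correlations: r_y1 = .5, r_y2 = .4, r_12 = .3
betas, structures, r2 = two_predictor_importance(0.5, 0.4, 0.3)
```

Comparing betas against structure coefficients is one of the simpler ways to see past suppression and collinearity effects that beta weights alone can mask.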
Peer reviewed
Schafer, William D.; Lissitz, Robert W.; Zhu, Xiaoshu; Zhang, Yuan; Hou, Xiaodong; Li, Ying – Practical Assessment, Research & Evaluation, 2012
Interest in Student Growth Modeling (SGM) and Value Added Modeling (VAM) arises from educators concerned with measuring the effectiveness of teaching and other school activities through changes in student performance, as a companion, and perhaps even an alternative, to status measures. Several formal statistical models have been proposed for year-to-year…
Descriptors: Teacher Evaluation, Teacher Effectiveness, School Effectiveness, Academic Achievement
Peer reviewed
Cahan, Sorel; Gamliel, Eyal – Practical Assessment, Research & Evaluation, 2011
Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., η², f²) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation--that is, measures of dispersion about the mean. In…
Descriptors: Social Sciences, Effect Size, Evaluation, Behavioral Science Research
Peer reviewed
Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2010
Many of us in the social sciences deal with data that do not conform to assumptions of normality and/or homoscedasticity/homogeneity of variance. Some research has shown that parametric tests (e.g., multiple regression, ANOVA) can be robust to modest violations of these assumptions. Yet the reality is that almost all analyses (even nonparametric…
Descriptors: Social Sciences, Regression (Statistics), Nonparametric Statistics, Data
Peer reviewed
Filetti, Jean; Wright, Mary; King, William M. – Practical Assessment, Research & Evaluation, 2010
This article examines how a faculty member's status--either tenured or tenure-track--might affect the grades assigned to students in a writing class. We begin with a brief review of the research surrounding faculty to student assessment practices and follow with specific controversies regarding faculty motivation pertaining to grading practices.…
Descriptors: Tenure, Academic Rank (Professional), Grades (Scholastic), Grading
Peer reviewed
Goltz, Heather Honore; Smith, Matthew Lee – Practical Assessment, Research & Evaluation, 2010
Yule (1903) and Simpson (1951) described a statistical paradox that occurs when data are aggregated. In such situations, the aggregated data may reveal a trend that directly contrasts with the trends of the sub-groups; indeed, the aggregate trend may even run in the opposite direction of the sub-group trends. To reveal Yule-Simpson's paradox (YSP)-type…
Descriptors: Data, Statistics, Statistical Analysis, Models
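The paradox the abstract describes is easy to reproduce with made-up numbers (a minimal sketch, not data from the article): two sub-groups each trend upward, yet the pooled data trend downward.

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Two sub-groups, each with a clearly positive trend (slope +1)
x1, y1 = [1, 2, 3], [10, 11, 12]
x2, y2 = [6, 7, 8], [1, 2, 3]

# Aggregating them reverses the direction of the trend
xa, ya = x1 + x2, y1 + y2
print(slope(x1, y1), slope(x2, y2), slope(xa, ya))
# both sub-group slopes are +1.0; the aggregate slope is negative
```

The reversal here is driven by the group difference in level (the second group scores lower overall), exactly the kind of lurking grouping variable the paradox warns about.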
Peer reviewed
Wiberg, Marie; Sundstrom, Anna – Practical Assessment, Research & Evaluation, 2009
A common problem in predictive validity studies in the educational and psychological fields, e.g. in educational and employment selection, is restriction in range of the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
Descriptors: Predictive Validity, Predictor Variables, Correlation, Mathematics
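One standard correction of the kind the abstract refers to is Thorndike's Case 2 formula for direct restriction on the predictor; a minimal sketch with hypothetical values (the article compares approaches, and this is only one of them):

```python
from math import sqrt

def correct_for_range_restriction(r, u):
    """Thorndike Case 2 correction for direct range restriction
    on the predictor.
    r: correlation observed in the restricted (selected) sample
    u: ratio of unrestricted to restricted predictor SDs (u >= 1)"""
    return r * u / sqrt(1 - r ** 2 + (r ** 2) * (u ** 2))

# Illustrative: observed r = .30 in a selected sample whose
# predictor SD is half that of the applicant pool (u = 2)
r_corrected = correct_for_range_restriction(0.30, 2.0)
```

With u = 1 (no restriction) the formula returns r unchanged; as u grows, the corrected estimate rises toward, but never reaches, 1.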
Peer reviewed
Rudner, Lawrence; Gagne, Phill – Practical Assessment, Research & Evaluation, 2001
Describes the three most promising approaches to essay scoring by computer: (1) Project Essay Grade (PEG; E. Page, 1966); (2) Intelligent Essay Assessor (IEA; T. Landauer, 1997); and (3) E-rater (J. Burstein, Educational Testing Service). All of these proprietary systems return grades that correlate meaningfully with those of human raters. (SLD)
Descriptors: Computer Uses in Education, Correlation, Essays, Scoring