50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing all 11 results
Peer reviewed
Direct link
Han, Kyung T.; Guo, Fanmin – Practical Assessment, Research & Evaluation, 2014
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Data, Computer Assisted Testing
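The FIML idea the abstract describes can be illustrated with a toy sketch (my own illustration, not the authors' code; the 2PL model and item parameters are invented): the likelihood is computed from the administered items only, so items an adaptive test never presented simply drop out of estimation.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fiml_loglik(theta, responses, a, b):
    """Log-likelihood using only the observed (administered) items,
    as FIML does; unadministered items (NaN) simply drop out."""
    mask = ~np.isnan(responses)
    p = p_2pl(theta, a[mask], b[mask])
    r = responses[mask]
    return np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

a = np.array([1.0, 1.2, 0.8, 1.5])     # invented discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])    # invented difficulties
resp = np.array([1.0, np.nan, 0.0, np.nan])  # two items never administered

# Grid-search ML estimate of theta from the observed items alone
grid = np.linspace(-4, 4, 801)
theta_hat = grid[np.argmax([fiml_loglik(t, resp, a, b) for t in grid])]
```

The key MAR point is that the mask depends only on which items were administered, not on the unobserved responses themselves.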
Peer reviewed
Direct link
Smith, William C. – Practical Assessment, Research & Evaluation, 2014
The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns that plague randomized controlled trials (RCTs) makes them a valuable and useful approach in education evaluation. RD is the only explicitly recognized quasi-experimental approach identified by the Institute of Education…
Descriptors: Computation, Regression (Statistics), Statistical Bias, Quasiexperimental Design
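A minimal sharp-RD sketch (illustrative only; the data-generating model, bandwidth, and effect size are invented for the example): fit a separate line on each side of the cutoff and take the gap between the two fits at the cutoff as the treatment-effect estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(-1, 1, n)          # running variable, cutoff at 0
treated = (score >= 0).astype(float)   # sharp assignment by cutoff
tau = 0.5                              # true treatment effect
y = 1.0 + 0.8 * score + tau * treated + rng.normal(0, 0.2, n)

# Fit linear regressions within a bandwidth h on each side of the
# cutoff; the discontinuity at zero estimates the treatment effect.
h = 0.5
left = (score < 0) & (score > -h)
right = (score >= 0) & (score < h)
fit_left = np.polyfit(score[left], y[left], 1)
fit_right = np.polyfit(score[right], y[right], 1)
rd_effect = np.polyval(fit_right, 0.0) - np.polyval(fit_left, 0.0)
```

Because assignment is determined entirely by the observed running variable, comparing units just above and just below the cutoff recovers the effect without withholding treatment at random.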
Peer reviewed
Direct link
Beaujean, A. Alexander – Practical Assessment, Research & Evaluation, 2014
A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information, such as the model of interest, the strength of the…
Descriptors: Regression (Statistics), Sample Size, Sampling, Monte Carlo Methods
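The Monte Carlo approach to sample-size planning can be sketched as follows (an illustration under an assumed effect size and error variance, not the article's own procedure): simulate data at several candidate sample sizes and estimate power as the proportion of simulated datasets in which the null hypothesis is rejected.

```python
import numpy as np

def mc_power(n, beta=0.3, reps=500, alpha=0.05, seed=1):
    """Monte Carlo power: simulate y = beta*x + e, test H0: beta = 0
    on the OLS slope, and count rejections across replications."""
    rng = np.random.default_rng(seed)
    crit = 1.96  # normal approximation to the critical value
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)
        bhat = np.sum(x * y) / np.sum(x * x)        # OLS slope (no intercept)
        resid = y - bhat * x
        se = np.sqrt(np.sum(resid**2) / (n - 1) / np.sum(x * x))
        if abs(bhat / se) > crit:
            rejections += 1
    return rejections / reps

# Estimated power rises with sample size for a fixed effect
powers = {n: mc_power(n) for n in (20, 50, 100)}
```

The researcher then picks the smallest n whose simulated power clears the target (conventionally 0.80), under a model and effect size they must justify.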
Peer reviewed
Direct link
Wyse, Adam E.; Seo, Dong Gi – Practical Assessment, Research & Evaluation, 2014
This article provides a brief overview and comparison of three conditional growth percentile methods: student growth percentiles, percentile rank residuals, and a nonparametric matching method. These approaches seek to describe student growth in terms of the relative percentile ranking of a student in relation to students who had the same…
Descriptors: Academic Achievement, Achievement Gains, Evaluation Methods, Statistical Analysis
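One of the three methods, percentile rank residuals, can be sketched in a few lines (simulated scores; illustrative only, not the article's code): condition the current score on the prior score with a regression, then convert each student's residual to a percentile rank.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
prior = rng.normal(500, 50, n)                    # prior-year scale scores
current = 0.8 * prior + 120 + rng.normal(0, 20, n)

# Regress current score on prior score, then rank the residuals:
# a student's growth percentile is their standing among students
# with the same expected current score.
slope, intercept = np.polyfit(prior, current, 1)
resid = current - (slope * prior + intercept)
ranks = resid.argsort().argsort()                 # 0 .. n-1, no ties here
growth_pct = 100.0 * (ranks + 0.5) / n            # conditional growth percentile
```

A student at the 75th growth percentile scored higher than about three quarters of students who started from a comparable prior score, regardless of the absolute level of either score.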
Peer reviewed
Direct link
Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2013
Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed…
Descriptors: Multiple Regression Analysis, Least Squares Statistics, Computation, Statistical Analysis
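The clarified assumption can be demonstrated with a quick simulation (my own illustration, not drawn from either paper): a strongly skewed predictor combined with normally distributed errors still yields approximately symmetric residuals, because OLS assumes normal errors, not normal variables.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.exponential(1.0, n)                # heavily right-skewed predictor
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)    # errors ARE normal

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

def skew(v):
    """Sample skewness (third standardized moment)."""
    v = v - v.mean()
    return np.mean(v**3) / np.mean(v**2) ** 1.5

# The predictor is strongly skewed, yet the residuals are close to
# symmetric: normality is an assumption about the error term.
skew_x, skew_resid = skew(x), skew(resid)
```

Checking the distribution of the residuals, rather than of x or y themselves, is therefore the relevant diagnostic.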
Peer reviewed
Direct link
Courtney, Matthew Gordon Ray – Practical Assessment, Research & Evaluation, 2013
Exploratory factor analysis (EFA) is a common technique utilized in the development of assessment instruments. The key question when performing this procedure is how to best estimate the number of factors to retain. This is especially important as under- or over-extraction may lead to erroneous conclusions. Although recent advancements have been…
Descriptors: Factor Analysis, Computer Software, Open Source Technology, Computation
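One widely recommended criterion for the number of factors to retain, Horn's parallel analysis, can be sketched as follows (a standalone sketch with simulated one-factor data, not taken from the article): retain factors whose observed eigenvalues exceed the mean eigenvalues obtained from random data of the same dimensions.

```python
import numpy as np

def parallel_analysis(data, reps=100, seed=4):
    """Horn's parallel analysis: count eigenvalues of the observed
    correlation matrix that exceed the average eigenvalues of
    same-sized random-normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(reps):
        r = rng.normal(size=(n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand_eig /= reps
    return int(np.sum(obs_eig > rand_eig))

# Simulate 6 items driven by a single common factor
rng = np.random.default_rng(5)
f = rng.normal(size=(500, 1))
items = f @ np.full((1, 6), 0.7) + rng.normal(0, 0.5, (500, 6))
n_factors = parallel_analysis(items)
```

Because sampling error alone produces eigenvalues above 1, comparing against random-data eigenvalues guards against the over-extraction the abstract warns about.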
Peer reviewed
Direct link
Han, Kyung T. – Practical Assessment, Research & Evaluation, 2012
For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…
Descriptors: Statistical Analysis, Models, Multiple Choice Tests, Guessing (Tests)
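The 3PLM and the role of its c-parameter can be written out directly (the formula is standard; the item parameters are invented for illustration): c is the lower asymptote of the item characteristic curve, which is why reading it strictly as a "guessing" probability can mislead.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL item response function:
    P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# The c-parameter sets the LOWER ASYMPTOTE: even very low-ability
# examinees answer correctly with probability approaching c.
low = p_3pl(-6.0, a=1.2, b=0.0, c=0.2)   # near the asymptote c
mid = p_3pl(0.0, a=1.2, b=0.0, c=0.2)    # at b: halfway between c and 1
high = p_3pl(6.0, a=1.2, b=0.0, c=0.2)   # near 1
```

Note that at theta = b the probability is (1 + c)/2, not 0.5, another way the c-parameter reshapes the curve beyond simple guessing.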
Peer reviewed
Direct link
Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D. – Practical Assessment, Research & Evaluation, 2012
This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
Descriptors: Likert Scales, Rating Scales, Reliability, Computation
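The relationship between conventional and ordinal alpha can be sketched with the standardized-alpha formula (the correlation values below are invented for illustration; a real ordinal alpha would estimate the polychoric correlation matrix from the item responses):

```python
import numpy as np

def alpha_from_corr(R):
    """Standardized Cronbach's alpha from an inter-item correlation
    matrix R: alpha = k * rbar / (1 + (k - 1) * rbar). Ordinal alpha
    applies the same formula to a POLYCHORIC correlation matrix
    instead of the Pearson one."""
    k = R.shape[0]
    rbar = (R.sum() - k) / (k * (k - 1))   # mean off-diagonal correlation
    return k * rbar / (1.0 + (k - 1) * rbar)

# Pearson correlations of 4 coarsely categorized Likert items tend to
# be attenuated; polychoric correlations of the same items are larger
# (values here are illustrative, not estimated from data).
R_pearson = np.full((4, 4), 0.4); np.fill_diagonal(R_pearson, 1.0)
R_polychoric = np.full((4, 4), 0.5); np.fill_diagonal(R_polychoric, 1.0)
alpha_conventional = alpha_from_corr(R_pearson)
alpha_ordinal = alpha_from_corr(R_polychoric)
```

Because categorization attenuates Pearson correlations, conventional alpha typically understates the reliability of ordinal scales relative to ordinal alpha.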
Peer reviewed
Direct link
Konstantopoulos, Spyros – Practical Assessment, Research & Evaluation, 2009
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
Descriptors: Social Science Research, Effect Size, Computation, Tables (Data)
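The effect of nesting on power can be sketched with the design-effect approximation (a normal-approximation sketch with invented design parameters, not the article's computations): clustering inflates the variance of a simple-random-sample estimate by deff = 1 + (m - 1) * ICC, where m is the cluster size.

```python
from math import sqrt
from statistics import NormalDist

def power_cluster(delta, n_clusters, cluster_size, icc, alpha=0.05):
    """Approximate power for a two-group cluster-randomized design:
    inflate the simple-random-sample variance of the standardized
    mean difference by the design effect, then apply a normal
    approximation to the test statistic."""
    nd = NormalDist()
    n_total = n_clusters * cluster_size
    deff = 1.0 + (cluster_size - 1) * icc
    se = sqrt(4.0 * deff / n_total)      # SE of standardized mean difference
    z = nd.inv_cdf(1 - alpha / 2)
    ncp = delta / se
    return 1 - nd.cdf(z - ncp) + nd.cdf(-z - ncp)

# Ignoring the nested structure (icc = 0) badly overstates power
p_srs = power_cluster(0.3, n_clusters=40, cluster_size=20, icc=0.0)
p_nested = power_cluster(0.3, n_clusters=40, cluster_size=20, icc=0.2)
```

This is why one-level power tables built for simple random samples cannot be carried over to nested educational designs without adjustment.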
Peer reviewed
Direct link
DiStefano, Christine; Zhu, Min; Mindrila, Diana – Practical Assessment, Research & Evaluation, 2009
Following an exploratory factor analysis, factor scores may be computed and used in subsequent analyses. Factor scores are composite variables which provide information about an individual's placement on the factor(s). This article discusses popular methods to create factor scores under two different classes: refined and non-refined. Strengths and…
Descriptors: Factor Structure, Factor Analysis, Researchers, Scores
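A non-refined method (sum scores) and a refined method (Thurstone's regression approach) can be contrasted in a short simulation (illustrative only; for brevity the true loadings stand in for loadings that would normally be estimated by the factor analysis):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 400, 5
f = rng.normal(size=(n, 1))                       # latent factor
L = np.array([[0.8, 0.7, 0.6, 0.7, 0.5]])         # factor loadings
X = f @ L + rng.normal(0, 0.5, (n, p))
X = (X - X.mean(0)) / X.std(0)                    # standardize items

# Non-refined: unit-weighted sum of the items loading on the factor
sum_scores = X.sum(axis=1)

# Refined (regression method): weights = R^{-1} * loadings, where R is
# the inter-item correlation matrix (true loadings used as stand-ins)
R = np.corrcoef(X, rowvar=False)
w = np.linalg.solve(R, L.ravel())
reg_scores = X @ w

# Both composites track the latent factor
r_sum = np.corrcoef(sum_scores, f.ravel())[0, 1]
r_reg = np.corrcoef(reg_scores, f.ravel())[0, 1]
```

The refined scores weight items by their loadings and inter-correlations, while sum scores trade some precision for simplicity and transparency, the kind of strengths-and-weaknesses comparison the article takes up.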
Peer reviewed
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2001
Provides and illustrates a method to compute the expected number of misclassifications of examinees using three-parameter item response theory and two state classifications (mastery or nonmastery). The method uses the standard error and the expected examinee ability distribution. (SLD)
Descriptors: Ability, Classification, Computation, Error of Measurement
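The method can be sketched under simplifying assumptions (a constant standard error and a standard-normal ability distribution; illustrative, not the paper's exact computation): for each true ability, the chance of landing on the wrong side of the cut score is a normal tail probability, and the expected misclassification rate averages those probabilities over the ability distribution.

```python
import numpy as np
from statistics import NormalDist

def expected_misclassification(cut, se, n_grid=2001):
    """Expected misclassification rate at cut score `cut`, assuming
    theta_hat ~ N(theta, se) and theta ~ N(0, 1). A constant SE is
    used here for brevity; IRT would supply an SE varying with theta."""
    nd = NormalDist()
    thetas = np.linspace(-4, 4, n_grid)
    weights = np.array([nd.pdf(t) for t in thetas])
    weights /= weights.sum()                     # discretized ability density
    total = 0.0
    for t, w in zip(thetas, weights):
        if t >= cut:   # true master: misclassified if theta_hat < cut
            p_mis = nd.cdf((cut - t) / se)
        else:          # true nonmaster: misclassified if theta_hat >= cut
            p_mis = 1 - nd.cdf((cut - t) / se)
        total += w * p_mis
    return total

# More measurement error means more expected misclassifications
rate_precise = expected_misclassification(cut=0.5, se=0.2)
rate_noisy = expected_misclassification(cut=0.5, se=0.5)
```

Misclassifications concentrate among examinees whose true ability lies close to the cut score, which is where the standard error matters most.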