50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing 1 to 15 of 17 results
Peer reviewed
Direct link
Dickinson, Emily R.; Adelson, Jill L. – Practical Assessment, Research & Evaluation, 2014
This study uses a nationally representative student dataset to explore the limitations of commonly used measures of socioeconomic status (SES). Among the identified limitations are patterns of missing data that conflate the traditional conceptualization of SES with differences in family structure that have emerged in recent years and a lack of…
Descriptors: Socioeconomic Status, Measures (Individuals), Kindergarten, Young Children
Peer reviewed
Direct link
Benton, Tom – Practical Assessment, Research & Evaluation, 2014
This article demonstrates how meta-analytic techniques, that have typically been used to synthesize findings across numerous studies, can also be applied to examine the reasons why relationships between background characteristics and outcomes may vary across different locations in a single multi-site survey. This application is particularly…
Descriptors: Regression (Statistics), Meta Analysis, Academic Achievement, Institutional Autonomy
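The pooling Benton describes can be illustrated with a minimal sketch (hypothetical numbers, not the article's data): treat each site's regression slope as one "study," combine the slopes with inverse-variance weights, and use Cochran's Q to ask whether the slopes vary more across sites than sampling error alone would predict.

```python
# Illustrative sketch (not the author's code): fixed-effect meta-analysis
# of site-specific regression slopes, pooled with inverse-variance weights,
# with Cochran's Q as a heterogeneity check. All numbers are hypothetical.

def pool_slopes(slopes, ses):
    """Inverse-variance weighted pooled slope, its SE, and Cochran's Q."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, slopes)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    q = sum(w * (b - pooled) ** 2 for w, b in zip(weights, slopes))
    return pooled, pooled_se, q

# Hypothetical slopes of a background-characteristic -> outcome regression
# estimated separately at four survey sites
slopes = [0.30, 0.25, 0.42, 0.18]
ses = [0.05, 0.04, 0.06, 0.05]
pooled, pooled_se, q = pool_slopes(slopes, ses)
print(f"pooled slope = {pooled:.3f} (SE {pooled_se:.3f}), Q = {q:.2f}")
```

A large Q relative to its degrees of freedom (sites minus one) would suggest the relationship genuinely differs across locations rather than reflecting sampling noise.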
Peer reviewed
Direct link
Smith, William C. – Practical Assessment, Research & Evaluation, 2014
The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns that plague randomized controlled trials (RCTs) makes RD a valuable and useful approach in education evaluation. RD is the only explicitly recognized quasi-experimental approach identified by the Institute of Education…
Descriptors: Computation, Regression (Statistics), Statistical Bias, Quasiexperimental Design
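The core RD idea can be sketched in a few lines (simulated data, not the study's): units at or above a cutoff on a running variable receive treatment, and the effect is estimated as the jump in the outcome at the cutoff, here via separate lines fit within a bandwidth on each side.

```python
# Illustrative sketch of a sharp regression discontinuity estimate on
# simulated data. The running variable, cutoff, bandwidth, and true effect
# below are all hypothetical choices for demonstration.
import random

random.seed(1)
CUTOFF, TRUE_EFFECT = 50.0, 5.0

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

# Outcome rises smoothly with the running variable and jumps at the cutoff
data = []
for _ in range(2000):
    x = random.uniform(0, 100)
    y = 10 + 0.2 * x + (TRUE_EFFECT if x >= CUTOFF else 0) + random.gauss(0, 2)
    data.append((x, y))

below = [(x, y) for x, y in data if CUTOFF - 15 <= x < CUTOFF]   # bandwidth 15
above = [(x, y) for x, y in data if CUTOFF <= x < CUTOFF + 15]
a0, b0 = fit_line(*zip(*below))
a1, b1 = fit_line(*zip(*above))
effect = (a1 + b1 * CUTOFF) - (a0 + b0 * CUTOFF)  # jump at the cutoff
print(f"estimated effect at cutoff: {effect:.2f}")  # should be close to TRUE_EFFECT
```

In practice the bandwidth choice and the functional form on each side matter a great deal; this sketch only shows the identifying idea.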
Peer reviewed
Direct link
Beaujean, A. Alexander – Practical Assessment, Research & Evaluation, 2014
A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
Descriptors: Regression (Statistics), Sample Size, Sampling, Monte Carlo Methods
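The Monte Carlo logic of sample size planning can be sketched as follows (a minimal illustration under assumed values, not Beaujean's code): for each candidate sample size, repeatedly simulate data from the researcher's assumed model, test the slope each time, and take the rejection rate as the estimated power.

```python
# Illustrative sketch: Monte Carlo power estimation for a simple regression
# slope. The assumed slope, error variance, and candidate sample sizes are
# hypothetical; a z approximation stands in for the exact t test.
import random, math

random.seed(7)

def simulated_power(n, slope=0.3, reps=500, crit_z=1.96):
    hits = 0
    for _ in range(reps):
        xs = [random.gauss(0, 1) for _ in range(n)]
        ys = [slope * x + random.gauss(0, 1) for x in xs]
        xbar, ybar = sum(xs) / n, sum(ys) / n
        sxx = sum((x - xbar) ** 2 for x in xs)
        b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
        a = ybar - b * xbar
        sse = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
        se_b = math.sqrt(sse / (n - 2) / sxx)
        hits += abs(b / se_b) > crit_z      # z approximation to the t test
    return hits / reps

power = {n: simulated_power(n) for n in (50, 100, 200)}
for n in (50, 100, 200):
    print(f"n = {n:3d}: estimated power = {power[n]:.2f}")
```

The smallest n whose estimated power clears the target (commonly .80) is the planned sample size; unlike a formula, the simulation can be adapted to whatever model and data quirks the researcher actually expects.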
Peer reviewed
Direct link
Huang, Francis L. – Practical Assessment, Research & Evaluation, 2014
Clustered data (e.g., students within schools) are often analyzed in educational research where data are naturally nested. As a consequence, multilevel modeling (MLM) has commonly been used to study the contextual or group-level (e.g., school) effects on individual outcomes. The current study investigates the use of an alternative procedure to…
Descriptors: Hierarchical Linear Modeling, Regression (Statistics), Educational Research, Sampling
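The abstract's truncation leaves the specific alternative procedure unnamed, but the clustering problem it addresses can be sketched: with students nested in schools, conventional OLS standard errors ignore within-school correlation, while a cluster-robust (sandwich) standard error sums score contributions within schools. The simulation below is hypothetical.

```python
# Illustrative sketch (not the article's procedure, which the abstract does
# not spell out): naive vs cluster-robust standard errors for an OLS slope
# when data are clustered. Simulated students within schools.
import random, math

random.seed(3)

# 30 schools x 20 students; both the predictor and the residual share a
# school-level component, which is what inflates the true uncertainty
data = []  # (school, x, y)
for school in range(30):
    xc = random.gauss(0, 1)          # school component of the predictor
    u = random.gauss(0, 1)           # school effect in the outcome
    for _ in range(20):
        x = xc + random.gauss(0, 1)
        y = 0.5 * x + u + random.gauss(0, 1)
        data.append((school, x, y))

n = len(data)
xbar = sum(x for _, x, _ in data) / n
ybar = sum(y for _, _, y in data) / n
sxx = sum((x - xbar) ** 2 for _, x, _ in data)
b = sum((x - xbar) * (y - ybar) for _, x, y in data) / sxx
a = ybar - b * xbar

se_naive = math.sqrt(sum((y - a - b * x) ** 2 for _, x, y in data)
                     / (n - 2) / sxx)

# Cluster-robust (CR0) SE: square the summed score within each school
by_school = {}
for s, x, y in data:
    by_school[s] = by_school.get(s, 0.0) + (x - xbar) * (y - a - b * x)
se_cluster = math.sqrt(sum(v ** 2 for v in by_school.values())) / sxx
print(f"slope {b:.3f}  naive SE {se_naive:.4f}  cluster-robust SE {se_cluster:.4f}")
```

The cluster-robust SE comes out noticeably larger here, which is exactly why ignoring nesting overstates precision and why MLM or a design-corrected alternative is needed.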
Peer reviewed
Direct link
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available to estimate both propensity scores and construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity scoring estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
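One of the estimation-and-matching pipelines the study compares, logistic scoring followed by nearest-neighbor matching, can be sketched on hypothetical data (this is an illustration, not the study's dataset or code):

```python
# Illustrative sketch: estimate propensity scores with a one-covariate
# logistic regression fit by gradient ascent, then build a comparison group
# by nearest-neighbor matching on the score with a caliper. Data simulated.
import random, math

random.seed(5)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Higher prior achievement makes program participation more likely
units = []
for _ in range(400):
    x = random.gauss(0, 1)                       # covariate (prior score)
    t = random.random() < sigmoid(x - 0.5)       # treatment assignment
    y = 50 + 5 * x + (3 if t else 0) + random.gauss(0, 2)
    units.append((x, t, y))

# Fit P(T=1 | x) = sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood
b0 = b1 = 0.0
for _ in range(3000):
    g0 = g1 = 0.0
    for x, t, _ in units:
        err = t - sigmoid(b0 + b1 * x)
        g0 += err
        g1 += err * x
    b0 += 0.5 * g0 / len(units)
    b1 += 0.5 * g1 / len(units)

# Nearest-neighbor matching on the estimated score, with a caliper
treated = [(sigmoid(b0 + b1 * x), y) for x, t, y in units if t]
controls = [(sigmoid(b0 + b1 * x), y) for x, t, y in units if not t]
diffs = []
for ps, y in treated:
    pc, yc = min(controls, key=lambda c: abs(c[0] - ps))
    if abs(pc - ps) < 0.05:                      # drop poor matches
        diffs.append(y - yc)
effect = sum(diffs) / len(diffs)
print(f"matched estimate of program effect: {effect:.2f}")
```

The naive treated-minus-control mean difference would be inflated here because participants start out higher-achieving; matching on the estimated score removes much of that selection bias.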
Peer reviewed
Direct link
Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2012
Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These…
Descriptors: Regression (Statistics), Prediction, Mathematics, Probability
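The key mechanic Osborne describes, and the reason OLS fails for binary outcomes, can be shown in a few lines (hypothetical coefficients): the linear predictor is passed through the logistic function, so predictions are proper probabilities in (0, 1), and a slope of b multiplies the odds by exp(b) per unit of x.

```python
# Illustrative sketch: the logistic function keeps predictions inside
# (0, 1), unlike a straight line. Intercept and slope below are hypothetical
# values on the log-odds scale.
import math

def logistic(z):
    return 1 / (1 + math.exp(-z))

b0, b1 = -1.0, 0.8        # hypothetical log-odds intercept and slope
for x in (-2, 0, 2, 4):
    z = b0 + b1 * x
    print(f"x = {x:+d}  linear predictor = {z:+.1f}  P(y=1) = {logistic(z):.3f}")

print(f"odds ratio per unit of x: {math.exp(b1):.2f}")
```

An OLS line fit to 0/1 data would eventually predict values below 0 or above 1; the logistic curve flattens toward both bounds instead.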
Peer reviewed
Direct link
Dadey, Nathan; Briggs, Derek C. – Practical Assessment, Research & Evaluation, 2012
A vertical scale, in principle, provides a common metric across tests with differing difficulties (e.g., spanning multiple grades) so that statements of "absolute" growth can be made. This paper compares 16 states' 2007-2008 effect size growth trends on vertically scaled reading and math assessments across grades 3 to 8. Two patterns common in…
Descriptors: Meta Analysis, Scaling, Effect Size, Reading Tests
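The growth metric compared across states can be sketched with made-up numbers: grade-to-grade growth on a vertical scale expressed as an effect size, i.e., the mean scale-score gain divided by the pooled standard deviation of the two grades.

```python
# Illustrative sketch with hypothetical vertical-scale summary statistics
# (mean, SD, n) by grade; not the paper's state data.
import math

grades = {3: (180.0, 30.0, 1000), 4: (200.0, 32.0, 1000), 5: (214.0, 33.0, 1000)}

def growth_effect_size(g1, g2):
    m1, s1, n1 = grades[g1]
    m2, s2, n2 = grades[g2]
    pooled_sd = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                          / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

for g in (3, 4):
    print(f"grade {g}->{g + 1}: d = {growth_effect_size(g, g + 1):.2f}")
```

In this hypothetical series the effect size shrinks in the higher grade pair, the kind of declining-growth pattern such cross-grade comparisons are designed to surface.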
Peer reviewed
Direct link
Sanders, Shane; Walia, Bhavneet; Potter, Joel; Linna, Kenneth W. – Practical Assessment, Research & Evaluation, 2011
Online instructional ratings are taken by many with a grain of salt. This study analyzes the ability of said ratings to estimate the official (university-administered) instructional ratings of the same respective university instructors. Given self-selection among raters, we further test whether more online ratings of instructors lead to better…
Descriptors: Prediction, Student Evaluation of Teacher Performance, Teacher Evaluation, Web Sites
Peer reviewed
Direct link
Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2011
Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found "not" to have modeled the analyses…
Descriptors: Best Practices, Sampling, Sample Size, Data Analysis
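The modeling failure Osborne flags has a simple arithmetic core, sketched here with hypothetical numbers: when one group is oversampled, the unweighted mean tilts toward it, while weighting each case by the inverse of its selection probability recovers the population quantity.

```python
# Illustrative sketch: unweighted vs design-weighted means when one group
# is oversampled. Values and selection probabilities are hypothetical.

# (value, selection probability): group A oversampled at p=0.5, group B at p=0.1
sample = [(60.0, 0.5)] * 50 + [(80.0, 0.1)] * 10

unweighted = sum(v for v, _ in sample) / len(sample)
weights = [1 / p for _, p in sample]              # design weights
weighted = sum(w * v for (v, _), w in zip(sample, weights)) / sum(weights)
print(f"unweighted mean: {unweighted:.1f}, weighted mean: {weighted:.1f}")
```

Here both groups have 100 members in the population (50/0.5 and 10/0.1), so the population mean is 70; the unweighted mean of about 63 understates it because the low-scoring group dominates the sample.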
Peer reviewed
Direct link
Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2010
Many of us in the social sciences deal with data that do not conform to assumptions of normality and/or homoscedasticity/homogeneity of variance. Some research has shown that parametric tests (e.g., multiple regression, ANOVA) can be robust to modest violations of these assumptions. Yet the reality is that almost all analyses (even nonparametric…
Descriptors: Social Sciences, Regression (Statistics), Nonparametric Statistics, Data
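One common response to non-normal data, a normalizing transformation, can be sketched on simulated data (an illustration, not the article's examples): a right-skewed variable's sample skewness before and after a log transform.

```python
# Illustrative sketch: sample skewness of a right-skewed (lognormal)
# variable before and after a log transform. Data are simulated.
import random, math

random.seed(11)

def skewness(xs):
    """Sample skewness: mean cubed standardized deviation."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

raw = [math.exp(random.gauss(0, 1)) for _ in range(5000)]   # right-skewed
logged = [math.log(x) for x in raw]
print(f"skewness raw: {skewness(raw):.2f}, after log: {skewness(logged):.2f}")
```

The transform is not a cure-all (it changes the scale being interpreted), which is part of why the article's broader point, that assumption checking is routinely skipped, matters.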
Peer reviewed
Direct link
Bao, Han; Dayton, C. Mitchell; Hendrickson, Amy B. – Practical Assessment, Research & Evaluation, 2009
When testlet effects and item idiosyncratic features are both considered to be the reasons of DIF in educational tests using testlets (Wainer & Kiely, 1987) or item bundles (Rosenbaum, 1988), it is interesting to investigate the phenomena of DIF amplification and cancellation due to the interactive effects of these two factors. This research…
Descriptors: Test Bias, Reading Tests, Item Response Theory, Test Items
Peer reviewed
Direct link
DiStefano, Christine; Zhu, Min; Mindrila, Diana – Practical Assessment, Research & Evaluation, 2009
Following an exploratory factor analysis, factor scores may be computed and used in subsequent analyses. Factor scores are composite variables which provide information about an individual's placement on the factor(s). This article discusses popular methods to create factor scores under two different classes: refined and non-refined. Strengths and…
Descriptors: Factor Structure, Factor Analysis, Researchers, Scores
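Two of the non-refined methods the article covers can be sketched directly (loadings and responses below are hypothetical): a simple sum of item responses, and a sum weighted by each item's factor loading so that better indicators count more.

```python
# Illustrative sketch of two non-refined factor score methods: an unweighted
# sum score and a loading-weighted sum. Loadings and item responses are
# hypothetical.

loadings = [0.8, 0.7, 0.6, 0.4]          # hypothetical loadings on one factor
respondents = [
    [4, 5, 3, 4],
    [2, 1, 2, 3],
    [5, 5, 4, 2],
]

for r in respondents:
    sum_score = sum(r)
    weighted = sum(l * x for l, x in zip(loadings, r))
    print(f"responses {r}: sum score = {sum_score}, loading-weighted = {weighted:.2f}")
```

Refined methods (e.g., regression-based scores) instead use the full factor solution and tend to have better measurement properties, at the cost of being sample-specific, the trade-off the article weighs.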
Peer reviewed
Osborne, Jason W.; Waters, Elaine – Practical Assessment, Research & Evaluation, 2002
Discusses assumptions of multiple regression that are not robust to violation: linearity, reliability of measurement, homoscedasticity, and normality. Stresses the importance of checking assumptions. (SLD)
Descriptors: Error of Measurement, Regression (Statistics), Reliability
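The checking the article urges can be sketched for one of its four assumptions, homoscedasticity, on simulated data (a crude diagnostic, not the authors' procedure): fit a line, then compare residual spread in the lower and upper halves of the fitted values.

```python
# Illustrative sketch: a crude homoscedasticity check. Fit OLS, sort cases
# by fitted value, and compare residual variance in the two halves; a large
# ratio flags heteroscedasticity. Data simulated with noise growing in x.
import random

random.seed(2)
pts = []
for _ in range(1000):
    x = random.uniform(0, 10)
    pts.append((x, 2 + 0.5 * x + random.gauss(0, 0.2 + 0.3 * x)))

n = len(pts)
xbar = sum(x for x, _ in pts) / n
ybar = sum(y for _, y in pts) / n
b = sum((x - xbar) * (y - ybar) for x, y in pts) / \
    sum((x - xbar) ** 2 for x, _ in pts)
a = ybar - b * xbar

resid = sorted((a + b * x, y - (a + b * x)) for x, y in pts)  # sort by fitted
half = n // 2
var_lo = sum(e ** 2 for _, e in resid[:half]) / half
var_hi = sum(e ** 2 for _, e in resid[half:]) / (n - half)
print(f"residual variance, lower vs upper half of fits: {var_lo:.2f} vs {var_hi:.2f}")
```

A plot of residuals against fitted values makes the same fan shape visible at a glance; either way, the point is that the check costs almost nothing and is routinely skipped.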
Peer reviewed
Download full text (PDF on ERIC)
Rudner, Lawrence M., Ed.; Schafer, William D., Ed. – Practical Assessment, Research & Evaluation, 2001
This document consists of papers published in the electronic journal "Practical Assessment, Research & Evaluation" during 2000-2001: (1) "Advantages of Hierarchical Linear Modeling" (Jason W. Osborne); (2) "Prediction in Multiple Regression" (Jason W. Osborne); (3) "Scoring Rubrics: What, When, and How?" (Barbara M. Moskal); (4) "Organizational…
Descriptors: Educational Assessment, Educational Research, Elementary Secondary Education, Evaluation Methods
Pages: 1  |  2