50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC here.

Showing 16 to 30 of 161 results
Peer reviewed
Beaujean, A. Alexander – Practical Assessment, Research & Evaluation, 2014
A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
Descriptors: Regression (Statistics), Sample Size, Sampling, Monte Carlo Methods
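A simulation-based approach of the kind the abstract describes can be sketched as follows: generate data from an assumed model, refit the regression many times, and estimate power as the proportion of significant slope tests. The effect size, error scale, and significance level here are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy import stats

def power_simple_regression(n, beta, sigma=1.0, alpha=0.05, reps=2000, seed=0):
    """Monte Carlo power estimate for the slope test in simple linear regression."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)                          # assumed predictor distribution
        y = beta * x + rng.normal(scale=sigma, size=n)  # assumed generating model
        if stats.linregress(x, y).pvalue < alpha:
            hits += 1
    return hits / reps
```

Running the function over a grid of candidate sample sizes and picking the smallest n that reaches the desired power mirrors the simulation-based sample-size logic.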
Peer reviewed
Huang, Francis L. – Practical Assessment, Research & Evaluation, 2014
Clustered data (e.g., students within schools) are often analyzed in educational research where data are naturally nested. As a consequence, multilevel modeling (MLM) has commonly been used to study the contextual or group-level (e.g., school) effects on individual outcomes. The current study investigates the use of an alternative procedure to…
Descriptors: Hierarchical Linear Modeling, Regression (Statistics), Educational Research, Sampling
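One commonly discussed alternative to MLM for clustered data is ordinary least squares with cluster-robust standard errors. A minimal sketch of a basic CR0-style estimator follows; this is an illustration of the general idea, not necessarily the exact procedure the article investigates.

```python
import numpy as np

def ols_cluster_robust(x, y, cluster):
    """OLS coefficients with a basic (CR0) cluster-robust covariance estimate."""
    cluster = np.asarray(cluster)
    X = np.column_stack([np.ones(len(y)), x])   # add intercept column
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "meat" of the sandwich: sum over clusters of X_g' u_g u_g' X_g
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        sg = X[cluster == g].T @ resid[cluster == g]
        meat += np.outer(sg, sg)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))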
Peer reviewed
Stoffel, Heather; Raymond, Mark R.; Bucak, S. Deniz; Haist, Steven A. – Practical Assessment, Research & Evaluation, 2014
Previous research on the impact of text and formatting changes on test-item performance has produced mixed results. This matter is important because it is generally acknowledged that "any" change to an item requires that it be recalibrated. The present study investigated the effects of seven classes of stylistic changes on item…
Descriptors: Test Construction, Test Items, Standardized Tests, Physicians
Peer reviewed
Wyse, Adam E.; Seo, Dong Gi – Practical Assessment, Research & Evaluation, 2014
This article provides a brief overview and comparison of three conditional growth percentile methods; student growth percentiles, percentile rank residuals, and a nonparametric matching method. These approaches seek to describe student growth in terms of the relative percentile ranking of a student in relationship to students that had the same…
Descriptors: Academic Achievement, Achievement Gains, Evaluation Methods, Statistical Analysis
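In rough terms, all three methods locate a student's current score within the distribution of current scores for peers who had the same (or similar) prior score. A toy illustration of that percentile-rank idea, ignoring the conditional-quantile machinery the actual methods use:

```python
import numpy as np

def growth_percentile(current, peer_current):
    """Percentile rank of a current score among peers with comparable prior scores."""
    peer = np.asarray(peer_current, dtype=float)
    # mid-rank convention: count strictly lower scores plus half of the ties
    below = np.sum(peer < current)
    ties = np.sum(peer == current)
    return 100.0 * (below + 0.5 * ties) / len(peer)
```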
Peer reviewed
Kennelly, Brendan; Flannery, Darragh; Considine, John; Doherty, Edel; Hynes, Stephen – Practical Assessment, Research & Evaluation, 2014
This paper outlines how a discrete choice experiment (DCE) can be used to learn more about how students are willing to trade off various features of assignments such as the nature and timing of feedback and the method used to submit assignments. A DCE identifies plausible levels of the key attributes of a good or service and then presents the…
Descriptors: Foreign Countries, Preferences, Assignments, Feedback (Response)
Peer reviewed
Warne, Russell T. – Practical Assessment, Research & Evaluation, 2014
Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…
Descriptors: Multivariate Analysis, Behavioral Science Research, Discriminant Analysis, Psychological Studies
Peer reviewed
Randolph, Justus J.; Falbe, Kristina; Manuel, Austin Kureethara; Balloun, Joseph L. – Practical Assessment, Research & Evaluation, 2014
Propensity score matching is a statistical technique in which a treatment case is matched with one or more control cases based on each case's propensity score. This matching can help strengthen causal arguments in quasi-experimental and observational studies by reducing selection bias. In this article we concentrate on how to conduct…
Descriptors: Statistical Analysis, Probability, Experimental Groups, Control Groups
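The core matching step can be sketched as greedy 1:1 nearest-neighbor matching without replacement on precomputed propensity scores; this is one of several matching options, and estimating the scores themselves (e.g., by logistic regression) is a separate prior step not shown here.

```python
import numpy as np

def nearest_neighbor_match(ps_treated, ps_control):
    """Greedy 1:1 matching: pair each treated case with the closest
    still-unmatched control case by propensity score."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        pairs.append((i, j))   # (treated index, matched control index)
        available.remove(j)    # without replacement
    return pairs
```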
Peer reviewed
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed
Adelson, Jill L. – Practical Assessment, Research & Evaluation, 2013
Often it is infeasible or unethical to use random assignment in educational settings to study important constructs and questions. Hence, educational research often uses observational data, such as large-scale secondary data sets and state and school district data, and quasi-experimental designs. One method of reducing selection bias in estimations…
Descriptors: Educational Research, Data, Statistical Bias, Probability
Peer reviewed
Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason – Practical Assessment, Research & Evaluation, 2013
In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…
Descriptors: Multiple Regression Analysis, Misconceptions, Reader Response, Predictor Variables
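A central clarification in this line of discussion is that the normality assumption in OLS regression concerns the errors, not the predictors or the outcome. A small illustration with made-up data: the predictor is deliberately non-normal, yet the model is fine because diagnostics are applied to the residuals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=200)        # non-normal (uniform) predictor is fine
y = 2.0 + 0.5 * x + rng.normal(size=200)    # the *errors* are normal, not y itself

fit = stats.linregress(x, y)
residuals = y - (fit.intercept + fit.slope * x)
# Any normality diagnostic belongs on these residuals, not on x or y:
w_stat, w_p = stats.shapiro(residuals)
```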
Peer reviewed
de Winter, J. C. F. – Practical Assessment, Research & Evaluation, 2013
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
Descriptors: Sample Size, Statistical Analysis, Hypothesis Testing, Simulation
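A simulation of the kind the abstract describes can be sketched by checking the Type I error rate of the one-sample t-test at a very small N (here N = 3). With normally distributed data the t-test is exact, so the rejection rate should sit near the nominal alpha even at this sample size; the parameters below are illustrative.

```python
import numpy as np
from scipy import stats

def type1_error_ttest(n, reps=5000, alpha=0.05, seed=1):
    """Empirical Type I error rate of the one-sample t-test under H0: mu = 0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        sample = rng.normal(size=n)               # data generated under the null
        _, p = stats.ttest_1samp(sample, 0.0)
        if p < alpha:
            rejections += 1
    return rejections / reps
```

Re-running the simulation with skewed or heavy-tailed generating distributions is how one probes whether the test remains feasible when the normality assumption fails at extremely small N.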
Peer reviewed
Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2013
Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed…
Descriptors: Multiple Regression Analysis, Least Squares Statistics, Computation, Statistical Analysis
Peer reviewed
Hathcoat, John D. – Practical Assessment, Research & Evaluation, 2013
The semantics, or meaning, of validity is a fluid concept in educational and psychological testing. Contemporary controversies surrounding this concept appear to stem from the proper location of validity. Under one view, validity is a property of score-based inferences and entailed uses of test scores. This view is challenged by the…
Descriptors: Test Validity, Educational Testing, Psychological Testing, Scores
Peer reviewed
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available to estimate both propensity scores and construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity scoring estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
Peer reviewed
Beauducel, Andre; Leue, Anja – Practical Assessment, Research & Evaluation, 2013
In several studies, unit-weighted sum scales based on the unweighted sum of items are derived from the pattern of salient loadings in confirmatory factor analysis. The problem with this procedure is that the unit-weighted sum scales imply a model other than the initially tested confirmatory factor model. In consequence, it remains generally unknown…
Descriptors: Factor Analysis, Structural Equation Models, Goodness of Fit, Personality Measures
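The scoring procedure the abstract describes reduces, for already-scored items, to summing the items flagged as salient with equal (unit) weights. A minimal sketch, where which items count as salient is an input assumption rather than anything computed here:

```python
import numpy as np

def unit_weighted_scale(item_scores, salient_items):
    """Unit-weighted sum scale: unweighted sum of the items whose factor
    loadings were judged salient (column indices given by salient_items)."""
    scores = np.asarray(item_scores, dtype=float)   # rows: persons, cols: items
    return scores[:, salient_items].sum(axis=1)
```

The article's point is that such equal weights generally differ from the estimated loadings, so the sum scale corresponds to a different model than the confirmatory factor model that was actually tested.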