50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 974 results
Peer reviewed
Knight, David B. – Educational Evaluation and Policy Analysis, 2014
Colleges and universities are being pressed to seek innovative ways to measure student learning outcomes and identify the conditions that lead to their development. Understanding how students group according to a multidimensional set of learning outcomes provides information on the extent to which institutions are meeting goals. This study…
Descriptors: Classification, Multivariate Analysis, Engineering Education, Higher Education
Peer reviewed
Preston, Kathleen Suzanne Johnson; Reise, Steven Paul – Educational and Psychological Measurement, 2014
The nominal response model (NRM), a much understudied polytomous item response theory (IRT) model, provides researchers the unique opportunity to evaluate within-item category distinctions. Polytomous IRT models, such as the NRM, are frequently applied to psychological assessments representing constructs that are unlikely to be normally…
Descriptors: Item Response Theory, Computation, Models, Accuracy
Peer reviewed
Nargundkar, Satish; Shrikhande, Milind – Decision Sciences Journal of Innovative Education, 2014
Student Evaluations of Instruction (SEIs) from about 6,000 sections over 4 years representing over 100,000 students at the college of business at a large public university are analyzed, to study the impact of noninstructional factors on student ratings. Administrative factors like semester, time of day, location, and instructor attributes like…
Descriptors: Student Evaluation of Teacher Performance, Business Education Teachers, Teacher Characteristics, Academic Rank (Professional)
Peer reviewed
Reardon, Sean F.; Unlu, Fatih; Zhu, Pei; Bloom, Howard S. – Journal of Educational and Behavioral Statistics, 2014
We explore the use of instrumental variables (IV) analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the IV literature as the exclusion restriction.…
Descriptors: Statistical Bias, Statistical Analysis, Least Squares Statistics, Sampling
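The exclusion-restriction logic this abstract describes can be illustrated with a minimal simulation (hypothetical data, not from the study): a randomized assignment Z serves as an instrument for a mediator M, and the Wald-style IV estimator cov(Z, Y) / cov(Z, M) recovers the mediator's effect even when an unobserved confounder biases ordinary regression.

```python
import random

random.seed(0)

# Hypothetical multisite-trial data: Z = randomized assignment (instrument),
# U = unobserved confounder, M = mediator, Y = outcome. The exclusion
# restriction holds by construction: Z affects Y only through M.
n = 100_000
Z = [random.randint(0, 1) for _ in range(n)]
U = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * z + 0.8 * u + random.gauss(0, 1) for z, u in zip(Z, U)]
true_effect = 2.0
Y = [true_effect * m + 1.5 * u + random.gauss(0, 1) for m, u in zip(M, U)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Naive regression of Y on M is biased upward by the confounder U...
ols = cov(M, Y) / cov(M, M)
# ...while the IV (Wald) estimator is consistent under the exclusion restriction.
iv = cov(Z, Y) / cov(Z, M)
print(f"OLS: {ols:.2f}  IV: {iv:.2f}  true effect: {true_effect}")
```

All coefficients and sample sizes here are illustrative assumptions; the article itself studies more elaborate multisite estimators and the bias that arises when the exclusion restriction fails.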
Peer reviewed
Lockwood, J. R.; McCaffrey, Daniel F. – Journal of Educational and Behavioral Statistics, 2014
A common strategy for estimating treatment effects in observational studies using individual student-level data is analysis of covariance (ANCOVA) or hierarchical variants of it, in which outcomes (often standardized test scores) are regressed on pretreatment test scores, other student characteristics, and treatment group indicators. Measurement…
Descriptors: Error of Measurement, Scores, Statistical Analysis, Computation
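The measurement-error problem this abstract raises can be sketched in a few lines (simulated data, with a hypothetical reliability of 0.5): regressing an outcome on an error-laden pretest attenuates the pretest slope toward reliability × true slope, which is what distorts covariate adjustment in ANCOVA-style analyses.

```python
import random

random.seed(1)

n = 50_000
true_slope = 1.0
reliability = 0.5  # assumed share of observed pretest variance that is true score

true_pre = [random.gauss(0, 1) for _ in range(n)]
# Observed pretest = true score + measurement error; error variance equals
# true-score variance here, so reliability = 0.5 by construction.
obs_pre = [t + random.gauss(0, 1) for t in true_pre]
post = [true_slope * t + random.gauss(0, 0.5) for t in true_pre]

def slope(x, y):
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# The slope on the error-laden pretest shrinks toward reliability * true_slope.
print(slope(true_pre, post), slope(obs_pre, post))
```

This is only the simplest one-covariate case; the article addresses the richer setting of treatment-effect estimation with multiple covariates and hierarchical models.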
Peer reviewed
Henson, Robin K.; Natesan, Prathiba; Axelson, Erika D. – Journal of Experimental Education, 2014
The authors examined the distributional properties of 3 improvement-over-chance, I, effect sizes each derived from linear and quadratic predictive discriminant analysis and from logistic regression analysis for the 2-group univariate classification. These 3 classification methods (3 levels) were studied under varying levels of data conditions,…
Descriptors: Effect Size, Probability, Comparative Analysis, Classification
Peer reviewed
Cook, Bryan G. – Remedial and Special Education, 2014
Valid, scientific research is critical for ascertaining the effects of instructional techniques on learners with disabilities and for guiding effective special education practice and policy. Researchers in fields such as psychology and medicine have identified serious and widespread shortcomings in their research literatures related to replication…
Descriptors: Special Education, Educational Research, Bias, Replication (Evaluation)
Peer reviewed
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
Scammacca, Nancy; Roberts, Greg; Stuebing, Karla K. – Review of Educational Research, 2014
Previous research has shown that treating dependent effect sizes as independent inflates the variance of the mean effect size and introduces bias by giving studies with more effect sizes more weight in the meta-analysis. This article summarizes the different approaches to handling dependence that have been advocated by methodologists, some of…
Descriptors: Meta Analysis, Research Design, Effect Size, Statistical Bias
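The weighting problem this abstract summarizes is easy to see with toy numbers (hypothetical effect sizes, not from the article): treating dependent effect sizes as independent hands a multi-estimate study a disproportionate share of the weight, whereas averaging within studies first, one simple remedy in the dependence literature, weights each study equally.

```python
# Hypothetical effect sizes: study A contributes three dependent estimates,
# studies B and C contribute one each.
studies = {"A": [0.9, 0.8, 1.0], "B": [0.2], "C": [0.3]}

# Treating every effect size as independent gives study A 3/5 of the weight...
flat = [es for ess in studies.values() for es in ess]
naive_mean = sum(flat) / len(flat)

# ...while averaging within studies first weights each study equally.
per_study = [sum(ess) / len(ess) for ess in studies.values()]
pooled_mean = sum(per_study) / len(per_study)

print(naive_mean, pooled_mean)  # naive 0.64 vs per-study 0.47 (rounded)
```

The article surveys more principled approaches as well (e.g., multilevel and robust-variance methods); this sketch only shows why ignoring dependence shifts the mean.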
Peer reviewed
Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S. – Instructional Science: An International Journal of the Learning Sciences, 2014
In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were…
Descriptors: Metacognition, Scaffolding (Teaching Technique), Accuracy, Evaluative Thinking
Peer reviewed
Lang, Kyle M.; Little, Todd D. – International Journal of Behavioral Development, 2014
We present a new paradigm that allows simplified testing of multiparameter hypotheses in the presence of incomplete data. The proposed technique is a straightforward procedure that combines the benefits of two powerful data analytic tools: multiple imputation and nested-model χ² difference testing. A Monte Carlo simulation study was conducted to…
Descriptors: Hypothesis Testing, Data Analysis, Error of Measurement, Computation
Peer reviewed
Jorgensen, Terrence D.; Rhemtulla, Mijke; Schoemann, Alexander; McPherson, Brent; Wu, Wei; Little, Todd D. – International Journal of Behavioral Development, 2014
Planned missing designs are becoming increasingly popular, but because there is no consensus on how to implement them in longitudinal research, we simulated longitudinal data to distinguish between strategies of assigning items to forms and of assigning forms to participants across measurement occasions. Using relative efficiency as the criterion,…
Descriptors: Longitudinal Studies, Research Design, Data Analysis, Monte Carlo Methods
Peer reviewed
Jia, Fan; Moore, E. Whitney G.; Kinai, Richard; Crowe, Kelly S.; Schoemann, Alexander M.; Little, Todd D. – International Journal of Behavioral Development, 2014
Utilizing planned missing data (PMD) designs (e.g., 3-form surveys) enables researchers to ask participants fewer questions during the data collection process. An important question, however, is just how few participants are needed to effectively employ planned missing data designs in research studies. This article explores this question by using…
Descriptors: Data Analysis, Statistical Inference, Error of Measurement, Computation
Peer reviewed
Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D. – International Journal of Behavioral Development, 2014
We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…
Descriptors: Longitudinal Studies, Data Analysis, Error of Measurement, Research Problems
Peer reviewed
Asendorpf, Jens B.; van de Schoot, Rens; Denissen, Jaap J. A.; Hutteman, Roos – International Journal of Behavioral Development, 2014
Most longitudinal studies are plagued by drop-out related to variables at earlier assessments (systematic attrition). Although systematic attrition is often analysed in longitudinal studies, surprisingly few researchers attempt to reduce biases due to systematic attrition, even though this is possible and nowadays technically easy. This is…
Descriptors: Longitudinal Studies, Attrition (Research Studies), Statistical Bias, Statistical Analysis