50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.


Showing 1 to 15 of 53 results
Peer reviewed
Direct link
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist on the item parameters before and after item review. The purpose of this study was to examine the answer changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests
Peer reviewed
Direct link
Theyson, Katherine C. – Practical Assessment, Research & Evaluation, 2015
Existing literature indicates that physical attractiveness positively affects variables such as income, perceived employee quality and performance evaluations. Similarly, in the academic arena, studies indicate instructors who are better looking receive better teaching evaluations from their students. Previous analysis of the website…
Descriptors: Teacher Effectiveness, Gender Differences, Teacher Characteristics, Student Evaluation of Teacher Performance
Peer reviewed
Direct link
Dickinson, Emily R.; Adelson, Jill L. – Practical Assessment, Research & Evaluation, 2014
This study uses a nationally representative student dataset to explore the limitations of commonly used measures of socioeconomic status (SES). Among the identified limitations are patterns of missing data that conflate the traditional conceptualization of SES with differences in family structure that have emerged in recent years and a lack of…
Descriptors: Socioeconomic Status, Measures (Individuals), Kindergarten, Young Children
Peer reviewed
Direct link
Han, Kyung T.; Guo, Fanmin – Practical Assessment, Research & Evaluation, 2014
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Data, Computer Assisted Testing
Peer reviewed
Direct link
Zumbach, Joerg; Funke, Joachim – Practical Assessment, Research & Evaluation, 2014
In two subsequent experiments, the influence of mood on academic course evaluation is examined. By means of facial feedback, either a positive or a negative mood was induced while students were completing a course evaluation questionnaire during lectures. Results from both studies reveal that a positive mood leads to better ratings of different…
Descriptors: Course Evaluation, Psychological Patterns, Student Attitudes, Feedback (Response)
Peer reviewed
Direct link
Rubright, Jonathan D.; Nandakumar, Ratna; Glutting, Joseph J. – Practical Assessment, Research & Evaluation, 2014
When exploring missing data techniques in a realistic scenario, the current literature is limited: most studies only consider consequences with data missing on a single variable. This simulation study compares the relative bias of two commonly used missing data techniques when data are missing on more than one variable. Factors varied include type…
Descriptors: Simulation, Data, Comparative Analysis, Predictor Variables
Peer reviewed
Direct link
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2014
The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
Descriptors: Sample Size, Equivalency Tests, Simulation, Error of Measurement
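The two one-sided tests (TOST) procedure is one standard way to implement the equivalence-testing idea this abstract describes. The sketch below is an illustrative assumption about the general approach, not the authors' exact method; the data and the equivalence margin `delta` are made up.

```python
import math
import statistics
from scipy import stats  # used only for the t distribution's CDF

def tost_equivalence(x, y, delta, alpha=0.05):
    """Two one-sided t-tests (TOST): conclude the group means are
    practically equivalent if their difference lies inside
    (-delta, +delta) at significance level alpha."""
    nx, ny = len(x), len(y)
    diff = statistics.mean(x) - statistics.mean(y)
    # pooled variance, as in the equal-variance two-sample t-test
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    se = math.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = 1 - stats.t.cdf((diff + delta) / se, df)  # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)      # H0: diff >= +delta
    p = max(p_lower, p_upper)  # TOST p-value
    return p, p < alpha

# Made-up scores for two groups; delta = 0.5 is an illustrative margin
# for a "practically unimportant" mean difference.
a = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0]
b = [10.0, 10.2, 9.9, 10.1, 9.8, 10.3, 10.0, 9.9, 10.1, 9.9]
p, equivalent = tost_equivalence(a, b, delta=0.5)
print(f"TOST p = {p:.4f}, equivalent: {equivalent}")
```

Note that both one-sided tests must reject for equivalence to be concluded, which is why the larger of the two p-values is reported.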
Peer reviewed
Direct link
Huang, Francis L. – Practical Assessment, Research & Evaluation, 2014
Clustered data (e.g., students within schools) are often analyzed in educational research where data are naturally nested. As a consequence, multilevel modeling (MLM) has commonly been used to study the contextual or group-level (e.g., school) effects on individual outcomes. The current study investigates the use of an alternative procedure to…
Descriptors: Hierarchical Linear Modeling, Regression (Statistics), Educational Research, Sampling
Peer reviewed
Direct link
Stoffel, Heather; Raymond, Mark R.; Bucak, S. Deniz; Haist, Steven A. – Practical Assessment, Research & Evaluation, 2014
Previous research on the impact of text and formatting changes on test-item performance has produced mixed results. This matter is important because it is generally acknowledged that "any" change to an item requires that it be recalibrated. The present study investigated the effects of seven classes of stylistic changes on item…
Descriptors: Test Construction, Test Items, Standardized Tests, Physicians
Peer reviewed
Direct link
Wyse, Adam E.; Seo, Dong Gi – Practical Assessment, Research & Evaluation, 2014
This article provides a brief overview and comparison of three conditional growth percentile methods: student growth percentiles, percentile rank residuals, and a nonparametric matching method. These approaches seek to describe student growth in terms of the relative percentile ranking of a student in relationship to students that had the same…
Descriptors: Academic Achievement, Achievement Gains, Evaluation Methods, Statistical Analysis
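A toy illustration of the shared idea behind these methods: rank a student's current score only against peers with the same prior score. The data, helper name, and exact-match peer grouping are assumptions for illustration; the three methods compared in the article differ precisely in how they model that conditioning.

```python
from bisect import bisect_left

def conditional_growth_percentile(students, prior, current):
    """Percentile rank of `current` among students who had the
    same prior-year score (a naive exact-match peer group)."""
    peers = sorted(s["current"] for s in students if s["prior"] == prior)
    below = bisect_left(peers, current)  # peers scoring strictly lower
    return 100 * below / len(peers)

# Made-up records: prior-year and current-year scale scores.
students = [
    {"prior": 300, "current": 310},
    {"prior": 300, "current": 325},
    {"prior": 300, "current": 340},
    {"prior": 300, "current": 355},
    {"prior": 320, "current": 330},
]
# A student with prior 300 and current 340 outscored 2 of 4 same-prior peers.
print(conditional_growth_percentile(students, 300, 340))  # → 50.0
```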
Peer reviewed
Direct link
Kennelly, Brendan; Flannery, Darragh; Considine, John; Doherty, Edel; Hynes, Stephen – Practical Assessment, Research & Evaluation, 2014
This paper outlines how a discrete choice experiment (DCE) can be used to learn more about how students are willing to trade off various features of assignments such as the nature and timing of feedback and the method used to submit assignments. A DCE identifies plausible levels of the key attributes of a good or service and then presents the…
Descriptors: Foreign Countries, Preferences, Assignments, Feedback (Response)
Peer reviewed
Direct link
de Winter, J. C. F. – Practical Assessment, Research & Evaluation, 2013
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
Descriptors: Sample Size, Statistical Analysis, Hypothesis Testing, Simulation
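The kind of simulation this abstract describes can be sketched as a Monte Carlo check of the two-sample t-test's Type I error rate at an extremely small sample size. The settings below (N = 3 per group, normal data, 20,000 replications) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_reps, n, alpha = 20_000, 3, 0.05
rejections = 0
for _ in range(n_reps):
    x = rng.normal(0, 1, n)  # both groups drawn from the same
    y = rng.normal(0, 1, n)  # normal distribution, so H0 is true
    _, p = stats.ttest_ind(x, y)
    rejections += p < alpha
print(f"empirical Type I error: {rejections / n_reps:.3f}")
```

Under normality the t-test is exact, so the empirical rejection rate should hover near the nominal 0.05 even at N = 3; the interesting questions in this literature concern power and non-normal data.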
Peer reviewed
Direct link
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available to estimate both propensity scores and construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity scoring estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
Peer reviewed
Direct link
McMillan, James H.; Venable, Jessica C.; Varier, Divya – Practical Assessment, Research & Evaluation, 2013
Kingston and Nash (2011) recently presented a meta-analysis of studies showing that the effect of formative assessment on K-12 student achievement may not be as robust as widely believed. This investigation analyzes the methodology used in the Kingston and Nash meta-analysis and provides further analyses of the studies included in the study. These…
Descriptors: Formative Evaluation, Academic Achievement, Elementary Secondary Education, Educational Research
Peer reviewed
Direct link
Baghaei, Purya; Carstensen, Claus H. – Practical Assessment, Research & Evaluation, 2013
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
Descriptors: Item Response Theory, Models, Reading Comprehension, Reading Tests