50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 17 results
Peer reviewed
Direct link
Benton, Tom – Practical Assessment, Research & Evaluation, 2014
This article demonstrates how meta-analytic techniques, which have typically been used to synthesize findings across numerous studies, can also be applied to examine the reasons why relationships between background characteristics and outcomes may vary across different locations in a single multi-site survey. This application is particularly…
Descriptors: Regression (Statistics), Meta Analysis, Academic Achievement, Institutional Autonomy
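As a rough sketch of the general idea (not Benton's actual analysis): treat each site's regression slope as a study-level effect and pool the slopes with a DerSimonian-Laird random-effects meta-analysis, whose between-site variance estimate quantifies how much the relationship varies across locations. The slopes and standard errors below are invented for illustration.

    import numpy as np

    b  = np.array([0.42, 0.31, 0.55, 0.12, 0.38])  # per-site regression slopes (made up)
    se = np.array([0.08, 0.10, 0.09, 0.11, 0.07])  # their standard errors (made up)

    w = 1.0 / se**2                                # fixed-effect weights
    b_fixed = np.sum(w * b) / np.sum(w)
    Q = np.sum(w * (b - b_fixed)**2)               # heterogeneity statistic
    df = len(b) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1.0 / (se**2 + tau2)                    # random-effects weights
    b_re = np.sum(w_re * b) / np.sum(w_re)
    print(f"pooled slope = {b_re:.3f}, Q = {Q:.2f} on {df} df, tau^2 = {tau2:.4f}")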
Peer reviewed
Direct link
Zumbach, Joerg; Funke, Joachim – Practical Assessment, Research & Evaluation, 2014
In two subsequent experiments, the influence of mood on academic course evaluation is examined. By means of facial feedback, either a positive or a negative mood was induced while students were completing a course evaluation questionnaire during lectures. Results from both studies reveal that a positive mood leads to better ratings of different…
Descriptors: Course Evaluation, Psychological Patterns, Student Attitudes, Feedback (Response)
Peer reviewed
Direct link
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2014
The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
Descriptors: Sample Size, Equivalency Tests, Simulation, Error of Measurement
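A minimal sketch of equivalence testing via two one-sided tests (TOST), on simulated scores with an arbitrary raw-score margin; neither the data nor the margin comes from the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    g1 = rng.normal(50.0, 10.0, 120)    # hypothetical group 1 scores
    g2 = rng.normal(50.5, 10.0, 110)    # hypothetical group 2 scores
    margin = 3.0                        # largest difference deemed unimportant

    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    diff = g1.mean() - g2.mean()
    df = n1 + n2 - 2

    # reject both one-sided nulls (diff <= -margin, diff >= +margin) => equivalence
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)
    p_upper = stats.t.cdf((diff - margin) / se, df)
    print(f"diff = {diff:.2f}, TOST p = {max(p_lower, p_upper):.4f}")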
Peer reviewed
Direct link
Kennelly, Brendan; Flannery, Darragh; Considine, John; Doherty, Edel; Hynes, Stephen – Practical Assessment, Research & Evaluation, 2014
This paper outlines how a discrete choice experiment (DCE) can be used to learn more about how students are willing to trade off various features of assignments such as the nature and timing of feedback and the method used to submit assignments. A DCE identifies plausible levels of the key attributes of a good or service and then presents the…
Descriptors: Foreign Countries, Preferences, Assignments, Feedback (Response)
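For readers unfamiliar with DCE analysis, a toy conditional-logit sketch (one standard estimator for such data; the abstract does not name the authors' estimator). The attribute levels and choices are simulated.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    n_sets, n_alt, n_attr = 200, 2, 2   # 200 choice sets, 2 alternatives each
    X = rng.integers(0, 2, size=(n_sets, n_alt, n_attr)).astype(float)  # 2 binary attributes
    true_beta = np.array([1.0, 0.5])    # assumed preference weights
    utility = X @ true_beta + rng.gumbel(size=(n_sets, n_alt))
    choice = utility.argmax(axis=1)     # alternative picked in each set

    def neg_loglik(beta):
        v = X @ beta
        v = v - v.max(axis=1, keepdims=True)               # numerical stability
        p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
        return -np.log(p[np.arange(n_sets), choice]).sum()

    fit = minimize(neg_loglik, np.zeros(n_attr), method="BFGS")
    print("estimated attribute weights:", fit.x.round(2))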
Peer reviewed
Direct link
Beauducel, Andre; Leue, Anja – Practical Assessment, Research & Evaluation, 2013
In several studies, unit-weighted sum scales based on the unweighted sum of items are derived from the pattern of salient loadings in confirmatory factor analysis. The problem with this procedure is that the unit-weighted sum scales imply a model other than the initially tested confirmatory factor model. In consequence, it remains generally unknown…
Descriptors: Factor Analysis, Structural Equation Models, Goodness of Fit, Personality Measures
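To make the issue concrete: the reliability of a unit-weighted sum scale under a one-factor model is coefficient omega, computed from the loadings, whereas the factor model itself implies unequal (loading/uniqueness) weights. The loadings below are invented.

    import numpy as np

    lam = np.array([0.8, 0.7, 0.6, 0.4])    # standardized loadings (made up)
    theta = 1.0 - lam**2                    # uniquenesses

    # coefficient omega: reliability of the unit-weighted composite
    omega = lam.sum()**2 / (lam.sum()**2 + theta.sum())
    print(f"omega = {omega:.3f}")

    # the factor model's optimal composite weights items quite differently
    w_opt = lam / theta
    print("optimal weights (rescaled):", (w_opt / w_opt.sum()).round(2))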
Peer reviewed
Direct link
Baghaei, Purya; Carstensen, Claus H. – Practical Assessment, Research & Evaluation, 2013
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
Descriptors: Item Response Theory, Models, Reading Comprehension, Reading Tests
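A small illustration of the premise: under the dichotomous Rasch model the raw score is a sufficient statistic for ability, so examinees with different response patterns but equal raw scores receive identical ability estimates. The item difficulties are invented.

    import numpy as np
    from scipy.optimize import brentq

    b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])   # item difficulties (made up)

    def expected_score(theta):
        return (1.0 / (1.0 + np.exp(-(theta - b)))).sum()

    def theta_mle(raw_score):
        # the ML estimate solves E[score | theta] = raw score
        return brentq(lambda t: expected_score(t) - raw_score, -6, 6)

    pattern_a = [1, 1, 1, 0, 0]    # solved the three easiest items
    pattern_b = [0, 1, 0, 1, 1]    # solved mostly harder items
    print(theta_mle(sum(pattern_a)), theta_mle(sum(pattern_b)))   # identical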
Peer reviewed
Direct link
Nordstokke, David W.; Zumbo, Bruno D.; Cairns, Sharon L.; Saklofske, Donald H. – Practical Assessment, Research & Evaluation, 2011
Many assessment and evaluation studies use statistical hypothesis tests, such as the independent samples t test or analysis of variance, to test the equality of two or more means in comparisons across gender, age, culture, or language groups. In addition, some, but far fewer, studies compare variability across these same groups or research…
Descriptors: Nonparametric Statistics, Statistical Analysis, Error of Measurement, Statistical Data
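The nonparametric Levene procedure studied in this line of work rank-transforms the pooled scores before running the usual Levene test; a minimal sketch on simulated skewed data (not the paper's simulation design):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    g1 = rng.exponential(1.0, 80)    # skewed group, smaller spread
    g2 = rng.exponential(2.0, 80)    # skewed group, larger spread

    ranks = stats.rankdata(np.concatenate([g1, g2]))   # pool, then rank
    r1, r2 = ranks[:len(g1)], ranks[len(g1):]
    stat, p = stats.levene(r1, r2, center="mean")      # Levene on the ranks
    print(f"rank-based Levene: W = {stat:.2f}, p = {p:.4f}")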
Peer reviewed
Direct link
Pibal, Florian; Cesnik, Hermann S. – Practical Assessment, Research & Evaluation, 2011
When administering tests across grades, vertical scaling is often employed to place scores from different tests on a common overall scale so that test-takers' progress can be tracked. In order to be able to link the results across grades, however, common items are needed that are included in both test forms. In the literature there seems to be no…
Descriptors: Scaling, Test Items, Equated Scores, Reading Tests
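For orientation, one common way common items are used to link scales is mean/sigma linking of their IRT difficulty estimates; whether the authors use this particular method is not stated in the truncated abstract, and the anchor difficulties below are invented.

    import numpy as np

    b_x = np.array([-0.8, -0.2, 0.3, 0.9])    # anchor items, form X scale
    b_y = np.array([-1.4, -0.7, -0.3, 0.4])   # same items, form Y scale

    A = b_y.std(ddof=1) / b_x.std(ddof=1)     # slope of the linking line
    B = b_y.mean() - A * b_x.mean()           # intercept

    theta_x = 0.5                             # an ability on form X's scale
    print(f"equivalent on form Y's scale: {A * theta_x + B:.3f}")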
Peer reviewed
Direct link
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2011
Assessing the comparability of different groups is an issue facing many researchers and evaluators in a variety of settings. Commonly, null hypothesis significance testing (NHST) is incorrectly used to demonstrate comparability when a non-significant result is found. This is problematic because a failure to find a difference between groups is not…
Descriptors: Medical Education, Evaluators, Intervals, Testing
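A sketch of the confidence-interval route to equivalence that this literature recommends over bare NHST: declare groups comparable only if the 90% CI for the mean difference lies wholly inside the equivalence margin. Data and margin are invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    a = rng.normal(70, 8, 60)
    b = rng.normal(71, 8, 64)
    margin = 4.0    # difference considered practically unimportant

    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    df = len(a) + len(b) - 2
    half = stats.t.ppf(0.95, df) * se          # 90% CI half-width
    lo, hi = diff - half, diff + half
    verdict = "equivalent" if -margin < lo and hi < margin else "not shown equivalent"
    print(f"90% CI ({lo:.2f}, {hi:.2f}): {verdict}")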
Peer reviewed
Direct link
Harris, Lois R.; Brown, Gavin T. L. – Practical Assessment, Research & Evaluation, 2010
Structured questionnaires and semi-structured interviews are often used in mixed method studies to generate confirmatory results despite differences in methods of data collection, analysis, and interpretation. A review of 19 questionnaire-interview comparison studies found that consensus and consistency statistics were generally weak between…
Descriptors: Research Methodology, Questionnaires, Interviews, Data Collection
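The consensus and consistency statistics such comparisons rest on are typically percent agreement and Cohen's kappa over paired codes; a tiny sketch with invented questionnaire/interview codes:

    import numpy as np

    q = np.array([1, 2, 2, 3, 1, 2, 3, 3, 1, 2])   # questionnaire codes (made up)
    i = np.array([1, 2, 3, 3, 1, 1, 3, 2, 1, 2])   # interview codes (made up)

    agree = np.mean(q == i)                        # consensus (raw agreement)
    cats = np.unique(np.concatenate([q, i]))
    expected = sum(np.mean(q == c) * np.mean(i == c) for c in cats)  # chance
    kappa = (agree - expected) / (1 - expected)
    print(f"agreement = {agree:.2f}, kappa = {kappa:.2f}")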
Peer reviewed
Direct link
Buckendahl, Chad W.; Ferdous, Abdullah A.; Gerrow, Jack – Practical Assessment, Research & Evaluation, 2010
Many testing programs face the practical challenge of having limited resources to conduct comprehensive standard setting studies. Some researchers have suggested that replicating a group's recommended cut score on a full-length test may be possible by using a subset of the items. However, these studies were based on simulated data. This study…
Descriptors: Cutting Scores, Test Items, Standard Setting (Scoring), Methods
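To fix ideas, assuming Angoff-style ratings (one common standard-setting format; the abstract does not specify the method): the full-length cut score is the sum of mean item ratings, and the subset estimate rescales the ratings on the sampled items. All ratings below are simulated.

    import numpy as np

    rng = np.random.default_rng(11)
    n_items, n_raters = 60, 10
    ratings = rng.uniform(0.3, 0.9, size=(n_raters, n_items))   # Angoff probabilities

    full_cut = ratings.mean(axis=0).sum()             # cut score on all 60 items

    subset = rng.choice(n_items, size=20, replace=False)
    subset_cut = ratings[:, subset].mean() * n_items  # rescaled to full length
    print(f"full cut = {full_cut:.1f}, subset estimate = {subset_cut:.1f}")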
Peer reviewed
Direct link
Miller, Tess; Chahine, Saad; Childs, Ruth A. – Practical Assessment, Research & Evaluation, 2010
This study illustrates the use of differential item functioning (DIF) and differential step functioning (DSF) analyses to detect differences in item difficulty that are related to experiences of examinees, such as their teachers' instructional practices, that are relevant to the knowledge, skill, or ability the test is intended to measure. This…
Descriptors: Test Bias, Difficulty Level, Test Items, Mathematics Tests
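A compact Mantel-Haenszel sketch, one standard DIF procedure (DSF applies the same logic within the steps of polytomous items): compare reference and focal groups within matched total-score strata. The counts are invented.

    import numpy as np

    # per score stratum: [ref right, ref wrong, focal right, focal wrong]
    strata = np.array([
        [30, 20, 20, 30],
        [45, 15, 35, 25],
        [55,  5, 50, 10],
    ], dtype=float)

    a, b, c, d = strata.T
    n = strata.sum(axis=1)
    alpha_mh = np.sum(a * d / n) / np.sum(b * c / n)   # common odds ratio
    delta_mh = -2.35 * np.log(alpha_mh)                # ETS D-DIF metric
    print(f"MH odds ratio = {alpha_mh:.2f}, MH D-DIF = {delta_mh:.2f}")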
Peer reviewed
Direct link
Goltz, Heather Honore; Smith, Matthew Lee – Practical Assessment, Research & Evaluation, 2010
Yule (1903) and Simpson (1951) described a statistical paradox that occurs when data are aggregated. In such situations, the aggregated data may reveal a trend that directly contrasts with the trends of the sub-groups; in fact, the aggregate trend may even run in the opposite direction to the sub-group trends. To reveal Yule-Simpson's paradox (YSP)-type…
Descriptors: Data, Statistics, Statistical Analysis, Models
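A numeric illustration of the paradox with invented success counts: group B does better within every sub-group, yet group A does better in the aggregate.

    # (successes, trials) per sub-group; all numbers invented
    group_a = {"easy": (80, 100), "hard": (20, 100)}
    group_b = {"easy": (9, 10),   "hard": (45, 190)}

    for sub in ("easy", "hard"):
        ra = group_a[sub][0] / group_a[sub][1]
        rb = group_b[sub][0] / group_b[sub][1]
        print(f"{sub}: A = {ra:.2f}, B = {rb:.2f}")     # B ahead in both

    tot_a = sum(s for s, _ in group_a.values()) / sum(n for _, n in group_a.values())
    tot_b = sum(s for s, _ in group_b.values()) / sum(n for _, n in group_b.values())
    print(f"aggregate: A = {tot_a:.2f}, B = {tot_b:.2f}")  # A ahead overall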
Peer reviewed
Direct link
Lyren, Per-Erik – Practical Assessment, Research & Evaluation, 2009
The added value of reporting subscores on a college admission test (SweSAT) was examined in this study. Using a CTT-derived objective method for determining the value of reporting subscores, the study concluded that there is added value in reporting section scores (Verbal/Quantitative) as well as subtest scores. These results differ from a study of…
Descriptors: College Entrance Examinations, Scores, Test Theory, Foreign Countries
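The abstract's "CTT-derived objective method" is plausibly a Haberman-style value-added check; on that assumption, a subscore merits reporting when the observed subscore predicts its own true score better than the total score does, which under CTT reduces to comparing the subscore's reliability against the squared subscore-total correlation. The numbers are illustrative, not SweSAT values.

    rel_sub = 0.85       # subscore reliability (assumed)
    r_sub_total = 0.80   # subscore-total correlation (assumed)

    prmse_sub = rel_sub                        # predict true subscore from subscore
    prmse_total = r_sub_total**2 / rel_sub     # predict true subscore from total
    print("added value" if prmse_sub > prmse_total else "no added value")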
Peer reviewed
Direct link
Wiberg, Marie; Sundstrom, Anna – Practical Assessment, Research & Evaluation, 2009
A common problem in predictive validity studies in the educational and psychological fields, e.g., in educational and employment selection, is restriction of range in the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
Descriptors: Predictive Validity, Predictor Variables, Correlation, Mathematics
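One classic correction (Thorndike's Case II, for direct restriction on the predictor) inflates the restricted correlation by the ratio of unrestricted to restricted predictor SDs; whether it is among the two approaches the paper examines is not stated in the truncated abstract. Values are invented.

    import numpy as np

    r = 0.25     # predictor-criterion correlation in the selected group
    s_x = 0.60   # predictor SD among the selected (restricted) group
    S_x = 1.00   # predictor SD in the full applicant pool

    u = S_x / s_x
    r_corrected = (r * u) / np.sqrt(1 + r**2 * (u**2 - 1))
    print(f"corrected correlation = {r_corrected:.3f}")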
Previous Page | Next Page »
Pages: 1  |  2