50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 7 results
Peer reviewed
Direct link
Huang, Francis L. – Practical Assessment, Research & Evaluation, 2014
Clustered data (e.g., students within schools) are often analyzed in educational research where data are naturally nested. As a consequence, multilevel modeling (MLM) has commonly been used to study the contextual or group-level (e.g., school) effects on individual outcomes. The current study investigates the use of an alternative procedure to…
Descriptors: Hierarchical Linear Modeling, Regression (Statistics), Educational Research, Sampling
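One reason clustered data call for multilevel modeling (or an alternative that accounts for clustering) is that within-school correlation inflates the sampling variance of simple estimates. A minimal pure-Python sketch with simulated data illustrates this; the school counts, variance components, and the one-way ANOVA estimator of the intraclass correlation are illustrative assumptions, not taken from the study:

```python
import random

random.seed(0)

# Hypothetical clustered data: 20 schools, 30 students each.
# Each school adds a shared random effect, so observations
# within a school are correlated (nonzero ICC).
n_schools, n_students = 20, 30
schools = []
for _ in range(n_schools):
    school_effect = random.gauss(0, 1.0)                  # between-school SD ~ 1
    schools.append([school_effect + random.gauss(0, 2.0)  # within-school SD ~ 2
                    for _ in range(n_students)])

# One-way ANOVA estimator of the intraclass correlation (ICC).
grand = sum(sum(s) for s in schools) / (n_schools * n_students)
means = [sum(s) / n_students for s in schools]
ss_between = n_students * sum((m - grand) ** 2 for m in means)
ss_within = sum((y - m) ** 2 for s, m in zip(schools, means) for y in s)
ms_between = ss_between / (n_schools - 1)
ms_within = ss_within / (n_schools * (n_students - 1))
icc = (ms_between - ms_within) / (ms_between + (n_students - 1) * ms_within)

# Design effect: how much clustering inflates the variance of a mean
# relative to a simple random sample of the same total size.
deff = 1 + (n_students - 1) * icc
print(round(icc, 3), round(deff, 2))
```

With a true ICC of about 0.2, the design effect is well above 1, which is why ignoring the nesting understates standard errors.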
Peer reviewed
Direct link
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available to estimate both propensity scores and construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity scoring estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
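The abstract mentions logistic estimation of propensity scores and the construction of comparison groups. The sketch below shows one common pipeline (logistic propensity model fit by gradient ascent, then greedy 1:1 nearest-neighbor matching) on fabricated data; it is a generic illustration under stated assumptions, not the study's procedure or dataset:

```python
import math
import random

random.seed(1)

# Hypothetical data: one covariate x (e.g., prior achievement);
# the probability of treatment increases with x.
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
treat = [1 if random.random() < 1 / (1 + math.exp(-xi)) else 0 for xi in x]

# Fit a logistic propensity model P(treat=1 | x) by gradient ascent.
b0, b1 = 0.0, 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for xi, ti in zip(x, treat):
        p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
        g0 += ti - p
        g1 += (ti - p) * xi
    b0 += 0.05 * g0 / n
    b1 += 0.05 * g1 / n

pscore = [1 / (1 + math.exp(-(b0 + b1 * xi))) for xi in x]

# Greedy 1:1 nearest-neighbor matching on the propensity score.
treated = [i for i in range(n) if treat[i] == 1]
controls = set(i for i in range(n) if treat[i] == 0)
pairs = []
for i in treated:
    if not controls:
        break
    j = min(controls, key=lambda k: abs(pscore[k] - pscore[i]))
    pairs.append((i, j))
    controls.remove(j)
print(len(pairs))
```

Variations at each step (different estimation methods, matching with calipers, stratification, or weighting) are exactly the options such comparison studies evaluate.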
Peer reviewed
Direct link
Baghaei, Purya; Carstensen, Claus H. – Practical Assessment, Research & Evaluation, 2013
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
Descriptors: Item Response Theory, Models, Reading Comprehension, Reading Tests
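The comparability assumption the abstract describes can be made concrete with the dichotomous Rasch model itself: two examinees with identical ability estimates receive identical predicted response probabilities on every item, regardless of the mental processes behind their scores. A small sketch (the ability and difficulty values are made up):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1 / (1 + math.exp(-(theta - b)))

# Two hypothetical examinees with the same ability estimate (theta = 0.5)
# are indistinguishable to the model on any set of item difficulties.
items = [-1.0, 0.0, 1.5]
person_a = [rasch_p(0.5, b) for b in items]
person_b = [rasch_p(0.5, b) for b in items]
print(person_a == person_b)  # True
```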
Peer reviewed
Direct link
Parke, Carol S. – Practical Assessment, Research & Evaluation, 2012
This paper describes how districts can better use their extensive student databases and other existing data to explore questions of interest. School districts are required to maintain a wealth of student information in electronic data systems and other formats. The meaningfulness of the data depends to a large degree on whether they can understand…
Descriptors: Educational Indicators, Information Utilization, Guidelines, School Districts
Peer reviewed
Direct link
Pibal, Florian; Cesnik, Hermann S. – Practical Assessment, Research & Evaluation, 2011
When administering tests across grades, vertical scaling is often employed to place scores from different tests on a common overall scale so that test-takers' progress can be tracked. In order to be able to link the results across grades, however, common items are needed that are included in both test forms. In the literature there seems to be no…
Descriptors: Scaling, Test Items, Equated Scores, Reading Tests
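How common items link results across grades can be sketched with the simplest approach, mean/mean linking: the difficulty shift between the two forms' anchor items places the new form's parameters on the base scale. All numbers below are fabricated for illustration, and the study itself concerns a more specific question (which common items to include), not this particular linking method:

```python
# Mean/mean linking via common (anchor) items: item difficulties estimated
# separately on two grade-level forms are placed on one scale by shifting
# the new form by the difference in anchor-item difficulty means.
anchor_old = [-0.8, 0.1, 0.9, 1.4]   # anchor difficulties, base-form scale
anchor_new = [-1.3, -0.4, 0.4, 0.9]  # same items, new-form scale

shift = sum(anchor_old) / len(anchor_old) - sum(anchor_new) / len(anchor_new)

# Any parameter estimated on the new form can now be reported on the
# base scale, so growth across grades is expressed in common units.
new_form_item = 0.2
linked = new_form_item + shift
print(shift, linked)
```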
Peer reviewed
Direct link
Schafer, William D.; Hou, Xiaodong – Practical Assessment, Research & Evaluation, 2011
This study discusses and presents an example of a use of spline functions to establish and report test scores using a moderated system of any number of cut scores. Our main goals include studying the need for and establishing moderated standards and creating a reporting scale that is referenced to all the standards. Our secondary goals are to make…
Descriptors: Cutting Scores, Standard Setting (Scoring), Achievement Tests, National Competency Tests
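The idea of a reporting scale referenced to every cut score can be sketched with the simplest spline, a piecewise-linear map that pins each raw cut score to a fixed reported value. The cut scores and reporting anchors below are hypothetical, and the article's actual spline functions may be higher-order:

```python
# Piecewise-linear mapping from raw cut scores to a reporting scale, so
# each cut score lands exactly on a chosen reported value. Scores between
# cuts are interpolated within their segment.
raw_cuts = [0, 20, 35, 50, 60]          # raw-score cut points (hypothetical)
reported = [100, 200, 300, 400, 500]    # reporting-scale anchors (hypothetical)

def report(raw):
    """Map a raw score onto the reporting scale, segment by segment."""
    for (x0, x1), (y0, y1) in zip(zip(raw_cuts, raw_cuts[1:]),
                                  zip(reported, reported[1:])):
        if x0 <= raw <= x1:
            return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)
    raise ValueError("raw score outside the table")

print(report(35), report(42.5))  # 300.0 350.0
```

Because every cut score maps to a round anchor value, a reported score immediately conveys position relative to all of the standards at once.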
Peer reviewed
Direct link
Miller, Tess; Chahine, Saad; Childs, Ruth A. – Practical Assessment, Research & Evaluation, 2010
This study illustrates the use of differential item functioning (DIF) and differential step functioning (DSF) analyses to detect differences in item difficulty that are related to experiences of examinees, such as their teachers' instructional practices, that are relevant to the knowledge, skill, or ability the test is intended to measure. This…
Descriptors: Test Bias, Difficulty Level, Test Items, Mathematics Tests
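A standard screen for uniform DIF of the kind described here is the Mantel-Haenszel common odds ratio: examinees are stratified by total score, and within each stratum the odds of answering the item correctly are compared across groups. The counts below are fabricated for illustration and this is a generic DIF screen, not the article's own analysis (which also examines differential step functioning for polytomous items):

```python
# Mantel-Haenszel common odds ratio across score strata.
# Each stratum: (ref_correct, ref_wrong, focal_correct, focal_wrong).
strata = [
    (30, 20, 22, 28),
    (45, 15, 35, 25),
    (50, 10, 44, 16),
]

num = den = 0.0
for a, b, c, d in strata:
    n = a + b + c + d
    num += a * d / n
    den += b * c / n
alpha_mh = num / den  # > 1: item favors the reference group at equal ability

print(round(alpha_mh, 2))
```

A ratio well above (or below) 1 flags the item for review; whether the flagged difference reflects construct-relevant experiences, such as instructional practices, is the substantive question the study takes up.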