50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 91 to 105 of 161 results
Peer reviewed
Direct link
Lyren, Per-Erik – Practical Assessment, Research & Evaluation, 2009
The added value of reporting subscores on a college admission test (SweSAT) was examined in this study. Using a CTT-derived objective method for determining the value of reporting subscores, it was concluded that there is added value in reporting section scores (Verbal/Quantitative) as well as subtest scores. These results differ from a study of…
Descriptors: College Entrance Examinations, Scores, Test Theory, Foreign Countries
Peer reviewed
Direct link
Wiberg, Marie; Sundstrom, Anna – Practical Assessment, Research & Evaluation, 2009
A common problem in predictive validity studies in the educational and psychological fields, e.g. in educational and employment selection, is restriction in range of the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
Descriptors: Predictive Validity, Predictor Variables, Correlation, Mathematics
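As background to the Wiberg and Sundstrom entry, one widely used correction for direct range restriction on the predictor is Thorndike's Case 2 formula. This sketch is offered for context only; the paper itself compares two approaches that are not reproduced here, and the function name is hypothetical.

```python
from math import sqrt

def correct_restriction(r, sd_restricted, sd_unrestricted):
    """Thorndike Case 2 correction for direct range restriction.

    r: correlation observed in the restricted (selected) sample
    sd_restricted: predictor SD in the selected sample
    sd_unrestricted: predictor SD in the applicant population
    """
    u = sd_unrestricted / sd_restricted  # ratio > 1 when range is restricted
    return r * u / sqrt(1 - r * r + r * r * u * u)

# A validity of .30 in a selected sample whose predictor SD was halved
corrected = correct_restriction(0.3, sd_restricted=5, sd_unrestricted=10)
```

The corrected estimate is larger than the observed one, which is the usual direction of the adjustment: selection on the predictor attenuates the observed validity coefficient.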
Peer reviewed
Direct link
Knapp, Thomas R.; Schafer, William D. – Practical Assessment, Research & Evaluation, 2009
Although they test somewhat different hypotheses, analysis of gain scores (or its repeated-measures analog) and analysis of covariance are both common methods that researchers use for pre-post data. The results of the two approaches yield non-comparable outcomes, but since the same generic data are used, it is possible to transform the test…
Descriptors: Statistical Analysis, Pretests Posttests, Meta Analysis, Mathematical Formulas
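The contrast Knapp and Schafer discuss can be illustrated with a small simulation (hypothetical data, not the authors' transformation): a gain-score analysis compares mean pre-to-post gains between groups, while ANCOVA regresses the posttest on group membership with the pretest as a covariate.

```python
import numpy as np

# Simulated pre-post data for two groups of 200 examinees each
rng = np.random.default_rng(0)
n = 200
group = np.repeat([0, 1], n)              # 0 = control, 1 = treatment
pre = rng.normal(50, 10, 2 * n)
post = pre + 5 * group + rng.normal(0, 5, 2 * n)   # true treatment effect = 5

# Gain-score analysis: difference in mean gains between groups
gain = post - pre
gain_effect = gain[group == 1].mean() - gain[group == 0].mean()

# ANCOVA: regress post on group with pre as a covariate
X = np.column_stack([np.ones(2 * n), group, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
ancova_effect = beta[1]                   # covariate-adjusted group effect
```

With randomized groups both estimates target the same treatment effect, but they weight the pretest differently, which is why the two analyses test somewhat different hypotheses in observational designs.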
Peer reviewed
Direct link
Dunn, Karee E.; Mulvenon, Sean W. – Practical Assessment, Research & Evaluation, 2009
The existence of a plethora of empirical evidence documenting the improvement of educational outcomes through the use of formative assessment is conventional wisdom within education. In reality, a limited body of scientifically based empirical evidence exists to support that formative assessment directly contributes to positive educational…
Descriptors: Evidence, Formative Evaluation, Educational Objectives, Outcomes of Education
Peer reviewed
Direct link
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2009
This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
Descriptors: Classification, Scoring, Item Response Theory, Measurement
Peer reviewed
Direct link
Judd, Wallace – Practical Assessment, Research & Evaluation, 2009
Over the past twenty years in performance testing a specific item type with distinguishing characteristics has arisen time and time again. It's been invented independently by dozens of test development teams. And yet this item type is not recognized in the research literature. This article is an invitation to investigate the item type, evaluate…
Descriptors: Test Items, Test Format, Evaluation, Item Analysis
Peer reviewed
Direct link
Konstantopoulos, Spyros – Practical Assessment, Research & Evaluation, 2009
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
Descriptors: Social Science Research, Effect Size, Computation, Tables (Data)
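The nesting issue Konstantopoulos raises can be sketched with the standard design-effect approximation for a two-arm cluster-randomized design (this is a textbook approximation, not necessarily the article's own formulas; the function name is hypothetical).

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def cluster_power(delta, n_clusters, cluster_size, icc):
    """Approximate power for a two-arm cluster-randomized trial,
    two-sided test at alpha = .05.

    delta: standardized mean difference (effect size)
    n_clusters: total number of clusters across both arms
    cluster_size: individuals per cluster
    icc: intraclass correlation of the outcome within clusters
    """
    deff = 1 + (cluster_size - 1) * icc       # design effect
    n_per_arm = n_clusters * cluster_size / 2
    se = sqrt(2 * deff / n_per_arm)           # SE of the standardized difference
    z_crit = 1.959964
    return norm_cdf(delta / se - z_crit)
```

Even a modest intraclass correlation inflates the variance substantially, which is why one-level power tables overstate power for nested designs.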
Peer reviewed
Direct link
Childs, Ruth A.; Ram, Anita; Xu, Yunmei – Practical Assessment, Research & Evaluation, 2009
Dual scaling, a variation of multidimensional scaling, can reveal the dimensions underlying scores, such as raters' judgments. This study illustrates the use of a dual scaling analysis with semi-structured interviews of raters to investigate the differences among the raters as captured by the dimensions. Thirty applications to a one-year…
Descriptors: Teacher Education Programs, Interviews, Multidimensional Scaling, Teacher Educators
Peer reviewed
Direct link
Bresciani, Marilee J.; Oakleaf, Megan; Kolkhorst, Fred; Nebeker, Camille; Barlow, Jessica; Duncan, Kristin; Hickmott, Jessica – Practical Assessment, Research & Evaluation, 2009
The paper presents a rubric to help evaluate the quality of research projects. The rubric was applied in a competition across a variety of disciplines during a two-day research symposium at one institution in the southwest region of the United States of America. It was collaboratively designed by a faculty committee at the institution and was…
Descriptors: Interrater Reliability, Scoring Rubrics, Research Methodology, Research Projects
Peer reviewed
Direct link
Randolph, Justus J. – Practical Assessment, Research & Evaluation, 2009
Writing a faulty literature review is one of many ways to derail a dissertation. This article summarizes some pivotal information on how to write a high-quality dissertation literature review. It begins with a discussion of the purposes of a review, presents a taxonomy of literature reviews, and then discusses the steps in conducting a quantitative…
Descriptors: Literature Reviews, Doctoral Dissertations, Statistical Analysis, Qualitative Research
Peer reviewed
Direct link
Cor, Ken; Alves, Cecilia; Gierl, Mark – Practical Assessment, Research & Evaluation, 2009
While linear programming is a common tool in business and industry, there have not been many applications in educational assessment and only a handful of individuals have been actively involved in conducting psychometric research in this area. Perhaps this is due, at least in part, to the complexity of existing software packages. This article…
Descriptors: Educational Assessment, Psychometrics, Mathematical Applications, Test Construction
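The kind of optimization Cor, Alves, and Gierl describe, automated test assembly, can be stated as a 0/1 integer program: select items maximizing test information subject to content constraints. A minimal brute-force sketch (hypothetical item bank and statistics, not from the article, which concerns dedicated LP software):

```python
from itertools import combinations

# Hypothetical item bank: (item_id, difficulty, information at theta = 0)
bank = [
    ("i1", -1.0, 0.20), ("i2", -0.5, 0.30), ("i3", 0.0, 0.40),
    ("i4", 0.2, 0.35), ("i5", 0.5, 0.30), ("i6", 1.0, 0.15),
]

def assemble(bank, test_length, mean_diff_bounds=(-0.2, 0.2)):
    """Pick test_length items maximizing total information, subject to
    a bound on mean item difficulty (exhaustive search for small banks)."""
    best, best_info = None, -1.0
    for combo in combinations(bank, test_length):
        mean_diff = sum(item[1] for item in combo) / test_length
        if not (mean_diff_bounds[0] <= mean_diff <= mean_diff_bounds[1]):
            continue
        info = sum(item[2] for item in combo)
        if info > best_info:
            best, best_info = combo, info
    return [item[0] for item in best], best_info

items, info = assemble(bank, 3)
```

Real assembly problems replace the exhaustive search with an integer-programming solver, since banks of hundreds of items make enumeration infeasible.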
Peer reviewed
Direct link
Osborne, Jason W.; Holland, Abigail – Practical Assessment, Research & Evaluation, 2009
Before the mid 20th century most scientific writing was solely authored (Claxton, 2005; Greene, 2007) and thus it is only relatively recently, as science has grown more complex, that the ethical and procedural issues around authorship have arisen. Fields as diverse as medicine (International Committee of Medical Journal Editors, 2008), mathematics…
Descriptors: Guidelines, Guides, Authors, Writing for Publication
Peer reviewed
Direct link
Ketterlin-Geller, Leanne R.; Yovanoff, Paul – Practical Assessment, Research & Evaluation, 2009
Diagnosis is an integral part of instructional decision-making. As the bridge between identification of students who may be at-risk for failure and delivery of carefully designed supplemental interventions, diagnosis provides valuable information about students' persistent misconceptions in the targeted domain. In this paper, we discuss current…
Descriptors: Diagnostic Tests, Mathematics Tests, Mathematics Instruction, Decision Making
Peer reviewed
Direct link
MacCann, Robert G.; Stanley, Gordon – Practical Assessment, Research & Evaluation, 2009
An item banking method that does not use Item Response Theory (IRT) is described. This method provides a comparable grading system across schools that would be suitable for low-stakes testing. It uses the Angoff standard-setting method to obtain item ratings that are stored with each item. An example of such a grading system is given, showing how…
Descriptors: Item Banks, Testing, Standard Setting (Scoring), Methods
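The Angoff ratings that MacCann and Stanley store with each item support a simple cut-score computation: the passing standard is the sum, over items, of the judges' mean estimated probability that a minimally competent examinee answers correctly. A minimal sketch with hypothetical ratings (the article's full banking and grading procedure is not reproduced here):

```python
def angoff_cutscore(item_ratings):
    """Angoff cut score: sum over items of the mean judge-estimated
    probability that a borderline examinee answers the item correctly."""
    return sum(sum(ratings) / len(ratings) for ratings in item_ratings)

# Hypothetical ratings from three judges on a four-item test
ratings = [
    [0.60, 0.70, 0.65],
    [0.80, 0.75, 0.85],
    [0.40, 0.50, 0.45],
    [0.90, 0.85, 0.95],
]
cut = angoff_cutscore(ratings)   # raw-score passing standard
```

Because each item carries its own rating, any test assembled from the bank inherits a cut score automatically, which is what makes grades comparable across schools without IRT calibration.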
Peer reviewed
Direct link
Cawthon, Stephanie W.; Ho, Eching; Patel, Puja G.; Potvin, Deborah C.; Trundt, Katherine M. – Practical Assessment, Research & Evaluation, 2009
Students with disabilities frequently use accommodations to participate in large-scale, standardized assessments. Accommodations can include changes to the administration of the test, such as extended time, changes to the test items, such as read aloud, or changes to the student's response, such as the use of a scribe. Some accommodations or…
Descriptors: Test Items, Student Evaluation, Test Validity, Student Characteristics