50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th anniversary! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 22 results
Peer reviewed
Yin, Yue – Educational Assessment, 2012
This study examines the potential of the tree diagram, a type of graphic organizer, as an assessment tool to measure students' knowledge structures in statistics education. Students' knowledge structures have not been sufficiently assessed in statistics, despite their importance. This article first presents the rationale and method…
Descriptors: Statistics, Mathematics Education, Instructional Materials, Visual Aids
Peer reviewed
Bell, Courtney A.; Gitomer, Drew H.; McCaffrey, Daniel F.; Hamre, Bridget K.; Pianta, Robert C.; Qi, Yi – Educational Assessment, 2012
This article develops a validity argument approach for use on observation protocols currently used to assess teacher quality for high-stakes personnel and professional development decisions. After defining the teaching quality domain, we articulate an interpretive argument for observation protocols. To illustrate the types of evidence that might…
Descriptors: Teacher Effectiveness, Teacher Evaluation, Observation, Validity
Peer reviewed
Martinez, Jose Felipe; Borko, Hilda; Stecher, Brian; Luskin, Rebecca; Kloser, Matt – Educational Assessment, 2012
We report the results of a pilot validation study of the Quality Assessment in Science Notebook, a portfolio-like instrument for measuring teacher assessment practices in middle school science classrooms. A statewide sample of 42 teachers collected 2 notebooks during the school year, corresponding to science topics taught in the fall and spring.…
Descriptors: Validity, Middle School Teachers, Evaluation Methods, Educational Assessment
Peer reviewed
Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G. – Educational Assessment, 2011
This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…
Descriptors: Computer Assisted Testing, Scoring, Test Interpretation, Equated Scores
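The contrast between number-correct (NC) and item-pattern (IP) scoring can be made concrete with a small sketch. Under an IRT model with varying item discriminations, two examinees with identical NC scores can receive different pattern-based ability estimates. The following Python sketch is a hypothetical illustration, not the article's procedure; the 2PL item parameters and response patterns are invented, and the maximum-likelihood search is a deliberately coarse grid search.

```python
import math

def number_correct(responses):
    """Number-correct (NC) score: count of items answered correctly."""
    return sum(responses)

def twopl_pattern_score(responses, discrims, diffs):
    """Maximum-likelihood ability estimate under a two-parameter logistic
    (2PL) IRT model, found by a coarse grid search over theta.
    Item parameters here are invented for illustration."""
    def log_lik(theta):
        ll = 0.0
        for x, a, b in zip(responses, discrims, diffs):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            ll += math.log(p) if x else math.log(1.0 - p)
        return ll
    grid = [g / 20 for g in range(-80, 81)]  # theta in [-4, 4], step 0.05
    return max(grid, key=log_lik)

# Hypothetical 5-item test, ordered easy -> hard; the first two items
# discriminate more sharply than the last two.
discrims = [2.0, 2.0, 1.0, 0.5, 0.5]
diffs    = [-1.0, -0.5, 0.0, 0.5, 1.0]

easy_right = [1, 1, 1, 0, 0]  # correct on the easier items
hard_right = [0, 0, 1, 1, 1]  # correct on the harder items

# Same NC score (3), but different pattern-based ability estimates,
# because missing a highly discriminating easy item is very informative.
print(number_correct(easy_right), twopl_pattern_score(easy_right, discrims, diffs))
print(number_correct(hard_right), twopl_pattern_score(hard_right, discrims, diffs))
```

This is why IP and NC scoring can rank the same examinees differently, which in turn motivates the equated-NC alternative the study evaluates.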
Peer reviewed
Cheng, Liying; DeLuca, Christopher – Educational Assessment, 2011
Test-takers' interpretations of validity as related to test constructs and test use have been widely debated in large-scale language assessment. This study contributes further evidence to this debate by examining 59 test-takers' perspectives in writing large-scale English language tests. Participants wrote about their test-taking experiences in…
Descriptors: Language Tests, Test Validity, Test Use, English
Peer reviewed
Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C. – Educational Assessment, 2011
Both multiple-choice and constructed-response items have known advantages and disadvantages in measuring scientific inquiry. In this article we explore the function of explanation multiple-choice (EMC) items and examine how EMC items differ from traditional multiple-choice and constructed-response items in measuring scientific reasoning. A group…
Descriptors: Science Tests, Multiple Choice Tests, Responses, Test Items
Peer reviewed
Reed, Deborah K. – Educational Assessment, 2011
This narrative synthesis reviews the psychometric properties of commercially and publicly available retell instruments used to assess the reading comprehension of students in grades K-12. Eleven instruments met selection criteria and were systematically coded for data related to the administration procedures, scoring procedures, and technical…
Descriptors: Reading Comprehension, Elementary Secondary Education, Construct Validity, Validity
Peer reviewed
Niemi, David; Baker, Eva L.; Sylvester, Roxanne M. – Educational Assessment, 2007
To provide an accurate reading of students' and schools' rates of progress, and to provide cues for instruction, assessment at every level should be connected to explicit learning goals and standards. To show how this requirement can be fulfilled, and how research-based assessment can effectively support learning and instruction, this article…
Descriptors: Student Evaluation, Performance Based Assessment, Scaling, Scoring
Peer reviewed
Norvilitis, Jill M.; Zhang, Jie – Educational Assessment, Evaluation and Accountability, 2009
A total of 232 college students in six different courses in three departments participated in a study to examine the effect of perceived course mean on course and instructor evaluations. Following a midsemester exam, students were given their actual earned exam scores and a manipulated class mean that was either ten percentage points higher or…
Descriptors: Academic Achievement, Scoring, Higher Education, College Students
Peer reviewed
Borko, Hilda; Stecher, Brian M.; Alonzo, Alicia C.; Moncure, Shannon; McClam, Sherie – Educational Assessment, 2005
This article describes the development of artifact collection and scoring procedures to characterize classroom practice in mathematics and science. A data collection tool called the "Scoop Notebook" was used to gather artifacts related to key features of classroom practice, such as teachers' use of instructional materials and strategies, classroom…
Descriptors: Teaching Methods, Scoring, Mathematics Instruction, Instructional Materials
Peer reviewed
Dawson, Theo L.; Wilson, Mark – Educational Assessment, 2004
The evaluation of developmental interventions has been hampered by a lack of practical, reliable, and objective developmental assessment systems. This article describes the construction of a domain-general computerized developmental assessment system for texts: the Lexical Abstraction Assessment System (LAAS). The LAAS provides assessments of the…
Descriptors: Scoring, Evaluation Methods, Discriminant Analysis, Computer Uses in Education
Peer reviewed
Schaeffer, Gary A.; Henderson-Montero, Diane; Julian, Marc; Bene, Nancy H. – Educational Assessment, 2002
A number of methods for scoring tests with selected-response (SR) and constructed-response (CR) items are available. The selection of a method depends on the requirements of the program, the particular psychometric model and assumptions employed in the analysis of item and score data, and how scores are to be used. This article compares 3 methods:…
Descriptors: Scoring, Responses, Test Items, Raw Scores
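The simplest member of this family of scoring methods is a weighted raw composite. The Python sketch below is a generic illustration only; it is not one of the three specific methods the article compares, and the weight and score ranges are invented.

```python
def composite_raw(sr_correct, cr_ratings, cr_weight=2.0):
    """One generic way to combine section scores: each constructed-response
    (CR) rubric point is weighted relative to a selected-response (SR) point.
    The weight of 2.0 is a hypothetical program choice, not a standard."""
    return sr_correct + cr_weight * sum(cr_ratings)

# e.g. 30 of 40 SR items correct, two CR tasks rated 3 and 2 on a 0-4 rubric
print(composite_raw(30, [3, 2]))  # -> 40.0
```

As the abstract notes, the appropriate choice depends on the program's requirements, the psychometric model, and the intended score use, not on arithmetic convenience alone.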
Peer reviewed
Callahan, Susan – Educational Assessment, 2001
Presents the case study of a high school English department's decision making process during portfolio scoring, focusing on nine teachers, to show one unintended consequence of using portfolios for accountability. Findings show that because the teachers could not be equally caring, just, and truthful to all stakeholders, they experienced ethical…
Descriptors: Accountability, Case Studies, Decision Making, English
Peer reviewed
Rogosa, David – Educational Assessment, 2001
Illustrates that there is a critical distinction between reliability and accuracy, explaining through a hypothetical example of shoe fitting that high test reliability does not guarantee good accuracy without consideration of percentile rank scoring, complex measurement models, and other technical detail. (SLD)
Descriptors: Measurement Techniques, Reliability, Scoring
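Rogosa's distinction can be illustrated with a small simulation: observed scores that correlate highly with true scores (high reliability) can still misplace many individual students' percentile ranks. This Python sketch is a hypothetical illustration in the spirit of the article, not its shoe-fitting example; all numbers are invented.

```python
import random

def correlation(xs, ys):
    """Pearson correlation, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def percentile_ranks(xs):
    """Percentile rank (0-100) of each value within the group."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = 100 * pos / (len(xs) - 1)
    return ranks

random.seed(42)
N = 10_000
true = [random.gauss(0, 1) for _ in range(N)]
# Measurement-error SD chosen so that reliability is about 0.9
noise_sd = (1 / 0.9 - 1) ** 0.5
obs = [t + random.gauss(0, noise_sd) for t in true]

r = correlation(true, obs)
reliability = r * r

pr_true = percentile_ranks(true)
pr_obs = percentile_ranks(obs)
big_miss = sum(abs(a - b) > 10 for a, b in zip(pr_true, pr_obs)) / N

print(f"reliability ~ {reliability:.2f}; "
      f"share of students whose percentile rank is off by > 10 points: {big_miss:.2f}")
```

Despite a reliability near 0.9, a substantial fraction of simulated students land more than ten percentile points from their true rank, which is the reliability-versus-accuracy gap the article emphasizes.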
Peer reviewed
Goldberg, Gail Lynn; Roswell, Barbara Sherr – Educational Assessment, 2000
Studied the impact of experience scoring the Maryland School Performance Assessment tasks on teachers' instructional and classroom assessment practice. Interview data, questionnaires, classroom observation, and classroom artifacts from approximately 5 teacher-scorers demonstrated that teachers' appropriation of performance-based instruction may be…
Descriptors: Educational Practices, Elementary Education, Elementary School Teachers, Experience