50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 1,800 results
Peer reviewed
Direct link
Almond, Russell G. – International Journal of Testing, 2014
Assessments consisting of only a few extended constructed-response items (essays) are not typically equated using anchor test designs, as there are usually too few essay prompts in each form to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common…
Descriptors: Automation, Equated Scores, Writing Tests, Essay Tests
Peer reviewed
Direct link
Park, Kwanghyun – Language Assessment Quarterly, 2014
This article outlines the current state of and recent developments in the use of corpora for language assessment and considers future directions with a special focus on computational methodology. Since corpora began to make inroads into language assessment in the 1990s, test developers have increasingly used them as a reference resource to…
Descriptors: Language Tests, Computational Linguistics, Natural Language Processing, Scoring
Peer reviewed
Direct link
Wells, Craig S.; Hambleton, Ronald K.; Kirkpatrick, Robert; Meng, Yu – Applied Measurement in Education, 2014
The purpose of the present study was to develop and evaluate two procedures for flagging consequential item parameter drift (IPD) in an operational testing program. The first procedure was based on flagging items that exhibit a meaningful magnitude of IPD, using a critical value defined to represent barely tolerable IPD. The second procedure…
Descriptors: Test Items, Test Bias, Equated Scores, Item Response Theory
Peer reviewed
Direct link
Liu, Ou Lydia; Brew, Chris; Blackmore, John; Gerard, Libby; Madhok, Jacquie; Linn, Marcia C. – Educational Measurement: Issues and Practice, 2014
Content-based automated scoring has been applied in a variety of science domains. However, many prior applications involved simplified scoring rubrics without considering rubrics representing multiple levels of understanding. This study tested a concept-based scoring tool for content-based scoring, c-rater™, for four science items with rubrics…
Descriptors: Science Tests, Test Items, Scoring, Automation
Peer reviewed
Direct link
Higgins, Derrick; Heilman, Michael – Educational Measurement: Issues and Practice, 2014
As methods for automated scoring of constructed-response items become more widely adopted in state assessments, and are used in more consequential operational configurations, it is critical that their susceptibility to gaming behavior be investigated and managed. This article provides a review of research relevant to how construct-irrelevant…
Descriptors: Automation, Scoring, Responses, Test Wiseness
Peer reviewed
Direct link
Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M. – Journal of Speech, Language, and Hearing Research, 2014
Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…
Descriptors: Reliability, Automation, Measurement, Expertise
Peer reviewed
Direct link
Cavus, Nadire – Interactive Learning Environments, 2013
Learning management systems (LMSs) contain hidden costs, unclear user environments, bulky developer and administration manuals, and limitations with regard to interoperability, integration, localization, and bandwidth requirements. Careful evaluation is required in selecting the most appropriate LMS for use, and this is a general problem in…
Descriptors: Foreign Countries, Developing Nations, Integrated Learning Systems, Computer Software Evaluation
Peer reviewed
Direct link
Zipitria, I.; Arruarte, A.; Elorriaga, J. A. – Interactive Learning Environments, 2013
In the context of Learning Technologies, the need to be able to assess the learning and domain comprehension in open-ended learner responses has been present in artificial intelligence and education since its beginnings. The advantage of using summaries is that they allow teachers to diagnose comprehension and the amount of information remembered…
Descriptors: Languages, Grading, Connected Discourse, Language Usage
Peer reviewed
Direct link
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2013
Changes to the design and development of our educational assessments are resulting in an unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…
Descriptors: Educational Assessment, Test Items, Automation, Computer Assisted Testing
Peer reviewed
Direct link
Botzer, Assaf; Meyer, Joachim; Parmet, Yisrael – Journal of Experimental Psychology: Applied, 2013
Binary cueing systems assist in many tasks, often alerting people about potential hazards (such as alarms and alerts). We investigate whether cues, besides possibly improving decision accuracy, also affect the effort users invest in tasks and whether the required effort in tasks affects the responses to cues. We developed a novel experimental tool…
Descriptors: Foreign Countries, College Students, Cues, Validity
Peer reviewed
Direct link
Sherin, Bruce – Journal of the Learning Sciences, 2013
A large body of research in the learning sciences has focused on students' commonsense science knowledge--the everyday knowledge of the natural world that is gained outside of formal instruction. Although researchers studying commonsense science have employed a variety of methods, 1-on-1 clinical interviews have played a unique role. The data…
Descriptors: Informal Education, Computational Linguistics, Transcripts (Written Records), Interviews
Peer reviewed
Direct link
Bates, Alan – Physics Teacher, 2013
Instrumentation available for teachers and students has changed considerably during the last 20 years. The data logger-sensor system has the advantage of taking reliable measurements over time with suitable sample rates. This experiment is not an open-ended investigation but an opportunity to explore the established relationship between the…
Descriptors: Water, Physics, Science Instruction, Science Experiments
Peer reviewed
Direct link
T.H.E. Journal, 2013
The West Virginia Department of Education's auto-grading initiative dates back to 2004--a time when school districts were making their first forays into automation. The Charleston-based WVDE had instituted a statewide writing assessment in 1984 for students in fourth, seventh, and 10th grades and was looking to expand that program without…
Descriptors: Automation, Grading, Scoring, Computer Uses in Education
Peer reviewed
Direct link
Harik, Polina; Baldwin, Peter; Clauser, Brian – Applied Psychological Measurement, 2013
Growing reliance on complex constructed response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that "compare" automated scoring strategies. Here, comparisons are made among five strategies for…
Descriptors: Computer Assisted Testing, Automation, Scoring, Comparative Analysis
Peer reviewed
Direct link
Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui – Educational Researcher, 2013
We describe an innovative automated test construction algorithm for building aligned achievement tests. By incorporating the algorithm into the test construction process, along with other test construction procedures for building reliable and unbiased assessments, the result is much more valid tests than result from current test construction…
Descriptors: Achievement Tests, Automation, Test Construction, Alignment (Education)