Showing 661 to 675 of 9,262 results
Peer reviewed
Gombert, Sebastian; Di Mitri, Daniele; Karademir, Onur; Kubsch, Marcus; Kolbe, Hannah; Tautz, Simon; Grimm, Adrian; Bohm, Isabell; Neumann, Knut; Drachsler, Hendrik – Journal of Computer Assisted Learning, 2023
Background: Formative assessments are needed to monitor how student knowledge develops throughout a unit. Constructed response items, which require learners to formulate their own free-text responses, are well suited for testing their active knowledge. However, assessing such constructed responses in an automated fashion is a complex task… (A minimal illustrative scoring sketch follows this entry.)
Descriptors: Coding, Energy, Scientific Concepts, Formative Evaluation
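The entry above concerns automated scoring of constructed (free-text) responses. As a minimal sketch only, not the authors' system (the tiny training set, labels, and pipeline choices are invented for illustration), a bag-of-words classifier is a common baseline for this task:

    # Baseline constructed-response scorer: TF-IDF features + logistic regression.
    # Assumes scikit-learn is installed; data and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    responses = [
        "energy is transferred from the ball to the ground",
        "the ball just stops",
        "kinetic energy converts to heat and sound on impact",
        "because gravity",
    ]
    scores = [1, 0, 1, 0]  # human-assigned score categories

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(responses, scores)
    print(model.predict(["energy changes into heat when it hits the floor"]))

In practice such scorers are trained on hundreds of human-coded responses per item and evaluated against human-human agreement.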
Peer reviewed
Saša Horvat; Dušica Rodić; Nevena Jović; Tamara Rončević; Snežana Babić-Kekez – Center for Educational Policy Studies Journal, 2023
The main goal of this study was to validate the strategy for the assessment of the cognitive complexity of chemical kinetics exam items. The strategy included three steps: 1) assessment of the difficulty of concepts, 2) assessment of distractor value, and 3) assessment of concepts' interactivity. One of the tasks was to determine whether there…
Descriptors: Cognitive Measurement, Performance Based Assessment, Chemistry, Thinking Skills
Yeom, Semi – ProQuest LLC, 2023
Data literacy is crucial for adolescents to access and navigate data in today's technology-driven world. Researchers emphasize the need for K-12 students to attain data literacy. However, few available instructional programs have incorporated validated assessments. Therefore, I developed and implemented the Data literacy Assessment for Middle graders…
Descriptors: Language Minorities, Minority Group Students, Middle School Students, Statistics Education
Peer reviewed
Emery-Wetherell, Meaghan; Wang, Ruoyao – Assessment & Evaluation in Higher Education, 2023
Over four semesters of a large introductory statistics course, we found students engaging in contract cheating on Chegg.com during multiple-choice examinations. In this paper, we describe our methodology for identifying, addressing, and eventually eliminating cheating. We successfully identified 23 out of 25 students using a combination…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Cheating, Identification
Peer reviewed
Julien Corven; Teo Paoletti; Allison L. Gantt – North American Chapter of the International Group for the Psychology of Mathematics Education, 2023
We previously (Gantt et al., 2023; Paoletti et al., 2021) identified items from the publicly released TIMSS 2011 assessments that had potential for students to employ covariational reasoning as a solution strategy. In this report, we explore the extent to which fourth-grade students' performance on such items in mathematics differed among 26…
Descriptors: Achievement Tests, Foreign Countries, Mathematics Achievement, Mathematics Tests
Maarten T. P. Beerepoot – Journal of Chemical Education, 2023
Digital automated assessment is a valuable and time-efficient tool for educators to provide immediate and objective feedback to learners. Automated assessment, however, puts high demands on the quality of the questions, alignment with the intended learning outcomes, and the quality of the feedback provided to the learners. Here we describe the…
Descriptors: Formative Evaluation, Summative Evaluation, Chemistry, Science Instruction
Cari F. Herrmann-Abell; George E. DeBoer – Grantee Submission, 2023
This study describes the role that Rasch measurement played in the development of assessments aligned to the "Next Generation Science Standards," tasks that require students to use the three dimensions of science practices, disciplinary core ideas, and crosscutting concepts to make sense of energy-related phenomena (the underlying Rasch model is sketched after this entry). A set of 27…
Descriptors: Item Response Theory, Computer Simulation, Science Tests, Energy
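For readers unfamiliar with Rasch measurement, the dichotomous Rasch model that underlies this kind of scale development gives the probability of a correct response in terms of person ability \theta_p and item difficulty b_i:

    P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}

Fit to this model is what licenses placing items and students on a single common scale.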
Peer reviewed
Huggins-Manley, Anne Corinne; Qiu, Yuxi; Penfield, Randall D. – International Journal of Testing, 2018
Score equity assessment (SEA) refers to an examination of population invariance of equating across two or more subpopulations of test examinees (a common SEA summary statistic is sketched after this entry). Previous SEA studies have shown that score equity may be present for examinees scoring at particular test score ranges but absent for examinees scoring at other score ranges. No studies to date have…
Descriptors: Equated Scores, Test Bias, Test Items, Difficulty Level
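As a sketch of the kind of statistic SEA work relies on (the notation here is an illustrative convention, not necessarily the authors'), a root-mean-square difference compares each subpopulation's equating function e_j(x) with the overall function e(x) at every score point x:

    \mathrm{RMSD}(x) = \sqrt{ \sum_j w_j \bigl( e_j(x) - e(x) \bigr)^2 }

where w_j are subpopulation weights. Because RMSD is a function of x, it can be near zero in some score ranges and large in others, which is exactly the range-dependent (non-)invariance the entry describes.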
Peer reviewed
Hauenstein, Clifford E.; Embretson, Susan E. – Journal of Cognitive Education and Psychology, 2020
The Concept Formation subtest of the Woodcock-Johnson Tests of Cognitive Abilities is a dynamic test because the examiner provides continual feedback to the examinee. Yet the original scoring protocol for the test largely ignores this dynamic structure (the explanatory model family is sketched after this entry). The current analysis applies a dynamic adaptation of an explanatory item response…
Descriptors: Test Items, Difficulty Level, Cognitive Tests, Cognitive Ability
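The truncated abstract refers to an explanatory item response model. As an assumption about the model family (a sketch, not the authors' exact specification), the linear logistic test model decomposes item difficulty into weighted cognitive features:

    \mathrm{logit}\, P(X_{pi} = 1) = \theta_p - \sum_k q_{ik}\, \eta_k

where q_{ik} indicates whether design feature k is present in item i and \eta_k is that feature's difficulty contribution; a dynamic adaptation could, for example, add terms for the feedback an examinee has received on earlier trials.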
Peer reviewed
Bayrakci, Mustafa; Karacaoglu, Ömer Cem – International Journal of Curriculum and Instruction, 2020
Learning outcomes are the first and most essential element of a curriculum, and their correct and rigorous determination is very important for ensuring that formal education in schools is well planned and that curricula are designed and applied effectively. This is because the other elements of the curriculum, which are content,…
Descriptors: Foreign Countries, Occupational Tests, Curriculum Development, Teaching (Occupation)
Peer reviewed
Myszkowski, Nils – Journal of Intelligence, 2020
Raven's Standard Progressive Matrices (Raven 1941) is a widely used 60-item measure of general mental ability. It was recently suggested that, for situations where taking this test is too time consuming, a shorter version comprising only the last series of the Standard Progressive Matrices (Myszkowski and Storme 2018) could be used, while…
Descriptors: Intelligence Tests, Psychometrics, Nonparametric Statistics, Item Response Theory
Peer reviewed
Partchev, Ivailo – Journal of Intelligence, 2020
We analyze a 12-item version of Raven's Standard Progressive Matrices test, traditionally scored with the sum score. We discuss some important differences between assessment in practice and psychometric modelling. We demonstrate some advanced diagnostic tools in the freely available R package dexter (a comparable classical diagnostic is sketched after this entry). We find that the first item in the test…
Descriptors: Intelligence Tests, Scores, Psychometrics, Diagnostic Tests
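dexter is an R package, so the sketch below is a language-neutral stand-in, not dexter's API: it computes item-rest correlations, one of the classical diagnostics that can expose a misbehaving item such as the first item mentioned above. The response matrix is invented.

    import numpy as np

    # Invented 0/1 response matrix: rows = persons, columns = items.
    rng = np.random.default_rng(0)
    X = (rng.random((200, 12)) < 0.6).astype(int)

    # Item-rest correlation: each item against the sum of the other items.
    total = X.sum(axis=1)
    for i in range(X.shape[1]):
        rest = total - X[:, i]
        r = np.corrcoef(X[:, i], rest)[0, 1]
        print(f"item {i + 1}: item-rest r = {r:.3f}")

A markedly low or negative item-rest correlation is a typical first sign that an item does not measure what the rest of the test measures.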
Peer reviewed
Metsämuuronen, Jari – International Journal of Educational Methodology, 2020
Kelley's Discrimination Index (DI) is a simple and robust classical non-parametric shortcut for estimating item discrimination power (IDP) in practical educational settings. Unlike the item-total correlation, the DI can reach the extreme values of +1 and -1, and it is stable against outliers (a worked sketch follows this entry). Because of its computational ease, the DI is…
Descriptors: Test Items, Computation, Item Analysis, Nonparametric Statistics
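As a worked sketch (invented data, not the article's code): Kelley's DI for a dichotomous item is the proportion correct among the top 27% of total scorers minus the proportion correct among the bottom 27%, which is why it can attain exactly +1 and -1.

    import numpy as np

    def kelley_di(item, total, tail=0.27):
        # DI = p(correct | top tail) - p(correct | bottom tail); tail = 27% by default.
        n = len(total)
        k = max(1, int(round(tail * n)))
        order = np.argsort(total)          # ascending by total score
        low, high = order[:k], order[-k:]
        return item[high].mean() - item[low].mean()

    # Invented data: 100 examinees, one 0/1 item that tracks the total score.
    rng = np.random.default_rng(1)
    total = rng.integers(0, 31, size=100)
    item = (rng.random(100) < total / 30).astype(float)
    print(round(kelley_di(item, total), 3))

Nothing here requires distributional assumptions, which is the robustness the abstract points to.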
Peer reviewed
Liu, Yue; Cheng, Ying; Liu, Hongyun – Educational and Psychological Measurement, 2020
The responses of non-effortful test-takers can have serious consequences, as non-effortful responses impair model calibration and latent trait inferences (a simplified illustration follows this entry). This article introduces a mixture model, using both response accuracy and response time information, to help differentiate non-effortful from effortful individuals and to improve item…
Descriptors: Item Response Theory, Test Wiseness, Response Style (Tests), Reaction Time
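The article's method is a mixture model over accuracy and response times; the sketch below is a deliberately simpler stand-in (a threshold heuristic in the spirit of rapid-guessing flags, not the authors' model) to illustrate how response times can separate non-effortful responding. All values are invented.

    import numpy as np

    # Invented response times in seconds: rows = persons, columns = items.
    rng = np.random.default_rng(2)
    rt = rng.lognormal(mean=3.0, sigma=0.5, size=(500, 10))

    # Flag a response as possibly non-effortful when it is faster than
    # 10% of that item's median time (a simple normative threshold).
    threshold = 0.10 * np.median(rt, axis=0)
    flags = rt < threshold
    print("flagged responses per item:", flags.sum(axis=0))

A mixture model replaces the hard threshold with class membership probabilities estimated jointly with the item parameters.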
Peer reviewed
Leventhal, Brian; Ames, Allison – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of "Monte Carlo simulation studies" (MCSS) in "item response theory" (IRT) (a minimal MCSS sketch follows this entry). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because…
Descriptors: Item Response Theory, Monte Carlo Methods, Simulation, Test Items
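As a minimal sketch of the MCSS workflow the module covers (simulate data under known parameters, estimate, summarize recovery), the example below generates 2PL responses and evaluates a deliberately crude difficulty estimator; all parameter values are invented.

    import numpy as np

    rng = np.random.default_rng(3)
    n_persons, n_items, n_reps = 1000, 20, 50
    a = rng.uniform(0.8, 2.0, n_items)    # true discriminations
    b = rng.normal(0.0, 1.0, n_items)     # true difficulties

    bias = np.zeros(n_items)
    for _ in range(n_reps):
        theta = rng.normal(0.0, 1.0, (n_persons, 1))
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))      # 2PL probabilities
        x = (rng.random((n_persons, n_items)) < p).astype(int)
        pc = x.mean(axis=0).clip(0.001, 0.999)
        b_hat = -np.log(pc / (1 - pc))                  # crude logit estimate
        bias += (b_hat - b) / n_reps

    print("mean absolute bias:", float(np.abs(bias).mean()))

The nonzero bias such a study reveals is the point: Monte Carlo simulation quantifies how an estimator behaves when no analytic answer is available.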