Showing 1 to 15 of 57 results
Peer reviewed
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items function similarly across different groups of individuals. However, differential item functioning (DIF) occurs when individuals of equal ability from different groups respond differently to the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
Peer reviewed
Ayse Bilicioglu Gunes; Bayram Bicak – International Journal of Assessment Tools in Education, 2023
The main purpose of this study is to examine the Type I error and statistical power ratios of Differential Item Functioning (DIF) techniques based on different theories under different conditions. For this purpose, a simulation study was conducted by using Mantel-Haenszel (MH), Logistic Regression (LR), Lord's χ², and Raju's Areas…
Descriptors: Test Items, Item Response Theory, Error of Measurement, Test Bias
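The abstract above names the Mantel-Haenszel (MH) procedure for DIF detection. As a rough illustration only (this is not the study's code; the data layout and function name are invented), the MH common odds ratio for one dichotomous item stratifies examinees by their score on the remaining items and pools 2×2 group-by-correctness tables across strata:

```python
# Hypothetical sketch of the Mantel-Haenszel DIF statistic; data
# representation and names are illustrative, not from the study.
import math
from collections import defaultdict

def mh_dif(responses, groups, item):
    """responses: per-examinee lists of 0/1 item scores;
    groups: parallel 'ref'/'focal' labels.
    Returns the MH common odds ratio and the ETS delta-MH for one item."""
    # Stratify examinees by total score on the other items (the matching variable).
    tables = defaultdict(lambda: [0, 0, 0, 0])  # [A, B, C, D] per stratum
    for resp, grp in zip(responses, groups):
        stratum = sum(resp) - resp[item]
        right = resp[item]
        if grp == "ref":
            tables[stratum][0 if right else 1] += 1  # A: ref correct, B: ref wrong
        else:
            tables[stratum][2 if right else 3] += 1  # C: focal correct, D: focal wrong
    num = den = 0.0
    for a, b, c, d in tables.values():
        n = a + b + c + d
        if n:  # skip empty strata
            num += a * d / n   # reference-correct x focal-wrong
            den += b * c / n   # reference-wrong x focal-correct
    alpha = num / den          # > 1 favours the reference group
    return alpha, -2.35 * math.log(alpha)  # ETS delta-MH scale
```

An odds ratio well above 1 (a large negative delta-MH) flags the item as harder for the focal group at matched ability, which is the DIF pattern the compared techniques try to detect.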
Peer reviewed
Balbuena, Sherwin – International Journal of Assessment Tools in Education, 2023
Depression is a latent characteristic that is measured through self-reported or clinician-mediated instruments such as scales and inventories. The precision of depression estimates largely depends on the validity of the items used and on the truthfulness of people responding to these items. The existing methodology in instrumentation based on a…
Descriptors: Depression (Psychology), Test Items, Test Validity, Test Reliability
Peer reviewed
Bruno D. Zumbo – International Journal of Assessment Tools in Education, 2023
In line with the journal volume's theme, this essay considers lessons from the past and visions for the future of test validity. In the first part of the essay, a description of historical trends in test validity since the early 1900s leads to the natural question of whether the discipline has progressed in its definition and description of test…
Descriptors: Test Theory, Test Validity, True Scores, Definitions
Peer reviewed
Qiwei He – International Journal of Assessment Tools in Education, 2023
Collaborative problem solving (CPS) is inherently an interactive, conjoint, dual-strand process that considers how a student reasons about a problem as well as how s/he interacts with others to regulate social processes and exchange information (OECD, 2013). Measuring CPS skills presents a challenge for obtaining consistent, accurate, and reliable…
Descriptors: Cooperative Learning, Problem Solving, Test Items, International Assessment
Peer reviewed
Ince Araci, F. Gul; Tan, Seref – International Journal of Assessment Tools in Education, 2022
Computerized Adaptive Testing (CAT) is a beneficial test technique that decreases the number of items that need to be administered by selecting items in accordance with each individual's ability level. After CAT applications were first constructed on the basis of unidimensional Item Response Theory (IRT), Multidimensional CAT (MCAT) applications have…
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Item Response Theory
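The CAT idea summarized above — administer the item that is most informative at the examinee's current ability estimate — can be sketched under a unidimensional 2PL model. This is a generic illustration with invented item parameters, not the study's implementation:

```python
# Minimal sketch of maximum-information item selection in CAT under
# the 2PL IRT model; item parameters here are invented for illustration.
import math

def prob_2pl(theta, a, b):
    """Probability of a correct response: a = discrimination, b = difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """items: list of (a, b) pairs; returns the index of the
    not-yet-administered item with maximum information at theta."""
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))
```

Each administration step would re-estimate theta from the responses so far and call `next_item` again, which is why CAT reaches a stable ability estimate with fewer items than a fixed-form test.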
Peer reviewed
Uyar, Seyma; Yayla, Onur; Zunber, Hidayet – International Journal of Assessment Tools in Education, 2022
The purpose of the current study is to examine the map reading skills of Social Studies pre-service teachers with orienteering, which is an activity-based and more active practice. To this end, a total of 10 students attending the Department of Social Studies Teaching in the Education Faculty of Burdur Mehmet Akif Ersoy University and taking the…
Descriptors: Map Skills, Navigation, Item Response Theory, Social Studies
Peer reviewed
Celen, Umit; Aybek, Eren Can – International Journal of Assessment Tools in Education, 2022
Item analysis is performed by developers as an integral part of the scale development process. Thus, items are excluded from the scale depending on the item analysis prior to the factor analysis. Existing item discrimination indices are calculated based on correlation, yet items with different response patterns are likely to have a similar item…
Descriptors: Likert Scales, Factor Analysis, Item Analysis, Correlation
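The correlation-based discrimination indices this abstract refers to are commonly computed as the corrected item-total correlation: the correlation between an item's score and the total of the remaining items. A minimal sketch, with made-up data and a hypothetical function name:

```python
# Sketch of a corrected item-total correlation discrimination index;
# the data layout is illustrative, not from the study.
import math
import statistics

def corrected_item_total(scores, item):
    """scores: per-person lists of item scores (e.g. Likert 1-5).
    Returns the Pearson correlation between the given item's scores
    and the total score computed over the remaining items."""
    x = [row[item] for row in scores]
    rest = [sum(row) - row[item] for row in scores]  # total excluding the item
    mx, mr = statistics.fmean(x), statistics.fmean(rest)
    cov = sum((a - mx) * (b - mr) for a, b in zip(x, rest))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sr = math.sqrt(sum((b - mr) ** 2 for b in rest))
    return cov / (sx * sr)
```

The study's point is visible in this formula: because it is a single correlation, items with quite different response patterns can yield similar index values.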
Peer reviewed
Arslan Mancar, Sinem; Gulleroglu, H. Deniz – International Journal of Assessment Tools in Education, 2022
The aim of this study is to analyse the importance of the number of raters and compare the results obtained by techniques based on Classical Test Theory (CTT) and Generalizability (G) Theory. The Kappa and Krippendorff alpha techniques based on CTT were used to determine inter-rater reliability. In this descriptive research, the data consist of…
Descriptors: Comparative Analysis, Interrater Reliability, Advanced Placement, Scoring Rubrics
Peer reviewed
Mor, Ezgi; Kula-Kartal, Seval – International Journal of Assessment Tools in Education, 2022
Dimensionality is one of the most investigated concepts in psychological assessment, and there are many ways to determine the dimensionality of a measured construct. The Automated Item Selection Procedure (AISP) and DETECT are non-parametric methods aiming to determine the factorial structure of a data set. In the current study,…
Descriptors: Psychological Evaluation, Nonparametric Statistics, Test Items, Item Analysis
Peer reviewed
Ozdemir, Hasan Fehmi; Kutlu, Omer; Huang, Shaofu; Crick, Ruth – International Journal of Assessment Tools in Education, 2022
The aim of this study is to adapt the Crick Learning for Resilient Agency (CLARA) to Turkish culture, and to examine the psychometric features of the Inventory according to both Classical Test Theory (CTT) and Item Response Theory (IRT). In this respect, it is a descriptive level survey design research. Two different study groups were formed in…
Descriptors: Item Response Theory, Psychometrics, English (Second Language), English Literature
Peer reviewed
Saatcioglu, Fatima Munevver; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2022
This study aims to examine the effects of mixture item response theory (IRT) models on item parameter estimation and classification accuracy under different conditions. The manipulated variables of the simulation study are set as mixture IRT models (Rasch, 2PL, 3PL); sample size (600, 1000); the number of items (10, 30); the number of latent…
Descriptors: Accuracy, Classification, Item Response Theory, Programming Languages
Peer reviewed
Sayin, Ayfer; Sata, Mehmet – International Journal of Assessment Tools in Education, 2022
The aim of the present study was to examine Turkish teacher candidates' competency levels in writing different types of test items by utilizing Rasch analysis. In addition, the effect of the expertise of the raters scoring the items written by the teacher candidates was examined within the scope of the study. 84 Turkish teacher candidates…
Descriptors: Foreign Countries, Item Response Theory, Evaluators, Expertise
Peer reviewed
Özdogan, Didem; Kelecioglu, Hülya – International Journal of Assessment Tools in Education, 2022
This study aims to analyze differential bundle functioning in multidimensional tests, with the specific purpose of detecting this effect by varying the location of the DIF item in the test, the correlation between the dimensions, the sample size, and the ratio of reference to focal group size. The first 10 items of the test that is…
Descriptors: Correlation, Sample Size, Test Items, Item Analysis
Peer reviewed
Aybek, Eren Can; Toraman, Cetin – International Journal of Assessment Tools in Education, 2022
The current study investigates the optimum number of response categories for Likert-type scales under item response theory (IRT). The data were collected from university students, mainly attending the faculty of medicine and the faculty of education. A form of the "Social Gender Equity Scale" developed by Gozutok et al. (2017)…
Descriptors: Likert Scales, Item Response Theory, College Students, Test Reliability