Showing 1 to 15 of 256 results
Peer reviewed
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
Peer reviewed
Liu, Ren; Liu, Haiyan; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2022
Assessments with a large number of small, similar, or often repetitive tasks are being used in educational, neurocognitive, and psychological contexts. For example, respondents are asked to recognize numbers or letters from a large pool, and the number of correct answers is a count variable. In 1960, George Rasch developed the Rasch…
Descriptors: Classification, Models, Statistical Distributions, Scores
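The entry above refers to Rasch's 1960 work on count outcomes; a minimal sketch of the Rasch Poisson counts model (the model that work is best known for in this setting) is given below. The person and task parameter values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Rasch Poisson counts model sketch: the count X_ij of correct answers
# for person i on task j is Poisson-distributed with rate
# lambda_ij = theta_i * delta_j, where theta_i is person ability and
# delta_j is task easiness. All parameter values are illustrative.

rng = np.random.default_rng(0)

theta = np.array([0.5, 1.0, 2.0])   # person ability parameters (assumed)
delta = np.array([4.0, 8.0])        # task easiness parameters (assumed)

lam = np.outer(theta, delta)        # expected counts lambda_ij
counts = rng.poisson(lam)           # simulated count responses

print(lam)     # [[ 2.  4.], [ 4.  8.], [ 8. 16.]]
print(counts)  # one random draw of count responses
```

The multiplicative rate structure is what makes the model a Rasch-type model: the ratio of expected counts between two persons is the same on every task.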
Peer reviewed
Calatrava, Maria; de Irala, Jokin; Osorio, Alfonso; Benítez, Edgar; Lopez-del Burgo, Cristina – Educational and Psychological Measurement, 2022
Anonymous questionnaires are frequently used in research with adolescents in order to obtain sincere answers about sensitive topics. Most longitudinal studies include self-generated identification codes (SGICs) to match information. Typical elements include a combination of letters and digits from personal data. However, these data may make the…
Descriptors: Privacy, Questionnaires, Coding, Adolescents
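A minimal sketch of a self-generated identification code (SGIC) of the kind described above: the respondent combines letters and digits derived from personal data, so the same code can be regenerated at each survey wave without storing identifying information. The specific elements used here (mother's first initial, birth day, number of older siblings) are illustrative assumptions, not the scheme used in the study.

```python
# Hypothetical SGIC scheme: combine a letter and digits from personal
# data into a short code that is stable across survey waves.

def make_sgic(mother_initial: str, birth_day: int, older_siblings: int) -> str:
    """Build a self-generated identification code (illustrative scheme)."""
    return f"{mother_initial.upper()}{birth_day:02d}{older_siblings}"

# The same inputs regenerate the same code at a later wave,
# regardless of capitalization:
code_wave1 = make_sgic("a", 7, 2)
code_wave2 = make_sgic("A", 7, 2)
assert code_wave1 == code_wave2 == "A072"
```

Normalizing case and zero-padding the day are what make the code reproducible; as the abstract notes, however, the personal elements chosen can still make re-identification a privacy concern.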
Peer reviewed
Menold, Natalja; Raykov, Tenko – Educational and Psychological Measurement, 2022
The possible dependency of criterion validity on item formulation in a multicomponent measuring instrument is examined. The discussion is concerned with evaluation of the differences in criterion validity between two or more groups (populations/subpopulations) that have been administered instruments with items having differently formulated item…
Descriptors: Test Items, Measures (Individuals), Test Validity, Difficulty Level
Peer reviewed
Jin, Kuan-Yu; Eckes, Thomas – Educational and Psychological Measurement, 2022
Performance assessments heavily rely on human ratings. These ratings are typically subject to various forms of error and bias, threatening the assessment outcomes' validity and fairness. Differential rater functioning (DRF) is a special kind of threat to fairness manifesting itself in unwanted interactions between raters and performance- or…
Descriptors: Performance Based Assessment, Rating Scales, Test Bias, Student Evaluation
Peer reviewed
Rios, Joseph A. – Educational and Psychological Measurement, 2021
Low test-taking effort as a validity threat is common when examinees perceive an assessment context to have minimal personal value. Prior research has shown that in such contexts, subgroups may differ in their effort, which raises two concerns when making subgroup mean comparisons. First, it is unclear how differential effort could influence…
Descriptors: Response Style (Tests), Statistical Analysis, Measurement, Comparative Analysis
Peer reviewed
Weigl, Klemens; Forstner, Thomas – Educational and Psychological Measurement, 2021
Paper-based visual analogue scale (VAS) items were developed 100 years ago. Although they gained great popularity in clinical and medical research for assessing pain, they have scarcely been applied in other areas of psychological research for several decades. However, since the beginning of digitization, VAS have attracted growing interest among…
Descriptors: Test Construction, Visual Measures, Gender Differences, Foreign Countries
Peer reviewed
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
Peer reviewed
Thompson, Yutian T.; Song, Hairong; Shi, Dexin; Liu, Zhengkui – Educational and Psychological Measurement, 2021
Conventional approaches for selecting a reference indicator (RI) could lead to misleading results in testing for measurement invariance (MI). Several newer quantitative methods have been available for more rigorous RI selection. However, it is still unknown how well these methods perform in terms of correctly identifying a truly invariant item to…
Descriptors: Measurement, Statistical Analysis, Selection, Comparative Analysis
Peer reviewed
Jiang, Zhehan; Shi, Dexin; Distefano, Christine – Educational and Psychological Measurement, 2021
The costs of an objective structured clinical examination (OSCE) are of concern to health profession educators globally. As OSCEs are usually designed under the generalizability theory (G-theory) framework, this article proposes a machine-learning-based approach to optimize the costs, while maintaining the minimum required generalizability…
Descriptors: Artificial Intelligence, Generalizability Theory, Objective Tests, Foreign Countries
Peer reviewed
Revuelta, Javier; Franco-Martínez, Alicia; Ximénez, Carmen – Educational and Psychological Measurement, 2021
Situational judgment tests have gained popularity in educational and psychological measurement and are widely used in personnel assessment. A situational judgment item presents a hypothetical scenario and a list of actions, and the individuals are asked to select their most likely action for that scenario. Because actions have no explicit order,…
Descriptors: Factor Analysis, Situational Tests, Statistical Analysis, Sex Stereotypes
Peer reviewed
Beauducel, André; Kersting, Martin – Educational and Psychological Measurement, 2020
We investigated by means of a simulation study how well methods for factor rotation can identify a two-facet simple structure. Samples were generated from orthogonal and oblique two-facet population factor models with 4 (2 factors per facet) to 12 factors (6 factors per facet). Samples drawn from orthogonal populations were submitted to factor…
Descriptors: Factor Structure, Factor Analysis, Sample Size, Intelligence
Peer reviewed
Huang, Hung-Yu – Educational and Psychological Measurement, 2020
In educational assessments and achievement tests, test developers and administrators commonly assume that test-takers attempt all test items with full effort and leave no blank responses with unplanned missing values. However, aberrant response behavior--such as performance decline, dropping out beyond a certain point, and skipping certain items…
Descriptors: Item Response Theory, Response Style (Tests), Test Items, Statistical Analysis
Peer reviewed
Ranger, Jochen; Kuhn, Jörg Tobias; Ortner, Tuulia M. – Educational and Psychological Measurement, 2020
The hierarchical model of van der Linden is the most popular model for responses and response times in tests. It is composed of two separate submodels--one for the responses and one for the response times--that are joined at a higher level. The submodel for the response times is based on the lognormal distribution. The lognormal distribution is a…
Descriptors: Reaction Time, Tests, Statistical Distributions, Models
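The lognormal response-time submodel mentioned above can be sketched briefly: in van der Linden's hierarchical model, the log response time of a person on an item is normally distributed with mean equal to the item's time intensity minus the person's speed, and standard deviation given by the inverse of an item discrimination parameter. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Lognormal response-time submodel sketch (van der Linden's hierarchical
# model): log T_ij ~ Normal(beta_j - tau_i, 1 / alpha_j), where beta_j is
# item time intensity, tau_i is person speed, and alpha_j is item time
# discrimination. Values below are illustrative, not from the article.

rng = np.random.default_rng(1)

tau = 0.3     # person speed (higher = faster)
beta = 4.0    # item time intensity on the log-seconds scale
alpha = 2.0   # item time discrimination

log_t = rng.normal(loc=beta - tau, scale=1.0 / alpha, size=10_000)
t = np.exp(log_t)   # simulated response times in seconds

# The median of a lognormal is exp(beta - tau), about 40.4 s here.
print(np.median(t))
```

The lognormal's right skew and strictly positive support are what make it a natural default for response times, which is the property the abstract's discussion of the submodel turns on.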
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis