Publication Date
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 0 |
| Since 2006 (last 10 years) | 1 |
| Since 1996 (last 20 years) | 25 |
Descriptor
| Higher Education | 11 |
| Test Items | 10 |
| Factor Structure | 9 |
| Meta Analysis | 9 |
| Scores | 9 |
| Test Validity | 8 |
| Correlation | 7 |
| Factor Analysis | 6 |
| Test Construction | 6 |
| Comparative Analysis | 5 |
Source
| Educational and Psychological… | 53 |
Author
| Smith, Richard M. | 5 |
| Viswesvaran, Chockalingam | 3 |
| Hurtz, Gregory M. | 2 |
| Kogan, Lori R. | 2 |
| Ones, Deniz S. | 2 |
| Vacha-Haase, Tammi | 2 |
| Alliger, George M. | 1 |
| Andrich, David | 1 |
| Auerbach, Meredith A. | 1 |
| Brennan, Robert L. | 1 |
Publication Type
| Journal Articles | 53 |
| Speeches/Meeting Papers | 53 |
| Reports - Evaluative | 25 |
| Reports - Research | 25 |
| Information Analyses | 3 |
| Historical Materials | 1 |
Showing 1 to 15 of 53 results
Stone, Clement A.; Yeh, Chien-Chi – Educational and Psychological Measurement, 2006
Examination of a test's internal structure can be used to identify what domains or dimensions are being measured, identify relationships between the dimensions, provide evidence for hypothesized multidimensionality and test score interpretations, and identify construct-irrelevant variance. The purpose of this research is to provide a…
Descriptors: Multiple Choice Tests, Factor Structure, Factor Analysis, Licensing Examinations (Professions)
Hurtz, Gregory M.; Auerbach, Meredith A. – Educational and Psychological Measurement, 2003 (Peer reviewed)
Conducted a meta-analysis of studies of procedural modifications of the Angoff method of setting cutoff scores. Findings for 38 studies (113 judges) show that common modifications have produced systematic effects on cutoff scores and the degree of consensus among judges. (SLD)
Descriptors: Cutting Scores, Judges, Meta Analysis, Standard Setting
Cooper-Hakim, Amy; Viswesvaran, Chockalingam – Educational and Psychological Measurement, 2002 (Peer reviewed)
Using meta-analysis, examined the predictive validity of scores on the MacAndrew Alcoholism Scale (C. MacAndrew, 1965). Compared results for 161 studies with results for 63 studies using cut scores. Discusses why the use of continuous measures rather than cut scores is recommended. (SLD)
Descriptors: Alcoholism, Cutting Scores, Meta Analysis, Predictive Validity
Watkins, Marley W.; Greenawalt, Chris G.; Marcel, Catherine M. – Educational and Psychological Measurement, 2002 (Peer reviewed)
Applied factor analysis to the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) scores for 505 gifted students, to evaluate the construct validity of the WISC-III with this population. Results are consistent with the hypothesis that subtests that emphasize speed of reading are not valid for gifted children and suggest that an…
Descriptors: Construct Validity, Elementary Education, Elementary School Students, Factor Analysis
Vacha-Haase, Tammi; Kogan, Lori R.; Tani, Crystal R.; Woodall, Renee A. – Educational and Psychological Measurement, 2001 (Peer reviewed)
Used reliability generalization to explore the variance of scores on 10 Minnesota Multiphasic Personality Inventory (MMPI) clinical scales drawing on 1,972 articles in the literature on the MMPI. Results highlight the premise that scores, not tests, are reliable or unreliable, and they show that study characteristics do influence scores on the…
Descriptors: Clinical Diagnosis, Diagnostic Tests, Generalization, Reliability
Henson, Robin K.; Kogan, Lori R.; Vacha-Haase, Tammi – Educational and Psychological Measurement, 2001 (Peer reviewed)
Studied sources of measurement error variance in the Teacher Efficacy Scale (TES) (Gibson and Dembo, 1984). Used reliability generalization to characterize the typical score reliability for the TES and potential sources of measurement error variance across 43 studies. Also examined related instruments for measurement integrity. (SLD)
Descriptors: Error of Measurement, Generalization, Meta Analysis, Psychometrics
Viswesvaran, Chockalingam; Ones, Deniz S. – Educational and Psychological Measurement, 2000 (Peer reviewed)
Used meta-analysis to cumulate reliabilities of personality scale scores, using 848 coefficients of stability and 1,359 internal consistency reliabilities across the Big Five factors of personality. The dimension of personality being measured does not appear to moderate strongly either internal consistency or the test-retest reliabilities.…
Descriptors: Error of Measurement, Meta Analysis, Personality Assessment, Personality Traits
Konczak, Lee J.; Stelly, Damian J.; Trusty, Michael L. – Educational and Psychological Measurement, 2000 (Peer reviewed)
Developed an instrument to measure empowering leader behavior (study 1, n=1,309) and studied the relationship of the instrument to several theoretically relevant variables (study 2, n=84). Confirmatory factor analyses support a six-dimension model of empowering leader behavior. Psychological empowerment mediated the relationship between six…
Descriptors: Administrators, Behavior Patterns, Employees, Empowerment
Dwight, Stephen A.; Feigelson, Melissa E. – Educational and Psychological Measurement, 2000 (Peer reviewed)
Conducted a meta-analysis to determine the extent to which the computer administration of a measure influences socially desirable responding. Discusses implications of the findings about impression management in terms of how they contribute to the explication of the construct of social desirability and cross-mode equivalence. (Author/SLD)
Descriptors: Computer Assisted Testing, Meta Analysis, Social Desirability
Vispoel, Walter P. – Educational and Psychological Measurement, 2000 (Peer reviewed)
Compared results from computerized vocabulary tests under conditions in which item review was permitted or not permitted. Results from 177 college students reveal that performance gains after review were greater for examinees of high ability, and that review was desired more by examinees with higher test anxiety. The major drawback to review was…
Descriptors: Ability, College Students, Computer Assisted Testing, Higher Education
Schmidt, Amy Elizabeth – Educational and Psychological Measurement, 2000 (Peer reviewed)
Conducted a validity study to examine the degree to which scores on the newly developed Diagnostic Readiness Test (DRT) and National League for Nursing Pre-Admission Test scores could predict success or failure on the National Council Licensure Examination for Registered Nurses (NCLEX-RN). Results for 5,698 students indicate that the DRT is a…
Descriptors: Licensing Examinations (Professions), Nurses, Prediction, Readiness
Ellis, Barbara B.; Mead, Alan D. – Educational and Psychological Measurement, 2000 (Peer reviewed)
Used the differential functioning of items and tests (DFIT) framework to examine the measurement equivalence of a Spanish translation of the Sixteen Personality Factor (16PF) Questionnaire using samples of 309 Anglo American college students and other adults, 280 English-speaking Hispanics, and 244 Spanish-speaking college students. Results show…
Descriptors: Adults, College Students, Higher Education, Hispanic American Students
Hurtz, Gregory M.; Hertz, Norman R. – Educational and Psychological Measurement, 1999 (Peer reviewed)
Evaluated Angoff ratings from eight different occupational licensing examinations through generalizability theory to estimate the optimal number of raters. Results indicate that approximately 10 to 15 raters is an optimal target range. (SLD)
Descriptors: Cutting Scores, Evaluators, Generalizability Theory, Interrater Reliability
Sturman, Michael C. – Educational and Psychological Measurement, 1999 (Peer reviewed)
Compares eight models for analyzing count data through simulation, in the context of predicting absenteeism, to indicate the extent to which each model produces false positives. Results suggest that ordinary least-squares regression does not produce more false positives than expected by chance. The Tobit and Poisson models do yield too many false…
Descriptors: Attendance, Individual Differences, Least Squares Statistics, Models
Perlow, Richard; Moore, D. De Wayne; Kyle, Rebecca; Killen, Thomas – Educational and Psychological Measurement, 1999 (Peer reviewed)
Examined a set of working memory scales containing two versions of test items that are reading and mathematics based. Data from 201 undergraduates support the hypothesis that an oblique two-factor model in which the factors are based on item content would fit the data well. (SLD)
Descriptors: Factor Structure, Higher Education, Mathematics, Models