Source: Educational and Psychological Measurement (48)
Showing 1 to 15 of 48 results
Peer reviewed
Tay, Louis; Huang, Qiming; Vermunt, Jeroen K. – Educational and Psychological Measurement, 2016
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…
Descriptors: Item Response Theory, Test Bias, Simulation, College Entrance Examinations
Peer reviewed
Wiley, Edward W.; Shavelson, Richard J.; Kurpius, Amy A. – Educational and Psychological Measurement, 2014
The name "SAT" has become synonymous with college admissions testing; it has been dubbed "the gold standard." Numerous studies on its reliability and predictive validity show that the SAT predicts college performance beyond high school grade point average. Surprisingly, studies of the factorial structure of the current version…
Descriptors: College Readiness, College Admission, College Entrance Examinations, Factor Analysis
Peer reviewed
Shaw, Emily J.; Marini, Jessica P.; Mattern, Krista D. – Educational and Psychological Measurement, 2013
The current study evaluated the relationship between various operationalizations of the Advanced Placement[R] (AP) exam and course information with first-year grade point average (FYGPA) in college to better understand the role of AP in college admission decisions. In particular, the incremental validity of the different AP variables, above…
Descriptors: Advanced Placement Programs, Grade Point Average, College Freshmen, College Admission
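The incremental validity question in entries like the one above is conventionally framed as the gain in R-squared when the new predictor is added to a baseline regression. A minimal sketch of that computation, using the standard two-predictor R-squared formula in terms of pairwise correlations; all scores below are fabricated for illustration, not data from the study:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def r2_two_predictors(y, x1, x2):
    """OLS R-squared for y regressed on x1 and x2, via the standard
    formula in terms of the three pairwise correlations."""
    r1, r2, r12 = pearson(y, x1), pearson(y, x2), pearson(x1, x2)
    return (r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2)

# Fabricated illustration data (NOT from the study)
hsgpa = [3.9, 3.5, 3.0, 3.2, 2.8, 3.6]   # high school GPA
ap    = [5, 4, 2, 3, 1, 4]               # hypothetical AP exam scores
fygpa = [3.7, 3.6, 2.9, 3.1, 2.6, 3.4]   # first-year college GPA

r2_base = pearson(fygpa, hsgpa) ** 2           # HSGPA alone
r2_full = r2_two_predictors(fygpa, hsgpa, ap)  # HSGPA + AP
delta = r2_full - r2_base                      # incremental validity of AP
```

Because OLS R-squared never decreases when a predictor is added, `delta` is nonnegative; how large it is, is the substantive question the study asks.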
Peer reviewed
Wolkowitz, Amanda A.; Skorupski, William P. – Educational and Psychological Measurement, 2013
When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…
Descriptors: Multiple Choice Tests, Statistical Analysis, Models, Accuracy
Peer reviewed
Davison, Mark L.; Semmes, Robert; Huang, Lan; Close, Catherine N. – Educational and Psychological Measurement, 2012
Data from 181 college students were used to assess whether math reasoning item response times in computerized testing can provide valid and reliable measures of a speed dimension. The alternate forms reliability of the speed dimension was .85. A two-dimensional structural equation model suggests that the speed dimension is related to the accuracy…
Descriptors: Computer Assisted Testing, Reaction Time, Reliability, Validity
Peer reviewed
Kobrin, Jennifer L.; Kim, YoungKoung; Sackett, Paul R. – Educational and Psychological Measurement, 2012
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice vs. constructed response), cognitive complexity, and content of these assessments (achievement vs. aptitude) at the forefront of the discussion. This study addressed these questions by investigating the…
Descriptors: Grade Point Average, Standardized Tests, Predictive Validity, Predictor Variables
Peer reviewed
Santelices, Maria Veronica; Wilson, Mark – Educational and Psychological Measurement, 2012
The relationship between differential item functioning (DIF) and item difficulty on the SAT is such that more difficult items tend to exhibit DIF in favor of the focal group (usually minority groups). These results were reported by Kulick and Hu and by Freedle, and have been enthusiastically discussed in more recent literature. Examining the…
Descriptors: Test Bias, Test Items, Difficulty Level, Statistical Analysis
Peer reviewed
Mattern, Krista D.; Shaw, Emily J.; Kobrin, Jennifer L. – Educational and Psychological Measurement, 2011
This study examined discrepant high school grade point average (HSGPA) and SAT performance as measured by the difference between a student's standardized SAT composite score and standardized HSGPA. The SAT-HSGPA discrepancy measure was used to examine whether certain students are more likely to exhibit discrepant performance and in what direction.…
Descriptors: Grade Point Average, College Entrance Examinations, Predictive Validity, College Admission
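The discrepancy measure described in the abstract above is, in essence, each student's z-scored SAT composite minus their z-scored HSGPA. A minimal sketch, using fabricated scores rather than the study's data:

```python
from statistics import mean, pstdev

def zscores(xs):
    """Standardize raw scores to z-scores (population SD)."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def sat_hsgpa_discrepancy(sat, hsgpa):
    """Per-student discrepancy: standardized SAT minus standardized
    HSGPA. Positive values flag students whose test score exceeds what
    their grades would suggest; negative values flag the reverse."""
    return [zs - zg for zs, zg in zip(zscores(sat), zscores(hsgpa))]

# Fabricated scores for illustration (NOT from the study)
sat   = [1250, 1400, 1000, 1100, 950]
hsgpa = [3.9, 3.5, 3.0, 3.2, 2.8]
disc = sat_hsgpa_discrepancy(sat, hsgpa)
```

Since each z-score list is centered on zero, the discrepancies average to zero across the sample; the measure only separates students within the group.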
Peer reviewed
Liu, Jinghua; Sinharay, Sandip; Holland, Paul; Feigenbaum, Miriam; Curley, Edward – Educational and Psychological Measurement, 2011
Two different types of anchors are investigated in this study: a mini-version anchor and an anchor with a smaller spread of difficulty than the tests to be equated. The latter is referred to as a midi anchor. The impact of these two types of anchors on observed score equating is evaluated and compared with respect to systematic error…
Descriptors: Equated Scores, Test Items, Difficulty Level, Statistical Bias
Peer reviewed
Zeleznik, Carter; And Others – Educational and Psychological Measurement, 1983
The long-range predictive and differential validities of the Scholastic Aptitude Test (SAT) are investigated. Data from students (n=1284) who entered Jefferson Medical College from 1965 through 1974 were analyzed and supported the convergent and divergent validities of the SAT over an extended period of time. (Author/PN)
Descriptors: Achievement, College Entrance Examinations, Higher Education, Longitudinal Studies
Peer reviewed
Houston, Lawrence N. – Educational and Psychological Measurement, 1983
The Ammons Quick Test (AQT) was investigated to determine the degree to which it could predict first-semester freshman year college grade point average for a sample of 63 specially-admitted Black female undergraduates. A multiple regression analysis indicated that the AQT did not appear to be a valid predictor. (Author/PN)
Descriptors: Black Students, Class Rank, College Entrance Examinations, Comparative Analysis
Peer reviewed
McCornack, Robert L. – Educational and Psychological Measurement, 1983
The purpose of this study was to obtain additional evidence regarding bias in the validity of college grade predictions based on high school achievement and a scholastic aptitude test. Bias was examined in ethnic minority groups of Asian, Hispanic, Black, and American Indian college freshmen. (Author/PN)
Descriptors: Academic Achievement, American Indians, Asian Americans, Black Students
Peer reviewed
Levine, Michael V.; Drasgow, Fritz – Educational and Psychological Measurement, 1983
The relation between incorrect option choice and estimated ability level was examined for two widely used aptitude tests, the Scholastic Aptitude Test and the Graduate Record Examination. Incorrect option choice was found to be related to estimated ability for many items. Implications of these findings are briefly discussed. (Author)
Descriptors: Academic Ability, College Entrance Examinations, Error Patterns, High Achievement
Peer reviewed
Pentony, Joseph F. – Educational and Psychological Measurement, 1992
The reliability and validity of E. D. Hirsch's (1988) Cultural Literacy Test (CLT) were studied with 150 first-year college students at the University of St. Thomas in Houston (Texas). The test appears reliable, with a split-half reliability estimate of 0.93, and the CLT appears to be a valid measure of the cultural literacy construct. (SLD)
Descriptors: College Freshmen, Concurrent Validity, Construct Validity, Correlation
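A split-half estimate like the 0.93 reported above is conventionally computed by splitting the items into halves (often odd vs. even), summing each half per examinee, correlating the half scores, and applying the Spearman-Brown correction for test length. A minimal sketch, using a fabricated item-response matrix rather than the study's data:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(item_matrix):
    """Odd-even split-half reliability with the Spearman-Brown
    correction. Rows are examinees; columns are scored items (0/1)."""
    odd  = [sum(row[0::2]) for row in item_matrix]   # items 1, 3, ...
    even = [sum(row[1::2]) for row in item_matrix]   # items 2, 4, ...
    r = pearson(odd, even)                           # half-test correlation
    return 2 * r / (1 + r)                           # Spearman-Brown

# Fabricated 5-examinee, 4-item response matrix (NOT the study's data)
data = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
rel = split_half_reliability(data)  # 0.72 for this toy matrix
```

The correction step matters because each half is only half as long as the full test; the raw half-test correlation understates full-test reliability.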
Peer reviewed
Baron, Jonathan; Norman, M. Frank – Educational and Psychological Measurement, 1992
The ability of the Scholastic Aptitude Test (SAT), average achievement test scores, and high school class rank to predict students' grades was examined for 3,816 undergraduate students entering the University of Pennsylvania (Philadelphia) in 1983 and 1984. Results suggest that the SAT makes a relatively small contribution to prediction. (SLD)
Descriptors: Achievement Tests, Class Rank, College Entrance Examinations, College Freshmen