Publication Date
| Date range | Results |
|---|---|
| In 2015 | 8 |
| Since 2014 | 55 |
| Since 2011 (last 5 years) | 206 |
| Since 2006 (last 10 years) | 509 |
| Since 1996 (last 20 years) | 1047 |
Descriptor
| Descriptor | Results |
|---|---|
| Test Validity | 781 |
| Higher Education | 571 |
| Correlation | 536 |
| Factor Analysis | 531 |
| Test Reliability | 481 |
| Factor Structure | 423 |
| Statistical Analysis | 421 |
| Scores | 368 |
| Comparative Analysis | 356 |
| Test Construction | 347 |
Author
| Author | Results |
|---|---|
| Michael, William B. | 66 |
| Thompson, Bruce | 26 |
| Krus, David J. | 21 |
| Marcoulides, George A. | 20 |
| Vegelius, Jan | 20 |
| Aiken, Lewis R. | 19 |
| Plake, Barbara S. | 19 |
| Wang, Wen-Chung | 19 |
| Wilcox, Rand R. | 19 |
| Powers, Stephen | 18 |
Education Level
| Education Level | Results |
|---|---|
| Higher Education | 86 |
| Postsecondary Education | 35 |
| Elementary Education | 30 |
| High Schools | 27 |
| Secondary Education | 24 |
| Middle Schools | 17 |
| Elementary Secondary Education | 16 |
| Grade 4 | 14 |
| Grade 3 | 12 |
| Grade 8 | 11 |
Audience
| Audience | Results |
|---|---|
| Researchers | 4 |
| Practitioners | 3 |
| Students | 1 |
Showing 1,096 to 1,110 of 3,486 results
Peer reviewed: Jacobs, Keith W. – Educational and Psychological Measurement, 1976
Problems of Type I errors associated with multiple comparisons in the same experiment are discussed. A table is provided for the rapid determination of the experimentwise alpha level when a number of independent statistical tests are employed in the same experiment. Suggested applications and the rationale for this procedure are supplied. (Author/JKS)
Descriptors: Hypothesis Testing, Probability, Research Design
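The experimentwise alpha level tabled in this kind of article, for k independent tests each run at per-comparison level α, follows the standard relation α_EW = 1 − (1 − α)^k. A minimal sketch of that computation (the specific values tabled in the article are not reproduced here):

```python
def experimentwise_alpha(alpha: float, k: int) -> float:
    """Probability of at least one Type I error across k independent tests,
    each conducted at per-comparison significance level alpha."""
    return 1.0 - (1.0 - alpha) ** k

if __name__ == "__main__":
    # Experimentwise error rate grows quickly with the number of tests.
    for k in (1, 2, 5, 10, 20):
        print(f"k={k:2d}  alpha_ew={experimentwise_alpha(0.05, k):.3f}")
```

Note that the formula assumes the tests are independent; for correlated tests it gives an upper bound rather than the exact rate.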
Peer reviewed: Mays, Robert – Educational and Psychological Measurement, 1976
Variables used in applicant selection into the British Civil Service were analyzed for a sample using varimax factor analysis, multidimensional scaling and multiple regression. Results indicated that four dimensions or factors of applicant selection were consistent across a variety of settings and analyses. (JKS)
Descriptors: Factor Analysis, Government Employees, Multidimensional Scaling, Multiple Regression Analysis
Peer reviewed: Jacobson, Leonard I.; And Others – Educational and Psychological Measurement, 1976
A scale was constructed measuring beliefs about equal rights for men and women. The scale had high internal reliability. Scale scores were significantly related to subject sex, age, and ethnic group in the directions predicted. Further, the scale discriminated with respect to involvement in a women's rights organization. (Author/JKS)
Descriptors: Attitude Measures, Beliefs, Equal Education, Equal Protection
Peer reviewed: Pyrczak, Fred – Educational and Psychological Measurement, 1976
Items designed to measure the ability to derive the meanings of words from context were drawn from seven published reading tests. The items were administered to high school students without the context. Only one published test had vocabulary-in-context items that were consistently context-dependent. (Author)
Descriptors: Context Clues, Item Analysis, Reading Tests, Test Validity
Peer reviewed: Rothrock, Julia E.; Michael, William B. – Educational and Psychological Measurement, 1976
Tests constructed through item-sampling (where several tests are constructed and randomly assigned to testees) were compared to a traditional constant-item procedure for evaluating student and class progress in a community college freshman psychology class. The item-sampling approach did not appear to be superior. (JKS)
Descriptors: Evaluation Methods, Item Sampling, Measurement Techniques, Test Validity
Peer reviewed: Forester, Donald Lee; Michael, William B. – Educational and Psychological Measurement, 1976
Tests constructed through item-sampling (where several tests are constructed and randomly assigned to testees) were compared to a traditional constant-item procedure for evaluating class progress in a community college intermediate algebra class. The item-sampling format appeared to be slightly less valid than the traditional format. (JKS)
Descriptors: Evaluation Methods, Item Sampling, Measurement Techniques, Test Validity
Peer reviewed: Lueptow, Lloyd B.; And Others – Educational and Psychological Measurement, 1976
After taking tests in introductory college courses, students were asked to rate the quality of the items. Correlations between student ratings and item-test point biserial correlations revealed little or no relationship except for a subset of students who had performed well when taking the tests. (JKS)
Descriptors: College Students, Correlation, Course Evaluation, Item Analysis
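The item-test point-biserial correlation referred to above is simply the Pearson correlation between a dichotomous (0/1) item score and the total test score. A minimal sketch, using hypothetical data rather than anything from the study:

```python
import math

def point_biserial(item: list[int], total: list[float]) -> float:
    """Point-biserial correlation between a 0/1 item score and test totals.

    Computed as r_pb = (M1 - M0) / s * sqrt(p * q), where M1 and M0 are the
    mean totals for examinees answering the item correctly and incorrectly,
    s is the population standard deviation of all totals, and p (= 1 - q)
    is the proportion answering correctly.
    """
    n = len(item)
    ones = [t for i, t in zip(item, total) if i == 1]
    zeros = [t for i, t in zip(item, total) if i == 0]
    p = len(ones) / n
    q = 1.0 - p
    mean1 = sum(ones) / len(ones)
    mean0 = sum(zeros) / len(zeros)
    mean_all = sum(total) / n
    s = math.sqrt(sum((t - mean_all) ** 2 for t in total) / n)
    return (mean1 - mean0) / s * math.sqrt(p * q)

if __name__ == "__main__":
    # Hypothetical examinees: the item is answered correctly mainly by
    # high scorers, so the item discriminates well (r_pb is large).
    item = [1, 1, 1, 0, 0, 0]
    total = [9.0, 8.0, 7.0, 5.0, 4.0, 3.0]
    print(round(point_biserial(item, total), 3))
```

Items with point-biserials near zero are the ones a well-performing student might plausibly flag as poor, which is the relationship the study examined.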
Peer reviewed: Payne, David A.; And Others – Educational and Psychological Measurement, 1976
Validity data were obtained for an observation instrument designed to assess seven types of competencies for public school principals. Ability to provide product documentation in two areas of administrative responsibility was found to be most predictive of teacher job satisfaction, students' positive evaluation of school climate, and student…
Descriptors: Competency Based Education, Evaluation Methods, Observation, Principals
Peer reviewed: Hedges, Larry V.; Majer, Kenneth – Educational and Psychological Measurement, 1976
The effect of adding a high school rating factor to high school grade point average and aptitude test scores in predicting freshman grade point average for minority students was investigated. The rating factor, based on prior students' performances from the same high school, added little to the prediction equation. (JKS)
Descriptors: College Admission, College Entrance Examinations, Grades (Scholastic), Minority Groups
Peer reviewed: Gustafson, Charles R.; Michael, Joan J. – Educational and Psychological Measurement, 1976
Grade point average for a sample of graduate school of education students was more accurately predicted from undergraduate grade point average than from part or total scores on the Undergraduate Record Examination. However, the Advanced Education examination of the Undergraduate Record Examination did improve the prediction equation. (Author/JKS)
Descriptors: College Entrance Examinations, Grades (Scholastic), Graduate Students, Multiple Regression Analysis
Peer reviewed: Price, Forrest W.; Kim, Suk Hi – Educational and Psychological Measurement, 1976
High school grades and ACT scores were used to predict grade point average for junior and senior business students. A large proportion of the variance in grade point average was explained by using grades and ACT scores, with the aptitude measure making the larger contribution to the prediction equation. (JKS)
Descriptors: Business Education, College Admission, College Entrance Examinations, Grades (Scholastic)
Peer reviewed: Taylor, C. Leigh; And Others – Educational and Psychological Measurement, 1976
Thirty-six cognitive, affective and demographic variables in addition to two measures of musical experience were used to predict algebra and geometry grades and teacher ratings of student math aptitude for three hundred high school students. Cognitive measures were the best predictors although some affective variables made significant…
Descriptors: Affective Measures, Cognitive Tests, Grades (Scholastic), High School Students
Peer reviewed: Lloyd, Dee Norman – Educational and Psychological Measurement, 1976
A multiple regression was used to predict the grade in which secondary school dropouts would leave school. Predictors were twenty measures drawn from sixth grade records. It was concluded that a construct labelled "level of educational attainment" is the best determinant of both whether and when a student will drop out. (Author/JKS)
Descriptors: Dropout Characteristics, Dropout Research, Multiple Regression Analysis, Predictor Variables
Peer reviewed: White, Gordon W.; And Others – Educational and Psychological Measurement, 1976
The CEEB biology test was compared to two locally developed college proficiency tests in biology. The three tests were administered in a pretest-posttest fashion to three samples of introductory biology students and then correlated with independently determined course grades. Results are discussed and tables presented. (JKS)
Descriptors: Biology, Equivalency Tests, Science Instruction, Test Validity
Peer reviewed: Merrifield, Philip; Hummel-Rossi, Barbara – Educational and Psychological Measurement, 1976
The nine subtests of the Stanford Achievement Test were factor analyzed for a sample of two hundred twenty-six eighth grade students. The first factor dominated the analysis, with no other factor accounting for any substantial variance. Tables are presented and implications discussed. (JKS)
Descriptors: Achievement Tests, Factor Analysis, Standardized Tests, Validity


