Publication Date
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 0 |
| Since 2006 (last 10 years) | 0 |
| Since 1996 (last 20 years) | 1 |
Descriptor
| Achievement Tests | 4 |
| Equated Scores | 4 |
| Test Bias | 4 |
| Test Items | 4 |
| Testing Problems | 4 |
| Bias | 3 |
| Comparative Analysis | 3 |
| Higher Education | 3 |
| Predictive Validity | 3 |
| Reading Tests | 3 |
Source
| Journal of Educational Measurement | 16 |
Author
| Linn, Robert L. | 16 |
| Harnisch, Delwyn L. | 2 |
| Slinde, Jeffrey A. | 3 |
| Dunbar, Stephen B. | 1 |
| Hastings, C. Nicholas | 1 |
| Miller, M. David | 1 |
| Werts, Charles E. | 1 |
Publication Type
| Journal Articles | 11 |
| Reports - Research | 5 |
| Reports - Evaluative | 2 |
| Information Analyses | 1 |
| Opinion Papers | 1 |
| Reports - Descriptive | 1 |
| Speeches/Meeting Papers | 1 |
Audience
| Researchers | 1 |
Showing 1 to 15 of 16 results
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1996
Describes the career and contributions of educational researcher Maurice M. Tatsuoka, best known professionally for his interest in multivariate statistics. (SLD)
Descriptors: Biographies, Educational Research, Multivariate Analysis, Researchers
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1978
Political and educational consequences of standard setting for criterion-referenced tests are discussed. Suggestions for setting standards are given. (JKS)
Descriptors: Academic Standards, Criterion Referenced Tests, Decision Making, Evaluation Criteria
Peer reviewed: Slinde, Jeffrey A.; Linn, Robert L. – Journal of Educational Measurement, 1978
Use of the Rasch model for vertical equating of tests is discussed. Although use of the model is promising, empirical results raise questions about the adequacy of the Rasch model. Latent trait models with more parameters may be necessary. (JKS)
Descriptors: Achievement Tests, Difficulty Level, Equated Scores, Higher Education
Peer reviewed: Slinde, Jeffrey A.; Linn, Robert L. – Journal of Educational Measurement, 1977
Conventional linear and equipercentile procedures are reviewed with an emphasis on their utility for equating achievement tests pitched at different levels of difficulty. It is argued that the equipercentile procedure is superior to the linear procedure, but that neither is very satisfactory for the vertical equating problem. (Author/JKS)
Descriptors: Equated Scores, Grade Equivalent Scores, Testing Problems
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1976
Discusses some models of fair selection procedures, including the Petersen-Novick model (TM 502 259). (DEP)
Descriptors: Bias, Decision Making, Evaluation Criteria, Models
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1975
Reviews the Anchor Test Study which had two major objectives: to provide a method for translating a child's score on any one of eight widely used standardized reading tests into a score on any of the other tests and to provide new nationally representative norms for each of these eight tests. (Author/BJG)
Descriptors: Achievement Tests, Book Reviews, Comparative Analysis, Elementary Education
Peer reviewed: Miller, M. David; Linn, Robert L. – Journal of Educational Measurement, 1988
Effects of instructional coverage variations on item characteristic functions were examined, using eighth-grade data from the Second International Mathematics Study (1985). Although some differences in item response curves were large, better performance was not necessarily related to greater learning opportunity. Item response curve differences…
Descriptors: Achievement Tests, Black Students, Elementary Education, Grade 8
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1983
Four considerations that enhance the instructional importance of tests are content match, use of feedback, a flagging function, and the increasing tendency to attach sanctions and rewards to standardized test results. These sanctions are apt to force greater attention to the other three characteristics, strengthening the links between instruction…
Descriptors: Instruction, Instructional Improvement, Measurement Objectives, Minimum Competency Testing
Peer reviewed: Linn, Robert L.; Hastings, C. Nicholas – Journal of Educational Measurement, 1984
Using predictive validity studies of the Law School Admissions Test (LSAT) and the undergraduate grade-point average (UGPA), this study examined the large variation in the magnitude of the validity coefficients across schools. LSAT standard deviation and correlation between LSAT and UGPA accounted for 58.5 percent of the variability. (Author/EGS)
Descriptors: Academic Achievement, College Applicants, College Entrance Examinations, Grade Point Average
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1984
The common approach to studies of predictive bias is analyzed within the context of a conceptual model in which predictors and criterion measures are viewed as fallible indicators of idealized qualifications. (Author/PN)
Descriptors: Certification, Models, Predictive Measurement, Predictive Validity
Peer reviewed: Linn, Robert L. – Journal of Educational Measurement, 1983
When the precise basis of selection effect on correlation and regression equations is unknown but can be modeled by selection on a variable that is highly but not perfectly related to observed scores, the selection effects can lead to the commonly observed "overprediction" results in studies of predictive bias. (Author/PN)
Descriptors: Bias, Correlation, Higher Education, Prediction
Peer reviewed: Harnisch, Delwyn L.; Linn, Robert L. – Journal of Educational Measurement, 1981
Different indices can be used to measure whether an individual's pattern of responses on an achievement test is unusual or inconsistent with the norm. The relationships among eight of these indices are investigated for a math and reading test given to approximately 2,100 fourth-grade students. (Author/BW)
Descriptors: Comparative Analysis, Correlation, Error Patterns, Grade 4
Peer reviewed: Linn, Robert L.; Harnisch, Delwyn L. – Journal of Educational Measurement, 1981
The three-parameter logistic model has been used for identifying test items that may be biased. An approach is presented here that is applicable even when one of the subgroups is relatively small. Interactions between item content and group membership, commonly called item bias, may be conceptualized as multidimensionality. (Author/BW)
Descriptors: Achievement Tests, Black Students, Grade 8, Junior High Schools
Peer reviewed: Linn, Robert L.; Werts, Charles E. – Journal of Educational Measurement, 1971
Two problems in the investigation of predictive bias in tests are discussed: the effect of unreliability of the predictors, and the effect of excluding from the regression equation a predictor on which there are preexisting group differences. (Author)
Descriptors: Comparative Analysis, Minority Groups, Predictive Measurement, Predictor Variables
Peer reviewed: Linn, Robert L.; Dunbar, Stephen B. – Journal of Educational Measurement, 1992
Several issues related to the design and reporting of results from the National Assessment of Educational Progress (NAEP) are discussed in the context of current expectations for the NAEP and its origins. These issues include: (1) content coverage and format; (2) estimation procedures; and (3) reporting problems. (SLD)
Descriptors: Content Analysis, Educational Assessment, Elementary Secondary Education, Estimation (Mathematics)