50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 9 results
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2013
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…
Descriptors: Test Interpretation, Validity, Scores, Test Use
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2013
This response to the comments contains three main sections, each addressing a subset of the comments. In the first section, I will respond to the comments by Brennan, Haertel, and Moss. All of these comments suggest ways in which my presentation could be extended or improved; I generally agree with their suggestions, so my response to their…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2001
Provides a brief historical review of construct validity and discusses the current state of validity theory, emphasizing the role of arguments in validation. Examines the application of an argument-based approach with regard to the distinction between performance-based and theory-based interpretations and the role of consequences in validation…
Descriptors: Construct Validity, Educational Testing, Performance Based Assessment, Theories
Peer reviewed
Kane, Michael T.; And Others – Journal of Educational Measurement, 1976
This discussion illustrates the application of generalizability theory to a design commonly employed in the collection of evaluation data and provides a detailed analysis of the dependability of student evaluations of college teaching. (RC)
Descriptors: Course Evaluation, Student Evaluation of Teacher Performance, Test Reliability, True Scores
Peer reviewed
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 1987
The use of item response theory models for analyzing the results of judgmental standard setting studies (the Angoff technique) for establishing minimum pass levels is discussed. A comparison of three methods indicates the traditional approach may not be best. A procedure based on generalizability theory is suggested. (GDC)
Descriptors: Comparative Analysis, Cutting Scores, Generalizability Theory, Latent Trait Theory
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 1986
These analyses suggest that if a criterion-referenced test had a reliability (defined in terms of internal consistency) below 0.5, a simple a priori procedure would provide better estimates of students' universe scores than would individual observed scores. (Author/LMO)
Descriptors: Criterion Referenced Tests, Educational Research, Error of Measurement, Generalizability Theory
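The a priori procedure the abstract above alludes to is in the spirit of a classic regressed (Kelley-style) score estimate, which shrinks each observed score toward the group mean in proportion to unreliability. A minimal illustrative sketch, not the paper's exact method, assuming the reliability coefficient and group mean are known:

```python
def regressed_score_estimate(observed, reliability, group_mean):
    """Kelley-style regressed estimate of a universe (true) score.

    estimate = reliability * observed + (1 - reliability) * group_mean

    When reliability falls below 0.5, the group mean carries more
    weight than the individual observed score, which is why a simple
    a priori estimate can beat raw observed scores on such tests.
    """
    return reliability * observed + (1 - reliability) * group_mean

# Hypothetical numbers: reliability 0.4 (< 0.5), so the mean dominates.
estimate = regressed_score_estimate(observed=80, reliability=0.4, group_mean=60)
print(estimate)  # 0.4 * 80 + 0.6 * 60 = 68.0
```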
Peer reviewed
Kane, Michael T.; And Others – Journal of Educational Measurement, 1989
This paper develops a multiplicative model as a means of combining ratings of criticality and frequency of various activities involved in job analyses. The model incorporates adjustments to ensure that effective weights of criticality and frequency are appropriate. An example of the model's use is presented. (TJH)
Descriptors: Critical Incidents Method, Higher Education, Job Analysis, Licensing Examinations (Professions)
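The multiplicative model described above combines two rating scales per activity. A minimal sketch of the core idea with hypothetical ratings (the published model adds further adjustments to keep effective weights appropriate, which are omitted here):

```python
def multiplicative_weights(criticality, frequency):
    """Combine criticality and frequency ratings multiplicatively.

    Each activity's raw weight is criticality * frequency; the raw
    weights are then normalized to sum to 1 so contributions are
    comparable across activities. Illustrative only.
    """
    raw = [c * f for c, f in zip(criticality, frequency)]
    total = sum(raw)
    return [w / total for w in raw]

# Three hypothetical job activities rated on criticality and frequency.
weights = multiplicative_weights([3, 1, 2], [2, 4, 1])
print(weights)  # raw = [6, 4, 2], total = 12 -> [0.5, 0.333..., 0.1666...]
```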
Peer reviewed
Plake, Barbara S.; Kane, Michael T. – Journal of Educational Measurement, 1991
Several methods for determining a passing score on an examination from individual raters' estimates of minimal pass levels were compared through simulation. The methods differed in the weight each item's estimate received in the aggregation process. Reasons why the simplest procedure is preferred are discussed. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Cutting Scores, Estimation (Mathematics)
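The simplest aggregation procedure referred to above amounts to equal weighting: sum each rater's Angoff-style minimal pass levels into a test-level cut score, then average across raters. A minimal sketch with hypothetical ratings:

```python
def angoff_passing_score(ratings):
    """Unweighted aggregation of Angoff-style minimal pass levels.

    ratings[r][i] is rater r's estimated minimal pass level for item i.
    Each rater's item estimates are summed to a test-level cut score,
    and the cut scores are averaged across raters with equal weight.
    """
    rater_cuts = [sum(item_levels) for item_levels in ratings]
    return sum(rater_cuts) / len(rater_cuts)

# Two hypothetical raters, three items each.
cut = angoff_passing_score([[0.6, 0.7, 0.8], [0.5, 0.6, 0.9]])
print(cut)  # rater cut scores 2.1 and 2.0 -> average 2.05
```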