Publication Date
| Period | Results |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 2 |
| Since 2006 (last 10 years) | 2 |
| Since 1996 (last 20 years) | 3 |

Descriptor
| Descriptor | Results |
| --- | --- |
| Mathematical Models | 4 |
| Scores | 3 |
| Test Reliability | 3 |
| Comparative Analysis | 2 |
| Construct Validity | 2 |
| Criterion Referenced Tests | 2 |
| Cutting Scores | 2 |
| Evidence | 2 |
| Generalizability Theory | 2 |
| Generalization | 2 |
Source
| Source | Results |
| --- | --- |
| Journal of Educational… | 9 |

Author
| Author | Results |
| --- | --- |
| Kane, Michael T. | 9 |
| Brennan, Robert L. | 1 |
| Plake, Barbara S. | 1 |

Publication Type
| Type | Results |
| --- | --- |
| Journal Articles | 7 |
| Reports - Research | 3 |
| Reports - Descriptive | 2 |
| Opinion Papers | 1 |
| Reports - Evaluative | 1 |
Showing all 9 results
Kane, Michael T. – Journal of Educational Measurement, 2013
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…
Descriptors: Test Interpretation, Validity, Scores, Test Use
Kane, Michael T. – Journal of Educational Measurement, 2013
This response to the comments contains three main sections, each addressing a subset of the comments. In the first section, I will respond to the comments by Brennan, Haertel, and Moss. All of these comments suggest ways in which my presentation could be extended or improved; I generally agree with their suggestions, so my response to their…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Kane, Michael T. – Journal of Educational Measurement, 2001 (peer reviewed)
Provides a brief historical review of construct validity and discusses the current state of validity theory, emphasizing the role of arguments in validation. Examines the application of an argument-based approach with regard to the distinction between performance-based and theory-based interpretations and the role of consequences in validation.…
Descriptors: Construct Validity, Educational Testing, Performance Based Assessment, Theories
Kane, Michael T.; And Others – Journal of Educational Measurement, 1976 (peer reviewed)
This discussion illustrates the application of generalizability theory to a design commonly employed in the collection of evaluation data and provides a detailed analysis of the dependability of student evaluations of college teaching. (RC)
Descriptors: Course Evaluation, Student Evaluation of Teacher Performance, Test Reliability, True Scores
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977 (peer reviewed)
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
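The index described in this abstract is the Brennan-Kane index of dependability for mastery tests, usually written Φ(λ) for a cut score λ. A minimal sketch of one common way to estimate it, via variance components from a persons x items design, follows; the score matrix and cut score are invented for illustration, and negative component estimates are clamped to zero as is conventional.

```python
# Sketch of the Brennan-Kane index of dependability Phi(lambda) for
# mastery (criterion-referenced) tests, estimated from a persons x items
# 0/1 score matrix via two-way random-effects ANOVA. Data are invented;
# lambda is the cut score on the proportion-correct scale.

def variance_components(X):
    """Estimate person, item, and residual variance components."""
    n_p, n_i = len(X), len(X[0])
    grand = sum(sum(row) for row in X) / (n_p * n_i)
    p_means = [sum(row) / n_i for row in X]
    i_means = [sum(X[p][i] for p in range(n_p)) / n_p for i in range(n_i)]
    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ss_tot = sum((X[p][i] - grand) ** 2
                 for p in range(n_p) for i in range(n_i))
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = (ss_tot - ss_p - ss_i) / ((n_p - 1) * (n_i - 1))
    var_p = max(0.0, (ms_p - ms_res) / n_i)  # universe-score variance
    var_i = max(0.0, (ms_i - ms_res) / n_p)  # item variance
    return var_p, var_i, ms_res, grand

def phi_lambda(X, lam):
    var_p, var_i, var_res, mu = variance_components(X)
    n_i = len(X[0])
    var_delta = (var_i + var_res) / n_i      # absolute error variance
    num = var_p + (mu - lam) ** 2
    return num / (num + var_delta)

X = [[1, 1, 1, 0, 1],
     [1, 0, 1, 1, 1],
     [0, 0, 1, 0, 0],
     [1, 1, 1, 1, 1],
     [0, 1, 0, 0, 0],
     [1, 0, 0, 0, 1]]

phi_at_mean = phi_lambda(X, variance_components(X)[3])
phi_at_cut = phi_lambda(X, 0.8)
```

A design choice worth noting: Φ(λ) grows as the cut score moves away from the group mean, and reaches its minimum when λ equals the mean, which is why the index is reported relative to a specific cut score.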
Kane, Michael T. – Journal of Educational Measurement, 1987 (peer reviewed)
The use of item response theory models for analyzing the results of judgmental standard setting studies (the Angoff technique) for establishing minimum pass levels is discussed. A comparison of three methods indicates the traditional approach may not be best. A procedure based on generalizability theory is suggested. (GDC)
Descriptors: Comparative Analysis, Cutting Scores, Generalizability Theory, Latent Trait Theory
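For readers unfamiliar with the "traditional approach" the abstract mentions, the Angoff technique has each judge estimate, item by item, the probability that a minimally competent examinee answers correctly; the minimum pass level is the judge-average of the summed item probabilities. A minimal sketch with invented ratings:

```python
# Traditional Angoff aggregation: judge -> per-item probability estimates
# for a minimally competent examinee (ratings are hypothetical).
ratings = {
    "judge_a": [0.6, 0.8, 0.5, 0.9, 0.7],
    "judge_b": [0.5, 0.7, 0.6, 0.8, 0.6],
    "judge_c": [0.7, 0.9, 0.4, 0.9, 0.8],
}

# Each judge's minimum pass level is the sum of item probabilities;
# the cut score is the mean across judges.
per_judge_mpl = {j: sum(p) for j, p in ratings.items()}
angoff_cut = sum(per_judge_mpl.values()) / len(per_judge_mpl)
```

The generalizability-theory procedure Kane proposes would instead treat judges and items as random facets and attach an error variance to the resulting cut score; that analysis is not reproduced here.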
Kane, Michael T. – Journal of Educational Measurement, 1986 (peer reviewed)
These analyses suggest that if a criterion-referenced test had a reliability (defined in terms of internal consistency) below 0.5, a simple a priori procedure would provide better estimates of students' universe scores than would individual observed scores. (Author/LMO)
Descriptors: Criterion Referenced Tests, Educational Research, Error of Measurement, Generalizability Theory
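The 1986 result is in the spirit of Kelley's regressed score estimate, which shrinks each observed score toward the group mean in proportion to the reliability. The simulation below (with invented parameters, not Kane's data) illustrates why a shrinkage procedure can beat raw observed scores when reliability is low:

```python
# Simulated check that, at low reliability, Kelley-style shrunken
# estimates have smaller mean squared error against universe (true)
# scores than raw observed scores. Parameters are hypothetical.
import random

random.seed(0)
n = 2000
true_var, err_var = 1.0, 3.0            # reliability = 1 / (1 + 3) = 0.25
rel = true_var / (true_var + err_var)

true = [random.gauss(0.0, true_var ** 0.5) for _ in range(n)]
obs = [t + random.gauss(0.0, err_var ** 0.5) for t in true]
mu = sum(obs) / n

# Kelley estimate: shrink toward the observed group mean.
kelley = [rel * x + (1.0 - rel) * mu for x in obs]

mse_obs = sum((x - t) ** 2 for x, t in zip(obs, true)) / n
mse_kelley = sum((k - t) ** 2 for k, t in zip(kelley, true)) / n
```

With reliability 0.25, the expected error of the shrunken estimates is a fraction of that of the observed scores, matching the abstract's claim that below 0.5 a simple a priori procedure can do better.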
Kane, Michael T.; And Others – Journal of Educational Measurement, 1989 (peer reviewed)
This paper develops a multiplicative model as a means of combining ratings of criticality and frequency of various activities involved in job analyses. The model incorporates adjustments to ensure that effective weights of criticality and frequency are appropriate. An example of the model's use is presented. (TJH)
Descriptors: Critical Incidents Method, Higher Education, Job Analysis, Licensing Examinations (Professions)
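The core of a multiplicative model of this kind is that each activity's weight is the product of its criticality and frequency ratings, normalized over all activities. The specific adjustments Kane et al. apply are not given in the abstract, so this is only a minimal sketch with invented ratings and scales:

```python
# Multiplicative combination of job-analysis ratings: weight is
# proportional to criticality x frequency. Activities, ratings, and
# the 1-5 scales are hypothetical.
activities = {
    "patient assessment": (5, 4),   # (criticality, frequency)
    "record keeping": (2, 5),
    "emergency response": (5, 1),
}

raw = {a: c * f for a, (c, f) in activities.items()}
total = sum(raw.values())
weights = {a: r / total for a, r in raw.items()}  # normalized weights
```

Note how the product penalizes activities that are critical but rare; the adjustments mentioned in the abstract exist precisely to keep such effective weights appropriate.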
Plake, Barbara S.; Kane, Michael T. – Journal of Educational Measurement, 1991 (peer reviewed)
Several methods for determining a passing score on an examination from individual raters' estimates of minimal pass levels were compared through simulation. The methods differed in the weighting that each item's estimate received in the aggregation process. Reasons why the simplest procedure is preferred are discussed. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Cutting Scores, Estimation (Mathematics)
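The simplest aggregation in such a comparison sums each rater's minimal pass levels and averages across raters with equal weight. For contrast, the sketch below also shows a precision-weighted variant (weights inversely proportional to each rater's rating variance); the specific weighting schemes Plake and Kane compared are not reproduced here, and all numbers are invented:

```python
# Equal-weight vs precision-weighted aggregation of raters' per-item
# minimal pass levels into a passing score. Ratings are hypothetical.
mpls = {
    "r1": [0.6, 0.7, 0.5, 0.8],
    "r2": [0.5, 0.6, 0.6, 0.7],
    "r3": [0.8, 0.9, 0.7, 0.9],
}

sums = {r: sum(v) for r, v in mpls.items()}          # each rater's cut
equal_weight_cut = sum(sums.values()) / len(sums)    # simplest procedure

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Illustrative alternative: weight raters by the inverse of their
# rating variance (small constant avoids division by zero).
w = {r: 1.0 / (variance(v) + 1e-6) for r, v in mpls.items()}
weighted_cut = sum(w[r] * sums[r] for r in mpls) / sum(w.values())
```

The equal-weight average is easier to explain to panels and, per the abstract, the simulation found no practical advantage for more elaborate weighting.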