50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th Birthday! First opened on May 15th, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 110 results
Peer reviewed
Direct link
Harik, Polina; Clauser, Brian E.; Grabovsky, Irina; Nungester, Ronald J.; Swanson, Dave; Nandakumar, Ratna – Journal of Educational Measurement, 2009
The present study examined the long-term usefulness of estimated parameters used to adjust the scores from a performance assessment to account for differences in rater stringency. Ratings from four components of the USMLE[R] Step 2 Clinical Skills Examination data were analyzed. A generalizability-theory framework was used to examine the extent to…
Descriptors: Generalizability Theory, Performance Based Assessment, Performance Tests, Clinical Experience
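The Harik et al. entry concerns adjusting performance-assessment scores for differences in rater stringency. As a rough illustration of the general idea only (not the USMLE procedure or the authors' generalizability-theory analysis), the sketch below estimates each rater's stringency as that rater's mean deviation from the overall mean rating and subtracts it from the scores that rater assigned; the data and function name are hypothetical.

```python
from collections import defaultdict

# Hypothetical ratings: (examinee, rater, score) triples.
ratings = [
    ("e1", "r1", 6.0), ("e1", "r2", 7.5),
    ("e2", "r1", 5.0), ("e2", "r2", 6.5),
    ("e3", "r1", 7.0), ("e3", "r2", 8.0),
]

def adjust_for_stringency(ratings):
    """Subtract each rater's mean deviation from the grand mean.

    A deliberately simple stand-in for a rater-stringency adjustment;
    the study itself works within a generalizability-theory framework.
    """
    grand_mean = sum(score for _, _, score in ratings) / len(ratings)

    by_rater = defaultdict(list)
    for _, rater, score in ratings:
        by_rater[rater].append(score)

    # Positive value = rater tends to score above the grand mean (lenient).
    stringency = {r: sum(v) / len(v) - grand_mean for r, v in by_rater.items()}

    return [(ex, r, s - stringency[r]) for ex, r, s in ratings]

for examinee, rater, adjusted in adjust_for_stringency(ratings):
    print(examinee, rater, round(adjusted, 2))
```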
Peer reviewed
Direct link
Cahan, Sorel; Gamliel, Eyal – Journal of Educational Measurement, 2006
Despite its intuitive appeal and popularity, Thorndike's constant ratio (CR) model for unbiased selection is inherently inconsistent in "n"-free selection. Satisfaction of the condition for unbiased selection, when formulated in terms of success/acceptance probabilities, usually precludes satisfaction by the converse probabilities of…
Descriptors: Probability, Bias, Mathematical Concepts, Mathematical Models
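The inconsistency Cahan and Gamliel describe can be seen with simple arithmetic. Under Thorndike's constant ratio (CR) model, selection is unbiased when the ratio of the acceptance probability to the success probability is the same across groups. The numbers below are invented purely for illustration; they show that equal acceptance/success ratios do not imply equal ratios for the converse (rejection/failure) probabilities.

```python
# Hypothetical two-group example: P(success) and P(accept) per group.
groups = {
    "A": {"p_success": 0.60, "p_accept": 0.30},
    "B": {"p_success": 0.80, "p_accept": 0.40},
}

for name, g in groups.items():
    cr = g["p_accept"] / g["p_success"]                        # acceptance / success
    cr_converse = (1 - g["p_accept"]) / (1 - g["p_success"])   # rejection / failure
    print(f"group {name}: accept/success = {cr:.2f}, reject/failure = {cr_converse:.2f}")

# Both groups satisfy the CR condition (ratio 0.50) in terms of acceptance/success,
# yet the converse ratios differ (1.75 vs. 3.00), illustrating the internal
# inconsistency noted in the abstract.
```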
Peer reviewed
Stone, Clement A. – Journal of Educational Measurement, 2000
Describes a goodness-of-fit statistic that considers the imprecision with which ability is estimated and involves constructing item fit tables based on each examinee's posterior distribution of ability, given the likelihood of the response pattern and an assumed marginal ability distribution. Also describes a Monte Carlo resampling procedure to…
Descriptors: Goodness of Fit, Item Response Theory, Mathematical Models, Monte Carlo Methods
Peer reviewed
Sawyer, Richard – Journal of Educational Measurement, 1996
Decision theory is a useful method for assessing the effectiveness of the components of a course placement system. The effectiveness of placement tests or other variables in identifying underprepared students is described by the conditional probability of success in a standard course. Estimating the conditional probability of success is discussed.…
Descriptors: College Students, Estimation (Mathematics), Higher Education, Mathematical Models
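The Sawyer entry frames placement validity in terms of the conditional probability of success in a standard course given a placement score. The abstract does not specify an estimation method; the sketch below simply estimates that probability empirically within score bands from hypothetical (score, succeeded) records.

```python
from collections import defaultdict

# Hypothetical records: (placement score, succeeded in the standard course).
records = [
    (12, False), (14, False), (15, True), (17, False),
    (18, True), (20, True), (21, True), (23, True),
    (13, False), (19, True), (22, True), (16, True),
]

def conditional_success_rates(records, band_width=5):
    """Estimate P(success | score band) as a simple empirical proportion.

    A minimal stand-in for the decision-theoretic treatment in the paper,
    which would compare such probabilities against a placement cutoff.
    """
    counts = defaultdict(lambda: [0, 0])  # band -> [successes, total]
    for score, success in records:
        band = (score // band_width) * band_width
        counts[band][0] += int(success)
        counts[band][1] += 1
    return {band: s / n for band, (s, n) in sorted(counts.items())}

for band, rate in conditional_success_rates(records).items():
    print(f"scores {band}-{band + 4}: estimated P(success) = {rate:.2f}")
```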
Peer reviewed
Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
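Huynh's procedure evaluates the kappa index analytically from a single administration under a beta-binomial model. As a rough, simulation-based illustration of the same quantity (not Huynh's closed-form approach), the sketch below draws true proportion-correct values from a Beta distribution, simulates two parallel forms, classifies examinees against a cutoff, and computes kappa; all parameter values are invented.

```python
import random

random.seed(0)

def simulate_kappa(alpha=8.0, beta=4.0, n_items=20, cutoff=12, n_examinees=20000):
    """Approximate the mastery-classification kappa under a beta-binomial model.

    True ability p ~ Beta(alpha, beta); scores on two parallel forms are
    Binomial(n_items, p); an examinee is a "master" if score >= cutoff.
    """
    both = one = none = 0
    for _ in range(n_examinees):
        p = random.betavariate(alpha, beta)
        x1 = sum(random.random() < p for _ in range(n_items))
        x2 = sum(random.random() < p for _ in range(n_items))
        m1, m2 = x1 >= cutoff, x2 >= cutoff
        if m1 and m2:
            both += 1
        elif m1 or m2:
            one += 1
        else:
            none += 1

    n = n_examinees
    p_agree = (both + none) / n                      # observed agreement
    p_master = (2 * both + one) / (2 * n)            # marginal P(classified master)
    p_chance = p_master ** 2 + (1 - p_master) ** 2   # chance agreement
    return (p_agree - p_chance) / (1 - p_chance)

print(f"approximate kappa: {simulate_kappa():.3f}")
```

Raising the cutoff, shortening the test, or reducing score variability in this simulation changes the resulting kappa, which is consistent with the factors the abstract identifies.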
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1976
A number of different reliability coefficients have recently been proposed for tests used to differentiate between groups such as masters and nonmasters. One promising index is the proportion of students in a class that are consistently assigned to the same mastery group across two testings. The present paper proposes a single test administration…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Probability
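The index Subkoviak discusses is easy to state when two testings are available: it is the proportion of examinees assigned to the same mastery group both times. The paper's contribution is estimating that proportion from a single administration, which the sketch below does not attempt; the scores and cutoff here are hypothetical.

```python
# Hypothetical scores for the same examinees on two testings, plus a cutoff.
first_testing  = [14, 18, 11, 20, 16, 9, 17, 13, 19, 15]
second_testing = [15, 17, 10, 19, 12, 8, 18, 14, 20, 11]
cutoff = 14  # score >= cutoff -> "master"

def consistency_proportion(scores1, scores2, cutoff):
    """Proportion of examinees given the same mastery classification twice."""
    same = sum(
        (a >= cutoff) == (b >= cutoff)
        for a, b in zip(scores1, scores2)
    )
    return same / len(scores1)

print(f"classification consistency: "
      f"{consistency_proportion(first_testing, second_testing, cutoff):.2f}")
```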
Peer reviewed
Novick, Melvin R.; Lindley, Dennis V. – Journal of Educational Measurement, 1978
The use of some very simple loss or utility functions in educational evaluation has recently been advocated by Gross and Su, Petersen and Novick, and Petersen. This paper demonstrates that more realistic utility functions can easily be used and may be preferable in some applications. (Author/CTM)
Descriptors: Bayesian Statistics, Cost Effectiveness, Mathematical Models, Statistical Analysis
Peer reviewed
Sirotnik, Kenneth; Wellington, Roger – Journal of Educational Measurement, 1977
A single conceptual and theoretical framework for sampling any configuration of data from one or more population matrices is presented, integrating past designs and discussing implications for more general designs. The theory is based upon a generalization of the generalized symmetric mean approach for single matrix samples. (Author/CTM)
Descriptors: Analysis of Variance, Data Analysis, Item Sampling, Mathematical Models
Peer reviewed
Thissen, David M. – Journal of Educational Measurement, 1976
Where estimation of abilities in the lower half of the ability distribution for the Raven Progressive Matrices is important, or an increase in accuracy of ability estimation is needed, multiple-category latent trait estimation provides a rational procedure for realizing gains in accuracy from the use of information in wrong responses.…
Descriptors: Intelligence Tests, Item Analysis, Junior High Schools, Mathematical Models
Peer reviewed
Wilcox, Rand R.; Harris, Chester W. – Journal of Educational Measurement, 1977
Emrick's proposed method for determining a mastery level cut-off score is questioned. Emrick's method is shown to be useful only in limited situations. (JKS)
Descriptors: Correlation, Cutting Scores, Mastery Tests, Mathematical Models
Peer reviewed
Wright, Benjamin D. – Journal of Educational Measurement, 1977
Statements made in a previous article of this journal concerning the Rasch latent trait test model are questioned. Methods of estimation, necessary sample sizes, several formulae, and the general usefulness of the Rasch model are discussed. (JKS)
Descriptors: Computers, Error of Measurement, Item Analysis, Mathematical Models
Peer reviewed
Whitely, Susan E. – Journal of Educational Measurement, 1977
A debate concerning specific issues and the general usefulness of the Rasch latent trait test model is continued. Methods of estimation, necessary sample size, and the applicability of the model are discussed. (JKS)
Descriptors: Error of Measurement, Item Analysis, Mathematical Models, Measurement
Peer reviewed
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
Peer reviewed
Swaminathan, H.; And Others – Journal of Educational Measurement, 1975
A decision-theoretic procedure is outlined which provides a framework within which Bayesian statistical methods can be employed with criterion-referenced tests to improve the quality of decision making in objectives based instructional programs. (Author/DEP)
Descriptors: Bayesian Statistics, Computer Assisted Instruction, Criterion Referenced Tests, Decision Making
Peer reviewed
Levin, Joel R. – Journal of Educational Measurement, 1975
A procedure developed in this study is useful in determining sample size, based on the specification of linear contrasts involving certain of the treatments. (Author/DEP)
Descriptors: Analysis of Variance, Comparative Analysis, Mathematical Models, Measurement Techniques
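Levin's abstract concerns choosing sample size from a specified linear contrast among treatment means but gives no formulas, so the sketch below is a generic power-analysis routine for a planned contrast rather than Levin's specific procedure. It assumes equal group sizes and a known error standard deviation, and searches for the smallest per-group n that reaches a target power for a two-sided t test of the contrast.

```python
from math import sqrt
from scipy.stats import t, nct

def n_per_group_for_contrast(means, weights, sigma, alpha=0.05, power=0.80, n_max=1000):
    """Smallest equal per-group n giving the target power for a planned contrast.

    Generic sketch: the contrast is psi = sum(w_i * mu_i), tested with a
    two-sided t test assuming a common error standard deviation sigma.
    """
    k = len(means)
    psi = sum(w * m for w, m in zip(weights, means))
    for n in range(2, n_max + 1):
        df = k * (n - 1)                       # error degrees of freedom
        se = sigma * sqrt(sum(w * w for w in weights) / n)
        delta = psi / se                       # noncentrality parameter
        t_crit = t.ppf(1 - alpha / 2, df)
        achieved = 1 - nct.cdf(t_crit, df, delta) + nct.cdf(-t_crit, df, delta)
        if achieved >= power:
            return n, achieved
    return None

# Hypothetical contrast: treatment 1 vs. the average of treatments 2 and 3.
print(n_per_group_for_contrast(means=[10.0, 8.0, 7.5], weights=[1.0, -0.5, -0.5], sigma=4.0))
```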