50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing 1,126 to 1,140 of 3,486 results
Peer reviewed
Guttentag, Marcia; Klein, Isobel – Educational and Psychological Measurement, 1976
The relationship between each of several dimensions of the expectancies of fifth through eighth grade black urban pupils and their school achievement was examined. While anticipated differentiation on racial items was not seen, two important factors were found: general personal efficacy and interpersonal control. (Author/JKS)
Descriptors: Academic Achievement, Blacks, Elementary School Students, Junior High School Students
Peer reviewed
Ollendick, Duane G.; Ollendick, Thomas H. – Educational and Psychological Measurement, 1976
Measures of locus of control, intelligence, and academic achievement were administered to a sample of male juvenile delinquents. All intercorrelations among the three tests were positive and significant. However, when the effects of intelligence were partialed out, achievement did not vary significantly for different levels of locus of control.…
Descriptors: Academic Achievement, Analysis of Covariance, Delinquency, Intelligence Quotient
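The partialing step described in the abstract above uses the standard first-order partial correlation; a minimal sketch (the correlation values below are illustrative, not the study's data):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y with z partialed out."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Illustrative values: the achievement-locus correlation shrinks
# once the shared correlation with intelligence is removed.
r = partial_corr(0.50, 0.50, 0.50)
print(round(r, 4))  # 0.3333
```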
Peer reviewed
McQuitty, Louis L.; Koch, Valerie L. – Educational and Psychological Measurement, 1976
A relatively reliable and valid hierarchy of clusters of objects is plotted from the highest column entries, exclusively, of a matrix of interassociations between the objects. Having developed out of a loose definition of types, the method isolates both loose and highly definitive types, and all those in between. (Author/RC)
Descriptors: Cluster Analysis, Cluster Grouping, Comparative Analysis, Data Analysis
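The highest-column-entry rule in the abstract above is the core of McQuitty's elementary linkage analysis; a minimal single-pass sketch on a toy similarity matrix (the matrix values are invented for illustration):

```python
# Toy symmetric similarity matrix for objects A-D (invented values).
sim = {
    ("A", "B"): 0.90, ("A", "C"): 0.20, ("A", "D"): 0.10,
    ("B", "C"): 0.15, ("B", "D"): 0.25, ("C", "D"): 0.80,
}
objs = ["A", "B", "C", "D"]

def s(i, j):
    return sim.get((i, j)) or sim.get((j, i))

# Each object links to the object with its highest column entry.
best = {i: max((j for j in objs if j != i), key=lambda j: s(i, j)) for i in objs}

# Objects connected through best-match links form a type (cluster).
clusters = []
for i in objs:
    for c in clusters:
        if i in c or best[i] in c:
            c.add(i)
            break
    else:
        clusters.append({i, best[i]})

print([sorted(c) for c in clusters])  # [['A', 'B'], ['C', 'D']]
```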
Peer reviewed
Dyck, Walter; Plancke-Schuyten, Gilberte – Educational and Psychological Measurement, 1976
Previous knowledge of the difficulty index and the intercorrelations of the items will allow group results to be predicted and manipulated. A compound binomial probability function of a test score is established, for which a computer program has been written. Three item selections and the appropriate probability distributions are given which give…
Descriptors: Computer Programs, Multiple Choice Tests, Prediction, Probability
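Under the simplifying assumption of independent items (the compound binomial in the abstract above additionally handles item intercorrelations), the test-score distribution can be built by the standard recursive convolution; the item difficulties below are invented:

```python
def score_distribution(p_items):
    """P(total score = k) for independent items with success probabilities p_items."""
    dist = [1.0]  # distribution over scores after 0 items
    for p in p_items:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1 - p)   # item answered incorrectly
            new[k + 1] += prob * p     # item answered correctly
        dist = new
    return dist

print(score_distribution([0.5, 0.5]))  # [0.25, 0.5, 0.25]
```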
Peer reviewed
Keselman, H. J.; And Others – Educational and Psychological Measurement, 1976
Compares the harmonic mean and Kramer unequal-group forms of the Tukey test for various: (a) degrees of disparate group sizes, (b) numbers of groups, and (c) nominal significance levels. (RC)
Descriptors: Comparative Analysis, Probability, Sampling, Statistical Significance
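The two unequal-n forms compared above differ only in the standard-error term of the Tukey critical difference; a sketch of that term (the MS-error and group sizes are invented):

```python
import math

def se_harmonic(ms_error, ns):
    """Tukey form: harmonic mean of all group sizes substituted for n."""
    n_h = len(ns) / sum(1.0 / n for n in ns)
    return math.sqrt(ms_error / n_h)

def se_kramer(ms_error, n_i, n_j):
    """Kramer (Tukey-Kramer) form: pairwise standard error for groups i and j."""
    return math.sqrt(ms_error / 2 * (1.0 / n_i + 1.0 / n_j))

ns = [5, 10, 20]  # invented, deliberately disparate group sizes
print(round(se_harmonic(1.0, ns), 4))
print(round(se_kramer(1.0, 5, 20), 4))
```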
Peer reviewed
Jobson, J. D. – Educational and Psychological Measurement, 1976
Given a sample of responses to a pair of questionnaire items with interval scale values, it is sometimes of interest to know the degree to which respondents select the same response for both items. The coefficient of equality measures the departure from independence in the direction of equality. (RC)
Descriptors: Correlation, Item Analysis, Questionnaires, Response Style (Tests)
Peer reviewed
Forsyth, Robert A. – Educational and Psychological Measurement, 1976
Shoemaker's conclusions related to the influence of various data base characteristics (reliability, variability of item difficulty indices, and degree of skewness in the normative distribution) on the standard error of a mean estimated via multiple matrix sampling procedures are examined. (Author/RC)
Descriptors: Item Sampling, Statistical Analysis, Test Reliability
Peer reviewed
Carroll, Robert M. – Educational and Psychological Measurement, 1976
Examines the similarity between the coordinates which resulted when correlations were used as similarity measures and the factor loadings obtained by factor analyzing the same correlation matrix. Real data, a set of error free data, and some computer generated data containing deliberately introduced sampling error are analyzed. (RC)
Descriptors: Comparative Analysis, Correlation, Data Analysis, Factor Analysis
Peer reviewed
Willson, Victor L. – Educational and Psychological Measurement, 1976
It is shown that the rank-biserial correlation coefficient is a linear function of the U-statistic (Mann and Whitney), so that a test of group mean difference is equivalent to a test of zero correlation for the rank-biserial coefficient. (RC)
Descriptors: Correlation, Hypothesis Testing, Statistical Significance
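The linear relation described above is commonly written as r = 2U/(n1·n2) − 1 (sign conventions vary with which group's U is used); a self-contained check on invented data:

```python
def mann_whitney_u(x, y):
    """U for sample x: count of (x_i, y_j) pairs with x_i > y_j (ties count 1/2)."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def rank_biserial(x, y):
    """Rank-biserial correlation as a linear function of the U-statistic."""
    return 2.0 * mann_whitney_u(x, y) / (len(x) * len(y)) - 1.0

x = [7, 9, 12]  # invented scores, group 1
y = [3, 5, 8]   # invented scores, group 2
print(rank_biserial(x, y))
```

Complete separation of the two groups gives r = 1, and identical rank distributions give r near 0, which is why a test of zero rank-biserial correlation matches a test of group difference.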
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
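Of the scoring methods compared above, formula scoring is the simplest to state: score = R − W/(k−1) for k-option items (the numbers below are illustrative):

```python
def formula_score(rights, wrongs, options):
    """Classical correction-for-guessing formula score: R - W/(k-1)."""
    return rights - wrongs / (options - 1)

# Illustrative: 40 right and 10 wrong on 5-option multiple-choice items.
print(formula_score(40, 10, 5))  # 37.5
```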
Peer reviewed
van der Kamp, Leo J. Th.; Mellenbergh, Gideon J. – Educational and Psychological Measurement, 1976
Joreskog's model of congeneric tests is used to analyze agreement between raters. Raters are treated as measuring instruments. The model of congeneric tests, of which classical parallelism and tau-equivalence are shown to be special cases, is applied to teachers' ratings of students' responses to open-ended questions. (Author/RC)
Descriptors: Goodness of Fit, Mathematical Models, Rating Scales, Statistical Analysis
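The nesting noted above (parallelism and tau-equivalence as special cases of the congeneric model) can be stated compactly; a sketch of the standard formulation:

```latex
% Congeneric model: each measure x_i loads on a single true score \tau
x_i = \mu_i + \lambda_i \tau + e_i, \qquad \operatorname{Var}(e_i) = \theta_i
% Tau-equivalent: equal loadings, \lambda_1 = \cdots = \lambda_p
% Parallel: equal loadings and equal error variances,
%   \lambda_i = \lambda,\ \theta_i = \theta \ \text{for all } i
```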
Peer reviewed
Werts, C. E.; And Others – Educational and Psychological Measurement, 1976
A procedure is presented for the analysis of rating data with correlated intrajudge and uncorrelated interjudge measurement errors. Correlations between true scores on different rating dimensions, reliabilities for each judge on each dimension and correlations between intrajudge errors can be estimated given a minimum of three raters and two…
Descriptors: Correlation, Data Analysis, Error of Measurement, Error Patterns
Peer reviewed
Whitely, Susan E.; Dawis, Rene V. – Educational and Psychological Measurement, 1976
Systematically investigates the effects of test context on verbal analogy item difficulty, in terms of both simple percentage correct and easiness estimates from a parameter-invariant model (Rasch, 1960). (RC)
Descriptors: Analysis of Variance, High School Students, Item Analysis, Mathematical Models
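The parameter-invariant model cited above (Rasch, 1960) gives the probability of a correct response from person ability and item difficulty; a minimal sketch with invented values:

```python
import math

def rasch_p_correct(theta, difficulty):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

print(rasch_p_correct(0.0, 0.0))  # 0.5  (ability equals difficulty)
print(round(rasch_p_correct(1.0, 0.0), 4))
```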
Peer reviewed
Fiske, Donald W.; Barack, Leonard I. – Educational and Psychological Measurement, 1976
The diversity among interpretations of single items in personality questionnaires has been noted previously. Using adjectives from the Adjective Check List (ACL), the study sought evidence bearing on these questions: Does such diversity make the responses to an item not comparable across subjects? If so, what are the implications for scores based…
Descriptors: Adjectives, Check Lists, Individual Differences, Item Analysis
Peer reviewed
Halperin, Silas – Educational and Psychological Measurement, 1976
Component analysis provides an attractive alternative to factor analysis, since component scores are easily determined while factor scores can only be estimated. The correct method of determining component scores is presented as well as several illustrations of how commonly used incorrect methods distort the meaning of the component solution. (RC)
Descriptors: Factor Analysis, Mathematical Models, Matrices, Scores
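The point above, that component scores are exactly determined rather than estimated, can be illustrated in the two-variable case, where the first principal component of the 2×2 covariance matrix has a closed form (the data below are invented):

```python
import math

def first_component_scores(xs, ys):
    """Exact first principal component scores for two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cx = [x - mx for x in xs]
    cy = [y - my for y in ys]
    # Entries of the 2x2 covariance matrix [[a, b], [b, c]]
    a = sum(v * v for v in cx) / (n - 1)
    c = sum(v * v for v in cy) / (n - 1)
    b = sum(u * v for u, v in zip(cx, cy)) / (n - 1)
    # Largest eigenvalue and its unit eigenvector
    lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    v1, v2 = b, lam - a
    norm = math.hypot(v1, v2)
    v1, v2 = v1 / norm, v2 / norm
    # Component scores are exact projections of the centered data
    return [u * v1 + v * v2 for u, v in zip(cx, cy)]

scores = first_component_scores([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
print(scores)  # sums to (numerically) zero, as projections of centered data must
```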