Showing 1 to 15 of 30 results
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Computation, Test Reliability
Peer reviewed
Kaliski, Pamela K.; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna L.; Plake, Barbara S.; Reshetar, Rosemary A. – Educational and Psychological Measurement, 2013
The many-faceted Rasch (MFR) model has been used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR model for examining the quality of ratings obtained from a standard…
Descriptors: Item Response Theory, Models, Standard Setting (Scoring), Science Tests
Peer reviewed
Kuncel, Nathan R.; Wee, Serena; Serafin, Lauren; Hezlett, Sarah A. – Educational and Psychological Measurement, 2010
Extensive research has examined the effectiveness of admissions tests for use in higher education. What has gone unexamined is the extent to which tests are similarly effective for predicting performance at both the master's and doctoral levels. This study empirically synthesizes previous studies to investigate whether or not the Graduate Record…
Descriptors: Graduate Students, Grade Point Average, Doctoral Programs, Tests
Peer reviewed
Liu, Ou Lydia; Minsky, Jennifer; Ling, Guangming; Kyllonen, Patrick – Educational and Psychological Measurement, 2009
In an effort to standardize academic application procedures, the authors developed the Standardized Letters of Recommendation (SLR) to capture important cognitive and noncognitive qualities of graduate school candidates. The SLR, which consists of seven scales, is applied to an intern-selection scenario. Both professor ratings (n = 414) during the…
Descriptors: Rating Scales, Reliability, Validity, Item Response Theory
Peer reviewed
Toland, Michael D.; De Ayala, R. J. – Educational and Psychological Measurement, 2005
This study examined the factorial validity of scores on the newly developed Students' Evaluation of Teaching Effectiveness Rating Scale (SETERS) through a series of confirmatory and multilevel structures. Conventional confirmatory factor analyses using the total covariance and pooled within-covariance matrices from two midwestern universities…
Descriptors: Teacher Evaluation, Test Validity, Rating Scales, Factor Analysis
Peer reviewed
Klein, Gary D.; Denham, Carolyn H. – Educational and Psychological Measurement, 1974
Descriptors: Administrator Evaluation, College Faculty, Models, Questionnaires
Peer reviewed
Aleamoni, Lawrence M. – Educational and Psychological Measurement, 1978
The development of the Arizona Course/Instructor Evaluation Questionnaire is described. Related research is presented demonstrating the reliability and validity of the Arizona Course/Instructor Evaluation Questionnaire. (Author/JKS)
Descriptors: College Faculty, Course Evaluation, Factor Analysis, Higher Education
Peer reviewed
Abbott, Robert D.; Perkins, David – Educational and Psychological Measurement, 1978
The development and implementation of a set of student rating-of-instruction items in a psychology department is discussed. Results from item-level descriptive statistics, correlational analyses, and principal components analysis supported the construct validity of the items. (Author)
Descriptors: College Faculty, Factor Analysis, Higher Education, Item Analysis
Peer reviewed
Doyle, Kenneth O., Jr. – Educational and Psychological Measurement, 1977
The development of the Student Opinion Survey is described in terms of construction of the initial item pool, sampling of items, formation of general and specific scales, and evaluation of psychometric properties. Internal consistency reliabilities, scale intercorrelations, and specific variances are reported. (Author/JKS)
Descriptors: College Faculty, College Students, Higher Education, Questionnaires
Peer reviewed
Ware, John E., Jr.; Williams, Reed G. – Educational and Psychological Measurement, 1977
College students viewed taped lectures that were systematically varied by the lecturer's enthusiasm and the amount of information presented. They then rated the lectures with a standard faculty evaluation form. Discriminant analysis was performed to derive optimal scoring methods for the form. (Author/JKS)
Descriptors: College Faculty, College Students, Discriminant Analysis, Higher Education
Peer reviewed
Umble, M. Michael; Whitten, Betty J. – Educational and Psychological Measurement, 1977
Dimensions of student attitude toward teacher behavior at the college level were investigated via factor analysis. Six factors were extracted and examined with a second sample and compared to dimensions discussed in the literature. (JKS)
Descriptors: College Faculty, College Students, Factor Analysis, Higher Education
Peer reviewed
Moore, Michael – Educational and Psychological Measurement, 1977
The validity of a Hebrew version of the Illinois Course Evaluation Questionnaire was investigated in eighty courses in science and engineering at the Israel Institute of Technology. Factor analysis revealed a reasonable correspondence between empirical structure and that afforded by logic and intuition. (Author/JKS)
Descriptors: College Faculty, Course Evaluation, Factor Analysis, Factor Structure
Peer reviewed
Barnoski, Robert P.; Sockloff, Alan L. – Educational and Psychological Measurement, 1976
Consistent with the distinction between student ratings (ratings assigned) and class ratals (ratings received), various properties of the FACE Instrument scales were investigated. (RC)
Descriptors: College Faculty, College Students, Course Evaluation, Factor Analysis
Peer reviewed
Schurr, Terry K.; And Others – Educational and Psychological Measurement, 1986
The effects of different instructors and student Myers-Briggs Type Inventory personality characteristics on the accuracy of predicting 1041 introductory English composition grades from a single regression equation were investigated. Results suggested that grades were inflated in classes with high means and deflated in classes with low means.…
Descriptors: Aptitude Tests, College English, College Faculty, College Students
Peer reviewed
Kinicki, Angelo J.; And Others – Educational and Psychological Measurement, 1985
Using both the Behaviorally Anchored Rating Scales (BARS) and the Purdue University Scales, 727 undergraduates rated 32 instructors. The BARS had less halo effect, more leniency error, and lower interrater reliability. Both formats were valid. The two tests did not differ in rate discrimination or susceptibility to rating bias. (Author/GDC)
Descriptors: Behavior Rating Scales, College Faculty, Comparative Testing, Higher Education