50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 31 to 45 of 161 results
Peer reviewed
McMillan, James H.; Venable, Jessica C.; Varier, Divya – Practical Assessment, Research & Evaluation, 2013
Kingston and Nash (2011) recently presented a meta-analysis of studies showing that the effect of formative assessment on K-12 student achievement may not be as robust as widely believed. This investigation analyzes the methodology used in the Kingston and Nash meta-analysis and provides further analyses of the studies included in that meta-analysis. These…
Descriptors: Formative Evaluation, Academic Achievement, Elementary Secondary Education, Educational Research
Peer reviewed
Fives, Helenrose; DiDonato-Barnes, Nicole – Practical Assessment, Research & Evaluation, 2013
Classroom tests provide teachers with essential information used to make decisions about instruction and student grades. A table of specifications (TOS) can be used to help teachers frame the decision-making process of test construction and improve the validity of teachers' evaluations based on tests constructed for classroom use. In this article…
Descriptors: Student Evaluation, Test Construction, Test Content, Teacher Made Tests
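The TOS idea the abstract describes can be sketched as a small data structure: test items allocated to content topics in proportion to instructional time, then split across cognitive levels. This is a minimal illustration, not the authors' exact procedure; topic names, weights, and counts are invented:

```python
# Sketch of a table of specifications (TOS): allocate test items to
# content topics in proportion to instructional time, then split each
# topic's items across cognitive levels. All numbers are hypothetical.

def build_tos(time_per_topic, total_items, level_weights):
    """Return {topic: {level: item_count}} allocating items by time share."""
    total_time = sum(time_per_topic.values())
    tos = {}
    for topic, minutes in time_per_topic.items():
        topic_items = round(total_items * minutes / total_time)
        tos[topic] = {level: round(topic_items * w)
                      for level, w in level_weights.items()}
    return tos

time_per_topic = {"Fractions": 120, "Decimals": 60, "Percentages": 60}  # minutes taught
level_weights = {"Recall": 0.5, "Application": 0.5}

tos = build_tos(time_per_topic, total_items=24, level_weights=level_weights)
```

A topic that consumed half the instructional time receives half the items, which is the proportionality argument behind using a TOS to support content validity.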
Peer reviewed
Beaujean, A. Alexander – Practical Assessment, Research & Evaluation, 2013
"R" (R Development Core Team, 2011) is a very powerful tool to analyze data, that is gaining in popularity due to its costs (its free) and flexibility (its open-source). This article gives a general introduction to using "R" (i.e., loading the program, using functions, importing data). Then, using data from Canivez, Konold, Collins, and Wilson…
Descriptors: Factor Analysis, Data Analysis, Computer Software, Open Source Technology
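As a rough analogue of the import-data-then-apply-functions workflow the article walks through (the article itself uses R, not Python), here is a minimal Python sketch; the score vectors are invented:

```python
# Minimal analogue of a basic data-analysis workflow: define data,
# apply a function, inspect the result. Scores below are invented.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

scores_a = [10, 12, 9, 14, 11]
scores_b = [20, 25, 18, 29, 22]
r = pearson_r(scores_a, scores_b)  # strong positive correlation
```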
Peer reviewed
Baghaei, Purya; Carstensen, Claus H. – Practical Assessment, Research & Evaluation, 2013
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
Descriptors: Item Response Theory, Models, Reading Comprehension, Reading Tests
Peer reviewed
Beavers, Amy S.; Lounsbury, John W.; Richards, Jennifer K.; Huck, Schuyler W.; Skolits, Gary J.; Esquivel, Shelley L. – Practical Assessment, Research & Evaluation, 2013
The uses and methodology of factor analysis are widely debated and discussed, especially the issues of rotational use, methods of confirmatory factor analysis, and adequate sample size. The variety of perspectives and often conflicting opinions can lead to confusion among researchers about best practices for using factor analysis. The focus of the…
Descriptors: Factor Analysis, Educational Research, Best Practices, Sample Size
Peer reviewed
Derzon, James H.; Alford, Aaron A. – Practical Assessment, Research & Evaluation, 2013
Forest plots provide an effective means of presenting a wealth of information in a single graphic. Whether used to illustrate multiple results in a single study or the cumulative knowledge of an entire field, forest plots have become an accepted and generally understood way of presenting many estimates simultaneously. This article explores…
Descriptors: Spreadsheets, Graphs, Statistical Analysis, Meta Analysis
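The quantities a forest plot displays — per-study point estimates with confidence-interval whiskers, plus a pooled estimate for the summary diamond — take only a few lines to compute. A minimal sketch with invented study data, assuming normal-theory 95% intervals and inverse-variance fixed-effect pooling:

```python
# Quantities behind a forest plot: per-study estimates with 95% CIs
# (estimate +/- 1.96 * SE) and an inverse-variance pooled estimate.
# Study labels, effects, and standard errors are invented.

studies = [
    ("Study A", 0.30, 0.10),  # (label, effect size, standard error)
    ("Study B", 0.45, 0.15),
    ("Study C", 0.10, 0.08),
]

def ci_95(effect, se):
    """Lower and upper whisker positions for one study."""
    half = 1.96 * se
    return effect - half, effect + half

rows = [(label, eff, *ci_95(eff, se)) for label, eff, se in studies]

# Fixed-effect pooled estimate (the forest plot's summary diamond):
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * eff for w, (_, eff, _) in zip(weights, studies)) / sum(weights)
```

Each entry in `rows` is `(label, estimate, lower, upper)`, ready to feed a plotting library's horizontal error-bar call — or, as the article discusses, a spreadsheet chart.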
Peer reviewed
Courtney, Matthew Gordon Ray – Practical Assessment, Research & Evaluation, 2013
Exploratory factor analysis (EFA) is a common technique utilized in the development of assessment instruments. The key question when performing this procedure is how to best estimate the number of factors to retain. This is especially important as under- or over-extraction may lead to erroneous conclusions. Although recent advancements have been…
Descriptors: Factor Analysis, Computer Software, Open Source Technology, Computation
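One widely recommended retention criterion, Horn's parallel analysis, can be sketched briefly: retain only the factors whose observed eigenvalues exceed the average eigenvalues obtained from random data of the same dimensions. This is a minimal illustration (not the article's own code), using simulated data built to contain one strong common factor:

```python
# Sketch of Horn's parallel analysis: keep factors whose observed
# eigenvalues exceed the mean eigenvalues of same-sized random data.
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Return the suggested number of factors to retain."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_mean = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand_mean += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand_mean /= n_sims
    return int(np.sum(obs > rand_mean))

# Simulated responses to six items driven by a single common factor:
rng = np.random.default_rng(1)
factor = rng.standard_normal((200, 1))
data = factor @ np.ones((1, 6)) + 0.5 * rng.standard_normal((200, 6))
k = parallel_analysis(data)  # should suggest retaining one factor
```

Because the first observed eigenvalue dwarfs its random-data counterpart while the rest fall below theirs, the criterion retains a single factor here — the under/over-extraction trade-off the abstract warns about.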
Peer reviewed
Han, Kyung T. – Practical Assessment, Research & Evaluation, 2012
For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…
Descriptors: Statistical Analysis, Models, Multiple Choice Tests, Guessing (Tests)
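The 3PLM item response function itself is compact: the c-parameter sets a lower asymptote, which is why it is often — the article argues misleadingly — read as a guessing parameter. A minimal sketch with invented item parameters:

```python
# 3PL item response function: probability of a correct answer given
# ability theta, discrimination a, difficulty b, lower asymptote c.
import math

def p_correct_3pl(theta, a, b, c):
    """P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b, probability is midway between c and 1:
p_mid = p_correct_3pl(theta=0.0, a=1.2, b=0.0, c=0.2)   # (1 + c) / 2
# Far below the item's difficulty, probability approaches c:
p_low = p_correct_3pl(theta=-6.0, a=1.2, b=0.0, c=0.2)
```

Even very low-ability examinees answer correctly with probability near c, which is the behavior loosely attributed to guessing on multiple-choice items.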
Peer reviewed
Frey, Bruce B.; Schmitt, Vicki L.; Allen, Justin P. – Practical Assessment, Research & Evaluation, 2012
A commonly advocated best practice for classroom assessment is to make the assessments authentic. "Authentic" is often taken to mean mirroring real-world tasks or expectations. There is no consensus, however, on the actual definition of the term or the characteristics of an authentic classroom assessment. Sometimes, the realistic component…
Descriptors: Performance Based Assessment, Educational Research, Elementary Secondary Education, Preschool Children
Peer reviewed
Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D. – Practical Assessment, Research & Evaluation, 2012
This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
Descriptors: Likert Scales, Rating Scales, Reliability, Computation
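The conventional Pearson-based estimate the paper contrasts with ordinal reliability — Cronbach's alpha computed from item and total-score variances — can be sketched directly; ordinal alpha applies the same formula to a polychoric rather than Pearson correlation matrix. The item responses below are invented:

```python
# Conventional Cronbach's alpha from item and total-score variances —
# the Pearson-based estimate contrasted with ordinal alpha (which
# substitutes a polychoric correlation matrix). Responses are invented.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per item, respondents aligned by index."""
    k = len(items)
    item_var_sum = sum(variance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Five respondents answering three 5-point Likert-type items:
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)
```

Treating the ordered categories as interval numbers, as here, is exactly the assumption the paper questions for ordinal item response data.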
Peer reviewed
Baryla, Ed; Shelley, Gary; Trainor, William – Practical Assessment, Research & Evaluation, 2012
Student learning and program effectiveness are often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…
Descriptors: Program Effectiveness, Factor Analysis, Competence, Scoring Rubrics
Peer reviewed
Kalaian, Sema A.; Kasim, Rafa M. – Practical Assessment, Research & Evaluation, 2012
The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through well-designed, systematic, multiple sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…
Descriptors: Delphi Technique, Mail Surveys, Online Surveys, Data Collection
Peer reviewed
Heath, Linda; DeHoek, Adam; Locatelli, Sara House – Practical Assessment, Research & Evaluation, 2012
Evaluators frequently make use of indirect measures of participant learning or skill mastery, with participants either being asked if they have learned material or mastered a skill or being asked to indicate how confident they are that they know the material or can perform the task in question. Unfortunately, a large body of research in social psychology…
Descriptors: Measures (Individuals), Validity, Self Evaluation (Individuals), Learning
Peer reviewed
Carleton, R. Nicholas; Thibodeau, Michel A.; Osborne, Jason W.; Asmundson, Gordon J. G. – Practical Assessment, Research & Evaluation, 2012
The present study was designed to test for item order effects by measuring four distinct constructs that contribute substantively to anxiety-related psychopathology (i.e., anxiety sensitivity, fear of negative evaluation, injury/illness sensitivity, and intolerance of uncertainty). Participants (n = 999; 71% women) were randomly assigned to…
Descriptors: Anxiety, Test Items, Serial Ordering, Measures (Individuals)
Peer reviewed
Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim – Practical Assessment, Research & Evaluation, 2012
Multiple regression (MR) analyses are commonly employed in social science fields. Interpretation of results, however, typically reflects an overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…
Descriptors: Multiple Regression Analysis, Predictor Variables, Measurement, Correlation
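One complementary index such articles recommend alongside beta weights is the structure coefficient — the correlation between a predictor and the model's predicted scores — which can stay large even when collinearity shrinks a predictor's beta weight. A minimal sketch with simulated data (not the authors' code):

```python
# Beta weights vs. structure coefficients in multiple regression.
# Structure coefficient = r(predictor, predicted scores). Data simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x1 = rng.standard_normal(n)
x2 = 0.7 * x1 + 0.3 * rng.standard_normal(n)   # x2 correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.standard_normal(n)

X = np.column_stack([np.ones(n), x1, x2])      # intercept + predictors
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # [b0, b1, b2]
y_hat = X @ coef

# Structure coefficients: both predictors correlate strongly with the
# model's predictions, whatever their beta weights suggest.
rs_x1 = np.corrcoef(x1, y_hat)[0, 1]
rs_x2 = np.corrcoef(x2, y_hat)[0, 1]
```

Reading only the beta weights here would understate x2's relationship with the predicted criterion — the "very limited interpretations of variable importance" the abstract cautions against.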