50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing 1 to 15 of 24 results
Peer reviewed | Direct link
Benton, Tom – Practical Assessment, Research & Evaluation, 2014
This article demonstrates how meta-analytic techniques that have typically been used to synthesize findings across numerous studies can also be applied to examine the reasons why relationships between background characteristics and outcomes may vary across different locations in a single multi-site survey. This application is particularly…
Descriptors: Regression (Statistics), Meta Analysis, Academic Achievement, Institutional Autonomy
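The pooling step this abstract alludes to can be sketched in a few lines: inverse-variance (fixed-effect) pooling of per-site estimates, with Cochran's Q as an index of between-site heterogeneity. A minimal sketch in Python; the function name and per-site slopes are hypothetical, not taken from the article:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of per-site estimates,
    with Cochran's Q statistic as an index of between-site heterogeneity."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # Q: weighted squared deviations of site estimates from the pooled value
    q = sum(w * (b - pooled) ** 2 for w, b in zip(weights, estimates))
    return pooled, pooled_se, q

# Hypothetical per-site slopes of achievement on a background variable
slopes = [0.30, 0.25, 0.42, 0.18]
ses = [0.05, 0.06, 0.04, 0.07]
pooled, pooled_se, q = pool_fixed_effect(slopes, ses)
```

A large Q relative to its chi-square reference (here with 3 degrees of freedom) would suggest the slope genuinely varies across sites.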
Peer reviewed | Direct link
Baglin, James – Practical Assessment, Research & Evaluation, 2014
Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many…
Descriptors: Factor Analysis, Data, Likert Scales, Computer Software
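As a rough illustration of the extraction step underlying EFA, the sketch below pulls first-component loadings from a small correlation matrix by power iteration. This is a principal-component stand-in for factor extraction, not the authors' procedure, and the matrix is invented for illustration:

```python
def first_component_loadings(corr, iters=200):
    """Power iteration: dominant eigenvector of a correlation matrix,
    rescaled to loadings (a crude stand-in for one-factor extraction)."""
    n = len(corr)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)      # converges to the eigenvalue
        v = [x / lam for x in w]
    norm = sum(x * x for x in v) ** 0.5
    unit = [x / norm for x in v]
    return [(lam ** 0.5) * x for x in unit], lam

# Hypothetical inter-item correlations
corr = [[1.0, 0.6, 0.5],
        [0.6, 1.0, 0.4],
        [0.5, 0.4, 1.0]]
loadings, eig = first_component_loadings(corr)
```

The eigenvalue (here around 2) is what rules of thumb like "eigenvalues greater than 1" inspect when deciding how many factors to retain.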
Peer reviewed | Direct link
Walser, Tamara M. – Practical Assessment, Research & Evaluation, 2014
There is increased emphasis on using experimental and quasi-experimental methods to evaluate educational programs; however, educational evaluators and school leaders are often faced with challenges when implementing such designs in educational settings. Use of a historical cohort control group design provides a viable option for conducting…
Descriptors: Quasiexperimental Design, Cohort Analysis, Control Groups, Educational Assessment
Peer reviewed | Direct link
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed | Direct link
Adelson, Jill L. – Practical Assessment, Research & Evaluation, 2013
Often it is infeasible or unethical to use random assignment in educational settings to study important constructs and questions. Hence, educational research often uses observational data, such as large-scale secondary data sets and state and school district data, and quasi-experimental designs. One method of reducing selection bias in estimations…
Descriptors: Educational Research, Data, Statistical Bias, Probability
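One common selection-bias-reduction technique in this family is propensity score matching. A minimal greedy 1:1 nearest-neighbor matcher within a caliper, assuming the propensity scores have already been estimated (all ids, scores, and the caliper are hypothetical):

```python
def greedy_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on propensity score.
    treated/controls: lists of (unit_id, propensity) pairs.
    Returns matched (treated_id, control_id) pairs within the caliper."""
    pairs = []
    available = dict(controls)
    # Match hardest-to-match (highest-propensity) treated units first
    for tid, p in sorted(treated, key=lambda x: x[1], reverse=True):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - p))
        if abs(available[cid] - p) <= caliper:
            pairs.append((tid, cid))
            del available[cid]       # matching without replacement
    return pairs

treated = [("t1", 0.80), ("t2", 0.50)]
controls = [("c1", 0.78), ("c2", 0.52), ("c3", 0.20)]
pairs = greedy_match(treated, controls)
```

Treated units with no control inside the caliper simply go unmatched, which is the usual trade-off between bias reduction and sample size.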
Peer reviewed | Direct link
Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason – Practical Assessment, Research & Evaluation, 2013
In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…
Descriptors: Multiple Regression Analysis, Misconceptions, Reader Response, Predictor Variables
Peer reviewed | Direct link
Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2013
Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed…
Descriptors: Multiple Regression Analysis, Least Squares Statistics, Computation, Statistical Analysis
Peer reviewed | Direct link
Beauducel, Andre; Leue, Anja – Practical Assessment, Research & Evaluation, 2013
In several studies, unit-weighted sum scales based on the unweighted sum of items are derived from the pattern of salient loadings in confirmatory factor analysis. The problem with this procedure is that the unit-weighted sum scales imply a model other than the initially tested confirmatory factor model. In consequence, it remains generally unknown…
Descriptors: Factor Analysis, Structural Equation Models, Goodness of Fit, Personality Measures
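The tension the abstract describes can be made concrete: a unit-weighted sum scale has a model-implied reliability (coefficient omega) computable directly from a one-factor CFA solution, even though the sum scale itself departs from that model. A minimal sketch with hypothetical standardized loadings:

```python
def composite_omega(loadings, uniquenesses):
    """Model-implied reliability (coefficient omega) of a unit-weighted
    sum scale built from a one-factor CFA solution:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    num = sum(loadings) ** 2
    return num / (num + sum(uniquenesses))

lam = [0.7, 0.6, 0.8, 0.5]            # hypothetical salient loadings
theta = [1 - l ** 2 for l in lam]     # standardized uniquenesses
omega = composite_omega(lam, theta)
```

Here omega is about 0.75; the point is that this value follows from the factor model, whereas the sum scale implicitly assumes equal loadings.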
Peer reviewed | Direct link
Derzon, James H.; Alford, Aaron A. – Practical Assessment, Research & Evaluation, 2013
Forest plots provide an effective means of presenting a wealth of information in a single graphic. Whether used to illustrate multiple results in a single study or the cumulative knowledge of an entire field, forest plots have become an accepted and generally understood way of presenting many estimates simultaneously. This article explores…
Descriptors: Spreadsheets, Graphs, Statistical Analysis, Meta Analysis
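A forest plot's ingredients are just labels, point estimates, and interval endpoints on a shared axis. The sketch below renders those ingredients as a crude text-mode plot; it is an illustration of the idea only, unrelated to the spreadsheet approach the article explores:

```python
def text_forest(rows, width=40, lo=-1.0, hi=1.0):
    """Crude text-mode forest plot. Each row is
    (label, estimate, ci_low, ci_high), drawn on a shared axis [lo, hi]."""
    def col(x):
        return round((x - lo) / (hi - lo) * (width - 1))
    lines = []
    for label, est, cl, ch in rows:
        axis = [" "] * width
        for i in range(col(cl), col(ch) + 1):
            axis[i] = "-"                 # confidence interval
        axis[col(est)] = "*"              # point estimate
        lines.append(f"{label:<10}|{''.join(axis)}| "
                     f"{est:+.2f} [{cl:+.2f}, {ch:+.2f}]")
    return "\n".join(lines)

rows = [("Study A", 0.2, 0.0, 0.4),
        ("Study B", -0.1, -0.5, 0.3)]
print(text_forest(rows))
```

Real forest plots add a marker whose area reflects study weight and a summary diamond, but the alignment logic is the same.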
Peer reviewed | Direct link
Buckendahl, Chad W.; Davis-Becker, Susan L. – Practical Assessment, Research & Evaluation, 2012
The consequences associated with the uses and interpretations of scores for many credentialing testing programs have important implications for a range of stakeholders. Within licensure settings specifically, results from examination programs are often one of the final steps in the process of assessing whether individuals will be allowed to enter…
Descriptors: Licensing Examinations (Professions), Test Items, Dentistry, Minimum Competency Testing
Peer reviewed | Direct link
Osborne, Jason W.; Fitzpatrick, David C. – Practical Assessment, Research & Evaluation, 2012
Exploratory Factor Analysis (EFA) is a powerful and commonly-used tool for investigating the underlying variable structure of a psychometric instrument. However, there is much controversy in the social sciences with regard to the techniques used in EFA (Ford, MacCallum, & Tait, 1986; Henson & Roberts, 2006) and the reliability of the outcome.…
Descriptors: Factor Analysis, Replication (Evaluation), Reliability, Factor Structure
Peer reviewed | Direct link
Filsecker, Michael; Kerres, Michael – Practical Assessment, Research & Evaluation, 2012
Within the recognized tensions between statewide testing and the process of teaching and learning, formative assessment's potential for improving student learning and for shedding light "inside the black box," has received increased attention from scholars in different countries. In their critical review, Dunn & Mulvenon (2009) pointed out the…
Descriptors: Formative Evaluation, Educational Assessment, Definitions, Evidence
Peer reviewed | Direct link
Thompson, Nathan A.; Weiss, David J. – Practical Assessment, Research & Evaluation, 2011
A substantial amount of research has been conducted over the past 40 years on technical aspects of computerized adaptive testing (CAT), such as item selection algorithms, item exposure controls, and termination criteria. However, there is little literature providing practical guidance on the development of a CAT. This paper seeks to collate some…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Construction, Models
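One widely used item-selection algorithm of the kind the abstract mentions is maximum Fisher information: administer the unused item most informative at the current ability estimate. A minimal 2PL sketch (item parameters hypothetical):

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered):
    """Maximum-information selection: pick the unadministered item
    (id, a, b) most informative at the current ability estimate."""
    candidates = [it for it in pool if it[0] not in administered]
    return max(candidates,
               key=lambda it: item_information(theta, it[1], it[2]))

pool = [("i1", 1.0, -1.0), ("i2", 1.0, 0.0), ("i3", 1.0, 1.5)]
next_item = select_item(0.1, pool, administered=set())
```

With equal discriminations, this reduces to picking the item whose difficulty is closest to the ability estimate; operational CATs layer exposure controls on top of this step.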
Peer reviewed | Direct link
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the termination criterion,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
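A common termination criterion of the kind the abstract discusses is the sequential probability ratio test (SPRT): accumulate a log-likelihood ratio comparing ability at two points bracketing the cutscore, and stop once it crosses a decision bound. A minimal sketch under a 2PL model (cut points, error rates, and item parameters all hypothetical):

```python
import math

def sprt_decision(responses, items, theta_pass=0.5, theta_fail=-0.5,
                  alpha=0.05, beta=0.05):
    """SPRT termination for pass/fail classification.
    responses: 0/1 per administered item; items: (a, b) 2PL parameters.
    Returns 'pass', 'fail', or 'continue'."""
    def p(theta, a, b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    llr = 0.0
    for x, (a, b) in zip(responses, items):
        p1, p0 = p(theta_pass, a, b), p(theta_fail, a, b)
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    upper = math.log((1 - beta) / alpha)   # pass boundary
    lower = math.log(beta / (1 - alpha))   # fail boundary
    if llr >= upper:
        return "pass"
    if llr <= lower:
        return "fail"
    return "continue"

items = [(1.0, 0.0)] * 12
decision = sprt_decision([1] * 12, items)
```

Examinees far from the cutscore cross a boundary after few items; those near it keep testing until a maximum test length forces a decision.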
Peer reviewed | Direct link
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2011
Assessing the comparability of different groups is an issue facing many researchers and evaluators in a variety of settings. Commonly, null hypothesis significance testing (NHST) is incorrectly used to demonstrate comparability when a non-significant result is found. This is problematic because a failure to find a difference between groups is not…
Descriptors: Medical Education, Evaluators, Intervals, Testing
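The confidence-interval alternative this line of argument points toward can be sketched as an equivalence check: declare two groups comparable only if the entire interval for the mean difference lies within a pre-set equivalence margin, rather than inferring comparability from a non-significant NHST result. A minimal normal-approximation sketch (data and margin hypothetical):

```python
from statistics import mean, stdev
import math

def equivalence_by_ci(group1, group2, margin, z=1.96):
    """CI-inclusion equivalence check: build an approximate 95% CI for
    the mean difference and declare equivalence only if the whole
    interval falls within +/- margin."""
    diff = mean(group1) - mean(group2)
    se = math.sqrt(stdev(group1) ** 2 / len(group1) +
                   stdev(group2) ** 2 / len(group2))
    ci = (diff - z * se, diff + z * se)
    equivalent = ci[0] > -margin and ci[1] < margin
    return ci, equivalent

g1 = [10, 11, 9, 10, 10, 11, 9, 10]
g2 = [10, 10, 11, 9, 10, 11, 9, 10]
ci, equivalent = equivalence_by_ci(g1, g2, margin=2.0)
```

Note the asymmetry this fixes: a wide interval straddling zero yields "not significant" under NHST but correctly fails the equivalence check here.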