Publication Date
| In 2015 | 0 |
| Since 2014 | 3 |
| Since 2011 (last 5 years) | 8 |
| Since 2006 (last 10 years) | 13 |
| Since 1996 (last 20 years) | 13 |
Descriptor
| Comparative Analysis | 13 |
| Statistical Analysis | 5 |
| Intervals | 3 |
| Item Response Theory | 3 |
| Predictor Variables | 3 |
| Questionnaires | 3 |
| Sample Size | 3 |
| Academic Achievement | 2 |
| Computer Assisted Testing | 2 |
| Computer Software | 2 |
Source
| Practical Assessment,… | 13 |
Author
| Baglin, James | 1 |
| Briggs, Derek C. | 1 |
| Dadey, Nathan | 1 |
| Dodou, Dimitra | 1 |
| Gamliel, Eyal | 1 |
| Glutting, Joseph J. | 1 |
| Lovato, Chris Y. | 1 |
| Millet, Ido | 1 |
| Nandakumar, Ratna | 1 |
| Peer, Eyal | 1 |
Publication Type
| Journal Articles | 13 |
| Reports - Evaluative | 6 |
| Reports - Research | 6 |
| Reports - Descriptive | 1 |
Education Level
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Postsecondary Education | 2 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 3 | 1 |
| Grade 4 | 1 |
| Grade 5 | 1 |
| Grade 6 | 1 |
| Grade 7 | 1 |
Showing all 13 results
Baglin, James – Practical Assessment, Research & Evaluation, 2014
Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many…
Descriptors: Factor Analysis, Data, Likert Scales, Computer Software
Walser, Tamara M. – Practical Assessment, Research & Evaluation, 2014
There is increased emphasis on using experimental and quasi-experimental methods to evaluate educational programs; however, educational evaluators and school leaders are often faced with challenges when implementing such designs in educational settings. Use of a historical cohort control group design provides a viable option for conducting…
Descriptors: Quasiexperimental Design, Cohort Analysis, Control Groups, Educational Assessment
Rubright, Jonathan D.; Nandakumar, Ratna; Glutting, Joseph J. – Practical Assessment, Research & Evaluation, 2014
When exploring missing data techniques in a realistic scenario, the current literature is limited: most studies only consider consequences with data missing on a single variable. This simulation study compares the relative bias of two commonly used missing data techniques when data are missing on more than one variable. Factors varied include type…
Descriptors: Simulation, Data, Comparative Analysis, Predictor Variables
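A minimal sketch of the kind of bias such simulations measure, assuming a single continuous outcome whose missingness depends on a fully observed covariate (the setup and numbers are illustrative, not the study's design):

```python
import random

random.seed(42)  # deterministic for reproducibility
n = 10_000

# Fully observed covariate x and an outcome y correlated with it
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]

# Missing-at-random mechanism: y is unobserved whenever x > 0
observed_y = [yi for xi, yi in zip(x, y) if xi <= 0]

true_mean = sum(y) / n
cc_mean = sum(observed_y) / len(observed_y)  # complete-case (listwise) estimate
bias = cc_mean - true_mean  # negative: high-x (hence high-y) cases were dropped
```

Because the dropped cases are systematically high on y, the complete-case estimate is biased downward, which is exactly the effect such studies quantify across techniques.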
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available to estimate both propensity scores and construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity scoring estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
Dadey, Nathan; Briggs, Derek C. – Practical Assessment, Research & Evaluation, 2012
A vertical scale, in principle, provides a common metric across tests with differing difficulties (e.g., spanning multiple grades) so that statements of "absolute" growth can be made. This paper compares 16 states' 2007-2008 effect size growth trends on vertically scaled reading and math assessments across grades 3 to 8. Two patterns common in…
Descriptors: Meta Analysis, Scaling, Effect Size, Reading Tests
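Growth effect sizes of the kind referred to above are typically standardized mean differences between adjacent grades; a sketch with made-up scale scores (not the states' data):

```python
import statistics

# Hypothetical vertical-scale scores for adjacent grades
grade3 = [200, 210, 220, 230]
grade4 = [215, 225, 235, 245]

n1, n2 = len(grade3), len(grade4)
v1, v2 = statistics.variance(grade3), statistics.variance(grade4)

# Pooled standard deviation, then Cohen's d as the growth effect size
pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
d = (statistics.fmean(grade4) - statistics.fmean(grade3)) / pooled_sd
```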
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the termination criterion,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2011
Assessing the comparability of different groups is an issue facing many researchers and evaluators in a variety of settings. Commonly, null hypothesis significance testing (NHST) is incorrectly used to demonstrate comparability when a non-significant result is found. This is problematic because a failure to find a difference between groups is not…
Descriptors: Medical Education, Evaluators, Intervals, Testing
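The interval-based alternative this line of work points toward can be sketched as a simple equivalence check: groups are declared comparable only if a 90% confidence interval for their mean difference falls entirely inside a pre-set equivalence margin. The function and numbers below are hypothetical:

```python
Z90 = 1.645  # two one-sided tests at alpha = .05 use a 90% CI

def equivalent(mean_a: float, mean_b: float, se_diff: float, margin: float) -> bool:
    """True if the 90% CI for (mean_a - mean_b) lies within +/- margin."""
    diff = mean_a - mean_b
    lo, hi = diff - Z90 * se_diff, diff + Z90 * se_diff
    return -margin < lo and hi < margin

# A tight interval supports a claim of comparability...
close_groups = equivalent(75.0, 74.2, se_diff=0.5, margin=2.0)
# ...a noisy one does not, even though an NHST would also be "non-significant"
noisy_groups = equivalent(75.0, 74.2, se_diff=1.5, margin=2.0)
```

The second case illustrates the abstract's point: failing to reject a difference is not the same as demonstrating equivalence.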
Peer, Eyal; Gamliel, Eyal – Practical Assessment, Research & Evaluation, 2011
When respondents answer paper-and-pencil (PP) questionnaires, they sometimes modify their responses to correspond to previously answered items. As a result, this response bias might artificially inflate the reliability of PP questionnaires. We compared the internal consistency of PP questionnaires to computerized questionnaires that presented a…
Descriptors: Response Style (Tests), Questionnaires, Reliability, Undergraduate Students
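Internal consistency in such comparisons is usually reported as Cronbach's alpha; a self-contained sketch on made-up questionnaire data:

```python
import statistics

# Hypothetical responses: 4 respondents x 3 Likert items (rows = respondents)
responses = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
]

k = len(responses[0])          # number of items
items = list(zip(*responses))  # transpose to per-item columns
item_vars = sum(statistics.variance(col) for col in items)
total_var = statistics.variance([sum(row) for row in responses])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = k / (k - 1) * (1 - item_vars / total_var)
```

An inflated alpha on paper-and-pencil forms, as the abstract suggests, would show up as a higher value of this statistic than the computerized format produces.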
Millet, Ido – Practical Assessment, Research & Evaluation, 2010
We define Grade Lift as the difference between average class grade and average cumulative class GPA. This metric provides an assessment of how lenient the grading was for a given course. In 2006, we started providing faculty members individualized Grade Lift reports reflecting their position relative to an anonymously plotted school-wide…
Descriptors: Grade Point Average, Grading, Statistical Analysis, Comparative Analysis
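The Grade Lift metric as defined above is straightforward to compute; a sketch with hypothetical course data:

```python
from statistics import fmean

# Hypothetical data for one course section
class_grades = [3.7, 3.3, 4.0, 3.0]     # grades awarded in the course
cumulative_gpas = [3.1, 3.0, 3.5, 2.8]  # the same students' cumulative GPAs

# Grade Lift = average class grade - average cumulative class GPA
grade_lift = fmean(class_grades) - fmean(cumulative_gpas)
# positive lift suggests grading more lenient than the students' overall record
```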
de Winter, Joost C. F.; Dodou, Dimitra – Practical Assessment, Research & Evaluation, 2010
Likert questionnaires are widely used in survey research, but it is unclear whether the item data should be investigated by means of parametric or nonparametric procedures. This study compared the Type I and II error rates of the "t" test versus the Mann-Whitney-Wilcoxon (MWW) for five-point Likert items. Fourteen population distributions were…
Descriptors: Evaluation Methods, Questionnaires, Likert Scales, Statistical Analysis
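The two test statistics being compared can both be computed with the standard library; a sketch on two hypothetical five-point Likert samples (group labels and values are made up):

```python
import statistics

a = [1, 2, 2, 3, 3]  # hypothetical Likert responses, group A
b = [3, 4, 4, 5, 5]  # group B

# Mann-Whitney U statistic: count pairs where A beats B (ties count half)
U = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

# Welch's t statistic (unequal-variance two-sample t)
va, vb = statistics.variance(a), statistics.variance(b)
t = (statistics.fmean(a) - statistics.fmean(b)) / (va / len(a) + vb / len(b)) ** 0.5
```

The simulation question in the abstract is which of these statistics keeps its nominal error rates when the data are ordinal and bounded like this.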
Shin, Seon-Hi – Practical Assessment, Research & Evaluation, 2009
This study investigated the impact of the coding scheme on IRT-based true score equating under a common-item nonequivalent groups design. Two different coding schemes under investigation were carried out by assigning either a zero or a blank to a missing item response in the equating data. The investigation involved a comparison study using actual…
Descriptors: True Scores, Equated Scores, Item Response Theory, Coding
Strang, Kenneth David – Practical Assessment, Research & Evaluation, 2009
This paper discusses how a seldom-used statistical procedure, recursive regression (RR), can numerically and graphically illustrate data-driven nonlinear relationships and interaction of variables. This routine falls into the family of exploratory techniques, yet a few interesting features make it a valuable complement to factor analysis and…
Descriptors: Multicultural Education, Computer Software, Multiple Regression Analysis, Multidimensional Scaling
Wiberg, Marie; Sundstrom, Anna – Practical Assessment, Research & Evaluation, 2009
A common problem in predictive validity studies in the educational and psychological fields, e.g. in educational and employment selection, is restriction in range of the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
Descriptors: Predictive Validity, Predictor Variables, Correlation, Mathematics
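One classical correction of the kind examined in such studies is Thorndike's Case II formula for direct range restriction, which adjusts the restricted correlation using the ratio of unrestricted to restricted predictor standard deviations (the numbers below are hypothetical, not the paper's data):

```python
def correct_range_restriction(r: float, sd_unrestricted: float, sd_restricted: float) -> float:
    """Thorndike Case II correction for direct range restriction."""
    u = sd_unrestricted / sd_restricted
    return r * u / (1 - r ** 2 + (r * u) ** 2) ** 0.5

# e.g. r = .30 among selected applicants, predictor SD halved by selection
r_corrected = correct_range_restriction(0.30, sd_unrestricted=2.0, sd_restricted=1.0)
```

The correction always moves the estimate away from zero, which is why its usefulness has to be checked against data where the unrestricted correlation is actually known.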

