Showing 1 to 15 of 222 results
Peer reviewed
Wilhelm, Anne Garrison; Gillespie Rouse, Amy; Jones, Francesca – Practical Assessment, Research & Evaluation, 2018
Although inter-rater reliability is an important aspect of using observational instruments, it has received little theoretical attention. In this article, we offer some guidance for practitioners and consumers of classroom observations so that they can make decisions about inter-rater reliability, both for study design and in the reporting of data…
Descriptors: Interrater Reliability, Measurement, Observation, Educational Research
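As a hedged illustration of the agreement indices such guidance typically concerns (the ratings, codes, and function names below are hypothetical, not taken from the article), here is a minimal sketch of percent agreement and Cohen's kappa for two raters:

```python
# Hypothetical sketch (not from the article): two common inter-rater
# reliability indices for two raters coding the same set of lessons.
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters gave the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: probability both raters pick category k independently.
    p_exp = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_obs - p_exp) / (1 - p_exp)

# Example: codes assigned by two observers to ten classroom segments.
rater1 = [3, 2, 3, 1, 2, 3, 3, 2, 1, 3]
rater2 = [3, 2, 2, 1, 2, 3, 3, 3, 1, 3]
print(percent_agreement(rater1, rater2))  # 0.8
print(cohens_kappa(rater1, rater2))       # ~0.68
```

Kappa is often preferred over raw percent agreement because it discounts the agreement two raters would reach by chance alone.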
Peer reviewed
Howard, Matt C. – Practical Assessment, Research & Evaluation, 2018
Scale pretests analyze the suitability of individual scale items for further analysis, whether by judging their face validity, identifying wording concerns, or assessing other aspects. The current article reviews scale pretests, separated by qualitative and quantitative methods, in order to identify the differences, similarities, and even existence of the…
Descriptors: Pretesting, Measures (Individuals), Test Items, Statistical Analysis
Peer reviewed
Nordstokke, David W.; Colp, S. Mitchell – Practical Assessment, Research & Evaluation, 2018
Often, when testing for a shift in location, researchers will use nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often overlooked assumption of nonparametric…
Descriptors: Nonparametric Statistics, Statistical Analysis, Monte Carlo Methods, Sample Size
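To make the contrast concrete (this is an illustrative simulation, not the article's Monte Carlo design; the group sizes and distributions are assumptions), a minimal sketch comparing a parametric and a nonparametric test of location shift on skewed data with unequal spread:

```python
# Illustrative sketch: Welch's t-test versus the Mann-Whitney U test on
# skewed data with unequal group variances, the kind of condition in which
# the tests' assumptions come into play.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=0.0, sigma=0.5, size=40)  # skewed, smaller spread
group_b = rng.lognormal(mean=0.3, sigma=1.0, size=40)  # shifted, larger spread

t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch t-test
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Welch t-test:   p = {t_p:.3f}")
print(f"Mann-Whitney U: p = {u_p:.3f}")
# The Mann-Whitney U test assumes the two distributions have the same shape
# under the null; violating that (e.g., unequal variances) can distort it.
```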
Peer reviewed
He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne – Practical Assessment, Research & Evaluation, 2018
In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
Descriptors: Institutional Research, Regression (Statistics), Statistical Analysis, Data Analysis
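A minimal sketch of the kind of comparison the paper stresses, using synthetic data rather than the authors' institutional dataset (all feature and model settings below are assumptions):

```python
# Hedged sketch with synthetic data: a logistic regression baseline versus a
# tree-based ensemble for a binary, retention-style outcome.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

for name, model in [("logistic regression", logit), ("random forest", forest)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```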
Peer reviewed
Huang, Francis L. – Practical Assessment, Research & Evaluation, 2018
Among econometricians, instrumental variable (IV) estimation is a commonly used technique to estimate the causal effect of a particular variable on a specified outcome. However, among applied researchers in the social sciences, IV estimation may not be well understood. Although there are several IV estimation primers from different fields, most…
Descriptors: Computation, Statistical Analysis, Compliance (Psychology), Randomized Controlled Trials
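As a hedged, primer-style illustration (the simulated variables and effect sizes are assumptions, not the article's example), a minimal two-stage least squares sketch showing how an instrument can recover a causal effect that naive regression misses:

```python
# Minimal two-stage least squares (2SLS) sketch on simulated data. Z is a
# valid instrument: it affects X but is unrelated to the error in the outcome
# equation.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # instrument (e.g., random assignment)
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)         # treatment, endogenous because of u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x is 2.0

def ols(X, y):
    """OLS coefficients with an intercept column added."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(x, y)[1]                                   # biased upward by u
x_hat = np.column_stack([np.ones(n), z]) @ ols(z, x)   # stage 1: predict x from z
iv = ols(x_hat, y)[1]                                  # stage 2: regress y on x_hat

print(f"naive OLS estimate: {naive:.2f}")  # typically well above 2.0
print(f"2SLS estimate:      {iv:.2f}")     # close to 2.0
```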
Peer reviewed
Dogan, Enis – Practical Assessment, Research & Evaluation, 2018
Several large scale assessments include student, teacher, and school background questionnaires. Results from such questionnaires can be reported for each item separately, or as indices based on aggregation of multiple items into a scale. Interpreting scale scores is not always an easy task though. In disseminating results of achievement tests, one…
Descriptors: Rating Scales, Benchmarking, Questionnaires, Achievement Tests
Peer reviewed
Wyse, Adam E. – Practical Assessment, Research & Evaluation, 2018
One common modification to the Angoff standard-setting method is to have panelists round their ratings to the nearest 0.05 or 0.10 instead of 0.01. Several reasons have been offered for why this coarser rounding may make sense. In this article, we examine one reason that has been suggested, which is…
Descriptors: Interrater Reliability, Evaluation Criteria, Scoring Formulas, Achievement Rating
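A small, hypothetical illustration of what rounding to different increments does to Angoff ratings and the resulting cut score (the ratings below are invented, not the article's data):

```python
# Hypothetical illustration: round one panelist's Angoff ratings to the
# nearest 0.01, 0.05, and 0.10 and compare the resulting cut scores.
def round_to(value, increment):
    """Round a rating to the nearest multiple of `increment`."""
    return round(round(value / increment) * increment, 2)

# Ratings: probability that a minimally competent candidate answers each
# item correctly, on a 0-1 scale.
ratings = [0.62, 0.48, 0.71, 0.55, 0.83, 0.67]

for inc in (0.01, 0.05, 0.10):
    rounded = [round_to(r, inc) for r in ratings]
    cut = sum(rounded) / len(rounded)
    print(f"increment {inc:.2f}: ratings {rounded} -> cut score {cut:.3f}")
```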
Peer reviewed
Pearce, Joshua M. – Practical Assessment, Research & Evaluation, 2018
As it provides a firm foundation for advancing knowledge, a solid literature review is a critical feature of any academic investigation. Yet there are several challenges in performing literature reviews, including: (1) lack of access to the literature because of costs, (2) fracturing of the literature into many sources, lack of access and…
Descriptors: Literature Reviews, Computer Software, Open Source Technology, Best Practices
Peer reviewed
Haans, Antal – Practical Assessment, Research & Evaluation, 2018
Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient manner in many statistical software packages. This…
Descriptors: Comparative Analysis, Statistical Analysis, Matrices, Computer Software
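A minimal sketch of a planned contrast computed by hand, assuming three hypothetical groups and contrast weights that sum to zero (this is not the article's worked example, and its software implementation may differ):

```python
# Planned-contrast sketch: test a specific comparison between group means
# using contrast weights, the pooled error variance, and a t statistic.
import numpy as np
from scipy import stats

groups = [np.array([4.1, 3.8, 4.4, 4.0, 3.9]),   # control
          np.array([4.6, 4.9, 4.4, 4.7, 4.8]),   # intervention A
          np.array([5.4, 5.1, 5.6, 5.0, 5.3])]   # intervention B
weights = np.array([-1.0, 0.0, 1.0])             # contrast: B vs. control

means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
df_error = sum(ns) - len(groups)
# Pooled (error) variance, as in a one-way ANOVA.
ms_error = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_error

L = weights @ means                               # contrast estimate
se = np.sqrt(ms_error * np.sum(weights**2 / ns))  # standard error of L
t = L / se
p = 2 * stats.t.sf(abs(t), df_error)
print(f"contrast = {L:.2f}, t({df_error}) = {t:.2f}, p = {p:.4f}")
```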
Peer reviewed
Wyse, Adam E.; Babcock, Ben – Practical Assessment, Research & Evaluation, 2018
Two common approaches for performing job analysis in credentialing programs are committee-based methods, which rely solely on subject matter experts' judgments, and task inventory surveys. This study evaluates how well subject matter experts' perceptions coincide with task inventory survey results for three credentialing programs. Results suggest…
Descriptors: Comparative Analysis, Expertise, Attitudes, Job Analysis
Peer reviewed
Schoepp, Kevin; Danaher, Maurice; Kranov, Ashley Ater – Practical Assessment, Research & Evaluation, 2018
Within higher education, rubric use is expanding. Whereas some years ago the topic of rubrics may have been of interest only to faculty in colleges of education, in recent years the focus on teaching and learning and the emphasis from accrediting bodies have elevated the importance of rubrics across disciplines and different types of assessment.…
Descriptors: Scoring Rubrics, Norms, Higher Education, Methods
Peer reviewed
Jönsson, Anders; Balan, Andreia – Practical Assessment, Research & Evaluation, 2018
Research on teachers' grading has shown that there is great variability among teachers regarding both the process and product of grading, resulting in low comparability and issues of inequality when using grades for selection purposes. Despite this situation, not much is known about the merits or disadvantages of different models for grading. In…
Descriptors: Grading, Models, Reliability, Validity
Peer reviewed
Shear, Benjamin R.; Nordstokke, David W.; Zumbo, Bruno D. – Practical Assessment, Research & Evaluation, 2018
This computer simulation study evaluates the robustness of the nonparametric Levene test of equal variances (Nordstokke & Zumbo, 2010) when sampling from populations with unequal (and unknown) means. Testing for population mean differences when population variances are unknown and possibly unequal is often referred to as the Behrens-Fisher…
Descriptors: Nonparametric Statistics, Computer Simulation, Monte Carlo Methods, Sampling
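A hedged sketch of a rank-based Levene-type procedure in the spirit of the test studied here; the published nonparametric Levene test may differ in detail, and the data below are simulated for illustration only:

```python
# Rank-based Levene-type test of equal variances: pool and rank the data,
# then apply the usual Levene/ANOVA machinery to the ranks.
import numpy as np
from scipy import stats

def rank_based_levene(*groups):
    pooled = np.concatenate(groups)
    ranks = stats.rankdata(pooled)                  # ranks of the pooled sample
    # Split the ranks back into their original groups.
    sizes = np.cumsum([len(g) for g in groups])[:-1]
    ranked_groups = np.split(ranks, sizes)
    # Levene test on ranks: ANOVA on absolute deviations from group means.
    deviations = [np.abs(r - r.mean()) for r in ranked_groups]
    return stats.f_oneway(*deviations)

g1 = np.random.default_rng(2).normal(0, 1, 30)
g2 = np.random.default_rng(3).normal(5, 3, 30)      # unequal mean AND variance
print(rank_based_levene(g1, g2))
```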
Peer reviewed
Barnard, John J. – Practical Assessment, Research & Evaluation, 2018
Measurement specialists strive to shorten assessment time without compromising precision of scores. Computerized Adaptive Testing (CAT) has rapidly gained ground over the past decades to fulfill this goal. However, the parameters for a CAT need to be explored in simulations before implementation so that it can be determined whether…
Descriptors: Computer Assisted Testing, Adaptive Testing, Simulation, Multiple Choice Tests
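A toy illustration of the kind of simulation the abstract refers to, under simple assumptions (Rasch items, maximum-information item selection, grid-based EAP scoring, fixed test length) that are not necessarily those of the article:

```python
# Toy CAT simulation: administer 20 items from a 200-item Rasch pool to one
# simulee, selecting the most informative unused item at each step and
# updating an expected-a-posteriori (EAP) ability estimate after each response.
import numpy as np

rng = np.random.default_rng(0)
item_b = rng.uniform(-2, 2, size=200)         # item difficulties in the pool
theta_true = 0.7                               # simulee's true ability
grid = np.linspace(-4, 4, 81)                  # quadrature grid for EAP
posterior = np.exp(-0.5 * grid**2)             # N(0,1) prior (unnormalized)

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

administered = []
for step in range(20):                         # fixed test length of 20 items
    theta_hat = np.sum(grid * posterior) / np.sum(posterior)
    # Rasch item information is p(1 - p); pick the unused item maximizing it.
    info = p_correct(theta_hat, item_b) * (1 - p_correct(theta_hat, item_b))
    info[administered] = -np.inf
    item = int(np.argmax(info))
    administered.append(item)
    correct = rng.random() < p_correct(theta_true, item_b[item])
    # Bayesian update of the posterior over the ability grid.
    like = p_correct(grid, item_b[item]) if correct else 1 - p_correct(grid, item_b[item])
    posterior *= like

theta_hat = np.sum(grid * posterior) / np.sum(posterior)
print(f"true theta = {theta_true:.2f}, CAT estimate after 20 items = {theta_hat:.2f}")
```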
Peer reviewed
Iwatani, Emi – Practical Assessment, Research & Evaluation, 2018
Education researchers are increasingly interested in applying data mining approaches, but to date, there has been no overarching exposition of their methodological advantages and disadvantages to the field. This is partly because the use of data mining in education research is relatively new, so its value and consequences are not yet well…
Descriptors: Data Analysis, Educational Research, Research Problems, Statistics