Showing 121 to 135 of 161 results
Peer reviewed: Tienken, Christopher; Wilson, Michael – Practical Assessment, Research & Evaluation, 2001
Describes a program used by two New Jersey educators to help teachers understand and use their state's standards and test specifications to improve classroom instruction and raise achievement. Suggests it is important for teachers to understand the entirety of each subject and where state test content fits within each area so that they can align…
Descriptors: Academic Achievement, Administrators, Instructional Improvement, State Programs
Peer reviewed: Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2001
Provides and illustrates a method to compute the expected number of misclassifications of examinees using three-parameter item response theory and two state classifications (mastery or nonmastery). The method uses the standard error and the expected examinee ability distribution. (SLD)
Descriptors: Ability, Classification, Computation, Error of Measurement
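Rudner's computation can be sketched at the ability-estimate level: assuming normally distributed estimation error with the reported standard error, the expected misclassification rate is the error probability averaged over an assumed examinee ability distribution. The cut score, standard error, and quadrature grid below are illustrative values, not figures from the article.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_misclassification(theta_cut, se, thetas, weights):
    """Average, over an assumed true-ability distribution, of the
    probability that a Normal(theta, se) ability estimate lands on
    the wrong side of the mastery cut score."""
    total = 0.0
    for theta, w in zip(thetas, weights):
        p_below = norm_cdf((theta_cut - theta) / se)
        if theta >= theta_cut:
            total += w * p_below           # true master called a nonmaster
        else:
            total += w * (1.0 - p_below)   # true nonmaster called a master
    return total

# Illustrative standard-normal ability grid (hypothetical values)
grid = [-3.0 + 0.1 * i for i in range(61)]
raw = [math.exp(-t * t / 2.0) for t in grid]
weights = [r / sum(raw) for r in raw]
rate = expected_misclassification(theta_cut=0.5, se=0.3, thetas=grid, weights=weights)
```

As expected, shrinking the standard error shrinks the expected misclassification rate, since fewer estimates stray across the cut.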
Peer reviewed: Schafer, William D. – Practical Assessment, Research & Evaluation, 2001
Suggests the routine use of replications in field studies, pointing out that it is usually possible to synthesize replications quantitatively using meta-analysis. Makes the case that this is especially attractive for investigators whose research paradigm choices are limited in the field environment. (SLD)
Descriptors: Educational Research, Field Studies, Meta Analysis, Research Methodology
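The quantitative synthesis the abstract recommends is, in its simplest fixed-effect form, an inverse-variance weighted average of the replicated effect sizes. The effect sizes and variances below are invented for illustration.

```python
def fixed_effect_meta(effects, variances):
    """Fixed-effect meta-analysis: pool replicated effect sizes by
    weighting each with the inverse of its sampling variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Three hypothetical replications of the same field study
pooled, var = fixed_effect_meta([0.30, 0.45, 0.25], [0.02, 0.04, 0.02])
```

Precise replications get larger weights, and the pooled variance is always smaller than any single study's variance, which is the statistical payoff of replicating.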
Peer reviewed: Ding, Cody S. – Practical Assessment, Research & Evaluation, 2001
Outlines an exploratory multidimensional scaling-based approach to profile analysis called Profile Analysis via Multidimensional Scaling (PAMS) (M. Davison, 1994). The PAMS model has the advantages of being easily applied to samples of any size, classifying persons on a continuum, and yielding a person profile index for further hypothesis studies, but…
Descriptors: Classification, Hypothesis Testing, Multidimensional Scaling
Peer reviewed: Stemler, Steve – Practical Assessment, Research & Evaluation, 2001
Provides an overview of content analysis, a powerful data reduction technique. It is a systematic, replicable technique for compressing many words of text into fewer content categories based on explicit rules of coding. The technique extends far beyond simple word frequency counts, but requires careful definition of categories. (SLD)
Descriptors: Coding, Content Analysis, Data Collection, Research Methodology
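The "explicit rules of coding" the abstract emphasises can be made concrete with a small sketch: a coding scheme maps indicator terms to content categories, and the text is reduced to category counts. The categories and indicator terms below are hypothetical, not drawn from the article.

```python
import re
from collections import Counter

# Hypothetical coding scheme: each category is defined by an explicit,
# replicable rule (here, a fixed set of indicator terms).
CODING_RULES = {
    "assessment": {"test", "exam", "rubric", "score"},
    "methodology": {"sample", "survey", "reliability"},
}

def code_text(text, rules):
    """Reduce free text to category counts using explicit coding rules,
    going beyond a raw word-frequency count."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for tok in tokens:
        for category, terms in rules.items():
            if tok in terms:
                counts[category] += 1
    return counts

counts = code_text("The rubric and the survey improved each test score.", CODING_RULES)
```

Because the rules are explicit, a second coder (or a rerun of the script) reproduces the same category counts, which is the replicability the technique is valued for.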
Peer reviewed: Simon, Marielle; Forgette-Giroux, Renee – Practical Assessment, Research & Evaluation, 2001
Presents a generic rubric to assess postsecondary academic skills, describes its preliminary application in a university setting, and discusses related issues from a research point of view. The rubric was used with four graduate and two undergraduate classes (n=approximately 100 students). Interrater and intrarater aspects of reliability were…
Descriptors: Academic Achievement, College Students, Higher Education, Reliability
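Interrater reliability of a rubric of this kind is commonly summarized with a chance-corrected agreement index such as Cohen's kappa; a minimal sketch follows, with invented ratings rather than the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters who assign
    rubric levels to the same set of responses."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance from each rater's marginal distribution
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical rubric levels assigned by two raters to six responses
kappa = cohens_kappa([3, 4, 2, 4, 3, 2], [3, 4, 2, 3, 3, 2])
```

Kappa discounts agreement that two raters would reach by chance alone, so it is a sterner summary than simple percent agreement.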
Peer reviewed: Solomon, David J. – Practical Assessment, Research & Evaluation, 2001
Discusses the use of Web-based surveys and identifies several factors that influence data quality. Suggests that Web-based surveying, although very attractive, should be used with caution at present because of coverage bias, the bias that arises when sampled people do not have access to the Internet. (SLD)
Descriptors: Research Methodology, Statistical Bias, Surveys, World Wide Web
Peer reviewed: Cassady, Jerrell C. – Practical Assessment, Research & Evaluation, 2001
Studied the stability of test anxiety over time by examining the level of reported cognitive test anxiety at three points in an academic semester. Results for 64 undergraduates show that it is practical to collect test anxiety data at times other than when a test is being completed. It does not seem necessary to collect test anxiety data prior to…
Descriptors: Cognitive Tests, Data Collection, Higher Education, Reliability
Peer reviewed: La Marca, Paul M. – Practical Assessment, Research & Evaluation, 2001
Provides an overview of the concept of alignment and the role it plays in assessment and accountability systems. Discusses some methodological issues affecting the study of alignment and explores the relationship between alignment and test score interpretation. Alignment is not only a methodological requirement but also an ethical requirement.…
Descriptors: Accountability, Educational Assessment, Elementary Secondary Education, Ethics
Peer reviewed: Taub, Gordon E. – Practical Assessment, Research & Evaluation, 2001
Investigated the construct validity of the implied and theoretical structures of the Wechsler Adult Intelligence Scale-III (WAIS-III). Results using the standardization sample of 2,450 adults and adolescents indicate that the WAIS-III provides an excellent measure of the four-factor model and a general factor, but they do not support the construct…
Descriptors: Adolescents, Adults, Construct Validity, Factor Structure
Peer reviewed: Mullane, Jennifer; McKelvie, Stuart J. – Practical Assessment, Research & Evaluation, 2001
Canadian postsecondary students (n=133) with moderate second-language competence took the Wonderlic Personnel Test with or without the standard time limit in English or French. Findings suggest that time accommodation can be applied to clients who are taking an intelligence test in their second language. (SLD)
Descriptors: College Students, English, Foreign Countries, French
Peer reviewed: Kellow, J. Thomas; Willson, Victor L. – Practical Assessment, Research & Evaluation, 2001
Explores the consequence of failing to incorporate measurement error in the development of cut scores in criterion-referenced measures, using the example of Texas and the Texas Assessment of Academic Skills to illustrate the impact of measurement error on false negative decisions. Findings support those of W. Haney (2000). (SLD)
Descriptors: Criterion Referenced Tests, Cutting Scores, Decision Making, Error of Measurement
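The mechanism behind such false negatives can be sketched with a simple normal error model: an examinee whose true score meets the cut can still be observed below it whenever the standard error of measurement (SEM) is non-negligible. The scores and SEM below are illustrative, not values from the Texas data.

```python
import math

def false_negative_prob(true_score, cut_score, sem):
    """Probability that an examinee whose true score is at or above
    the cut is observed below it, assuming the observed score is
    Normal(true_score, sem)."""
    z = (cut_score - true_score) / sem
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A hypothetical examinee 2 points above the cut, with an SEM of 3 points
p = false_negative_prob(true_score=72.0, cut_score=70.0, sem=3.0)
```

Even this modest SEM leaves a sizeable false-negative probability for examinees near the cut, which is why ignoring measurement error when setting cut scores carries real consequences.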
Peer reviewed: Mertler, Craig A. – Practical Assessment, Research & Evaluation, 2001
Provides advice on designing scoring rubrics, rating scales as opposed to checklists, for use with performance assessments. Discusses holistic and analytic rubrics, and lists the steps in rubric development. (SLD)
Descriptors: Holistic Evaluation, Performance Based Assessment, Rating Scales, Scoring Rubrics
Peer reviewed: Rudner, Lawrence; Gagne, Phill – Practical Assessment, Research & Evaluation, 2001
Describes the three most promising approaches to essay scoring by computer: (1) Project Essay Grade (PEG; E. Page, 1966); (2) Intelligent Essay Assessor (IEA; T. Landauer, 1997); and (3) E-rater (J. Burstein, Educational Testing Service). All of these proprietary systems return grades that correlate meaningfully with those of human raters. (SLD)
Descriptors: Computer Uses in Education, Correlation, Essays, Scoring
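The agreement these systems report is typically expressed as a Pearson correlation between machine-assigned and human-assigned grades; computing it is straightforward. The essay scores below are invented for illustration, not results from PEG, IEA, or E-rater.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical essay grades: one human rater vs. an automated system
human = [3, 4, 2, 5, 4, 3]
machine = [3.2, 4.1, 2.5, 4.8, 3.9, 2.8]
r = pearson_r(human, machine)
```

A useful benchmark is that two trained human raters also correlate imperfectly with each other, so an automated system is usually judged against human-human agreement rather than against perfection.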
Peer reviewed: Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2000
Introduces the problem of hierarchical, or nested, data structures and discusses how this problem is dealt with effectively. Provides examples of the pitfalls of not doing appropriate analyses. New software is making hierarchical linear modeling more user-friendly and accessible. (SLD)
Descriptors: Computer Software, Research Methodology
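A quick diagnostic for the nesting problem the abstract describes is the intraclass correlation: the share of score variance that lies between groups (classrooms, schools) rather than within them. A one-way ANOVA estimate of ICC(1) is sketched below with invented balanced data; a non-trivial ICC signals that a single-level analysis would understate standard errors.

```python
def intraclass_correlation(groups):
    """One-way ANOVA estimate of ICC(1) for balanced groups: the
    proportion of total variance attributable to group membership."""
    k = len(groups)
    n = len(groups[0])  # assumes every group has the same size
    grand = sum(sum(g) for g in groups) / (k * n)
    ss_between = n * sum((sum(g) / n - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / n) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# Hypothetical scores from two classrooms with very different means
icc = intraclass_correlation([[1, 2, 3], [7, 8, 9]])
```

When group means differ sharply, as here, the ICC approaches 1 and a hierarchical (multilevel) model is called for; when groups are interchangeable, the estimate falls to zero or below and single-level analysis is defensible.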


