Showing 1 to 15 of 20 results
Peer reviewed
Teye, Amanda Cleveland; Peaslee, Liliokanaio – Child & Youth Care Forum, 2015
Background: Youth programs often rely on self-reported data without clear evidence as to the accuracy of these reports. Although the validity of self-reporting has been confirmed among some high school and college-age students, a serious investigation among younger children is absent from the extant literature. Moreover, there is…
Descriptors: Youth Programs, Young Children, Student Evaluation, Outcomes of Education
Peer reviewed
Boller, Kimberly; Kisker, Ellen Eliason – Regional Educational Laboratory, 2014
This guide is designed to help researchers make sure that their research reports include enough information about study measures so that readers can assess the quality of the study's methods and results. The guide also provides examples of write-ups about measures and suggests resources for learning more about these topics. The guide assumes…
Descriptors: Research Reports, Research Methodology, Educational Research, Check Lists
University of Chicago Consortium on Chicago School Research, 2014
Districts now have access to a wealth of new information that can help target students with appropriate supports and bring focus and coherence to college readiness efforts. However, the abundance of data has brought its own challenges. Schools and school systems are often overwhelmed with the amount of data available. The capacity of districts to…
Descriptors: College Readiness, Educational Indicators, College Preparation, School Districts
Peer reviewed
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Peer reviewed
Padgett, Ryan D.; Salisbury, Mark H.; An, Brian P.; Pascarella, Ernest T. – New Directions for Institutional Research, 2010
The sophisticated analytical techniques available to institutional researchers give them an array of procedures to estimate a causal effect using observational data. But as many quantitative researchers have discovered, access to a wider selection of statistical tools does not necessarily ensure construction of a better analytical model. Moreover,…
Descriptors: Institutional Research, Researchers, Statistical Analysis, Models
Peer reviewed
Kotz, Kasey M.; Watkins, Marley W.; McDermott, Paul A. – School Psychology Review, 2008
Some researchers have argued that discrepant broad index scores invalidate IQs, but others have questioned the fundamental logic of that argument. To resolve this debate, the present study used a nationally representative sample of children (N = 1,200) who were matched individually for IQ. Children with significantly uneven broad index score…
Descriptors: Validity, Scores, Measures (Individuals), Intelligence
Peer reviewed
Richardson, John T. E. – Educational Research Review, 2007
Characterising the relationship between participants' scores on two different questionnaires is a common problem in educational research. The complement of the statistic known as Wilks' lambda measures the amount of variance shared between the scores obtained by the same group of participants on two sets of variables. (1 - lambda) is symmetric, in…
Descriptors: Evaluation Research, Educational Research, Questionnaires, Scores
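The statistic the article discusses can be sketched numerically: Wilks' lambda is the product of (1 - r_i^2) over the canonical correlations r_i between two sets of scores, so its complement (1 - lambda) measures their shared variance and is symmetric in the two sets. A minimal sketch (the function name and the synthetic data below are illustrative, not from the article):

```python
import numpy as np

def shared_variance(X, Y):
    """(1 - Wilks' lambda): variance shared between two sets of scores.

    Wilks' lambda is the product of (1 - r_i^2) over the canonical
    correlations r_i, so its complement is symmetric in X and Y.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy.
    r = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    r = np.clip(r, 0.0, 1.0)
    lam = np.prod(1.0 - r**2)  # Wilks' lambda
    return 1.0 - lam
```

In the one-variable-per-set case this reduces to the squared Pearson correlation, which is a convenient sanity check.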
Gordon, Howard R. D. – 2001
A random sample of 113 members of the American Vocational Education Research Association (AVERA) was surveyed to obtain baseline information regarding AVERA members' perceptions of statistical significance tests. The Psychometrics Group Instrument was used to collect data from participants. Of those surveyed, 67% were male, 93% had earned a…
Descriptors: Educational Research, Postsecondary Education, Predictor Variables, Research Methodology
Lord, Frederic M.; Wingersky, Marilyn S. – 1983
Two methods of 'equating' tests using item response theory (IRT) are compared, one using true scores, the other using the estimated distribution of observed scores. On the data studied, they yield almost indistinguishable results. This is a reassuring result for users of IRT equating methods. (Author)
Descriptors: Comparative Analysis, Equated Scores, Estimation (Mathematics), Latent Trait Theory
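The true-score method compared above can be illustrated in miniature. The abstract does not specify the IRT model, so this sketch uses the Rasch model with made-up item difficulties purely to show the mechanics: find the ability at which form X's test characteristic curve equals the raw score, then read form Y's characteristic curve at that ability.

```python
import math

def tcc(theta, difficulties):
    # Test characteristic curve under the Rasch model:
    # the expected raw score at ability theta.
    return sum(1 / (1 + math.exp(-(theta - b))) for b in difficulties)

def true_score_equate(x, form_x, form_y):
    # Solve TCC_X(theta) = x by bisection (TCC is increasing in theta),
    # then return the equated true score on form Y at that theta.
    lo, hi = -8.0, 8.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if tcc(mid, form_x) < x:
            lo = mid
        else:
            hi = mid
    return tcc((lo + hi) / 2, form_y)
```

Equating a form to itself returns the original score, and equating to an easier form yields a higher true score, which matches the intended behavior.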
Institute for Independent Education, Inc., Washington, DC. – 1990
Median test scores on the Comprehensive Tests of Basic Skills (CTBS) for many District of Columbia public schools declined substantially in 1990, although this decline was not evident in reports from school officials. These declines occurred at elementary, junior high, and senior high school levels, in grades 6, 9, and 11, respectively. They also…
Descriptors: Achievement Tests, Basic Skills, Ethnic Distribution, Scores
Achilles, C. M.; DuVall, Lloyd – 1983
Pupils at three elementary schools and one junior high school among Area 1 inner-city schools of St. Louis, Missouri, were identified as scoring well below norms on standardized tests. An effort to change the educational programs at these schools netted financial support and the creation of Project SHAL (named for the schools involved). Inservice…
Descriptors: Academic Achievement, Comparative Analysis, Comparative Testing, Educational Change
Carroll, C. Dennis – 1984
This paper compares five growth measures using the High School and Beyond database. The measures are: (1) simple gain (posttest minus pretest); (2) difference between group means; (3) percentage of students who scored higher on the posttest than the pretest; (4) percentage of items missed on the pretest which are subsequently answered correctly on…
Descriptors: Achievement Gains, Achievement Tests, Pretests Posttests, Program Effectiveness
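The first three growth measures enumerated in the abstract are simple to compute from matched pretest/posttest scores. A minimal sketch with invented scores (the fourth measure is truncated in the abstract and omitted here):

```python
def simple_gain(pre, post):
    # Measure (1): posttest minus pretest, per student.
    return [b - a for a, b in zip(pre, post)]

def mean_group_difference(pre, post):
    # Measure (2): difference between group means.
    return sum(post) / len(post) - sum(pre) / len(pre)

def pct_improved(pre, post):
    # Measure (3): percentage of students scoring higher on the posttest.
    return 100.0 * sum(b > a for a, b in zip(pre, post)) / len(pre)
```

These measures can disagree: a few large gains can raise the mean difference even when most students do not improve, which is one reason comparisons like Carroll's are useful.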
Jarjoura, David – 1983
Issues regarding confidence and tolerance intervals are discussed within the context of educational measurement. Conceptual distinctions are drawn between these two types of intervals; and examples, under various error and true score models, are used to compare such intervals. It is shown that there tend to be only small differences in tolerance…
Descriptors: Educational Testing, Measurement Techniques, Models, Scores
Spray, Judith A.; Welch, Catherine J. – 1986
The purpose of this study was to examine the effect that large within-examinee item difficulty variability had on estimates of the proportion of consistent classification of examinees into mastery categories over two test administrations. The classification consistency estimate was based on a single test administration from an estimation procedure…
Descriptors: Adults, Difficulty Level, Estimation (Mathematics), Mathematical Models
Sarvela, Paul D. – 1986
Four discrimination indices were compared, using score distributions which were normal, bimodal, and negatively skewed. The score distributions were systematically varied to represent the common circumstances of a military training situation using criterion-referenced mastery tests. Three 20-item tests were administered to 110 simulated subjects.…
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Analysis, Mastery Tests
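One classic index compared in studies like this is the upper-lower discrimination index: the proportion of high scorers answering an item correctly minus the proportion of low scorers doing so. A minimal sketch (the group fraction and the toy data below are illustrative; the abstract does not name the specific four indices):

```python
def discrimination_index(item_correct, total_scores, frac=0.27):
    # Upper-lower index: p(correct | top group) - p(correct | bottom group),
    # with groups formed from the top and bottom `frac` of total scores.
    n = len(total_scores)
    k = max(1, int(frac * n))
    order = sorted(range(n), key=lambda i: total_scores[i])
    lower, upper = order[:k], order[-k:]
    p_up = sum(item_correct[i] for i in upper) / k
    p_lo = sum(item_correct[i] for i in lower) / k
    return p_up - p_lo
```

An item answered correctly only by the top scorers gets an index near 1.0; one that high and low scorers answer equally often gets an index near 0, which is why skewed or bimodal score distributions, as in the study, can change how the indices rank items.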