ERIC Number: ED371036
Record Type: RIE
Publication Date: 1994-Apr
Pages: 23
Abstractor: N/A
Reference Count: N/A
Bias vs. Precision: Combining Estimates in Multisite Evaluation Research.
Bernstein, Lawrence; Burstein, Nancy
The inherent methodological problem in conducting research at multiple sites is how best to derive an overall estimate of program impact across sites, "best" being the estimate that minimizes the mean square error, that is, the expected squared difference between the estimated and true values. An empirical example illustrates the use of the following five models with data from the Comprehensive Child Development Program (CCDP), a 5-year national demonstration program implemented in 21 sites: (1) pooled data; (2) unweighted average; (3) weighted average; (4) hierarchical linear model with random effects; and (5) hierarchical linear model with fixed effects. Most striking is the similarity of results across all five models. In this particular example, the choice of model would not alter the conclusion that participation in the CCDP raised children's scores a given amount, although other outcomes might be more sensitive to the choice. By informing the analysis strategy with the sampling design employed, one can better justify the conclusions drawn regarding the efficacy of a particular program intervention. Two tables present analysis results. (Contains 13 references.) (SLD)
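The combining strategies named in the abstract can be sketched in a few lines. This is a minimal illustration with hypothetical numbers (the site estimates, standard errors, and sample sizes below are invented, not CCDP data), contrasting the unweighted average, a precision-weighted average, and a sample-size-weighted approximation to pooling:

```python
# Hypothetical per-site impact estimates, standard errors, and sample
# sizes -- invented for illustration, not taken from the CCDP study.
site_estimates = [4.2, 3.1, 5.0, 2.7]
standard_errors = [1.0, 0.8, 1.5, 0.6]
sample_sizes = [120, 200, 80, 250]

# (2) Unweighted average: every site counts equally regardless of size
# or precision.
unweighted = sum(site_estimates) / len(site_estimates)

# (3) Precision-weighted average: weight each site by 1/SE^2; among
# linear combinations of unbiased site estimates, these weights
# minimize the variance of the combined estimate.
weights = [1 / se**2 for se in standard_errors]
weighted = sum(w * b for w, b in zip(weights, site_estimates)) / sum(weights)

# (1) Pooling the raw data is approximated here by weighting each
# site's estimate by its sample size, so larger sites dominate.
pooled_approx = (
    sum(n * b for n, b in zip(sample_sizes, site_estimates))
    / sum(sample_sizes)
)
```

The hierarchical linear models (4) and (5) go further by modeling between-site variation explicitly, which shrinks extreme site estimates toward the overall mean; that step is beyond this sketch.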
Publication Type: Reports - Evaluative; Speeches/Meeting Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers: Comprehensive Child Development Program (ACYF); Hierarchical Linear Modeling; Multiple Site Studies; Precision (Mathematics); Weighting (Statistical)
Note: Paper presented at the Annual Meeting of the American Educational Research Association (New Orleans, LA, April 4-8, 1994).