Peer reviewed
ERIC Number: EJ1078644
Record Type: Journal
Publication Date: 2014
Pages: 13
Abstractor: As Provided
ISSN: 0737-5328
Evaluation of an Online Mentoring Program
Sherman, Sharon; Camilli, Gregory
Teacher Education Quarterly, v41 n2 p107-119 Spr 2014
This article describes the evaluation of an online mentoring program for preparing preservice elementary teachers at a small liberal arts college. An intervention was created to investigate the effects of online mentoring with preservice teachers, where mentoring is defined as a reciprocal relationship formed between an experienced teacher and a novice. This relationship is designed to provide ongoing support, advice, and feedback during the transition into the teaching profession (Andrews & Martin, 2003; Haney, 1997). According to Lloyd, Wood, & Moreno (2000), policymakers in many states mandate or recommend mentoring for novice teachers during their first year of service. Such programs can have a positive effect on both novice and experienced teachers and can lead to greater retention (Boreen, Johnson, Niday, & Potts, 2000), especially if mentors are selected on the basis of a set of competencies and trained to develop the specific skills needed to support students (Brown & Kysilka, 2005; Haney, 1997). A secondary purpose of the article is to describe an efficient procedure for collecting and scoring rubric-based instruments, because scoring performance outcomes is costly in both time and money, even in small-scale studies. Observational or rating-scale data are required in many educational settings, and most instruments used in teacher evaluation systems require ratings of observational data. The procedures described in this paper illustrate a coherent process for designing an instrument and collecting data in the framework of a comparative study; the same procedures, however, could be applied to a single group of ratings. Three important aspects of this procedure are (1) how raters can be incentivized and trained, (2) how to make the most of the limited availability of raters, and (3) how measurement information concerning the validity of a set of ratings can be obtained.
In this paper, the general logistics are described; technical details are provided in a companion paper (Camilli & Sherman, 2013).
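The article defers its technical details to the companion paper, so as a generic illustration only: one common kind of measurement information about a set of ratings is chance-corrected inter-rater agreement, such as Cohen's kappa between two raters scoring the same artifacts on a rubric. The sketch below is not the authors' procedure; the function and the rubric scores are hypothetical, invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal distribution.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores (1-3) from two raters on eight artifacts.
scores_a = [3, 2, 3, 1, 2, 3, 2, 1]
scores_b = [3, 2, 2, 1, 2, 3, 1, 1]
print(round(cohens_kappa(scores_a, scores_b), 3))  # prints 0.628
```

A kappa near 1 indicates strong agreement beyond chance; values near 0 suggest the ratings carry little shared signal, which would undercut any validity claim built on them.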
Caddo Gap Press, 3145 Geary Boulevard PMB 275, San Francisco, CA 94118. Tel: 415-666-3012; Fax: 415-666-3552.
Publication Type: Journal Articles; Reports - Research
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A