Peer reviewed
ERIC Number: EJ979136
Record Type: Journal
Publication Date: 2012
Pages: 15
Abstractor: As Provided
ISBN: N/A
ISSN: ISSN-1547-9714
EISSN: N/A
A Standardized Rubric for Evaluating Webquest Design: Reliability Analysis of ZUNAL Webquest Design Rubric
Unal, Zafer; Bodur, Yasar; Unal, Aslihan
Journal of Information Technology Education: Research, v11 p169-183 2012
Current literature provides many examples of rubrics used to evaluate the quality of webquest designs; however, the reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to draw on the strengths of currently available rubrics and was improved based on comments in the literature and feedback received from educators. The ZUNAL webquest design rubric was developed in three stages. First, a large set of rubric items was generated based on operational definitions and the existing literature on currently available webquest rubrics (version 1). This step included item selections from the three most widely used rubrics, created by Bellofatto, Bohl, Casey, Krill & Dodge (2001), March (2004), and eMints (2006). Second, students (n = 15) enrolled in a graduate course titled "Technology and Data" were asked to assess the clarity of each rubric item on a four-point scale ranging from (1) "not at all" to (4) "very well/very clear." This scale was used only during the construction of the ZUNAL rubric and was therefore not part of the analyses presented in this study. The students were also asked to supply written feedback on items that were either unclear or unrelated to the constructs, and items were revised accordingly (version 2). Finally, K-12 classroom teachers (n = 23) who create and implement webquests in their classrooms were invited to complete a survey asking them to rate the rubric elements for value and clarity. Items were again revised based on this feedback.
At the conclusion of this three-step process, the webquest design rubric comprised nine main indicators with 23 items underlying the proposed webquest rubric constructs: title (4 items), introduction (1 item), task (2 items), process (3 items), resources (3 items), evaluation (2 items), conclusion (2 items), teacher page (2 items), and overall design (4 items). A three-point response scale of "unacceptable", "acceptable", and "target" was used. After the rubric was created, the twenty-three participants were given a week to evaluate three pre-selected webquests of varying quality using the latest version of the rubric. A month later, the evaluators were asked to re-evaluate the same webquests. To investigate the internal consistency and intrarater (test-retest) reliability of the ZUNAL webquest design rubric, a series of statistical procedures was employed. These analyses pointed to the rubric's acceptable reliability. It is reasonable to expect that the consistency observed in the rubric scores was due to the comprehensiveness of the rubric and the clarity of its items and descriptors. Because no existing studies focus on the reliability of webquest design rubrics, the researchers were unable to make comparisons to discuss the merits of the ZUNAL rubric in relation to others at this point. (Contains 4 tables and 1 figure.)
Informing Science Institute. 131 Brookhill Court, Santa Rosa, CA 95409. Tel: 707-531-4925; Fax: 480-247-5724; e-mail: contactus@informingscience.org; Web site: http://www.informingscience.us/icarus/journals/jiteresearch
Publication Type: Journal Articles; Reports - Research
Education Level: Elementary Secondary Education; Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A