ERIC Number: ED453285
Record Type: RIE
Publication Date: 2001-Apr-13
Reference Count: N/A
Olae: A Bayesian Performance Assessment for Complex Problem Solving.
Olae is a computer system for assessing student knowledge of physics, and of Newtonian mechanics in particular, using performance data collected while students solve complex problems. Although originally designed as a stand-alone system, it has also been used as part of the Andes intelligent tutoring system. Like many other performance assessment systems, Olae compares a student's problem-solving behavior step by step with the behavior of an ideal model solving the same problem. The main feature of Olae is its use of Bayesian networks to assign credit to pieces of knowledge when the student makes a correct step and blame to knowledge pieces when the student makes an incorrect step. This paper introduces the basic principles of Olae and illustrates how it solves the classic credit-and-blame assignment problem in a way that is not only mathematically sound but also intuitively satisfying. The paper then reviews a series of evaluations that measured the reliability of Olae, its validity, and its sensitivity to its parameters. The paper synthesizes results across these studies to support the conclusion that, although Olae is a viable solution to the problem of complex performance assessment, all model-based assessments have two fundamental inadequacies that cause them to lose important data: students may stop following expected solution paths, or may refuse to enter their intermediate results into the computer. These inadequacies harm their accuracy no matter what method of data analysis is used. In fact, human assessors, who were used as the gold standard when evaluating Olae, are equally hampered by these inadequacies. A promising solution appears to be to use tutoring systems to do assessment. (Contains 3 figures and 19 references.) (Author/SLD)
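The credit-and-blame idea the abstract describes can be sketched in miniature. The following is a hypothetical single-node Bayesian update, not Olae's actual network or parameters: observing a correct step raises the estimated probability that the student has mastered the relevant piece of knowledge, and an incorrect step lowers it. The function name and the slip/guess probabilities are illustrative assumptions.

```python
# Illustrative sketch only (not Olae's actual model): Bayes' rule applied
# to one knowledge piece. `slip` is the assumed chance of erring despite
# mastery; `guess` is the assumed chance of a correct step without mastery.

def update_mastery(prior, correct, slip=0.1, guess=0.2):
    """Return posterior P(knows rule | observed step)."""
    if correct:
        likelihood_knows = 1.0 - slip   # mastered the rule and did not slip
        likelihood_not = guess          # guessed the step without mastery
    else:
        likelihood_knows = slip         # mastered the rule but slipped
        likelihood_not = 1.0 - guess    # lacked mastery and failed to guess
    numerator = likelihood_knows * prior
    denominator = numerator + likelihood_not * (1.0 - prior)
    return numerator / denominator

belief = 0.5
belief = update_mastery(belief, correct=True)    # credit: belief rises
belief = update_mastery(belief, correct=False)   # blame: belief falls
```

A full network, as the abstract indicates, would link many such knowledge nodes to many observed steps, so one observation can spread credit or blame across several pieces of knowledge at once.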
Publication Type: Information Analyses; Reports - Descriptive; Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Note: Paper presented at the Annual Meeting of the National Council on Measurement in Education (Seattle, WA, April 11-13, 2001).