ERIC Number: EJ791171
Record Type: Journal
Publication Date: 2007
Reference Count: 0
Answering the Complex Question of "How Good Is Good Enough?"
Assessment Update, v19 n4 p1-2, 12-13 Jul-Aug 2007
Imagine that a student--let's call him Michael--has earned a score of 55 on an examination. How well did he do? Was his score good enough for him to pass the examination, or pass the course, or be deemed at least minimally competent in whatever the exam was assessing? Michael's score alone, in the absence of any other information, cannot answer these questions. Indeed, it indicates nothing about how well he did. To determine whether his score was "good enough," it must be compared against something else, another number that has been variously called a benchmark, standard, target, criterion, or "brightline" for minimally acceptable performance. Understanding how well Michael did is a complex question because there are so many different "something elses" out there, each answering a different question and each with its own strengths and shortcomings. A score of 55 could indicate either a passing or a failing grade, depending on whether the exam is measured against a local standards-based benchmark or an external standards-based benchmark. Peer-referenced benchmarks, which compare results against those of peer institutions, exhibit several shortcomings, according to the author: (1) the difficulty of identifying appropriate peers and collecting comparable information from them; and (2) the possibility of misinterpretation. A variation of the peer-referenced perspective, called a "best practice benchmark," asks the question, "How does our institution compare to the 'best' of its peers?" This approach is exemplified by institutions that study the top-rated schools in "U.S. News & World Report" rankings and strategize how to emulate them. Applied to student learning, the best practice approach is appropriate only if there is a strong, pervasive commitment to improving teaching and learning. A "value-added benchmark" measures a student's improvement over previous exam scores, but this approach is feasible only when such historical data are available.
Another approach, the "strengths and weaknesses benchmark," may address the question of a student's relative strengths and areas for improvement, but, again, the shortcoming of this approach is that on many assessments, comparable subscores are unavailable. A "productivity benchmark," calculated by dividing the cost of instruction by the number of credit hours generated, can answer the question of whether an institution is getting the most from its investment, but it can also draw the focus away from educational effectiveness: some policymakers might be tempted to favor the lowest-cost approach, even if the quality of the educational experience suffers. The author concludes that, in essence, the benchmarks described in this essay can be thought of as a set of lenses through which assessment results can be viewed. While each benchmark or lens offers a view of student learning, the view through each lens is incomplete, because each looks at the object from only one angle, and somewhat distorted, because no assessment tool or strategy is completely accurate. Viewing assessment results through any one benchmark thus gives an incomplete and somewhat distorted perception of student learning. Examining student learning through multiple lenses provides the best chance of answering the complex question of what students have learned and whether their level of learning is good enough.
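The productivity-benchmark calculation described in the abstract (cost of instruction divided by credit hours generated) can be sketched as a small illustration. The department names and dollar figures below are hypothetical assumptions, not data from the article:

```python
def cost_per_credit_hour(instructional_cost, credit_hours):
    """Productivity benchmark: instructional cost divided by credit hours generated."""
    if credit_hours <= 0:
        raise ValueError("credit hours generated must be positive")
    return instructional_cost / credit_hours

# Hypothetical departments: (cost of instruction, credit hours generated)
departments = {
    "History": (1_200_000, 9_600),
    "Chemistry": (2_500_000, 12_500),
}

for name, (cost, hours) in departments.items():
    # A lower figure means more credit hours produced per dollar spent --
    # but, as the author cautions, it says nothing about educational quality.
    print(f"{name}: ${cost_per_credit_hour(cost, hours):.2f} per credit hour")
```

As the abstract warns, a purely cost-driven reading of such figures can reward the cheapest approach rather than the most effective one.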
Descriptors: Educational Experience, Teaching Methods, Academic Achievement, Instructional Effectiveness, Scores
Jossey Bass. Available from John Wiley & Sons, Inc. 111 River Street, Hoboken, NJ 07030-5774. Tel: 800-825-7550; Tel: 201-748-6645; Fax: 201-748-6021; e-mail: email@example.com; Web site: http://www3.interscience.wiley.com/cgi-bin/jhome/86511121
Publication Type: Journal Articles; Reports - Descriptive
Education Level: N/A
Authoring Institution: N/A