ERIC Number: ED556143
Record Type: Non-Journal
Publication Date: 2013
Pages: 107
Abstractor: As Provided
Reference Count: N/A
ISBN: 978-1-3035-4796-6
ISSN: N/A
Automated Assessment of Reviews
Ramachandran, Lakshmi
ProQuest LLC, Ph.D. Dissertation, North Carolina State University
Relevance identifies the extent to which a review's content pertains to that of the submission; the relevance metric helps distinguish generic or vague reviews from useful ones. The relevance of a review to a submission can be determined by identifying semantic and syntactic similarities between them. Our work introduces a word-order graph representation, in which vertices, edges, and double edges (two contiguous edges) capture sentence-structure information. Our matching technique exploits contextual similarities to determine relevance across texts, using a WordNet-based relatedness metric. During graph matching, single and contiguous edges are compared in both the same and different orders, to identify possible paraphrases that involve word-order shuffling.

Review content identifies the type of content a review contains. In this work we focus on three content types: "summative" (containing a summary or praise), "problem detection" (identifying problems in the author's work), and "advisory" (providing suggestions for improvement). A review may contain each of these content types to varying degrees. A graph-based pattern-identification technique determines the types of content a review contains: patterns are extracted from reviews representing each content type using a cohesion-detection technique, and edges that are most semantically similar to other edges in a graph are selected as patterns. These edge patterns may then be used to identify the extent to which a review contains each type of content.

Reviews must be thorough in discussing a submission's content; at times a review may be based on just one section of a document, say the Introduction. Review coverage is the extent to which a review covers the "important topics" in a document. We study the coverage of a submission by a review using an agglomerative clustering technique that groups the submission's sentences into topic clusters.
Topic sentences from these clusters are used to calculate review coverage in terms of the degree of overlap between the review and the submission's topic sentences.

Review tone identifies whether a reviewer has used positive or negative words in the review, or has provided an objective assessment of the author's work. While a positive or an objective assessment may be well received by the author, harsh or offensive words or phrases may disincline the author from using the feedback to improve the work. A review's tone is determined by its semantic orientation, i.e., the presence or absence of positively or negatively oriented words.

Review quantity is the number of unique tokens a review contains. The purpose of this metric is to encourage reviewers to write more feedback, since feedback with specific examples and additional explanation may be more useful to the author.

Plagiarism is an important metric because reviewers who are evaluated on the quality of their reviews may try to game the automated system to obtain higher ratings. We check for plagiarism by comparing a review's text with text from the submission and with text from the Internet, to ensure that the reviewer has not copy-pasted text to make the review seem relevant.

The relevance, content, and coverage identification approaches have been evaluated on data from Expertiza, a collaborative web-based learning application. Our experimental results indicate that the word-order-based relevance identification technique achieves an f-measure of 0.67. In another experiment, the pattern-based content-type identification approach achieved an f-measure of 0.74, higher than the performance of text-categorization techniques such as multiclass support vector machines and logistic regression. Our coverage-analysis experiment shows a correlation of 0.51 between system-generated and human-provided coverage values.
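The word-order graph representation and edge-level matching described above can be sketched roughly as follows. This is a minimal illustration, not the dissertation's implementation: tokenization is naive whitespace splitting, and the `related` function is a stand-in for the WordNet-based relatedness metric (here simply token equality). Comparing each edge in both the same and reversed order is what lets paraphrases with shuffled word order still match.

```python
def word_order_edges(text):
    """Build single edges (ordered token pairs) and double edges
    (two contiguous edges, i.e. ordered token triples) from a sentence.
    Vertices are tokens; edges encode word order."""
    tokens = text.lower().split()
    singles = list(zip(tokens, tokens[1:]))
    doubles = list(zip(tokens, tokens[1:], tokens[2:]))
    return singles, doubles

def related(a, b):
    """Stand-in for the WordNet-based relatedness metric: exact
    token equality scored as 0 or 1."""
    return 1.0 if a == b else 0.0

def edge_match(e1, e2):
    """Compare two edges in the same and reversed order, so that
    word-order shuffling between paraphrases is still detected."""
    same = sum(related(a, b) for a, b in zip(e1, e2)) / len(e1)
    rev = sum(related(a, b) for a, b in zip(e1, reversed(e2))) / len(e1)
    return max(same, rev)

def relevance(review, submission):
    """Average best edge-to-edge match from the review's graph
    against the submission's graph (single edges only, for brevity)."""
    r_edges, _ = word_order_edges(review)
    s_edges, _ = word_order_edges(submission)
    if not r_edges or not s_edges:
        return 0.0
    return sum(max(edge_match(r, s) for s in s_edges)
               for r in r_edges) / len(r_edges)
```

A real implementation would normalize tokens, use WordNet path- or gloss-based relatedness instead of equality, and include double edges in the match.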
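The tone and quantity metrics are simple enough to sketch directly. The word lists below are illustrative assumptions; the abstract does not specify the lexicon used. Tone is scored as the balance of positively versus negatively oriented words, and quantity as the count of unique tokens.

```python
# Illustrative orientation lexicons -- assumptions, not the actual
# word lists used in the dissertation.
POSITIVE = {"clear", "good", "excellent", "helpful", "well"}
NEGATIVE = {"unclear", "poor", "confusing", "weak", "wrong"}

def review_metrics(review):
    """Return (tone, quantity) for a review. Tone is the semantic
    orientation in [-1, 1]: +1 all positive words, -1 all negative,
    0 if no oriented words appear. Quantity is the number of unique
    tokens."""
    tokens = review.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    oriented = pos + neg
    tone = (pos - neg) / oriented if oriented else 0.0
    quantity = len(set(tokens))
    return tone, quantity
```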
We also report results from a user study conducted to evaluate the usefulness of an automated review quality assessment system. Participants in the study found relevance to be the most important metric in assessing review quality, and found the system's output from metrics such as "content type" and "plagiarism" to be most useful in helping them learn about their reviews. (Abstract shortened by UMI.) [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone (800-521-0600). Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://www.proquest.com/en-US/products/dissertations/individuals.shtml
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A