Did you expect your users to say this?: Distilling unexpected micro-reviews for venue owners

Bibliographic Details
Main Authors: CHONG, Wen-Haw, DAI, Bingtian, LIM, Ee-Peng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2015
Subjects:
tip
Online Access:https://ink.library.smu.edu.sg/sis_research/3105
https://ink.library.smu.edu.sg/context/sis_research/article/4105/viewcontent/Distilling_Unexpected_Micro_Reviews_pv.pdf
Institution: Singapore Management University
Description
Summary: With social media platforms such as Foursquare, users can now generate concise reviews, i.e. micro-reviews, about entities such as venues (or products). From the venue owner's perspective, analysing these micro-reviews offers interesting insights, useful for event detection and customer relationship management. However, not all micro-reviews are equally important, especially since a venue owner should already be familiar with his venue's primary aspects. Instead, we envisage that a venue owner will be interested in micro-reviews that are unexpected to him. These can arise in many ways, such as users focusing on aspects easily overlooked by the venue owner, making comparisons with competitors, using unusual language, or mentioning rare venue-related events, e.g. a dish being contaminated with bugs. Hence, in this study, we propose to discover unexpected information in micro-reviews, primarily to serve the needs of venue owners. Our proposed solution is to score and rank micro-reviews, for which we design a novel topic model, Sparse Additive Micro-Review (SAMR). Our model surfaces micro-review topics related to the venues; by properly offsetting these topics, we then derive unexpected micro-reviews. Qualitatively, we observed reasonable results for many venues. We then evaluate ranking accuracy using both human annotation and an automated approach with synthesized data. Both evaluations indicate that SAMR has the best ranking accuracy, outperforming baselines based on chi-square statistics and the vector space model.
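To make the ranking idea concrete, here is a minimal sketch of the vector-space-model baseline that the abstract mentions (not the SAMR topic model itself): each micro-review is scored by how dissimilar its term vector is to the venue's overall language profile, so reviews about rarely mentioned aspects rank highest. The tokeniser, the summed-counts venue profile, and the sample reviews are all illustrative assumptions, not taken from the paper.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    # Illustrative tokeniser: lowercase alphabetic terms, raw term counts.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def unexpectedness_scores(reviews):
    # Venue profile (assumption): the summed term counts of all its reviews.
    profile = Counter()
    for r in reviews:
        profile.update(tf_vector(r))
    # Unexpectedness of a review = 1 - similarity to the venue profile;
    # return reviews ranked from most to least unexpected.
    return sorted(
        ((1.0 - cosine(tf_vector(r), profile), r) for r in reviews),
        reverse=True,
    )

reviews = [
    "great coffee and friendly baristas",
    "coffee is great, love the latte art",
    "found a bug in my sandwich, never coming back",
]
for score, review in unexpectedness_scores(reviews):
    print(f"{score:.2f}  {review}")
```

On this toy input, the bug-in-the-sandwich review ranks first because it shares almost no vocabulary with the venue's dominant coffee-related terms, mirroring the abstract's example of a rare venue-related event. The paper's contribution, SAMR, replaces this flat term-vector profile with learned topics that are offset before scoring.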