Generating semantically similar permutations of questions by clustering

Bibliographic Details
Main Author: Famili, Kurniawan Aryanto
Other Authors: Chng Eng Siong
Format: Final Year Project
Language: English
Published: 2018
Subjects:
Online Access: http://hdl.handle.net/10356/74129
Description
Summary: With sophisticated machine learning techniques now available to the public, many industries have used their own data to solve their own problems, including training chatbots. However, a lack of data is a major concern when training a bot for specific use cases, such as a university FAQ-answering bot. The researcher proposes a solution that creates more training data by generating permutations of the existing questions on the campus's FAQ page. The proposed system combines a rule-based and a cluster-based approach. The rule-based approach performs part-of-speech tagging on a question, finds synonyms of the applicable words in WordNet, and produces new questions by substituting the synonyms for the original words and restructuring the sentence according to production rules. The cluster-based approach mines question patterns from the existing questions, finds those semantically similar to a given question with a clustering algorithm such as K-means or affinity propagation, and generates permutations from the matching patterns. An experiment on a small dataset of 30 manually written questions covering 6 topics yielded an F1 score of 0.561 for both clustering algorithms when paired with sent2vec using a pre-trained model. In a web-based user test, users asked a question on each of the 6 topics and rated the quality of the generated permutations on a scale of 0-3. The overall average score was 1.18/3.00 (39.3%); for the topic with the most questions in the dataset, the average score was 1.92/3.00 (64%). Given a sufficiently large dataset, it is believed that the generator would perform more accurately and efficiently across all topics.
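
The abstract only names the building blocks of the rule-based step, so the following is a minimal sketch of that idea, not the author's implementation: tag a question with NLTK's part-of-speech tagger, look up single-token WordNet synonyms for content words, and emit permutations by substitution. The function names and the sample question are illustrative, and the restructuring via production rules mentioned in the abstract is omitted.

```python
# Hypothetical sketch of the rule-based step described in the abstract:
# POS-tag a question, collect WordNet synonyms for content words, and
# generate permutations by substituting them for the original words.
import itertools

import nltk
from nltk.corpus import wordnet as wn

# Map Penn Treebank tag prefixes to WordNet POS categories (content words only).
PENN_TO_WN = {"NN": wn.NOUN, "VB": wn.VERB, "JJ": wn.ADJ, "RB": wn.ADV}


def synonym_candidates(word, penn_tag):
    """Return the word plus its single-token WordNet synonyms."""
    wn_pos = PENN_TO_WN.get(penn_tag[:2])
    if wn_pos is None:
        return [word]
    lemmas = {
        lemma.name().replace("_", " ")
        for synset in wn.synsets(word, pos=wn_pos)
        for lemma in synset.lemmas()
    }
    return sorted({word.lower()} | {l.lower() for l in lemmas if " " not in l})


def permute_question(question, limit=10):
    """Generate surface permutations of a question via synonym substitution."""
    tagged = nltk.pos_tag(nltk.word_tokenize(question))
    options = [synonym_candidates(word, tag) for word, tag in tagged]
    return [" ".join(combo)
            for combo in itertools.islice(itertools.product(*options), limit)]


if __name__ == "__main__":
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)
    nltk.download("wordnet", quiet=True)
    for q in permute_question("How do I reset my campus email password ?"):
        print(q)
```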
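
The cluster-based step can likewise be sketched under stated assumptions: the report pairs K-means or affinity propagation with sent2vec embeddings, but here a TF-IDF vectorizer from scikit-learn stands in for the pre-trained sent2vec model so the example runs on its own, and the four FAQ questions are an illustrative stand-in for the 30-question dataset.

```python
# Hypothetical sketch of the cluster-based step: embed the FAQ questions,
# group them with K-means (or affinity propagation), and reuse the question
# patterns of the cluster closest to a newly asked question.
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq_questions = [
    "How do I apply for on-campus housing?",
    "Where can I pay my tuition fees?",
    "How do I reset my student email password?",
    "When does course registration open?",
]

# TF-IDF stands in for sent2vec embeddings in this self-contained sketch.
vectorizer = TfidfVectorizer()
embeddings = vectorizer.fit_transform(faq_questions).toarray()

# Either algorithm named in the abstract can be used here.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0)
# clusterer = AffinityPropagation(random_state=0)
labels = clusterer.fit_predict(embeddings)


def similar_patterns(new_question):
    """Return the FAQ questions in the cluster most similar to new_question."""
    vec = vectorizer.transform([new_question]).toarray()
    sims = cosine_similarity(vec, embeddings)[0]
    best_cluster = labels[int(np.argmax(sims))]
    return [q for q, lbl in zip(faq_questions, labels) if lbl == best_cluster]


print(similar_patterns("How can I change my email password?"))
```

The returned questions play the role of the mined patterns from which new permutations would then be generated.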