Exploring language model for better semantic matching of text paragraphs


Overview

Bibliographic Details
Main Author: Ng, Kwang Sheng
Other Authors: Lihui Chen
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157772
Institution: Nanyang Technological University
Description
Summary: Natural Language Processing (NLP) has come a long way, and modern NLP research has driven the widespread use of NLP in our everyday lives for short texts. The same cannot yet be said for long documents. Some current NLP models work well for short texts but suffer as the length of the text grows, with processing time increasing exponentially and results degrading. In recent times, the state-of-the-art (SOTA) BERT model propelled existing NLP work forward significantly. Newer methods such as Sentence-BERT (SBERT) and Simple Contrastive Learning of Sentence Embeddings (SimCSE), which build on BERT, have achieved comparable outcomes. This report aims to evaluate how effective these two newer models are. In this project, the two models are tested on a publicly available patent dataset, PatentMatch, which consists of patent claims; when the PatentMatch team evaluated it with the SOTA BERT model, they achieved only 54% accuracy. Using pretrained models from SBERT and SimCSE, the PatentMatch balanced test dataset was evaluated both with and without further training, to observe how the average cosine similarity score changes and how the models perform. The experiment was replicated several times with different parameter settings. The output of the two models varies with the pretrained model used: the models reached an accuracy rate around the same as the BERT model, but in much less time. The F1 scores for both models look promising, with some fine-tuned pretrained models scoring around 66% with fairly high precision and recall. Both models have the potential to perform even better, but a more capable pretrained model would be needed for them to shine.
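The comparison described in the summary hinges on the cosine similarity between sentence embeddings of paired patent claims. As a minimal sketch of that scoring step (the embedding vectors below are toy stand-ins for illustration, not actual SBERT or SimCSE output), the score is the dot product of the two vectors divided by the product of their norms:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the sentence embeddings of two patent claims;
# real SBERT/SimCSE embeddings would be several hundred dimensions.
claim_a = np.array([0.2, 0.7, 0.1])
claim_b = np.array([0.3, 0.6, 0.2])

score = cosine_similarity(claim_a, claim_b)  # value in [-1, 1]; higher = more similar
```

In the experiments described above, a threshold on this score would then decide whether a claim pair counts as a match, from which accuracy, precision, recall, and F1 are computed.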