The study of word embedding representations in different domains
Bibliographic Details
Main Author: Seng, Jeremy Jie Min
Other Authors: Chng Eng Siong
Format: Final Year Project
Language: English
Published: 2016
Subjects:
Online Access:http://hdl.handle.net/10356/69145
Institution: Nanyang Technological University
Description
Summary: Word embedding has been a popular research topic since 2013, when Mikolov and his colleagues proposed several new algorithms. These algorithms, adapted from existing machine learning architectures, allow machines to learn the meaning behind words in an unsupervised manner. The proposed algorithms can determine how close two words are in vector space by measuring their cosine similarity. However, much work remains to determine whether these methods can be extended to capture the context of a sentence or paragraph using these cosine distances. As the proposed algorithms require a large dictionary of words, commonly referred to as a corpus in this report, the author wishes to find out whether a corpus built from Wikipedia articles can show the closeness of two words in different contexts.
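
The cosine-similarity measure described in the summary can be sketched as follows. This is an illustrative example only: the vectors below are made-up toy values, not actual word2vec embeddings or output from the project described in this record.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors:
    dot(u, v) / (|u| * |v|), ranging from -1 to 1."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings" (hypothetical values for illustration).
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

# Related words should score closer to 1 than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

In practice the vectors would be the learned embeddings (typically hundreds of dimensions) trained on a large corpus such as Wikipedia, but the distance computation is the same.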