Integration and classification of documents from multiple sources
Saved in:

| Field | Value |
|---|---|
| Main Author | |
| Other Authors | |
| Format | Final Year Project |
| Language | English |
| Published | 2013 |
| Subjects | |
| Online Access | http://hdl.handle.net/10356/54429 |
| Institution | Nanyang Technological University |
Summary: Knowledge lies in various sources and can be found in many formats and shapes. One of the greatest sources of knowledge we rely on in our daily lives is none other than the internet, where information is mostly encoded as unstructured text or documents. Moreover, knowledge extraction from text and documents today still relies largely on manual entry, which is time-consuming and laborious. To address this problem, a human-like intelligent agent capable of reasoning and decision making is built.
The objective of this project is to integrate web documents from multiple sources and classify them using Latent Semantic Analysis (LSA). A set of websites returned for a Google query input is put through several processing steps: text parsing, HTML tag removal, TF-IDF term weighting and normalization, and cosine-similarity grouping via Singular Value Decomposition (SVD).
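As an illustrative sketch only (not the project's actual code), the TF-IDF weighting and normalization step mentioned above could look roughly like the following in Java; the class and method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

/** Illustrative TF-IDF weighting with unit-length (cosine) normalization. */
public class TfIdfWeighter {

    /** Counts, for each term, how many documents contain it. */
    public static Map<String, Integer> documentFrequencies(List<List<String>> docs) {
        Map<String, Integer> df = new HashMap<>();
        for (List<String> doc : docs) {
            for (String term : new HashSet<>(doc)) {
                df.merge(term, 1, Integer::sum);
            }
        }
        return df;
    }

    /** Returns a normalized TF-IDF vector (term -> weight) for one tokenized document. */
    public static Map<String, Double> tfIdfVector(List<String> doc,
                                                  Map<String, Integer> df,
                                                  int totalDocs) {
        // Raw term frequencies for this document.
        Map<String, Integer> tf = new HashMap<>();
        for (String term : doc) {
            tf.merge(term, 1, Integer::sum);
        }
        // Weight each term by tf * log(N / df), then normalize to unit length.
        Map<String, Double> weights = new HashMap<>();
        double norm = 0.0;
        for (Map.Entry<String, Integer> e : tf.entrySet()) {
            double idf = Math.log((double) totalDocs / df.get(e.getKey()));
            double w = e.getValue() * idf;
            weights.put(e.getKey(), w);
            norm += w * w;
        }
        norm = Math.sqrt(norm);
        if (norm > 0) {
            for (Map.Entry<String, Double> e : weights.entrySet()) {
                e.setValue(e.getValue() / norm);
            }
        }
        return weights;
    }
}
```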
The system is built as a Java application and is able to filter and group closely related documents by constructing a vector space model.
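The abstract does not say how the SVD and grouping are implemented; as a minimal sketch, assuming the Apache Commons Math library for the decomposition and hypothetical class names, the cosine-similarity grouping over an SVD-reduced (LSA) space might look like this:

```java
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

/** Illustrative LSA-style document comparison via truncated SVD. */
public class LsaSimilarity {

    /**
     * Projects documents into a k-dimensional latent space.
     * termDoc is a term-by-document TF-IDF matrix (rows = terms, columns = documents);
     * the returned matrix has one row per document. k must not exceed
     * min(number of terms, number of documents).
     */
    public static RealMatrix reducedDocVectors(double[][] termDoc, int k) {
        RealMatrix a = new Array2DRowRealMatrix(termDoc);
        SingularValueDecomposition svd = new SingularValueDecomposition(a);
        // Keep the first k singular values/vectors: document vectors are rows of V_k * S_k.
        int numDocs = termDoc[0].length;
        RealMatrix vk = svd.getV().getSubMatrix(0, numDocs - 1, 0, k - 1);
        RealMatrix sk = svd.getS().getSubMatrix(0, k - 1, 0, k - 1);
        return vk.multiply(sk);
    }

    /** Cosine similarity between document rows i and j of the reduced matrix. */
    public static double cosine(RealMatrix docs, int i, int j) {
        double[] a = docs.getRow(i);
        double[] b = docs.getRow(j);
        double dot = 0, na = 0, nb = 0;
        for (int t = 0; t < a.length; t++) {
            dot += a[t] * b[t];
            na += a[t] * a[t];
            nb += b[t] * b[t];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```

Documents whose reduced vectors have a cosine similarity above some chosen threshold could then be placed in the same group, which matches the filtering and grouping behaviour described in the summary.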