Implementing semantic search for textual data in web applications
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/177123
Institution: Nanyang Technological University
Summary: Semantic search, also known as vector search, retrieves data based on semantic similarity. It is enabled by sentence embeddings, high-dimensional vectors that encapsulate the semantic meaning of sentences. Compared with traditional keyword search, semantic search accounts for the true intent of user queries, which keyword search struggles to capture.

This paper explores the implementation of semantic search in web applications by processing sentences with Sentence-BERT (SBERT), a pre-trained deep learning language model that generates meaningful, high-dimensional vectors called sentence embeddings. These embeddings are stored in a PostgreSQL database with the vector extension, which enables efficient similarity comparisons during search queries.

This work details the findings from developing a vector search system, integrating it with a web application, and deploying it online.
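The abstract describes the pipeline but the record includes no code. Below is a minimal sketch of that kind of pipeline, assuming the open-source sentence-transformers library for SBERT embeddings and psycopg2 with the PostgreSQL vector (pgvector) extension; the model name, table schema, sample documents, and connection string are illustrative assumptions, not details taken from the report.

```python
# Sketch only: encode sentences with an SBERT model and run similarity
# queries against a PostgreSQL table using the "vector" extension.
# All names below (model, table, credentials) are placeholders.
import psycopg2
from sentence_transformers import SentenceTransformer

# SBERT-style model that maps sentences to fixed-size embeddings (384 dims here).
model = SentenceTransformer("all-MiniLM-L6-v2")

conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
cur = conn.cursor()

# Enable the vector extension and create a table holding text plus its embedding.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        body text NOT NULL,
        embedding vector(384)
    )
""")

# Index a few example documents: encode each sentence and store its embedding.
docs = [
    "How to reset a forgotten password",
    "Steps for configuring two-factor authentication",
    "Refund policy for cancelled orders",
]
for body, emb in zip(docs, model.encode(docs)):
    cur.execute(
        "INSERT INTO documents (body, embedding) VALUES (%s, %s::vector)",
        (body, str(emb.tolist())),
    )
conn.commit()

# Semantic search: embed the query and rank rows by cosine distance (<=>).
query = "I lost my login credentials"
q_emb = model.encode(query)
cur.execute(
    """
    SELECT body, embedding <=> %s::vector AS distance
    FROM documents
    ORDER BY distance
    LIMIT 3
    """,
    (str(q_emb.tolist()),),
)
for body, distance in cur.fetchall():
    print(f"{distance:.3f}  {body}")
```

In a deployed system, an approximate nearest-neighbour index (pgvector provides IVFFlat and HNSW index types) would typically be added on the embedding column so similarity queries remain fast as the table grows.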