Automated image captioning

Bibliographic Details
Main Author: Teo, Sabrina Jingya
Other Authors: Chan Chok You, John
Format: Final Year Project
Language: English
Published: 2017
Subjects:
Online Access:http://hdl.handle.net/10356/71076
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-71076
record_format dspace
spelling sg-ntu-dr.10356-710762023-07-07T16:09:59Z Automated image captioning Teo, Sabrina Jingya Chan Chok You, John Chau Lap Pui School of Electrical and Electronic Engineering DRNTU::Engineering::Electrical and electronic engineering Technological advancement has brought about many changes in human interaction. Information can now be accessed easily through the web and data can be stored online, resulting in an influx of data stored on the web. Owing to these developments, automated identification has become a popular subject and has changed the way humans store and process image data. Beyond the efficient storage and classification of massive data, the initial purpose that sparked interest in the field, automated identification can also improve the lives of the less fortunate. This project makes use of Tesseract, an optical character recognition engine, and a text-to-speech system to build a system that assists the visually impaired in their daily interactions and activities, especially in accessing printed materials. Text is the most common form of data on computers and mobile phones; with decreased visibility, the visually impaired have restricted access to information on electronic devices. This system will improve their access to this information. Bachelor of Engineering 2017-05-15T04:05:22Z 2017-05-15T04:05:22Z 2017 Final Year Project (FYP) http://hdl.handle.net/10356/71076 en Nanyang Technological University 57 p. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic DRNTU::Engineering::Electrical and electronic engineering
spellingShingle DRNTU::Engineering::Electrical and electronic engineering
Teo, Sabrina Jingya
Automated image captioning
description Technological advancement has brought about many changes in human interaction. Information can now be accessed easily through the web and data can be stored online, resulting in an influx of data stored on the web. Owing to these developments, automated identification has become a popular subject and has changed the way humans store and process image data. Beyond the efficient storage and classification of massive data, the initial purpose that sparked interest in the field, automated identification can also improve the lives of the less fortunate. This project makes use of Tesseract, an optical character recognition engine, and a text-to-speech system to build a system that assists the visually impaired in their daily interactions and activities, especially in accessing printed materials. Text is the most common form of data on computers and mobile phones; with decreased visibility, the visually impaired have restricted access to information on electronic devices. This system will improve their access to this information.
author2 Chan Chok You, John
author_facet Chan Chok You, John
Teo, Sabrina Jingya
format Final Year Project
author Teo, Sabrina Jingya
author_sort Teo, Sabrina Jingya
title Automated image captioning
title_short Automated image captioning
title_full Automated image captioning
title_fullStr Automated image captioning
title_full_unstemmed Automated image captioning
title_sort automated image captioning
publishDate 2017
url http://hdl.handle.net/10356/71076
_version_ 1772827219514621952
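
The abstract describes a pipeline that recognises printed text with Tesseract and reads it aloud with a text-to-speech system. The sketch below illustrates one way such a pipeline could be wired up in Python; the library choices (pytesseract, pyttsx3, Pillow) are assumptions for illustration only, since the record does not specify the project's actual implementation.

```python
# Sketch of an OCR-to-speech pipeline, assuming pytesseract as the
# Tesseract wrapper and pyttsx3 as an offline text-to-speech engine.
import re


def clean_ocr_text(raw: str) -> str:
    """Tidy raw OCR output so the speech engine reads it naturally."""
    text = re.sub(r"-\n(\w)", r"\1", raw)  # re-join words hyphenated across lines
    text = re.sub(r"\s+", " ", text)       # collapse newlines and repeated spaces
    return text.strip()


def read_aloud(image_path: str) -> str:
    """Recognise printed text in an image and speak it; returns the text."""
    import pytesseract          # Python wrapper for the Tesseract OCR engine
    import pyttsx3              # offline text-to-speech engine
    from PIL import Image

    raw = pytesseract.image_to_string(Image.open(image_path))
    text = clean_ocr_text(raw)
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    return text


if __name__ == "__main__":
    # Hypothetical input image of a printed page.
    print(read_aloud("page.jpg"))
```

Deferring the OCR and speech imports into `read_aloud` keeps the text-cleaning step usable and testable even on machines without Tesseract installed.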