Automatic closed caption generation from video files


Full Description

Bibliographic Details
Main Author: Tan, Kenneth Chengwei
Other Authors: Chng Eng Siong
Format: Final Year Project
Language: English
Published: 2014
Subjects:
Online Access: http://hdl.handle.net/10356/59631
Physical Description
Summary: The idea of speech recognition using computers and software is not new. However, for years its rather low accuracy and constantly changing variables, such as a speaker's accent and background noise, resulted in a low adoption rate (Challenges in adopting speech recognition, 2004), until a boom in the medical industry with the adoption of electronic health records (EHRs) (Speech Recognition Booms As EHR Adoption Grows, 2013). Speech recognition was long reserved for specialized and educational purposes, but it is now reaching mainstream industries and permeating the everyday lives of its users, for example voice commands for smartphones, dictation software for personal computers and even home automation (Say What? Google Works to Improve YouTube Auto-Captions for the Deaf, 2011). With speech recognition, much of the time and effort traditionally spent entering large amounts of text into a computer manually can be cut down drastically. Creating a closed caption file for a video manually requires the transcriber to listen to the audio track, enter the text and finally synchronize the closed captions with the video, which can be tedious and time consuming. Using speech recognition, this process can be automated to produce closed captions at a fraction of the time and effort previously required. This final year project report documents the development of an application that automatically generates closed captions from an input video file. It also discusses current speech recognition technology, as well as potential improvements and modifications that may be added to the application in future. The project commenced in August 2013 and was completed in February 2014.
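
To illustrate the automated workflow the summary describes (extract the audio track, run speech recognition, and emit synchronized captions), the following is a minimal sketch only, not the report's actual implementation. It assumes ffmpeg is on the PATH and the Python SpeechRecognition package with the offline PocketSphinx engine is installed; the file names, the fixed 10-second caption chunks, and the choice of SRT output are all placeholder assumptions.

# Minimal sketch of an automated captioning pipeline (illustrative only,
# not the implementation described in the report): extract the audio track
# with ffmpeg, transcribe fixed-length chunks with a speech recognition
# engine, and write the results as an SRT closed-caption file.
import subprocess
import speech_recognition as sr

CHUNK_SECONDS = 10  # caption granularity; a real system would segment on silence

def extract_audio(video_path, audio_path="audio.wav"):
    """Pull a mono 16 kHz WAV track out of the video with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-ac", "1", "-ar", "16000", audio_path],
        check=True,
    )
    return audio_path

def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 00:01:23,000."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d},000"

def generate_captions(video_path, srt_path="captions.srt"):
    audio_path = extract_audio(video_path)
    recognizer = sr.Recognizer()
    entries = []
    with sr.AudioFile(audio_path) as source:
        index, offset = 1, 0.0
        while True:
            # Each call continues reading from where the previous one stopped.
            chunk = recognizer.record(source, duration=CHUNK_SECONDS)
            if len(chunk.frame_data) == 0:
                break  # end of the audio track
            try:
                # Requires pocketsphinx; an online engine could be swapped in here.
                text = recognizer.recognize_sphinx(chunk)
            except sr.UnknownValueError:
                text = ""  # nothing recognizable in this chunk
            if text:
                entries.append(
                    f"{index}\n{srt_timestamp(offset)} --> "
                    f"{srt_timestamp(offset + CHUNK_SECONDS)}\n{text}\n"
                )
                index += 1
            offset += CHUNK_SECONDS
    with open(srt_path, "w", encoding="utf-8") as f:
        f.write("\n".join(entries))
    return srt_path

if __name__ == "__main__":
    generate_captions("lecture.mp4")  # placeholder input file

Chunking on a fixed interval keeps the sketch short; splitting on silence or using an engine that returns word-level timestamps would give captions that line up more naturally with the speech.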