System integration of a vision-based robot system for the food industry


Full Description

Bibliographic Details
Main Author: CHIA, JING CHENG
Other Authors: Chen I-Ming
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/158861
Institution: Nanyang Technological University
Description
Summary: As technology becomes more advanced, so too does the field of robotics and automation. Many robots can be found assembling vehicle parts in automotive factories; these consist mainly of mechanical arms programmed to weld and fasten parts of the cars. Nowadays, the definition of robotics has evolved and expanded to include the development, innovation, and use of robots for surveillance in harsh environments, robots that assist in many aspects of healthcare, and even autonomous vehicles deployed in many places in Singapore for a future intelligent traffic system. This is especially true with the development of Artificial Intelligence in the robotics industry, which makes a high level of robot autonomy possible in complicated environments. Deep learning approaches are widely utilised in the robotics field, for tasks such as object detection, robot navigation, natural language processing, and point cloud registration. The purpose of this Final-Year Project is to integrate a point cloud registration method into a vision-based food assembly robot. The main objective is to register two point clouds collected from two depth cameras into a single point cloud with higher-quality depth information, a wider perspective, and fewer blind spots. Recent deep point cloud matching methods mostly focus on standard point cloud data with high overlap ratios and are rarely deployed in practical applications. Therefore, this project focuses on comparing different point cloud registration approaches on both standard data and real-world data. To compare the quality of data collected at different camera tilt angles, the author will design a tilt module for the depth cameras.
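The registration task the abstract describes, merging point clouds from two depth cameras into one frame, reduces in its simplest form (known point correspondences) to estimating a rigid transform between the two camera views. As a minimal illustrative sketch only, not the project's actual deep-learning pipeline, the classical Kabsch/SVD solution can be written as follows; all names here are the editor's assumptions:

```python
import numpy as np

def kabsch_register(src, dst):
    """Estimate the rigid transform (R, t) that maps src points onto dst points.

    src, dst: (N, 3) arrays of corresponding 3D points from the two cameras.
    Returns rotation R (3x3) and translation t (3,) minimising ||R @ src + t - dst||.
    """
    # Center both clouds on their centroids.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - src_c, dst - dst_c
    # Cross-covariance matrix and its SVD.
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In practice, depth-camera clouds have no known correspondences, which is why iterative methods (ICP) or the learned registration approaches compared in this project are needed; the SVD step above is still the core alignment solver inside many of them.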