Binocular vision-guided manipulation by robotic arm


Overview

Bibliographic Details
Main Author: Fang, Yuhui
Other Authors: Xie Ming
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/173393
Description
Summary: Visual signals are paramount in conferring human-like intelligence to robots, vehicles, and machines. Binocular vision, akin to its role in human comprehension of a dynamic world, is equally crucial for intelligent robots and machines to extract knowledge from visual signals. However, stereovision matching presents a notable challenge for these entities. This thesis introduces an innovative approach to tackle this challenge, emphasizing a robust matching solution that incorporates top-down image sampling, hybrid feature extraction, and the integration of a Restricted Coulomb Energy (RCE) neural network for incremental learning and robust recognition. Furthermore, the thesis explores the analogy between the human eye and a pan-tilt-zoom (PTZ) camera, prompting the intriguing question of whether simpler, easily calibratable formulas exist for computing depth and displacement. The thesis unveils a discovery in the domain of 3D projection for human-like binocular vision systems that facilitates forward and inverse transformations between 2D digital images and a 3D analogue scene. The revealed formulas are accurate, easily computable, tunable on the fly, and suitable for implementation in a neural system. Experimental results affirm the efficacy of these formulas, offering a promising avenue for simplified and calibration-friendly 3D projection in binocular vision systems.
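For context on the depth-computation question the abstract raises, the following is a minimal sketch of the standard rectified-stereo (pinhole) depth relation, Z = f·B/d. It is the textbook baseline the thesis compares against, not the thesis's own simplified formulas (which are not reproduced in this record); all parameter values below are illustrative assumptions.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length in pixels (from camera calibration)
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal pixel shift of the point between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers (not from the thesis): 700 px focal length,
# 0.12 m baseline, 35 px disparity -> roughly 2.4 m depth.
print(depth_from_disparity(700.0, 0.12, 35.0))
```

Note the dependence on a prior calibration for `focal_px` and `baseline_m`; the thesis's contribution is precisely a set of formulas that are easier to calibrate and tune on the fly than this fixed-parameter form.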