A visual based target detection and tracking system (II)
The main objective of this project is to design and create a visual sensor system and its Graphical User Interface (GUI). Four cameras were used in this visual sensor system to detect and identify the different robots from the captured images. There are two major parts in this project, the calibration...
Main Author: | Tan, Augustine Peng Soon |
---|---|
Other Authors: | Xie Lihua; School of Electrical and Electronic Engineering |
Format: | Final Year Project |
Language: | English |
Published: | 2010 |
Subjects: | DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Control engineering |
Online Access: | http://hdl.handle.net/10356/40906 |
Institution: | Nanyang Technological University |
Description:
The main objective of this project is to design and create a visual sensor system and its Graphical User Interface (GUI). Four cameras were used in this visual sensor system to detect and identify the different robots from the captured images. The project has two major parts: the calibration process and the extraction process.
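The report itself does not include source code. As a rough illustration of the four-camera setup described above, the sketch below grabs one frame per camera; the use of OpenCV in Python, the device indices 0-3 and the `grab_frames` helper are assumptions for this example, not details taken from the project.

```python
import cv2

# Assumed device indices for the four overhead cameras.
CAMERA_INDICES = [0, 1, 2, 3]
caps = [cv2.VideoCapture(i) for i in CAMERA_INDICES]

def grab_frames():
    """Grab one BGR frame from each camera; None marks a failed read."""
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        frames.append(frame if ok else None)
    return frames

# Remember to call cap.release() on each capture when the system shuts down.
```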
The calibration process covers color calibration, position calibration and robot configuration. Previously, the positions and orientations of the robots were obtained using the Stargazer; in this project the Stargazer has been removed, and the positions and orientations of the robots are now calculated solely from the images captured by the cameras. Color calibration sets the Hue, Saturation and Value (HSV) thresholds that isolate each required color. Position calibration uses the image points of known field positions to compute a homography matrix, which is then used to calculate all field coordinates. Robot configuration assigns each robot a color combination so that it can be identified in the captured images.
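A minimal sketch of the two calibration ideas just described, assuming OpenCV in Python: an HSV range test for color calibration, and a homography computed from the image points of known field positions and then applied to map detected image points to field coordinates. The HSV bounds, the four point pairs and the helper names are illustrative placeholders, not values from the report.

```python
import cv2
import numpy as np

def color_mask(frame_bgr, hsv_low, hsv_high):
    """Binary mask of pixels whose HSV value falls inside the calibrated range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv,
                       np.array(hsv_low, dtype=np.uint8),
                       np.array(hsv_high, dtype=np.uint8))

# Position calibration: image points of known field positions -> homography H.
image_pts = np.array([[102, 64], [538, 60], [545, 410], [98, 415]], dtype=np.float32)
field_pts = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], dtype=np.float32)  # field units, e.g. cm
H, _ = cv2.findHomography(image_pts, field_pts)

def to_field_coords(image_point):
    """Map an image point (e.g. a detected color-patch centroid) to field coordinates."""
    src = np.array([[image_point]], dtype=np.float32)   # shape (1, 1, 2) as required
    return cv2.perspectiveTransform(src, H)[0, 0]
```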
The extraction process has two parts. The first processes the captured images to determine each robot's ID and to calculate its position and orientation. The second sends the calculated data to another computer over a wireless connection; TCP/IP was chosen as the communication protocol for this project.
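For the second part of the extraction process, a hedged sketch of how the computed ID, position and orientation might be pushed to the other computer over TCP/IP. The report only states that TCP/IP is used; the host and port, the newline-delimited JSON encoding and the `send_robot_states` helper are assumptions made for this example.

```python
import json
import socket

HOST, PORT = "192.168.1.50", 9000   # placeholder address of the receiving computer

def send_robot_states(states):
    """Send a list of robot states, e.g. [{"id": 3, "x": 120.5, "y": 87.2, "theta": 1.57}]."""
    payload = json.dumps(states).encode("utf-8") + b"\n"
    with socket.create_connection((HOST, PORT), timeout=1.0) as sock:
        sock.sendall(payload)

# Example usage with one detected robot pose (field coordinates, angle in radians):
# send_robot_states([{"id": 1, "x": 250.0, "y": 140.0, "theta": 0.8}])
```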