Study on human motion prediction for human robot collaboration in manufacturing
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/169160
Institution: Nanyang Technological University
Summary: Human motion prediction, especially prediction of future human location, has developed rapidly on the basis of deep neural networks. However, most current research addresses general urban traffic environments such as outdoor crosswalks, or indoor walking environments such as shopping malls and metro entrances. To support human-robot collaboration, this dissertation explores the task of motion recognition and prediction in a smart manufacturing environment. A new dataset, the HUman Motion in Manufacturing (HUMM) dataset, containing a total of 18.5 hours of raw video, was collected from two detection perspectives. Using real video from a smart factory as a reference, different motion patterns were designed so that the dataset is as close to a real manufacturing environment as possible. After pre-processing this dataset, a Dynamic-Trajectory-Predictor (DTP) based on the ResNet deep learning network was proposed for human motion prediction in manufacturing. The ResNet takes optical flow computed from the video as input and outputs a compensation term that assists the prediction. After experiments under different settings, such as pre-processing operations and prediction length, the proposed DTP can predict future human location 0.5 s ahead from 60 frame-per-second (fps) video captured from a fixed detection perspective (FDP); for 60 fps video from a first-person perspective (FPP), the prediction step is 0.25 s. In terms of performance, the DTP achieves Mean Squared Errors of 507 and 1814 square pixels for FDP and FPP, respectively. Compared with an existing human location prediction method, these results are satisfactory, which attests to the representativeness and quality of the HUMM dataset and indicates the effectiveness and promise of the proposed DTP for future human location prediction and real-time system support in manufacturing environments.
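
The summary's description of the DTP (a location forecast refined by a compensation term that a ResNet regresses from optical flow) can be illustrated with a minimal sketch. The following is an illustrative reconstruction in PyTorch, not code from the thesis: the class and argument names, the choice of ResNet-18, the number of stacked flow frames, and the use of a constant-velocity extrapolation as the baseline being compensated are all assumptions.

```python
# Hypothetical sketch of the DTP idea described in the summary: a ResNet
# backbone consumes stacked optical-flow frames and regresses a correction
# ("compensation term") that is added to a constant-velocity baseline
# extrapolation of the person's pixel location. All names, the ResNet-18
# choice, and the baseline are illustrative assumptions, not thesis details.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class DynamicTrajectoryPredictor(nn.Module):
    def __init__(self, num_flow_frames: int = 9, horizon: int = 30):
        super().__init__()
        self.horizon = horizon  # e.g. 30 frames = 0.5 s at 60 fps
        backbone = resnet18(weights=None)
        # Optical flow has 2 channels (dx, dy) per frame; replace the RGB
        # stem so the network accepts the stacked flow volume.
        backbone.conv1 = nn.Conv2d(
            2 * num_flow_frames, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        # Regress a (dx, dy) compensation term for each future frame.
        backbone.fc = nn.Linear(backbone.fc.in_features, 2 * horizon)
        self.backbone = backbone

    def forward(self, flow_stack: torch.Tensor, past_centres: torch.Tensor):
        """
        flow_stack:   (B, 2*num_flow_frames, H, W) stacked optical flow
        past_centres: (B, T, 2) observed pixel centres of the person
        returns:      (B, horizon, 2) predicted future pixel centres
        """
        batch = past_centres.size(0)
        # Constant-velocity baseline from the last two observations.
        velocity = past_centres[:, -1] - past_centres[:, -2]              # (B, 2)
        steps = torch.arange(1, self.horizon + 1, device=velocity.device)
        baseline = past_centres[:, -1:, :] + steps.view(1, -1, 1) * velocity.unsqueeze(1)
        # Learned compensation term regressed from the optical flow.
        correction = self.backbone(flow_stack).view(batch, self.horizon, 2)
        return baseline + correction


# Example usage with dummy tensors (batch of 4, 9 flow frames, 224x224).
model = DynamicTrajectoryPredictor()
flows = torch.randn(4, 18, 224, 224)
past = torch.randn(4, 10, 2)
future = model(flows, past)  # shape: (4, 30, 2)
```

At 60 fps, a horizon of 30 frames corresponds to the 0.5 s FDP prediction step quoted above, and 15 frames to the 0.25 s FPP step; errors in square pixels would then be measured between predicted and ground-truth future pixel locations.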
Keywords: Human Motion Prediction, Manufacturing, Human-Robot Collaboration, Dataset Construction, DTP