RenderFi: human pose rendering via wireless signals

Bibliographic Details
Main Author: Huang, Runxi
Other Authors: Xie, Lihua
Format: Thesis-Master by Coursework
Language:English
Published: Nanyang Technological University, 2024
Subjects:
Online Access:https://hdl.handle.net/10356/177602
Institution: Nanyang Technological University
Description
Summary: Novel pose rendering is a burgeoning research area with many application scenarios, aided by human mesh reconstruction (HMR) methods that supply external-source SMPL pose parameters. Existing HMR methods typically rely on images or wearable devices: the former can be easily compromised by poor lighting or occlusions, while the latter may cause privacy intrusions. To overcome these limitations, we introduce RenderFi, an end-to-end, multi-task, multimodal learning framework that integrates wireless signals with image-derived SMPL parameters to enhance cross-modal supervision. This approach not only improves the robustness of pose estimation under varying environmental conditions but also leverages the synchronized multimodal data of the MM-Fi dataset. RenderFi processes signals from multiple wireless sensors to generate accurate 3D human keypoints and SMPL pose parameters. Although our framework may not surpass traditional methods on every metric, it pioneers new avenues for rendering human poses in static scenes. We demonstrate its potential through reconstructed views for novel pose rendering and through quantitative evaluation of 3D human pose estimation.
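The abstract does not spell out the training objective, but the multi-task setup it describes (jointly supervising 3D keypoints and SMPL parameters) is commonly implemented as a weighted sum of per-task regression losses. A minimal sketch of such a combined loss is shown below; the function name, loss weights, and the 85-dimensional SMPL layout (72 pose + 10 shape + 3 translation) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def multitask_loss(pred_kpts, gt_kpts, pred_smpl, gt_smpl,
                   w_kpt=1.0, w_smpl=0.1):
    """Hypothetical combined loss for a multi-task pose network.

    pred_kpts, gt_kpts : (J, 3) arrays of 3D joint positions (metres)
    pred_smpl, gt_smpl : (85,) arrays of SMPL parameters
                         (assumed layout: 72 pose + 10 shape + 3 translation)
    """
    # MPJPE-style keypoint term: mean per-joint Euclidean distance
    l_kpt = np.mean(np.linalg.norm(pred_kpts - gt_kpts, axis=-1))
    # Simple L2 term on the SMPL parameter vector
    l_smpl = np.mean((pred_smpl - gt_smpl) ** 2)
    return w_kpt * l_kpt + w_smpl * l_smpl

# Toy example: 17 joints all offset by 1 cm per axis, perfect SMPL prediction
gt_k = np.zeros((17, 3))
pred_k = gt_k + 0.01
gt_s = np.zeros(85)
pred_s = gt_s.copy()
loss = multitask_loss(pred_k, gt_k, pred_s, gt_s)
```

In practice the weights would balance the scale mismatch between metric keypoint error and unitless parameter error; the values above are placeholders.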