Low-latency compression of mocap data using learned spatial decorrelation transform


Bibliographic Details
Main Authors: Hou, Junhui, Chau, Lap-Pui, Magnenat-Thalmann, Nadia, He, Ying
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2018
Subjects:
Online Access: https://hdl.handle.net/10356/89393
http://hdl.handle.net/10220/46234
Description
Summary: Due to the growing need for motion capture (mocap) in movies, video games, sports, and other fields, it is highly desirable to compress mocap data for efficient storage and transmission. Unfortunately, existing compression methods have either high latency or poor compression performance, making them less appealing for time-critical applications and/or networks with limited bandwidth. This paper presents two efficient methods to compress mocap data with low latency. The first method processes the data frame by frame, making it ideal for mocap data streaming. The second is clip-oriented and provides a flexible trade-off between latency and compression performance; it can achieve higher compression performance while keeping the latency fairly low and controllable. Observing that mocap data exhibits some unique spatial characteristics, we learn an orthogonal transform to reduce spatial redundancy. We formulate the learning problem as a least-squares reconstruction error regularized by orthogonality and sparsity, and solve it via alternating iteration. We also adopt predictive coding and temporal DCT for temporal decorrelation in the frame- and clip-oriented methods, respectively. Experimental results show that the proposed methods achieve higher compression performance at lower computational cost and latency than state-of-the-art methods. Moreover, our methods are general and applicable to various types of mocap data.
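
The learning problem described in the summary (a least-squares reconstruction term with an orthogonality constraint on the transform and a sparsity penalty on the coefficients, solved by alternating iteration) admits a compact illustration. The sketch below is one reading of the abstract, not the authors' implementation: the names X (a matrix of mocap frames, one spatial coordinate per row), T (the learned orthogonal transform), C (the sparse coefficients), and the sparsity weight lam are all hypothetical, and the paper's exact objective and update rules may differ.

    # Minimal sketch of alternating iteration for
    #   min_{T,C} ||X - T C||_F^2 + lam * ||C||_1   s.t.  T^T T = I
    # based only on the formulation stated in the abstract.
    import numpy as np

    def soft_threshold(A, tau):
        # Proximal operator of the L1 norm: shrink entries toward zero.
        return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

    def learn_orthogonal_transform(X, lam=0.1, n_iter=50):
        d = X.shape[0]
        T = np.eye(d)  # start from the identity transform
        for _ in range(n_iter):
            # C-step: with T orthogonal, the fit term equals ||T^T X - C||_F^2,
            # so the L1-regularized minimizer is a soft-thresholded projection.
            C = soft_threshold(T.T @ X, lam / 2.0)
            # T-step: an orthogonal Procrustes problem, solved via the SVD of X C^T.
            U, _, Vt = np.linalg.svd(X @ C.T)
            T = U @ Vt
        return T, C

In a scheme like this, the transform would be learned offline from training data; at run time each incoming frame is merely multiplied by T^T, which keeps the per-frame computation, and hence the latency, low.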