Low-latency compression of mocap data using learned spatial decorrelation transform
Main Authors:
Other Authors:
Format: Article
Language: English
Published: 2018
Subjects:
Online Access: https://hdl.handle.net/10356/89393 http://hdl.handle.net/10220/46234
Institution: Nanyang Technological University
Summary: Due to the growing use of motion capture (mocap) in movies, video games, sports, and other applications, compressing mocap data for efficient storage and transmission is highly desirable. Unfortunately, existing compression methods have either high latency or poor compression performance, making them less appealing for time-critical applications and/or networks with limited bandwidth. This paper presents two efficient methods to compress mocap data with low latency. The first method processes the data in a frame-by-frame manner, making it ideal for mocap data streaming. The second is clip-oriented and provides a flexible trade-off between latency and compression performance; it achieves higher compression performance while keeping the latency fairly low and controllable. Observing that mocap data exhibits some unique spatial characteristics, we learn an orthogonal transform to reduce the spatial redundancy. We formulate the learning problem as a least-squares minimization of the reconstruction error, regularized by orthogonality and sparsity, and solve it via alternating iteration. We also adopt predictive coding and a temporal DCT for temporal decorrelation in the frame-oriented and clip-oriented methods, respectively. Experimental results show that the proposed methods can produce higher compression performance at lower computational cost and latency than state-of-the-art methods. Moreover, our methods are general and applicable to various types of mocap data.
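For orientation, the learning problem sketched in the summary can be written in one common form as below. This is an illustrative formulation only: the abstract does not give the paper's exact objective, and the symbols X, A, Y, λ, and μ are introduced here purely for the example.

```latex
\min_{A,\,Y}\; \|X - A Y\|_F^2 \;+\; \lambda \|Y\|_1 \;+\; \mu \,\|A^{\top} A - I\|_F^2
```

Here X stacks the mocap frames column-wise, A is the learned spatial transform, Y holds the transform coefficients, the \ell_1 term promotes sparsity, and the last term (or an explicit constraint A^{\top} A = I) enforces orthogonality. Alternating iteration then updates Y with A fixed and A with Y fixed, repeating until the objective stops decreasing.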
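As a rough sketch only, not the authors' implementation, the code below illustrates the kind of alternating scheme the summary describes: with the transform fixed, coefficients are sparsified by hard thresholding; with the coefficients fixed, the orthogonal transform is refit via an SVD-based (orthogonal Procrustes) update. A temporal DCT along the frame axis, as used in the clip-oriented method, is also included. The array shapes, threshold, iteration count, and function names are all assumptions made for this example.

```python
# Illustrative sketch only; not the paper's code. Assumes a mocap clip stored
# as a NumPy array X of shape (channels, frames), e.g. stacked joint values.
import numpy as np
from scipy.fft import dct

def learn_orthogonal_transform(X, keep, iters=30):
    """Learn an orthogonal spatial transform A (channels x channels) such that
    A.T @ X is approximately sparse, via simple alternating updates."""
    d = X.shape[0]
    A = np.eye(d)                               # start from the identity
    for _ in range(iters):
        # Coefficient step (A fixed): keep only the `keep` largest-magnitude
        # coefficients per frame, zeroing the rest to enforce sparsity.
        Y = A.T @ X
        idx = np.argsort(np.abs(Y), axis=0)[:-keep, :]
        np.put_along_axis(Y, idx, 0.0, axis=0)
        # Transform step (Y fixed): orthogonal matrix minimizing ||X - A Y||_F,
        # given by the orthogonal Procrustes solution from an SVD.
        U, _, Vt = np.linalg.svd(X @ Y.T)
        A = U @ Vt
    return A

def temporal_dct(clip):
    """Orthonormal DCT-II along the frame axis (axis 1), as in clip-oriented
    temporal decorrelation."""
    return dct(clip, type=2, norm="ortho", axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 93 channels (e.g. 31 joints x 3 values), 64 frames of smooth motion.
    X = np.cumsum(rng.standard_normal((93, 64)), axis=1)
    A = learn_orthogonal_transform(X, keep=20)
    coeffs = temporal_dct(A.T @ X)              # spatial then temporal decorrelation
    print("orthogonality error:", np.abs(A.T @ A - np.eye(93)).max())
    print("fraction of near-zero coefficients:", np.mean(np.abs(coeffs) < 1e-3))
```

The quantization and entropy-coding stages that a complete codec would apply after decorrelation are omitted; the sketch covers only the spatial and temporal decorrelation steps mentioned in the summary.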