Video coding with dynamic background

Motion estimation (ME) and motion compensation (MC) using variable block sizes, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs suits repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. However, the reference-frame index codes, the computational time spent in ME and MC, and the memory buffer needed for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the immediately preceding frame is selected as the reference frame in 68–92% of cases. In this article, we propose a new video coding method using a reference frame, the most common frame in scene (McFIS), generated by dynamic background modeling. McFIS is more effective in rate-distortion and computational-time performance than MRF techniques, and it has an inherent capability for scene change detection (SCD), enabling adaptive group-of-pictures (GOP) size determination. We therefore integrate SCD (for GOP determination) with reference frame generation. Experimental results show that the proposed coding scheme outperforms H.264 coding with five reference frames, as well as two relevant state-of-the-art algorithms, by 0.5–2.0 dB while requiring less computational time.
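
As a rough illustration of what "dynamic background modeling" for a McFIS-style reference frame can look like, the sketch below keeps a running per-pixel Gaussian model over a short frame history and lets only pixels consistent with that model update it, so transient foreground objects are largely filtered out. This is a minimal approximation of the general idea, not the authors' McFIS construction; the function name, parameters, and synthetic test data are assumptions made for the example.

```python
import numpy as np

def build_background_frame(frames, alpha=0.05, k=2.5):
    """Estimate a single background reference frame from grayscale frames.

    Each pixel keeps one running Gaussian (mean, variance). A new pixel value
    updates the model only when it lies within k standard deviations of the
    current mean, so short-lived foreground objects barely affect the result.
    """
    history = np.stack([np.asarray(f, dtype=np.float64) for f in frames])

    # Crude initialisation: the per-pixel median already suppresses objects
    # that cover a pixel for less than half of the frames.
    mean = np.median(history, axis=0)
    var = np.full_like(mean, 15.0 ** 2)

    for f in history:
        matched = (f - mean) ** 2 < (k * k) * var        # consistent with background?
        mean = np.where(matched, (1.0 - alpha) * mean + alpha * f, mean)
        var = np.where(matched, (1.0 - alpha) * var + alpha * (f - mean) ** 2, var)

    return np.clip(mean, 0.0, 255.0).astype(np.uint8)


if __name__ == "__main__":
    # Synthetic check: a noisy static background plus a moving bright block.
    rng = np.random.default_rng(0)
    background = rng.integers(0, 255, size=(64, 64)).astype(np.float64)
    sequence = []
    for t in range(30):
        frame = background + rng.normal(0.0, 2.0, size=background.shape)  # sensor noise
        frame[10:20, t:t + 10] = 255.0                                     # moving object
        sequence.append(frame)

    estimate = build_background_frame(sequence)
    print("mean abs error vs. true background:",
          float(np.mean(np.abs(estimate - background))))
```

In a scheme along the lines described in the abstract, such a frame would serve as an additional reference for ME and MC alongside the immediately preceding frame.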

Bibliographic Details
Main Authors: Paul, Manoranjan, Lin, Weisi, Lau, Chiew Tong, Lee, Bu-Sung
Other Authors: School of Computer Engineering
Format: Article
Language: English
Published: 2013
Subjects: DRNTU::Engineering::Computer science and engineering
Online Access:https://hdl.handle.net/10356/96303
http://hdl.handle.net/10220/10210
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-96303
Type: Journal Article (published version, application/pdf)
Collection: DR-NTU, NTU Library, Singapore
Citation: Paul, M., Lin, W., Lau, C. T., & Lee, B.-S. (2013). Video coding with dynamic background. EURASIP Journal on Advances in Signal Processing, 2013(1), 11.
ISSN: 1687-6180
DOI: 10.1186/1687-6180-2013-11
Rights: © 2013 Paul et al; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The paper was published in EURASIP Journal on Advances in Signal Processing and is made available as an electronic reprint with permission of the author(s); the official version is available at http://dx.doi.org/10.1186/1687-6180-2013-11. One print or electronic copy may be made for personal use only; systematic or multiple reproduction, distribution to multiple locations, duplication for a fee or for commercial purposes, or modification of the content is prohibited and is subject to penalties under law.