CrowdGAN: Identity-free interactive crowd video generation and beyond

In this paper, we introduce a novel yet challenging research problem, interactive crowd video generation, which aims to produce diverse, continuous crowd videos and to relieve the shortage of annotated real-world datasets in crowd analysis. Our goal is to recursively generate realistic future crowd video frames from a few context frames, under user-specified guidance, namely the individual positions of the crowd. To this end, we propose a deep network architecture specifically designed for crowd video generation, composed of two complementary modules that address crowd dynamics synthesis and appearance preservation, respectively. In particular, a spatio-temporal transfer module infers crowd position and structure from the guidance and temporal information, while a point-aware flow prediction module preserves appearance consistency through flow-based warping. The outputs of the two modules are then integrated by a self-selective fusion unit to produce an identity-preserved, continuous video. Unlike previous works, we generate continuous crowd behaviors without identity annotations or matching. Extensive experiments show that our method is effective for crowd video generation. More importantly, we demonstrate that the generated videos exhibit diverse crowd behaviors and can be used to augment different crowd analysis tasks, i.e., crowd counting, anomaly detection, and crowd video prediction. Code is available at https://github.com/Icep2020/CrowdGAN.
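The abstract describes an appearance branch that warps the previous frame with a predicted flow field and a structure branch whose output is blended in by a self-selective fusion unit. The Python/PyTorch snippet below is a minimal sketch of those two ideas only, not the authors' implementation (see their repository for that); the function names, tensor shapes, and the sigmoid-mask form of the fusion are all assumptions.

import torch
import torch.nn.functional as F

def warp(frame, flow):
    # Backward-warp `frame` (N,C,H,W) with a dense flow field (N,2,H,W),
    # i.e. sample each output pixel from (x + flow_x, y + flow_y).
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)  # (2,H,W), channel 0 = x
    coords = base.unsqueeze(0) + flow                             # (N,2,H,W)
    # grid_sample expects coordinates normalized to [-1, 1].
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                          # (N,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)

def fuse(synthesized, warped, mask_logits):
    # Self-selective fusion (assumed form): a per-pixel sigmoid mask chooses
    # between the structure branch and the warped appearance branch.
    m = torch.sigmoid(mask_logits)                                # (N,1,H,W)
    return m * synthesized + (1.0 - m) * warped

# Toy usage: random tensors stand in for the network outputs.
prev_frame = torch.rand(1, 3, 64, 64)    # last generated frame
flow = torch.zeros(1, 2, 64, 64)         # zero flow = identity warp
synthesized = torch.rand(1, 3, 64, 64)   # structure-branch output
mask_logits = torch.zeros(1, 1, 64, 64)  # fusion-mask logits
next_frame = fuse(synthesized, warp(prev_frame, flow), mask_logits)
assert next_frame.shape == (1, 3, 64, 64)

In the paper's pipeline the flow, the synthesized frame, and the mask would come from the point-aware flow prediction module, the spatio-temporal transfer module, and the fusion unit respectively; here placeholder tensors are used so the sketch runs on its own, and feeding next_frame back in as the new previous frame mirrors the recursive generation the abstract describes.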

Bibliographic Details
Main Authors: CHAI, Liangyu, LIU, Yongtuo, LIU, Wenxi, HAN, Guoqiang, HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Trajectory; Task analysis; Three-dimensional displays; Predictive models; Analytical models; Uncertainty; Solid modeling; Crowd video generation; data augmentation; crowd analysis; Information Security
Online Access:https://ink.library.smu.edu.sg/sis_research/7849
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8852
record_format dspace
spelling sg-smu-ink.sis_research-8852 2023-06-15T09:00:05Z CrowdGAN: Identity-free interactive crowd video generation and beyond. CHAI, Liangyu; LIU, Yongtuo; LIU, Wenxi; HAN, Guoqiang; HE, Shengfeng. 2022-06-01T07:00:00Z. info:doi/10.1109/TPAMI.2020.3043372. Research Collection School Of Computing and Information Systems. Institutional Knowledge at Singapore Management University.
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Trajectory
Task analysis
Three-dimensional displays
Predictive models
Analytical models
Uncertainty
Solid modeling
Crowd video generation
data augmentation
crowd analysis
Information Security
format text
author CHAI, Liangyu
LIU, Yongtuo
LIU, Wenxi
HAN, Guoqiang
HE, Shengfeng
author_sort CHAI, Liangyu
title CrowdGAN: Identity-free interactive crowd video generation and beyond
publisher Institutional Knowledge at Singapore Management University
publishDate 2022
url https://ink.library.smu.edu.sg/sis_research/7849
_version_ 1770576555902238720