Generating expensive relationship features from cheap objects

Bibliographic Details
Main Authors: WANG, Xiaogang, SUN, Qianru, CHUA, Tat-Seng, ANG, Marcelo
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Online Access:https://ink.library.smu.edu.sg/sis_research/4446
https://ink.library.smu.edu.sg/context/sis_research/article/5449/viewcontent/BMVC2019_0657_paper_published.pdf
Institution: Singapore Management University
Description
Summary: We investigate the problem of object relationship classification in visual scenes. For a relationship object1-predicate-object2 that captures an object interaction, the representation is composed by combining the features of object1 and object2. As a result, relationship classification models are usually biased toward frequent objects and generalize poorly to rare or unseen ones. Inspired by data augmentation methods, we propose a novel Semantic Transform Generative Adversarial Network (ST-GAN) that synthesizes relationship features for rare objects, conditioned on features from random instances of those objects. Specifically, ST-GAN offers a semantic transform function from cheap object features to expensive relationship features. Here, “cheap” means any easy-to-collect object that possesses an original but undesired relationship attribute, e.g., a sitting person; “expensive” means a target relationship on this object, e.g., person-riding-horse. By generating massive triplet combinations from arbitrary object pairs with greater variance, ST-GAN reduces the data bias. Extensive experiments on two benchmarks, Visual Relationship Detection (VRD) and Visual Genome (VG), show that using our synthesized features for data augmentation consistently improves the relationship classification model in various settings such as zero-shot and low-shot.
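
To make the core idea concrete, below is a minimal sketch of the semantic transform described in the summary: a conditional generator that maps a pair of cheap object features (plus noise) to a synthetic relationship feature, with a discriminator conditioned on the same object pair. It is written in PyTorch; the class names, layer sizes, and dimensions (Generator, Discriminator, obj_dim, noise_dim, rel_dim) are illustrative assumptions, not the architecture from the paper.

```python
# Illustrative sketch of the ST-GAN idea: synthesize "expensive"
# relationship features from "cheap" object features. Hypothetical
# dimensions and layers; not the paper's actual architecture.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps two object features (+ noise) to a synthetic relationship feature."""
    def __init__(self, obj_dim=512, noise_dim=128, rel_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obj_dim + noise_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, rel_dim),
        )

    def forward(self, f_obj1, f_obj2, z):
        # Condition on both object features; z injects instance-level
        # variance, so many triplets can be drawn from one object pair.
        return self.net(torch.cat([f_obj1, f_obj2, z], dim=1))

class Discriminator(nn.Module):
    """Scores whether a relationship feature is real, given the object pair."""
    def __init__(self, obj_dim=512, rel_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obj_dim + rel_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1),  # real/fake logit
        )

    def forward(self, f_obj1, f_obj2, f_rel):
        return self.net(torch.cat([f_obj1, f_obj2, f_rel], dim=1))

# Example: augment a rare class like person-riding-horse by pairing
# features of arbitrary "person" and "horse" instances.
G = Generator()
f_person = torch.randn(32, 512)     # e.g., features of a sitting person
f_horse = torch.randn(32, 512)
z = torch.randn(32, 128)
fake_rel = G(f_person, f_horse, z)  # synthetic features, shape (32, 1024)
```

In this reading, the synthesized fake_rel batch would be mixed into the training set of the relationship classifier for rare predicates; the adversarial training loop itself (losses, optimizers) is omitted here.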