Region embedding with intra and inter-view contrastive learning

Unsupervised region representation learning aims to extract dense and effective features from unlabeled urban data. While some efforts have been made to solve this problem based on multiple views, existing methods are still insufficient in extracting representations within a view and/or incorporating representations from different views. Motivated by the success of contrastive learning for representation learning, we propose to leverage it for multi-view region representation learning and design a model called ReMVC (Region Embedding with Multi-View Contrastive Learning) by following two guidelines: i) comparing a region with others within each view for effective representation extraction, and ii) comparing a region with itself across different views for cross-view information sharing. We design the intra-view contrastive learning module, which helps to learn distinguishable region embeddings, and the inter-view contrastive learning module, which serves as a soft co-regularizer to constrain the embedding parameters and transfer knowledge across views. We exploit the learned region embeddings in two downstream tasks: land usage clustering and region popularity prediction. Extensive experiments demonstrate that our model achieves impressive improvements compared with seven state-of-the-art baseline methods, and the margins are over 30% in the land usage clustering task.
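The abstract describes two contrastive objectives: an intra-view loss that contrasts a region against other regions within one view, and an inter-view loss that treats the same region's embeddings from different views as a positive pair. Below is a minimal sketch of how such losses are commonly written in PyTorch; the InfoNCE formulation with in-batch negatives, the function names, the temperature tau, and the weight lambda_ are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def intra_view_loss(z: torch.Tensor, z_pos: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Compare each region with the other regions inside a single view.

    z, z_pos: (N, d) embeddings of the same N regions from one view (e.g. a
    region and an augmented/resampled version of it); the remaining N-1
    regions in the batch act as in-batch negatives.
    """
    z = F.normalize(z, dim=1)
    z_pos = F.normalize(z_pos, dim=1)
    logits = z @ z_pos.t() / tau                       # (N, N) cosine similarities
    labels = torch.arange(z.size(0), device=z.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)


def inter_view_loss(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Compare a region with itself across two views (soft co-regularizer).

    The same region's embeddings from views A and B form the positive pair;
    every other region's view-B embedding serves as a negative.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, labels)


# Hypothetical combined objective for two views (e.g. POI and mobility),
# where lambda_ weights the inter-view term:
#   loss = (intra_view_loss(z_poi, z_poi_aug)
#           + intra_view_loss(z_mob, z_mob_aug)
#           + lambda_ * inter_view_loss(z_poi, z_mob))
```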

Bibliographic Details
Main Authors: Zhang, Liang; Long, Cheng; Cong, Gao
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: IEEE Transactions on Knowledge and Data Engineering, 35(9), 9031-9036, 2023
DOI: 10.1109/TKDE.2022.3220874
ISSN: 1041-4347
Subjects: Engineering::Computer science and engineering; Contrastive Learning; Region Representation
Online Access: https://hdl.handle.net/10356/172863
Institution: Nanyang Technological University
Funding Agencies: Ministry of Education (MOE); National Research Foundation (NRF)
Funding Statement: This work was supported in part by the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, with cash and in-kind contributions from Singapore Telecommunications Limited (Singtel) through the Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU); in part by the Ministry of Education, Singapore, through its Academic Research Fund Tier 2 under Grant MOET2EP20221-0013; and in part by the National Research Foundation, Singapore, through its Industry Alignment Fund – Pre-positioning (IAF-PP) Funding Initiative.
Citation: Zhang, L., Long, C. & Cong, G. (2023). Region embedding with intra and inter-view contrastive learning. IEEE Transactions on Knowledge and Data Engineering, 35(9), 9031-9036. https://dx.doi.org/10.1109/TKDE.2022.3220874
Rights: © 2022 IEEE. All rights reserved.