MolCA: Molecular graph-language modeling with cross-modal projector and uni-modal adapter

Language Models (LMs) have demonstrated impressive molecule understanding ability on various 1D text-related tasks. However, they inherently lack 2D graph perception — a critical ability of human professionals in comprehending molecules’ topological structures. To bridge this gap, we propose MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter. MolCA enables an LM (i.e., Galactica) to understand both text- and graph-based molecular contents via the cross-modal projector. Specifically, the cross-modal projector is implemented as a Q-Former to connect a graph encoder’s representation space and an LM’s text space. Further, MolCA employs a uni-modal adapter (i.e., LoRA) for the LM’s efficient adaptation to downstream tasks. Unlike previous studies that couple an LM with a graph encoder via cross-modal contrastive learning, MolCA retains the LM’s ability of open-ended text generation and augments it with 2D graph information. To showcase its effectiveness, we extensively benchmark MolCA on tasks of molecule captioning, IUPAC name prediction, and molecule-text retrieval, on which MolCA significantly outperforms the baselines. Our codes and checkpoints can be found at https://github.com/acharkq/MolCA.
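
The abstract names two concrete mechanisms: a Q-Former-style cross-modal projector that maps a graph encoder's node representations into a small set of soft tokens in the LM's text space, and a LoRA adapter for parameter-efficient adaptation of the otherwise frozen LM. Below is a minimal, illustrative PyTorch sketch of both ideas; every dimension, module choice, and hyperparameter here (query count, graph_dim, lm_dim, LoRA rank) is an assumption made for illustration, not the authors' implementation, which is available in the repository linked above.

```python
# Illustrative sketch of MolCA-style components (assumes PyTorch).
# All names, dimensions, and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class CrossModalProjector(nn.Module):
    """Q-Former-style projector: learnable query tokens cross-attend to
    graph-encoder node embeddings, then map into the LM's text space."""
    def __init__(self, num_queries=8, graph_dim=300, hidden_dim=768, lm_dim=2048):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, hidden_dim) * 0.02)
        self.graph_proj = nn.Linear(graph_dim, hidden_dim)   # align graph features
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=8,
                                                batch_first=True)
        self.to_lm = nn.Linear(hidden_dim, lm_dim)           # into LM embedding space

    def forward(self, node_feats):
        # node_feats: (batch, num_nodes, graph_dim) from a 2D graph encoder
        kv = self.graph_proj(node_feats)
        q = self.queries.unsqueeze(0).expand(node_feats.size(0), -1, -1)
        out, _ = self.cross_attn(q, kv, kv)                  # (batch, num_queries, hidden)
        return self.to_lm(out)   # soft tokens to prepend to the LM's text input

class LoRALinear(nn.Module):
    """Uni-modal adapter: frozen base linear layer plus a trainable
    low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                          # keep the LM weight frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: no-op at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T) @ self.B.T * self.scale
```

In a MolCA-style pipeline, the projector's output tokens would be prepended to the text token embeddings as a soft prompt before the LoRA-adapted LM decodes, e.g., a molecule caption.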

Bibliographic Details
Main Authors: LIU, Zhiyuan; LI, Sihang; LUO, Yanchen; FEI, Hao; CAO, Yixin; KAWAGUCHI, Kenji; WANG, Xiang; CHUA, Tat-Seng
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
DOI: 10.18653/v1/2023.emnlp-main.966
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Subjects: Databases and Information Systems; Programming Languages and Compilers
Online Access: https://ink.library.smu.edu.sg/sis_research/8394
https://ink.library.smu.edu.sg/context/sis_research/article/9397/viewcontent/2310.12798.pdf
Institution: Singapore Management University (SMU Libraries)