Towards unified multimodal editing with enhanced knowledge collaboration

The swift advancement in Multimodal LLMs (MLLMs) also presents significant challenges for effective knowledge editing. Current methods, including intrinsic knowledge editing and external knowledge resorting, each possess strengths and weaknesses, struggling to balance the desired properties of reliability, generality, and locality when applied to MLLMs. In this paper, we propose UniKE, a novel multimodal editing method that establishes a unified perspective and paradigm for intrinsic knowledge editing and external knowledge resorting. Both types of knowledge are conceptualized as vectorized key-value memories, with the corresponding editing processes resembling the assimilation and accommodation phases of human cognition, conducted at the same semantic levels. Within such a unified framework, we further promote knowledge collaboration by disentangling the knowledge representations into the semantic and truthfulness spaces.
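The abstract's framing of knowledge as vectorized key-value memories, edited either in place (intrinsic editing) or by consulting appended external entries (external knowledge resorting), can be illustrated with a minimal toy sketch. All names and both edit routines below are illustrative assumptions, not the paper's actual implementation:

```python
# Toy key-value memory: knowledge stored as (key vector, value vector) pairs.
# A query retrieves the value whose key scores highest under a dot product.
# NOTE: class/method names and both edit routines are illustrative assumptions.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class KVMemory:
    def __init__(self):
        self.keys = []    # key vectors
        self.values = []  # value vectors

    def lookup(self, query):
        # Return the value of the best-matching key.
        scores = [dot(query, k) for k in self.keys]
        return self.values[scores.index(max(scores))]

    def edit_intrinsic(self, query, new_value):
        # "Intrinsic editing": overwrite the matched entry's value in place.
        scores = [dot(query, k) for k in self.keys]
        self.values[scores.index(max(scores))] = new_value

    def edit_external(self, key, value):
        # "External resorting": append new knowledge, leaving old entries intact.
        self.keys.append(key)
        self.values.append(value)

mem = KVMemory()
mem.edit_external([1.0, 0.0], [0.0, 1.0])   # store fact A
mem.edit_external([0.0, 1.0], [1.0, 0.0])   # store fact B
mem.edit_intrinsic([1.0, 0.0], [0.5, 0.5])  # correct fact A in place
print(mem.lookup([1.0, 0.0]))  # → [0.5, 0.5]
```

The sketch shows why the two modes trade off differently: in-place overwrites change existing parameters (risking locality), while appended entries grow the memory but leave prior knowledge untouched.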


Bibliographic Details
Main Authors: PAN, Kaihang, FAN, Zhaoyu, LI, Juncheng, YU, Qifan, FEI, Hao, TANG, Siliang, HONG, Richang, ZHANG, Hanwang, SUN, Qianru
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9401
https://ink.library.smu.edu.sg/context/sis_research/article/10401/viewcontent/2409.19872v2__2_.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10401
record_format dspace
spelling sg-smu-ink.sis_research-104012024-10-25T08:54:48Z Towards unified multimodal editing with enhanced knowledge collaboration PAN, Kaihang FAN, Zhaoyu LI, Juncheng YU, Qifan FEI, Hao TANG, Siliang HONG, Richang ZHANG, Hanwang Qianru SUN, The swift advancement in Multimodal LLMs (MLLMs) also presents significant challenges for effective knowledge editing. Current methods, including intrinsic knowledge editing and external knowledge resorting, each possess strengths and weaknesses, struggling to balance the desired properties of reliability, generality, and locality when applied to MLLMs. In this paper, we propose UniKE, a novel multimodal editing method that establishes a unified perspective and paradigm for intrinsic knowledge editing and external knowledge resorting. Both types of knowledge are conceptualized as vectorized key-value memories, with the corresponding editing processes resembling the assimilation and accommodation phases of human cognition, conducted at the same semantic levels. Within such a unified framework, we further promote knowledge collaboration by disentangling the knowledge representations into the semantic and truthfulness spaces. 2024-12-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9401 https://ink.library.smu.edu.sg/context/sis_research/article/10401/viewcontent/2409.19872v2__2_.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Multimodal LLMs Knowledge editing Intrinsic knowledge editing External knowledge resorting Databases and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Multimodal LLMs
Knowledge editing
Intrinsic knowledge editing
External knowledge resorting
Databases and Information Systems
spellingShingle Multimodal LLMs
Knowledge editing
Intrinsic knowledge editing
External knowledge resorting
Databases and Information Systems
PAN, Kaihang
FAN, Zhaoyu
LI, Juncheng
YU, Qifan
FEI, Hao
TANG, Siliang
HONG, Richang
ZHANG, Hanwang
SUN, Qianru
Towards unified multimodal editing with enhanced knowledge collaboration
description The swift advancement in Multimodal LLMs (MLLMs) also presents significant challenges for effective knowledge editing. Current methods, including intrinsic knowledge editing and external knowledge resorting, each possess strengths and weaknesses, struggling to balance the desired properties of reliability, generality, and locality when applied to MLLMs. In this paper, we propose UniKE, a novel multimodal editing method that establishes a unified perspective and paradigm for intrinsic knowledge editing and external knowledge resorting. Both types of knowledge are conceptualized as vectorized key-value memories, with the corresponding editing processes resembling the assimilation and accommodation phases of human cognition, conducted at the same semantic levels. Within such a unified framework, we further promote knowledge collaboration by disentangling the knowledge representations into the semantic and truthfulness spaces.
format text
author PAN, Kaihang
FAN, Zhaoyu
LI, Juncheng
YU, Qifan
FEI, Hao
TANG, Siliang
HONG, Richang
ZHANG, Hanwang
SUN, Qianru
author_facet PAN, Kaihang
FAN, Zhaoyu
LI, Juncheng
YU, Qifan
FEI, Hao
TANG, Siliang
HONG, Richang
ZHANG, Hanwang
SUN, Qianru
author_sort PAN, Kaihang
title Towards unified multimodal editing with enhanced knowledge collaboration
title_short Towards unified multimodal editing with enhanced knowledge collaboration
title_full Towards unified multimodal editing with enhanced knowledge collaboration
title_fullStr Towards unified multimodal editing with enhanced knowledge collaboration
title_full_unstemmed Towards unified multimodal editing with enhanced knowledge collaboration
title_sort towards unified multimodal editing with enhanced knowledge collaboration
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9401
https://ink.library.smu.edu.sg/context/sis_research/article/10401/viewcontent/2409.19872v2__2_.pdf
_version_ 1814777840658284544