Cross-modal food retrieval: Learning a joint embedding of food images and recipes with semantic consistency and attention mechanism

Food retrieval is an important task for analyzing food-related information, where we are interested in retrieving relevant information about a queried food item, such as its ingredients and cooking instructions. In this paper, we investigate cross-modal retrieval between food images and cooking recipes. The goal is to learn an embedding of images and recipes in a common feature space, such that corresponding image-recipe pairs lie close to one another. Two major challenges in addressing this problem are 1) the large intra-variance and small inter-variance across cross-modal food data, and 2) the difficulty of obtaining discriminative recipe representations. To address these problems, we propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities by aligning their output semantic probabilities. In addition, we exploit a self-attention mechanism to improve the embedding of recipes. We evaluate the proposed method on the large-scale Recipe1M dataset and show that it outperforms several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
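The abstract names three components: a shared image-recipe embedding space used for retrieval, a semantic-consistency regularizer that aligns the class-probability outputs of the two modalities, and a self-attention module over recipe text. The record itself contains no code, so the PyTorch sketch below is only a minimal illustration of those ideas; it is not the authors' implementation, and every module name, dimension, loss form, and hyper-parameter here is an assumption.

```python
# Illustrative sketch only -- NOT the released SCAN code.
# All names (RecipeEncoder, ImageEncoder, loss functions) and values are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecipeEncoder(nn.Module):
    """Encodes recipe token embeddings (ingredients/instructions) with
    self-attention, then pools them into a single recipe vector."""

    def __init__(self, token_dim=300, embed_dim=1024, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(token_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(token_dim, embed_dim)

    def forward(self, tokens):                       # tokens: (B, T, token_dim)
        attended, _ = self.attn(tokens, tokens, tokens)   # self-attention over tokens
        pooled = attended.mean(dim=1)                 # simple mean pooling
        return F.normalize(self.proj(pooled), dim=-1)


class ImageEncoder(nn.Module):
    """Projects pre-extracted CNN image features into the shared space."""

    def __init__(self, feat_dim=2048, embed_dim=1024):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)

    def forward(self, feats):                        # feats: (B, feat_dim)
        return F.normalize(self.proj(feats), dim=-1)


def semantic_consistency_loss(img_logits, rec_logits):
    """Align the two modalities' semantic (class) probability outputs with a
    symmetric KL divergence -- a stand-in for the paper's regularizer."""
    log_p_img = F.log_softmax(img_logits, dim=-1)
    log_p_rec = F.log_softmax(rec_logits, dim=-1)
    kl_ir = F.kl_div(log_p_img, log_p_rec.exp(), reduction="batchmean")
    kl_ri = F.kl_div(log_p_rec, log_p_img.exp(), reduction="batchmean")
    return 0.5 * (kl_ir + kl_ri)


def retrieval_loss(img_emb, rec_emb, margin=0.3):
    """Bidirectional triplet loss with in-batch negatives: matched
    image-recipe pairs should score higher than mismatched ones."""
    sim = img_emb @ rec_emb.t()                      # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                    # matched pairs on the diagonal
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    loss_i2r = F.relu(margin + sim - pos).masked_fill(mask, 0).mean()
    loss_r2i = F.relu(margin + sim.t() - pos).masked_fill(mask, 0).mean()
    return loss_i2r + loss_r2i
```

Under these assumptions, a training step would combine the two terms, e.g. `loss = retrieval_loss(img_emb, rec_emb) + lam * semantic_consistency_loss(img_logits, rec_logits)`, where the logits come from hypothetical classifier heads (e.g., `nn.Linear(embed_dim, num_classes)`) applied to each modality's embedding and `lam` is a tunable weight; the paper's exact attention design and loss formulation may differ.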


Bibliographic Details
Main Authors: WANG, Hao, SAHOO, Doyen, LIU, Chenghao, SHU, Ke, ACHANANUPARP, Palakorn, LIM, Ee-peng, HOI, Steven C. H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Collection: Research Collection School Of Computing and Information Systems
DOI: 10.1109/TMM.2021.3083109
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: Correlation; Cross-Modal Retrieval; Data models; Deep Learning; Semantics; Sugar; Task analysis; Training; Vision-and-Language; Visualization; Artificial Intelligence and Robotics; Databases and Information Systems; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/6268
https://ink.library.smu.edu.sg/context/sis_research/article/7271/viewcontent/cross_modal_food_retrieval.pdf
Institution: Singapore Management University