Learning network-based multi-modal mobile user interface embeddings
Rich multi-modal information - text, code, images, categorical and numerical data - co-exists in the user interface (UI) design of mobile applications. UI designs are composed of UI entities supporting different functions, which together enable the application. To support effective search and recommendation applications over mobile UIs, we need to learn UI representations that integrate this latent semantics. In this paper, we propose a novel unsupervised model: the Multi-modal Attention-based Attributed Network Embedding (MAAN) model. MAAN is designed to capture both multi-modal and structural network information. Based on the encoder-decoder framework, MAAN aims to learn UI representations that allow UI design reconstruction. The generated embeddings can be applied to a variety of tasks: predicting UI elements associated with UI screens, inferring missing UI screen and element attributes, predicting UI user ratings, and retrieving UIs. Extensive experiments, including user evaluations, conducted on two datasets from RICO, a rich real-world mobile UI repository, demonstrate that MAAN outperforms other state-of-the-art models.
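To make the general idea concrete, below is a minimal sketch (not the authors' implementation) of an attention-based attributed network embedding of this kind: per-modality encoders project each attribute type into a shared space, attention weights fuse them into one node embedding, and decoders reconstruct both the attributes and the UI graph structure. All names here (`UIEmbedder`, `MODALITY_DIMS`, the two-modality setup, the unweighted loss sum) are illustrative assumptions.

```python
# Sketch of an attention-based multi-modal attributed network embedding,
# in the spirit of MAAN but NOT the paper's implementation.
import torch
import torch.nn as nn

EMB_DIM = 64
MODALITY_DIMS = {"text": 300, "image": 512}  # assumed per-modality feature sizes


class UIEmbedder(nn.Module):
    """Encode multi-modal UI-entity attributes into one embedding, then
    decode it to reconstruct the attributes and the UI graph structure."""

    def __init__(self):
        super().__init__()
        # One projection per modality into a shared embedding space.
        self.proj = nn.ModuleDict(
            {m: nn.Linear(d, EMB_DIM) for m, d in MODALITY_DIMS.items()}
        )
        # Attention scores decide how much each modality contributes.
        self.attn = nn.Linear(EMB_DIM, 1)
        # Decoders reconstruct each modality from the fused embedding.
        self.dec = nn.ModuleDict(
            {m: nn.Linear(EMB_DIM, d) for m, d in MODALITY_DIMS.items()}
        )

    def forward(self, feats):
        # feats[m]: (num_nodes, MODALITY_DIMS[m])
        h = torch.stack([torch.tanh(self.proj[m](feats[m])) for m in feats], dim=1)
        w = torch.softmax(self.attn(h), dim=1)      # (N, M, 1) modality attention
        z = (w * h).sum(dim=1)                      # fused node embeddings (N, D)
        recon = {m: self.dec[m](z) for m in feats}  # attribute reconstruction
        adj_logits = z @ z.t()                      # link (structure) reconstruction
        return z, recon, adj_logits


def loss_fn(feats, recon, adj_logits, adj):
    # Attribute reconstruction loss plus adjacency reconstruction loss.
    attr = sum(nn.functional.mse_loss(recon[m], feats[m]) for m in feats)
    link = nn.functional.binary_cross_entropy_with_logits(adj_logits, adj)
    return attr + link


# Toy usage: 10 UI nodes with random features and a random 0/1 adjacency matrix.
feats = {m: torch.randn(10, d) for m, d in MODALITY_DIMS.items()}
adj = torch.randint(0, 2, (10, 10)).float()
model = UIEmbedder()
z, recon, adj_logits = model(feats)
print(loss_fn(feats, recon, adj_logits, adj))
```

Training such a model end to end on attribute and link reconstruction is what makes the learned embeddings unsupervised and reusable for downstream tasks such as retrieval or rating prediction.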
Main Authors: | ANG, Gary; LIM, Ee-Peng |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2021 |
Subjects: | Network embedding; mobile application; user interface; unsupervised; retrieval; multi-modal; Databases and Information Systems; OS and Networks |
Online Access: | https://ink.library.smu.edu.sg/sis_research/7049 https://ink.library.smu.edu.sg/context/sis_research/article/8052/viewcontent/3397481.3450693.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-8052 |
---|---|
record_format | dspace |
record_updated | 2022-04-07T03:22:20Z |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU (Research Collection School Of Computing and Information Systems) |
language | English |
topic | Network embedding; mobile application; user interface; unsupervised; retrieval; multi-modal; Databases and Information Systems; OS and Networks |
author | ANG, Gary; LIM, Ee-Peng |
title | Learning network-based multi-modal mobile user interface embeddings |
format | text (application/pdf) |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2021-04-01T07:00:00Z |
doi | 10.1145/3397481.3450693 |
license | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
url | https://ink.library.smu.edu.sg/sis_research/7049 https://ink.library.smu.edu.sg/context/sis_research/article/8052/viewcontent/3397481.3450693.pdf |
_version_ | 1770576194938339328 |