Learning user interface semantics from heterogeneous networks with multi-modal and positional attributes

User interfaces (UI) of desktop, web, and mobile applications involve a hierarchy of objects (e.g. applications, screens, view classes, and other types of design objects) with multimodal (e.g. textual, visual) and positional (e.g. spatial location, sequence order and hierarchy level) attributes. We can therefore represent a set of application UIs as a heterogeneous network with multimodal and positional attributes. Such a network not only represents how users understand the visual layout of UIs, but also influences how users would interact with applications through these UIs. To model the UI semantics well for different UI annotation, search, and evaluation tasks, this paper proposes the novel Heterogeneous Attention-based Multimodal Positional (HAMP) graph neural network model. HAMP combines graph neural networks with the scaled dot-product attention used in transformers to learn the embeddings of heterogeneous nodes and associated multimodal and positional attributes in a unified manner. HAMP is evaluated with classification and regression tasks conducted on three distinct real-world datasets. Our experiments demonstrate that HAMP significantly outperforms other state-of-the-art models on such tasks. We also report our ablation study results on HAMP.
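To make the mechanism described in the abstract more concrete, below is a minimal sketch (not the authors' code or the HAMP architecture itself) of how a graph layer can aggregate a UI node's neighbours with transformer-style scaled dot-product attention after fusing multimodal (textual, visual) and positional features. The class name, layer sizes, single attention head, and concatenation-based fusion are illustrative assumptions; PyTorch is assumed only because it is a common choice for such models.

# Illustrative sketch only: one attention-based neighbourhood aggregation step
# over fused multimodal + positional node features. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveNeighborAggregator(nn.Module):
    def __init__(self, text_dim: int, visual_dim: int, pos_dim: int, hidden_dim: int):
        super().__init__()
        # Fuse multimodal and positional attributes into one node embedding
        # (fusion by concatenation is an assumption made for brevity).
        self.fuse = nn.Linear(text_dim + visual_dim + pos_dim, hidden_dim)
        # Projections for scaled dot-product attention (single head for brevity).
        self.q = nn.Linear(hidden_dim, hidden_dim)
        self.k = nn.Linear(hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, hidden_dim)

    def node_embed(self, text, visual, pos):
        return F.relu(self.fuse(torch.cat([text, visual, pos], dim=-1)))

    def forward(self, target, neighbors):
        # target: (hidden_dim,); neighbors: (num_neighbors, hidden_dim)
        q = self.q(target).unsqueeze(0)                        # (1, d)
        k = self.k(neighbors)                                  # (n, d)
        v = self.v(neighbors)                                  # (n, d)
        scores = q @ k.transpose(0, 1) / k.shape[-1] ** 0.5    # scaled dot products, (1, n)
        weights = torch.softmax(scores, dim=-1)                # attention over neighbours
        return (weights @ v).squeeze(0)                        # attended neighbourhood message

# Toy usage: one hypothetical "screen" node attending over three "view" nodes.
if __name__ == "__main__":
    agg = AttentiveNeighborAggregator(text_dim=8, visual_dim=8, pos_dim=4, hidden_dim=16)
    screen = agg.node_embed(torch.randn(8), torch.randn(8), torch.randn(4))
    views = torch.stack([agg.node_embed(torch.randn(8), torch.randn(8), torch.randn(4))
                         for _ in range(3)])
    print(agg(screen, views).shape)  # torch.Size([16])

The sketch shows only the single aggregation step the abstract alludes to; handling multiple heterogeneous node and edge types in a unified model, as HAMP does, is not reproduced here.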


Bibliographic Details
Main Authors: ANG, Gary, LIM, Ee-peng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects: Graph neural networks; transformers; attention mechanism; heterogeneous networks; multimodal; mobile application user interface; supervised learning; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/6918
https://ink.library.smu.edu.sg/context/sis_research/article/7921/viewcontent/3490099.3511143.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-7921
record_format dspace
spelling sg-smu-ink.sis_research-7921 2022-04-07T02:12:04Z Learning user interface semantics from heterogeneous networks with multi-modal and positional attributes ANG, Gary LIM, Ee-peng User interfaces (UI) of desktop, web, and mobile applications involve a hierarchy of objects (e.g. applications, screens, view classes, and other types of design objects) with multimodal (e.g. textual, visual) and positional (e.g. spatial location, sequence order and hierarchy level) attributes. We can therefore represent a set of application UIs as a heterogeneous network with multimodal and positional attributes. Such a network not only represents how users understand the visual layout of UIs, but also influences how users would interact with applications through these UIs. To model the UI semantics well for different UI annotation, search, and evaluation tasks, this paper proposes the novel Heterogeneous Attention-based Multimodal Positional (HAMP) graph neural network model. HAMP combines graph neural networks with the scaled dot-product attention used in transformers to learn the embeddings of heterogeneous nodes and associated multimodal and positional attributes in a unified manner. HAMP is evaluated with classification and regression tasks conducted on three distinct real-world datasets. Our experiments demonstrate that HAMP significantly outperforms other state-of-the-art models on such tasks. We also report our ablation study results on HAMP. 2022-03-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/6918 info:doi/10.1145/3490099.3511143 https://ink.library.smu.edu.sg/context/sis_research/article/7921/viewcontent/3490099.3511143.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Graph neural networks transformers attention mechanism heterogeneous networks multimodal mobile application user interface supervised learning Databases and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Graph neural networks
transformers
attention mechanism
heterogeneous networks
multimodal
mobile application user interface
supervised learning
Databases and Information Systems
description User interfaces (UI) of desktop, web, and mobile applications involve a hierarchy of objects (e.g. applications, screens, view classes, and other types of design objects) with multimodal (e.g. textual, visual) and positional (e.g. spatial location, sequence order and hierarchy level) attributes. We can therefore represent a set of application UIs as a heterogeneous network with multimodal and positional attributes. Such a network not only represents how users understand the visual layout of UIs, but also influences how users would interact with applications through these UIs. To model the UI semantics well for different UI annotation, search, and evaluation tasks, this paper proposes the novel Heterogeneous Attention-based Multimodal Positional (HAMP) graph neural network model. HAMP combines graph neural networks with the scaled dot-product attention used in transformers to learn the embeddings of heterogeneous nodes and associated multimodal and positional attributes in a unified manner. HAMP is evaluated with classification and regression tasks conducted on three distinct real-world datasets. Our experiments demonstrate that HAMP significantly outperforms other state-of-the-art models on such tasks. We also report our ablation study results on HAMP.
format text
author ANG, Gary
LIM, Ee-peng
title Learning user interface semantics from heterogeneous networks with multi-modal and positional attributes
publisher Institutional Knowledge at Singapore Management University
publishDate 2022
url https://ink.library.smu.edu.sg/sis_research/6918
https://ink.library.smu.edu.sg/context/sis_research/article/7921/viewcontent/3490099.3511143.pdf
_version_ 1770576119138877440