Efficient inference offloading for mixture-of-experts large language models in the Internet of Medical Things
Despite recent significant advancements in large language models (LLMs) for medical services, the deployment difficulties of LLMs in e-healthcare hinder complex medical applications in the Internet of Medical Things (IoMT). People are increasingly concerned about e-healthcare risks and privacy prote...
Main Authors: Yuan, Xiaoming; Kong, Weixuan; Luo, Zhenyu; Xu, Minrui
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/179743
Institution: Nanyang Technological University
Similar Items

- A game-based incentive-driven offloading framework for dispersed computing
  by: Wu, Hongjia, et al.
  Published: (2023)
- Computation offloading and content caching and delivery in Vehicular Edge Network: a survey
  by: Dziyauddin, Rudzidatul Akmam, et al.
  Published: (2022)
- Distributed algorithm for computation offloading in mobile edge computing considering user mobility and task randomness
  by: Zheng, F. Yifeng, et al.
  Published: (2022)
- Dynamic neural architectures for improved inference
  by: Cai Shaofeng
  Published: (2021)
- Visible light based occupancy inference using ensemble learning
  by: Hao, Jie, et al.
  Published: (2018)