Building trustworthy AI from small DNNs to large language models: a software engineering perspective
As Artificial Intelligence (AI) software becomes increasingly prevalent across industries, concerns about its trustworthiness and reliability have come to the forefront. Although the trustworthiness of traditional software is regulated by Software Engineering (SE) practices, these practices have not been well integrated into AI model development because of the significant differences between the two development processes. Motivated by this gap, this thesis systematically addresses trustworthiness by regulating the AI development process through the lens of SE practices. Specifically, it draws on the key phases in the regulation of traditional software (development, execution, and testing) and identifies the corresponding phases in AI model development: training, inference, and testing. Because these phases are crucial for ensuring the trustworthiness and reliability of AI models, the thesis aims to improve each of them. The primary approach mirrors traditional software practice: first debug each phase, then implement repairs. Moreover, since large language models (LLMs) are revolutionizing the software industry, the thesis explores debugging and repairing AI software across all three phases for both small Deep Neural Networks (DNNs) and LLMs.
Saved in:
Main Author: | Li, Tianlin |
---|---|
Other Authors: | Liu Yang |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2025 |
Subjects: | Engineering |
Online Access: | https://hdl.handle.net/10356/182234 |
Institution: | Nanyang Technological University |
id: | sg-ntu-dr.10356-182234 |
---|---|
record_format: | dspace |
Author: | Li, Tianlin |
Supervisor: | Liu Yang, College of Computing and Data Science (yangliu@ntu.edu.sg) |
Subject: | Engineering |
Degree: | Doctor of Philosophy |
Format: | Thesis-Doctor of Philosophy |
Date issued: | 2025 |
Date accessioned: | 2025-01-16T05:22:26Z |
Citation: | Li, T. (2025). Building trustworthy AI from small DNNs to large language models: a software engineering perspective. Doctoral thesis, Nanyang Technological University, Singapore. |
Online access: | https://hdl.handle.net/10356/182234 |
Language: | en |
License: | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). |
File format: | application/pdf |
Publisher: | Nanyang Technological University |
institution: | Nanyang Technological University |
building: | NTU Library |
continent: | Asia |
country: | Singapore |
content_provider: | NTU Library |
collection: | DR-NTU |
language: | English |
topic: | Engineering |