Assured autonomy in safety critical CPS
Main Author: | Prashant, Mohit |
Other Authors: | Arvind Easwaran |
School: | School of Computer Science and Engineering |
Format: | Final Year Project |
Degree: | Bachelor of Engineering (Computer Science) |
Language: | English |
Published: | Nanyang Technological University, 2021 |
Subjects: | Engineering::Computer science and engineering::Mathematics of computing::Probability and statistics; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
Online Access: | https://hdl.handle.net/10356/153289 |
Citation: | Prashant, M. (2021). Assured autonomy in safety critical CPS. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/153289 |
Institution: | Nanyang Technological University |
Description:
The aim of this study is to investigate safety guarantees on variational autoencoder (VAE) outputs. Establishing a safety guarantee on a machine learning model means ensuring that the model probabilistically satisfies particular constraints. The model targeted in this study is the β-VAE, a VAE variant that aims to produce a latent encoding which disentangles the generative factors of the training data, with the goal of applying safety constraints on the latent space.
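For context, the β-VAE referred to above is usually trained with the following objective (the standard formulation from the literature, not a detail taken from this project), where the KL term is weighted by β > 1 to encourage a disentangled latent encoding:

```latex
% Standard \beta-VAE objective: a reconstruction term plus a \beta-weighted
% KL divergence between the approximate posterior and the latent prior.
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_{\phi}(z \mid x)}\big[\log p_{\theta}(x \mid z)\big]
  \;-\; \beta \, D_{\mathrm{KL}}\big(q_{\phi}(z \mid x)\,\|\,p(z)\big)
```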
The method applied to solve this problem was adapted from solutions that provide safety guarantees for stochastic neural networks, defining the guarantee with two variables, ε and δ: 1 − δ is the confidence that, with probability at least 1 − ε, the output of the β-VAE satisfies the posed safety constraints. Given a sample size and a confidence level, the minimum upper bound on the expected error can be optimized.
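As a rough illustration of how such an (ε, δ) guarantee can be computed (a generic Hoeffding-style bound, not necessarily the exact optimization used in the project), the sketch below turns a sample size, an observed violation count, and a confidence level into an upper bound ε on the true violation probability:

```python
import math


def error_upper_bound(num_samples: int, num_violations: int, delta: float) -> float:
    """Hoeffding-style upper bound on the true violation probability.

    With confidence 1 - delta, the true probability that an output violates
    the safety constraint is at most the empirical violation rate plus
    sqrt(ln(1/delta) / (2 * num_samples)).  This is a generic illustration
    of an (epsilon, delta) guarantee, not the project's exact method.
    """
    empirical_rate = num_violations / num_samples
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * num_samples))
    return min(1.0, empirical_rate + slack)


# Example: 10,000 samples, 50 observed violations, 99% confidence (delta = 0.01).
epsilon = error_upper_bound(10_000, 50, 0.01)
print(f"With confidence 0.99, violation probability <= {epsilon:.4f}")
```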
The approach taken in this study implements the safety constraints using a density-based conformal predictor, which flags out-of-distribution (OOD) elements for the purpose of calculating ε. Although guarantees can be placed on the model's error and confidence using these constraints, the results show that a number of valid data samples are classified as OOD. Future extensions to this work may aim at constructing safety constraints with different conformity metrics.
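Below is a minimal sketch of a density-based conformal predictor for OOD flagging, assuming a kernel density estimate as the nonconformity score and a held-out calibration set; the class name, bandwidth, and significance level are illustrative assumptions rather than details of the project's implementation:

```python
import numpy as np
from sklearn.neighbors import KernelDensity


class DensityConformalOOD:
    """Density-based conformal predictor that flags out-of-distribution inputs.

    Nonconformity score = negative log-density under a KDE fitted on training
    (latent) vectors.  A test point's conformal p-value is the fraction of
    calibration scores that are at least as nonconforming; points whose
    p-value falls below the significance level are flagged as OOD.
    """

    def __init__(self, significance: float = 0.05, bandwidth: float = 0.5):
        self.significance = significance
        self.kde = KernelDensity(bandwidth=bandwidth)

    def fit(self, train_z: np.ndarray, calib_z: np.ndarray) -> None:
        # train_z, calib_z: 2D arrays of latent vectors, shape (n_points, latent_dim).
        self.kde.fit(train_z)
        # Higher score = lower density = less conforming.
        self.calib_scores = -self.kde.score_samples(calib_z)

    def is_ood(self, z: np.ndarray) -> np.ndarray:
        scores = -self.kde.score_samples(z)
        n = len(self.calib_scores)
        # Conformal p-value with the usual +1 correction.
        p_values = (
            1.0 + (self.calib_scores[None, :] >= scores[:, None]).sum(axis=1)
        ) / (n + 1.0)
        return p_values < self.significance
```

The p-value construction is what makes the predictor conformal: under exchangeability, in-distribution points receive approximately uniform p-values, so at most a significance-level fraction of valid samples should be flagged, which is consistent with the abstract's observation that some valid samples are still classified as OOD.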