Assured autonomy in safety-critical CPS
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/153289
Institution: Nanyang Technological University
Summary: The aim of this study is to investigate safety guarantees on the outputs of variational autoencoders (VAEs). Establishing a safety guarantee on a machine learning model means ensuring that the model probabilistically satisfies particular constraints. The model targeted in this study is the β-VAE, a variant of the VAE that produces a latent encoding disentangling the generative factors of the training data, which allows safety constraints to be applied in the latent space.
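For reference, the β-VAE is trained with the standard VAE objective but with the KL term scaled by a factor β > 1, which encourages disentangled latent factors. A minimal sketch of the loss, assuming a Gaussian posterior and PyTorch-style tensors (the function name, β value, and reconstruction loss choice are illustrative, not taken from the project):

    import torch
    import torch.nn.functional as F

    def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
        # Reconstruction term: how faithfully the decoder reproduces the input.
        recon = F.mse_loss(x_recon, x, reduction="sum")
        # Closed-form KL divergence between the Gaussian posterior N(mu, sigma^2)
        # and the standard normal prior N(0, I).
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        # beta > 1 strengthens the KL pressure, encouraging a disentangled code.
        return recon + beta * kl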
The method applied to solve this problem was adapted from solutions that provide safety guarantees for stochastic neural networks. The guarantee is defined by two parameters, ε and δ: with confidence at least 1 − δ, the output of the β-VAE satisfies the posed safety constraints with probability at least 1 − ε. Given a sample size and a target confidence, the minimum upper bound on the expected error can then be computed.
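The abstract does not state which concentration bound the study uses; purely as an illustration, a one-sided Hoeffding bound shows how sample size and confidence determine the smallest certifiable ε (all names below are hypothetical):

    import math

    def min_epsilon(n_samples, n_violations, delta):
        # With confidence 1 - delta, the true violation probability is at most
        # the empirical rate plus a slack term that shrinks as 1/sqrt(n).
        empirical = n_violations / n_samples
        slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n_samples))
        return empirical + slack

    # e.g. 10,000 samples, 50 constraint violations, 99% confidence (delta = 0.01):
    print(min_epsilon(10_000, 50, 0.01))  # ~0.0202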
The approach taken in this study implements the safety constraints using a density-based conformal predictor, which flags out-of-distribution (OOD) elements for the calculation of ε. Although guarantees can be placed on the error and confidence of the model using these constraints, the results show that a number of valid data samples are classified as OOD. Future extensions of this work may aim at constructing safety constraints with different conformity metrics.
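A density-based conformal predictor can be sketched as follows: fit a density model on calibration data, use log-density as the conformity score, and convert a test point's score into a conformal p-value, where low p-values indicate OOD inputs. This is a generic sketch, not the project's implementation; the KDE model, bandwidth, and all names are assumptions:

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def calibrate(z_cal, bandwidth=0.5):
        # Fit a density model on calibration latent codes; log-density acts
        # as the conformity score (higher = more typical of the data).
        kde = KernelDensity(bandwidth=bandwidth).fit(z_cal)
        return kde, kde.score_samples(z_cal)

    def conformal_p_value(kde, cal_scores, z_test):
        # Fraction of calibration points at least as atypical as z_test;
        # a small p-value marks z_test as out-of-distribution.
        score = kde.score_samples(z_test.reshape(1, -1))[0]
        return (1 + np.sum(cal_scores <= score)) / (len(cal_scores) + 1)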