Analysis, detection, and mitigation of attacks in cyber-physical systems

Bibliographic Details
Main Author: Liu, Hanxiao
Other Authors: Xie, Lihua
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2021
Online Access:https://hdl.handle.net/10356/153392
Institution: Nanyang Technological University
Description
Summary: Cyber-Physical Systems (CPS) offer close integration among computational elements, communication networks, and physical processes. Such systems play an increasingly important role in a wide variety of fields, such as manufacturing, health care, environmental monitoring, transportation, and defence. Due to the wide applications and critical functions of CPS, increasing importance has been attached to their security. In this thesis, we focus on the security of CPS by investigating their vulnerability to cyber-attacks, providing detection mechanisms, and developing feasible countermeasures.

The first contribution of this thesis is to analyze the performance of remote state estimation under linear attacks. A linear time-invariant system equipped with a smart sensor is studied. The adversary aims to maximize the state estimation error covariance while staying stealthy. We characterize the maximal performance degradation that an adversary can achieve with any linear first-order false-data injection attack, under strict stealthiness for vector systems and $\epsilon$-stealthiness for scalar systems. We also provide an explicit attack strategy that achieves this bound and compare it with strategies previously proposed in the literature.

The second problem of this thesis concerns the detection of replay attacks. We aim to design physical watermark signals and a corresponding detector to protect a control system against replay attacks. For the scenario where the system parameters are available to the operator, a physical watermarking scheme to detect the replay attack is introduced. The optimal watermark signal design problem is formulated as an optimization problem, and the optimal watermark signal and detector are derived. Subsequently, for systems with unknown parameters, we provide an on-line learning mechanism that asymptotically derives the optimal watermark signal and the corresponding detector.
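To give a concrete feel for the setting of the first contribution, the following toy Python sketch simulates remote state estimation over a scalar system when an attacker flips the sign of the transmitted innovation, a simple linear attack known from the earlier literature rather than the optimal first-order strategy derived in the thesis. All numerical values (a, c, q, r, the horizon T) are hypothetical; the point is only that the remote mean-squared error grows sharply while the empirical innovation variance, which a stealthiness test would monitor, is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar system x_{k+1} = a*x_k + w_k, y_k = c*x_k + v_k, with a smart
# sensor that runs a local Kalman filter and transmits its innovation to a
# remote estimator. All parameter values here are hypothetical.
a, c, q, r = 0.95, 1.0, 0.2, 0.2
T = 20000

def run(attack):
    x, x_hat_s, x_hat_r, P = 0.0, 0.0, 0.0, 1.0
    sq_err = innov_var = 0.0
    for _ in range(T):
        x = a * x + np.sqrt(q) * rng.standard_normal()
        y = c * x + np.sqrt(r) * rng.standard_normal()
        # local Kalman filter on the smart sensor (clean innovation)
        P_pred = a * a * P + q
        K = P_pred * c / (c * c * P_pred + r)
        nu = y - c * a * x_hat_s
        x_hat_s = a * x_hat_s + K * nu
        P = (1 - K * c) * P_pred
        # the attacker may flip the innovation's sign in transit
        nu_recv = -nu if attack else nu
        # remote estimator rebuilds its estimate from the received innovation
        x_hat_r = a * x_hat_r + K * nu_recv
        sq_err += (x - x_hat_r) ** 2
        innov_var += nu_recv ** 2
    return sq_err / T, innov_var / T

for attack in (False, True):
    mse, var = run(attack)
    print(f"attack={attack}:  remote MSE = {mse:.3f},  innovation variance = {var:.3f}")
```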
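Similarly, for the second problem, the sketch below illustrates the general physical-watermarking idea on a toy scalar plant: the operator superimposes an i.i.d. Gaussian watermark on the control input and monitors a windowed chi-square statistic of the Kalman-filter innovations, which stays small in normal operation but inflates once recorded measurements are replayed. The plant parameters, feedback gain, watermark variance, and window length are all illustrative choices, not the optimal designs derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar plant x_{k+1} = a*x_k + b*u_k + w_k, y_k = c*x_k + v_k.
# The operator adds a Gaussian watermark to the control input and runs a
# chi-square test on the Kalman-filter innovations. All values are illustrative.
a, b, c = 0.9, 1.0, 1.0
q, r = 0.05, 0.05          # process / measurement noise variances
K_fb = 0.5                 # stabilizing feedback gain (hypothetical)
sigma_wm = 1.0             # watermark standard deviation (hypothetical)
T, window = 400, 20        # horizon and detector window length

x, x_hat, P = 0.0, 0.0, 1.0
recorded = []              # measurements the attacker records and later replays
innov = np.zeros(T)
stats = {False: [], True: []}   # chi-square statistics without / with replay

for k in range(T):
    watermark = sigma_wm * rng.standard_normal()
    u = -K_fb * x_hat + watermark            # nominal control + watermark

    x = a * x + b * u + np.sqrt(q) * rng.standard_normal()
    y = c * x + np.sqrt(r) * rng.standard_normal()

    replaying = k >= T // 2
    if not replaying:
        recorded.append(y)                   # attacker records the first half
        y_recv = y
    else:
        y_recv = recorded[k - T // 2]        # replayed measurement

    # Kalman filter at the operator (knows u, including the watermark)
    x_pred = a * x_hat + b * u
    P_pred = a * a * P + q
    S = c * c * P_pred + r
    K = P_pred * c / S
    innov[k] = y_recv - c * x_pred
    x_hat = x_pred + K * innov[k]
    P = (1 - K * c) * P_pred

    # windowed chi-square statistic on the innovations
    if k >= window:
        g = np.sum(innov[k - window + 1:k + 1] ** 2) / S
        stats[replaying].append(g)

for replaying, gs in stats.items():
    print(f"replay={replaying}:  mean windowed statistic = {np.mean(gs):.1f}"
          f"  (roughly {window} is expected without an attack)")
```

Without the watermark, a replay would leave the innovations statistically unchanged; the injected randomness is what makes the replayed data inconsistent with the operator's predictions.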
The third problem under investigation is the detection of false-data injection attacks in which the attacker injects malicious data to flip the distribution of the manipulated sensor measurements. Based on the received signals, the detector decides whether to continue taking observations or to stop, and the goal is to detect the flip attack as quickly as possible while avoiding terminating the measurements when no attack is present. The detection problem is modeled as a partially observable Markov decision process (POMDP) by assuming an attack probability, with the dynamics of the hidden states of the POMDP characterized by a stochastic shortest path (SSP) problem. The optimal policy of the SSP depends solely on the transition costs and is independent of the assumed attack probability. Using a fixed-length window and a suitable feature function of the measurements, the POMDP is approximated by a Markov decision process (MDP), whose optimal solution is obtained by reinforcement learning.

The fourth contribution of this thesis is to develop a sensor scheduler for remote state estimation under integrity attacks. We seek a trade-off between the energy consumption of communications and the accuracy of state estimation when the acknowledgment (ACK) information, sent by the remote estimator to the local sensor, is compromised. The sensor scheduling problem is formulated as an infinite-horizon discounted optimal control problem with an infinite state space. We first analyze the underlying MDP and show that the optimal schedule without an ACK attack is of threshold type. Thus, we can simplify the problem by replacing the original state space with a finite one. For the simplified MDP, when the ACK is under attack, the problem is modeled as a POMDP. We analyze the induced MDP that uses a belief vector as its state for the POMDP. The properties of the exact optimal solution are studied via contractive models, and it is shown that the threshold solution for the POMDP cannot be readily obtained. A suboptimal solution is instead provided via a rollout approach based on reinforcement learning. We present two variants of rollout and provide corresponding performance bounds.
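As an illustration of the rollout idea used in the final contribution, the sketch below applies one-step lookahead with Monte Carlo evaluation of a threshold-type base policy to a heavily simplified sensor-scheduling MDP whose state is the number of steps since the last successful transmission. It ignores the ACK attack, the belief-state formulation, and the performance bounds of the thesis; the costs, success probability, and threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified sensor-scheduling MDP (illustrative, not the thesis's model):
# state tau = steps since the last successful transmission,
# action 1 = transmit (energy cost, succeeds with probability P_SUCC),
# action 0 = idle (estimation-error cost grows with tau).
E_COST, P_SUCC, GAMMA = 1.5, 0.8, 0.95
TAU_MAX = 30

def stage_cost(tau, action):
    # error cost grows with the holding time; transmitting adds an energy cost
    return 0.1 * tau ** 2 + (E_COST if action == 1 else 0.0)

def step(tau, action):
    if action == 1 and rng.random() < P_SUCC:
        return 0
    return min(tau + 1, TAU_MAX)

def base_policy(tau, threshold=5):
    # threshold-type heuristic: transmit once the holding time reaches the threshold
    return 1 if tau >= threshold else 0

def rollout_action(tau, horizon=40, n_sims=200):
    # one-step lookahead plus Monte Carlo evaluation of the base policy thereafter
    q_values = []
    for a in (0, 1):
        total = 0.0
        for _ in range(n_sims):
            t, cost, disc = tau, stage_cost(tau, a), GAMMA
            t = step(t, a)
            for _ in range(horizon):
                b = base_policy(t)
                cost += disc * stage_cost(t, b)
                t = step(t, b)
                disc *= GAMMA
            total += cost
        q_values.append(total / n_sims)
    return int(np.argmin(q_values)), q_values

for tau in (0, 3, 6, 10):
    a, q = rollout_action(tau)
    print(f"tau={tau:2d}  rollout action={a}  Q(idle)={q[0]:.2f}  Q(send)={q[1]:.2f}")
```

The same pattern (simulate each candidate action followed by the base policy, then keep the cheaper one) is what a rollout scheme applies to the richer belief-state problem studied in the thesis.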