Adaptive regret minimization in bounded-memory games
Organizations that collect and use large volumes of personal information often use security audits to protect data subjects from inappropriate uses of this information by authorized insiders. In the face of unknown incentives of employees, a reasonable audit strategy for the organization is one that minimizes its regret.
Main Authors: BLOCKI, Jeremiah; CHRISTIN, Nicolas; DATTA, Anupam; SINHA, Arunesh
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2013
Subjects: Perfect Information; Imperfect Information; Stochastic Game; Repeated Game; Stackelberg Equilibrium; Artificial Intelligence and Robotics
Online Access: https://ink.library.smu.edu.sg/sis_research/4492 https://ink.library.smu.edu.sg/context/sis_research/article/5495/viewcontent/BLOCKI_GAMESEC14_1_.pdf
Institution: Singapore Management University
id
sg-smu-ink.sis_research-5495
record_format
dspace
spelling
sg-smu-ink.sis_research-5495 2019-12-19T06:30:35Z Adaptive regret minimization in bounded-memory games BLOCKI, Jeremiah; CHRISTIN, Nicolas; DATTA, Anupam; SINHA, Arunesh 2013-11-12T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/4492 info:doi/10.1007/978-3-319-02786-9_5 https://ink.library.smu.edu.sg/context/sis_research/article/5495/viewcontent/BLOCKI_GAMESEC14_1_.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Perfect Information Imperfect Information Stochastic Game Repeated Game Stackelberg Equilibrium Artificial Intelligence and Robotics
institution
Singapore Management University
building
SMU Libraries
continent
Asia
country
Singapore
content_provider
SMU Libraries
collection
InK@SMU
language
English
topic
Perfect Information; Imperfect Information; Stochastic Game; Repeated Game; Stackelberg Equilibrium; Artificial Intelligence and Robotics
description
Organizations that collect and use large volumes of personal information often use security audits to protect data subjects from inappropriate uses of this information by authorized insiders. In the face of unknown incentives of employees, a reasonable audit strategy for the organization is one that minimizes its regret. While regret minimization has been extensively studied in repeated games, the standard notion of regret for repeated games cannot capture the complexity of the interaction between the organization (defender) and an adversary, which arises from the dependence of rewards and actions on history. To account for this generality, we introduce a richer class of games called bounded-memory games, which can provide a more accurate model of the audit process. We introduce the notion of k-adaptive regret, which compares the reward obtained by playing actions prescribed by the algorithm against a hypothetical k-adaptive adversary with the reward obtained by the best expert in hindsight against the same adversary. Roughly, a hypothetical k-adaptive adversary adapts her strategy to the defender's actions exactly as the real adversary would within each window of k rounds. A k-adaptive adversary is a natural model for temporary adversaries (e.g., company employees) who stay for a certain number of audit cycles and are then replaced by a different person. Our definition is parameterized by a set of experts, which can include both fixed and adaptive defender strategies. We investigate the inherent complexity of, and design algorithms for, adaptive regret minimization in bounded-memory games of perfect and imperfect information. We prove a hardness result showing that, with imperfect information, any k-adaptive regret minimizing algorithm (with fixed strategies as experts) must be inefficient unless NP = RP, even when playing against an oblivious adversary. In contrast, for bounded-memory games of perfect and imperfect information, we present approximate 0-adaptive regret minimization algorithms against an oblivious adversary running in time n^{O(1)}.
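The abstract's informal definition of k-adaptive regret can be written out as follows. This is a reconstruction in notation of my own choosing, not the paper's formal definition: E is the expert set, Rew_T the cumulative reward over T rounds, \mathcal{A} the defender's algorithm, and \sigma_k(x) the hypothetical k-adaptive adversary that, within each window of k rounds, responds to the defender strategy x exactly as the real adversary would:

\mathrm{Regret}_k(\mathcal{A}, T) = \max_{e \in E} \mathrm{Rew}_T\big(e, \sigma_k(e)\big) - \mathrm{Rew}_T\big(\mathcal{A}, \sigma_k(\mathcal{A})\big)

Both terms are evaluated against the same adversary model: each expert is scored against the adversary's hypothetical adaptation to that expert itself, which is what separates k-adaptive regret from standard external regret in repeated games.

For contrast, the repeated-game baseline the abstract refers to is external-regret minimization over a fixed expert set, for which multiplicative weights is the classical algorithm. The sketch below is that textbook algorithm, not the paper's k-adaptive construction; the reward function is a placeholder for the game's payoff and is assumed to take values in [0, 1]:

    import random

    def multiplicative_weights(experts, reward, rounds, eta=0.1):
        # Hedge-style external-regret minimization over a fixed expert set.
        # reward(expert, t) is a stand-in for the game's per-round payoff in [0, 1].
        weights = [1.0] * len(experts)
        total = 0.0
        for t in range(rounds):
            s = sum(weights)
            probs = [w / s for w in weights]
            # Sample an expert in proportion to its current weight.
            i = random.choices(range(len(experts)), weights=probs)[0]
            total += reward(experts[i], t)
            # Experts that would have earned more this round gain weight.
            weights = [w * (1.0 + eta) ** reward(e, t)
                       for w, e in zip(weights, experts)]
        return total

For example, multiplicative_weights(["audit", "skip"], lambda e, t: random.random(), 100) runs the sketch on two dummy strategies with a random payoff stream. Its guarantee holds only against the best fixed expert under an oblivious reward sequence; the paper's observation is precisely that such guarantees do not capture history-dependent, bounded-memory interactions.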
format
text
author
BLOCKI, Jeremiah; CHRISTIN, Nicolas; DATTA, Anupam; SINHA, Arunesh
author_sort
BLOCKI, Jeremiah
title
Adaptive regret minimization in bounded-memory games
publisher
Institutional Knowledge at Singapore Management University
publishDate
2013
url
https://ink.library.smu.edu.sg/sis_research/4492 https://ink.library.smu.edu.sg/context/sis_research/article/5495/viewcontent/BLOCKI_GAMESEC14_1_.pdf