Regret minimizing audits: A learning-theoretic basis for privacy protection
Audit mechanisms are essential for privacy protection in permissive access control regimes, such as in hospitals where denying legitimate access requests can adversely affect patient care. Recognizing this need, we develop the first principled learning-theoretic foundation for audits. Our first contribution is a game-theoretic model that captures the interaction between the defender (e.g., hospital auditors) and the adversary (e.g., hospital employees). The model takes pragmatic considerations into account, in particular, the periodic nature of audits, a budget that constrains the number of actions that the defender can inspect, and a loss function that captures the economic impact of detected and missed violations on the organization. We assume that the adversary is worst-case, as is standard in other areas of computer security. We also formulate a desirable property of the audit mechanism in this model based on the concept of regret in learning theory. Our second contribution is an efficient audit mechanism that provably minimizes regret for the defender. This mechanism learns from experience to guide the defender's auditing efforts. The regret bound is significantly better than prior results in the learning literature. The stronger bound is important from a practical standpoint because it implies that the recommendations from the mechanism will converge faster to the best fixed auditing strategy for the defender.
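As background for the regret-minimization framing in the abstract, the sketch below shows the classic Hedge (multiplicative-weights) learner, a standard regret-minimizing algorithm. This is a generic illustration only, not the paper's audit mechanism; the "actions" (e.g., which category of accesses to audit each period) and the per-round losses are hypothetical.

```python
import math

def hedge(num_actions, losses_per_round, eta=0.5):
    """Generic Hedge learner: maintain a weight per action, play the
    weighted-average (randomized) action, and exponentially downweight
    actions that incur loss. Returns (learner's total expected loss,
    cumulative loss of the best fixed action in hindsight); their
    difference is the regret, which Hedge keeps sublinear in the
    number of rounds."""
    weights = [1.0] * num_actions
    total_loss = 0.0
    cum_loss = [0.0] * num_actions
    for losses in losses_per_round:  # each losses[i] assumed in [0, 1]
        z = sum(weights)
        probs = [w / z for w in weights]
        # Expected loss of the randomized choice this round.
        total_loss += sum(p * l for p, l in zip(probs, losses))
        for i, l in enumerate(losses):
            cum_loss[i] += l
            weights[i] *= math.exp(-eta * l)  # penalize costly actions
    best_fixed = min(cum_loss)
    return total_loss, best_fixed  # regret = total_loss - best_fixed
```

For example, if one hypothetical action always incurs loss 0 and another always incurs loss 1, the learner's weight shifts to the good action within a few rounds, so its regret stays bounded regardless of how many rounds are played. The paper's contribution is a mechanism of this flavor with a significantly tighter regret bound under its audit-specific model.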
Main Authors: BLOCKI, Jeremiah; CHRISTIN, Nicolas; DATTA, Anupam; SINHA, Arunesh
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2011
Subjects: Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/4484 https://ink.library.smu.edu.sg/context/sis_research/article/5487/viewcontent/bcds_csf11_1_.pdf
Institution: Singapore Management University
DOI: 10.1109/CSF.2011.28
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Date Published: 2011-06-01