Quantifying and improving the robustness of trust systems
Trust systems help users evaluate the trustworthiness of potential partners, supporting decision making in a variety of scenarios. Evaluating trust requires evidence, which can come from the ratings of other users (advisors). Ratings are especially helpful when a user's (the advisee's) direct experience is insufficient...
Saved in:
Main Author: Wang, Dongxia
Other Authors: Zhang Jie
Format: Theses and Dissertations
Language: English
Published: 2018
Subjects: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: http://hdl.handle.net/10356/73182
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-73182
record_format: dspace
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
description:
Trust systems help users evaluate the trustworthiness of potential partners, supporting decision making in a variety of scenarios. Evaluating trust requires evidence, which can come from the ratings of other users (advisors). Ratings are especially helpful when a user's (the advisee's) direct experience is insufficient. However, not all ratings are useful, and some may even be misleading. Both dishonesty (unfair rating attacks) and advisor subjectivity can cause ratings to deviate from the truth. Misleading ratings make trust evaluation inaccurate and reduce the quality of trust-based decision making.

Various approaches exist to defend against unfair rating attacks, aiming to make a trust system robust. Most are passive with respect to attackers: they prepare for known attack strategies, and offer no guarantee of robustness against unknown future strategies. Moreover, the robustness of these approaches is typically verified and compared under specifically constructed attacks, which makes the results unconvincing. First, we do not know whether worse attacks exist under which their performance would not hold. Second, it is unclear whether the specifically constructed attacks used for comparison favor some approaches over others. Last, different approaches may model the same types of attacks differently; a unified model of unfair rating attacks is lacking.

Thus, instead of passively defending after attacks are uncovered, we study unfair rating attacks actively, starting from the set of all possible attack strategies. We propose a probabilistic model of attacks for any setting where ratings have discrete options. The model is flexible in both the space and time dimensions: it allows any number of advisors and rating levels, and it allows attackers to change strategies over time. Given the uncertainty in predicting future attacks, we emphasize the strongest, or worst-case, attacks: from a security viewpoint, how well a system performs in the worst case should be a key consideration in its design.

We use information theory, specifically information leakage, to quantify the strength of an attack: the less information an attack leaks, the stronger it is (see the illustrative sketch below). We then analyze and compare the robustness of several trust systems based on the strength of the attacks, especially the strongest attacks, they can handle. Unlike existing approaches, our quantification is independent of specific systems, allowing a fair comparison of their robustness. We study attacks along two dimensions: 1) whether attackers are independent or collusive, and 2) whether their behavior patterns are static or dynamic. For each type of attack, we identify the strongest strategies. Compared to these, the commonly studied attacks are far from truly threatening and are thus unsuitable for stress-testing robustness.

Some approaches consider not only dishonest ratings but also subjective ones. They treat dishonesty and subjectivity orthogonally, distinguishing the effects of dishonest ratings from those of honest but subjective ratings. However, subjectivity may intertwine with unfair rating attacks and thereby influence the robustness of trust systems. We study their interplay: specifically, whether and how subjectivity, and different treatments of subjectivity, may affect robustness.
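As an illustration of the leakage idea above, here is a minimal sketch in a toy binary setting. It is not taken from the thesis: the channel matrices, the numbers, and the use of mutual information as the leakage measure are all illustrative assumptions (the helper name mutual_information is ours). The target's true type X passes through an advisor's rating channel, and attack strength is read off as the mutual information I(X; Y): the less the ratings reveal about X, the stronger the attack.

import math

def mutual_information(p_x, channel):
    # I(X;Y) in bits, for prior p_x and channel[x][y] = P(Y=y | X=x).
    n_y = len(channel[0])
    p_y = [sum(p_x[x] * channel[x][y] for x in range(len(p_x))) for y in range(n_y)]
    mi = 0.0
    for x, px in enumerate(p_x):
        for y in range(n_y):
            p_xy = px * channel[x][y]
            if p_xy > 0:
                mi += p_xy * math.log2(p_xy / (px * p_y[y]))
    return mi

prior = [0.5, 0.5]                        # target trustworthy / untrustworthy
honest = [[0.9, 0.1], [0.1, 0.9]]         # mostly accurate ratings
inverted = [[0.1, 0.9], [0.9, 0.1]]       # naive lying: ratings flipped
random_rating = [[0.5, 0.5], [0.5, 0.5]]  # ratings independent of the truth

print(mutual_information(prior, honest))         # ~0.53 bits
print(mutual_information(prior, inverted))       # ~0.53 bits: same leakage as honesty
print(mutual_information(prior, random_rating))  # 0.0 bits: strongest attack here

In this toy measure, naive inversion leaks exactly as much as honesty, since a system that detects the inversion can decode it, while ratings independent of the truth leak nothing; this mirrors the finding that commonly studied attacks are far from the strongest.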
We also formally analyze two types of methods used to mitigate the effects of subjectivity: feature-based rating and clustering (of advisors or ratings). We find that feature-based rating may deteriorate robustness, whereas clustering improves it; moreover, finer clustering enhances robustness further, with tracking individual advisors as the extreme case (illustrated in the sketch below).

In summary, our work provides a new perspective on studying unfair rating attacks and the robustness of trust systems. Probabilistic modeling makes the approach flexible and active in pursuing robustness, compared with most existing approaches. Information-theoretic measurement makes it general, enabling a fair comparison of both attacks and robustness across systems. The worst-case attacks we expose direct the attention of researchers and system designers toward defending against genuinely threatening attacks. Finally, several non-intuitive theoretical results provide new insights for researchers and practical suggestions for system designers.
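The clustering finding can be illustrated in the same toy setting. The sketch below is again an assumption of ours rather than the thesis's construction (the helpers mi and pair_joint and all numbers are invented for illustration): it pools one honest advisor and one inverting advisor into a single aggregate score and compares how much leakage survives.

import math

def mi(joint):
    # I(X;Y) in bits from joint[x][y] = P(X=x, Y=y).
    p_x = [sum(row) for row in joint]
    p_y = [sum(col) for col in zip(*joint)]
    return sum(p * math.log2(p / (p_x[x] * p_y[y]))
               for x, row in enumerate(joint)
               for y, p in enumerate(row) if p > 0)

p_honest, p_invert = 0.9, 0.9  # honest advisor's accuracy; attacker's flip rate
prior = [0.5, 0.5]

def pair_joint():
    # Joint distribution over X and the rating pair (Y1, Y2); ratings are
    # conditionally independent given X. Y1 is honest, Y2 inverts the truth.
    joint = []
    for x, px in enumerate(prior):
        row = []
        for y1 in (0, 1):
            for y2 in (0, 1):
                p1 = p_honest if y1 == x else 1 - p_honest
                p2 = p_invert if y2 != x else 1 - p_invert
                row.append(px * p1 * p2)
        joint.append(row)
    return joint

pair = pair_joint()
# Pooling: replace (Y1, Y2) by the aggregate score Y1 + Y2 in {0, 1, 2}.
pooled = [[row[0], row[1] + row[2], row[3]] for row in pair]

print(mi(pair))    # ~0.92 bits: individual tracking keeps the honest signal
print(mi(pooled))  # 0.0 bits: aggregation cancels it out entirely

The aggregate is a function of the individual ratings, so by the data-processing inequality it can never leak more than tracking advisors individually, and with heterogeneous advisors it can leak strictly less, consistent with finer clustering being more robust and per-advisor tracking being the extreme case.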
author2: Zhang Jie
format: Theses and Dissertations
author: Wang, Dongxia
title: Quantifying and improving the robustness of trust systems
publishDate: 2018
url: http://hdl.handle.net/10356/73182
_version_: 1759853637276794880
spelling: sg-ntu-dr.10356-73182 2023-03-04T00:52:55Z
Quantifying and improving the robustness of trust systems
Wang, Dongxia
Zhang Jie
Liu Yang
School of Computer Science and Engineering
Centre for Computational Intelligence
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Doctor of Philosophy (SCE)
2018-01-15T01:18:04Z 2018-01-15T01:18:04Z 2018
Thesis
Wang, D. (2018). Quantifying and improving the robustness of trust systems. Doctoral thesis, Nanyang Technological University, Singapore.
http://hdl.handle.net/10356/73182
10.32657/10356/73182
en
146 p.
application/pdf