Revisiting public reputation calculation in a personalized trust model
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: 2019
Online Access: https://hdl.handle.net/10356/102617 http://hdl.handle.net/10220/47941 http://ceur-ws.org/Vol-2154/
Institution: Nanyang Technological University
Summary: In this paper, we present a strategy for agents to predict the trustworthiness of other agents based on reports from peers, when the public reputation reflected by majority opinion may be suspect. We ground our discussion in the context of Zhang’s personalized trust model, where agents combine estimates of both private and public reputation, tempered by a representation of the trustworthiness of the peers providing ratings. We propose a change in how public reputation is calculated: instead of valuing consistency with the majority, we value consistency with the opinion adopted by a cluster of peers, chosen by a likelihood probability. We show that our change reduces the number of negative transactions experienced by a new agent subscribing to a minority opinion, during a learning phase before private reputation dominates the calculations. In all, we offer a method for calibrating the benefit of discriminating more carefully among the peers being consulted for the trust calculations. We contrast briefly with other approaches advocating for a clustering of the set of peer advisors, and discuss as well related work for dealing with the challenge of misleading majority opinion when performing trust modeling. We also comment on the usefulness of our approach for practitioners designing intelligent systems to act as partners with human users.
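The abstract's core change can be sketched in code. The snippet below is only an illustrative toy, not the paper's actual formulas: it interprets "chosen by a likelihood probability" as sampling an opinion cluster with probability proportional to the cluster's share of advisor ratings, and all function names (`majority_public_reputation`, `cluster_public_reputation`) are hypothetical.

```python
import random
from collections import Counter

def majority_public_reputation(ratings):
    """Baseline sketch: public reputation scored as the fraction of
    advisor ratings that agree with the majority opinion."""
    majority = Counter(ratings).most_common(1)[0][0]
    return sum(r == majority for r in ratings) / len(ratings)

def cluster_public_reputation(ratings):
    """Sketch of the proposed change: instead of always siding with the
    majority, sample one opinion cluster with probability equal to its
    relative size (a simple stand-in for the likelihood probability),
    then score consistency with that cluster's opinion."""
    counts = Counter(ratings)
    opinions = list(counts)
    weights = [counts[o] / len(ratings) for o in opinions]
    chosen = random.choices(opinions, weights=weights, k=1)[0]
    return sum(r == chosen for r in ratings) / len(ratings)
```

Under this toy reading, a new agent whose view matches a minority cluster is no longer penalized deterministically: with ratings `[1, 1, 1, 0, 0]` the baseline always scores against the majority opinion `1`, while the cluster variant sides with the minority opinion `0` with probability 0.4.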