Non-Bayesian social learning with observation reuse and soft switching
Format: Article
Language: English
Published: 2019
Online Access: https://hdl.handle.net/10356/102641 ; http://hdl.handle.net/10220/48151
Institution: Nanyang Technological University
Summary: We propose a non-Bayesian social learning update rule for agents in a network, which minimizes the sum of the Kullback-Leibler divergence between the true distribution generating the agents’ local observations and the agents’ beliefs (parameterized by a hypothesis set), and a weighted varentropy-related term. The varentropy-related term allows us to control the convergence rate of our update rule, which also reuses some of the most recent observations of each agent to speed up convergence. Under mild technical conditions, we show that the belief of each agent concentrates on the optimal hypothesis set, and we derive a bound on the convergence rate. Furthermore, to overcome the performance degradation caused by misinforming agents, who use corrupted likelihood functions in their belief updates, we propose using multiple social networks that update their beliefs independently, together with a convex combination mechanism among the beliefs of all the networks. Simulations with applications to location identification and group recommendation demonstrate that our proposed methods offer improvements over two other state-of-the-art non-Bayesian social learning algorithms.
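The summary describes the belief update only at a high level, and the exact rule with the weighted varentropy-related term is not given in this record. As a rough illustration, here is a minimal sketch of a standard log-linear (geometric-pooling) non-Bayesian social learning step with observation reuse; the weight matrix `W`, the window length `reuse_window`, and the function name are illustrative assumptions, not the authors' method, and the varentropy-related term is omitted.

```python
import numpy as np

def belief_update(beliefs, log_likelihoods, W, reuse_window=3):
    """One log-linear non-Bayesian social learning step (illustrative sketch).

    beliefs:         (n_agents, n_hyp) row-stochastic, strictly positive
    log_likelihoods: (n_agents, T, n_hyp) log-likelihoods of each agent's
                     past observations under each hypothesis
    W:               (n_agents, n_agents) row-stochastic network weights
    reuse_window:    number of most recent observations each agent reuses
                     (hypothetical stand-in for the paper's reuse scheme)
    """
    # Geometric pooling of neighbors' beliefs: W acts on log-beliefs.
    pooled = W @ np.log(beliefs)
    # Observation reuse: each agent re-applies the log-likelihoods of
    # its `reuse_window` most recent observations.
    pooled += log_likelihoods[:, -reuse_window:, :].sum(axis=1)
    # Normalize back to probability vectors (shift for numerical stability).
    pooled -= pooled.max(axis=1, keepdims=True)
    new_beliefs = np.exp(pooled)
    return new_beliefs / new_beliefs.sum(axis=1, keepdims=True)
```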
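For the soft-switching part, the summary states only that several networks update their beliefs independently and that a convex combination mechanism merges them. A hypothetical merge step, with designer-chosen mixing weights `alphas` (the paper's actual weighting scheme is not specified in this record), could look like the following; combined with the sketch above, one would run `belief_update` separately per network and then mix.

```python
def soft_switch(network_beliefs, alphas):
    """Convex combination of beliefs from independently updated networks.

    network_beliefs: list of (n_agents, n_hyp) belief matrices, one per network
    alphas:          nonnegative mixing weights summing to 1 (assumed given;
                     how the paper chooses them is not stated in this record)
    """
    # A convex combination of row-stochastic matrices is row-stochastic,
    # so the result is again a valid belief matrix.
    # e.g. final = soft_switch([beliefs_net1, beliefs_net2], alphas=[0.7, 0.3])
    return sum(a * b for a, b in zip(alphas, network_beliefs))
```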