Gamification of security games in voting
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/147988
Institution: Nanyang Technological University
Summary: With the growing acceptance of democracy and elections across the world, the security of elections becomes increasingly important for ensuring that the will of the people is accurately reflected in the results. However, only limited security resources can be allocated to defending against attackers and preserving the integrity of an election. The defence of polling stations against attackers can be framed as a sequential decision-making problem. Because Reinforcement Learning is strong at solving such problems, it has been used to design and optimise models that serve as strategies for defending elections against potential attacks. However, as these models are usually not trained against human attackers, we cannot determine how they would perform in the real world. In this project, we design and build a game-like environment in which human players act as the attackers aiming to disrupt an election, playing against the models, which act as the defenders seeking to preserve its integrity. To train our models, we compare three Multi-Agent Reinforcement Learning algorithms: QMIX, Value Decomposition Networks (VDN) and Independent Q-Learning (IQL). We evaluate these three algorithms on four different maps in our environment and show that QMIX consistently achieves the best results across all four maps, followed by VDN and lastly IQL. We discuss these results in the hope of providing a more comprehensive environment in which the models can be tested against human players.
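The three algorithms named in the summary differ chiefly in how per-agent Q-values are combined into a joint value for training. The sketch below illustrates that difference in PyTorch; all shapes, names and hyperparameters are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch (assumed shapes and names, not the project's code)
# contrasting how IQL, VDN and QMIX combine per-agent Q-values.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_agents, batch, state_dim = 3, 32, 8
q_i = torch.randn(batch, n_agents)     # Q-value of each agent's chosen action
state = torch.randn(batch, state_dim)  # global state, used only by QMIX

# IQL: no mixing -- each agent is trained on its own Q_i independently.
q_tot_iql = q_i                                  # (batch, n_agents)

# VDN: the joint value is a plain sum of the per-agent values.
q_tot_vdn = q_i.sum(dim=1, keepdim=True)         # (batch, 1)

# QMIX: a hypernetwork produces state-conditioned mixing weights; taking
# their absolute value keeps dQ_tot/dQ_i >= 0, so the greedy joint action
# still factorises into per-agent argmaxes.
class QMixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1))

    def forward(self, q_i, state):
        w1 = self.hyper_w1(state).abs().view(-1, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(-1, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(q_i.unsqueeze(1), w1) + b1)
        w2 = self.hyper_w2(state).abs().view(-1, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(-1, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(-1, 1)  # (batch, 1)

q_tot_qmix = QMixer(n_agents, state_dim)(q_i, state)
```

The monotonic, state-conditioned mixing lets QMIX represent richer joint value functions than VDN's fixed sum while still permitting decentralised execution, which is consistent with the QMIX > VDN > IQL ranking reported in the summary.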