Evolving tactical plans for strategy games using automated red teaming
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2011
Subjects:
Online Access: https://hdl.handle.net/10356/101918 , http://hdl.handle.net/10220/7229
Institution: Nanyang Technological University

Summary: We examine Automated Red Teaming (ART) as a means to evolve tactical plans for strategy games. ART is a computational technique which has been employed by the defence community to uncover vulnerabilities of operational plans. In typical ART experiments, agent-based simulations of military scenarios are repeatedly and automatically generated, varied and executed. Evolutionary Computation techniques are utilized to drive the exploration of simulation models to exhibit pre-specified and desired behaviour (e.g., evolve the adversary to best defeat defensive tactics). We suggest that ART is a suitable technique to assist the difficult development of challenging adversary strategies for games. To support this suggestion, we conduct and present an example ART experiment in which an urban operation scenario is considered. In this experiment, the tactical plan of the Red Team is evolved to best defeat a defensive Blue Team protecting a key position. The results present unexpected and interesting outcomes which support the suitability of ART to generate complex and stimulating tactical plans for games.
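
The summary describes ART as a generate-vary-execute loop: candidate Red Team plans are run through an agent-based simulation, and Evolutionary Computation selects and mutates the plans that best defeat the Blue defence. The Python sketch below is a minimal, illustrative version of that loop under stated assumptions: the plan parameters (approach_angle, split_fraction, delay_steps), the simulate() stand-in, and the fitness weights are hypothetical and are not taken from the paper or its simulation engine.

```python
# Illustrative sketch only: a toy elitist evolutionary loop that evolves Red Team
# plan parameters against a fixed Blue defence. All parameter names and the
# fitness model are hypothetical; a real ART study would call an agent-based
# simulation engine instead of the toy simulate() below.
import random

random.seed(42)

# A Red plan is a dict of tactical parameters (hypothetical):
#   approach_angle - direction of the main assault, in degrees
#   split_fraction - share of forces sent on a flanking route
#   delay_steps    - how long the flanking group waits before attacking
def random_plan():
    return {
        "approach_angle": random.uniform(0.0, 360.0),
        "split_fraction": random.uniform(0.0, 1.0),
        "delay_steps": random.randint(0, 20),
    }

def mutate(plan):
    # Small random perturbations of each tactical parameter.
    child = dict(plan)
    child["approach_angle"] = (child["approach_angle"] + random.gauss(0, 30)) % 360
    child["split_fraction"] = min(1.0, max(0.0, child["split_fraction"] + random.gauss(0, 0.1)))
    child["delay_steps"] = max(0, child["delay_steps"] + random.choice([-2, -1, 0, 1, 2]))
    return child

def simulate(plan, runs=20):
    """Stand-in for the agent-based simulation of the urban scenario.
    Returns mean Red 'success' over several stochastic runs."""
    score = 0.0
    for _ in range(runs):
        # Toy assumption: approaching near 135 degrees, splitting ~40% of forces
        # to a flank, and delaying that flank by ~5 steps works best.
        angle_term = 1.0 - abs(((plan["approach_angle"] - 135) + 180) % 360 - 180) / 180
        split_term = 1.0 - abs(plan["split_fraction"] - 0.4)
        delay_term = 1.0 - abs(plan["delay_steps"] - 5) / 20
        noise = random.gauss(0, 0.05)  # stochastic combat outcomes
        score += max(0.0, 0.5 * angle_term + 0.3 * split_term + 0.2 * delay_term + noise)
    return score / runs

def evolve(generations=40, pop_size=20, elite=5):
    # Generate, evaluate, select, and mutate candidate Red plans.
    population = [random_plan() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=simulate, reverse=True)
        parents = ranked[:elite]                    # keep the best-scoring plans
        children = [mutate(random.choice(parents))  # refill population by mutation
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=simulate)

if __name__ == "__main__":
    best_plan = evolve()
    print("Best evolved Red plan:", best_plan)
    print("Estimated success:", round(simulate(best_plan, runs=200), 3))
```

In an actual ART experiment, simulate() would invoke the military agent-based simulation of the urban operation scenario, and the evolutionary driver would typically be a more sophisticated Evolutionary Computation method than this simple mutation-only loop; the sketch only shows the overall shape of the exploration.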