dAI_GO: artificial intelligence for fighting games
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/181090
Institution: Nanyang Technological University
Summary: dAI_GO investigates the implementation of an Actor-Critic (AC) model for training an artificial intelligence (AI) agent in a fighting game environment using the IkemenGO engine. The aim is to develop agents capable of human-like, adaptive behaviour during gameplay. The agent is intended to balance short-term rewards, such as landing attacks, against longer-term rewards, such as winning a round. The AC model combines policy optimization through the Actor with value estimation through the Critic, making it well suited to handling the agent's dual objectives.
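The abstract does not reproduce implementation code; as a rough illustration of the Actor-Critic structure described above, a minimal PyTorch sketch might look like the following. The network sizes, state/action dimensions, and the single-step advantage update are assumptions for illustration, not details taken from the project.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared-trunk Actor-Critic: the Actor outputs action logits (policy),
    the Critic outputs a scalar state-value estimate."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.actor = nn.Linear(hidden, num_actions)  # policy logits
        self.critic = nn.Linear(hidden, 1)           # state value V(s)

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        return self.actor(h), self.critic(h).squeeze(-1)

def update(model, optimizer, state, action, reward, next_state, done, gamma=0.99):
    """One illustrative advantage-actor-critic update for a single transition."""
    logits, value = model(state)
    with torch.no_grad():
        _, next_value = model(next_state)
        target = reward + gamma * next_value * (1.0 - done)  # bootstrapped return
    advantage = target - value
    log_prob = torch.log_softmax(logits, dim=-1)[action]
    actor_loss = -log_prob * advantage.detach()   # policy-gradient term (Actor)
    critic_loss = advantage.pow(2)                # value-regression term (Critic)
    loss = actor_loss + 0.5 * critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```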
Training was conducted with GPU-accelerated PyTorch models, incorporating logit manipulation techniques such as temperature scaling and bias addition to shape the agent's behaviour towards more human-like play. Additional experiments investigated agent performance under varied conditions, such as biasing the agent towards certain actions or introducing behavioural asymmetries between agents. Rewards were structured to reflect relevant changes in the game state, such as health percentage, positioning and combos.
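Temperature scaling and bias addition act on the policy logits before an action is sampled; one possible form of that manipulation, together with a reward built from game-state deltas, is sketched below. The action indices, bias values, and state field names are illustrative assumptions rather than the project's actual configuration.

```python
from typing import Optional
import torch

def select_action(logits: torch.Tensor, temperature: float = 1.5,
                  action_bias: Optional[torch.Tensor] = None) -> int:
    """Sample an action from biased, temperature-scaled logits.

    temperature > 1 flattens the distribution (more varied, less robotic play);
    temperature < 1 sharpens it. action_bias adds a fixed per-action offset,
    e.g. to nudge the agent towards blocking or away from spamming one attack.
    """
    if action_bias is not None:
        logits = logits + action_bias
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

def shaped_reward(prev: dict, cur: dict) -> float:
    """Illustrative reward from game-state deltas (field names are assumed)."""
    r = 0.0
    r += prev["opp_health"] - cur["opp_health"]             # damage dealt
    r -= prev["own_health"] - cur["own_health"]             # damage taken
    r += 0.1 * (cur["combo_count"] - prev["combo_count"])   # sustaining combos
    return r

# Hypothetical usage: discourage action 0 (idle), encourage action 3 (block).
bias = torch.zeros(8)
bias[0], bias[3] = -1.0, 0.5
action = select_action(torch.randn(8), temperature=1.5, action_bias=bias)
```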
Experiments were conducted to explore how different training conditions, such as biases towards specific behaviours or segregated movement actions, affected performance. The results demonstrated notable progress: agents showed promising adaptive play against the built-in CPU opponents but sometimes struggled against human opponents because of exploitable patterns in their play. Ultimately, the project highlights the potential of AC models for game AI and offers insights into future improvements, such as enhancing real-time decision-making, refining the reward structure, and incorporating imitation learning for more complex behaviours.