RRT* Chasing Agent - Sonam Wangchuk

Introduction

Video games, particularly older titles with a top-down perspective, often feature enemies that chase the player. Examples can be seen in games such as Pac-Man, the 2D Legend of Zelda games, and even newer titles such as Vampire Survivors. A common problem all of these games struggle with is how the enemy AI handles movement when the game state includes impassable obstacles.

top_down_ai_0.gif

Very often, as in the gif above, the solution is simply to chase the player when they are directly in the line of sight and to wander around otherwise. This is a lazy approach, and it leads to enemies that feel lifeless and unchallenging. To compensate for this lack of complexity, game designers include multiple enemies to create the illusion of a smart overall enemy base.

This always bothered me. Even incredibly complex games lauded for their good AI, such as Alien: Isolation, feature enemy patterns that rely on pre-learnt paths to navigate from one section to another. Watching GDC talks from the developers of such games, it's clear that enemy AI designers often need to extensively playtest these pre-determined paths to see whether they feel organic and are even valid in the first place, and even then the approach doesn't scale to new environments or outside its regular bounds (best shown by enemies randomly stopping at invisible walls in some games, which really kills immersion).

The goal of this project is simple. We've learned a lot of pathfinding techniques, and I want to use a continuous-space planner as the backbone of the movement AI for one enemy that chases you, smartly creating new paths to find you even when you are hiding behind obstacles and out of its line of sight. Specifically, I want to implement an enemy AI that uses a Rapidly-exploring Random Tree Star (RRT*) algorithm to dynamically create paths toward the player, producing intelligent movement at a lower cost than pre-determined pathfinding.
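To make the planner concrete, here is a minimal pure-Python sketch of the core RRT* loop: sample a point, steer from the nearest tree node toward it, pick the cheapest nearby parent, and rewire neighbors through the new node when that lowers their cost. This is a generic illustration rather than the project's exact implementation; obstacles are assumed to be axis-aligned rectangles, and names like `plan_rrt_star` are my own.

```python
import math
import random

def collides(p, obstacles):
    """True if point p = (x, y) lies inside any (x, y, w, h) rectangle."""
    return any(ox <= p[0] <= ox + w and oy <= p[1] <= oy + h
               for ox, oy, w, h in obstacles)

def segment_free(a, b, obstacles, step=2.0):
    """Approximate collision check: sample points along the segment a -> b."""
    n = max(1, int(math.dist(a, b) / step))
    return all(not collides((a[0] + (b[0] - a[0]) * t / n,
                             a[1] + (b[1] - a[1]) * t / n), obstacles)
               for t in range(n + 1))

def plan_rrt_star(start, goal, obstacles, bounds, n_nodes=500,
                  step=20.0, radius=40.0, goal_tol=15.0):
    nodes, parent, cost = [start], {0: None}, {0: 0.0}
    best = None  # index of the cheapest node within goal_tol of the goal
    for _ in range(n_nodes):
        # 1. Sample a random point, with a small bias toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, bounds[0]), random.uniform(0, bounds[1]))
        # 2. Steer from the nearest existing node toward the sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + (sample[0] - near[0]) * min(step, d) / d,
               near[1] + (sample[1] - near[1]) * min(step, d) / d)
        if not segment_free(near, new, obstacles):
            continue
        # 3. Choose the cheapest collision-free parent among nearby nodes.
        near_ids = [i for i in range(len(nodes))
                    if math.dist(nodes[i], new) <= radius
                    and segment_free(nodes[i], new, obstacles)]
        i_parent = min(near_ids or [i_near],
                       key=lambda i: cost[i] + math.dist(nodes[i], new))
        j = len(nodes)
        nodes.append(new)
        parent[j] = i_parent
        cost[j] = cost[i_parent] + math.dist(nodes[i_parent], new)
        # 4. Rewire: reroute nearby nodes through `new` when it is cheaper
        # (for brevity, updated costs are not propagated to descendants).
        for i in near_ids:
            c = cost[j] + math.dist(new, nodes[i])
            if c < cost[i]:
                parent[i], cost[i] = j, c
        if math.dist(new, goal) <= goal_tol and (best is None or cost[j] < cost[best]):
            best = j
    if best is None:
        return None
    # Walk parents back from the best goal node to recover the path.
    path, i = [], best
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1]
```

The rewiring step is what distinguishes RRT* from plain RRT: paths keep getting shortened as more nodes are added, which is why the enemy's routes look purposeful rather than jagged.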

Implementation

To accomplish the goal above, it makes the most sense to build a skeleton of a game that features a player, a simple movement task, and an enemy agent with the RRT* planning algorithm built into its movement pattern. All of this can be done in Pygame, a library for building interactive video games in Python. We can use a library called pygbag to build a version that runs in the browser, making testing and sharing the game significantly easier. It's worth noting that Pygame famously struggles to scale to larger games (and its performance is generally considered sub-optimal), but it will do for this small project!
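The skeleton's update loop can be sketched in plain Python, with the Pygame rendering and input layer elided so the game logic stands on its own. This is illustrative only: the class and constant names (`GameState`, `REPLAN_EVERY`, etc.) are my own, and `plan` is a stand-in for whatever planner returns a list of waypoints.

```python
import math
from dataclasses import dataclass, field

@dataclass
class GameState:
    player: tuple = (100.0, 100.0)
    enemy: tuple = (400.0, 300.0)
    pickups: list = field(default_factory=lambda: [(200.0, 150.0)])
    score: int = 0
    frame: int = 0
    path: list = field(default_factory=list)  # enemy's current waypoint list

REPLAN_EVERY = 15    # frames between planner calls (illustrative value)
ENEMY_SPEED = 3.0
PICKUP_RADIUS = 12.0

def step(state, player_move, plan):
    """Advance one frame: move the player, periodically replan the enemy's
    path, advance the enemy along it, and collect any pickup within reach.
    In the real game, Pygame would read `player_move` from the keyboard
    and draw the state after each call."""
    px, py = state.player
    state.player = (px + player_move[0], py + player_move[1])
    # Replan on a fixed cadence; `plan(enemy, player)` returns waypoints.
    if state.frame % REPLAN_EVERY == 0:
        state.path = plan(state.enemy, state.player) or []
    # Follow the path: step toward the next waypoint, popping reached ones.
    if state.path:
        wx, wy = state.path[0]
        ex, ey = state.enemy
        d = math.dist((ex, ey), (wx, wy))
        if d <= ENEMY_SPEED:
            state.enemy = (wx, wy)
            state.path.pop(0)
        else:
            state.enemy = (ex + (wx - ex) * ENEMY_SPEED / d,
                           ey + (wy - ey) * ENEMY_SPEED / d)
    # Collect pickups within reach of the player.
    remaining = []
    for p in state.pickups:
        if math.dist(p, state.player) <= PICKUP_RADIUS:
            state.score += 1
        else:
            remaining.append(p)
    state.pickups = remaining
    state.frame += 1
    return state
```

Keeping the simulation step separate from rendering like this also makes the game logic easy to unit-test without opening a window.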

Screenshot 2025-12-12 at 5.42.17 PM.png

The objective of the game is quite simple. Given a randomly generated stage with obstacles, collect points on the map while avoiding an enemy agent that uses RRT* to move toward the player. I made five important design decisions for the game and the algorithm:

  1. You can toggle the display of the RRT* algorithm's paths and exploration trees at any time by pressing T, giving you a clear idea of how the enemy is choosing to move at any given moment. Initially this was always visible (I even thought knowing where your enemy is headed was an interesting game mechanic), but it changes so rapidly and creates so much visual clutter that it ruined the experience for some of my play testers. Hence, I decided to make it a toggle instead (it also helped me debug the algorithm!).
  2. The enemy replans with RRT* every 15 frames, building a tree of up to 500 nodes to find an optimal collision-free path. These numbers can be tuned to get a more consistent or more erratic agent, but these values led to intelligent and consistent behavior when I was play testing the agent.