AMD Schola is a library for developing reinforcement learning (RL) agents in Unreal Engine and training them with your favorite Python-based RL frameworks: Gym, RLlib, and Stable Baselines 3.
We also include ScholaExamples, featuring six environments that demonstrate how to use Schola to train and power agents in Unreal Engine.
Download the latest version: v1.0
This update includes:
- Initial release!
Advancing AI in Video Games with AMD Schola
By connecting popular open-source RL libraries (written in Python) with the visual and physics capabilities of Unreal Engine, Schola empowers AI researchers and game developers alike to push the boundaries of intelligent gameplay.
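To give a feel for the Python side of that connection, here is a minimal sketch of training an in-engine agent with Stable Baselines 3. The `ScholaEnv` class, its import path, and its constructor arguments are hypothetical placeholders for Schola's Gym-style connector; consult the documentation for the actual API.

```python
from stable_baselines3 import PPO

# Hypothetical placeholder for Schola's Gym-style connector, which bridges
# a running Unreal Engine instance to the Python RL framework.
from schola.gym.env import ScholaEnv  # illustrative import path, not the real one

# Placeholder argument: how the connection to Unreal Engine is configured
# depends on Schola's actual API.
env = ScholaEnv(port=8002)

# From here on, this is standard Stable Baselines 3 usage.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_schola_agent")
```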
Features
Inference in C++
Schola provides tools for connecting and controlling agents with ONNX models inside Unreal Engine, allowing for inference with or without Python.
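For context, here is one way a trained policy can end up as an ONNX file for an engine-side runtime to load. This sketch follows the ONNX export pattern from the Stable Baselines 3 documentation; it assumes a flat (Box) observation space, and the file names are illustrative.

```python
import torch
from stable_baselines3 import PPO

class OnnxablePolicy(torch.nn.Module):
    """Wraps an SB3 actor-critic policy for ONNX export.

    forward() returns the policy outputs (action, value, log-prob);
    observation preprocessing is included, action clipping is not.
    """

    def __init__(self, policy):
        super().__init__()
        self.policy = policy

    def forward(self, observation: torch.Tensor):
        return self.policy(observation, deterministic=True)

model = PPO.load("ppo_schola_agent", device="cpu")
onnx_policy = OnnxablePolicy(model.policy)

# Assumes a flat Box observation space; adjust the dummy input otherwise.
dummy_input = torch.randn(1, *model.observation_space.shape)
torch.onnx.export(
    onnx_policy,
    dummy_input,
    "ppo_schola_agent.onnx",
    opset_version=17,
    input_names=["observation"],
)
```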
Simple Unreal Interfaces
Schola exposes simple interfaces in Unreal Engine for you to implement, allowing you to quickly build and iterate on reinforcement learning environments.
Reusable Actions and Sensors
Schola supports building reusable sensors and actuators, so you can quickly design new agents from existing components.
Multi-agent Training
Train multiple agents to compete against each other at the same time using RLlib and multi-agent environments built with Schola.
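As a sketch of what that looks like on the Python side, the RLlib configuration below trains two policies at once. The environment name `ScholaMultiAgentEnv` is a hypothetical stand-in for an environment registered through Schola's RLlib connector, and the policy names mirror the Tag example described below; the `PPOConfig` and `multi_agent` calls themselves are standard RLlib.

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    # Placeholder name: assumes a multi-agent env registered via
    # Schola's RLlib connector (e.g. with ray.tune.registry.register_env).
    .environment("ScholaMultiAgentEnv")
    .multi_agent(
        # Train one policy per role; the agent id scheme is illustrative.
        policies={"runner", "tagger"},
        policy_mapping_fn=lambda agent_id, episode, **kwargs: (
            "runner" if "runner" in agent_id else "tagger"
        ),
    )
)

algo = config.build()
for _ in range(10):
    algo.train()
```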
Vectorized Training
Run multiple copies of your environment within the same Unreal Engine process to accelerate training.
Headless Training
Run training without rendering to significantly improve training throughput.
Sample Environments
Basic
The Basic environment features an agent that can move along the X-axis and receives a small reward for going five steps in one direction and a larger reward for going in the opposite direction.
MazeSolver: Using Raycasts
The MazeSolver environment features a static maze that the agent learns to solve as quickly as possible. The agent observes the environment using raycasts, moves by teleporting in two dimensions, and receives a reward for getting closer to the goal.
3DBall: Physics Based Environments
The 3DBall environment features an agent trying to balance a ball on top of itself. The agent can rotate itself and receives a reward every step until the ball falls.
BallShooter: Building Your Own Actuator
The BallShooter environment features a rotating turret that learns to aim and shoot at randomly placed targets. The agent can rotate in either direction and detects targets using a cone-shaped raycast.
Pong: Collaborative Training
The Pong environment features two agents playing a collaborative game of Pong. The agents receive a reward every step as long as the ball has not hit the wall behind either agent; the game ends when it does.
Tag: Competitive Multi-Agent Training
The Tag environment features a 3v1 game of tag, where one agent (the runner) must run away from three other agents trying to collide with it. The agents move using forward, left, and right movement inputs, and observe the environment with a combination of raycasts and global position data.
Additional resources
Read the documentation for AMD Schola, a library for developing reinforcement learning (RL) agents in Unreal Engine and training them with Python-based RL frameworks.
Requirements
Unreal Engine v5.4 required.
Version history
- v1.0: Initial release
Don't miss some of our other Unreal Engine content
AMD FidelityFX Super Resolution 3.1.3 Unreal Engine plugin guide
Download the AMD FSR 3.1.3 plugin for Unreal Engine, and learn how to install and use it.
Unreal Engine
Develop for Unreal Engine on AMD hardware with our plugin, performance and feature patches, including FidelityFX support.
Unreal Engine performance guide
Our one-stop guide to performance with Unreal Engine.