AMD Schola

AMD Schola is a library for developing reinforcement learning (RL) agents in Unreal® Engine and training them with your favorite Python-based RL frameworks: Gym, RLlib, and Stable Baselines 3.

We also include ScholaExamples, featuring seven environments that demonstrate how to use Schola to train and power agents in Unreal Engine.

Download the latest version - v1.3.0

What’s new in AMD Schola v1.3

This release of Schola adds support for behavior cloning with the imitation library, helpers for compiling your environment into a standalone executable directly from Python, and support for Unreal® Engine 5.6.
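
For context, a minimal behavior-cloning sketch with the imitation library might look like the following. The env and transitions objects are assumptions, not Schola API: env stands in for a Gym-compatible Schola environment, and transitions for recorded demonstrations in imitation's Transitions format.

    import numpy as np
    from imitation.algorithms import bc

    # `env` (a Gym-compatible environment) and `transitions` (recorded
    # demonstrations in imitation's Transitions format) are assumed to
    # exist; both are placeholders, not Schola API.
    bc_trainer = bc.BC(
        observation_space=env.observation_space,
        action_space=env.action_space,
        demonstrations=transitions,
        rng=np.random.default_rng(0),
    )
    bc_trainer.train(n_epochs=10)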

New features

  • Helpers for building standalone executables from Python scripts (sketched below).
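
Conceptually, such a helper drives Unreal's stock BuildCookRun automation. Here is a rough sketch of the equivalent call from Python, with placeholder paths; Schola's actual helper names are not shown here.

    import subprocess

    # Package a project into a standalone build via Unreal's BuildCookRun
    # automation. Use RunUAT.bat on Windows; all paths are placeholders.
    subprocess.run([
        "/path/to/UE/Engine/Build/BatchFiles/RunUAT.sh",
        "BuildCookRun",
        "-project=/path/to/MyProject.uproject",
        "-platform=Linux",
        "-clientconfig=Development",
        "-build", "-cook", "-stage", "-pak",
    ], check=True)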

Bug fixes

  • Fixed a bug where the Unreal process would not close when running standalone on Linux®.
  • Fixed a bug with merging discrete spaces into a new MultiDiscreteSpace.
  • Fixed an import bug in StatLoggerComponent.h.

Known incompatibilities and issues

  • Models trained with Schola v1.0.0 must be re-exported to ONNX using Schola v1.1+.
  • Schola v1.1+ is not compatible with Unreal® Engine 5.4.
  • RLlib does not support camera observers without an additional wrapper that converts observations to channels-last format (see the sketch below).
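
For reference, here is a minimal sketch of such a wrapper, assuming the camera observer yields channels-first (C, H, W) Box observations through a Gymnasium-compatible interface; the class name is ours, not Schola's.

    import gymnasium as gym
    import numpy as np

    class ChannelsLastWrapper(gym.ObservationWrapper):
        """Transpose image observations from (C, H, W) to (H, W, C)."""

        def __init__(self, env):
            super().__init__(env)
            space = env.observation_space
            self.observation_space = gym.spaces.Box(
                low=np.transpose(space.low, (1, 2, 0)),
                high=np.transpose(space.high, (1, 2, 0)),
                dtype=space.dtype,
            )

        def observation(self, obs):
            return np.transpose(obs, (1, 2, 0))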

Prerequisites

Unreal Engine 5.5 to 5.6.
Python 3.9 to 3.12.

Features

Inference in C++

Schola provides tools for connecting agents to ONNX models and controlling them inside Unreal Engine, so trained agents can run inference with or without Python.

Simple Unreal Interfaces

Schola exposes simple Unreal Engine interfaces for you to implement, letting you quickly build and iterate on reinforcement learning environments.

Reusable Actions and Sensors

Schola supports building reusable sensors and actuators, so you can quickly assemble new agents from existing components.

Multi-agent Training

Train multiple agents to compete against each other at the same time using RLlib and multi-agent environments built with Schola.
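
As a rough illustration, a two-policy competitive setup in RLlib might be configured as below. The environment name and agent IDs are hypothetical, standing in for a registered Schola multi-agent environment.

    from ray.rllib.algorithms.ppo import PPOConfig

    # "ScholaTag" is a hypothetical registered multi-agent environment;
    # agent IDs are assumed to contain "runner" or "tagger".
    config = (
        PPOConfig()
        .environment("ScholaTag")
        .multi_agent(
            policies={"runner", "tagger"},
            policy_mapping_fn=lambda agent_id, *a, **kw: (
                "runner" if "runner" in agent_id else "tagger"
            ),
        )
    )
    algo = config.build()
    algo.train()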

Vectorized Training

Run multiple copies of your environment within the same Unreal Engine process to accelerate training.
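
From the Python side, the in-process copies behave like one vectorized environment. In the Stable Baselines 3 sketch below, make_schola_env is a hypothetical helper standing in for whatever constructor exposes those copies as a single VecEnv.

    from stable_baselines3 import PPO

    # make_schola_env is hypothetical: it stands in for the constructor
    # that exposes the copies inside one Unreal process as an SB3 VecEnv.
    vec_env = make_schola_env(num_envs=8)
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=1_000_000)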

Headless Training

Run training without rendering to significantly improve training throughput.
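
One way to do this is to launch a packaged build with Unreal Engine's standard switches for disabling rendering and audio; the executable path below is a placeholder.

    import subprocess

    # -nullrhi skips rendering entirely, -nosound disables audio, and
    # -unattended suppresses interactive dialogs; all three are standard
    # Unreal Engine switches. The executable path is a placeholder.
    subprocess.Popen([
        "./MyScholaEnv.sh",
        "-nullrhi",
        "-nosound",
        "-unattended",
    ])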

Sample Environments

Basic

The Basic environment features an agent that can move along the X-axis and receives a small reward for going five steps in one direction, and a larger reward for going in the opposite direction.

MazeSolver: Using Raycasts

The MazeSolver environment features a static maze that the agent learns to solve as fast as possible. The agent observes the environment using raycasts, moves by teleporting in two dimensions, and is rewarded for getting closer to the goal.

3DBall: Physics Based Environments

The 3DBall environment features an agent that is trying to balance a ball on top of itself. The agent can rotate itself and receives a reward every step until the ball falls.

BallShooter: Building Your Own Actuator

The BallShooter environment features a rotating turret that learns to aim and shoot at randomly moving targets. The agent can rotate in either direction and detects targets using a cone-shaped raycast.

Pong: Collaborative Training

The Pong environment features two agents playing a collaborative game of Pong. The agents receive a reward every step, and the game ends when the ball hits the wall behind either agent.

Tag: Competitive Multi-Agent Training

The Tag environment features a 3v1 game of tag, where one agent (the runner) must evade the other three agents, which try to collide with it. The agents move using forward, left, and right movement inputs, and observe the environment with a combination of raycasts and global position data.

RaceTrack: Controlling Chaos Vehicles with Schola

The RaceTrack environment features a car implemented with Chaos Vehicles that learns to follow a race track. The agent controls the car's throttle, brake, and steering, and observes its velocity and position relative to the center of the track.

Endnotes

Unreal® is a trademark or registered trademark of Epic Games, Inc. in the United States of America and elsewhere.

“Python” is a trademark or registered trademark of the Python Software Foundation.
