AMD Schola

AMD Schola is a library for developing reinforcement learning (RL) agents in Unreal Engine and training them with your favorite Python-based RL frameworks: Gym, RLlib, and Stable Baselines 3.
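
For a sense of the workflow, here is a minimal sketch of the Python training side using Stable Baselines 3. The CartPole environment is a stand-in: in practice you would swap in the Gym-compatible handle that Schola's connector provides for a running Unreal environment, which is not shown here.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in environment: replace with the Gym-compatible handle that
# Schola's connector provides for a running Unreal environment.
env = gym.make("CartPole-v1")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("schola_agent")  # placeholder name
```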

We also include ScholaExamples, featuring seven environments that demonstrate how to use Schola to train and power agents in Unreal Engine.

Download the latest version - v1.2.0

What’s new in Schola v1.2

This release of Schola contains bug fixes, support for single-agent Gym environments, several helpful environment wrappers, and additional documentation.

New features

  • Support for non-vectorized Gym environments
  • Transpose wrapper for using image observations with RLlib (a generic sketch of the idea follows this list)
  • Pop Action wrapper for using non-vectorized Gym environments with Stable-Baselines3
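
Schola ships its own wrappers; as a rough illustration of what a transpose wrapper does, here is a generic Gymnasium observation wrapper that converts channels-first image observations to the channels-last layout RLlib expects. This is a sketch of the idea, not Schola's implementation.

```python
import gymnasium as gym
import numpy as np

class ChannelsLastWrapper(gym.ObservationWrapper):
    """Transpose (C, H, W) image observations to (H, W, C).

    RLlib expects channels-last images, so a wrapper like this sits
    between the environment and the trainer.
    """

    def __init__(self, env):
        super().__init__(env)
        space = env.observation_space  # assumed to be a channels-first Box
        self.observation_space = gym.spaces.Box(
            low=np.transpose(space.low, (1, 2, 0)),
            high=np.transpose(space.high, (1, 2, 0)),
            dtype=space.dtype,
        )

    def observation(self, observation):
        return np.transpose(observation, (1, 2, 0))
```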

Documentation

  • Improved labeling of UPROPERTIES
  • Minor improvements

Bug fixes

  • Frame Stacker no longer flattens shaped Box Spaces
  • Raycast Sensor now properly rotates with characters

Known incompatibilities and issues

  • Models trained with Schola v1.0.0 must be re-exported to ONNX using Schola v1.1+
  • Schola v1.1+ is not compatible with Unreal 5.4
  • RLlib does not support camera observers without an additional wrapper to convert observations to channels-last format.

Features

Inference in C++

Schola provides tools for connecting and controlling agents with ONNX models inside Unreal Engine, allowing for inference with or without Python.
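
Schola v1.1+ has its own ONNX export path (see the known issues above). As a hedged illustration of how a trained policy ends up as an ONNX file, here is the standard Stable Baselines 3 recipe for exporting a policy with torch.onnx.export; the file names are placeholders.

```python
import torch
from stable_baselines3 import PPO

# "schola_agent" is a placeholder for a policy trained earlier.
model = PPO.load("schola_agent")

class OnnxablePolicy(torch.nn.Module):
    """Wraps an SB3 policy so tracing captures a deterministic forward pass."""

    def __init__(self, policy):
        super().__init__()
        self.policy = policy

    def forward(self, observation):
        # Returns (action, value, log_prob) for an actor-critic policy.
        return self.policy(observation, deterministic=True)

dummy_input = torch.randn(1, *model.observation_space.shape)
torch.onnx.export(
    OnnxablePolicy(model.policy),
    dummy_input,
    "schola_agent.onnx",
    input_names=["observation"],
    output_names=["action", "value", "log_prob"],
)
```

Note that the exported graph does not include SB3's observation preprocessing (for example, image normalization), so that step has to be replicated on the C++ side.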

Simple Unreal Interfaces

Schola exposes simple interfaces in Unreal Engine for you to implement, allowing you to quickly build and develop reinforcement learning environments.

Reusable Actions and Sensors

Schola supports building reusable sensors and actuators, letting you quickly design new agents from existing components.

Multi-agent Training

Train multiple agents to compete against each other at the same time using RLlib and multi-agent environments built with Schola.
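
As a sketch of what the RLlib side of competitive training can look like, the snippet below configures PPO with two policies, one per role in a Tag-style environment. The environment name "schola_tag" and the agent-ID convention are hypothetical; registering the Unreal-backed environment with Ray is not shown here.

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    # "schola_tag" is a hypothetical name for a registered Schola
    # multi-agent environment (e.g. the Tag sample below).
    .environment("schola_tag")
    .multi_agent(
        # One policy for the runner, one shared policy for the taggers.
        policies={"runner", "tagger"},
        policy_mapping_fn=lambda agent_id, *args, **kwargs: (
            "runner" if "runner" in str(agent_id) else "tagger"
        ),
    )
)

algo = config.build()
for _ in range(100):
    results = algo.train()
```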

Vectorized Training

Run multiple copies of your environment within the same Unreal Engine process to accelerate training.

Headless Training

Run training without rendering to significantly improve training throughput.

Sample Environments

Basic

The Basic environment features an agent that can move along the X-axis; it receives a small reward for moving five steps in one direction and a larger reward for moving in the opposite direction.

MazeSolver: Using Raycasts

The MazeSolver environment features a static maze that the agent learns to solve as fast as possible. The agent observes the environment using raycasts, moves by teleporting in two dimensions, and receives a reward for getting closer to the goal.

3DBall: Physics Based Environments

The 3DBall environment features an agent trying to balance a ball on top of itself. The agent can rotate itself and receives a reward every step until the ball falls.

BallShooter: Building Your Own Actuator

The BallShooter environment features a rotating turret that learns to aim and shoot at randomly moving targets. The agent can rotate in either direction and detects targets using a cone-shaped raycast.

Pong: Collaborative Training

The Pong environment features two agents playing a collaborative game of Pong. The agents receive a reward every step until the ball hits the wall behind either agent, which ends the game.

Tag: Competitive Multi-Agent Training

The Tag environment features a 3v1 game of tag, where one agent (the runner) must flee from the other three agents, which try to collide with it. The agents move using forward, left, and right movement inputs, and observe the environment with a combination of raycasts and global position data.

RaceTrack: Controlling Chaos Vehicles with Schola

The RaceTrack environment features a car, implemented with Chaos Vehicles, that learns to follow a race track. The agent controls the car's throttle, brake, and steering, and can see its velocity and position relative to the center of the track.

Additional resources

Version history

  • What's new in Schola v1.2

