
AMD Schola is a library for developing reinforcement learning (RL) agents in Unreal Engine and training them with your favorite Python-based RL frameworks: Gym, RLlib, and Stable Baselines 3.
We also include ScholaExamples, featuring seven environments demonstrating the usage of Schola to train and power agents in Unreal Engine.
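To give a sense of the Python side, here is a minimal training sketch using Stable Baselines 3. The Schola import is commented out and purely illustrative (the real connection class and its arguments are defined by the Schola documentation), and a stock Gymnasium environment stands in so the sketch runs on its own:

```python
from stable_baselines3 import PPO

# Hypothetical: connect to a running Unreal Engine instance that has the
# Schola plugin enabled. The import path and arguments are placeholders.
# from schola.gym import ScholaEnv
# env = ScholaEnv(port=50051)

import gymnasium as gym
env = gym.make("CartPole-v1")  # stand-in env so this sketch runs as-is

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
model.save("schola_agent")
```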
This is a minor update featuring bug fixes for Schola and ScholaExamples, as well as extended documentation for Schola.
This release includes updates to both Schola and ScholaExamples. Schola v1.1.0 introduces support for Unreal Engine 5.5, several new features, and stability improvements. ScholaExamples v1.1.0 ensures compatibility with Schola v1.1, adds a new example called RaceTrack, and updates the Pong example to leverage the new Camera Sensors.
New Features
- Extend launch.py using Python entrypoint plugins.
- Camera Sensors, built on SceneCaptureComponent2D, that provide vision or depth input to an agent.
- An AgentStep BTTask to embed RL models into your behavior trees.

Improvements
- Camera observations now keep their image shape ((3, 64, 64) instead of (1, 12288)); see the sketch after these notes.

Bug Fixes

Known Incompatibilities and Issues

New Example
- RaceTrack: a car, implemented with Chaos Vehicles, that learns to follow a race track (described below).

Updated Example
- Pong now leverages the new Camera Sensors.
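The camera-shape improvement above matters for image-based policies. A short illustration using Gymnasium spaces (the two shapes come from the release notes; the space definitions themselves are illustrative):

```python
import numpy as np
from gymnasium.spaces import Box

# Flattened form from v1.0: a single row of 12288 values, which loses the
# (channels, height, width) structure that CNN policies rely on.
flat_space = Box(low=0, high=255, shape=(1, 12288), dtype=np.uint8)

# Channel-first form in v1.1: a 3 x 64 x 64 RGB image that image-based
# policies (e.g. SB3's CnnPolicy) can consume directly.
image_space = Box(low=0, high=255, shape=(3, 64, 64), dtype=np.uint8)

# Same number of values, better layout.
assert np.prod(image_space.shape) == np.prod(flat_space.shape)
```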
Schola provides tools for connecting and controlling agents with ONNX models inside Unreal Engine, allowing for inference with or without Python.
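One hedged sketch of how such an ONNX model can be produced on the Python side, following the export recipe from the Stable Baselines 3 documentation (the file names carry over from the earlier sketch and are assumptions):

```python
import torch
from stable_baselines3 import PPO

model = PPO.load("schola_agent")  # saved in the earlier sketch

class OnnxPolicy(torch.nn.Module):
    """Wraps the SB3 policy so the exporter traces a plain forward pass."""
    def __init__(self, policy):
        super().__init__()
        self.policy = policy

    def forward(self, observation):
        # deterministic=True -> mean/argmax action, no sampling noise
        return self.policy(observation, deterministic=True)

obs_shape = model.observation_space.shape  # assumption: Box observations
dummy_obs = torch.randn(1, *obs_shape)
torch.onnx.export(OnnxPolicy(model.policy), dummy_obs,
                  "schola_agent.onnx", opset_version=17)
```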
Schola exposes simple interfaces in Unreal Engine for you to implement, so you can quickly build and iterate on reinforcement learning environments.
Schola supports building reusable sensors and actuators, so you can quickly assemble new agents from existing components.
Train multiple agents to compete against each other at the same time using RLlib and multi-agent environments built with Schola.
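As a hedged sketch of the RLlib side, a standard multi-agent configuration maps each agent in the environment to one of the competing policies. The environment registration is a hypothetical placeholder, while the PPOConfig calls are stock RLlib API:

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Hypothetical: register a multi-agent environment built with Schola, e.g.
# ray.tune.register_env("ScholaTag", lambda cfg: ScholaMultiAgentEnv(cfg))

config = (
    PPOConfig()
    .environment(env="ScholaTag")  # the registered name above (assumption)
    .multi_agent(
        policies={"runner", "tagger"},
        # Route each agent id produced by the environment to a policy.
        policy_mapping_fn=lambda agent_id, *args, **kwargs: (
            "runner" if "runner" in str(agent_id) else "tagger"
        ),
    )
)

algo = config.build()
for _ in range(10):
    algo.train()
```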
Run multiple copies of your environment within the same Unreal Engine process to accelerate training.
Run training without rendering to significantly improve training throughput.
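For example, a packaged build can be launched without rendering using Unreal Engine's standard switches; the binary path below is a placeholder, and any Schola-specific flags (ports, number of environment copies) are omitted because they depend on your project setup:

```python
import subprocess

# Hypothetical path to a packaged build of a Schola environment.
ENV_BINARY = "./Builds/MyScholaEnv.sh"

# -nullrhi is a standard Unreal Engine switch that skips rendering
# entirely, and -unattended suppresses interactive dialogs, so the
# simulation can run at full speed on a headless machine.
subprocess.run([ENV_BINARY, "-nullrhi", "-unattended"], check=True)
```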
The Basic environment features an agent that moves along the X-axis and receives a small reward for travelling five steps in one direction and a bigger reward for travelling in the opposite direction.
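As a toy illustration of that reward structure (the five-step threshold comes from the description; the opposite-direction distance and the payout values are assumptions):

```python
# Toy sketch of the Basic environment's reward logic.
SMALL_GOAL = 5     # five steps in the "small reward" direction
LARGE_GOAL = -10   # assumed distance in the opposite direction

def step_reward(x: int) -> float:
    if x == SMALL_GOAL:
        return 0.1  # assumed small payout
    if x == LARGE_GOAL:
        return 1.0  # assumed large payout
    return 0.0
```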
The MazeSolver environment features a static maze that the agent learns to solve as fast as possible. The agent observes the environment using raycasts, moves by teleporting in two dimensions, and is rewarded for getting closer to the goal.
The 3DBall environment features an agent trying to balance a ball on top of itself. The agent can rotate itself and receives a reward every step until the ball falls.
The BallShooter environment features a rotating turret that learns to aim and shoot at randomly moving targets. The agent can rotate in either direction and detects the targets using a cone-shaped raycast.
The Pong environment features two agents playing a collaborative game of Pong. The agents receive a reward every step as long as the ball has not hit the wall behind either agent; the game ends when it does.
The Tag environment features a 3v1 game of tag, where one agent (the runner) must flee from the three other agents, which try to collide with it. The agents move using forward, left, and right movement inputs, and observe the environment with a combination of raycasts and global position data.
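A hedged sketch of what such a combined observation looks like as a Gymnasium space; the ray count and value ranges are assumptions, not Schola's actual settings:

```python
import numpy as np
from gymnasium.spaces import Box, Dict

# Raycast hit distances (normalized) plus a global position vector,
# combined into a single dictionary observation.
observation_space = Dict({
    "raycasts": Box(low=0.0, high=1.0, shape=(16,), dtype=np.float32),
    "position": Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32),
})
```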
The RaceTrack environment features a car, implemented with Chaos Vehicles, that learns to follow a race track. The agent controls the throttle, brake, and steering of the car, and observes its velocity and position relative to the center of the track.
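A hedged sketch of the corresponding continuous action space in Gymnasium terms; the ordering and ranges are assumptions:

```python
import numpy as np
from gymnasium.spaces import Box

# [steering, throttle, brake]: steering in [-1, 1], pedals in [0, 1].
action_space = Box(
    low=np.array([-1.0, 0.0, 0.0], dtype=np.float32),
    high=np.array([1.0, 1.0, 1.0], dtype=np.float32),
)
```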