schola.ray.env.BaseEnv

class schola.ray.env.BaseEnv(unreal_connection, verbosity=0)[source]

Bases: BaseEnv

A Ray RLlib environment that wraps a Schola environment.

Parameters:
  • unreal_connection (UnrealConnection) – The connection to the Unreal Engine environment.

  • verbosity (int, default=0) – The verbosity level for the environment.
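
A minimal usage sketch: the connection factory below is a hypothetical placeholder, since constructing an UnrealConnection depends on how your Schola project launches or attaches to Unreal Engine.

    # Sketch only: make_unreal_connection() is a hypothetical stand-in for whatever
    # UnrealConnection subclass your Schola setup provides (editor, standalone, etc.).
    from schola.ray.env import BaseEnv

    unreal_connection = make_unreal_connection()  # hypothetical helper
    env = BaseEnv(unreal_connection, verbosity=1)
    try:
        obs, infos = env.try_reset()  # reset all sub-environments once on startup
    finally:
        env.stop()  # release the connection and any other resources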

unwrapped

The underlying multi-agent environment.

Type:

MultiAgentEnv

last_reset_obs

The observations recorded during the last reset.

Type:

Dict[int,Dict[str,Any]]

last_reset_infos

The info dict recorded during the last reset.

Type:

Dict[int,Dict[str,str]]

Methods

__init__(unreal_connection[, verbosity])

get_agent_ids()

Return the agent ids for the sub_environment.

get_sub_environments([as_dict])

Return a reference to the underlying sub environments, if any.

last()

Returns the last observations, rewards, done/truncated flags, and infos …

poll()

Poll the environment for the next observations, rewards, termination/truncation flags, infos, and any off_policy_actions (currently unused).

send_actions(action_dict)

Called to send actions back to running agents in this env.

stop()

Releases all resources used.

to_base_env([make_env, num_envs, …])

Converts an RLlib-supported env into a BaseEnv object.

try_render([env_id])

Tries to render the sub-environment with the given id or all.

try_reset([env_id, seed, options])

Attempt to reset the sub-env with the given id or all sub-envs.

try_restart([env_id])

Attempt to restart the sub-env with the given id or all sub-envs.

Attributes

action_space

The action space for the environment.

num_envs

The number of sub-environments in the wrapped environment.

observation_space

The observation space for the environment.

__init__(unreal_connection, verbosity=0)[source]

Parameters:
  • unreal_connection (UnrealConnection) – The connection to the Unreal Engine environment.

  • verbosity (int, default=0) – The verbosity level for the environment.
property action_space: DictSpace

The action space for the environment.

Returns:

The action space for the environment

Return type:

DictSpace

property num_envs: int

The number of sub-environments in the wrapped environment.

Returns:

The number of sub-environments in the wrapped environment.

Return type:

int

property observation_space: DictSpace

The observation space for the environment.

Returns:

The observation space for the environment.

Return type:

DictSpace
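
A quick inspection sketch, assuming env was created as in the earlier example; the actual values depend on the connected Unreal Engine project.

    # num_envs counts the sub-environments exposed by the wrapped Unreal instance;
    # the two DictSpace properties describe the per-agent spaces.
    print(env.num_envs)
    print(env.observation_space)
    print(env.action_space)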

poll()[source]

Poll the environment for the next observations, rewards, termination/truncation flags, infos, and any off_policy_actions (currently unused).

Returns:
  • observations (EnvAgentIdDict[Dict[str,Any]]) – A dictionary, keyed by the environment and agent Id, containing the observations for each agent.

  • rewards (EnvAgentIdDict[float]) – A dictionary, keyed by the environment and agent Id, containing the reward for each agent.

  • terminateds (EnvAgentIdDict[bool]) – A dictionary, keyed by the environment and agent Id, containing the termination flag for each agent.

  • truncateds (EnvAgentIdDict[bool]) – A dictionary, keyed by the environment and agent Id, containing the truncation flag for each agent.

  • infos (EnvAgentIdDict[Dict[str,str]]) – A dictionary, keyed by the environment and agent Id, containing the information dictionary for each agent.

  • off_policy_actions (EnvAgentIdDict[Any]) – A dictionary, keyed by the environment and agent Id, containing the off-policy actions for each agent. Unused.

Return type:

Tuple[Dict[int, Dict[int, Dict[str, Any]]], Dict[int, Dict[int, float]], Dict[int, Dict[int, bool]], Dict[int, Dict[int, bool]], Dict[int, Dict[int, Dict[str, str]]], Dict[int, Dict[int, Any]]]
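
A sketch of consuming a poll() result; the unpacking order follows the return type above, with environment ids as outer keys and agent ids as inner keys.

    obs, rewards, terminateds, truncateds, infos, off_policy_actions = env.poll()

    # Iterate over every agent that reported an observation this step.
    for env_id, agent_obs in obs.items():
        for agent_id, observation in agent_obs.items():
            done = terminateds[env_id][agent_id] or truncateds[env_id][agent_id]
            print(env_id, agent_id, rewards[env_id][agent_id], done)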

send_actions(action_dict)[source]

Called to send actions back to running agents in this env.

Actions should be sent for each ready agent that returned observations in the previous poll() call.

Parameters:

action_dict (Dict[int, Dict[int, Dict[str, Any]]]) – Action values keyed by env_id and agent_id.

Return type:

None
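
A sketch of replying to a poll() with one action per ready agent; sampling from the action space is illustrative only, and indexing the DictSpace by agent id is an assumption.

    obs, rewards, terminateds, truncateds, infos, _ = env.poll()

    # Build the Dict[env_id, Dict[agent_id, action]] structure expected by send_actions().
    action_dict = {
        env_id: {
            agent_id: env.action_space[agent_id].sample()  # assumption: per-agent sub-space lookup
            for agent_id in agent_obs
        }
        for env_id, agent_obs in obs.items()
    }
    env.send_actions(action_dict)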

stop()[source]

Releases all resources used.

Return type:

None

try_reset(env_id=None, seed=None, options=None)[source]

Attempt to reset the sub-env with the given id or all sub-envs.

If the environment does not support synchronous reset, a tuple of (ASYNC_RESET_REQUEST, ASYNC_RESET_REQUEST) can be returned here.

Note: A MultiAgentDict is returned when using the deprecated wrapper classes such as ray.rllib.env.base_env._MultiAgentEnvToBaseEnv, however for consistency with the poll() method, a MultiEnvDict is returned from the new wrapper classes, such as ray.rllib.env.multi_agent_env.MultiAgentEnvWrapper.

Parameters:
  • env_id (int | None) – The sub-environment’s ID if applicable. If None, reset the entire Env (i.e. all sub-environments).

  • seed (None | List[int] | int) – The seed to be passed to the sub-environment(s) when resetting it. If None, will not reset any existing PRNG. If you pass an integer, the PRNG will be reset even if it already exists.

  • options (Dict[str, str] | None) – An options dict to be passed to the sub-environment(s) when resetting it.

Returns:

A tuple consisting of a) the reset (multi-env/multi-agent) observation dict and b) the reset (multi-env/multi-agent) infos dict. Returns the (ASYNC_RESET_REQUEST, ASYNC_RESET_REQUEST) tuple, if not supported.
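
A reset sketch, assuming the environment supports synchronous resets (so the returned dicts are real observations rather than ASYNC_RESET_REQUEST); the reset results are also recorded on the last_reset_obs and last_reset_infos attributes documented above.

    # Reset all sub-environments with a fixed seed; pass env_id to target just one.
    obs, infos = env.try_reset(seed=42)

    # The wrapper keeps the most recent reset results as attributes.
    print(env.last_reset_obs is not None, env.last_reset_infos is not None)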
