schola.scripts.ray.settings.TrainingSettings
Class Definition
class schola.scripts.ray.settings.TrainingSettings(timesteps=3000, learning_rate=0.0003, minibatch_size=128, train_batch_size_per_learner=256, num_sgd_iter=5, gamma=0.99)
Bases: object
Dataclass for generic training settings used in the RLlib training process. It defines the hyperparameters that control training, including the number of timesteps, learning rate, minibatch size, and discount factor. These settings apply to any RLlib algorithm and can be customized to the requirements of a specific training job.
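A minimal construction sketch (assuming only that the class is importable as documented; the overridden values are illustrative):

    from schola.scripts.ray.settings import TrainingSettings

    # Every field has a default, so settings can be constructed empty
    # or with only the fields a job needs to override.
    settings = TrainingSettings(
        timesteps=100_000,   # train well past the 3000-step default
        learning_rate=1e-4,  # smaller steps than the 0.0003 default
        gamma=0.995,         # weight future rewards slightly more heavily
    )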
Parameters
timesteps
Type: int
learning_rate
Type: float
minibatch_size
Type: int
train_batch_size_per_learner
Type: int
num_sgd_iter
Type: int
gamma
Type: float
Attributes
gamma
Type: float
Default: 0.99
The discount factor for the reinforcement learning algorithm, used to compute the present value of future rewards. With a value of 0.99, each reward is discounted by 1% per timestep into the future, compounding multiplicatively. This balances the importance of immediate versus future rewards: a value closer to 1.0 weights future rewards more heavily, while a value closer to 0 prioritizes immediate rewards.
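As a quick worked example of the compounding discount (plain arithmetic, not part of the class API):

    gamma = 0.99
    # A reward received k timesteps in the future contributes gamma ** k
    # of its face value to the return at the current timestep.
    print(gamma ** 1)    # 0.99   -- nearly full weight one step ahead
    print(gamma ** 100)  # ~0.366 -- about a third of its weight 100 steps ahead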
learning_rate
Type: float
Default: 0.0003
The learning rate for the chosen algorithm. This controls how much the model weights are adjusted in response to the estimated error each time they are updated. A smaller value means slower but more stable learning, while a larger value means faster learning at the risk of instability.
minibatch_size
Type: int
Default: 128
The size of the minibatch for training. This is the number of samples used in each iteration of training to update the model weights. A larger batch size can lead to more stable estimates of the gradient, but requires more memory and can slow down training if too large.
name
Type: str
The name of this settings group, used to label its command-line argument group (see populate_arg_group()).
num_sgd_iter
Type: int
Default: 5
The number of stochastic gradient descent (SGD) iterations for each batch. This is the number of times the model weights are updated using the samples in the minibatch. More iterations can lead to better convergence, but also increase training time.
timesteps
Type: int
Default: 3000
The total number of environment timesteps to run during training.
train_batch_size_per_learner
Type: int
Default: 256
The number of samples in the training batch given to each learner per update. Must be evenly divisible by minibatch_size, since each training batch is split into minibatches for SGD (see the sketch below).
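A short sketch of how the batch settings interact, assuming the usual RLlib semantics where each training batch is split into minibatches and iterated over num_sgd_iter times (the update counts are illustrative, not an API guarantee):

    train_batch_size_per_learner = 256
    minibatch_size = 128
    num_sgd_iter = 5

    # The divisibility requirement ensures the batch splits into whole minibatches.
    assert train_batch_size_per_learner % minibatch_size == 0

    minibatches_per_pass = train_batch_size_per_learner // minibatch_size  # 2
    gradient_updates_per_batch = minibatches_per_pass * num_sgd_iter       # 10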
Methods
__init__
__init__(timesteps=3000, learning_rate=0.0003, minibatch_size=128, train_batch_size_per_learner=256, num_sgd_iter=5, gamma=0.99)
Return type: None
populate_arg_group
classmethod populate_arg_group(args_group)
Populates the given argparse argument group with command-line arguments for each training setting.
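A hedged sketch of wiring populate_arg_group into an argparse-based launcher (the parser setup and group title are assumptions; the exact flag names depend on what the classmethod registers):

    import argparse

    from schola.scripts.ray.settings import TrainingSettings

    parser = argparse.ArgumentParser()
    group = parser.add_argument_group("Training Settings")  # title is illustrative
    TrainingSettings.populate_arg_group(group)

    # Inspect whichever flags were registered for the dataclass fields.
    parser.print_help()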