schola.scripts.ray.settings.TrainingSettings
- class schola.scripts.ray.settings.TrainingSettings(timesteps=3000, learning_rate=0.0003, minibatch_size=128, train_batch_size_per_learner=256, num_sgd_iter=5, gamma=0.99)[source]
Bases: object
Dataclass for generic training settings used in the RLlib training process. This class defines the parameters for training, including the number of timesteps, learning rate, minibatch size, and other hyperparameters that control the training process. These settings are applicable to any RLlib algorithm and can be customized based on the specific requirements of the training job.
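As a usage sketch (construction only; the class path, field names, and defaults are taken from the signature above, while the overridden values are purely illustrative):

```python
from schola.scripts.ray.settings import TrainingSettings

# Minimal sketch: override a few hyperparameters, keep the rest at their defaults.
settings = TrainingSettings(
    timesteps=100_000,    # illustrative value; the default is 3000
    learning_rate=1e-4,   # smaller steps for slower but more stable learning
    gamma=0.995,          # weight future rewards slightly more heavily
)
print(settings.minibatch_size)  # untouched fields keep their defaults, e.g. 128
```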
Methods

__init__([timesteps, learning_rate, …])

populate_arg_group(args_group)

Attributes

gamma: The discount factor for the reinforcement learning algorithm.
learning_rate: The learning rate for any chosen algorithm.
minibatch_size: The size of the minibatch for training.
num_sgd_iter: The number of stochastic gradient descent (SGD) iterations for each batch.
timesteps: The number of timesteps to train for.
train_batch_size_per_learner: The number of samples given to each learner during training.
- Parameters:
  - timesteps (int)
  - learning_rate (float)
  - minibatch_size (int)
  - train_batch_size_per_learner (int)
  - num_sgd_iter (int)
  - gamma (float)
- __init__(timesteps=3000, learning_rate=0.0003, minibatch_size=128, train_batch_size_per_learner=256, num_sgd_iter=5, gamma=0.99)
- gamma: float = 0.99
The discount factor for the reinforcement learning algorithm. This is used to calculate the present value of future rewards. A value of 0.99 means that future rewards are discounted by 1% for each time step into the future. This helps to balance the importance of immediate versus future rewards in the training process. A value closer to 1.0 will prioritize future rewards more heavily, while a value closer to 0 will prioritize immediate rewards.
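A quick numeric illustration of the discounting (generic reinforcement-learning arithmetic, not Schola- or RLlib-specific code):

```python
# With gamma = 0.99, a reward received k steps in the future contributes
# gamma**k of its value to the discounted return.
gamma = 0.99
rewards = [1.0, 1.0, 1.0, 1.0]  # one reward per future timestep
discounted_return = sum(r * gamma**k for k, r in enumerate(rewards))
print(discounted_return)  # 1 + 0.99 + 0.9801 + 0.970299 ≈ 3.94
```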
- learning_rate: float = 0.0003
The learning rate for any chosen algorithm. This controls how much to adjust the model weights in response to the estimated error each time the model weights are updated. A smaller value means slower learning, while a larger value means faster learning.
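A one-line illustration of what the learning rate scales in a plain gradient-descent update (a generic sketch, not RLlib internals):

```python
# new_weight = old_weight - learning_rate * gradient
learning_rate = 0.0003
weight, gradient = 0.5, 2.0
weight -= learning_rate * gradient  # 0.5 - 0.0003 * 2.0 = 0.4994
```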
- minibatch_size: int = 128
The size of the minibatch for training. This is the number of samples used in each iteration of training to update the model weights. A larger batch size can lead to more stable estimates of the gradient, but requires more memory and can slow down training if too large.
- property name: str
- num_sgd_iter: int = 5
The number of stochastic gradient descent (SGD) iterations for each batch. This is the number of times to update the model weights using the samples in the minibatch. More iterations can lead to better convergence, but also increases the training time.
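A sketch of the bookkeeping that the batch-related settings describe, using the default values (generic SGD-epoch semantics, not the actual RLlib implementation):

```python
train_batch_size_per_learner = 256
minibatch_size = 128
num_sgd_iter = 5

batch = list(range(train_batch_size_per_learner))  # stand-in for collected samples
for sgd_iter in range(num_sgd_iter):               # revisit the same batch 5 times
    for start in range(0, len(batch), minibatch_size):
        minibatch = batch[start:start + minibatch_size]  # 128 samples per update
        # gradients would be computed on this minibatch and the weights updated here
```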
- classmethod populate_arg_group(args_group)[source]
- timesteps: int = 3000
The number of timesteps to train for. This is the total number of timesteps to run during training.
- train_batch_size_per_learner: int = 256
The number of samples given to each learner during training. Must be divisible by minibatch_size.
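With the defaults this constraint holds, giving two minibatches per pass over the train batch:

```python
assert 256 % 128 == 0              # train_batch_size_per_learner divisible by minibatch_size
minibatches_per_pass = 256 // 128  # = 2
```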