bayesflow.helper_classes module#
- class bayesflow.helper_classes.EarlyStopper(patience=5, tolerance=0.05)[source]#
Bases:
object
This class will track the total validation loss and trigger an early stopping recommendation based on its hyperparameters.
- Parameters:
- patienceint, optional, default: 5
How many successive times the tolerance value is reached before triggering an early stopping recommendation.
- tolerancefloat, optional, default: 0.05
The minimum reduction of validation loss to be considered significant.
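The patience/tolerance interaction can be illustrated with a minimal sketch. This is not the library's actual implementation; the class and method names below are hypothetical, and it simply counts how often successive validation losses fail to improve by more than tolerance:

```python
class EarlyStopperSketch:
    """Illustrative sketch (not bayesflow's implementation): recommend
    stopping once the validation loss has failed to improve by more than
    `tolerance` for `patience` successive checks."""

    def __init__(self, patience=5, tolerance=0.05):
        self.patience = patience
        self.tolerance = tolerance
        self.history = []
        self._wait = 0

    def update_and_recommend(self, current_val_loss):
        """Record a validation loss; return True if training should stop."""
        self.history.append(current_val_loss)
        if len(self.history) < 2:
            return False
        # Improvement relative to the previous validation loss
        improvement = self.history[-2] - self.history[-1]
        if improvement < self.tolerance:
            self._wait += 1  # another non-significant improvement
        else:
            self._wait = 0   # significant improvement resets the counter
        return self._wait >= self.patience
```

With `patience=2` and `tolerance=0.05`, two consecutive epochs whose loss drops by less than 0.05 would trigger the recommendation.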
- class bayesflow.helper_classes.LossHistory[source]#
Bases:
object
Helper class to keep track of losses during training.
- add_entry(epoch, current_loss)[source]#
Adds loss entry for current epoch into internal memory data structure.
- add_val_entry(epoch, val_loss)[source]#
Add validation entry to loss structure. Assume
loss_names
already exists as an attribute, so no attempt will be made to create names.
- file_name = 'history'#
- flush()[source]#
Returns current history and removes all existing loss history, but keeps loss names.
- get_plottable()[source]#
Returns the losses as a nicely formatted pandas DataFrame if only train losses were collected; otherwise, returns a dict of DataFrames.
- get_running_losses(epoch)[source]#
Compute and return running means of the losses for current epoch.
- save_to_file(file_path, max_to_keep)[source]#
Saves a LossHistory object to a pickled dictionary in file_path. If max_to_keep saved loss history files are found in file_path, the oldest is deleted before a new one is saved.
- property total_loss#
- property total_val_loss#
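The running means returned by get_running_losses can be sketched as a cumulative average over the loss values recorded so far in an epoch. This is an illustrative stand-in, not the class's internal code:

```python
import numpy as np

def running_means(epoch_losses):
    """Illustrative sketch: given the loss values recorded so far in an
    epoch, return the running mean after each recorded step."""
    losses = np.asarray(epoch_losses, dtype=float)
    # Cumulative sum divided by the number of values seen so far
    return np.cumsum(losses) / np.arange(1, len(losses) + 1)
```

For example, losses of 2.0, 4.0, 6.0 yield running means of 2.0, 3.0, 4.0.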
- class bayesflow.helper_classes.MemoryReplayBuffer(capacity_in_batches=500)[source]#
Bases:
object
Implements a memory replay buffer for simulation-based inference.
Creates a circular buffer following the logic of experience replay.
- Parameters:
- capacity_in_batchesint, optional, default: 500
The capacity of the buffer in batches of simulations. Could potentially grow very large, so make sure you pick a reasonable number!
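The circular, experience-replay behavior described above can be sketched with a fixed-capacity deque: once the buffer holds capacity_in_batches batches, storing a new one silently drops the oldest. The class below is a hypothetical sketch, not the library's buffer:

```python
import random
from collections import deque

class ReplayBufferSketch:
    """Illustrative circular buffer with a fixed capacity in batches."""

    def __init__(self, capacity_in_batches=500):
        # deque with maxlen drops the oldest element when full
        self._buffer = deque(maxlen=capacity_in_batches)

    def store(self, batch):
        self._buffer.append(batch)

    def sample(self):
        """Return a uniformly sampled batch from the buffer."""
        return random.choice(list(self._buffer))
```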
- class bayesflow.helper_classes.MultiSimulationDataset(forward_dict, batch_size, buffer_size=1024)[source]#
Bases:
object
Helper class for model comparison training with multiple generative models.
Will create multiple SimulationDataset instances, each parsing their own simulation dictionaries and returning these as expected by BayesFlow amortizers.
Creates a wrapper holding multiple tf.data.Dataset instances for offline training in an amortized model comparison context.
- Parameters:
- forward_dictdict
The outputs from a MultiGenerativeModel or a custom function, stored in a dictionary with at least the following keys:
model_outputs - a list with length equal to the number of models, each element representing a batched output of a single model
model_indices - a list with integer model indices, which will later be one-hot-encoded for the model comparison learning problem.
- batch_sizeint
The total number of simulations from all models in a given batch. The batch size per model will be calculated as batch_size // num_models.
- buffer_sizeint, optional, default: 1024
The buffer size for shuffling elements in a
tf.data.Dataset
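Two mechanics mentioned above, the per-model batch size (integer division of the total batch size) and the one-hot encoding of model_indices, can be sketched without any library code; both function names here are hypothetical:

```python
import numpy as np

def per_model_batch_size(batch_size, num_models):
    """Batch size each model contributes, as stated above:
    batch_size // num_models."""
    if batch_size < num_models:
        raise ValueError("batch_size must be at least num_models")
    return batch_size // num_models

def one_hot_indices(model_indices, num_models):
    """Illustrative one-hot encoding of integer model indices, as used
    for the model comparison learning problem."""
    indices = np.asarray(model_indices)
    # Row i of the identity matrix is the one-hot vector for index i
    return np.eye(num_models)[indices]
```

For instance, with `batch_size=32` and three models, each model contributes 10 simulations per batch.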
- class bayesflow.helper_classes.RegressionLRAdjuster(optimizer, period=1000, wait_between_fits=10, patience=10, tolerance=-0.05, reduction_factor=0.25, cooldown_factor=2, num_resets=3, **kwargs)[source]#
Bases:
object
This class will compute the slope of the loss trajectory and inform learning rate decay.
Creates an instance which tracks the slope of the loss trajectory according to the specified hyperparameters and then issues an optional stopping suggestion.
- Parameters:
- optimizertf.keras.optimizers.Optimizer instance
An optimizer implementing a lr() method
- periodint, optional, default: 1000
How many loss values from the past to consider
- wait_between_fitsint, optional, default: 10
How many backpropagation updates to wait between two successive fits
- patienceint, optional, default: 10
How many successive times the tolerance value is reached before lr update.
- tolerancefloat, optional, default: -0.05
The minimum slope to be considered substantial for training.
- reduction_factorfloat in [0, 1], optional, default: 0.25
The factor by which the learning rate is reduced upon hitting the tolerance threshold patience number of times.
- cooldown_factorfloat, optional, default: 2
The factor by which the period is multiplied to arrive at a cooldown period.
- num_resetsint, optional, default: 3
How many times to reduce the learning rate before issuing an optional stopping suggestion
- **kwargsdict, optional, default {}
Additional keyword arguments passed to the HuberRegression class.
- file_name = 'lr_adjuster'#
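The slope check that informs learning rate decay can be sketched as fitting a line to the most recent loss values and comparing its slope to tolerance. A plain least-squares fit stands in here for the Huber regression mentioned above, and both function names are hypothetical:

```python
import numpy as np

def loss_slope(recent_losses):
    """Illustrative sketch: slope of a straight line fitted to the
    most recent loss values (ordinary least squares, standing in for
    the robust Huber regression used by the class)."""
    y = np.asarray(recent_losses, dtype=float)
    x = np.arange(len(y), dtype=float)
    slope, _intercept = np.polyfit(x, y, deg=1)
    return slope

def should_reduce_lr(recent_losses, tolerance=-0.05):
    """A slope above (less negative than) `tolerance` means the loss
    is no longer decreasing substantially."""
    return loss_slope(recent_losses) > tolerance
```

A steadily decreasing trajectory such as 10, 8, 6, 4 has slope -2 and passes the check, while a flat trajectory triggers the reduction.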
- class bayesflow.helper_classes.SimulationDataset(forward_dict, batch_size, buffer_size=1024)[source]#
Bases:
object
Helper class to create a tensorflow.data.Dataset which parses simulation dictionaries and returns simulation dictionaries as expected by BayesFlow amortizers.
Creates a wrapper holding a tf.data.Dataset instance for offline training in an amortized estimation context.
- Parameters:
- forward_dictdict
The outputs from a
GenerativeModel
or a custom function, stored in a dictionary with at least the following keys:
sim_data - an array representing the batched output of the model
prior_draws - an array with draws generated from the model's prior
- batch_sizeint
The total number of simulations in a given batch.
- buffer_sizeint, optional, default: 1024
The buffer size for shuffling elements in a
tf.data.Dataset
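The parsing-and-batching behavior described above can be sketched in plain Python: shuffle the simulations in a forward_dict, then yield minibatches keyed the same way. This mimics what the wrapped tf.data.Dataset does during offline training; the function name is hypothetical:

```python
import numpy as np

def iterate_batches(forward_dict, batch_size, rng=None):
    """Illustrative sketch: yield shuffled minibatches from a dict of
    equally-sized arrays (e.g. `sim_data` and `prior_draws`)."""
    if rng is None:
        rng = np.random.default_rng()
    n = forward_dict["sim_data"].shape[0]
    order = rng.permutation(n)  # one shuffled pass over all simulations
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        # Slice every array in the dict with the same batch indices
        yield {key: val[idx] for key, val in forward_dict.items()}
```

With 10 simulations and `batch_size=4`, this yields three batches of sizes 4, 4, and 2, covering every simulation exactly once per pass.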
- class bayesflow.helper_classes.SimulationMemory(stores_raw=True, capacity_in_batches=50)[source]#
Bases:
object
Helper class to keep track of a pre-determined number of simulations during training.
- file_name = 'memory'#