Adapter#

class bayesflow.adapters.Adapter(transforms: Sequence[Transform] | None = None)[source]#

Bases: MutableSequence[Transform]

Defines an adapter to apply various transforms to data.

Where possible, the transforms also supply an inverse transform.

Parameters:
transforms : Sequence[Transform], optional

The sequence of transforms to execute.

static create_default(inference_variables: Sequence[str]) → Adapter[source]#

Create an adapter with a set of default transforms.

Parameters:
inference_variables : Sequence of str

The names of the variables to be inferred by an estimator.

Returns:
An initialized Adapter with a set of default transforms.
classmethod from_config(config: dict, custom_objects=None) → Adapter[source]#
get_config() → dict[source]#
forward(data: dict[str, any], *, stage: str = 'inference', log_det_jac: bool = False, **kwargs) → dict[str, ndarray] | tuple[dict[str, ndarray], dict[str, ndarray]][source]#

Apply the transforms in the forward direction.

Parameters:
data : dict

The data to be transformed.

stage : str, one of [“training”, “validation”, “inference”]

The stage the function is called in.

log_det_jac : bool, optional

Whether to return the log determinant of the Jacobian of the transforms.

**kwargs : dict

Additional keyword arguments passed to each transform.

Returns:
dict | tuple[dict, dict]

The transformed data or tuple of transformed data and log determinant of the Jacobian.
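Conceptually, a forward pass with log_det_jac=True returns the transformed data together with the accumulated log determinant of the Jacobian of the applied transforms. The following is a minimal plain-NumPy sketch of this idea for a single log transform; the function names are illustrative and this is not the Adapter implementation.

```python
import numpy as np

# Sketch of a forward/inverse transform pair with a log-det-Jacobian,
# using an elementwise log transform as the example.
def forward_log(data):
    out = {k: np.log(v) for k, v in data.items()}
    # d/dx log(x) = 1/x, so the log-determinant is -sum(log(x)) per variable
    ldj = {k: -np.sum(np.log(v)) for k, v in data.items()}
    return out, ldj

def inverse_log(data):
    return {k: np.exp(v) for k, v in data.items()}

data = {"x": np.array([1.0, np.e])}
out, ldj = forward_log(data)
roundtrip = inverse_log(out)
```

Applying the inverse after the forward recovers the original data, which is the property the Adapter's paired transforms aim to provide where possible.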

inverse(data: dict[str, ndarray], *, stage: str = 'inference', log_det_jac: bool = False, **kwargs) → dict[str, ndarray] | tuple[dict[str, ndarray], dict[str, ndarray]][source]#

Apply the transforms in the inverse direction.

Parameters:
data : dict

The data to be transformed.

stage : str, one of [“training”, “validation”, “inference”]

The stage the function is called in.

log_det_jac : bool, optional

Whether to return the log determinant of the Jacobian of the transforms.

**kwargs : dict

Additional keyword arguments passed to each transform.

Returns:
dict | tuple[dict, dict]

The transformed data or tuple of transformed data and log determinant of the Jacobian.

__call__(data: Mapping[str, any], *, inverse: bool = False, stage='inference', **kwargs) → dict[str, ndarray] | tuple[dict[str, ndarray], dict[str, ndarray]][source]#

Apply the transforms in the given direction.

Parameters:
data : Mapping[str, any]

The data to be transformed.

inverse : bool, optional

If False, apply the forward transform, else apply the inverse transform (default False).

stage : str, one of [“training”, “validation”, “inference”]

The stage the function is called in.

**kwargs

Additional keyword arguments passed to each transform.

Returns:
dict | tuple[dict, dict]

The transformed data or tuple of transformed data and log determinant of the Jacobian.

append(value: Transform) → Adapter[source]#

Append a transform to the list of transforms.

Parameters:
value : Transform

The transform to be added.

extend(values: Sequence[Transform]) → Adapter[source]#

Extend the adapter with a sequence of transforms.

Parameters:
values : Sequence of Transform

The additional transforms to extend the adapter.

insert(index: int, value: Transform | Sequence[Transform]) → Adapter[source]#

Insert a transform at a given index.

Parameters:
index : int

The index to insert at.

value : Transform or Sequence of Transform

The transform or transforms to insert.

add_transform(value: Transform) → Adapter#

Append a transform to the list of transforms.

Parameters:
value : Transform

The transform to be added.

apply(include: str | Sequence[str] = None, *, forward: ufunc | str, inverse: ufunc | str = None, predicate: Predicate = None, exclude: str | Sequence[str] = None, **kwargs)[source]#

Append a NumpyTransform to the adapter.

Parameters:
forward : str or np.ufunc

The name of the NumPy function to use for the forward transformation.

inverse : str or np.ufunc, optional

The name of the NumPy function to use for the inverse transformation. By default, the inverse is inferred from the forward argument for supported methods. You can find the supported methods in INVERSE_METHODS.

predicate : Predicate, optional

Function that indicates which variables should be transformed.

include : str or Sequence of str, optional

Names of variables to include in the transform.

exclude : str or Sequence of str, optional

Names of variables to exclude from the transform.

**kwargs : dict

Additional keyword arguments passed to the transform.
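The include/exclude selection can be pictured in plain Python: the chosen ufunc runs only on variables selected by include and not listed in exclude. The helper below mimics that selection logic for illustration; it is not the NumpyTransform implementation.

```python
import numpy as np

# Illustrative sketch of include/exclude selection: apply a NumPy ufunc
# only to the selected keys of a data dictionary.
def apply_ufunc(data, forward, include=None, exclude=None):
    include = set(data if include is None else ([include] if isinstance(include, str) else include))
    exclude = set([] if exclude is None else ([exclude] if isinstance(exclude, str) else exclude))
    return {k: (forward(v) if k in include - exclude else v) for k, v in data.items()}

data = {"x": np.array([1.0, 4.0]), "y": np.array([9.0])}
out = apply_ufunc(data, np.sqrt, include=["x", "y"], exclude="y")
```

Here "x" is square-rooted while "y" passes through unchanged because it is excluded.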

apply_serializable(include: str | Sequence[str] = None, *, forward: Callable[[ndarray, ...], ndarray], inverse: Callable[[ndarray, ...], ndarray], predicate: Predicate = None, exclude: str | Sequence[str] = None, **kwargs)[source]#

Append a SerializableCustomTransform to the adapter.

Parameters:
forward : function, no lambda

Registered serializable function to transform the data in the forward pass. For the adapter to be serializable, this function has to be serializable as well (see Notes). Therefore, only proper functions and no lambda functions can be used here.

inverse : function, no lambda

Registered serializable function to transform the data in the inverse pass. For the adapter to be serializable, this function has to be serializable as well (see Notes). Therefore, only proper functions and no lambda functions can be used here.

predicate : Predicate, optional

Function that indicates which variables should be transformed.

include : str or Sequence of str, optional

Names of variables to include in the transform.

exclude : str or Sequence of str, optional

Names of variables to exclude from the transform.

**kwargs : dict

Additional keyword arguments passed to the transform.

Raises:
ValueError

When the provided functions are not registered serializable functions.

Notes

Important: The forward and inverse functions have to be registered with Keras. To do so, use the @keras.saving.register_keras_serializable decorator. They must also be registered (and identical) when loading the adapter at a later point in time.

Examples

The example below shows how to use the keras.saving.register_keras_serializable decorator to register functions with Keras. Note that for this simple example, one usually would use the simpler apply() method.

>>> import keras
>>> import bayesflow as bf
>>>
>>> @keras.saving.register_keras_serializable("custom")
... def forward_fn(x):
...     return x**2
>>>
>>> @keras.saving.register_keras_serializable("custom")
... def inverse_fn(x):
...     return x**0.5
>>>
>>> adapter = bf.Adapter().apply_serializable(
...     "x",
...     forward=forward_fn,
...     inverse=inverse_fn,
... )
as_set(keys: str | Sequence[str])[source]#

Append an AsSet transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to apply the transform to.

as_time_series(keys: str | Sequence[str])[source]#

Append an AsTimeSeries transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to apply the transform to.

broadcast(keys: str | Sequence[str], *, to: str, expand: str | int | tuple = 'left', exclude: int | tuple = -1, squeeze: int | tuple = None)[source]#

Append a Broadcast transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to apply the transform to.

to : str

Name of the data variable to broadcast to.

expand : str or int or tuple, optional

Where new dimensions should be added to match the number of dimensions of to. Can be “left”, “right”, or an integer or tuple containing the indices of the new dimensions. The latter is needed if you want to insert a dimension in the middle, which is required for more advanced cases. By default, we expand left.

exclude : int or tuple, optional

Which dimensions (of the dimensions after expansion) should retain their size, rather than being broadcast to the corresponding dimension size of to. By default, we exclude the last dimension (usually the data dimension) from broadcasting.

squeeze : int or tuple, optional

Axis to squeeze after broadcasting.

Notes

Important: Do not broadcast to variables that are used as inference variables (i.e., parameters to be inferred by the networks). The adapter will work during training but then fail during inference because the variable being broadcasted to is not available.
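The expand/exclude mechanics can be illustrated in plain NumPy: expand new axes on the left, then broadcast every axis except the excluded last one. The shapes below are hypothetical and the code is an illustration, not the Broadcast transform itself.

```python
import numpy as np

# Sketch of broadcasting a source variable "to" a batched target variable.
target = np.zeros((32, 10, 3))   # e.g. (batch, time, data_dim)
source = np.ones((10, 2))        # missing the batch axis, own data_dim of 2

expanded = source[np.newaxis, ...]               # expand="left" -> (1, 10, 2)
shape = target.shape[:-1] + expanded.shape[-1:]  # exclude=-1 keeps size 2
result = np.broadcast_to(expanded, shape)        # -> (32, 10, 2)
```

Note how the last (data) dimension keeps its own size 2 instead of being forced to the target's size 3, which is what the default exclude=-1 expresses.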

clear()[source]#

Remove all transforms from the adapter.

concatenate(keys: str | Sequence[str], *, into: str, axis: int = -1)[source]#

Append a Concatenate transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to concatenate.

into : str

The name of the resulting variable.

axis : int, optional

Along which axis to concatenate the keys. The last axis is used by default.
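The result of concatenation is simply the named variables joined along the given axis into one new variable, as this plain-NumPy illustration (not the Concatenate transform itself) shows:

```python
import numpy as np

# Two variables with matching batch dimension, joined along the last axis.
data = {"a": np.zeros((8, 2)), "b": np.ones((8, 3))}
data["x"] = np.concatenate([data["a"], data["b"]], axis=-1)  # shape (8, 5)
```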

convert_dtype(from_dtype: str, to_dtype: str, *, predicate: Predicate = None, include: str | Sequence[str] = None, exclude: str | Sequence[str] = None)[source]#

Append a ConvertDType transform to the adapter. See also map_dtype().

Parameters:
from_dtype : str

Original dtype

to_dtype : str

Target dtype

predicate : Predicate, optional

Function that indicates which variables should be transformed.

include : str or Sequence of str, optional

Names of variables to include in the transform.

exclude : str or Sequence of str, optional

Names of variables to exclude from the transform.

constrain(keys: str | Sequence[str], *, lower: int | float | ndarray = None, upper: int | float | ndarray = None, method: str = 'default', inclusive: str = 'both', epsilon: float = 1e-15)[source]#

Append a Constrain transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to constrain.

lower : int or float or np.ndarray, optional

Lower bound for named data variable.

upper : int or float or np.ndarray, optional

Upper bound for named data variable.

method : str, optional

Method by which to shrink the network prediction space to the specified bounds. Choose from:
- Double-bounded methods: sigmoid, expit (default: sigmoid)
- Lower-bound-only methods: softplus, exp (default: softplus)
- Upper-bound-only methods: softplus, exp (default: softplus)

inclusive : {‘both’, ‘lower’, ‘upper’, ‘none’}, optional

Indicates which bounds are inclusive (or exclusive).
- “both” (default): Both lower and upper bounds are inclusive.
- “lower”: Lower bound is inclusive, upper bound is exclusive.
- “upper”: Lower bound is exclusive, upper bound is inclusive.
- “none”: Both lower and upper bounds are exclusive.

epsilon : float, optional

Small value to ensure inclusive bounds are not violated. Current default is 1e-15 as this ensures finite outcomes with the default transformations applied to data exactly at the boundaries.
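For the double-bounded case, the default sigmoid method can be pictured as a scaled sigmoid and its logit inverse. The sketch below assumes the standard scaled-sigmoid construction and uses illustrative names; it is not the Constrain transform's code.

```python
import numpy as np

# Map a bounded variable to the unconstrained real line and back.
def to_unconstrained(x, lower, upper):
    p = (x - lower) / (upper - lower)
    return np.log(p) - np.log1p(-p)  # logit

def to_constrained(z, lower, upper):
    return lower + (upper - lower) / (1.0 + np.exp(-z))  # scaled sigmoid

x = np.array([0.25, 0.5, 0.75])
z = to_unconstrained(x, 0.0, 1.0)
x_back = to_constrained(z, 0.0, 1.0)
```

The round trip recovers the original values, and the midpoint of the interval maps to zero on the unconstrained scale.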

drop(keys: str | Sequence[str])[source]#

Append a Drop transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to drop.

expand_dims(keys: str | Sequence[str], *, axis: int | tuple)[source]#

Append an ExpandDims transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to expand.

axis : int or tuple

The axis to expand.

group(keys: Sequence[str], into: str, *, prefix: str = '')[source]#

Append a Group transform to the adapter.

Groups the given variables as a dictionary in the key into. As most transforms do not support nested structures, this should usually be the last transform in the adapter.

Parameters:
keys : Sequence of str

The names of the variables to group together.

into : str

The name of the variable to store the grouped variables in.

prefix : str, optional

An optional common prefix of the variable names before grouping, which will be removed after grouping.

Raises:
ValueError

If a prefix is specified, but a provided key does not start with the prefix.

ungroup(key: str, *, prefix: str = '')[source]#

Append an Ungroup transform to the adapter.

Ungroups the variables in key from a dictionary into individual entries. Most transforms do not support nested structures, so this can be used to flatten a nested structure. The nesting can be re-established after the transforms using the group() method.

Parameters:
key : str

The name of the variable to ungroup. The corresponding variable has to be a dictionary.

prefix : str, optional

An optional common prefix that will be added to the ungrouped variable names. This can be necessary to avoid duplicate names.
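Conceptually, group and ungroup are inverse dictionary reshapes: one nests selected keys under a single key, the other flattens them back out. A plain-Python sketch (not the actual transforms):

```python
# group: nest selected keys under one key, stripping a common prefix.
def group(data, keys, into, prefix=""):
    grouped = {k[len(prefix):]: data[k] for k in keys}
    rest = {k: v for k, v in data.items() if k not in keys}
    return {**rest, into: grouped}

# ungroup: flatten a nested dictionary back out, re-adding the prefix.
def ungroup(data, key, prefix=""):
    flat = {prefix + k: v for k, v in data[key].items()}
    rest = {k: v for k, v in data.items() if k != key}
    return {**rest, **flat}

data = {"p_mu": 1, "p_sigma": 2, "x": 3}
nested = group(data, ["p_mu", "p_sigma"], into="p", prefix="p_")
flat = ungroup(nested, "p", prefix="p_")
```

Applying ungroup after group with the same prefix recovers the original flat dictionary.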

keep(keys: str | Sequence[str])[source]#

Append a Keep transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to keep.

log(keys: str | Sequence[str], *, p1: bool = False)[source]#

Append a Log transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to transform.

p1 : bool, optional

If True, add 1 to the input before taking the logarithm (i.e., compute log1p).

map_dtype(keys: str | Sequence[str], to_dtype: str)[source]#

Append a ConvertDType transform to the adapter. See also convert_dtype().

Parameters:
keys : str or Sequence of str

The names of the variables to transform.

to_dtype : str

Target dtype

nnpe(keys: str | Sequence[str], *, spike_scale: float | None = None, slab_scale: float | None = None, per_dimension: bool = True, seed: int | None = None)[source]#

Append an NNPE transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to transform.

spike_scale : float or np.ndarray or None, default=None

The scale of the spike (Normal) distribution. Automatically determined if None.

slab_scale : float or np.ndarray or None, default=None

The scale of the slab (Cauchy) distribution. Automatically determined if None.

per_dimension : bool, default=True

If true, noise is applied per dimension of the last axis of the input data. If false, noise is applied globally.

seed : int or None

The seed for the random number generator. If None, a random seed is used.

one_hot(keys: str | Sequence[str], num_classes: int)[source]#

Append a OneHot transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to transform.

num_classes : int

Number of classes for the encoding.
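One-hot encoding turns an integer label into a vector with a single 1 at the label's index. A plain-NumPy illustration of what a OneHot transform produces (not the transform's code):

```python
import numpy as np

labels = np.array([0, 2, 1])
num_classes = 3
# Row k of the identity matrix is the one-hot vector for class k.
encoded = np.eye(num_classes)[labels]  # shape (3, 3)
```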

random_subsample(key: str, *, sample_size: int | float, axis: int = -1)[source]#

Append a RandomSubsample transform to the adapter.

Parameters:
key : str

The name of the variable to subsample.

sample_size : int or float

The number of samples to draw, or a fraction between 0 and 1 of the total number of samples to draw.

axis : int, optional

Which axis to draw samples over. The last axis is used by default.
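Random subsampling along an axis amounts to drawing indices without replacement and taking them along that axis. A plain-NumPy sketch of this behavior (illustrative, not the RandomSubsample transform):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(20.0).reshape(4, 5)
sample_size = 3
# Draw sample_size distinct indices along the last axis.
idx = rng.choice(x.shape[-1], size=sample_size, replace=False)
subsampled = np.take(x, idx, axis=-1)  # shape (4, 3)
```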

rename(from_key: str, to_key: str)[source]#

Append a Rename transform to the adapter.

Parameters:
from_key : str

Variable name that should be renamed.

to_key : str

New variable name.

scale(keys: str | Sequence[str], by: float | ndarray)[source]#

Append a Scale transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to scale.

by : float or np.ndarray

The factor to scale the variables by.

shift(keys: str | Sequence[str], by: float | ndarray)[source]#

Append a Shift transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to shift.

by : float or np.ndarray

The offset to shift the variables by.

split(key: str, *, into: Sequence[str], indices_or_sections: int | Sequence[int] = None, axis: int = -1)[source]#

Append a Split transform to the adapter.

Parameters:
key : str

The name of the variable to split.

into : Sequence of str

The names of the resulting variables.

indices_or_sections : int or Sequence of int, optional

How to split the variable along the axis, as in np.split(): an integer number of equal sections, or a sequence of indices at which to split.

axis : int, optional

Along which axis to split. The last axis is used by default.
squeeze(keys: str | Sequence[str], *, axis: int | Sequence[int])[source]#

Append a Squeeze transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to squeeze.

axis : int or Sequence of int

The axis (or axes) to squeeze. As the number of batch dimensions might change, we advise using negative numbers (i.e., indexing from the end instead of the start).

sqrt(keys: str | Sequence[str])[source]#

Append a Sqrt transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to transform.

standardize(include: str | Sequence[str] = None, *, predicate: Predicate = None, exclude: str | Sequence[str] = None, **kwargs)[source]#

Append a Standardize transform to the adapter.

Parameters:
predicate : Predicate, optional

Function that indicates which variables should be transformed.

include : str or Sequence of str, optional

Names of variables to include in the transform.

exclude : str or Sequence of str, optional

Names of variables to exclude from the transform.

**kwargs

Additional keyword arguments passed to the transform.
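The core of standardization is subtracting a mean and dividing by a standard deviation. (The actual transform estimates these statistics from the training data; here they are computed directly from a small array, purely for illustration.)

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0])
# Standardize to zero mean and unit standard deviation.
standardized = (x - x.mean()) / x.std()
```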

take(include: str | Sequence[str] = None, *, indices: Sequence[int], axis: int = -1, predicate: Predicate = None, exclude: str | Sequence[str] = None)[source]#

Append a Take transform to the adapter.

Parameters:
include : str or Sequence of str, optional

Names of variables to include in the transform.

indices : Sequence of int

Which indices to take from the data.

axis : int, optional

Which axis to take from. The last axis is used by default.

predicate : Predicate, optional

Function that indicates which variables should be transformed.

exclude : str or Sequence of str, optional

Names of variables to exclude from the transform.

to_array(include: str | Sequence[str] = None, *, predicate: Predicate = None, exclude: str | Sequence[str] = None, **kwargs)[source]#

Append a ToArray transform to the adapter.

Parameters:
predicate : Predicate, optional

Function that indicates which variables should be transformed.

include : str or Sequence of str, optional

Names of variables to include in the transform.

exclude : str or Sequence of str, optional

Names of variables to exclude from the transform.

**kwargs : dict

Additional keyword arguments passed to the transform.

to_dict()[source]#
nan_to_num(keys: str | Sequence[str], default_value: float = 0.0, return_mask: bool = False, mask_prefix: str = 'mask')[source]#

Append a NanToNum transform to the adapter.

Parameters:
keys : str or Sequence of str

The names of the variables to clean / mask.

default_value : float, optional

Value to substitute wherever data is NaN. Defaults to 0.0.

return_mask : bool, optional

If True, encode a binary missingness mask alongside the data. Defaults to False.

mask_prefix : str, optional

Prefix for the mask key in the output dictionary. Defaults to ‘mask’. If the mask key already exists, a ValueError is raised to avoid overwriting existing masks.
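The cleaning-plus-mask behavior described above can be sketched in plain NumPy: replace NaNs with the default value and, optionally, record where data was observed. This is an illustration, not the NanToNum transform's code.

```python
import numpy as np

x = np.array([1.0, np.nan, 3.0])
mask = np.isfinite(x).astype(np.float32)  # 1 where observed, 0 where NaN
cleaned = np.where(np.isnan(x), 0.0, x)   # default_value = 0.0
```

The mask lets downstream networks distinguish a genuine zero from a substituted missing value.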

count(value) → int#

Return the number of occurrences of value.

index(value[, start[, stop]]) → int#

Return the first index of value. Raises ValueError if the value is not present.

Supporting the start and stop arguments is optional, but recommended.

pop([index]) → item#

Remove and return the item at index (default last). Raises IndexError if the list is empty or the index is out of range.

remove(value)#

Remove the first occurrence of value. Raises ValueError if the value is not present.

reverse()#

Reverse the transforms in place.