ScoringRule#

class bayesflow.scores.ScoringRule(subnets: dict[str, str | type] = None, subnets_kwargs: dict[str, dict] = None, links: dict[str, str | type] = None)[source]#

Bases: object

Base class for scoring rules.

Scoring rules evaluate the quality of statistical predictions based on the values that materialize when sampling from the true distribution. By minimizing an expected score, estimates with different properties can be obtained.

To define a custom ScoringRule, inherit from this class and override the score method. For proper serialization, any new constructor arguments must be handled in a get_config method.

Estimates are typically parameterized by projection heads consisting of a neural network component and a link to project into the correct output space.

A ScoringRule can score estimates consisting of multiple parts. See MultivariateNormalScore for an example of a ParametricDistributionScore. That score evaluates an estimated mean and covariance simultaneously.
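The subclassing pattern can be illustrated without the library. The following is a rough, self-contained sketch in plain Python (not BayesFlow's actual base class): a hypothetical MeanScoreSketch overrides score with a weighted squared error, whose expectation is minimized by the mean, and implements get_config for its (empty) set of constructor arguments.

```python
class ScoringRuleSketch:
    """Illustrative stand-in for bayesflow.scores.ScoringRule."""

    def score(self, estimates, targets, weights=None):
        raise NotImplementedError

    def get_config(self):
        # any new constructor arguments would be serialized here
        return {}


class MeanScoreSketch(ScoringRuleSketch):
    """Squared error: its expected value is minimized by the mean."""

    def score(self, estimates, targets, weights=None):
        values = estimates["value"]
        if weights is None:
            weights = [1.0] * len(targets)
        total = sum(w * (v - t) ** 2 for v, t, w in zip(values, targets, weights))
        # aggregate over the whole batch, as the real score method does
        return total / sum(weights)
```

For instance, a constant estimate of 0.0 against targets [1.0, 3.0] yields (1 + 9) / 2 = 5.0.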

NOT_TRANSFORMING_LIKE_VECTOR_WARNING = ()#

Names of prediction heads whose estimates should trigger a warning when the adapter is applied to them in the inverse direction.

Prediction heads can output estimates in spaces other than the target distribution space. The adapter cannot straightforwardly be applied to such estimates in the inverse direction, because it is built to map vectors from the inference variable space. When subclassing ScoringRule, add the names of such heads to this tuple to warn users about difficulties with that type of estimate whenever the adapter is applied to them in the inverse direction.
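As a hedged illustration (the head name below is hypothetical, not taken from the library), a subclass would declare such heads like this:

```python
class NonVectorHeadScoreSketch:
    """Illustrative subclass: estimates under the head named
    "scale_matrix" do not transform like vectors, so applying the
    adapter to them in the inverse direction should raise a warning."""

    NOT_TRANSFORMING_LIKE_VECTOR_WARNING = ("scale_matrix",)
```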

get_config()[source]#
classmethod from_config(config)[source]#
get_head_shapes_from_target_shape(target_shape: tuple[int, ...]) → dict[str, tuple[int, ...]][source]#

Request a dictionary of names and output shapes of required heads from the score.
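As a hedged sketch of this contract (the keys and logic here are assumptions for illustration, not the library's implementation): a score estimating both a mean and a standard deviation might report one head per quantity, each matching the target shape.

```python
def head_shapes_sketch(target_shape):
    """Illustrative only: map a target shape to the output shapes
    of the heads a hypothetical mean-and-std score would require."""
    # one head per estimated quantity, each matching the target shape
    return {"mean": target_shape, "std": target_shape}
```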

get_subnet(key: str) → Layer[source]#

For a specified key, request the subnet used to project the shared condition embedding before the further projection and reshaping to the head's output shape.

If no subnet was specified for the key (e.g. upon initialization), return just an instance of keras.layers.Identity.

Parameters:
key : str

Name of head for which to request a subnet.

Returns:
subnet : keras.Layer

Subnet projecting the shared condition embedding.

get_link(key: str) → Layer[source]#

For a specified key, request a link from network output to estimation target.

If no link was specified for the key (e.g. upon initialization), return a linear activation.

Parameters:
key : str

Name of head for which to request a link.

Returns:
link : keras.Layer

Activation function linking network output to estimation target.
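Both lookups fall back to a pass-through when nothing was configured for a key. A rough pure-Python sketch of that pattern (the classes here are stand-ins, not the actual keras layers or BayesFlow internals):

```python
class IdentitySketch:
    """Stand-in for keras.layers.Identity (the get_subnet fallback)."""

    def __call__(self, x):
        return x


class LinearSketch:
    """Stand-in for a linear activation (the get_link fallback)."""

    def __call__(self, x):
        return x


def resolve(mapping, key, default_cls):
    # return an instance of the configured class for `key`,
    # falling back to `default_cls` when none was specified
    cls = (mapping or {}).get(key, default_cls)
    return cls()


subnet = resolve(None, "value", IdentitySketch)  # no subnets configured
link = resolve({}, "value", LinearSketch)        # no links configured
```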

get_head(key: str, output_shape: tuple[int, ...]) → Sequential[source]#

For a specified head key and output shape, request corresponding head network.

A head network has the following components that are called sequentially:

  1. subnet: A keras.Layer.

  2. dense: A trainable linear projection with as many units as are required by the next component.

  3. reshape: Changes the shape of the projection output to match the requirements of the next component.

  4. link: Transforms unconstrained values into a constrained space for the final estimator. See links for examples.

This method initializes the components in reverse order to meet all requirements and returns them.
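The sequential composition can be sketched with plain callables. Everything below is illustrative: the fixed "weights" in dense, the 2-by-2 target shape, and the softplus link are assumptions for the example, not what get_head actually builds.

```python
import math


def subnet(z):
    # pass-through, like keras.layers.Identity when no subnet is configured
    return z


def dense(z):
    # stand-in for the trainable linear projection (fixed "weights" here)
    return [0.5 * v for v in z]


def reshape(z):
    # flat length-4 vector -> 2 x 2 nested list
    return [z[0:2], z[2:4]]


def link(rows):
    # softplus maps unconstrained values to positive ones,
    # a typical constraint link
    return [[math.log1p(math.exp(v)) for v in row] for row in rows]


def head(z):
    # the four components are applied sequentially, as in get_head
    return link(reshape(dense(subnet(z))))
```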

Parameters:
key : str

Name of head for which to request the head network.

output_shape : Shape

The required shape of estimates for the given key, as returned by get_head_shapes_from_target_shape().

Returns:
head : keras.Sequential

Head network consisting of a learnable projection, a reshape and a link operation to parameterize estimates.

score(estimates: dict[str, Tensor], targets: Tensor, weights: Tensor = None) → Tensor[source]#

Scores a batch of probabilistic estimates against samples from the corresponding true distributions.

Parameters:
estimates : dict[str, Tensor]

Dictionary of estimates.

targets : Tensor

Array of samples from the true distribution to evaluate the estimates.

weights : Tensor, optional

Array of weights for aggregating the scores.

Returns:
numeric_score : Tensor

Negatively oriented score evaluating the estimates, aggregated for the whole batch.

Examples

The following shows how to score estimates with a MeanScore. All ScoringRules follow this pattern, only differing in the structure of the estimates dictionary.

>>> import keras
>>> from bayesflow.scores import MeanScore
>>>
>>> # batch of samples from a normal distribution
>>> samples = keras.random.normal(shape=(100,))
>>>
>>> # batch of uninformed (random) estimates
>>> bad_estimates = {"value": keras.random.uniform((100,))}
>>>
>>> # batch of estimates that are closer to the true mean
>>> better_estimates = {"value": keras.random.normal(stddev=0.1, shape=(100,))}
>>>
>>> # calculate the score
>>> scoring_rule = MeanScore()
>>> scoring_rule.score(bad_estimates, samples)
<tf.Tensor: shape=(), dtype=float32, numpy=1.2243813276290894>
>>> scoring_rule.score(better_estimates, samples)
<tf.Tensor: shape=(), dtype=float32, numpy=1.013983130455017>