QuantileScore#

class bayesflow.scores.QuantileScore(q: Sequence[float] = None, links=None, **kwargs)[source]#

Bases: ScoringRule

\(S(\hat \theta_i, \theta; \tau_i) = (\hat \theta_i - \theta)(\mathbf{1}_{\hat \theta_i - \theta > 0} - \tau_i)\)

Scores predicted quantiles \(\hat \theta_i\) with the quantile score so that they match the quantile levels \(\tau_i\).
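As a sketch (not the library implementation), the quantile score above can be computed directly with NumPy; the helper name and the `(batch, num_levels)` layout are assumptions for illustration:

```python
import numpy as np

def quantile_score(estimates, targets, quantile_levels):
    """Pinball loss S(theta_hat_i, theta; tau_i), averaged over batch and levels.

    estimates: array of shape (batch, num_levels), one column per quantile level.
    targets: array of shape (batch,), samples from the true distribution.
    """
    tau = np.asarray(quantile_levels)      # quantile levels tau_i
    diff = estimates - targets[:, None]    # theta_hat_i - theta
    # (theta_hat_i - theta) * (1_{theta_hat_i - theta > 0} - tau_i)
    return float(np.mean(diff * ((diff > 0).astype(float) - tau)))

# overestimating the median by 1 costs (1 - 0.5) * 1 = 0.5
print(quantile_score(np.array([[1.0]]), np.array([0.0]), [0.5]))  # → 0.5
```

Because the score is negatively oriented, lower values indicate better-calibrated quantile estimates.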

get_config()[source]#
get_head_shapes_from_target_shape(target_shape: tuple[int, ...])[source]#

Request a dictionary of names and output shapes of required heads from the score.
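For illustration only: a quantile score over \(k\) quantile levels plausibly needs one estimate per level, so a mapping from target shape to head shapes might look like the following sketch. The `"value"` key and the leading quantile axis are assumptions, not the library's documented return value:

```python
def head_shapes_for_quantiles(target_shape, quantile_levels):
    # hypothetical: a single "value" head with a leading axis over quantile levels
    return {"value": (len(quantile_levels), *target_shape)}

print(head_shapes_for_quantiles((4,), [0.1, 0.5, 0.9]))  # {'value': (3, 4)}
```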

score(estimates: dict[str, Tensor], targets: Tensor, weights: Tensor = None) Tensor[source]#

Scores a batch of probabilistic estimates of distributions based on samples of the corresponding distributions.

Parameters:

estimates : dict[str, Tensor]

Dictionary of estimates.

targets : Tensor

Array of samples from the true distribution to evaluate the estimates.

weights : Tensor

Array of weights for aggregating the scores.

Returns:

numeric_score : Tensor

Negatively oriented score evaluating the estimates, aggregated over the whole batch.

Examples

The following shows how to score estimates with a MeanScore. All ScoringRule subclasses follow this pattern, differing only in the structure of the estimates dictionary.

>>> import keras
>>> from bayesflow.scores import MeanScore
>>>
>>> # batch of samples from a normal distribution
>>> samples = keras.random.normal(shape=(100,))
>>>
>>> # batch of uninformed (random) estimates
>>> bad_estimates = {"value": keras.random.uniform((100,))}
>>>
>>> # batch of estimates that are closer to the true mean
>>> better_estimates = {"value": keras.random.normal(stddev=0.1, shape=(100,))}
>>>
>>> # calculate the score
>>> scoring_rule = MeanScore()
>>> scoring_rule.score(bad_estimates, samples)
<tf.Tensor: shape=(), dtype=float32, numpy=1.2243813276290894>
>>> scoring_rule.score(better_estimates, samples)
<tf.Tensor: shape=(), dtype=float32, numpy=1.013983130455017>
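By analogy, a QuantileScore's estimates dictionary holds one estimate per quantile level. The following NumPy sketch mimics that structure and compares informed against uninformed quantile estimates; the `"value"` key and the `(batch, num_levels)` layout are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=(1000,))
q = np.array([0.1, 0.5, 0.9])

# pinball loss, matching the quantile score formula above
def pinball(est, t, tau):
    d = est - t[:, None]
    return float(np.mean(d * ((d > 0).astype(float) - tau)))

# uninformed estimates: the same arbitrary value for every quantile level
bad_estimates = {"value": np.full((1000, 3), 2.0)}
# informed estimates: the empirical quantiles of the samples
good_estimates = {"value": np.tile(np.quantile(samples, q), (1000, 1))}

# the better-calibrated estimates receive the lower (better) score
print(pinball(good_estimates["value"], samples, q) < pinball(bad_estimates["value"], samples, q))
```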
aggregate(scores: Tensor, weights: Tensor = None) Tensor#

Computes the mean of scores, optionally applying weights.

When weights are provided, the scores are first multiplied elementwise by the weights and the mean of the result is returned; otherwise the plain mean of the scores is returned.

Parameters:
scores : Tensor

A tensor containing the scores to be aggregated.

weights : Tensor, optional (default: None)

A tensor of weights corresponding to each score. Must have the same shape as scores. If not provided, the function returns the mean of scores.

Returns:
Tensor

The aggregated score computed as a weighted mean if weights is provided, or as the simple mean of scores otherwise.
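A minimal NumPy sketch of this aggregation behavior (not the library's code). Note that, per the description above, the weighted case is the mean of `scores * weights`, not a normalized weighted average:

```python
import numpy as np

def aggregate(scores, weights=None):
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        return float(scores.mean())  # simple mean
    # elementwise product, then plain mean (no division by the weight sum)
    return float((scores * np.asarray(weights, dtype=float)).mean())

print(aggregate([1.0, 2.0, 3.0]))                   # → 2.0
print(aggregate([1.0, 2.0, 3.0], [1.0, 1.0, 4.0]))  # mean([1, 2, 12]) → 5.0
```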

classmethod from_config(config)#
get_head(key: str, shape: tuple[int, ...]) Sequential#

For a specified head key and shape, request corresponding head network.

Parameters:
key : str

Name of head for which to request a link.

shape : tuple[int, ...]

The necessary shape for the point estimators.

Returns:
head : keras.Sequential

Head network consisting of a learnable projection, a reshape and a link operation to parameterize estimates.

get_link(key: str) Layer#

For a specified key, request a link from network output to estimation target.

Parameters:
key : str

Name of head for which to request a link.

Returns:
link : keras.Layer

Activation function linking network output to estimation target.

get_subnet(key: str) Layer#

For a specified key, request a subnet used to project the shared condition embedding before reshaping to the head's output shape.

Parameters:
key : str

Name of head for which to request a subnet.

Returns:

subnet : keras.Layer

Subnet projecting the shared condition embedding.