calibration_curve#

bayesflow.utils.calibration_curve(targets: ndarray, estimates: ndarray, *, pos_label: int | float | bool | str = 1, num_bins: int = 5, strategy: str = 'uniform')[source]#

Compute true and predicted probabilities for a calibration curve.

The method assumes the inputs come from a binary classifier and discretizes the [0, 1] interval into bins.

Code from: scikit-learn/scikit-learn

Parameters:
targets : array-like of shape (n_samples,)

True targets.

estimates : array-like of shape (n_samples,)

Probabilities of the positive class.

pos_label : int, float, bool or str, default=1

The label of the positive class.

num_bins : int, default=5

Number of bins used to discretize the [0, 1] interval. A bigger number requires more data. Bins with no samples (i.e., bins with no corresponding values in estimates) will not be returned, so the returned arrays may have fewer than num_bins values.

strategy : {'uniform', 'quantile'}, default='uniform'

Strategy used to define the widths of the bins.

uniform

The bins have identical widths.

quantile

The bins have the same number of samples and depend on estimates.

Returns:
prob_true : ndarray of shape (num_bins,) or smaller

The proportion of samples whose class is the positive class in each bin (fraction of positives).

prob_pred : ndarray of shape (num_bins,) or smaller

The mean estimated probability in each bin.
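To illustrate the binning logic described above, the following is a minimal NumPy sketch of the computation (not the actual bayesflow implementation): estimates are assigned to bins, empty bins are dropped, and the fraction of positives and the mean estimate are returned per bin. The function name calibration_curve_sketch is hypothetical.

```python
import numpy as np

def calibration_curve_sketch(targets, estimates, num_bins=5, strategy="uniform"):
    """Illustrative sketch of a calibration curve computation."""
    targets = np.asarray(targets, dtype=float)
    estimates = np.asarray(estimates, dtype=float)

    if strategy == "quantile":
        # Bin edges chosen so each bin holds roughly the same number of samples.
        bins = np.quantile(estimates, np.linspace(0.0, 1.0, num_bins + 1))
    else:
        # Equal-width bins over [0, 1].
        bins = np.linspace(0.0, 1.0, num_bins + 1)

    # Assign each estimate to a bin; clip so that 1.0 lands in the last bin.
    bin_ids = np.clip(
        np.searchsorted(bins[1:-1], estimates, side="right"), 0, num_bins - 1
    )

    bin_pred = np.bincount(bin_ids, weights=estimates, minlength=num_bins)
    bin_true = np.bincount(bin_ids, weights=targets, minlength=num_bins)
    bin_total = np.bincount(bin_ids, minlength=num_bins)

    nonzero = bin_total > 0  # empty bins are not returned
    prob_true = bin_true[nonzero] / bin_total[nonzero]
    prob_pred = bin_pred[nonzero] / bin_total[nonzero]
    return prob_true, prob_pred
```

For example, with targets [0, 0, 1, 1], estimates [0.1, 0.4, 0.6, 0.9], and num_bins=2, the uniform strategy yields prob_true = [0.0, 1.0] and prob_pred = [0.25, 0.75].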

References

Alexandru Niculescu-Mizil and Rich Caruana (2005) Predicting Good Probabilities With Supervised Learning, in Proceedings of the 22nd International Conference on Machine Learning (ICML).