z_score_contraction
- bayesflow.diagnostics.z_score_contraction(estimates: Mapping[str, ndarray] | ndarray, targets: Mapping[str, ndarray] | ndarray, variable_keys: Sequence[str] = None, variable_names: Sequence[str] = None, test_quantities: dict[str, Callable] = None, figsize: Sequence[int] = None, label_fontsize: int = 16, title_fontsize: int = 18, tick_fontsize: int = 12, color: str = '#132a70', num_col: int = None, num_row: int = None, markersize: float = None) -> Figure
Implements a graphical check for global model sensitivity by plotting the posterior z-score against the posterior contraction for each set of posterior samples in estimates, following [1].
The definition of the posterior z-score is:
post_z_score = (posterior_mean - true_parameters) / posterior_std
The z-score is adequate if it centers around zero and spreads roughly within the interval [-3, 3].
The definition of posterior contraction is:
post_contraction = 1 - (posterior_variance / prior_variance)
In other words, the posterior contraction is a proxy for the reduction in uncertainty gained by replacing the prior with the posterior. The ideal posterior contraction tends to 1. Contraction near zero indicates that the posterior variance is almost identical to the prior variance for the particular marginal parameter distribution.
Note: Means and variances will be estimated via their sample-based estimators.
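Both quantities above can be computed directly from prior and posterior draws with their sample-based estimators. A minimal NumPy sketch (the shapes and variable names are illustrative, not part of the BayesFlow API):

```python
import numpy as np

# Illustrative shapes: 50 datasets, 500 posterior draws each, 3 parameters
rng = np.random.default_rng(1)
prior_draws = rng.normal(0.0, 1.0, size=(50, 3))  # "true" parameters
post_draws = prior_draws[:, None, :] + rng.normal(
    0.0, 0.5, size=(50, 500, 3)
)  # mock posterior samples centered on the true parameters

post_mean = post_draws.mean(axis=1)       # (50, 3)
post_var = post_draws.var(axis=1, ddof=1)  # (50, 3)
prior_var = prior_draws.var(axis=0, ddof=1)  # (3,)

# Posterior z-score: should scatter around 0, roughly within [-3, 3]
post_z_score = (post_mean - prior_draws) / np.sqrt(post_var)

# Posterior contraction: close to 1 means strong uncertainty reduction
post_contraction = 1.0 - post_var / prior_var
```

Note that the contraction is computed per marginal parameter, with the prior variance estimated once across all datasets.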
[1] Schad, D. J., Betancourt, M., & Vasishth, S. (2021). Toward a principled Bayesian workflow in cognitive science. Psychological methods, 26(1), 103.
Paper also available at https://arxiv.org/abs/1904.12765
- Parameters:
- estimates : np.ndarray of shape (num_datasets, num_post_draws, num_params)
The posterior draws obtained for the num_datasets simulated datasets.
- targets : np.ndarray of shape (num_datasets, num_params)
The prior draws (true parameters) used for generating the num_datasets simulated datasets.
- variable_keys : list or None, optional, default: None
Select keys from the dictionaries provided in estimates and targets. By default, all keys are selected.
- variable_names : list or None, optional, default: None
The parameter names for nice plot titles. Inferred if None.
- test_quantities : dict or None, optional, default: None
A dict that maps plot titles to functions that compute test quantities from estimate/target draws. The dict keys are automatically added to variable_keys and variable_names. Test quantity functions are expected to accept a dict of draws with shape (batch_size, ...) as the first (typically only) positional argument and return a NumPy array of shape (batch_size,). The functions do not have to deal with an additional sample dimension, as appropriate reshaping is done internally.
- figsize : tuple or None, optional, default: None
The figure size passed to the matplotlib constructor. Inferred if None.
- label_fontsize : int, optional, default: 16
The font size of the y-label text.
- title_fontsize : int, optional, default: 18
The font size of the title text.
- tick_fontsize : int, optional, default: 12
The font size of the axis tick labels.
- color : str, optional, default: '#132a70'
The color of the scatter points and error bars.
- num_row : int, optional, default: None
The number of rows for the subplots. Dynamically determined if None.
- num_col : int, optional, default: None
The number of columns for the subplots. Dynamically determined if None.
- markersize : float, optional, default: None
The marker size of the scatter plot, in points**2.
- Returns:
- f : plt.Figure
The figure instance for optional saving.
- Raises:
- ShapeError
If there is a deviation from the expected shapes of estimates and targets.
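As a sketch of the test_quantities interface described above: each function receives a dict of draws with a leading batch dimension and returns an array of shape (batch_size,). The key "theta" and the quantity itself are hypothetical choices for illustration, not part of the BayesFlow API:

```python
import numpy as np


def total_magnitude(draws: dict) -> np.ndarray:
    """Hypothetical test quantity: Euclidean norm of the parameter vector.

    `draws` maps variable names to arrays with a leading batch dimension,
    e.g. draws["theta"] with shape (batch_size, num_params).
    """
    return np.linalg.norm(draws["theta"], axis=-1)  # shape: (batch_size,)


# This mapping could then be passed as `test_quantities`; the dict key
# becomes the subplot title and is added to variable_names automatically.
test_quantities = {"Total magnitude": total_magnitude}

# Shape check with mock draws: 8 datasets, 3 parameters
mock_draws = {"theta": np.ones((8, 3))}
result = total_magnitude(mock_draws)
print(result.shape)  # (8,)
```

Because reshaping over the posterior-sample dimension is handled internally, the function only needs to operate on the batch dimension.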