BernoulliGLM#
- class bayesflow.simulators.BernoulliGLM(T: int = 100, scale_by_T: bool = True, rng: Generator = None)[source]#
Bases:
BenchmarkSimulator
Bernoulli GLM simulated benchmark. See: https://arxiv.org/pdf/2101.04653.pdf, Task T.5.
Important: scale_by_T should be set to False if the simulator is used with variable T during training; otherwise, the information about T is lost.
- Parameters:
- T: int, optional, default: 100
The simulated duration of the task (i.e., the number of Bernoulli draws).
- scale_by_T: bool, optional, default: True
A flag indicating whether to scale the summary statistics by T.
- rng: np.random.Generator or None, optional, default: None
An optional random number generator to use.
- prior()[source]#
Generates a random draw from the custom prior over the 10 Bernoulli GLM parameters (1 intercept and 9 weights). Uses a global covariance matrix Cov for the multivariate Gaussian prior over the model weights, which is pre-computed for efficiency.
- Returns:
- params: np.ndarray of shape (10, )
A single draw from the prior.
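As an illustration, here is a minimal NumPy sketch of a prior with this structure: a Gaussian intercept plus a zero-mean multivariate Gaussian over the 9 weights. The identity `Cov` below is a placeholder, not the actual pre-computed covariance matrix used by the simulator:

```python
import numpy as np

rng = np.random.default_rng(42)
Cov = np.eye(9)  # placeholder for the pre-computed global covariance matrix
beta0 = rng.normal(0.0, 1.0)                         # assumed intercept prior
weights = rng.multivariate_normal(np.zeros(9), Cov)  # Gaussian prior over the 9 weights
params = np.concatenate(([beta0], weights))          # full parameter vector
print(params.shape)  # (10,)
```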
- observation_model(params: ndarray)[source]#
Simulates data from the custom Bernoulli GLM likelihood.
- Parameters:
- params: np.ndarray of shape (10, )
The vector of model parameters (params[0] is intercept, params[i], i > 0 are weights).
- Returns:
- x: np.ndarray of shape (10, )
The vector of sufficient summary statistics of the data.
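For intuition, a hedged NumPy sketch of a Bernoulli GLM likelihood of this shape. The white-noise design matrix `V` and the exact form of the summary statistics are assumptions for illustration; the actual design follows Task T.5 of the benchmark paper:

```python
import numpy as np

def observation_model_sketch(params, T=100, scale_by_T=True, rng=None):
    """Illustrative stand-in for the Bernoulli GLM likelihood (not the
    library's implementation): logistic link, assumed white-noise design."""
    rng = rng if rng is not None else np.random.default_rng()
    beta0, f = params[0], params[1:]                 # intercept and 9 weights
    V = rng.normal(size=(T, f.size))                 # assumed design matrix
    p = 1.0 / (1.0 + np.exp(-(beta0 + V @ f)))       # logistic link
    z = rng.binomial(1, p)                           # T Bernoulli draws
    x = np.concatenate(([z.sum()], V.T @ z))         # 10 summary statistics
    return x / T if scale_by_T else x

x = observation_model_sketch(np.zeros(10), T=100, rng=np.random.default_rng(0))
print(x.shape)  # (10,)
```

Note how dividing by `T` at the end implements the `scale_by_T` flag: with scaling, the first statistic becomes a Bernoulli success rate rather than a raw count, which is why the information about `T` is lost when `T` varies.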
- rejection_sample(batch_shape: tuple[int, ...], predicate: Callable[[dict[str, ndarray]], ndarray], *, axis: int = 0, sample_size: int = None, **kwargs) dict[str, ndarray] #
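The signature above is inherited from the base simulator. A hypothetical sketch of the rejection-sampling idea, assuming dictionaries of batched arrays and a boolean predicate over the batch axis (names and stopping logic here are illustrative, not the library's code):

```python
import numpy as np

def rejection_sample_sketch(batch_shape, predicate, simulator, axis=0, max_iter=1000):
    """Hypothetical illustration: draw batches, keep entries where
    `predicate` is True, and stop once enough samples are accepted."""
    n_target = batch_shape[axis]
    kept = None
    for _ in range(max_iter):
        batch = simulator(batch_shape)                # dict of np.ndarrays
        mask = predicate(batch)                       # boolean mask over axis
        accepted = {k: np.compress(mask, v, axis=axis) for k, v in batch.items()}
        kept = accepted if kept is None else {
            k: np.concatenate([kept[k], accepted[k]], axis=axis) for k in kept
        }
        if next(iter(kept.values())).shape[axis] >= n_target:
            break
    # Trim to exactly the requested batch size along the sample axis
    return {k: np.take(v, np.arange(n_target), axis=axis) for k, v in kept.items()}

# Hypothetical usage: keep only draws whose first parameter is positive.
rng = np.random.default_rng(1)
sim = lambda bs: {"params": rng.normal(size=bs + (10,))}
out = rejection_sample_sketch((32,), lambda d: d["params"][:, 0] > 0, sim)
print(out["params"].shape)  # (32, 10)
```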
- sample(batch_shape: tuple[int, ...], **kwargs) dict[str, ndarray] #
Runs the simulated benchmark and returns a batch of parameter draws together with their corresponding observations.
- Parameters:
- batch_shape: tuple
The shape of the batch of parameter-observation pairs to simulate, e.g. (batch_size,).
- Returns:
- dict[str, np.ndarray]
Simulated parameters and observables with shapes (batch_size, …).
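To illustrate the return convention, a self-contained NumPy sketch that mimics the batched output of sample(). The dictionary keys and the placeholder prior/observation model are assumptions for demonstration, not the library's guaranteed names:

```python
import numpy as np

def sample_sketch(batch_shape, T=100, rng=None):
    """Hypothetical stand-in for BernoulliGLM.sample(): stacks independent
    prior draws and simulated observations into arrays with a leading
    batch dimension."""
    rng = rng if rng is not None else np.random.default_rng()
    params = rng.normal(size=batch_shape + (10,))        # placeholder prior draws
    # Placeholder observables with the documented (batch_size, 10) shape:
    observables = rng.binomial(T, 1.0 / (1.0 + np.exp(-params))) / T
    return {"parameters": params, "observables": observables}

batch = sample_sketch((64,))
print(batch["parameters"].shape, batch["observables"].shape)  # (64, 10) (64, 10)
```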