networks
A rich collection of neural network architectures for use in Approximators.
The module features inference networks (IN), summary networks (SN), and general-purpose networks.
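Inference networks learn the posterior over model parameters, while summary networks compress raw, often set- or series-valued, data into fixed-size statistics that the inference network conditions on. Below is a minimal pairing sketch, assuming this page documents the BayesFlow networks module; the class names (CouplingFlow, DeepSet, ContinuousApproximator, Adapter) and keyword arguments (summary_dim, inference_network, summary_network, adapter) are assumptions and should be checked against the installed version.

```python
# Minimal pairing sketch: one summary network (SN) feeding one inference network (IN).
# All class names and keyword arguments below are assumptions about a BayesFlow-style
# API; verify them against the installed version before use.
import bayesflow as bf

summary_net = bf.networks.DeepSet(summary_dim=16)   # SN: exchangeable data -> fixed-size summary
inference_net = bf.networks.CouplingFlow()          # IN: invertible flow over the parameters

approximator = bf.ContinuousApproximator(           # assumed wrapper that trains IN and SN jointly
    adapter=bf.Adapter(),                           # assumed: maps simulator outputs to network inputs
    inference_network=inference_net,
    summary_network=summary_net,
)
```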
Classes
- (IN) Implements a Consistency Model with Consistency Training (CT) as described in [1-2].
- (IN) Implements a coupling flow as a sequence of dual couplings with permutations and activation normalization (see the affine-coupling sketch after this list).
- (SN) Implements a deep set encoder, introduced in [1], for learning permutation-invariant representations of set-based data, as generated by exchangeable models (see the set-encoder sketch after this list).
- Implements a Diffusion Model as described in the overview paper [1].
- (IN) Implements Optimal Transport Flow Matching, originally introduced as Rectified Flow, with ideas incorporated from [1-3] (see the flow-matching sketch after this list).
- (SN) Wraps multiple summary networks (backbones) to learn summary statistics from multi-modal data.
- (SN) Implements a more flexible version of the TimeSeriesTransformer that applies a series of self-attention layers, followed by cross-attention between the representation and a learnable template summarized via a recurrent net.
- Implements a simple configurable MLP with optional residual connections and dropout.
- Implements point estimation for user-specified scoring rules via a shared feed-forward architecture with separate heads for each scoring rule (see the multi-head sketch after this list).
- A custom sequential model for managing a sequence of Keras layers.
- (SN) Implements the set transformer architecture from [1], which ultimately represents a learnable permutation-invariant function.
- (SN) Implements an LSTNet architecture as described in [1].
- (SN) Creates a regular transformer coupled with Time2Vec embeddings of time, used to flexibly compress time series (see the Time2Vec sketch after this list).
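The coupling-flow entry describes a stack of couplings with permutations and activation normalization. The NumPy sketch below (a conceptual stand-in, not the library class) shows a single affine coupling: one half of the vector is scaled and shifted by functions of the other half, which keeps the transform exactly invertible with a cheap log-determinant. The conditioner here is a fixed random map, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                  # dimensionality of the (toy) parameter vector
W = rng.normal(size=(d // 2, d))       # fixed random "conditioner" weights, illustration only

def conditioner(x_a):
    # Produces a log-scale and a shift for the transformed half from the untouched half.
    h = np.tanh(x_a @ W[:, : d // 2].T)
    log_s = 0.5 * np.tanh(h @ W[:, : d // 2])   # bounded log-scale for numerical stability
    t = h @ W[:, d // 2 :]
    return log_s, t

def coupling_forward(x):
    x_a, x_b = x[: d // 2], x[d // 2 :]
    log_s, t = conditioner(x_a)
    y_b = x_b * np.exp(log_s) + t               # affine transform of the second half
    log_det = log_s.sum()                       # log|det J| of the coupling
    return np.concatenate([x_a, y_b]), log_det

def coupling_inverse(y):
    y_a, y_b = y[: d // 2], y[d // 2 :]
    log_s, t = conditioner(y_a)                 # untouched half is identical, so the inverse is exact
    x_b = (y_b - t) * np.exp(-log_s)
    return np.concatenate([y_a, x_b])

x = rng.normal(size=d)
y, log_det = coupling_forward(x)
print(np.allclose(coupling_inverse(y), x), log_det)   # True, plus the log-determinant
```

A full coupling flow stacks many such layers, permuting (and normalizing) the dimensions between them so that every coordinate eventually gets transformed.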
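The deep-set entry describes a permutation-invariant encoder for exchangeable data. The Keras sketch below (a simplified stand-in, not the library class) applies a shared per-element MLP, mean-pools over the set axis, and projects the pooled vector to a summary; shuffling the set elements leaves the summary unchanged.

```python
import numpy as np
import keras

set_size, elem_dim, summary_dim = 12, 3, 8

# Shared per-element embedding, permutation-invariant mean pooling, final projection.
inputs = keras.Input(shape=(set_size, elem_dim))
h = keras.layers.Dense(32, activation="gelu")(inputs)      # applied to every set element
h = keras.layers.Dense(32, activation="gelu")(h)
pooled = keras.layers.GlobalAveragePooling1D()(h)          # mean over the set axis
summary = keras.layers.Dense(summary_dim)(pooled)
encoder = keras.Model(inputs, summary)

rng = np.random.default_rng(1)
x = rng.normal(size=(1, set_size, elem_dim)).astype("float32")
x_shuffled = x[:, rng.permutation(set_size), :]
s1 = encoder.predict(x, verbose=0)
s2 = encoder.predict(x_shuffled, verbose=0)
print(np.allclose(s1, s2, atol=1e-5))   # True: element order does not change the summary
```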
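The flow-matching entry refers to learning a velocity field that transports noise to data along straight paths (the rectified-flow view). The NumPy sketch below shows only the training target: points are interpolated between a noise sample and a data sample at a random time, and the network is regressed onto the constant displacement. The velocity "network" here is a placeholder linear map, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 64, 2

x0 = rng.normal(size=(batch, dim))                     # noise samples
x1 = rng.normal(loc=3.0, size=(batch, dim))            # stand-in "data" samples
t = rng.uniform(size=(batch, 1))                       # random times in [0, 1]

x_t = (1.0 - t) * x0 + t * x1                          # straight-line interpolation
v_target = x1 - x0                                     # rectified-flow regression target

# Placeholder velocity "network": a linear map of (x_t, t); a real model would be trained.
A = rng.normal(scale=0.1, size=(dim + 1, dim))
v_pred = np.concatenate([x_t, t], axis=1) @ A

fm_loss = np.mean(np.sum((v_pred - v_target) ** 2, axis=1))   # flow-matching MSE objective
print(fm_loss)
```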
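The point-estimation entry describes a shared feed-forward trunk with one head per scoring rule. The Keras sketch below (a simplified stand-in, not the library class) shares a trunk between a mean head trained with squared error and a median head trained with absolute error; further heads, e.g. for quantile (pinball) losses, would follow the same pattern.

```python
import numpy as np
import keras

cond_dim, param_dim = 10, 2

inputs = keras.Input(shape=(cond_dim,))
trunk = keras.layers.Dense(64, activation="gelu")(inputs)          # shared feed-forward body
trunk = keras.layers.Dense(64, activation="gelu")(trunk)
mean_head = keras.layers.Dense(param_dim, name="mean")(trunk)      # scored with squared error
median_head = keras.layers.Dense(param_dim, name="median")(trunk)  # scored with absolute error

model = keras.Model(inputs, {"mean": mean_head, "median": median_head})
model.compile(optimizer="adam", loss={"mean": "mse", "median": "mae"})

rng = np.random.default_rng(0)
x = rng.normal(size=(256, cond_dim)).astype("float32")
y = rng.normal(size=(256, param_dim)).astype("float32")
model.fit(x, {"mean": y, "median": y}, epochs=1, verbose=0)
```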
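The last entry pairs a transformer with Time2Vec embeddings of time. Time2Vec represents a scalar time stamp by one linear feature plus several periodic features with learnable frequencies and phases; the NumPy sketch below computes that embedding with randomly initialized weights, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
num_steps, embed_dim = 50, 8                       # 1 linear + (embed_dim - 1) periodic features

t = np.linspace(0.0, 1.0, num_steps)[:, None]      # time stamps, shape (num_steps, 1)
w = rng.normal(size=(1, embed_dim))                # frequencies (learnable; random init here)
b = rng.normal(size=(embed_dim,))                  # phases (learnable; random init here)

proj = t @ w + b                                   # shape (num_steps, embed_dim)
time2vec = np.concatenate([proj[:, :1],            # linear component captures trends
                           np.sin(proj[:, 1:])],   # periodic components capture seasonality
                          axis=1)
print(time2vec.shape)                              # (50, 8), ready to concatenate with series values
```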