
Flax Basics#

This notebook will walk you through the following workflow:

  • Instantiating a model from Flax built-in layers or third-party models.

  • Initializing parameters of the model and manually writing a training loop.

  • Using optimizers provided by Flax to ease training.

  • Serialization of parameters and other objects.

  • Creating your own models and managing state.

Setting up our environment#

Here we provide the code needed to set up the environment for our notebook.

# Install the latest JAXlib version.
!pip install --upgrade -q pip jax jaxlib
# Install Flax at head:
!pip install --upgrade -q git+https://github.com/google/flax.git
import jax
from typing import Any, Callable, Sequence
from jax import random, numpy as jnp
import flax
from flax import linen as nn

Linear regression with Flax#

In the previous JAX for the impatient notebook, we finished up with a linear regression example. As we know, linear regression can also be written as a single dense neural network layer, which we will show below so we can compare the two approaches.

A dense layer is a layer with a kernel parameter \(W\in\mathcal{M}_{m,n}(\mathbb{R})\), where \(m\) is the number of output features and \(n\) is the dimensionality of the input, and a bias parameter \(b\in\mathbb{R}^m\). The dense layer returns \(Wx+b\) for an input \(x\in\mathbb{R}^n\).

This dense layer is already provided by Flax in the flax.linen module (here imported as nn).

# We create one dense layer instance (taking 'features' parameter as input)
model = nn.Dense(features=5)

Layers (and models in general, we’ll use that word from now on) are subclasses of the linen.Module class.

Model parameters & initialization#

Parameters are not stored with the models themselves. You need to initialize parameters by calling the init function, using a PRNGKey and dummy input data.

key1, key2 = random.split(random.key(0))
x = random.normal(key1, (10,)) # Dummy input data
params = model.init(key2, x) # Initialization call
jax.tree_util.tree_map(lambda x: x.shape, params) # Checking output shapes
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
FrozenDict({
    params: {
        bias: (5,),
        kernel: (10, 5),
    },
})

Note: JAX and Flax, like NumPy, are row-based systems, meaning that vectors are represented as row vectors and not column vectors. This can be seen in the shape of the kernel here.

The result is what we expect: bias and kernel parameters of the correct size. Under the hood:

  • The dummy input data x is used to trigger shape inference: we only declared the number of features we wanted in the output of the model, not the size of the input. Flax figures out the correct size of the kernel by itself.

  • The random PRNG key is used to trigger the initialization functions (those have default values provided by the module here).

  • Initialization functions are called to generate the initial set of parameters that the model will use. Those are functions that take as arguments (PRNG Key, shape, dtype) and return an Array of shape shape.

  • The init function returns the initialized set of parameters (you can also get the output of the forward pass on the dummy input with the same syntax by using the init_with_output method instead of init).
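
For example, a minimal sketch of init_with_output (using the same dummy input; the variable names below are illustrative):

# init_with_output returns both the output of the forward pass on the dummy
# input and the initialized parameters.
y_dummy, params_from_init = model.init_with_output(key2, x)
print(y_dummy.shape)  # (5,)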

To run a forward pass with a given set of parameters (which are never stored with the model), we just use the apply method, providing it the parameters to use as well as the input:

model.apply(params, x)
DeviceArray([-0.7358944,  1.3583755, -0.7976872,  0.8168598,  0.6297793], dtype=float32)
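
As a quick sanity check of the note above (a small sketch using the params we just initialized), the apply call is just the affine map written with the row-vector convention:

# kernel has shape (input_dim, features), so x @ kernel + bias reproduces apply().
manual_output = jnp.dot(x, params['params']['kernel']) + params['params']['bias']
print(jnp.allclose(manual_output, model.apply(params, x)))  # True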

Gradient descent#

If you jumped here directly without going through the JAX part, here is the linear regression formulation we’re going to use: from a set of data points \(\{(x_i,y_i), i\in \{1,\ldots, k\}, x_i\in\mathbb{R}^n,y_i\in\mathbb{R}^m\}\), we try to find a set of parameters \(W\in \mathcal{M}_{m,n}(\mathbb{R}), b\in\mathbb{R}^m\) such that the function \(f_{W,b}(x)=Wx+b\) minimizes the mean squared error:

\[\mathcal{L}(W,b)\rightarrow\frac{1}{k}\sum_{i=1}^{k} \frac{1}{2}\|y_i-f_{W,b}(x_i)\|^2_2\]

Here, we see that the tuple \((W,b)\) matches the parameters of the Dense layer. We’ll perform gradient descent using those. Let’s first generate the fake data we’ll use. The data is exactly the same as in the JAX part’s linear regression pytree example.

# Set problem dimensions.
n_samples = 20
x_dim = 10
y_dim = 5

# Generate random ground truth W and b.
key = random.key(0)
k1, k2 = random.split(key)
W = random.normal(k1, (x_dim, y_dim))
b = random.normal(k2, (y_dim,))
# Store the parameters in a FrozenDict pytree.
true_params = flax.core.freeze({'params': {'bias': b, 'kernel': W}})

# Generate samples with additional noise.
key_sample, key_noise = random.split(k1)
x_samples = random.normal(key_sample, (n_samples, x_dim))
y_samples = jnp.dot(x_samples, W) + b + 0.1 * random.normal(key_noise,(n_samples, y_dim))
print('x shape:', x_samples.shape, '; y shape:', y_samples.shape)
x shape: (20, 10) ; y shape: (20, 5)

We copy the same training loop that we used in the JAX pytree linear regression example with jax.value_and_grad(), but here we can use model.apply() instead of having to define our own feed-forward function (predict_pytree() in the JAX example).

# Same as JAX version but using model.apply().
@jax.jit
def mse(params, x_batched, y_batched):
  # Define the squared loss for a single pair (x,y)
  def squared_error(x, y):
    pred = model.apply(params, x)
    return jnp.inner(y-pred, y-pred) / 2.0
  # Vectorize the previous to compute the average of the loss on all samples.
  return jnp.mean(jax.vmap(squared_error)(x_batched,y_batched), axis=0)

And finally perform the gradient descent.

learning_rate = 0.3  # Gradient step size.
print('Loss for "true" W,b: ', mse(true_params, x_samples, y_samples))
loss_grad_fn = jax.value_and_grad(mse)

@jax.jit
def update_params(params, learning_rate, grads):
  params = jax.tree_util.tree_map(
      lambda p, g: p - learning_rate * g, params, grads)
  return params

for i in range(101):
  # Perform one gradient update.
  loss_val, grads = loss_grad_fn(params, x_samples, y_samples)
  params = update_params(params, learning_rate, grads)
  if i % 10 == 0:
    print(f'Loss step {i}: ', loss_val)
Loss for "true" W,b:  0.023639778
Loss step 0:  38.094772
Loss step 10:  0.44692168
Loss step 20:  0.10053458
Loss step 30:  0.035822745
Loss step 40:  0.018846875
Loss step 50:  0.013864839
Loss step 60:  0.012312559
Loss step 70:  0.011812928
Loss step 80:  0.011649306
Loss step 90:  0.011595251
Loss step 100:  0.0115773035

Optimizing with Optax#

Flax used to use its own flax.optim package for optimization, but with FLIP #1009 this was deprecated in favor of Optax.

Basic usage of Optax is straightforward:

  1. Choose an optimization method (e.g. optax.adam).

  2. Create optimizer state from parameters (for the Adam optimizer, this state will contain the momentum values).

  3. Compute the gradients of your loss with jax.value_and_grad().

  4. At every iteration, call the Optax update function to update the internal optimizer state and create an update to the parameters. Then add the update to the parameters with Optax’s apply_updates method.

Note that Optax can do a lot more: it’s designed for composing simple gradient transformations into more complex transformations, which allows you to implement a wide range of optimizers. There is also support for changing optimizer hyperparameters over time (“schedules”), applying different updates to different parts of the parameter tree (“masking”) and much more; a short sketch of this composition API follows the training output below. For details please refer to the official documentation.

import optax
tx = optax.adam(learning_rate=learning_rate)
opt_state = tx.init(params)
loss_grad_fn = jax.value_and_grad(mse)
for i in range(101):
  loss_val, grads = loss_grad_fn(params, x_samples, y_samples)
  updates, opt_state = tx.update(grads, opt_state)
  params = optax.apply_updates(params, updates)
  if i % 10 == 0:
    print('Loss step {}: '.format(i), loss_val)
Loss step 0:  0.011576377
Loss step 10:  0.0115710115
Loss step 20:  0.011569244
Loss step 30:  0.011568661
Loss step 40:  0.011568454
Loss step 50:  0.011568379
Loss step 60:  0.011568358
Loss step 70:  0.01156836
Loss step 80:  0.01156835
Loss step 90:  0.011568353
Loss step 100:  0.011568348
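
As a hedged sketch of the composition features mentioned above (the particular transformations and schedule below are illustrative choices, not part of this example), gradients can be clipped and then passed to Adam with a decaying learning rate:

# Illustrative composition: clip gradients by global norm, then Adam whose
# learning rate follows an exponential decay schedule.
schedule = optax.exponential_decay(init_value=0.3, transition_steps=50, decay_rate=0.9)
composed_tx = optax.chain(
    optax.clip_by_global_norm(1.0),
    optax.adam(learning_rate=schedule))
composed_opt_state = composed_tx.init(params)  # used with the same update/apply_updates loop as above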

Serializing the result#

Now that we’re happy with the result of our training, we might want to save the model parameters to load them back later. Flax provides a serialization package to enable you to do that.

from flax import serialization
bytes_output = serialization.to_bytes(params)
dict_output = serialization.to_state_dict(params)
print('Dict output')
print(dict_output)
print('Bytes output')
print(bytes_output)
Dict output
{'params': {'bias': DeviceArray([-1.4540135, -2.0262308,  2.0806582,  1.2201802, -0.9964547],            dtype=float32), 'kernel': DeviceArray([[ 1.0106664 ,  0.19014716,  0.04533899, -0.92722285,
               0.34720102],
             [ 1.7320251 ,  0.9901233 ,  1.1662225 ,  1.1027892 ,
              -0.10574618],
             [-1.2009128 ,  0.28837162,  1.4176372 ,  0.12073109,
              -1.3132601 ],
             [-1.1944956 , -0.18993308,  0.03379077,  1.3165942 ,
               0.07996067],
             [ 0.14103189,  1.3737966 , -1.3162128 ,  0.53401774,
              -2.239638  ],
             [ 0.5643044 ,  0.813604  ,  0.31888172,  0.5359193 ,
               0.90352124],
             [-0.37948322,  1.7408353 ,  1.0788013 , -0.5041964 ,
               0.9286919 ],
             [ 0.9701384 , -1.3158673 ,  0.33630812,  0.80941117,
              -1.202457  ],
             [ 1.0198247 , -0.6198277 ,  1.0822718 , -1.8385581 ,
              -0.45790705],
             [-0.64384323,  0.4564892 , -1.1331053 , -0.68556863,
               0.17010891]], dtype=float32)}}
Bytes output
b'\x81\xa6params\x82\xa4bias\xc7!\x01\x93\x91\x05\xa7float32\xc4\x14\x1d\x1d\xba\xbf\xc4\xad\x01\xc0\x81)\x05@\xdd.\x9c?\xa8\x17\x7f\xbf\xa6kernel\xc7\xd6\x01\x93\x92\n\x05\xa7float32\xc4\xc8\x84]\x81?\xf0\xb5B>`\xb59=z^m\xbfU\xc4\xb1>\x00\xb3\xdd?\xb8x}?\xc7F\x95?2(\x8d?t\x91\xd8\xbd\x83\xb7\x99\xbfr\xa5\x93>#u\xb5?\xdcA\xf7=\xe8\x18\xa8\xbf;\xe5\x98\xbf\xd1}B\xbe0h\n=)\x86\xa8?k\xc2\xa3=\xaaj\x10>\x91\xd8\xaf?\xa9y\xa8\xbfc\xb5\x08?;V\x0f\xc0Av\x10?ZHP?wD\xa3>\x022\t?+Mg?\xa0K\xc2\xbe\xb1\xd3\xde?)\x16\x8a?\x04\x13\x01\xbf\xc1\xbem?\xfdZx?Wn\xa8\xbf\x940\xac>\x925O?\x1c\xea\x99\xbf\x9e\x89\x82?\x07\xad\x1e\xbf\xe2\x87\x8a?\xdfU\xeb\xbf\xcbr\xea\xbe\xe9\xd2$\xbf\xf4\xb8\xe9>\x98\t\x91\xbfm\x81/\xbf\x081.>'

To load the model back, you’ll need to use a template of the model parameter structure, like the one you would get from the model initialization. Here, we use the previously generated params as a template. Note that this will produce a new variable structure, and not mutate in-place.

The point of enforcing structure through a template is to avoid issues for users downstream, so you first need the right model to generate the parameter structure.

serialization.from_bytes(params, bytes_output)
FrozenDict({
    params: {
        bias: array([-1.4540135, -2.0262308,  2.0806582,  1.2201802, -0.9964547],
              dtype=float32),
        kernel: array([[ 1.0106664 ,  0.19014716,  0.04533899, -0.92722285,  0.34720102],
               [ 1.7320251 ,  0.9901233 ,  1.1662225 ,  1.1027892 , -0.10574618],
               [-1.2009128 ,  0.28837162,  1.4176372 ,  0.12073109, -1.3132601 ],
               [-1.1944956 , -0.18993308,  0.03379077,  1.3165942 ,  0.07996067],
               [ 0.14103189,  1.3737966 , -1.3162128 ,  0.53401774, -2.239638  ],
               [ 0.5643044 ,  0.813604  ,  0.31888172,  0.5359193 ,  0.90352124],
               [-0.37948322,  1.7408353 ,  1.0788013 , -0.5041964 ,  0.9286919 ],
               [ 0.9701384 , -1.3158673 ,  0.33630812,  0.80941117, -1.202457  ],
               [ 1.0198247 , -0.6198277 ,  1.0822718 , -1.8385581 , -0.45790705],
               [-0.64384323,  0.4564892 , -1.1331053 , -0.68556863,  0.17010891]],
              dtype=float32),
    },
})
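
In practice you would typically write the bytes to disk and read them back later; here is a minimal sketch (the file path is arbitrary):

# Round-trip through a file, using the in-memory `params` as the template.
with open('/tmp/flax_model_params.msgpack', 'wb') as f:
  f.write(bytes_output)

with open('/tmp/flax_model_params.msgpack', 'rb') as f:
  restored_params = serialization.from_bytes(params, f.read())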

Defining your own models#

Flax allows you to define your own models, which will usually be more complicated than a linear regression. In this section, we’ll show you how to build simple models. To do so, you’ll need to create subclasses of the base nn.Module class.

Keep in mind that we imported linen as nn; this only works with the new linen API.

Module basics#

The base abstraction for models is the nn.Module class, and every type of predefined layer in Flax (like the earlier Dense) is a subclass of nn.Module. Let’s take a look and start by defining a simple but custom multi-layer perceptron, i.e. a sequence of Dense layers interleaved with calls to a non-linear activation function.

class ExplicitMLP(nn.Module):
  features: Sequence[int]

  def setup(self):
    # we automatically know what to do with lists, dicts of submodules
    self.layers = [nn.Dense(feat) for feat in self.features]
    # for single submodules, we would just write:
    # self.layer1 = nn.Dense(feat1)

  def __call__(self, inputs):
    x = inputs
    for i, lyr in enumerate(self.layers):
      x = lyr(x)
      if i != len(self.layers) - 1:
        x = nn.relu(x)
    return x

key1, key2 = random.split(random.key(0), 2)
x = random.uniform(key1, (4,4))

model = ExplicitMLP(features=[3,4,5])
params = model.init(key2, x)
y = model.apply(params, x)

print('initialized parameter shapes:\n', jax.tree_util.tree_map(jnp.shape, flax.core.unfreeze(params)))
print('output:\n', y)
initialized parameter shapes:
 {'params': {'layers_0': {'bias': (3,), 'kernel': (4, 3)}, 'layers_1': {'bias': (4,), 'kernel': (3, 4)}, 'layers_2': {'bias': (5,), 'kernel': (4, 5)}}}
output:
 [[ 4.2292815e-02 -4.3807115e-02  2.9323792e-02  6.5492536e-03
  -1.7147182e-02]
 [ 1.2967806e-01 -1.4551792e-01  9.4432183e-02  1.2521387e-02
  -4.5417298e-02]
 [ 0.0000000e+00  0.0000000e+00  0.0000000e+00  0.0000000e+00
   0.0000000e+00]
 [ 9.3024032e-04  2.7864395e-05  2.4478821e-04  8.1344310e-04
  -1.0110770e-03]]

As we can see, a nn.Module subclass is made of:

  • A collection of data fields (nn.Module subclasses are Python dataclasses) - here we only have the features field of type Sequence[int].

  • A setup() method that is called at the end of the __post_init__, where you can register the submodules, variables, and parameters you will need in your model.

  • A __call__ function that returns the output of the model from a given input.

  • The model structure defines a pytree of parameters following the same tree structure as the model: the params tree contains one layers_n sub dict per layer, and each of those contains the parameters of the associated Dense layer. The layout is very explicit.

Note: lists are mostly managed as you would expect (WIP); there are corner cases you should be aware of, as pointed out here.
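
For instance, the nested parameter structure can be indexed like a regular dict of dicts (a small sketch using the params initialized above):

# Each layers_n entry holds the kernel and bias of the corresponding Dense layer.
print(params['params']['layers_0']['kernel'].shape)  # (4, 3)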

Since the module structure and its parameters are not tied to each other, you can’t directly call model(x) on a given input, as this will raise an error. The __call__ function is wrapped by apply, which is the method to call on an input:

try:
    y = model(x) # Returns an error
except AttributeError as e:
    print(e)
"ExplicitMLP" object has no attribute "layers"

Since here we have a very simple model, we could have used an alternative (but equivalent) way of declaring the submodules inline in the __call__ using the @nn.compact annotation like so:

class SimpleMLP(nn.Module):
  features: Sequence[int]

  @nn.compact
  def __call__(self, inputs):
    x = inputs
    for i, feat in enumerate(self.features):
      x = nn.Dense(feat, name=f'layers_{i}')(x)
      if i != len(self.features) - 1:
        x = nn.relu(x)
      # providing a name is optional though!
      # the default autonames would be "Dense_0", "Dense_1", ...
    return x

key1, key2 = random.split(random.key(0), 2)
x = random.uniform(key1, (4,4))

model = SimpleMLP(features=[3,4,5])
params = model.init(key2, x)
y = model.apply(params, x)

print('initialized parameter shapes:\n', jax.tree_util.tree_map(jnp.shape, flax.core.unfreeze(params)))
print('output:\n', y)
initialized parameter shapes:
 {'params': {'layers_0': {'bias': (3,), 'kernel': (4, 3)}, 'layers_1': {'bias': (4,), 'kernel': (3, 4)}, 'layers_2': {'bias': (5,), 'kernel': (4, 5)}}}
output:
 [[ 4.2292815e-02 -4.3807115e-02  2.9323792e-02  6.5492536e-03
  -1.7147182e-02]
 [ 1.2967806e-01 -1.4551792e-01  9.4432183e-02  1.2521387e-02
  -4.5417298e-02]
 [ 0.0000000e+00  0.0000000e+00  0.0000000e+00  0.0000000e+00
   0.0000000e+00]
 [ 9.3024032e-04  2.7864395e-05  2.4478821e-04  8.1344310e-04
  -1.0110770e-03]]

There are, however, a few differences you should be aware of between the two declaration modes:

  • In setup, you are able to name some sublayers and keep them around for further use (e.g. encoder/decoder methods in autoencoders; see the sketch after this list).

  • If you want to have multiple methods, then you need to declare the module using setup, as the @nn.compact annotation only allows one method to be annotated.

  • The last initialization will be handled differently. See these notes for more details (TODO: add notes link).
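
To illustrate the first two points, here is a hedged sketch of an autoencoder-style module declared with setup(): it reuses the SimpleMLP defined above and exposes separate encode and decode methods (the AutoEncoder name and the layer widths are illustrative):

class AutoEncoder(nn.Module):
  encoder_widths: Sequence[int]
  decoder_widths: Sequence[int]

  def setup(self):
    # Submodules registered in setup() can be shared by several methods.
    self.encoder = SimpleMLP(features=self.encoder_widths)
    self.decoder = SimpleMLP(features=self.decoder_widths)

  def __call__(self, x):
    return self.decode(self.encode(x))

  def encode(self, x):
    return self.encoder(x)

  def decode(self, z):
    return self.decoder(z)

ae = AutoEncoder(encoder_widths=[4, 2], decoder_widths=[4, 4])
ae_variables = ae.init(random.key(0), jnp.ones((1, 4)))
# Call a specific method through apply with the `method` argument.
latent = ae.apply(ae_variables, jnp.ones((1, 4)), method=ae.encode)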

Module parameters#

In the previous MLP example, we relied only on predefined layers and operators (Dense, relu). Let’s imagine that you didn’t have a Dense layer provided by Flax and you wanted to write it on your own. Here is what it would look like using the @nn.compact way to declare a new module:

class SimpleDense(nn.Module):
  features: int
  kernel_init: Callable = nn.initializers.lecun_normal()
  bias_init: Callable = nn.initializers.zeros_init()

  @nn.compact
  def __call__(self, inputs):
    kernel = self.param('kernel',
                        self.kernel_init, # Initialization function
                        (inputs.shape[-1], self.features))  # shape info.
    y = jnp.dot(inputs, kernel)
    bias = self.param('bias', self.bias_init, (self.features,))
    y = y + bias
    return y

key1, key2 = random.split(random.key(0), 2)
x = random.uniform(key1, (4,4))

model = SimpleDense(features=3)
params = model.init(key2, x)
y = model.apply(params, x)

print('initialized parameters:\n', params)
print('output:\n', y)
initialized parameters:
 FrozenDict({
    params: {
        kernel: DeviceArray([[ 0.6503669 ,  0.86789787,  0.4604268 ],
                     [ 0.05673932,  0.9909285 , -0.63536596],
                     [ 0.76134115, -0.3250529 , -0.65221626],
                     [-0.82430327,  0.4150194 ,  0.19405058]], dtype=float32),
        bias: DeviceArray([0., 0., 0.], dtype=float32),
    },
})
output:
 [[ 0.5035518   1.8548558  -0.4270195 ]
 [ 0.0279097   0.5589246  -0.43061772]
 [ 0.3547128   1.5740999  -0.32865518]
 [ 0.5264864   1.2928858   0.10089308]]

Here, we see how to both declare and assign a parameter to the model using the self.param method. It takes as input (name, init_fn, *init_args, **init_kwargs):

  • name is simply the name of the parameter that will end up in the parameter structure.

  • init_fn is a function with input (PRNGKey, *init_args, **init_kwargs) returning an Array, with init_args and init_kwargs being the arguments needed to call the initialization function.

  • init_args and init_kwargs are the arguments to provide to the initialization function.

Such parameters can also be declared in the setup method; in that case you won’t be able to rely on shape inference, because Flax uses lazy initialization at the first call site, so the shapes must be passed in explicitly (see the sketch below).
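
Here is a hedged sketch of that setup-based variant (ExplicitDense is a hypothetical name; note that the input dimension now has to be provided as a field):

class ExplicitDense(nn.Module):
  features_in: int  # Must be given explicitly: no shape inference in setup().
  features: int
  kernel_init: Callable = nn.initializers.lecun_normal()
  bias_init: Callable = nn.initializers.zeros_init()

  def setup(self):
    self.kernel = self.param('kernel', self.kernel_init,
                             (self.features_in, self.features))
    self.bias = self.param('bias', self.bias_init, (self.features,))

  def __call__(self, inputs):
    return jnp.dot(inputs, self.kernel) + self.bias

explicit_model = ExplicitDense(features_in=4, features=3)
explicit_params = explicit_model.init(random.key(0), jnp.ones((4, 4)))
explicit_y = explicit_model.apply(explicit_params, jnp.ones((4, 4)))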

Variables and collections of variables#

As we’ve seen so far, working with models means working with:

  • A subclass of nn.Module;

  • A pytree of parameters for the model (typically from model.init());

However, this is not enough to cover everything that we would need for machine learning, especially neural networks. In some cases, you might want your neural network to keep track of some internal state while it runs (e.g. batch normalization layers). There is a way to declare variables beyond the parameters of the model with the variable method.

For demonstration purposes, we’ll implement a simplified mechanism similar to batch normalization: we’ll store running averages and subtract them from the input at training time. For proper batchnorm, you should use (and look at) the implementation here.

class BiasAdderWithRunningMean(nn.Module):
  decay: float = 0.99

  @nn.compact
  def __call__(self, x):
    # easy pattern to detect if we're initializing via empty variable tree
    is_initialized = self.has_variable('batch_stats', 'mean')
    ra_mean = self.variable('batch_stats', 'mean',
                            lambda s: jnp.zeros(s),
                            x.shape[1:])
    bias = self.param('bias', lambda rng, shape: jnp.zeros(shape), x.shape[1:])
    if is_initialized:
      ra_mean.value = self.decay * ra_mean.value + (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)

    return x - ra_mean.value + bias


key1, key2 = random.split(random.key(0), 2)
x = jnp.ones((10,5))
model = BiasAdderWithRunningMean()
variables = model.init(key1, x)
print('initialized variables:\n', variables)
y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
print('updated state:\n', updated_state)
initialized variables:
 FrozenDict({
    batch_stats: {
        mean: DeviceArray([0., 0., 0., 0., 0.], dtype=float32),
    },
    params: {
        bias: DeviceArray([0., 0., 0., 0., 0.], dtype=float32),
    },
})
updated state:
 FrozenDict({
    batch_stats: {
        mean: DeviceArray([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32),
    },
})

Here, updated_state contains only the state variables that are mutated by the model while it is applied to the data. To update the variables and get the new parameters of the model, we can use the following pattern:

for val in [1.0, 2.0, 3.0]:
  x = val * jnp.ones((10,5))
  y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
  old_state, params = flax.core.pop(variables, 'params')
  variables = flax.core.freeze({'params': params, **updated_state})
  print('updated state:\n', updated_state) # Shows only the mutable part
updated state:
 FrozenDict({
    batch_stats: {
        mean: DeviceArray([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32),
    },
})
updated state:
 FrozenDict({
    batch_stats: {
        mean: DeviceArray([[0.0299, 0.0299, 0.0299, 0.0299, 0.0299]], dtype=float32),
    },
})
updated state:
 FrozenDict({
    batch_stats: {
        mean: DeviceArray([[0.059601, 0.059601, 0.059601, 0.059601, 0.059601]], dtype=float32),
    },
})

From this simplified example, you should be able to derive a full BatchNorm implementation, or any other layer involving state. To finish, let’s add an optimizer to see how to handle both parameters updated by an optimizer and state variables.

This example doesn’t compute anything meaningful and is only for demonstration purposes.

from functools import partial

@partial(jax.jit, static_argnums=(0, 1))
def update_step(tx, apply_fn, x, opt_state, params, state):

  def loss(params):
    y, updated_state = apply_fn({'params': params, **state},
                                x, mutable=list(state.keys()))
    l = ((x - y) ** 2).sum()
    return l, updated_state

  (l, state), grads = jax.value_and_grad(loss, has_aux=True)(params)
  updates, opt_state = tx.update(grads, opt_state)
  params = optax.apply_updates(params, updates)
  return opt_state, params, state

x = jnp.ones((10,5))
variables = model.init(random.key(0), x)
state, params = flax.core.pop(variables, 'params')
del variables
tx = optax.sgd(learning_rate=0.02)
opt_state = tx.init(params)

for _ in range(3):
  opt_state, params, state = update_step(tx, model.apply, x, opt_state, params, state)
  print('Updated state: ', state)
Updated state:  FrozenDict({
    batch_stats: {
        mean: DeviceArray([[0.01, 0.01, 0.01, 0.01, 0.01]], dtype=float32),
    },
})
Updated state:  FrozenDict({
    batch_stats: {
        mean: DeviceArray([[0.0199, 0.0199, 0.0199, 0.0199, 0.0199]], dtype=float32),
    },
})
Updated state:  FrozenDict({
    batch_stats: {
        mean: DeviceArray([[0.029701, 0.029701, 0.029701, 0.029701, 0.029701]], dtype=float32),
    },
})

Note that the above function has a rather verbose signature, and it would not actually work with jax.jit() without the static_argnums above, because the function arguments tx and apply_fn are not “valid JAX types”.

Flax provides a handy wrapper - TrainState - that simplifies the above code. Check out flax.training.train_state.TrainState to learn more.
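
A minimal sketch of what that looks like (using the plain Dense regression setup from earlier; models with extra collections such as batch_stats still need that state threaded through apply, for example via a custom TrainState subclass):

from flax.training import train_state

dense = nn.Dense(features=5)
dense_params = dense.init(random.key(0), x_samples)['params']
train_st = train_state.TrainState.create(
    apply_fn=dense.apply, params=dense_params, tx=optax.adam(1e-3))

@jax.jit
def train_step(train_st, x, y):
  def loss_fn(p):
    pred = train_st.apply_fn({'params': p}, x)
    return jnp.mean((pred - y) ** 2)
  loss, grads = jax.value_and_grad(loss_fn)(train_st.params)
  # apply_gradients updates both the params and the optimizer state in one call.
  return train_st.apply_gradients(grads=grads), loss

train_st, loss = train_step(train_st, x_samples, y_samples)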

Exporting to TensorFlow’s SavedModel with jax2tf#

JAX released an experimental converter called jax2tf, which allows converting trained Flax models into TensorFlow’s SavedModel format (so they can be used for TF Hub, TF.lite, TF.js, or other downstream applications). The repository contains more documentation and various examples for Flax.
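
A hedged sketch of the conversion (this assumes TensorFlow is installed; the small Dense model, the tf_module/predict names, and the export path below are illustrative stand-ins):

import tensorflow as tf
from jax.experimental import jax2tf

dense = nn.Dense(features=5)
dense_params = dense.init(random.key(0), jnp.ones((1, 10)))

# Convert a pure prediction function; the trained parameters are closed over.
predict_tf = tf.function(
    jax2tf.convert(lambda x: dense.apply(dense_params, x)),
    input_signature=[tf.TensorSpec([None, 10], tf.float32)],
    autograph=False)

tf_module = tf.Module()
tf_module.predict = predict_tf
tf.saved_model.save(tf_module, '/tmp/flax_dense_savedmodel')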