flax.linen.GroupNorm#

class flax.linen.GroupNorm(num_groups=32, group_size=None, epsilon=1e-06, dtype=None, param_dtype=<class 'jax.numpy.float32'>, use_bias=True, use_scale=True, bias_init=<function zeros>, scale_init=<function ones>, axis_name=None, axis_index_groups=None, parent=<flax.linen.module._Sentinel object>, name=None)[source]#

Group normalization (arxiv.org/abs/1803.08494).

This op is similar to batch normalization, but statistics are shared across equally sized groups of channels rather than across the batch dimension. Group normalization therefore does not depend on the batch composition and does not require maintaining internal state to store statistics. The user should specify either the total number of channel groups or the number of channels per group.
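Example usage (a minimal sketch; the input shape, group count, and PRNG seed are illustrative):

    import jax
    import jax.numpy as jnp
    import flax.linen as nn

    # Illustrative input: a batch of four 8x8 feature maps with 32 channels.
    x = jnp.ones((4, 8, 8, 32))               # shape (N, H, W, C)

    # Normalize over 8 groups of 4 channels each. Passing num_groups=None and
    # group_size=4 instead would define the same layer.
    layer = nn.GroupNorm(num_groups=8)
    variables = layer.init(jax.random.PRNGKey(0), x)
    y = layer.apply(variables, x)             # same shape as x: (4, 8, 8, 32)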

num_groups#

the total number of channel groups. The default value of 32 was proposed in the original group normalization paper.

Type

Optional[int]

group_size#

the number of channels in a group.

Type

Optional[int]

epsilon#

A small float added to the variance to avoid dividing by zero.

Type

float

dtype#

the dtype of the result (default: infer from input and params).

Type

Optional[Any]

param_dtype#

the dtype passed to parameter initializers (default: float32).

Type

Any

use_bias#

If True, bias (beta) is added.

Type

bool

use_scale#

If True, multiply by scale (gamma). When the next layer is linear (this also applies to e.g. nn.relu), this can be disabled since the scaling will be done by the next layer; see the sketch after this attribute list.

Type

bool

bias_init#

Initializer for the bias, zero by default.

Type

Callable[[Any, Tuple[int, …], Any], Any]

scale_init#

Initializer for the scale, one by default.

Type

Callable[[Any, Tuple[int, …], Any], Any]

axis_name#

the axis name used to combine the normalization statistics from multiple devices. See jax.pmap for a description of axis names (default: None). This is only needed if the model is subdivided across devices, i.e. the array being normalized is sharded across devices within a pmap.

Type

Optional[str]

axis_index_groups#

groups of axis indices within that named axis representing subsets of devices to reduce over (default: None). For example, [[0, 1], [2, 3]] would independently normalize over the examples on the first two and last two devices. See jax.lax.psum for more details.

Type

Any
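As noted under use_scale, the scale parameter can be dropped when the following layer can absorb it. A minimal sketch of this pattern (the module name Block, the group count, and the feature size are illustrative, not part of the API):

    import flax.linen as nn

    class Block(nn.Module):
        @nn.compact
        def __call__(self, x):
            # The Dense layer below can absorb any per-channel scaling, so the
            # gamma parameter is redundant and use_scale=False saves a parameter.
            x = nn.GroupNorm(num_groups=8, use_scale=False)(x)
            return nn.Dense(features=64)(x)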

__call__(x)[source]#

Applies group normalization to the input (arxiv.org/abs/1803.08494).

Parameters

x – the input of shape N…C, where N is a batch dimension and C is a channels dimension. The … represents an arbitrary number of extra dimensions over which statistics are accumulated.

Returns

Normalized inputs (with the same shape as the input).
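Because the extra dimensions are only used to accumulate statistics, the same layer definition works for inputs of different rank as long as the channel count is compatible. A minimal sketch with illustrative shapes:

    import jax
    import jax.numpy as jnp
    import flax.linen as nn

    layer = nn.GroupNorm(num_groups=4)
    x1d = jnp.ones((2, 100, 16))   # (N, L, C): one extra dimension
    x2d = jnp.ones((2, 8, 8, 16))  # (N, H, W, C): two extra dimensions

    # Parameters depend only on the channel count, so both calls create a
    # scale and bias of shape (16,).
    y1 = layer.apply(layer.init(jax.random.PRNGKey(0), x1d), x1d)  # (2, 100, 16)
    y2 = layer.apply(layer.init(jax.random.PRNGKey(0), x2d), x2d)  # (2, 8, 8, 16)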

Methods