# Core examples
Core examples are hosted in the Flax repository on GitHub, in the `examples` directory.
Each example is designed to be self-contained and easily forkable, while reproducing relevant results in different areas of machine learning.
As discussed in #231, we decided to use a standard pattern for all examples, including the simplest ones (like MNIST). This makes every example a bit more verbose, but once you know one example, you know the structure of all of them. Having unit tests and integration tests is also very useful when you fork these examples.
Some of the examples below have a link “Interactive🕹” that lets you run them directly in Colab.
## Image classification
- MNIST - Interactive🕹: Convolutional neural network for MNIST classification (featuring simple code).
- ImageNet - Interactive🕹: ResNet-50 on ImageNet with weight decay (featuring multi-host SPMD, custom preprocessing, checkpointing, dynamic scaling, mixed precision).
## Reinforcement learning
- Proximal Policy Optimization: Learning to play Atari games (featuring single-host SPMD, RL setup).
## Natural language processing
- Sequence-to-sequence for number addition (featuring simple code, LSTM state handling, on-the-fly data generation).
- Part-of-speech tagging: Simple transformer encoder model using the Universal Dependencies dataset.
- Sentiment classification with an LSTM model.
- Transformer encoder/decoder model trained on WMT: Translating English/German (featuring multi-host SPMD, dynamic bucketing, attention cache, packed sequences, recipe for TPU training on GCP).
- Transformer encoder trained on the One Billion Word Benchmark: autoregressive language modeling, based on the WMT example above.
## Generative models
- Variational auto-encoder: Trained on binarized MNIST (featuring simple code, `vmap`).
## Graph modeling
- Graph Neural Networks: Molecular predictions on ogbg-molpcba from the Open Graph Benchmark.
## Contributing to core Flax examples
Most of the core Flax examples on GitHub follow a structure that the Flax dev team has found to work well for Flax projects. The team strives to make these examples easy to explore and fork. In particular (as per GitHub Issue #231):
- README: contains links to paper, command line, TensorBoard metrics.
- Focus: an example is about a single model/dataset.
- Configs: we use `ml_collections.ConfigDict` stored under `configs/` (see the sketch after this list).
- Tests: an executable `main.py` loads `train.py`, which has a `train_test.py`.
- Data: is read from TensorFlow Datasets.
- Standalone: every directory is self-contained.
- Requirements: versions are pinned in `requirements.txt`.
- Boilerplate: is reduced by using `clu`.
- Interactive: the example can be explored with a Colab.
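
To make the Configs and Tests conventions concrete, here is a minimal sketch of a `configs/default.py` and the `main.py` that loads it. The hyperparameter fields and the `train_and_evaluate` entry point are illustrative assumptions, not code copied from any particular example; consult an actual example directory (such as MNIST) for the authoritative version.

```python
# configs/default.py -- minimal sketch of the config convention.
# Field names and values below are hypothetical placeholders.
import ml_collections


def get_config():
  """Returns the default hyperparameter configuration."""
  config = ml_collections.ConfigDict()
  config.learning_rate = 0.1
  config.batch_size = 128
  config.num_epochs = 10
  return config
```

```python
# main.py -- minimal sketch of the executable entry point.
# Real examples also set up logging, the JAX platform, clu utilities, etc.
from absl import app
from absl import flags
from ml_collections import config_flags

import train  # the example's train.py; assumed to expose train_and_evaluate()

FLAGS = flags.FLAGS
flags.DEFINE_string('workdir', None, 'Directory to store model data.')
config_flags.DEFINE_config_file(
    'config', None, 'File path to the training hyperparameter configuration.')


def main(argv):
  if len(argv) > 1:
    raise app.UsageError('Too many command-line arguments.')
  # Delegate all real work to train.py so that train_test.py can test it.
  train.train_and_evaluate(FLAGS.config, FLAGS.workdir)


if __name__ == '__main__':
  app.run(main)
```

With this layout, a run would look something like `python main.py --workdir=/tmp/my_example --config=configs/default.py`, and `train_test.py` can import `train` directly without going through the flag machinery.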