🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and Flax.
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
A Library for Uncertainty Quantification.
Long Range Arena for Benchmarking Efficient Transformers
Original implementation of Prompt Tuning from Lester et al., 2021
Tevatron - A flexible toolkit for neural retrieval research and development.
Run effective large-batch contrastive learning beyond GPU/TPU memory constraints
Orbax provides common checkpointing and persistence utilities for JAX users
A JAX-based library for designing and training transformer models from scratch.
Pretrained deep learning models for JAX/Flax: StyleGAN2, GPT2, VGG, ResNet, etc.
Train very large language models in JAX.
Unofficial JAX implementations of deep learning research papers
KoCLIP: Korean port of OpenAI CLIP, in Flax
Official code for "Maximum Likelihood Training of Score-Based Diffusion Models", NeurIPS 2021 (spotlight)
Clean single-file implementation of offline RL algorithms in JAX
EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc. in JAX with Flax Linen and Objax
Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
JAX implementation of deep RL agents with resets from the paper "The Primacy Bias in Deep Reinforcement Learning"
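Most of the entries above build on the Flax Linen module API. As a minimal sketch of that API (the `MLP` module, its layer sizes, and the dummy batch are illustrative, not taken from any listed project; it assumes only `jax` and `flax` are installed):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    """A toy two-layer perceptron; hyperparameters are module fields."""
    hidden: int = 128
    out: int = 10

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)  # submodules are declared inline under @nn.compact
        x = nn.relu(x)
        return nn.Dense(self.out)(x)

model = MLP()
x = jnp.ones((4, 32))                          # dummy batch: 4 examples, 32 features
params = model.init(jax.random.PRNGKey(0), x)  # parameters live in a pytree, outside the module
logits = model.apply(params, x)                # apply is a pure function of (params, inputs)
print(logits.shape)                            # (4, 10)
```

Because parameters are an external pytree rather than hidden module state, models defined this way compose directly with `jax.jit` and `jax.grad`, and with the checkpointing, training, and evaluation utilities the libraries above provide.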