Overview
MLX Examples is a curated collection of standalone, runnable examples demonstrating how to use the MLX framework across a wide range of machine learning workloads. Maintained under the ml-explore GitHub organization, the repository provides practical recipes, from minimal demos to complete pipelines, for model training, fine-tuning, generation, and multimodal experiments spanning text, image, audio, and other domains.
What it contains
- Text models: Transformer language model training, minimal large-scale generation examples (LLaMA, Mistral), a mixture-of-experts example (Mixtral), parameter-efficient fine-tuning recipes (LoRA / QLoRA), T5 for multi-task text-to-text transfer, and BERT for bidirectional language understanding (see the generation sketch after this list).
- Image models: Generative model examples (FLUX, Stable Diffusion / SDXL), image classification (ResNets on CIFAR-10), and a convolutional VAE on MNIST.
- Audio models: Speech recognition with OpenAI Whisper, audio compression/generation with Meta EnCodec, and music generation using Meta MusicGen.
- Multimodal: Joint text-image embeddings with CLIP, image-conditioned text generation with LLaVA, and promptable segmentation with Segment Anything (SAM).
- Other: A graph convolutional network (GCN) example, normalizing flows (Real NVP) for density estimation and sampling, and other research-oriented demos.
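To give a flavor of the LLM examples, the sketch below loads a community-converted checkpoint and generates text with the mlx-lm package that grew out of this repository's LLM directory. This is a minimal illustration, not the repository's own script: it assumes mlx-lm is installed (`pip install mlx-lm`), and the checkpoint name is only an example of the converted models hosted under mlx-community.

```python
# Minimal LLM generation sketch with mlx-lm (assumes `pip install mlx-lm`).
# The checkpoint name is illustrative; browse the mlx-community
# organization on Hugging Face for available conversions.
from mlx_lm import load, generate

# Download the converted weights and tokenizer from Hugging Face.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Generate a completion for a prompt.
text = generate(model, tokenizer, prompt="Explain LoRA in one sentence.", max_tokens=100)
print(text)
```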
Design goals and highlights
- Practicality: examples are written to be runnable with minimal setup so users can quickly experiment with MLX features and model checkpoints.
- Modality coverage: demonstrates how MLX can be used across text, vision, audio, and multimodal tasks, making it useful for a broad audience.
- Interoperability: points to converted checkpoints and community models on Hugging Face (mlx-community) for easy download and reuse.
- Learning path: the README recommends that newcomers start with the MNIST example and includes pointers to more advanced examples (LLMs, MoE, diffusion models, etc.); see the training sketch after this list.
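To show what the recommended MNIST starting point teaches, here is a self-contained sketch of an MLX training step written in the same spirit. It is not the repository's actual code: the layer sizes, hyperparameters, and random batch are placeholders standing in for real MNIST data.

```python
# A minimal MLX training-step sketch in the spirit of the MNIST example
# (not the repository's actual code; shapes and data are placeholders).
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def __call__(self, x):
        return self.fc2(nn.relu(self.fc1(x)))

def loss_fn(model, x, y):
    # Per-example cross entropy, averaged over the batch.
    return nn.losses.cross_entropy(model(x), y).mean()

model = MLP()
optimizer = optim.SGD(learning_rate=0.1)

# value_and_grad returns the loss and gradients w.r.t. the model parameters.
loss_and_grad = nn.value_and_grad(model, loss_fn)

# One step on a random placeholder batch (real code would load MNIST).
x = mx.random.normal((32, 784))
y = mx.random.randint(0, 10, (32,))
loss, grads = loss_and_grad(model, x, y)
optimizer.update(model, grads)
mx.eval(model.parameters(), optimizer.state)  # force MLX's lazy computation
print(loss.item())
```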
Usage
Each example lives in its own directory with code, a README, and the configuration needed to run training or inference. The repository documents required dependencies and frequently links to model checkpoints (or where to obtain them). Users can adapt the scripts for local machines (including Apple Silicon) or scale them to larger infrastructure.
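As a concrete instance of this per-directory pattern, the snippet below sketches speech transcription with the Whisper port, assuming the mlx-whisper package (published from this repository) is installed; "speech.wav" is a placeholder path for a local audio file.

```python
# Hedged sketch of the Whisper example's Python API
# (assumes `pip install mlx-whisper`; "speech.wav" is a placeholder path).
import mlx_whisper

# Transcribe a local audio file with the default model.
result = mlx_whisper.transcribe("speech.wav")
print(result["text"])
```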
Community, citation & contribution
- The README includes a BibTeX citation for the MLX software suite and acknowledges core contributors involved in MLX's initial development (Awni Hannun, Jagrit Digani, Angelos Katharopoulos, Ronan Collobert).
- Contributions are welcomed via issues and pull requests; the repo encourages contributors to add their names to acknowledgements when appropriate.
- The repository references the MLX Community organization on Hugging Face for preconverted checkpoints and additional resources.
Who should use it
Researchers, engineers, and students who want hands-on examples for training and running modern ML models with the MLX framework, especially those interested in cross-modality workflows, efficient fine-tuning, and practical recipes for reproducing or extending model behavior.
