Boltz

Boltz is an open-source family of deep learning models (Boltz-1, Boltz-2) for biomolecular interaction prediction, developed in the open as a GitHub project. It predicts complex structures and binding affinities, aiming to approach or exceed AlphaFold3's structural accuracy while delivering fast, practical affinity predictions comparable to physics-based methods at a fraction of the compute cost. Code and weights are released under the MIT license.

Introduction

Overview

Boltz is a family of open-source models and a supporting codebase for predicting biomolecular interactions, including complex structures and ligand binding affinities. The project provides two major model releases documented in technical reports: Boltz-1 (first release) and Boltz-2 (later release), with Boltz-2 emphasizing joint modeling of structure and binding affinity. The authors report that Boltz-1 approaches AlphaFold3 accuracy and that Boltz-2 pushes affinity prediction toward the accuracy of physics-based free-energy perturbation (FEP) methods while running orders of magnitude faster.

Key features
  • Structure and affinity prediction: Boltz jointly models complex 3D structures and binding affinity metrics so it can be used both for predicting physical conformations and for virtual screening/lead optimization.
  • Two affinity outputs: affinity_probability_binary (the probability that a ligand is a binder rather than a decoy, useful for hit discovery) and affinity_pred_value (a continuous estimate reported as log10(IC50), used to compare binders and small modifications during optimization); see the sketch after this list.
  • Open-source with weights: All code and pretrained weights are provided under the MIT license for academic and commercial use.
  • Performance goals: The project claims structural accuracy approaching AlphaFold3 and affinity predictions approaching FEP accuracy while running ~1000x faster than FEP, making large-scale in silico screening practical.
  • GPU acceleration and hardware support: running on NVIDIA GPUs is recommended (the models use NVIDIA cuEquivariance kernels when available); community forks add support for other hardware (e.g., Tenstorrent).
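
To make the two affinity outputs above concrete, the sketch below shows how they might be read and interpreted after a prediction run. The two field names come from the project docs, but the output path, file name, and the µM unit assumed for IC50 are illustrative guesses; check the README's prediction-output documentation for the exact layout.

    import json
    from pathlib import Path

    # Hypothetical output location: Boltz writes per-input prediction folders;
    # the exact directory layout and file name are documented in the README.
    affinity_file = Path("boltz_results_example/predictions/example/affinity_example.json")

    with affinity_file.open() as fh:
        affinity = json.load(fh)

    # Probability that the ligand is a true binder rather than a decoy
    # (useful for ranking compounds during hit discovery).
    p_binder = affinity["affinity_probability_binary"]

    # Continuous affinity estimate reported as log10(IC50); converting back
    # to IC50 assumes µM units, which should be verified against the technical report.
    log10_ic50 = affinity["affinity_pred_value"]
    ic50 = 10 ** log10_ic50

    print(f"P(binder) = {p_binder:.2f}, predicted IC50 ≈ {ic50:.2f} (assumed µM)")
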
Installation & usage
  • Install from PyPI (recommended): pip install boltz[cuda] -U (omit [cuda] for CPU-only installs).
  • Install from source for daily updates: clone the repository and pip install -e .[cuda].
  • Command-line inference: boltz predict input_path --use_msa_server, where input_path is a YAML file or a directory of inputs describing the molecules and requested predictions; see the sketch after this list.
  • Prediction outputs include structural data and affinity fields; the README and docs provide detailed prediction and input format instructions.
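
As an end-to-end illustration of the commands above, the sketch below writes a small YAML input (one protein chain, one ligand given as SMILES, with an affinity prediction requested for the ligand) and then calls the CLI. The YAML layout follows the input format described in the README but should be checked against the current docs; the protein sequence and SMILES string are placeholders only.

    import subprocess
    from pathlib import Path

    # Minimal input in the YAML layout described by the README: one protein
    # chain (A), one ligand (B) given as SMILES, and an affinity prediction
    # requested for the ligand. Sequence and SMILES are placeholders.
    input_yaml = """\
    version: 1
    sequences:
      - protein:
          id: A
          sequence: MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ
      - ligand:
          id: B
          smiles: "CC(=O)Oc1ccccc1C(=O)O"
    properties:
      - affinity:
          binder: B
    """

    Path("example.yaml").write_text(input_yaml)

    # --use_msa_server has the MSA generated remotely instead of requiring a
    # precomputed alignment file alongside the input.
    subprocess.run(["boltz", "predict", "example.yaml", "--use_msa_server"], check=True)
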
Training & evaluation
  • The repository contains training and evaluation pipelines (Boltz-1 training code is available; updated Boltz-2 training/evaluation code is noted as coming soon in the README).
  • The team plans to provide evaluation scripts and prediction sets comparing Boltz-1, Boltz-2, Chai-1, and AlphaFold3 on benchmark datasets, and affinity evaluations on FEP+ and other test sets.
Licensing & citation
  • License: MIT — code and weights can be used in academic and commercial projects.
  • Citation: The README provides BibTeX entries for Boltz-1 (Wohlwend et al., 2024) and Boltz-2 (Passaro et al., 2025) technical reports hosted on bioRxiv.
Who maintains it
  • The repository is maintained by Jeremy Wohlwend (GitHub: jwohlwend) and collaborators; the core author lists for the technical reports include Saro Passaro, Gabriele Corso, Jeremy Wohlwend, Regina Barzilay, Tommi Jaakkola, Noah Getz, and others.
When to use
  • Boltz is appropriate for researchers and practitioners working on protein–ligand complex modeling, virtual screening, and hit-to-lead optimization, and for anyone who needs a faster, approximate alternative to expensive physics-based affinity calculations in early-stage drug discovery.
Notes & caveats
  • The README emphasizes using GPU acceleration for practical speed; CPU-only runs are significantly slower.
  • As with any ML-based affinity predictor, users should validate predictions experimentally and be mindful of dataset and domain limitations described in the technical reports.

Information

  • Website: github.com
  • Authors: Saro Passaro, Gabriele Corso, Jeremy Wohlwend, Mateo Reveiz, Stephan Thaler, Vignesh Ram Somnath, Noah Getz, Tally Portnoi, Julien Roy, Hannes Stark, Tommi Jaakkola, Regina Barzilay
  • Published date: 2024/11/17
