XGBoost is an open-source, scalable gradient-boosting library renowned for its speed, accuracy, and support for parallel, distributed and GPU-accelerated training.
LightGBM is an open-source gradient-boosting framework that delivers fast, memory-efficient tree-based learning for classification, regression and ranking tasks.
CatBoost is an open-source gradient-boosting library from Yandex that natively handles categorical features and offers fast CPU and GPU training.
Ray is an open-source distributed compute engine that lets you scale Python and AI workloads—from data processing to model training and serving—without deep distributed-systems expertise.
Megatron-LM is NVIDIA’s model-parallel training library for GPT-style transformers at multi-billion-parameter scale.
A PyTorch-based system for large-scale model-parallel training, memory optimization and heterogeneous acceleration.
FastChat is an open platform for training, serving and evaluating chat-oriented LLMs, powering Vicuna and Chatbot Arena.
LLaMA-Factory provides a zero-code CLI and WebUI for fine-tuning 100+ LLMs and VLMs with LoRA, QLoRA, PPO, DPO and more.
OpenRLHF is an open-source, Ray-based framework for scalable Reinforcement Learning from Human Feedback (RLHF).
verl (Volcano Engine Reinforcement Learning) is a library for efficient LLM post-training and the open-source implementation of HybridFlow.