A distributed KV-cache store and transfer engine that decouples prefill from decode to scale vLLM serving clusters.
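The prefill/decode split described above can be sketched as a handoff through a shared cache store: a prefill worker computes the prompt's KV cache once, publishes it, and a separate decode worker resumes generation from it instead of recomputing attention states. This is a toy in-memory sketch; every class and function name here is illustrative (real engines move GPU tensor blocks, typically over RDMA or NVLink), not an API of any actual project.

```python
from dataclasses import dataclass, field

@dataclass
class KVCacheStore:
    """Minimal in-memory stand-in for a distributed KV-cache store."""
    _entries: dict = field(default_factory=dict)

    def put(self, request_id: str, kv_blocks: list) -> None:
        self._entries[request_id] = kv_blocks

    def get(self, request_id: str) -> list:
        # Hand the cache over to the decode side exactly once.
        return self._entries.pop(request_id)

def prefill(request_id: str, prompt_tokens: list, store: KVCacheStore) -> None:
    # Stand-in for the attention pass: pretend each token's KV block
    # is just the token value doubled.
    kv_blocks = [t * 2 for t in prompt_tokens]
    store.put(request_id, kv_blocks)

def decode(request_id: str, store: KVCacheStore) -> list:
    # The decode worker starts from the transferred cache rather than
    # re-running prefill on the prompt.
    kv_blocks = store.get(request_id)
    return [b + 1 for b in kv_blocks]  # stand-in for generated tokens

store = KVCacheStore()
prefill("req-1", [1, 2, 3], store)
print(decode("req-1", store))  # [3, 5, 7]
```

The point of the separation is that prefill (compute-bound) and decode (memory-bandwidth-bound) can then be scaled and scheduled on different GPU pools independently.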
The vLLM project's control plane that orchestrates cost-efficient, plug-and-play LLM inference infrastructure.
NVIDIA Dynamo is an open-source, high-throughput, low-latency inference framework that scales generative-AI and reasoning models across large, multi-node GPU clusters.