This paper introduces GPipe, a pipeline-parallelism library for efficiently training neural networks that are too large to fit on a single accelerator. GPipe partitions a model into sequential stages placed on different accelerators, splits each mini-batch into micro-batches that are pipelined across those stages, and accumulates their gradients so that the update at the end of each mini-batch remains synchronous. This design yields near-linear speedup as devices are added while preserving model quality and training stability. The authors demonstrate flexibility across tasks by reporting state-of-the-art results in large-scale image classification (a 557M-parameter AmoebaNet) and multilingual machine translation (a 6B-parameter Transformer). Its impact lies in making massive-model training practical and accessible across diverse architectures without relying on high-speed interconnects or custom model designs.
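To make the micro-batch pipelining concrete, the sketch below simulates the forward-pass schedule described in the paper: with K partitions and M micro-batches, stage k processes micro-batch m at clock tick k + m, and the idle "bubble" left by pipeline fill and drain is O((K - 1) / (M + K - 1)), which shrinks as M grows. This is an illustrative toy, not GPipe's actual API; the function names and the example values K = 4, M = 8 are my own.

```python
# Hypothetical sketch of GPipe-style micro-batch scheduling (not the library's API).

def forward_schedule(num_stages: int, num_micro_batches: int):
    """Return, for each clock tick, the (stage, micro_batch) pairs active at that tick."""
    total_ticks = num_stages + num_micro_batches - 1  # fill + steady state + drain
    schedule = []
    for t in range(total_ticks):
        # Stage k works on micro-batch (t - k) if that index is valid.
        active = [(k, t - k) for k in range(num_stages)
                  if 0 <= t - k < num_micro_batches]
        schedule.append(active)
    return schedule


def bubble_fraction(num_stages: int, num_micro_batches: int) -> float:
    """Fraction of stage-ticks left idle by pipeline fill/drain: (K-1)/(M+K-1)."""
    return (num_stages - 1) / (num_micro_batches + num_stages - 1)


if __name__ == "__main__":
    K, M = 4, 8  # illustrative values: 4 partitions, 8 micro-batches
    for t, active in enumerate(forward_schedule(K, M)):
        print(f"tick {t}: " + ", ".join(f"stage{k}<-mb{m}" for k, m in active))
    print(f"bubble fraction: {bubble_fraction(K, M):.2f}")
```

Running the toy shows the familiar staircase pattern: early and late ticks keep only some stages busy, and increasing the number of micro-batches relative to partitions drives the bubble fraction toward zero, which is why the paper reports near-linear speedup in practice.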