YOLOv5

YOLOv5 is an open-source PyTorch-based computer vision repository by Ultralytics, focused on real-time object detection with extended support for segmentation and classification. It is known for ease of use, speed, multiple pre-trained model sizes, and broad export/deployment support (ONNX, TFLite, CoreML, TensorRT). The repo includes training and inference scripts, tutorials, and integrations for production-ready workflows.

Introduction

Overview

YOLOv5 (Ultralytics) is an open-source computer vision repository built on PyTorch that provides production-ready implementations for object detection, instance segmentation, and image classification. Designed for accessibility and real-world performance, YOLOv5 ships with multiple pretrained model sizes (nano, small, medium, large, x) to balance latency and accuracy across devices.

Key features
  • Multiple model variants (yolov5n/s/m/l/x) targeting different speed/accuracy trade-offs.
  • Support for detection, instance segmentation (seg) and classification (cls) workflows.
  • Easy-to-use CLI and Python APIs for training (train.py), inference (detect.py / predict.py), evaluation (val.py) and export (export.py).
  • Native PyTorch implementation with PyTorch Hub compatibility for simple model loading.
  • Export and deployment options: ONNX, TensorRT, CoreML, TFLite and Docker images for edge and server deployment.
  • Extensive tutorials and notebooks (Colab, Kaggle) for quickstarts: training custom datasets, multi-GPU training, pruning/quantization, TTA, ensembling, transfer learning, and more.
  • Integrations with ecosystem tools: Weights & Biases, Comet ML, Roboflow, Neural Magic DeepSparse, Ultralytics HUB, etc.
  • CI tests, release artifacts (model checkpoints), and active community contributions on GitHub and Discord.
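
The PyTorch Hub compatibility mentioned above makes model loading a few lines of Python. A minimal sketch (the first run downloads the repo code, the yolov5s checkpoint, and the sample image, so it needs network access):

```python
import torch

# Load a pretrained YOLOv5s model from PyTorch Hub
# (first run fetches the repo and the yolov5s.pt checkpoint).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference; the model accepts a file path, URL, PIL image, or numpy array.
results = model("https://ultralytics.com/images/zidane.jpg")

results.print()  # print a summary of detections to stdout

# Detections as a pandas DataFrame: xmin, ymin, xmax, ymax, confidence, class, name
detections = results.pandas().xyxy[0]
print(detections.head())
```

The same `results` object also exposes `results.save()` and `results.show()` for annotated output images.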

Typical workflow
  1. Clone the repo and install requirements (Python >= 3.8, PyTorch >= 1.8).
  2. Prepare dataset or use built-in datasets (COCO, COCO128, COCO-seg, ImageNet variants).
  3. Train using train.py with chosen model config and hyperparameters; resume from checkpoints or use pretrained weights.
  4. Validate with val.py to obtain mAP metrics and speed benchmarks.
  5. Export with export.py to the desired runtime format for deployment.
  6. Run inference via detect.py, PyTorch Hub, or the provided deployment runtimes.

Performance & checkpoints

YOLOv5 provides pretrained checkpoints with published mAP and speed numbers for each model size. Models are commonly benchmarked on COCO val2017 for mAP@0.5:0.95 and for inference speed on GPU and CPU. Segmentation and classification variants are available as release artifacts.

Licensing & Commercial use

The repository is distributed under AGPL-3.0 for open-source use. Ultralytics also offers an Enterprise License for commercial integration that avoids AGPL obligations; contact Ultralytics for licensing details.

Community & ecosystem

YOLOv5 has an active community (GitHub issues, discussions, Discord, forums, and external tutorials). Ultralytics maintains documentation and guides at the official docs site. The project emphasizes reproducibility with release assets, CI, example notebooks (Colab/Kaggle), and clear tutorials for deployment on devices such as NVIDIA Jetson.

Notes

While Ultralytics continues developing newer families (the repo references later YOLO releases from Ultralytics), YOLOv5 remains widely used for many research and production applications due to its maturity, tooling, and extensive integration options.

Information

  • Website: github.com
  • Authors: Ultralytics
  • Published date: 2020/05/18
