# yolov5
**Repository Path**: james9/yolov5
## Basic Information
- **Project Name**: yolov5
- **Description**: YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
- **Primary Language**: Unknown
- **License**: GPL-3.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-10-27
- **Last Updated**: 2021-11-17
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics
open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
## Documentation
See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment.
## Quick Start Examples
### Install

[**Python>=3.6.0**](https://www.python.org/) is required, with all dependencies in
[requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) installed, including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
```bash
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
```
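A quick way to verify the install is to confirm that PyTorch imports and can see your GPU; a minimal sanity check (CPU-only setups will simply print `False`, and inference still works, just slower):

```python
import torch

# Confirm the PyTorch install: print the version and whether a
# CUDA-capable GPU is visible.
print(torch.__version__)
print(torch.cuda.is_available())
```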
### Inference
Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
```python
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5l, yolov5x, custom
# Images
img = 'https://ultralytics.com/images/zidane.jpg' # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print() # or .show(), .save(), .crop(), .pandas(), etc.
```
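Beyond `results.print()`, detections can be pulled out for downstream code. A minimal sketch, assuming the threshold attributes and `.pandas()` accessor exposed by the Hub model wrapper:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
model.conf = 0.25  # confidence threshold (0-1)
model.iou = 0.45   # NMS IoU threshold (0-1)

results = model('https://ultralytics.com/images/zidane.jpg')

# One DataFrame per image, one row per box:
# xmin, ymin, xmax, ymax, confidence, class, name.
print(results.pandas().xyxy[0])
```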
### Inference with `detect.py`
`detect.py` runs inference on a variety of sources, downloading models automatically from
the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
$ python detect.py --source 0 # webcam
file.jpg # image
file.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/NUsoVlDFqZg' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
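The same flexibility is available from Python: since the Hub model accepts lists of mixed sources (see the `# Images` comment earlier), a batch can be run in one call. A minimal sketch; the local path assumes you are in the cloned repo:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# A batch can mix source types: URLs, file paths, PIL/OpenCV/numpy images.
imgs = [
    'https://ultralytics.com/images/zidane.jpg',  # URL
    'data/images/bus.jpg',                        # file shipped with the repo
]

results = model(imgs)
results.print()
results.save()  # write annotated copies (default location under runs/)
```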
### Training

Run the commands below to reproduce results on the
[COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (the dataset auto-downloads on
first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU is proportionally faster). Use the
largest `--batch-size` your GPU allows (the batch sizes shown are for 16 GB devices).
```bash
$ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
yolov5m 40
yolov5l 24
yolov5x 16
```
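Once a run finishes, the resulting checkpoint can be loaded back through PyTorch Hub via the `custom` entry point mentioned in the inference snippet. A minimal sketch; the checkpoint path is YOLOv5's default for a first run and is an assumption about your setup:

```python
import torch

# Load your own trained weights through the 'custom' Hub entry point.
# 'runs/train/exp/weights/best.pt' is the default location for a first
# training run; adjust to match your run directory.
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='runs/train/exp/weights/best.pt')

results = model('https://ultralytics.com/images/zidane.jpg')
results.print()
```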
### Tutorials
* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 🚀 RECOMMENDED
* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results) ☘️ RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289) 🌟 NEW
* [Supervisely Ecosystem](https://github.com/ultralytics/yolov5/issues/2518) 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) ⭐ NEW
* [TorchScript, ONNX, CoreML Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314) ⭐ NEW
* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)
## Environments and Integrations
Get started in seconds with our verified environments and integrations,
including [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) for automatic YOLOv5 experiment
logging.
## Compete and Win
We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
## Why YOLOv5

*YOLOv5-P5 640 Figure*

**Figure Notes**
* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size
32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by
`python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
### Pretrained Checkpoints
[assets]: https://github.com/ultralytics/yolov5/releases
|Model |size<br>(pixels) |mAP<sup>val</sup><br>0.5:0.95 |mAP<sup>test</sup><br>0.5:0.95 |mAP<sup>val</sup><br>0.5 |Speed<br>V100 (ms) | |params<br>(M) |FLOPs<br>640 (B)
|--- |--- |--- |--- |--- |--- |---|--- |---
|[YOLOv5s][assets] |640 |36.7 |36.7 |55.4 |**2.0** | |7.3 |17.0
|[YOLOv5m][assets] |640 |44.5 |44.5 |63.1 |2.7 | |21.4 |51.3
|[YOLOv5l][assets] |640 |48.2 |48.2 |66.9 |3.8 | |47.0 |115.4
|[YOLOv5x][assets] |640 |**50.4** |**50.4** |**68.8** |6.1 | |87.7 |218.8
| | | | | | | | |
|[YOLOv5s6][assets] |1280 |43.3 |43.3 |61.9 |**4.3** | |12.7 |17.4
|[YOLOv5m6][assets] |1280 |50.5 |50.5 |68.7 |8.4 | |35.9 |52.4
|[YOLOv5l6][assets] |1280 |53.4 |53.4 |71.1 |12.3 | |77.2 |117.7
|[YOLOv5x6][assets] |1280 |**54.4** |**54.4** |**72.0** |22.4 | |141.8 |222.9
| | | | | | | | |
|[YOLOv5x6][assets] TTA |1280 |**55.0** |**55.0** |**72.0** |70.8 | |- |-
**Table Notes**
* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results; all other AP results
  denote val2017 accuracy.
* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP**
by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a
GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and
includes FP16 inference, postprocessing and NMS. **Reproduce speed**
by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45 --half`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale
augmentation. **Reproduce TTA** by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
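The TTA in the last note is also available through the PyTorch Hub interface; a minimal sketch, assuming the `size` and `augment` keyword arguments accepted by the Hub wrapper:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')

# Test-Time Augmentation: run at a larger input size with extra
# augmented (flipped/scaled) passes, trading speed for accuracy.
results = model('https://ultralytics.com/images/zidane.jpg',
                size=1280, augment=True)
results.print()
```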
## Contribute
We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see
our [Contributing Guide](CONTRIBUTING.md) to get started.
## Contact
For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business or
professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).