ultralytics 8.0.141
create new SettingsManager (#3790)
@ -85,4 +85,4 @@ sudo swapon /swapfile
free -h # check memory
```

Now you have successfully set up and run YOLOv5 on an AWS Deep Learning instance. Enjoy training, testing, and deploying your object detection models!
@ -61,4 +61,4 @@ python detect.py --weights yolov5s.pt --source path/to/images # run inference o
python export.py --weights yolov5s.pt --include onnx coreml tflite # export models to other formats
```

<p align="center"><img width="1000" src="https://user-images.githubusercontent.com/26833433/142224770-6e57caaf-ac01-4719-987f-c37d1b6f401f.png"></p>
@ -46,4 +46,4 @@ python detect.py --weights yolov5s.pt --source path/to/images # run inference o
python export.py --weights yolov5s.pt --include onnx coreml tflite # export models to other formats
```

<img width="1000" alt="GCP terminal" src="https://user-images.githubusercontent.com/26833433/142223900-275e5c9e-e2b5-43f7-a21c-35c4ca7de87c.png">
@ -87,4 +87,4 @@ This badge signifies that all [YOLOv5 GitHub Actions](https://github.com/ultraly
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://discord.gg/2wNGbc6g9X" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/blob/main/social/logo-social-discord.png" width="3%" alt="" /></a>
</div>
@ -77,4 +77,4 @@ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml -
yolov5x 16
```

<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
@ -160,9 +160,9 @@ The objectness losses of the three prediction layers (`P3`, `P4`, `P5`) are weig

The YOLOv5 architecture makes some important changes to the box prediction strategy compared to earlier versions of YOLO. In YOLOv2 and YOLOv3, the box coordinates were directly predicted using the activation of the last layer.

$$ b_x = \sigma(t_x) + c_x $$

$$ b_y = \sigma(t_y) + c_y $$

$$ b_w = p_w \cdot e^{t_w} $$

$$ b_h = p_h \cdot e^{t_h} $$

<img src="https://user-images.githubusercontent.com/31005897/158508027-8bf63c28-8290-467b-8a3e-4ad09235001a.png#pic_center" width=40%>
@ -171,9 +171,9 @@ However, in YOLOv5, the formula for predicting the box coordinates has been upda

The revised formulas for calculating the predicted bounding box are as follows:

$$ b_x = (2 \cdot \sigma(t_x) - 0.5) + c_x $$

$$ b_y = (2 \cdot \sigma(t_y) - 0.5) + c_y $$

$$ b_w = p_w \cdot (2 \cdot \sigma(t_w))^2 $$

$$ b_h = p_h \cdot (2 \cdot \sigma(t_h))^2 $$

Compare the center point offset before and after scaling. The center point offset range is adjusted from (0, 1) to (-0.5, 1.5).
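To make the revised decoding concrete, here is a minimal NumPy sketch of the formulas above; the function and variable names (`decode_box`, `tx`, `cx`, `pw`, etc.) are illustrative and not taken from the YOLOv5 source:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    # Center offsets are rescaled from (0, 1) to (-0.5, 1.5) around the grid cell,
    # and width/height growth is bounded by (2*sigmoid)^2 instead of an exponential.
    bx = (2.0 * sigmoid(tx) - 0.5) + cx
    by = (2.0 * sigmoid(ty) - 0.5) + cy
    bw = pw * (2.0 * sigmoid(tw)) ** 2
    bh = ph * (2.0 * sigmoid(th)) ** 2
    return bx, by, bw, bh
```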
@ -221,4 +221,4 @@ This way, the build targets process ensures that each ground truth object is pro

In conclusion, YOLOv5 represents a significant step forward in the development of real-time object detection models. By incorporating various new features, enhancements, and training strategies, it surpasses previous versions of the YOLO family in performance and efficiency.

The primary enhancements in YOLOv5 include the use of a dynamic architecture, an extensive range of data augmentation techniques, innovative training strategies, as well as important adjustments in computing losses and the process of building targets. All these innovations significantly improve the accuracy and efficiency of object detection while retaining a high degree of speed, which is the trademark of YOLO models.
@ -240,4 +240,4 @@ ClearML comes with autoscalers too! This tool will automatically spin up new rem

Check out the autoscalers getting started video below.

[](https://youtu.be/j4XVMAaUt3E)
@ -261,4 +261,4 @@ comet optimizer -j <set number of workers> utils/loggers/comet/hpo.py \

Comet provides a number of ways to visualize the results of your sweep. Take a look at a [project with a completed sweep here](https://www.comet.com/examples/comet-example-yolov5/view/PrlArHGuuhDTKC1UuBmTtOSXD/panels?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github).

<img width="1626" alt="hyperparameter-yolo" src="https://user-images.githubusercontent.com/7529846/186914869-7dc1de14-583f-4323-967b-c9a66a29e495.png">
@ -64,10 +64,10 @@ copy_paste: 0.0 # segment copy-paste (probability)

Fitness is the value we seek to maximize. In YOLOv5 we define a default fitness function as a weighted combination of metrics: `mAP@0.5` contributes 10% of the weight and `mAP@0.5:0.95` contributes the remaining 90%, with [Precision `P` and Recall `R`](https://en.wikipedia.org/wiki/Precision_and_recall) absent. You may adjust these as you see fit or use the default fitness definition in utils/metrics.py (recommended).

```python
def fitness(x):
    # Model fitness as a weighted combination of metrics
    w = [0.0, 0.0, 0.1, 0.9]  # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
    return (x[:, :4] * w).sum(1)
```
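As a quick, illustrative check of how the weights combine (the metric values below are made up), a single row of `[P, R, mAP@0.5, mAP@0.5:0.95]` scores as follows:

```python
import numpy as np

x = np.array([[0.70, 0.60, 0.55, 0.36]])  # hypothetical [P, R, mAP@0.5, mAP@0.5:0.95] row
w = [0.0, 0.0, 0.1, 0.9]
print((x[:, :4] * w).sum(1))  # [0.379] = 0.1 * 0.55 + 0.9 * 0.36
```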

## 3. Evolve
@ -163,4 +163,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ -4,7 +4,7 @@ description: Learn how to ensemble YOLOv5 models for improved mAP and Recall! Cl
keywords: YOLOv5, object detection, ensemble learning, mAP, Recall
---

📚 This guide explains how to use YOLOv5 🚀 **model ensembling** during testing and inference for improved mAP and Recall.
UPDATED 25 September 2022.

From [https://en.wikipedia.org/wiki/Ensemble_learning](https://en.wikipedia.org/wiki/Ensemble_learning):
@ -34,7 +34,7 @@ Output:
val: data=./data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True
YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)

Fusing layers...
Model Summary: 476 layers, 87730285 parameters, 0 gradients

val: Scanning '../datasets/coco/val2017' images and labels...4952 found, 48 missing, 0 empty, 0 corrupted: 100% 5000/5000 [00:01<00:00, 2846.03it/s]
@ -76,9 +76,9 @@ Output:
val: data=./data/coco.yaml, weights=['yolov5x.pt', 'yolov5l6.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True
YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)

Fusing layers...
Model Summary: 476 layers, 87730285 parameters, 0 gradients # Model 1
Fusing layers...
Model Summary: 501 layers, 77218620 parameters, 0 gradients # Model 2
Ensemble created with ['yolov5x.pt', 'yolov5l6.pt'] # Ensemble notice
@ -117,9 +117,9 @@ Output:
detect: weights=['yolov5x.pt', 'yolov5l6.pt'], source=data/images, imgsz=640, conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_width=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)

Fusing layers...
Model Summary: 476 layers, 87730285 parameters, 0 gradients
Fusing layers...
Model Summary: 501 layers, 77218620 parameters, 0 gradients
Ensemble created with ['yolov5x.pt', 'yolov5l6.pt']
@ -144,4 +144,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv5, model export, PyTorch, TorchScript, ONNX, OpenVIN

# TFLite, ONNX, CoreML, TensorRT Export

📚 This guide explains how to export a trained YOLOv5 🚀 model from PyTorch to ONNX and TorchScript formats.
UPDATED 8 December 2022.

## Before You Start
@ -116,7 +116,7 @@ YOLOv5 🚀 v6.2-104-ge3e5122 Python-3.7.13 torch-1.12.1+cu113 CPU
Downloading https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:00<00:00, 274MB/s]

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)
@ -129,8 +129,8 @@ ONNX: export success ✅ 2.3s, saved as yolov5s.onnx (28.0 MB)

Export complete (5.5s)
Results saved to /content/yolov5
Detect: python detect.py --weights yolov5s.onnx
Validate: python val.py --weights yolov5s.onnx
PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.onnx')
Visualize: https://netron.app/
```
@ -243,4 +243,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ -4,7 +4,7 @@ description: Improve YOLOv5 model efficiency by pruning with Ultralytics. Unders
keywords: YOLOv5, YOLO, Ultralytics, model pruning, PyTorch, machine learning, deep learning, computer vision, object detection
---

📚 This guide explains how to apply **pruning** to YOLOv5 🚀 models.
UPDATED 25 September 2022.

## Before You Start
@ -31,7 +31,7 @@ Output:
val: data=/content/yolov5/data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True, dnn=False
YOLOv5 🚀 v6.0-224-g4c40933 torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)

Fusing layers...
Model Summary: 444 layers, 86705005 parameters, 0 gradients
val: Scanning '/content/datasets/coco/val2017.cache' images and labels... 4952 found, 48 missing, 0 empty, 0 corrupt: 100% 5000/5000 [00:00<?, ?it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 157/157 [01:12<00:00, 2.16it/s]
@ -67,7 +67,7 @@ We repeat the above test with a pruned model by using the `torch_utils.prune()`
val: data=/content/yolov5/data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True, dnn=False
YOLOv5 🚀 v6.0-224-g4c40933 torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)

Fusing layers...
Model Summary: 444 layers, 86705005 parameters, 0 gradients
Pruning model... 0.3 global sparsity
val: Scanning '/content/datasets/coco/val2017.cache' images and labels... 4952 found, 48 missing, 0 empty, 0 corrupt: 100% 5000/5000 [00:00<?, ?it/s]
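For readers who want to experiment outside YOLOv5's helper, the block below is a minimal sketch of a 30% L1-unstructured prune using `torch.nn.utils.prune`; it illustrates the general technique and is not the exact implementation behind `torch_utils.prune()`:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_layers(model: nn.Module, amount: float = 0.3) -> None:
    # Zero out the lowest-magnitude weights in every Conv2d layer,
    # then make the induced sparsity permanent.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")
```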
@ -107,4 +107,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ -4,7 +4,7 @@ description: Learn how to train datasets on single or multiple GPUs using YOLOv5
keywords: YOLOv5, multi-GPU Training, YOLOv5 training, deep learning, machine learning, object detection, Ultralytics
---

📚 This guide explains how to properly use **multiple** GPUs to train a dataset with YOLOv5 🚀 on single or multiple machine(s).
UPDATED 25 December 2022.

## Before You Start
@ -136,9 +136,9 @@ cd .. && rm -rf app && git clone https://github.com/ultralytics/yolov5 -b master
cp data/coco.yaml data/coco_profile.yaml

# profile
python train.py --batch-size 16 --data coco_profile.yaml --weights yolov5l.pt --epochs 1 --device 0
python -m torch.distributed.run --nproc_per_node 2 train.py --batch-size 32 --data coco_profile.yaml --weights yolov5l.pt --epochs 1 --device 0,1
python -m torch.distributed.run --nproc_per_node 4 train.py --batch-size 64 --data coco_profile.yaml --weights yolov5l.pt --epochs 1 --device 0,1,2,3
python -m torch.distributed.run --nproc_per_node 8 train.py --batch-size 128 --data coco_profile.yaml --weights yolov5l.pt --epochs 1 --device 0,1,2,3,4,5,6,7
```
@ -188,4 +188,4 @@ If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralyti

## Credits

I would like to thank @MagicFrogSJTU, who did all the heavy lifting, and @glenn-jocher for guiding us along the way.
@ -145,7 +145,7 @@ An example request, using Python's `requests` package:
import requests, json

# list of images for inference (local files on client side)
path = ['basilica.jpg']
files = [('request', open(img, 'rb')) for img in path]

# send request over HTTP to /predict/from_files endpoint
@ -268,4 +268,4 @@ deepsparse.benchmark zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned35

## Get Started With DeepSparse

**Research or Testing?** DeepSparse Community is free for research and testing. Get started with our [Documentation](https://docs.neuralmagic.com/).
@ -4,7 +4,7 @@ description: Detailed guide on loading YOLOv5 from PyTorch Hub. Includes example
keywords: Ultralytics, YOLOv5, PyTorch, loading YOLOv5, PyTorch Hub, inference, multi-GPU inference, training
---

📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub at [https://pytorch.org/hub/ultralytics_yolov5](https://pytorch.org/hub/ultralytics_yolov5).
UPDATED 26 March 2023.

## Before You Start
@ -65,7 +65,7 @@ im2 = cv2.imread('bus.jpg')[..., ::-1] # OpenCV image (BGR to RGB)
results = model([im1, im2], size=640) # batch of images

# Results
results.print()
results.save() # or .show()

results.xyxy[0] # im1 predictions (tensor)
@ -301,7 +301,7 @@ model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', sourc

PyTorch Hub supports inference on most YOLOv5 export formats, including custom trained models. See [TFLite, ONNX, CoreML, TensorRT Export tutorial](https://docs.ultralytics.com/yolov5/tutorials/model_export) for details on exporting models.

💡 ProTip: **TensorRT** may be up to 2-5X faster than PyTorch on [**GPU benchmarks**](https://github.com/ultralytics/yolov5/pull/6963)
💡 ProTip: **ONNX** and **OpenVINO** may be up to 2-3X faster than PyTorch on [**CPU benchmarks**](https://github.com/ultralytics/yolov5/pull/6613)

```python
@ -328,4 +328,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv5, Roboflow, data organization, data labelling, data

# Roboflow Datasets

You can now use Roboflow to organize, label, prepare, version, and host your datasets for training YOLOv5 🚀 models. Roboflow is free to use with YOLOv5 if you make your workspace public.
UPDATED 7 June 2023.

!!! warning
@ -50,4 +50,4 @@ We have released a custom training tutorial demonstrating all of the above capab

The real world is messy and your model will invariably encounter situations your dataset didn't anticipate. Using [active learning](https://blog.roboflow.com/what-is-active-learning/) is an important strategy to iteratively improve your dataset and model. With the Roboflow and YOLOv5 integration, you can quickly make improvements on your model deployments by using a battle tested machine learning pipeline.

<p align=""><a href="https://roboflow.com/?ref=ultralytics"><img width="1000" src="https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/615627e5824c9c6195abfda9_computer-vision-cycle.png"/></a></p>
@ -6,7 +6,7 @@ keywords: TensorRT, NVIDIA Jetson, DeepStream SDK, deployment, Ultralytics, YOLO

# Deploy on NVIDIA Jetson using TensorRT and DeepStream SDK

📚 This guide explains how to deploy a trained model into NVIDIA Jetson Platform and perform inference using TensorRT and DeepStream SDK. Here we use TensorRT to maximize the inference performance on the Jetson platform.
UPDATED 18 November 2022.

## Hardware Verification
@ -117,7 +117,7 @@ pip3 install torch-1.10.0-cp36-cp36m-linux_aarch64.whl
sudo apt install -y libjpeg-dev zlib1g-dev
git clone --branch v0.11.1 https://github.com/pytorch/vision torchvision
cd torchvision
sudo python3 setup.py install
```

Here is a list of the corresponding torchvision version that you need to install according to the PyTorch version:
@ -310,11 +310,11 @@ The following table summarizes how different models perform on **Jetson Xavier N

| Model Name | Precision | Inference Size | Inference Time (ms) | FPS |
|------------|-----------|----------------|---------------------|-----|
| YOLOv5s | FP32 | 320x320 | 16.66 | 60 |
| | FP32 | 640x640 | 33.33 | 30 |
| | INT8 | 640x640 | 16.66 | 60 |
| YOLOv5n | FP32 | 640x640 | 16.66 | 60 |

### Additional

This tutorial is written by our friends at seeed @lakshanthad and Elaine
@ -6,7 +6,7 @@ keywords: YOLOv5, Ultralytics, Test-Time Augmentation, TTA, mAP, Recall, model p

# Test-Time Augmentation (TTA)

📚 This guide explains how to use Test Time Augmentation (TTA) during testing and inference for improved mAP and Recall with YOLOv5 🚀.
UPDATED 25 September 2022.

## Before You Start
@ -33,7 +33,7 @@ Output:
val: data=./data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True
YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)

Fusing layers...
Model Summary: 476 layers, 87730285 parameters, 0 gradients

val: Scanning '../datasets/coco/val2017' images and labels...4952 found, 48 missing, 0 empty, 0 corrupted: 100% 5000/5000 [00:01<00:00, 2846.03it/s]
@ -72,7 +72,7 @@ Output:
val: data=./data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=832, conf_thres=0.001, iou_thres=0.6, task=val, device=, single_cls=False, augment=True, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True
YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)

Fusing layers...
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Model Summary: 476 layers, 87730285 parameters, 0 gradients
@ -115,7 +115,7 @@ YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16
Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:00<00:00, 81.9MB/s]

Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
image 1/2 /content/yolov5/data/images/bus.jpg: 832x640 4 persons, 1 bus, 1 fire hydrant, Done. (0.029s)
image 2/2 /content/yolov5/data/images/zidane.jpg: 480x832 3 persons, 3 ties, Done. (0.024s)
@ -162,4 +162,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ -4,7 +4,7 @@ description: Our comprehensive guide provides insights on how to train your YOLO
keywords: Ultralytics, YOLOv5, Training guide, dataset preparation, model selection, training settings, mAP results, Machine Learning, Object Detection
---

📚 This guide explains how to produce the best mAP and training results with YOLOv5 🚀.
UPDATED 25 May 2022.

Most of the time good results can be obtained with no changes to the models or training settings, **provided your dataset is sufficiently large and well labelled**. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users **first train with all default settings** before considering any changes. This helps establish a performance baseline and spot areas for improvement.
@ -63,4 +63,4 @@ Before modifying anything, **first train with default settings to establish a pe

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: [http://karpathy.github.io/2019/04/25/recipe/](http://karpathy.github.io/2019/04/25/recipe/)

Good luck 🍀 and let us know if you have any other questions!
@ -4,7 +4,7 @@ description: Learn how to train your data on custom datasets using YOLOv5. Simpl
keywords: YOLOv5, train on custom dataset, image collection, model training, object detection, image labelling, Ultralytics, PyTorch, machine learning
---

📚 This guide explains how to train your own **custom dataset** with [YOLOv5](https://github.com/ultralytics/yolov5) 🚀.
UPDATED 7 June 2023.

## Before You Start
@ -152,11 +152,11 @@ python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt

!!! tip "Tip"

    💡 Add `--cache ram` or `--cache disk` to speed up training (requires significant RAM/disk resources).

!!! tip "Tip"

    💡 Always train from a local dataset. Mounted or network drives like Google Drive will be very slow.

All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc. For more details see the Training section of our tutorial notebook. <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
@ -234,4 +234,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ -4,7 +4,7 @@ description: Learn to freeze YOLOv5 layers for efficient transfer learning. Opti
keywords: YOLOv5, freeze layers, transfer learning, model retraining, Ultralytics
---

📚 This guide explains how to **freeze** YOLOv5 🚀 layers when **transfer learning**. Transfer learning is a useful way to quickly retrain a model on new data without having to retrain the entire network. Instead, part of the initial weights are frozen in place, and the rest of the weights are used to compute loss and are updated by the optimizer. This requires fewer resources than normal training and allows for faster training times, though it may also result in reductions to final trained accuracy.
UPDATED 25 September 2022.

## Before You Start
@ -22,13 +22,13 @@ pip install -r requirements.txt # install

All layers that match the `freeze` list in train.py will be frozen by setting their gradients to zero before training starts.

```python
# Freeze
freeze = [f'model.{x}.' for x in range(freeze)]  # layers to freeze
for k, v in model.named_parameters():
    v.requires_grad = True  # train all layers
    if any(x in k for x in freeze):
        print(f'freezing {k}')
        v.requires_grad = False
```

To see a list of module names:
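A minimal sketch for printing them, assuming `model` is an already-loaded YOLOv5 model (for example via PyTorch Hub):

```python
for name, _ in model.named_parameters():
    print(name)  # prints entries such as model.24.m.2.bias
```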
@ -60,43 +60,43 @@ model.24.m.2.bias

Looking at the model architecture we can see that the model backbone is layers 0-9:

```yaml
# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```

so we can define the freeze list to contain all modules with 'model.0.' - 'model.9.' in their names:
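For example, a minimal sketch of such a list, derived from the snippet shown earlier (illustrative only; train.py constructs an equivalent list from its freeze setting):

```python
freeze = [f'model.{x}.' for x in range(10)]  # freeze backbone modules model.0. - model.9.
```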
@ -152,4 +152,4 @@ YOLOv5 is designed to be run in the following up-to-date verified environments (

<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.