ultralytics 8.0.141
create new SettingsManager (#3790)
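This commit introduces a new `SettingsManager`. As a minimal sketch of how the reworked settings are typically used, assuming the manager is exposed as `ultralytics.settings` with dict-style reads plus `update()` and `reset()` helpers (verify against the 8.0.141 release notes; none of this is shown in the diff below):

```python
# Hedged sketch of the new SettingsManager usage; the `settings` import,
# dict-style access, update() and reset() are assumptions based on the
# documented API around this release, not confirmed by this diff.
from ultralytics import settings

# Inspect all current settings
print(settings)

# Read a single setting, e.g. the directory where runs are saved
runs_dir = settings['runs_dir']

# Update one or more settings (persisted to the settings file)
settings.update({'runs_dir': '/path/to/runs'})

# Restore all settings to their defaults
settings.reset()
```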
@@ -47,10 +47,10 @@ To train a YOLOv8n model on the Argoverse dataset for 100 epochs with an image s

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='Argoverse.yaml', epochs=100, imgsz=640)
```

@@ -86,4 +86,4 @@ If you use the Argoverse dataset in your research or development work, please ci

}
```

We would like to acknowledge Argo AI for creating and maintaining the Argoverse dataset as a valuable resource for the autonomous driving research community. For more information about the Argoverse dataset and its creators, visit the [Argoverse dataset website](https://www.argoverse.org/).
@@ -47,10 +47,10 @@ To train a YOLOv8n model on the COCO dataset for 100 epochs with an image size o

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='coco.yaml', epochs=100, imgsz=640)
```

@@ -78,7 +78,7 @@ If you use the COCO dataset in your research or development work, please cite th

```bibtex
@misc{lin2015microsoft,
      title={Microsoft COCO: Common Objects in Context},
      author={Tsung-Yi Lin and Michael Maire and Serge Belongie and Lubomir Bourdev and Ross Girshick and James Hays and Pietro Perona and Deva Ramanan and C. Lawrence Zitnick and Piotr Dollár},
      year={2015},
      eprint={1405.0312},

@@ -87,4 +87,4 @@ If you use the COCO dataset in your research or development work, please cite th

}
```

We would like to acknowledge the COCO Consortium for creating and maintaining this valuable resource for the computer vision community. For more information about the COCO dataset and its creators, visit the [COCO dataset website](https://cocodataset.org/#home).
@@ -37,10 +37,10 @@ To train a YOLOv8n model on the COCO8 dataset for 100 epochs with an image size

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='coco8.yaml', epochs=100, imgsz=640)
```

@@ -68,7 +68,7 @@ If you use the COCO dataset in your research or development work, please cite th

```bibtex
@misc{lin2015microsoft,
      title={Microsoft COCO: Common Objects in Context},
      author={Tsung-Yi Lin and Michael Maire and Serge Belongie and Lubomir Bourdev and Ross Girshick and James Hays and Pietro Perona and Deva Ramanan and C. Lawrence Zitnick and Piotr Dollár},
      year={2015},
      eprint={1405.0312},

@@ -77,4 +77,4 @@ If you use the COCO dataset in your research or development work, please cite th

}
```

We would like to acknowledge the COCO Consortium for creating and maintaining this valuable resource for the computer vision community. For more information about the COCO dataset and its creators, visit the [COCO dataset website](https://cocodataset.org/#home).
@@ -46,10 +46,10 @@ To train a YOLOv8n model on the Global Wheat Head Dataset for 100 epochs with an

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='GlobalWheat2020.yaml', epochs=100, imgsz=640)
```

@@ -84,4 +84,4 @@ If you use the Global Wheat Head Dataset in your research or development work, p

}
```

We would like to acknowledge the researchers and institutions that contributed to the creation and maintenance of the Global Wheat Head Dataset as a valuable resource for the plant phenotyping and crop management research community. For more information about the dataset and its creators, visit the [Global Wheat Head Dataset website](http://www.global-wheat.com/).
@@ -51,10 +51,10 @@ Here's how you can use these formats to train your model:

!!! example ""

=== "Python"

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

@@ -62,7 +62,7 @@ Here's how you can use these formats to train your model:

model.train(data='coco128.yaml', epochs=100, imgsz=640)
```

=== "CLI"

```bash
# Start training from a pretrained *.pt model
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640

@@ -100,4 +100,4 @@ convert_coco(labels_dir='../coco/annotations/')

This conversion tool can be used to convert the COCO dataset or any dataset in the COCO format to the Ultralytics YOLO format, as sketched below.

Remember to double-check if the dataset you want to use is compatible with your model and follows the necessary format conventions. Properly formatted datasets are crucial for training successful object detection models.
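The `convert_coco` call shown in the hunk context above takes a directory of COCO JSON annotation files and writes YOLO-format label files. A minimal, self-contained sketch, assuming the converter is importable from `ultralytics.data.converter` (the import path varies across ultralytics versions and is not shown in this diff):

```python
# Sketch of the COCO-to-YOLO label conversion described above; the import
# path is an assumption and may differ in older ultralytics releases.
from ultralytics.data.converter import convert_coco

# Convert the COCO JSON annotations found in labels_dir into
# YOLO-format *.txt label files
convert_coco(labels_dir='../coco/annotations/')
```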
@@ -46,10 +46,10 @@ To train a YOLOv8n model on the Objects365 dataset for 100 epochs with an image

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='Objects365.yaml', epochs=100, imgsz=640)
```

@@ -85,4 +85,4 @@ If you use the Objects365 dataset in your research or development work, please c

}
```

We would like to acknowledge the team of researchers who created and maintain the Objects365 dataset as a valuable resource for the computer vision research community. For more information about the Objects365 dataset and its creators, visit the [Objects365 dataset website](https://www.objects365.org/).
@@ -48,10 +48,10 @@ To train a YOLOv8n model on the SKU-110K dataset for 100 epochs with an image si

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='SKU-110K.yaml', epochs=100, imgsz=640)
```

@@ -86,4 +86,4 @@ If you use the SKU-110k dataset in your research or development work, please cit

}
```

We would like to acknowledge Eran Goldman et al. for creating and maintaining the SKU-110k dataset as a valuable resource for the computer vision research community. For more information about the SKU-110k dataset and its creators, visit the [SKU-110k dataset GitHub repository](https://github.com/eg4000/SKU110K_CVPR19).
@@ -44,10 +44,10 @@ To train a YOLOv8n model on the VisDrone dataset for 100 epochs with an image si

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='VisDrone.yaml', epochs=100, imgsz=640)
```

@@ -76,8 +76,8 @@ If you use the VisDrone dataset in your research or development work, please cit

```bibtex
@ARTICLE{9573394,
  author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Detection and Tracking Meet Drones Challenge},
  year={2021},
  volume={},
  number={},

@@ -85,4 +85,4 @@ If you use the VisDrone dataset in your research or development work, please cit

doi={10.1109/TPAMI.2021.3119563}}
```

We would like to acknowledge the AISKYEYE team at the Lab of Machine Learning and Data Mining, Tianjin University, China, for creating and maintaining the VisDrone dataset as a valuable resource for the drone-based computer vision research community. For more information about the VisDrone dataset and its creators, visit the [VisDrone Dataset GitHub repository](https://github.com/VisDrone/VisDrone-Dataset).
@@ -47,10 +47,10 @@ To train a YOLOv8n model on the VOC dataset for 100 epochs with an image size of

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='VOC.yaml', epochs=100, imgsz=640)
```

@@ -79,7 +79,7 @@ If you use the VOC dataset in your research or development work, please cite the

```bibtex
@misc{everingham2010pascal,
      title={The PASCAL Visual Object Classes (VOC) Challenge},
      author={Mark Everingham and Luc Van Gool and Christopher K. I. Williams and John Winn and Andrew Zisserman},
      year={2010},
      eprint={0909.5206},

@@ -88,4 +88,4 @@ If you use the VOC dataset in your research or development work, please cite the

}
```

We would like to acknowledge the PASCAL VOC Consortium for creating and maintaining this valuable resource for the computer vision community. For more information about the VOC dataset and its creators, visit the [PASCAL VOC dataset website](http://host.robots.ox.ac.uk/pascal/VOC/).
@@ -50,10 +50,10 @@ To train a model on the xView dataset for 100 epochs with an image size of 640,

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='xView.yaml', epochs=100, imgsz=640)
```

@@ -81,7 +81,7 @@ If you use the xView dataset in your research or development work, please cite t

```bibtex
@misc{lam2018xview,
      title={xView: Objects in Context in Overhead Imagery},
      author={Darius Lam and Richard Kuzma and Kevin McGee and Samuel Dooley and Michael Laielli and Matthew Klaric and Yaroslav Bulatov and Brendan McCord},
      year={2018},
      eprint={1802.07856},

@@ -90,4 +90,4 @@ If you use the xView dataset in your research or development work, please cite t

}
```

We would like to acknowledge the [Defense Innovation Unit](https://www.diu.mil/) (DIU) and the creators of the xView dataset for their valuable contribution to the computer vision research community. For more information about the xView dataset and its creators, visit the [xView dataset website](http://xviewdataset.org/).