ultralytics 8.0.151 add DOTAv2.yaml for OBB training (#4258)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Kayzwer <68285002+Kayzwer@users.noreply.github.com>
@@ -39,7 +39,7 @@ To train a YOLO model on the Caltech-101 dataset for 100 epochs, you can use the
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='caltech101', epochs=100, imgsz=416)
+results = model.train(data='caltech101', epochs=100, imgsz=416)
 ```
 
 === "CLI"
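The recurring change in these hunks assigns the return value of `model.train(...)` to `results` instead of discarding it. A minimal sketch of why that matters, using a hypothetical stand-in for the trainer (the class, function, and metric names below are illustrative, not the ultralytics API):

```python
# Hypothetical stand-in for a trainer that returns a metrics object,
# illustrating the `results = model.train(...)` pattern from the diff.
class TrainResults:
    """Container for metrics produced by a training run."""

    def __init__(self, top1: float, top5: float) -> None:
        self.top1 = top1  # top-1 accuracy
        self.top5 = top5  # top-5 accuracy


def train(data: str, epochs: int, imgsz: int) -> TrainResults:
    # A real trainer would fit the model here; we return fixed metrics.
    return TrainResults(top1=0.91, top5=0.99)


# Old style: the metrics object is created and immediately discarded.
train(data='caltech101', epochs=100, imgsz=416)

# New style: capture the metrics for logging, comparison, or assertions.
results = train(data='caltech101', epochs=100, imgsz=416)
print(f"top-1 accuracy: {results.top1}")  # → top-1 accuracy: 0.91
```

Capturing the return value costs nothing when unused, and makes the documented snippets directly extensible with post-training inspection.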
@@ -61,17 +61,21 @@ The example showcases the variety and complexity of the objects in the Caltech-1
 
 If you use the Caltech-101 dataset in your research or development work, please cite the following paper:
 
-```bibtex
-@article{fei2007learning,
-title={Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories},
-author={Fei-Fei, Li and Fergus, Rob and Perona, Pietro},
-journal={Computer vision and Image understanding},
-volume={106},
-number={1},
-pages={59--70},
-year={2007},
-publisher={Elsevier}
-}
-```
+!!! note ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @article{fei2007learning,
+        title={Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories},
+        author={Fei-Fei, Li and Fergus, Rob and Perona, Pietro},
+        journal={Computer vision and Image understanding},
+        volume={106},
+        number={1},
+        pages={59--70},
+        year={2007},
+        publisher={Elsevier}
+        }
+        ```
 
 We would like to acknowledge Li Fei-Fei, Rob Fergus, and Pietro Perona for creating and maintaining the Caltech-101 dataset as a valuable resource for the machine learning and computer vision research community. For more information about the Caltech-101 dataset and its creators, visit the [Caltech-101 dataset website](https://data.caltech.edu/records/mzrjq-6wc02).
@@ -39,7 +39,7 @@ To train a YOLO model on the Caltech-256 dataset for 100 epochs, you can use the
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='caltech256', epochs=100, imgsz=416)
+results = model.train(data='caltech256', epochs=100, imgsz=416)
 ```
 
 === "CLI"
@@ -61,13 +61,17 @@ The example showcases the diversity and complexity of the objects in the Caltech
 
 If you use the Caltech-256 dataset in your research or development work, please cite the following paper:
 
-```bibtex
-@article{griffin2007caltech,
-title={Caltech-256 object category dataset},
-author={Griffin, Gregory and Holub, Alex and Perona, Pietro},
-year={2007}
-}
-```
+!!! note ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @article{griffin2007caltech,
+        title={Caltech-256 object category dataset},
+        author={Griffin, Gregory and Holub, Alex and Perona, Pietro},
+        year={2007}
+        }
+        ```
 
 We would like to acknowledge Gregory Griffin, Alex Holub, and Pietro Perona for creating and maintaining the Caltech-256 dataset as a valuable resource for the machine learning and computer vision research community. For more information about the
@@ -42,7 +42,7 @@ To train a YOLO model on the CIFAR-10 dataset for 100 epochs with an image size
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='cifar10', epochs=100, imgsz=32)
+results = model.train(data='cifar10', epochs=100, imgsz=32)
 ```
 
 === "CLI"
@@ -64,13 +64,17 @@ The example showcases the variety and complexity of the objects in the CIFAR-10
 
 If you use the CIFAR-10 dataset in your research or development work, please cite the following paper:
 
-```bibtex
-@TECHREPORT{Krizhevsky09learningmultiple,
-author={Alex Krizhevsky},
-title={Learning multiple layers of features from tiny images},
-institution={},
-year={2009}
-}
-```
+!!! note ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @TECHREPORT{Krizhevsky09learningmultiple,
+        author={Alex Krizhevsky},
+        title={Learning multiple layers of features from tiny images},
+        institution={},
+        year={2009}
+        }
+        ```
 
 We would like to acknowledge Alex Krizhevsky for creating and maintaining the CIFAR-10 dataset as a valuable resource for the machine learning and computer vision research community. For more information about the CIFAR-10 dataset and its creator, visit the [CIFAR-10 dataset website](https://www.cs.toronto.edu/~kriz/cifar.html).
@@ -42,7 +42,7 @@ To train a YOLO model on the CIFAR-100 dataset for 100 epochs with an image size
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='cifar100', epochs=100, imgsz=32)
+results = model.train(data='cifar100', epochs=100, imgsz=32)
 ```
 
 === "CLI"
@@ -64,13 +64,17 @@ The example showcases the variety and complexity of the objects in the CIFAR-100
 
 If you use the CIFAR-100 dataset in your research or development work, please cite the following paper:
 
-```bibtex
-@TECHREPORT{Krizhevsky09learningmultiple,
-author={Alex Krizhevsky},
-title={Learning multiple layers of features from tiny images},
-institution={},
-year={2009}
-}
-```
+!!! note ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @TECHREPORT{Krizhevsky09learningmultiple,
+        author={Alex Krizhevsky},
+        title={Learning multiple layers of features from tiny images},
+        institution={},
+        year={2009}
+        }
+        ```
 
 We would like to acknowledge Alex Krizhevsky for creating and maintaining the CIFAR-100 dataset as a valuable resource for the machine learning and computer vision research community. For more information about the CIFAR-100 dataset and its creator, visit the [CIFAR-100 dataset website](https://www.cs.toronto.edu/~kriz/cifar.html).
@@ -56,7 +56,7 @@ To train a CNN model on the Fashion-MNIST dataset for 100 epochs with an image s
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='fashion-mnist', epochs=100, imgsz=28)
+results = model.train(data='fashion-mnist', epochs=100, imgsz=28)
 ```
 
 === "CLI"
@@ -42,7 +42,7 @@ To train a deep learning model on the ImageNet dataset for 100 epochs with an im
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='imagenet', epochs=100, imgsz=224)
+results = model.train(data='imagenet', epochs=100, imgsz=224)
 ```
 
 === "CLI"
@@ -64,16 +64,20 @@ The example showcases the variety and complexity of the images in the ImageNet d
 
 If you use the ImageNet dataset in your research or development work, please cite the following paper:
 
-```bibtex
-@article{ILSVRC15,
-author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
-title={ImageNet Large Scale Visual Recognition Challenge},
-year={2015},
-journal={International Journal of Computer Vision (IJCV)},
-volume={115},
-number={3},
-pages={211-252}
-}
-```
+!!! note ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @article{ILSVRC15,
+        author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
+        title={ImageNet Large Scale Visual Recognition Challenge},
+        year={2015},
+        journal={International Journal of Computer Vision (IJCV)},
+        volume={115},
+        number={3},
+        pages={211-252}
+        }
+        ```
 
 We would like to acknowledge the ImageNet team, led by Olga Russakovsky, Jia Deng, and Li Fei-Fei, for creating and maintaining the ImageNet dataset as a valuable resource for the machine learning and computer vision research community. For more information about the ImageNet dataset and its creators, visit the [ImageNet website](https://www.image-net.org/).
@@ -38,7 +38,7 @@ To test a deep learning model on the ImageNet10 dataset with an image size of 22
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='imagenet10', epochs=5, imgsz=224)
+results = model.train(data='imagenet10', epochs=5, imgsz=224)
 ```
 
 === "CLI"
@@ -59,16 +59,20 @@ The example showcases the variety and complexity of the images in the ImageNet10
 
 If you use the ImageNet10 dataset in your research or development work, please cite the original ImageNet paper:
 
-```bibtex
-@article{ILSVRC15,
-author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
-title={ImageNet Large Scale Visual Recognition Challenge},
-year={2015},
-journal={International Journal of Computer Vision (IJCV)},
-volume={115},
-number={3},
-pages={211-252}
-}
-```
+!!! note ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @article{ILSVRC15,
+        author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
+        title={ImageNet Large Scale Visual Recognition Challenge},
+        year={2015},
+        journal={International Journal of Computer Vision (IJCV)},
+        volume={115},
+        number={3},
+        pages={211-252}
+        }
+        ```
 
 We would like to acknowledge the ImageNet team, led by Olga Russakovsky, Jia Deng, and Li Fei-Fei, for creating and maintaining the ImageNet dataset. The ImageNet10 dataset, while a compact subset, is a valuable resource for quick testing and debugging in the machine learning and computer vision research community. For more information about the ImageNet dataset and its creators, visit the [ImageNet website](https://www.image-net.org/).
@@ -40,7 +40,7 @@ To train a model on the ImageNette dataset for 100 epochs with a standard image
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='imagenette', epochs=100, imgsz=224)
+results = model.train(data='imagenette', epochs=100, imgsz=224)
 ```
 
 === "CLI"
@@ -75,7 +75,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model with ImageNette160
-model.train(data='imagenette160', epochs=100, imgsz=160)
+results = model.train(data='imagenette160', epochs=100, imgsz=160)
 ```
 
 === "CLI"
@@ -96,7 +96,7 @@ To use these datasets, simply replace 'imagenette' with 'imagenette160' or 'imag
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model with ImageNette320
-model.train(data='imagenette320', epochs=100, imgsz=320)
+results = model.train(data='imagenette320', epochs=100, imgsz=320)
 ```
 
 === "CLI"
@@ -37,7 +37,7 @@ To train a CNN model on the ImageWoof dataset for 100 epochs with an image size
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='imagewoof', epochs=100, imgsz=224)
+results = model.train(data='imagewoof', epochs=100, imgsz=224)
 ```
 
 === "CLI"
@@ -79,6 +79,6 @@ The example showcases the subtle differences and similarities among the differen
 
 ## Citations and Acknowledgments
 
-If you use the ImageWoof dataset in your research or development work, please make sure to acknowledge the creators of the dataset by linking to the [official dataset repository](https://github.com/fastai/imagenette). As of my knowledge cutoff in September 2021, there is no official publication specifically about ImageWoof for citation.
+If you use the ImageWoof dataset in your research or development work, please make sure to acknowledge the creators of the dataset by linking to the [official dataset repository](https://github.com/fastai/imagenette).
 
 We would like to acknowledge the FastAI team for creating and maintaining the ImageWoof dataset as a valuable resource for the machine learning and computer vision research community. For more information about the ImageWoof dataset, visit the [ImageWoof dataset repository](https://github.com/fastai/imagenette).
@@ -91,7 +91,7 @@ In this example, the `train` directory contains subdirectories for each class in
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='path/to/dataset', epochs=100, imgsz=640)
+results = model.train(data='path/to/dataset', epochs=100, imgsz=640)
 ```
 
 === "CLI"
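The `data='path/to/dataset'` hunk above relies on the folder-per-class layout that the hunk's context describes (`train` containing one subdirectory per class). A minimal sketch of building such a layout, with hypothetical class names and a temporary directory standing in for `path/to/dataset`:

```python
import os
import tempfile

# Hypothetical folder-per-class layout for a custom classification dataset:
# each split directory contains one subdirectory per class.
root = tempfile.mkdtemp()  # stands in for 'path/to/dataset'
classes = ['cat', 'dog']
for split in ('train', 'test'):
    for cls in classes:
        os.makedirs(os.path.join(root, split, cls), exist_ok=True)

# Resulting tree: <root>/train/cat, <root>/train/dog, <root>/test/cat, ...
print(sorted(os.listdir(os.path.join(root, 'train'))))  # → ['cat', 'dog']
```

Images placed in those leaf directories are labeled by their parent directory name, which is what makes the `data='path/to/dataset'` argument in the snippet above sufficient on its own.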
@@ -45,7 +45,7 @@ To train a CNN model on the MNIST dataset for 100 epochs with an image size of 3
 model = YOLO('yolov8n-cls.pt')  # load a pretrained model (recommended for training)
 
 # Train the model
-model.train(data='mnist', epochs=100, imgsz=32)
+results = model.train(data='mnist', epochs=100, imgsz=32)
 ```
 
 === "CLI"
@@ -69,14 +69,18 @@ If you use the MNIST dataset in your
 
 research or development work, please cite the following paper:
 
-```bibtex
-@article{lecun2010mnist,
-title={MNIST handwritten digit database},
-author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
-journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
-volume={2},
-year={2010}
-}
-```
+!!! note ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @article{lecun2010mnist,
+        title={MNIST handwritten digit database},
+        author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
+        journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
+        volume={2},
+        year={2010}
+        }
+        ```
 
 We would like to acknowledge Yann LeCun, Corinna Cortes, and Christopher J.C. Burges for creating and maintaining the MNIST dataset as a valuable resource for the machine learning and computer vision research community. For more information about the MNIST dataset and its creators, visit the [MNIST dataset website](http://yann.lecun.com/exdb/mnist/).