ultralytics 8.0.141 create new SettingsManager (#3790)

Glenn Jocher
2023-07-23 16:03:34 +02:00
committed by GitHub
parent 42afe772d5
commit 20f5efd40a
215 changed files with 917 additions and 749 deletions


@@ -166,4 +166,4 @@ We would like to acknowledge the FastSAM authors for their significant contribut
}
```
The original FastSAM paper can be found on [arXiv](https://arxiv.org/abs/2306.12156). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/CASIA-IVA-Lab/FastSAM). We appreciate their efforts in advancing the field and making their work accessible to the broader community.


@@ -45,4 +45,4 @@ model.info() # display model information
model.train(data="coco128.yaml", epochs=100) # train the model
```
For more details on each model, their supported tasks, modes, and performance, please visit their respective documentation pages linked above.
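For quick reference, a minimal end-to-end sketch of the workflow this snippet belongs to; the `yolov8n.pt` weights file and the image path are illustrative assumptions, and any supported model can be substituted:

```python
from ultralytics import YOLO

# Load a pretrained model (weights name assumed for illustration)
model = YOLO('yolov8n.pt')

model.info()  # display model information
model.train(data='coco128.yaml', epochs=100)  # train the model
results = model('path/to/image.jpg')  # run inference on an image
```

Any of the models documented on the linked pages can be loaded the same way by swapping in their weights file.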


@@ -96,4 +96,4 @@ If you find MobileSAM useful in your research or development work, please consid
journal={arXiv preprint arXiv:2306.14289},
year={2023}
}
```


@@ -71,4 +71,4 @@ If you use Baidu's RT-DETR in your research or development work, please cite the
We would like to acknowledge Baidu and the [PaddlePaddle](https://github.com/PaddlePaddle/PaddleDetection) team for creating and maintaining this valuable resource for the computer vision community. Their contribution to the field with the development of the Vision Transformers-based real-time object detector, RT-DETR, is greatly appreciated.
*Keywords: RT-DETR, Transformer, ViT, Vision Transformers, Baidu RT-DETR, PaddlePaddle, Paddle Paddle RT-DETR, real-time object detection, Vision Transformers-based object detection, pre-trained PaddlePaddle RT-DETR models, Baidu's RT-DETR usage, Ultralytics Python API*


@@ -37,10 +37,10 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
Segment image with given prompts.
=== "Python"
```python
from ultralytics import SAM
# Load a model
model = SAM('sam_b.pt')
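# Hedged sketch of how the prompted calls continue; the bboxes/points/labels
# keyword arguments are assumptions based on the Ultralytics SAM prediction API.

# Display model information
model.info()

# Run inference with a bounding-box prompt (x1, y1, x2, y2)
model('path/to/image.jpg', bboxes=[439, 437, 524, 709])

# Run inference with a point prompt and its foreground label
model('path/to/image.jpg', points=[900, 370], labels=[1])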
@@ -59,10 +59,10 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
Segment the whole image.
=== "Python"
```python
from ultralytics import SAM
# Load a model
model = SAM('sam_b.pt')
@@ -73,7 +73,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
model('path/to/image.jpg')
```
=== "CLI"
```bash
# Run inference with a SAM model
yolo predict model=sam_b.pt source=path/to/image.jpg
@@ -86,7 +86,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
This way you can set the image once and run prompt inference multiple times without re-running the image encoder each time, as sketched in the continuation below.
=== "Prompt inference"
```python
from ultralytics.models.sam import Predictor as SAMPredictor
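# Hedged sketch of the set-image-once pattern described above; the overrides
# dict and the set_image/reset_image helpers are assumptions based on the
# Ultralytics SAM Predictor API.

# Create SAMPredictor
overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model='mobile_sam.pt')
predictor = SAMPredictor(overrides=overrides)

# Set the image once (runs the image encoder a single time)
predictor.set_image('path/to/image.jpg')

# Run several prompt inferences against the cached image features
results_box = predictor(bboxes=[439, 437, 524, 709])
results_point = predictor(points=[900, 370], labels=[1])

# Reset the cached image before moving to a new one
predictor.reset_image()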
@@ -106,7 +106,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
Segment everything with additional args.
=== "Segment everything"
```python
from ultralytics.models.sam import Predictor as SAMPredictor
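# Hedged sketch of "segment everything" with extra arguments; crop_n_layers and
# points_stride are assumptions based on the Ultralytics SAM Predictor API.

# Create SAMPredictor
overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model='mobile_sam.pt')
predictor = SAMPredictor(overrides=overrides)

# Segment the whole image with an extra crop layer and denser point sampling
results = predictor(source='path/to/image.jpg', crop_n_layers=1, points_stride=64)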
@@ -207,7 +207,7 @@ If you find SAM useful in your research or development work, please consider cit
```bibtex
@misc{kirillov2023segment,
title={Segment Anything},
author={Alexander Kirillov and Eric Mintun and Nikhila Ravi and Hanzi Mao and Chloe Rolland and Laura Gustafson and Tete Xiao and Spencer Whitehead and Alexander C. Berg and Wan-Yen Lo and Piotr Dollár and Ross Girshick},
year={2023},
eprint={2304.02643},
@@ -218,4 +218,4 @@ If you find SAM useful in your research or development work, please consider cit
We would like to express our gratitude to Meta AI for creating and maintaining this valuable resource for the computer vision community.
*keywords: Segment Anything, Segment Anything Model, SAM, Meta SAM, image segmentation, promptable segmentation, zero-shot performance, SA-1B dataset, advanced architecture, auto-annotation, Ultralytics, pre-trained models, SAM base, SAM large, instance segmentation, computer vision, AI, artificial intelligence, machine learning, data annotation, segmentation masks, detection model, YOLO detection model, bibtex, Meta AI.*


@@ -106,4 +106,4 @@ If you employ YOLO-NAS in your research or development work, please cite SuperGr
We express our gratitude to Deci AI's [SuperGradients](https://github.com/Deci-AI/super-gradients/) team for their efforts in creating and maintaining this valuable resource for the computer vision community. We believe YOLO-NAS, with its innovative architecture and superior object detection capabilities, will become a critical tool for developers and researchers alike.
*Keywords: YOLO-NAS, Deci AI, object detection, deep learning, neural architecture search, Ultralytics Python API, YOLO model, SuperGradients, pre-trained models, quantization-friendly basic block, advanced training schemes, post-training quantization, AutoNAC optimization, COCO, Objects365, Roboflow 100*


@@ -77,4 +77,4 @@ If you use YOLOv3 in your research, please cite the original YOLO papers and the
}
```
Thank you to Joseph Redmon and Ali Farhadi for developing the original YOLOv3.


@@ -55,7 +55,7 @@ We would like to acknowledge the YOLOv4 authors for their significant contributi
```bibtex
@misc{bochkovskiy2020yolov4,
title={YOLOv4: Optimal Speed and Accuracy of Object Detection},
author={Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao},
year={2020},
eprint={2004.10934},
@@ -64,4 +64,4 @@ We would like to acknowledge the YOLOv4 authors for their significant contributi
}
```
The original YOLOv4 paper can be found on [arXiv](https://arxiv.org/pdf/2004.10934.pdf). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/AlexeyAB/darknet). We appreciate their efforts in advancing the field and making their work accessible to the broader community.


@@ -86,4 +86,4 @@ If you use YOLOv5 or YOLOv5u in your research, please cite the Ultralytics YOLOv
}
```
Special thanks to Glenn Jocher and the Ultralytics team for their work on developing and maintaining the YOLOv5 and YOLOv5u models.


@@ -70,7 +70,7 @@ We would like to acknowledge the authors for their significant contributions in
```bibtex
@misc{li2023yolov6,
title={YOLOv6 v3.0: A Full-Scale Reloading},
author={Chuyi Li and Lulu Li and Yifei Geng and Hongliang Jiang and Meng Cheng and Bo Zhang and Zaidan Ke and Xiaoming Xu and Xiangxiang Chu},
year={2023},
eprint={2301.05586},
@@ -79,4 +79,4 @@ We would like to acknowledge the authors for their significant contributions in
}
```
The original YOLOv6 paper can be found on [arXiv](https://arxiv.org/abs/2301.05586). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/meituan/YOLOv6). We appreciate their efforts in advancing the field and making their work accessible to the broader community.


@@ -58,4 +58,4 @@ We would like to acknowledge the YOLOv7 authors for their significant contributi
}
```
The original YOLOv7 paper can be found on [arXiv](https://arxiv.org/pdf/2207.02696.pdf). The authors have made their work publicly available, and the codebase can be accessed on [GitHub](https://github.com/WongKinYiu/yolov7). We appreciate their efforts in advancing the field and making their work accessible to the broader community.


@@ -112,4 +112,4 @@ If you use the YOLOv8 model or any other software from this repository in your w
}
```
Please note that the DOI is pending and will be added to the citation once it is available. The usage of the software is in accordance with the AGPL-3.0 license.