diff --git a/.github/workflows/links.yml b/.github/workflows/links.yml
index 2baea63..74c1057 100644
--- a/.github/workflows/links.yml
+++ b/.github/workflows/links.yml
@@ -5,9 +5,9 @@ name: Check Broken links
 on:
   push:
-    branches: [main]
+    branches: [na]
   pull_request:
-    branches: [main]
+    branches: [na]
   workflow_dispatch:
   schedule:
     - cron: '0 0 * * *' # runs at 00:00 UTC every day
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 54a733a..0cc5937 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
       - id: end-of-file-fixer
       - id: trailing-whitespace
       - id: check-case-conflict
-      - id: check-yaml
+      # - id: check-yaml
       - id: check-docstring-first
       - id: double-quote-string-fixer
       - id: detect-private-key
diff --git a/docs/assets/favicon.ico b/docs/assets/favicon.ico
index b71e7ec..7aa5066 100644
Binary files a/docs/assets/favicon.ico and b/docs/assets/favicon.ico differ
diff --git a/docs/index.md b/docs/index.md
index 3fa3b5f..e054ee1 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,74 +1,45 @@
-Welcome to the Ultralytics YOLOv8 documentation landing
-page! [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO (You Only Look
-Once) object detection and image segmentation model developed by [Ultralytics](https://ultralytics.com). This page
-serves as the starting point for exploring the various resources available to help you get started with YOLOv8 and
-understand its features and capabilities.
+Introducing [Ultralytics](https://ultralytics.com) [YOLOv8](https://github.com/ultralytics/ultralytics), the latest version of the acclaimed real-time object detection and image segmentation model. YOLOv8 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy. Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs.
 
-The YOLOv8 model is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of
-object detection and image segmentation tasks. It can be trained on large datasets and is capable of running on a
-variety of hardware platforms, from CPUs to GPUs.
+Explore the YOLOv8 Docs, a comprehensive resource designed to help you understand and utilize its features and capabilities. Whether you are a seasoned machine learning practitioner or new to the field, this hub aims to maximize YOLOv8's potential in your projects.
 
-Whether you are a seasoned machine learning practitioner or new to the field, we hope that the resources on this page
-will help you get the most out of YOLOv8. For any bugs and feature requests please
-visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support
-please [Contact Us](https://ultralytics.com/contact).
+## Where to Start
 
-## A Brief History of YOLO
+- **Install** `ultralytics` with pip and get up and running in minutes [:material-clock-fast: Get Started](quickstart.md){ .md-button }
+- **Predict** new images and videos with YOLOv8 [:octicons-image-16: Predict on Images](modes/predict.md){ .md-button }
+- **Train** a new YOLOv8 model on your own custom dataset [:fontawesome-solid-brain: Train a Model](modes/train.md){ .md-button }
+- **Explore** YOLOv8 tasks like segment, classify, pose, and track [:material-magnify-expand: Explore Tasks](tasks/index.md){ .md-button }
 
-YOLO (You Only Look Once) is a popular object detection and image segmentation model developed by Joseph Redmon and Ali
-Farhadi at the University of Washington. The first version of YOLO was released in 2015 and quickly gained popularity
-due to its high speed and accuracy.
+## YOLO: A Brief History
 
-YOLOv2 was released in 2016 and improved upon the original model by incorporating batch normalization, anchor boxes, and
-dimension clusters. YOLOv3 was released in 2018 and further improved the model's performance by using a more efficient
-backbone network, adding a feature pyramid, and making use of focal loss.
+[YOLO](https://arxiv.org/abs/1506.02640) (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO quickly gained popularity for its high speed and accuracy.
 
-In 2020, YOLOv4 was released which introduced a number of innovations such as the use of Mosaic data augmentation, a new
-anchor-free detection head, and a new loss function.
+- [YOLOv2](https://arxiv.org/abs/1612.08242), released in 2016, improved the original model by incorporating batch normalization, anchor boxes, and dimension clusters.
+- [YOLOv3](https://pjreddie.com/media/files/papers/YOLOv3.pdf), launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors, and spatial pyramid pooling.
+- [YOLOv4](https://arxiv.org/abs/2004.10934) was released in 2020, introducing innovations like Mosaic data augmentation, a new anchor-free detection head, and a new loss function.
+- [YOLOv5](https://github.com/ultralytics/yolov5) further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking, and automatic export to popular formats.
+- [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by Meituan in 2022 and is in use in many of the company's autonomous delivery robots.
+- [YOLOv7](https://github.com/WongKinYiu/yolov7) added additional tasks such as pose estimation on the COCO keypoints dataset.
 
-In 2021, Ultralytics released [YOLOv5](https://github.com/ultralytics/yolov5), which further improved the model's
-performance and added new features such as support for panoptic segmentation and object tracking.
-
-YOLO has been widely used in a variety of applications, including autonomous vehicles, security and surveillance, and
-medical imaging. It has also been used to win several competitions, such as the COCO Object Detection Challenge and the
-DOTA Object Detection Challenge.
-
-For more information about the history and development of YOLO, you can refer to the following references:
-
-- Redmon, J., & Farhadi, A. (2015). You only look once: Unified, real-time object detection. In Proceedings of the IEEE
-  conference on computer vision and pattern recognition (pp. 779-788).
-- Redmon, J., & Farhadi, A. (2016). YOLO9000: Better, faster, stronger. In Proceedings
+Since its launch, YOLO has been employed in various applications, including autonomous vehicles, security and surveillance, and medical imaging, and has won several competitions like the COCO Object Detection Challenge and the DOTA Object Detection Challenge.
 
 ## Ultralytics YOLOv8
 
-[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and
-image segmentation model developed by Ultralytics. YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds
-upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and
-flexibility.
-
-One key feature of YOLOv8 is its extensibility. It is designed as a framework that supports all previous versions of
-YOLO, making it easy to switch between different versions and compare their performance. This makes YOLOv8 an ideal
-choice for users who want to take advantage of the latest YOLO technology while still being able to use their existing
-YOLO models.
+[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency.
 
-In addition to its extensibility, YOLOv8 includes a number of other innovations that make it an appealing choice for a
-wide range of object detection and image segmentation tasks. These include a new backbone network, a new anchor-free
-detection head, and a new loss function. YOLOv8 is also highly efficient and can be run on a variety of hardware
-platforms, from CPUs to GPUs.
+YOLOv8 is designed with a strong focus on speed, size, and accuracy, making it a compelling choice for various vision AI tasks. It outperforms previous versions by incorporating innovations like a new backbone network, a new anchor-free split head, and new loss functions. These improvements enable YOLOv8 to deliver superior results while maintaining a compact size and exceptional speed.
 
-Overall, YOLOv8 is a powerful and flexible tool for object detection and image segmentation that offers the best of both
-worlds: the latest SOTA technology and the ability to use and compare all previous YOLO versions.
+Additionally, YOLOv8 supports a full range of vision AI tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/keypoints.md), [tracking](modes/track.md), and [classification](tasks/classify.md). This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains.
diff --git a/docs/modes/index.md b/docs/modes/index.md
index 14e2d85..ffa544a 100644
--- a/docs/modes/index.md
+++ b/docs/modes/index.md
@@ -1,4 +1,4 @@
-# YOLOv8 Modes
+# Ultralytics YOLOv8 Modes
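The "Where to Start" flow added in `docs/index.md` (install `ultralytics` with pip, then predict and train) can be sketched in a few lines. This is a minimal sketch, not the docs' own example: it assumes the `ultralytics` pip package is installed and that network access is available to download the `yolov8n.pt` weights and the `coco128.yaml` sample dataset, so the demo lines are gated behind a flag.

```python
# Quickstart sketch for the `ultralytics` package (assumption: `pip install ultralytics`).
# RUN_DEMO stays False by default because the calls below download model weights
# and a sample dataset on first use.
RUN_DEMO = False

if RUN_DEMO:
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # load a pretrained YOLOv8-nano detection model
    results = model.predict("https://ultralytics.com/images/bus.jpg")  # inference on an image
    model.train(data="coco128.yaml", epochs=3)  # fine-tune on the coco128 sample dataset
    model.export(format="onnx")  # export to a deployment format
```

The same flow is available from the command line via the `yolo` CLI that the package installs.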