Update SAM docs page (#3672)

@@ -30,12 +30,29 @@ For an in-depth look at the Segment Anything Model and the SA-1B dataset, please
The Segment Anything Model can be employed for a multitude of downstream tasks that go beyond its training data. This includes edge detection, object proposal generation, instance segmentation, and preliminary text-to-mask prediction. With prompt engineering, SAM can swiftly adapt to new tasks and data distributions in a zero-shot manner, establishing it as a versatile and potent tool for all your image segmentation needs.
!!! example "SAM prediction example"

    Device is determined automatically. If a GPU is available then it will be used, otherwise inference will run on CPU.

    === "Python"

        ```python
        from ultralytics import SAM

        # Load a model
        model = SAM('sam_b.pt')

        # Display model information (optional)
        model.info()

        # Run inference with the model
        model('path/to/image.jpg')
        ```

    === "CLI"

        ```bash
        # Run inference with a SAM model
        yolo predict model=sam_b.pt source=path/to/image.jpg
        ```
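SAM can also be guided with prompts. Depending on the installed Ultralytics version, the predictor may accept `bboxes` and `points`/`labels` keyword arguments; the sketch below assumes that interface, so verify it against your release before relying on it.

```python
from ultralytics import SAM

# Load a model
model = SAM('sam_b.pt')

# Segment the object inside a bounding-box prompt [x1, y1, x2, y2] (assumed keyword)
results = model('path/to/image.jpg', bboxes=[439, 437, 524, 709])

# Segment the object at a point prompt [x, y] with a positive label (assumed keywords)
results = model('path/to/image.jpg', points=[900, 370], labels=[1])
```
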
## Available Models and Supported Tasks
@@ -53,6 +70,33 @@ model.predict('path/to/image.jpg')  # predict
| Validation | :x: |
| Training | :x: |
## SAM comparison vs YOLOv8
Here we compare Meta's smallest SAM model, SAM-b, with Ultralytics' smallest segmentation model, [YOLOv8n-seg](../tasks/segment):

| Model | Size | Parameters | Speed (CPU) |
|---------------------------------------------|----------------------------|------------------------|-------------------------|
| Meta's SAM-b | 358 MB | 94.7 M | 51096 ms |
| Ultralytics [YOLOv8n-seg](../tasks/segment) | **6.7 MB** (53.4x smaller) | **3.4 M** (27.9x less) | **59 ms** (866x faster) |
This comparison shows the order-of-magnitude differences in model size and speed. While SAM offers unique capabilities for automatic segmentation, it is not a direct competitor to YOLOv8 segment models, which are smaller, faster and more efficient because they are dedicated to more targeted use cases.
To reproduce this test:
```python
from ultralytics import SAM, YOLO

# Profile SAM-b
model = SAM('sam_b.pt')
model.info()
model('ultralytics/assets')

# Profile YOLOv8n-seg
model = YOLO('yolov8n-seg.pt')
model.info()
model('ultralytics/assets')
```
## Auto-Annotation: A Quick Path to Segmentation Datasets
Auto-annotation is a key feature of SAM, allowing users to generate a [segmentation dataset](https://docs.ultralytics.com/datasets/segment) using a pre-trained detection model. This feature enables rapid and accurate annotation of a large number of images, bypassing the need for time-consuming manual labeling.
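Ultralytics ships an `auto_annotate` helper for this workflow. A minimal sketch, assuming the helper is importable from `ultralytics.yolo.data.annotator` in this release (the module path may differ in other versions):

```python
from ultralytics.yolo.data.annotator import auto_annotate

# Use a YOLOv8 detection model to find objects, then SAM to generate segmentation labels
auto_annotate(data='path/to/images', det_model='yolov8x.pt', sam_model='sam_b.pt')
```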

@@ -1,3 +1,8 @@
---
description: Learn about Ultralytics YOLO's MaskDecoder, Transformer architecture, MLP, mask prediction, and quality prediction.
keywords: Ultralytics YOLO, MaskDecoder, Transformer architecture, mask prediction, image embeddings, prompt embeddings, multi-mask output, MLP, mask quality prediction
---
## MaskDecoder
---
### ::: ultralytics.vit.sam.modules.decoders.MaskDecoder

@@ -23,6 +23,11 @@ keywords: Ultralytics YOLO, downloads, trained models, datasets, weights, deep l
### ::: ultralytics.yolo.utils.downloads.safe_download
<br><br>
## get_github_assets
---
### ::: ultralytics.yolo.utils.downloads.get_github_assets
<br><br>
## attempt_download_asset
---
### ::: ultralytics.yolo.utils.downloads.attempt_download_asset
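
As a usage sketch for the utilities documented above (argument names such as `url` and `dir` are assumptions to verify against the reference entries), one might download a file and a release asset like this:

```python
from ultralytics.yolo.utils.downloads import attempt_download_asset, safe_download

# Download a file from a URL into a local directory (assumed 'url' and 'dir' arguments)
safe_download(url='https://ultralytics.com/assets/coco128.zip', dir='datasets')

# Fetch a named asset from the ultralytics/assets GitHub release if it is not already local
attempt_download_asset('yolov8n.pt')
```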
