Remove unused code (#4327)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

Author: Glenn Jocher
Date: 2023-08-13 13:10:52 +02:00
Committed by: GitHub
Parent: 1c753cbce6
Commit: 9366062af2

14 changed files with 134 additions and 263 deletions


@@ -40,42 +40,45 @@ The FastSAM models are easy to integrate into your Python applications. Ultralyt

To perform object detection on an image, use the `predict` method as shown below:

!!! example ""

    === "Python"

        ```python
        from ultralytics import FastSAM
        from ultralytics.models.fastsam import FastSAMPrompt

        # Define image path and inference device
        IMAGE_PATH = 'ultralytics/assets/bus.jpg'
        DEVICE = 'cpu'

        # Create a FastSAM model
        model = FastSAM('FastSAM-s.pt')  # or FastSAM-x.pt

        # Run inference on an image
        everything_results = model(IMAGE_PATH,
                                   device=DEVICE,
                                   retina_masks=True,
                                   imgsz=1024,
                                   conf=0.4,
                                   iou=0.9)

        prompt_process = FastSAMPrompt(IMAGE_PATH, everything_results, device=DEVICE)

        # Everything prompt
        ann = prompt_process.everything_prompt()

        # Bbox prompt, default shape [0,0,0,0] -> [x1,y1,x2,y2]
        ann = prompt_process.box_prompt(bbox=[200, 200, 300, 300])

        # Text prompt
        ann = prompt_process.text_prompt(text='a photo of a dog')

        # Point prompt
        # points default [[0,0]], format [[x1,y1],[x2,y2]]
        # pointlabel default [0], format [1,0] where 0=background, 1=foreground
        ann = prompt_process.point_prompt(points=[[200, 200]], pointlabel=[1])
        prompt_process.plot(annotations=ann, output='./')
        ```
This snippet demonstrates the simplicity of loading a pre-trained model and running a prediction on an image.
@@ -83,15 +86,19 @@ This snippet demonstrates the simplicity of loading a pre-trained model and runn

Validation of the model on a dataset can be done as follows:

!!! example ""

    === "Python"

        ```python
        from ultralytics import FastSAM

        # Create a FastSAM model
        model = FastSAM('FastSAM-s.pt')  # or FastSAM-x.pt

        # Validate the model
        results = model.val(data='coco8-seg.yaml')
        ```
Please note that FastSAM only supports detection and segmentation of a single class of object. This means it will recognize and segment all objects as the same class. Therefore, when preparing the dataset, you need to convert all object category IDs to 0.
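Since every annotation must end up with class ID 0, a quick preprocessing pass over the label files can do the remapping. Below is a minimal sketch, assuming labels are stored in YOLO txt format with one file per image and lines of the form `<class_id> <x> <y> <w> <h>`; the `path/to/labels` directory is a placeholder:

```python
from pathlib import Path

# Placeholder path to a directory of YOLO-format label files
labels_dir = Path('path/to/labels')

for label_file in labels_dir.glob('*.txt'):
    lines = label_file.read_text().splitlines()
    # Rewrite each annotation with class ID 0, keeping box coordinates unchanged
    remapped = ['0 ' + line.split(maxsplit=1)[1] for line in lines if line.strip()]
    label_file.write_text('\n'.join(remapped) + '\n')
```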


@@ -26,7 +26,7 @@ You can use many of these models directly in the Command Line Interface (CLI) or
## Usage
This example provides simple inference code for YOLO, SAM and RTDETR models. For more options including handling inference results see [Predict](../modes/predict.md) mode. For using models with additional modes see [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md).
!!! example ""


@@ -61,27 +61,33 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo

### Point Prompt

!!! example ""

    === "Python"

        ```python
        from ultralytics import SAM

        # Load the model
        model = SAM('mobile_sam.pt')

        # Predict a segment based on a point prompt
        model.predict('ultralytics/assets/zidane.jpg', points=[900, 370], labels=[1])
        ```

### Box Prompt

!!! example ""

    === "Python"

        ```python
        from ultralytics import SAM

        # Load the model
        model = SAM('mobile_sam.pt')

        # Predict a segment based on a box prompt
        model.predict('ultralytics/assets/zidane.jpg', bboxes=[439, 437, 524, 709])
        ```
We have implemented `MobileSAM` and `SAM` using the same API. For more usage information, please see the [SAM page](./sam.md).
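Because the interface is shared, switching between the two models is just a matter of which weights you load. A minimal sketch reusing the point prompt from above:

```python
from ultralytics import SAM

# The identical point-prompt call works with either checkpoint
for weights in ('sam_b.pt', 'mobile_sam.pt'):
    model = SAM(weights)
    model.predict('ultralytics/assets/zidane.jpg', points=[900, 370], labels=[1])
```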


@@ -152,29 +152,33 @@ This comparison shows the order-of-magnitude differences in the model sizes and

Tests run on a 2023 Apple M2 MacBook with 16GB of RAM. To reproduce this test:
!!! example ""

    === "Python"

        ```python
        from ultralytics import FastSAM, SAM, YOLO

        # Profile SAM-b
        model = SAM('sam_b.pt')
        model.info()
        model('ultralytics/assets')

        # Profile MobileSAM
        model = SAM('mobile_sam.pt')
        model.info()
        model('ultralytics/assets')

        # Profile FastSAM-s
        model = FastSAM('FastSAM-s.pt')
        model.info()
        model('ultralytics/assets')

        # Profile YOLOv8n-seg
        model = YOLO('yolov8n-seg.pt')
        model.info()
        model('ultralytics/assets')
        ```
## Auto-Annotation: A Quick Path to Segmentation Datasets
@@ -184,11 +188,14 @@ Auto-annotation is a key feature of SAM, allowing users to generate a [segmentat

To auto-annotate your dataset with the Ultralytics framework, use the `auto_annotate` function as shown below:

!!! example ""

    === "Python"

        ```python
        from ultralytics.data.annotator import auto_annotate

        auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model='sam_b.pt')
        ```
| Argument | Type | Description | Default |
|------------|---------------------|---------------------------------------------------------------------------------------------------------|--------------|


@@ -33,6 +33,7 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. See Argum

        # Train the model
        results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
@@ -63,6 +64,7 @@ The training device can be specified using the `device` argument. If no argument

        # Train the model with 2 GPUs
        results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device=[0, 1])
        ```

    === "CLI"

        ```bash
@@ -89,6 +91,7 @@ To enable training on Apple M1 and M2 chips, you should specify 'mps' as your de

        # Train the model with MPS
        results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device='mps')
        ```

    === "CLI"

        ```bash
@@ -121,6 +124,7 @@ Below is an example of how to resume an interrupted training using Python and vi

        # Resume training
        results = model.train(resume=True)
        ```

    === "CLI"

        ```bash
@@ -196,12 +200,15 @@ To use a logger, select it from the dropdown menu in the code snippet above and

To use Comet:

!!! example ""

    === "Python"

        ```python
        # pip install comet_ml
        import comet_ml

        comet_ml.init()
        ```
Remember to sign in to your Comet account on their website and get your API key. You will need to add this to your environment variables or your script to log your experiments.
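One common way to supply the key is through an environment variable set before `init()` runs. Below is a minimal sketch, assuming `COMET_API_KEY` is the variable your `comet_ml` version reads for credentials; the key value is a placeholder:

```python
import os

import comet_ml

# Assumed: comet_ml picks up credentials from COMET_API_KEY; replace the
# placeholder with your real key, or export it from your shell instead
os.environ['COMET_API_KEY'] = '<your-api-key>'
comet_ml.init()
```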
@@ -211,12 +218,15 @@ Remember to sign in to your Comet account on their website and get your API key.

To use ClearML:

!!! example ""

    === "Python"

        ```python
        # pip install clearml
        import clearml

        clearml.browser_login()
        ```
After running this script, you will need to sign in to your ClearML account on the browser and authenticate your session.
@@ -226,16 +236,22 @@ After running this script, you will need to sign in to your ClearML account on t

To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb):

!!! example ""

    === "CLI"

        ```bash
        %load_ext tensorboard
        %tensorboard --logdir ultralytics/runs  # replace with 'runs' directory
        ```

To use TensorBoard locally, run the command below and view results at http://localhost:6006/.

!!! example ""

    === "CLI"

        ```bash
        tensorboard --logdir ultralytics/runs  # replace with 'runs' directory
        ```
This will load TensorBoard and direct it to the directory where your training logs are saved.


@@ -9,34 +9,14 @@ keywords: Ultralytics, Mask Data, Transformation, Encoding, RLE encoding, Image

Full source code for this file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/amg.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/amg.py). Help us fix any issues you see by submitting a [Pull Request](https://docs.ultralytics.com/help/contributing/) 🛠️. Thank you 🙏!

---
## ::: ultralytics.models.sam.amg.is_box_near_crop_edge
<br><br>

---
## ::: ultralytics.models.sam.amg.batch_iterator
<br><br>

---
## ::: ultralytics.models.sam.amg.calculate_stability_score
<br><br>
@@ -69,10 +49,6 @@ keywords: Ultralytics, Mask Data, Transformation, Encoding, RLE encoding, Image

## ::: ultralytics.models.sam.amg.remove_small_regions
<br><br>

---
## ::: ultralytics.models.sam.amg.batched_mask_to_box
<br><br>