---
license: mit
---

## Model Card for butterfly_segmentation_yolo_v8

This model takes in an image of a butterfly (with or without the body attached to the wings) and segments out any existing hindwings and forewings, in addition to the pictured equipment described below.

## Model Details

`yolov8m_shear_10.0_scale_0.5_translate_0.1_fliplr_0.0_best.pt` is the butterfly segmentation model. It was trained on 800 images in total, drawn from the Jiggins, OM_STRI, and Monteiro datasets. The architecture is based on YOLOv8 (`yolov8m-seg.pt`), which we fine-tuned on this dataset of 800 images.

## Model Description

The model takes an RGB input image and generates segmentation masks for all of the classes below that are found in the image. Data augmentations applied during training include shear (10.0), scale (0.5), and translate (0.1). The model was trained for 50 epochs with an image size of 256. Note that although the training image size is 256, the normalized masks predicted by YOLO can be rescaled to the original image size.

### Segmentation Classes

[`pixel class`] corresponding category

- [0] background
- [1] right_forewing
- [2] left_forewing
- [3] right_hindwing
- [4] left_hindwing
- [5] ruler
- [6] white_balance
- [7] label
- [8] color_card
- [9] body

### Details

```python
model.train(
    data=YAML,
    imgsz=256,
    epochs=50,
    batch=16,
    device=DEVICE,
    optimizer='auto',
    verbose=True,
    val=True,
    shear=10.0,
    scale=0.5,
    translate=0.1,
    fliplr=0.0
)
```

## Metrics

| Class           | Images | Instances | mAP50-95 |
|-----------------|--------|-----------|----------|
| all             | 64     | 358       |          |
| background      | 64     | 3         | 0.20946  |
| right_forewing  | 64     | 58        | 0.9845   |
| left_forewing   | 64     | 51        | 0.9682   |
| right_hindwing  | 64     | 59        | 0.95296  |
| left_hindwing   | 64     | 50        | 0.93961  |
| ruler           | 64     | 31        | 0.73608  |
| white_balance   | 64     | 18        | 0.90686  |
| label           | 64     | 50        | 0.80865  |
| color_card      | 64     | 24        | 0.92653  |
| body            | 64     | 14        | 0.78283  |

**Developed by:** Michelle Ramirez

## How to Get Started with the Model

To see how to load the model file and predict masks on images, please refer to [this GitHub repository](https://github.com/Imageomics/wing-segmentation).
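
The linked repository contains the full inference pipeline; the snippet below is only a minimal sketch of how one might load the weights with the `ultralytics` Python API, run prediction, and rescale the normalized masks back to the original image size. The weight filename matches the one listed above, while the image path is a placeholder.

```python
# Minimal sketch: assumes the `ultralytics` package is installed and the
# model weights are available locally; "butterfly.jpg" is a placeholder path.
from ultralytics import YOLO
import numpy as np

# Load the fine-tuned segmentation weights.
model = YOLO("yolov8m_shear_10.0_scale_0.5_translate_0.1_fliplr_0.0_best.pt")

# Run inference at the training resolution.
results = model.predict(source="butterfly.jpg", imgsz=256)
result = results[0]

orig_h, orig_w = result.orig_shape  # original image height and width

# masks.xyn holds normalized polygon coordinates (x, y in [0, 1]), so
# rescaling to the original image size is a simple multiplication.
for cls_id, polygon in zip(result.boxes.cls.tolist(), result.masks.xyn):
    scaled = polygon * np.array([orig_w, orig_h])
    print(model.names[int(cls_id)], scaled.shape)
```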