The AI for Coral Reefs Challenge
The AI for Coral Reefs Challenge aimed to improve the monitoring and assessment of coral reef health using advanced computer vision techniques.
By developing an image segmentation pipeline built on state-of-the-art AI models, the challenge focused on rapidly analyzing underwater imagery to provide insights into coral reef health. This approach aimed to reduce the manual labor involved in data processing, offering a quicker, more accurate understanding of coral conditions globally. To give you an idea of how much manual labor is currently involved: a four-hour dive photographing one hectare of coral results in roughly 40 hours of image labeling.
The use of AI in coral reef monitoring is a game-changer, offering precise, timely insights that can guide conservation efforts effectively. This technology enables targeted interventions, ensuring resources are allocated where they're needed most, ultimately contributing to the global effort to protect these vital ecosystems.
The challenge adopted a dual approach. Given the availability of two distinct types of datasets, dense segmentation masks and sparse point labels, participants were initially split into two groups, Supervised Learning and Unsupervised Learning, each addressing the problem statement with a different methodology. The Supervised Learning group was further subdivided to capitalize on a diverse array of cutting-edge segmentation models, including You Only Look Once v8 (YOLOv8), the Segment Anything Model (SAM), and Mask R-CNN.
This blog focuses on the research and experimentation conducted with SAM and YOLOv8, as these models proved to perform best. Given the commonalities in the model training process for both dense and sparse inputs, members from both the Supervised Learning and Unsupervised Learning groups collaborated on the fine-tuning experiments.
Dedicated objectives were set for the subgroup working with supervised learning.
The unsupervised approach focused on developing a label propagator that uses point labels to create segmentation masks. The output of this label propagator can serve as a starting point for the labeling process, requiring only some human refinement. This would significantly ease the labeling process in marine research, saving hours of work.
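The exact propagation method is not detailed here, but the core idea of turning sparse point labels into a dense mask can be illustrated with a minimal nearest-point baseline: every pixel simply inherits the class of the closest labeled point. This is a hedged sketch, not the team's actual implementation, and the function name `propagate_point_labels` is hypothetical.

```python
import numpy as np

def propagate_point_labels(points, labels, height, width):
    """Assign each pixel the class of its nearest labeled point.

    points : (N, 2) array of (row, col) coordinates of sparse point labels
    labels : (N,) array of integer class ids, one per point
    Returns an (height, width) dense integer mask.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    pixels = np.stack([ys.ravel(), xs.ravel()], axis=1)          # (H*W, 2)
    # Squared distance from every pixel to every labeled point
    d2 = ((pixels[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                                  # closest point per pixel
    return labels[nearest].reshape(height, width)
```

A real propagator would also exploit image content (color, texture, superpixel boundaries) rather than geometry alone, which is exactly where approaches like Point Label Aware Superpixels improve on this baseline.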
Reef Support has curated a diverse array of datasets encompassing coral reef ecosystems worldwide, incorporating two widely utilized datasets in coral reef research: Seaview and CoralSeg.
Furthermore, the Reef Support team meticulously crafted dense segmentation masks for one-third of the Seaview dataset, ensuring adequate coverage across all biological realms of coral reefs.

The challenge led to several significant milestones. Despite encountering obstacles such as data leakage and dataset quality, the teams persevered, refining their methodologies and enhancing model performance.
The best model turned out to be YOLOv8l. The table below summarizes the performance of the different YOLOv8 models, all trained on the same training set and evaluated on the same test set. The top performer is the l-size model. As model size decreases, performance degrades slightly, from a mIoU of 0.85 to 0.83. The advantage of smaller models, however, lies in their faster execution and compatibility with smaller hardware devices.
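The mIoU figures quoted above follow the standard definition of mean Intersection-over-Union: per-class IoU averaged over the classes present in either mask. The evaluation code itself isn't shown in the post, so the snippet below is a minimal sketch of how such a score is commonly computed, not the challenge's exact pipeline.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union between two integer class masks.

    Classes absent from both masks are skipped so they don't skew the mean.
    """
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

Averaging per class rather than per pixel is what makes the metric sensitive to rare coral classes, which is why regional performance can diverge from the headline average.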
While the results presented in the table may seem exceptionally favorable for this computer vision task given the dataset, it is important to note that performance varies significantly across different regions. The average results summarized in the table above should be interpreted cautiously.
The subsequent table summarizes the model's performance on the test sets for each region.

For a qualitative assessment of the model's performance, a random sample of images was drawn from the test set. This enables a direct comparison between the ground truth masks and the predicted masks.
The predictions presented here were generated using the l-size model. In comparison, the SAM models reached a mean IoU of 76.05%. The unsupervised learning team's innovative approaches, such as Point Label Aware Superpixels, achieved a mIoU of 55%.
The AI for Coral Reefs Challenge has set a precedent for the use of AI in environmental science, offering promising new directions for coral reef preservation. The collaborative effort of the global AI community has not only advanced our understanding of coral ecosystems but also paved the way for future innovations in marine conservation.
Arthur Caillau, Bart Emons, Bogumila Soroka, Cas Rooijackers, Icxa Khandelwal, Julieta Millán, Laurens Potiau, Leo Hyams, Masum Patel, Pierre Le Roux, Shadi Andishmand, Sohane Le Roux, Sumit Sakarkar, Thomas Burger, Timo Scheidel, Mohsen Nabil, Sonny Burniston, Joanne Lijbers, Ponniah Kameswaran & of course Yohan Runhaar!