AI against Oil Spills: Detecting Oil Spills at Sea with Satellite Imagery
Smaller, often unnoticed spills cause long-lasting harm to marine organisms: because they are not detected in a timely manner, many organisms are oiled or poisoned before cleanup begins. Birds and marine mammals are severely affected by oil spills as they need to make regular contact with the surface of the water, where much of the oil floats. These organisms run the risk of ingesting or inhaling oil, or getting covered in it, all of which are harmful. When an oil spill reaches the shore, it affects the lives of birds and mammals that nest on the coast as well.
When oil spills happen, a fraction of the oil evaporates quickly, but the rest of the oil weathers and forms a persistent oil slick on the water surface, which can spread out in the direction of the wind. It becomes difficult to clean up oil spills once they have dispersed into very thin films and broken down into smaller particles. So it is important to begin cleanup as fast as possible to reduce the impact of a spill.
In addition to being a menace to marine life and the environment, small and sometimes deliberate oil spills are an economic burden for governments. Detected spills are usually cleaned up by oil spill response groups tasked by government agencies, and these groups require massive resources to perform their duties.
Rijkswaterstaat, the Directorate-General for Public Works and Water Management in the Netherlands, is responsible for water quality and for cleaning up oil spills in inland waterways and ports. They teamed up with FruitPunch AI to improve their response speed to spills in the AI against Oil Spills Challenge.
A group of 15 AI for Good engineers from different parts of the world joined hands in this Challenge. We split into two subgroups, each focusing on a different type of oil spill: the "Inland Team" for spills in inland ports and waterways and the "Sea Team" for spills on the open sea, and started our 10-week crunch.
Our goals for this Challenge were three-fold: to curate an open-access oil spill dataset, to build machine learning models that detect oil spills in satellite imagery, and to explore the estimation of oil slick thickness.
This blog describes the journey of the Sea Team in detecting oil spills in open seas and oceans.
We identified a source of potential oil spill data in the US Office of Satellite and Product Operations' NESDIS Marine Pollution Products, but found that it lacked the associated satellite images that serve as input for training machine learning models. So we set out on our journey of finding ready-made datasets that could help us in our task.
We soon realised that oil spill detection from satellite images is a budding research area, and no “curated” oil spill datasets exist in the public domain. So we took it upon ourselves to curate an open-access dataset based on Marine Pollution Surveillance Reports (MPSR) from US NESDIS Marine Pollution Products and make it available for the research community and data science enthusiasts.
We initially identified 800+ oil spill events from the archives of NESDIS Marine Pollution Products; during the course of the Challenge, we expanded the list to 2200+ potential oil spills in the North American region, sourced from 20+ satellites.
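As an illustration of how such an event catalog can be linked to imagery, the sketch below pairs event footprints with candidate Sentinel 1 scenes using the asf_search package. This is an assumed workflow, not our exact pipeline: the file mpsr_events.geojson and its event_date and event_id fields are hypothetical placeholders for whatever format the curated catalog uses.

```python
# Sketch: match curated oil spill events to candidate Sentinel 1 scenes.
import geopandas as gpd
import pandas as pd
import asf_search as asf

# Hypothetical catalog of MPSR events; adapt field names to your own data
events = gpd.read_file("mpsr_events.geojson")

for _, event in events.iterrows():
    date = pd.to_datetime(event["event_date"])
    # Search for GRD scenes intersecting the event footprint within +/- 1 day
    results = asf.geo_search(
        platform=asf.PLATFORM.SENTINEL1,
        processingLevel="GRD_HD",
        intersectsWith=event.geometry.wkt,
        start=date - pd.Timedelta(days=1),
        end=date + pd.Timedelta(days=1),
    )
    print(event.get("event_id", "?"), len(results), "candidate scenes")
```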
Since multispectral images (MSI) were key to estimating thickness and no large-scale implementations were available, we decided to create a baseline using synthetic aperture radar (SAR) images from Sentinel 1 and then compare it with the results from MSI images from Sentinel 2 and Landsat 8. Based on the available time and resources, we decided to explore a few thickness estimation ideas on MSI images.
Though different research papers used different metrics, we decided to evaluate the performance of our models on the basis of two overlap-based metrics: Intersection over Union (IoU, a.k.a. the Jaccard Index) and F1 Score (a.k.a. the Dice Coefficient), as they are best suited for our use case: our datasets have a high imbalance between the oil spill and sea classes.
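Both metrics reduce to simple set arithmetic on binary masks. A minimal NumPy sketch (illustrative, not our actual evaluation code):

```python
import numpy as np

def iou_and_f1(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """IoU (Jaccard) and F1 (Dice) for binary masks of the oil class."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = intersection / (union + eps)
    f1 = 2 * intersection / (pred.sum() + target.sum() + eps)
    return iou, f1
```

For a given prediction-target pair the two are monotonically related (F1 = 2·IoU / (1 + IoU)), so they rank models consistently, but reporting both makes results easier to compare across papers.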
Satellite images are well known for their vibrant false-color renderings, but they can be difficult to work with in the oil spill context: each satellite product is huge (1 to 2 GB) and needs complex preprocessing before it can be used in machine learning models, and the actual oil spills are restricted to a tiny area within the large image captured by the radar.
We found a savior in the Alaska Satellite Facility, which provides a cloud-based API (the HyP3 SDK) that can perform preprocessing, such as Radiometric Terrain Correction and speckle filtering, at the push of a button on all Sentinel 1 products.
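A minimal sketch of submitting a Radiometric Terrain Correction job through the HyP3 SDK; the granule name is a placeholder, and the parameter names reflect our understanding of the public hyp3_sdk interface, so they may need adjustment:

```python
from hyp3_sdk import HyP3

# Prompts for NASA Earthdata credentials if none are configured
hyp3 = HyP3(prompt=True)

# Placeholder Sentinel 1 GRD granule name
granule = "S1A_IW_GRDH_1SDV_20200101T000000_..."

# Submit an RTC job with speckle filtering enabled
job = hyp3.submit_rtc_job(
    granule,
    name="oil-spill-rtc",
    radiometry="gamma0",
    speckle_filter=True,
    resolution=30,
)
job = hyp3.watch(job)      # poll until the job completes
job.download_files(".")    # fetch the terrain-corrected GeoTIFFs
```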
We used Level-1 Ground Range Detected (GRD) products and chose VV polarization for training deep learning models. We tried several semantic segmentation architectures for the Sentinel 1 pipeline and then limited our search space to UNet++, DeepLabV3, and DeepLabV3+ with EfficientNet (B0-B3) backbone initialized with ImageNet pre-trained weights, as they provided the best results. The best metrics obtained on the test dataset were an IoU of 0.65 and an F1 Score of 0.79.
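As a sketch of how such a model can be instantiated, assuming the segmentation_models_pytorch library (which provides all three architectures), with a single input channel for the VV band; this is illustrative, not our exact training setup:

```python
import segmentation_models_pytorch as smp

# UNet++ with an EfficientNet-B0 encoder pre-trained on ImageNet;
# one input channel (VV backscatter), one output class (oil)
model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b0",
    encoder_weights="imagenet",
    in_channels=1,
    classes=1,
)

# Dice loss pairs naturally with the F1/IoU evaluation metrics
# and copes better with the oil/sea class imbalance than plain BCE
loss = smp.losses.DiceLoss(mode="binary")
```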
Based on the analysis of spectral reflectance across all bands, we were able to identify two new spectral indices, one each for Sentinel 2 and Landsat 8, that help in the quick identification of oil spills from MSI images and also provide a compact feature representation for use as input to machine learning models.
With further rigorous statistical analysis and testing on a geographically diverse dataset, these indices could be rolled out for use by the general public. For the Sentinel 2 pipeline, the new oil spill index was chosen as input for training semantic segmentation models; for the Landsat 8 pipeline, the new index created for Landsat 8 MSI was chosen as input. We also compared different choices of band combinations against the newly identified spectral indices to gauge their effectiveness.
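The exact band combinations are not reproduced here, but spectral indices of this kind typically follow a normalized-difference pattern over two bands. The sketch below computes such a generic index; the band files B03.tif and B11.tif are arbitrary placeholders, not the actual index definition:

```python
import numpy as np
import rasterio

# Placeholder band files; substitute the bands from the actual index formula
with rasterio.open("B03.tif") as src_a, rasterio.open("B11.tif") as src_b:
    band_a = src_a.read(1).astype(np.float32)
    band_b = src_b.read(1).astype(np.float32)

# Generic normalized-difference index in [-1, 1]; a single-channel
# feature map like this can be fed directly to a segmentation model
index = (band_a - band_b) / (band_a + band_b + 1e-7)
```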
We trained several models within the short time frame of 10 weeks. The models that provided the best results for IoU and F1 Score on the test dataset are shown in the figure below.
The following chart compares the performance of the different segmentation architectures used in the Sentinel 1 pipeline with regard to IoU and F1 Score on the test dataset (encoder, encoder weights, batch size, etc. were kept the same as for the best-performing model):
The following are a few sample predictions on the test dataset for the different pipelines:
The models based on SAR images (Sentinel 1) gave better results in this Challenge than the models based on MSI images (Sentinel 2 / Landsat 8). This is in line with the fact that the majority of the literature on oil spill detection is based on SAR imagery.
However, it is also important to note that the number of samples available for the MSI imagery pipelines was only 40-60% of the number available for SAR imagery, which may have played a major role in the lower metrics of the MSI-based models.
Since we used a dataset that is based on oil spill events around the United States, the resulting model may be biased toward the conditions prevalent in North America. We think it is important to create a more geographically diverse dataset that contains oil spill events from different regions of the world.
In addition, it is essential for spill response teams to define a workflow in their process to add shapefiles and satellite images collected during their cleanup operations to a common oil spill dataset. This will greatly aid in the creation of sufficient data samples for building accurate machine learning models in the future.
We came up with several other ideas and approaches that could not be implemented in this AI for Good Challenge due to time and computational resource constraints; a few of the notable ones are left for exploration in future Challenges or projects.
Emile
I think the magic of this Challenge lay in the fact that it was an A-to-Z type of problem: we started with a problem statement, some initial data, and little prior knowledge, and had three difficult and ambitious goals ahead of us. Because we had to find most of the data ourselves, build multiple machine learning pipelines, perform data analysis, and convey meaningful information from our results to stakeholders, all with our own (small) team and within the timeline of the Challenge, we had to overcome a lot of hurdles along the way. This is probably the reason that we learned as much as we did. Because of the intensity of the project and the expectations we had of it, we had to adapt to different scenarios, different domains of expertise, and different workflows that we encountered throughout the weeks.
I feel like all of us gained a lot of new knowledge and many new insights through this. Since the start of the AI against Oil Spills Challenge, I myself have acquired knowledge about the programmatic aspects of Geographic Information Systems (GIS), data collection (and its importance in domain-specific problems), machine learning pipelines (in particular the segmentation models at the heart of our pipelines), and collaborative work in a multi-disciplinary team.
All in all, this has been an inspiring Challenge for me; it took me through a period in which I learned multiple new skills and further developed several others.
Chi
This was personally a difficult Challenge for me and out of my comfort zone, given my limited knowledge of satellite images and machine learning, as well as my other commitments. Due to the different levels of experience and expertise of each member, we each moved at our own pace.
Building production-ready machine learning models from scratch in only 10 weeks with a team of 5 was most definitely challenging; however, with the team's hard work, we were able to deliver results. Through the AI against Oil Spills Challenge, I had the opportunity to get my feet wet in the AI/ML space, segmentation models, and image processing, with the help and patience of my team members.
I believe what brought us all together was our interest in and passion for machine learning and for using it on what we care about: real-life problems. The Challenge was most definitely difficult yet inspiring, and most importantly, I learned a lot about myself and about what I did and didn't know, so that I can further improve my skills and knowledge for upcoming Challenges!
Ponniah
This Challenge gave me an opportunity to apply machine learning skills to solve meaningful real-world problems and to enhance my skills in the process. The masterclasses provided by Rijkswaterstaat helped us learn the foundations of the domain in a very short time. One of the best aspects of this Challenge was the coming together of learning-oriented people with varied skill sets and levels of experience. This helped me learn new skills from my peers, gain intuition about various ML concepts by working closely with experienced team members, and bounce ideas around, not just within our subgroup, but also between the two subgroups in this Challenge. Having no prior knowledge of GIS, I found it challenging to work with imagery from multiple satellites, but at the same time it was exciting, interesting, and very fulfilling. Though ours was a very small team, everyone chipped in at the right time with the right contribution and helped meet our key objectives. More than anything, contributing towards a good cause gave me great satisfaction, and I have to thank FruitPunch for coming up with this wonderful AI for Good platform.
AI against Oil Spills Challenge members: Aliaksandr Hancharenka, Agustin Iniguez Rabago, Chi Nguyen, Emile Dhifallah, Ponniah Kameswaran, Bram de Wit, Leonardo Iheme, Resham Sundar, Sahil Chachra, Shubham Baid, Timothy Malche, Muhammad Uzair Ghous