Although the plane is guided by a mission planner in flight, the landing is not executed on autopilot. Extreme caution has to be taken when landing a fixed-wing plane with a wingspan of four meters in the densely vegetated landscape of the South African bush. The roads used for landing are only slightly wider than the plane, so the risks are high: overhanging tree branches, uneven terrain and passing animals. Because of this, the basic auto-landing features were originally replaced by manual landing with expert drone pilots.
But manual landing requires a lot of resources, and SPOTS wanted to implement a smart auto-landing feature as soon as possible. Our team set out to develop a prototype for landing the plane autonomously.
We first listed all the constraints the plane has to adhere to: weather conditions, runway conditions and the plane's dimensions all have to be encoded accurately before control of the plane is handed over to our software.
We approached the landing problem from two perspectives: a rule-based approach in which the flight controller executes the landing between two GPS beacons, and a reinforcement learning approach trained in simulation.
The first technique involves setting up two GPS beacons, one at the start and one at the end of the runway, which serve as reference points for the plane. When the drone approaches the first runway pointer, a script calculates the desired landing approach direction and uploads a flight plan to a PX4 Flight Control Unit (FCU). The FCU monitors the sensors on board the plane and executes the landing manoeuvre, ensuring that the touchdown is soft enough and does not deviate too far from the middle of the runway. We tested the auto-landing script successfully in a Gazebo simulation environment. The only setback we faced was a discrepancy in the simulation that caused the plane to land below the surface of the map. With that fixed, the method is ready to be tried on a test version of the real plane in South Africa.
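To give an idea of the kind of calculation such a script performs (this is a simplified sketch, not the actual SPOTS code; the beacon coordinates and the glide-slope angle below are placeholder assumptions), the approach heading can be derived from the two beacon positions and combined with a fixed glide slope to lay out a descent profile:

```python
import math

def approach_bearing(lat1, lon1, lat2, lon2):
    """Forward azimuth in degrees from the first beacon to the second,
    i.e. the direction the plane should fly along the runway."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def glide_slope_altitudes(distances_m, slope_deg=5.0):
    """Target altitude above the runway (m) at each distance (m) before touchdown."""
    return [round(d * math.tan(math.radians(slope_deg)), 1) for d in distances_m]

# Placeholder beacon coordinates marking the start and end of the runway.
start_beacon = (-24.7500, 31.5000)
end_beacon = (-24.7520, 31.5030)

heading = approach_bearing(*start_beacon, *end_beacon)
profile = glide_slope_altitudes([400, 300, 200, 100, 0])
print(f"approach heading: {heading:.1f} deg, descent profile (m): {profile}")
```

In the real pipeline, waypoints along this approach line would be packaged into a mission and uploaded to the PX4 FCU, which then flies the approach using its own landing logic.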
“The diversity of the tasks involved challenged me to learn a lot of new things in the overlap of hardware and software. The interactive team environment and interaction with the people from SPOTS was a great experience for someone learning to apply AI in real-life scenarios. Moreover, working on a case as beautiful as wildlife conservation has been inspiring, it showed how applying engineering knowledge can have a positive impact in the real world.” - Emile Dhifallah, Autonomous Flight Team
In parallel, we started building a second simulation environment in which a model of the plane could learn to land with the help of a reinforcement learning (RL) agent. In a highly realistic environment built in Unreal Engine 4, we modelled the descent of the plane using an AirSim simulation. The goal was to connect the plane's controls to OpenAI's Gym, a widely used toolkit for defining and running RL experiments. Training an RL agent to land the plane starts with defining what is considered 'good'. Specifically, we had to define the reward the agent receives, the observations and actions available to it, and the conditions under which a landing attempt ends.
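As an illustration of what such a Gym interface can look like (a minimal sketch under our own assumptions about observations, actions and reward, not the team's actual AirSim integration), a custom environment roughly takes this shape:

```python
import numpy as np
import gym
from gym import spaces

class FixedWingLandingEnv(gym.Env):
    """Illustrative Gym environment for the landing task. The simulator
    hooks (_reset_sim, _step_sim) are placeholders for an AirSim backend."""

    def __init__(self):
        # Observation: altitude, vertical speed, airspeed,
        # lateral offset from the runway centreline, pitch, roll.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        # Action: elevator, aileron and throttle commands in [-1, 1].
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def reset(self):
        return self._reset_sim()

    def step(self, action):
        state = self._step_sim(action)
        altitude, vertical_speed, _, lateral_offset, _, _ = state
        landed = altitude <= 0.0
        # Reward a soft touchdown close to the centreline; hard or
        # off-centre landings receive a large negative reward.
        reward = -float(abs(vertical_speed) + abs(lateral_offset)) if landed else 0.0
        return state, reward, landed, {}

    def _reset_sim(self):
        # Placeholder: would reset the AirSim episode; starts 60 m up, on the centreline.
        self._state = np.array([60.0, -2.0, 18.0, 0.0, 0.0, 0.0], dtype=np.float32)
        return self._state

    def _step_sim(self, action):
        # Placeholder dynamics: descend one metre per step regardless of the action.
        self._state = self._state.copy()
        self._state[0] -= 1.0
        return self._state
```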
After encoding all of this information, the agent could start learning the landing process. We made a first attempt at doing so using Deep Q-Learning, but progress was unfortunately halted by a lack of compute resources. Other obstacles in the RL pipeline included a lack of definition in the Unreal Engine environment, which is essential for transferring the capabilities of the reinforcement learning agent to the real-life plane.
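For context on the Deep Q-Learning attempt, the training loop roughly looks like the skeleton below (our own minimal sketch in PyTorch, reusing the illustrative environment above; the discretised control commands are an assumption, since Q-learning needs a finite action set, and a target network and exploration schedule are omitted for brevity):

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Discretised (elevator, aileron, throttle) commands -- an assumption made
# so that the Q-network can output one value per action.
DISCRETE_ACTIONS = [np.array(a, dtype=np.float32) for a in
                    [(-0.2, 0.0, 0.5), (0.0, 0.0, 0.5), (0.2, 0.0, 0.5),
                     (0.0, -0.2, 0.5), (0.0, 0.2, 0.5)]]

q_net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, len(DISCRETE_ACTIONS)))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)
gamma, epsilon, batch_size = 0.99, 0.1, 64

env = FixedWingLandingEnv()  # the illustrative environment sketched earlier
for episode in range(10):
    state, done = env.reset(), False
    while not done:
        # Epsilon-greedy action selection over the discretised commands.
        if random.random() < epsilon:
            a_idx = random.randrange(len(DISCRETE_ACTIONS))
        else:
            with torch.no_grad():
                a_idx = int(q_net(torch.from_numpy(state)).argmax())
        next_state, reward, done, _ = env.step(DISCRETE_ACTIONS[a_idx])
        replay.append((state, a_idx, reward, next_state, float(done)))
        state = next_state

        if len(replay) >= batch_size:
            # Sample a minibatch and take one Q-learning step.
            s, a, r, s2, d = map(np.array, zip(*random.sample(replay, batch_size)))
            q = q_net(torch.from_numpy(s))[torch.arange(batch_size), torch.from_numpy(a)]
            with torch.no_grad():
                best_next = q_net(torch.from_numpy(s2)).max(1).values
                target = torch.from_numpy(r).float() + gamma * best_next * (1 - torch.from_numpy(d).float())
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```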
Both of our development trajectories need more refinement before real-life testing, but both have shown promise as a replacement for manual landing of the SPOTS plane. We're looking forward to seeing these techniques implemented on the drone in the future.
*Autonomous Flight Team*
Thanasis Trantas, Emile Dhifallah, Nima Negarandeh, Gerson Foks, Ryan Wolf
The goal of the Hardware Team was to connect the different hardware components on the drone, which included the camera feeding the detection model, the Jetson Nano running that model, and the Herelink transmitter sending the results down to the ground station.
We had to make sure that information flows smoothly through the whole system, with as little delay as possible. Within seconds the drone can move tens of meters, so the detection results had to be processed and sent to the ground very quickly.
Our goals were to run the detection model on the Jetson Nano fast enough to keep up with the incoming video, skipping frames where necessary, and to stream both the original video and the model output to the ground station.
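One common way to keep the processed stream close to real time is to drop the frames that pile up while the detector is busy. The sketch below assumes an OpenCV-readable video source and a placeholder `run_detector` function; it is not the team's actual pipeline:

```python
import cv2

def run_detector(frame):
    # Placeholder for the detection model running on the Jetson Nano.
    return frame

cap = cv2.VideoCapture(0)  # assumed camera index
while cap.isOpened():
    # Discard frames buffered while the previous inference was running,
    # so the detector always works on a recent image.
    for _ in range(4):
        cap.grab()
    ok, frame = cap.read()
    if not ok:
        break
    output = run_detector(frame)
    cv2.imshow("detections", output)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```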
“Being a part of the AI for Wildlife Challenge was a great learning opportunity. My background is in machine learning, so I was able to learn about different parts of the pipeline required to create a successful real-life solution. The biggest challenge was to work on the hardware remotely (I was in Amsterdam, 9000 km from the actual drone), but I still managed to contribute to the project. It was great to meet people from all over the world and work together towards a common goal for a good cause.” - Maks Kulicki, Hardware Team
We were able to run the model on the Jetson Nano, but we didn't find a way to skip frames. We managed to set up the system to send the information to the ground, which required connecting the Jetson Nano to the Herelink transmitter with an HDMI cable. We also tried to set up the ground display to show the original video alongside the model output on the same screen, but that turned out to be more difficult than expected.
*Hardware Team of AI for Wildlife 3*
Kamalen Reddy, Estine Clasen, Maks Kulicki, Thembinkosi Malefo, Michael Bernhardt
*Model Optimization:* Sahil Chachra, Manu Chauhan, Jaka Cikač, Sabelo Makhanya, Sinan Robillard
*CI/CD:* Aisha Kala, Mayur Ranchod, Sabelo Makhanya, Rowanne Trapmann, Ethan Kraus, Barbra Apilli, Samantha Biegel, Adrian Azoitei
*Hardware:* Kamalen Reddy, Estine Clasen, Maks Kulicki, Thembinkosi Malefo, Michael Bernhardt
*Autonomous Flight:* Thanasis Trantas, Emile Dhifallah, Nima Negarandeh, Gerson Foks, Ryan Wolf