The Game

For this competition, we made our own game using the Unity engine. “Isaac’s Labyrinth” is a multiplayer human + AI co-op game in which pairs, each consisting of an AI agent and one of its AI developers, compete to collect as many fruits as possible. The AI agent’s job is to navigate the dynamic maze as efficiently as possible, using only visual observations and a rough, ‘as the crow flies’ indication of where the fruits are. The human player can place invisible traps, but can also reveal hidden traps to their AI counterpart so that it can move without any issues.

The Challenge

The challenge itself is part smart navigation and part decision making and risk management. An agent in this environment has to understand where it is relative to where it wants to go, and how to get there as fast as possible while avoiding dangers. It also has to understand that other entities in the environment can create dangers, and that it needs its own human partner to reveal those dangers to it.

The Lore

“I woke up from hibernation with the systems going wild. It was too late, the display read ‘brace for impact’. Everything went black.”

“Next thing I remember Pathy pulls me out of my chamber. My heads-up is indicating ‘power too low for long range transmission’.
‘Why would we need long range… Anywhere inside the cluster system should be safe.’
I look through the small window of my quarters and through the dust & smoke I see… A MAZE

What sick machine mind would make such a trap… Playing games… There must be more out there!

All I know is, I need to make contact with the central community fast. Pathy will help me, as long as we’re working together we can get out of this mess.”


Community Platform

We’re currently developing the FruitPunch AI Community Platform.

Every participant of the competition will receive an account on the FruitPunch AI Community Platform. Competition events, info and updates will all be posted here, and you will also use the platform to hand in your work. You will be able to interact with all other members of the FruitPunch AI community and connect with experts in the field of Artificial Intelligence.


We organize workshops that tackle practical matters, such as setting up and training an agent in the cloud, and masterclasses that cover machine learning techniques you can use in the competition.


Workshop: Competition environment & training (date TBA)
Masterclass: LSTM networks (date TBA)
Masterclass: Location estimation (date TBA)
Workshop: Training in our cloud environment (date TBA)

Prior Knowledge

For our masterclasses and workshops we expect you to have some prior knowledge. Resources for these subjects can be found on our website and platform.


Participation is free of charge for teams affiliated with a university or another academic institute.
For teams affiliated with a company, an entry fee applies.


To ensure the fairness of the competition, we require each team to abide by this rulebook.

Upon enrolling in the competition each team agrees to these rules.

All enrolled teams will be notified of any modifications to these rules.
For any comments or suggestions regarding the rulebook, please send us an email.

Technical information


The challenge environment is made in the Unity game engine and uses the Unity ML-Agents toolkit. This toolkit allows for communication between a Python script and the Unity environment: the script receives observations from agents in the Unity environment and in turn sends instructions that these agents execute. Each team will write such a Python script.
It needs to contain a learning agent that learns to choose the best actions based on the observations from the environment.
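Concretely, such a script boils down to an observe-act loop. The sketch below is illustrative only: the names (RandomAgent, run_episode, FakeMazeEnv) and the environment interface are made up for this example, and a real submission would communicate with the Unity environment through the ML-Agents Python API rather than the simplified stand-in used here.

```python
import random


class RandomAgent:
    """Toy stand-in for a learning agent: maps an observation to an action."""

    def __init__(self, n_actions):
        self.n_actions = n_actions

    def act(self, observation):
        # A trained agent would base its choice on the observation;
        # this toy version just picks uniformly at random.
        return random.randrange(self.n_actions)


def run_episode(env, agent):
    """Generic interaction loop: observe, act, step, repeat until done."""
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = agent.act(observation)
        observation, reward, done = env.step(action)
        total_reward += reward
    return total_reward


class FakeMazeEnv:
    """Minimal fake environment so the loop above can run stand-alone."""

    def __init__(self, episode_length=3):
        self.episode_length = episode_length
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]  # a dummy observation

    def step(self, action):
        self.t += 1
        done = self.t >= self.episode_length
        return [float(self.t)], 1.0, done


if __name__ == "__main__":
    reward = run_episode(FakeMazeEnv(), RandomAgent(n_actions=4))
    print(reward)
```

In the real script, the learning happens by updating the agent between (or during) episodes based on the rewards it collects; the loop structure stays the same.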


This ‘learning’ will happen on servers provided to FruitPunch AI by a third party, and only on these servers. AI performance depends heavily on computing power; training on these shared servers both gives every team enough of it and keeps the competition fair.


We recommend using TensorFlow and Keras, two free and open-source machine learning libraries for Python (and other languages) that will also be used in the masterclasses. Using Python is not mandatory, but strongly recommended. If your team wishes to use another language, please discuss this with us first by email.

Clean Code

At FruitPunch AI, we love clean code. Clean code is code that can be understood on a single read, without a need for comments. Since most of your programming time is actually spent reading what you and your team members have already written, spending more time on writing code that is easy to read will save you time in the long run, and both your team and the judges of the competition will love you for it. However, if comments are required to understand your code, please write clean comments. Recommended reading: Martin, R. C. (2008). Clean Code.
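As an illustration (the function names and the pickup radius of 2.0 are invented for this example and are not part of the game), here is the same small check written twice:

```python
# Hard to read: terse names force the reader to decode the intent.
def f(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 < 2.0


# Clean: the names carry the meaning, so no comment is needed.
PICKUP_RADIUS = 2.0


def is_fruit_within_pickup_range(agent_position, fruit_position):
    dx = agent_position[0] - fruit_position[0]
    dy = agent_position[1] - fruit_position[1]
    distance = (dx ** 2 + dy ** 2) ** 0.5
    return distance < PICKUP_RADIUS
```

Both functions compute the same thing, but only the second one tells you what it means without any extra explanation.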

Hand in

Before the agent deadline, your team will hand in a trained agent, together with a list of the libraries you used and a short explanation of what type of agent you made, under an open-source license (see the Open Source section for more details). Before the documentation deadline, your team will hand in a concise document that details your approach and the reasoning behind it at a medium-to-high level. This means you explain why you used a certain technology and why, for example, your neural net is structured the way it is, but not why you gave a certain function a certain name.

Open Source

After the finals event, the final submission from each team will be made public. We do this to advance the knowledge of all competitors by letting them learn from each other. For this reason, you must submit your solution under an open-source license that allows FruitPunch AI to publish a copy of your code on its media, of course with full credit to your team.

Bugs and improvements

If you find a bug in the game or want to propose an improvement, please email us. If email is not your cup of tea, you can also find our dev & tech team on the community platform and on our Slack. If you report a visual bug, please provide screenshots or screen captures if possible. For other bugs, standard bug-reporting practice applies: a description of the circumstances, the expected behavior, the actual behavior and, ideally, logs.

How to Join

Enrollment for this challenge is no longer possible.

Team Guidelines

Our recommended team size is 5 people: the expected number of parallel tasks in solving the challenge is small, and working with too many people on a single task decreases individual efficiency. There is no limit to the number of competitors in a single team, but non-monetary prizes are limited to 5 items per prize.