How does AI inspired by neuroscience differ from conventional AI?

Traditional Artificial Intelligence is grounded in the engineering school of thinking: algorithms are built to account for every eventuality the engineer can anticipate, and then make decisions about the data they are applied to. Imagine a logic tree tens of thousands of branches wide and deep, holding a solution for every eventuality its creator has conceived.

This way of thinking was the cornerstone of the first generation of AI. It gave the world a glimpse of what AI is capable of, but it also highlighted how brittle such systems can be: they fail when exposed to data they can't interpret, or to eventualities that depart from the pre-determined route.

Put simply, first-generation AI is limited by the imagination of its creator.

At Remi we take a different approach, building artificially intelligent agents using Reinforcement Learning.

Instead of telling the agent how to do something, we set a goal for the agent and the constraints of the environment it exists in. From there, the agent "trains" by exploring the options available to it, completely at random. After an unspecified number of iterations, the agent will - by chance - achieve the goal. When it does, it is rewarded (the "reward" is an algorithmic equivalent of the dopamine rush a mammal experiences), and it continues exploring alternative strategies.


By not being instructed how to do something, the agent explores every possible approach to a task in search of the optimal way to complete it. After a certain number of iterations - the complexity of the task determines how many are required - the agent is "trained" and has settled on the optimal strategy for completing the task.
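The train-by-random-exploration loop described above can be sketched with tabular Q-learning, a standard Reinforcement Learning method. This is a minimal illustrative toy, not Remi's actual system: the environment (a five-position track with a goal at one end), the reward values, and the hyperparameters are all our own assumptions.

```python
import random

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA = 0.5, 0.9

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q[state][action] starts at zero: the agent knows nothing about the task.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            a = rng.randrange(2)  # explore completely at random
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            # The "reward" arrives only when the agent stumbles onto the goal.
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # Reinforce: nudge this action's value toward reward + discounted future value.
            Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
            state = nxt
    return Q

Q = train()
# After enough iterations, the learned strategy at every position is "step right".
policy = ["right" if q[1] > q[0] else "left" for q in Q[:-1]]
```

Note that the agent is never told that "right" leads to the goal; that preference emerges purely from which random actions happened to be followed by rewards.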


This strategy is the one that earned the algorithm the greatest number of rewards: the agent learns to favour whichever strategy was reinforced most often, which is why the technique is called Reinforcement Learning.
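The idea of keeping whichever strategy accumulated the most reward can be sketched with a two-armed bandit, a classic Reinforcement Learning illustration; the strategy names and payoff rates below are invented for the example.

```python
import random

def run(trials=5000, seed=1):
    rng = random.Random(seed)
    payoff = {"A": 0.3, "B": 0.7}     # hidden success rates of two candidate strategies
    total = {"A": 0.0, "B": 0.0}      # reward accumulated by each strategy
    pulls = {"A": 0, "B": 0}          # how often each strategy was tried
    for _ in range(trials):
        arm = rng.choice(["A", "B"])  # explore both strategies uniformly at random
        reward = 1.0 if rng.random() < payoff[arm] else 0.0
        total[arm] += reward
        pulls[arm] += 1
    # The "trained" agent keeps the strategy with the higher average reward.
    return max(total, key=lambda a: total[a] / pulls[a])
```

With enough trials, the strategy that was reinforced most often ("B" here) reliably wins out, even though the agent was never told its payoff rate in advance.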

The optimised strategy is scalable and robust: if the algorithm is exposed to something it doesn't understand, it learns from the encounter and is better prepared the next time it meets a similar circumstance.


Where these algorithms really differ from the engineering school of thought is in their transferable nature. As mentioned previously, first-generation AI systems were highly specific, and that specificity, in part, led to their brittleness. The algorithms built at Remi are, once trained, adept at multiple tasks - much the same way the mammalian brain, once it learns how to do something, is equally adept at hundreds, if not thousands, of tasks.