Different Approaches in AI Research

Artificial Intelligence has been researched for more than half a century. Back then, scientists believed they would be able to build a proper AI in just a decade or so, but they soon discovered that constructing an AI is far more complicated than they first thought. Over the decades, new approaches to AI research appeared, each basing its theories on different grounds.

Since Artificial Intelligence research is more than half a century old, it is useful to know the different “tribes” that differ in their approaches.

For decades AI research was dominated by Symbolists, who used symbolic, rule-based systems to develop AI. This approach holds that AI can be constructed from a set of simple rules. The main disadvantage of the Symbolists was their belief that an Artificial Intelligence system could be built solely by encoding it with a set of rules about our world, without any learning; this was soon shown to be a major flaw.
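To make the Symbolist idea concrete, here is a minimal sketch of a rule-based system: a set of facts, a set of if-then rules, and forward chaining that fires rules until no new facts appear. The facts and rules are made-up examples for illustration, not from any real expert system.

```python
# Hypothetical facts and rules for a toy forward-chaining inference engine.
facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers"}, "is_bird"),               # if it has feathers, it is a bird
    ({"is_bird", "lays_eggs"}, "builds_nest"),   # birds that lay eggs build nests
]

# Forward chaining: keep applying rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Everything the system "knows" must be written down by hand; nothing here is learned from data, which is exactly the limitation critics pointed out.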

Our world is not defined by a fixed set of rules, as we can see in everyday life. For instance, objects are hard to describe with a system of symbols: observing an object from a different angle changes its appearance drastically, so a purely rule-based AI will fail to recognize it unless it is viewed from the same angle. Simply put, reality is uncertain and vague, and an approach without learning proved to be a dead end for researchers.

Next, there are Evolutionists. This group bases its research on evolutionary processes. Phenomena like mutation and selection are solid cornerstones for research, but using evolutionary rules alone to construct an AI isn't the best possible solution. Evolutionists have made some advances in combination with Deep Learning, making progress toward AI construction.
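A toy evolutionary loop can show the core idea: a population of candidate solutions improves through selection and random mutation, with no hand-written rules about what a good solution looks like. This is a deliberately tiny sketch (evolving bit strings toward all ones); real evolutionary and neuroevolution systems are far more elaborate.

```python
import random

random.seed(0)
GENOME_LEN = 12

def fitness(genome):
    # Fitness is simply the number of 1-bits; the optimum is all ones.
    return sum(genome)

def mutate(genome, rate=0.1):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population of 20 bit strings.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(20)]

for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring                # next generation

best = max(population, key=fitness)
print(fitness(best))  # climbs toward 12 as generations pass
```

Because the top half always survives, the best fitness never decreases; mutation supplies the variation that selection then filters.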

We also have Bayesians, a group using probabilistic rules to draw inferences. They use Probabilistic Graphical Models (PGMs) in their research, with Monte Carlo methods for sampling from distributions as their primary computational mechanism. Probability-based research is similar to the Symbolist approach, but it can also represent uncertainty in its results.
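A small sketch of the Bayesian flavor: compute a posterior exactly with Bayes' rule, then cross-check it with a crude Monte Carlo simulation. The disease-testing numbers below are invented for illustration.

```python
import random

p_disease = 0.01             # prior probability of disease (assumed)
p_pos_given_sick = 0.95      # test sensitivity (assumed)
p_pos_given_healthy = 0.05   # false-positive rate (assumed)

# Exact posterior P(disease | positive test) via Bayes' rule.
p_pos = p_disease * p_pos_given_sick + (1 - p_disease) * p_pos_given_healthy
exact = p_disease * p_pos_given_sick / p_pos

# Monte Carlo estimate of the same quantity by simulating patients.
random.seed(42)
sick_and_pos = positives = 0
for _ in range(200_000):
    sick = random.random() < p_disease
    rate = p_pos_given_sick if sick else p_pos_given_healthy
    if random.random() < rate:
        positives += 1
        sick_and_pos += sick
estimate = sick_and_pos / positives

print(round(exact, 3))  # ≈ 0.161: most positives are false alarms
```

The exact answer and the sampled one agree closely, which is the point: when the model is too complicated for a closed-form answer, Bayesians fall back on sampling.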

Connectionists have appeared in many forms since the dawn of AI research, the latest being Deep Learning. They believe that intelligent behavior can arise from simple mechanisms, provided those mechanisms are highly interconnected. Think of the Butterfly Effect: a few simple interconnections can have huge consequences.
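The smallest Connectionist example is a single artificial neuron trained with the classic perceptron rule. Here it learns logical OR from examples; the behavior emerges from adjusted connection weights rather than hand-written rules. A toy sketch, far removed from modern Deep Learning, but the principle is the same.

```python
# Truth table for logical OR: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights in proportion to the error.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```

Deep Learning stacks millions of such units into layers, but each one is this simple.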

These are the most popular approaches, but there are many more, less popular ones. For instance, complexity theorists are an interesting group: they use physics-based methods as well as chaos theory and complexity theory.

Compressionists are interesting because they base their research on the idea that cognition and learning are forms of compression. The idea draws on Information Theory, a universal framework that some argue could be a guiding force of AI research, since it is more general than plain probabilistic statistics.
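The Information Theory connection can be shown in a few lines: Shannon entropy measures, in bits per symbol, how predictable a message is, and it lower-bounds how far the message can be losslessly compressed. A sketch, not a Compressionist learning system:

```python
import math
from collections import Counter

def entropy(text):
    # Shannon entropy in bits per symbol: sum of p * log2(1/p)
    # over the empirical symbol frequencies of the text.
    counts = Counter(text)
    n = len(text)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy("aaaaaaaa"))  # 0.0 — perfectly predictable, compresses to nothing
print(entropy("abababab"))  # 1.0 — one bit per symbol suffices
print(entropy("abcdabcd"))  # 2.0 — four equally likely symbols need two bits
```

The Compressionist intuition is that learning works the same way: finding regularities in data is exactly what lets you describe it more compactly.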

Image Source: www.shivonzilis.com

This was just a short list of some of the most popular AI research approaches. Presenting them all would be an impossible task; just look at the picture above, showing the machine intelligence landscape. We hope this list gave you some understanding of the complexity of the subject, though it may well have left you even more confused.
