AI is more than a buzzword, but right now, it’s closer to a sea slug than to an all-knowing machine.
Though the term was officially coined in the 1950s, Artificial Intelligence (AI) is a concept that dates back to ancient Egyptian automatons and early Greek myths of mechanical servants. Notable attempts to define AI include the 1956 Dartmouth conference and the Turing test, and passionate AI advocates persist in explaining the concept to the world in a way that is distinct and digestible.
AI is a topic of mystery, wonder, and seemingly endless possibilities. However, it remains elusive to the general public and is often portrayed negatively in predictions of its future. To combat the cycle of fear induced by Hollywood’s versions of AI, we need a clear understanding of what artificial intelligence actually is.
How to know if it’s AI
In its most complete and general form, an AI might have all the cognitive capabilities of humans, including the ability to learn. However, a machine is only required to have a minute fraction of these skills to qualify as an AI.
Artificial Intelligence is the trait of a machine, usually a computer program, to exhibit intelligent behavior. Intelligence, in this context, means the ability to achieve a goal under the varying circumstances or conditions that occur in the world. Correspondingly, within computer science, the domain of AI is the study of designing such intelligent systems.
Based on this technical definition, an AI doesn’t require the ability to learn. In the most extreme case, all intelligent behavior in the machine could be directly hard-coded into it by a programmer. The machine still conforms to the definition of AI as long as the preset algorithm allows it to achieve its objective. Many current-day AI systems are actually of this rule-based type, where engineers supply all the intelligence to the system.
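To make the idea concrete, here is a minimal sketch of such a rule-based “AI” — a hypothetical thermostat controller, invented for illustration, in which a programmer hard-codes every piece of the intelligence, yet the system still achieves its goal under varying conditions:

```python
# A hypothetical rule-based "AI": all intelligence is hard-coded by a
# programmer, yet the system achieves its goal (a comfortable room)
# under varying conditions. Nothing here is learned from data.

def thermostat(temperature_c: float) -> str:
    """Return a heating/cooling action for the current temperature."""
    if temperature_c < 18.0:
        return "heat"
    elif temperature_c > 24.0:
        return "cool"
    else:
        return "idle"

print(thermostat(15.0))  # heat
print(thermostat(21.0))  # idle
print(thermostat(30.0))  # cool
```

The thresholds (18 °C and 24 °C) are choices the engineer made up front; the machine cannot revise them from experience.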
Machine learning is the science of making machines exhibit intelligent behavior without explicitly being programmed to do so. Specifically, it provides systems with the ability to autonomously learn from data and improve without an engineer having to change its program code.
On a less technical level, you could say that AI is the goal and machine learning is one of the paths to get there — have the machine figure it out itself. In many cases, machine learning is concerned with learning and improving models using previously collected data. Using the data, the machine can make experience-driven predictions or decisions. By keeping its models up-to-date, the machine will autonomously learn to adapt to changing environments.
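The contrast with the rule-based approach can be sketched with a toy example. Here, assuming some made-up temperature data, the machine derives its behavior from previously collected examples (a one-nearest-neighbour lookup) rather than from rules an engineer wrote down:

```python
# A minimal sketch of learning from data, assuming toy examples:
# instead of hard-coded rules, the machine copies the label of the
# most similar past experience (a 1-nearest-neighbour classifier).

def predict(examples, query):
    """Label the query with the label of the closest known example."""
    closest = min(examples, key=lambda ex: abs(ex[0] - query))
    return closest[1]

# Previously collected data: (room temperature, action a human chose).
data = [(15.0, "heat"), (17.0, "heat"), (21.0, "idle"),
        (22.0, "idle"), (27.0, "cool"), (30.0, "cool")]

print(predict(data, 16.0))  # heat
print(predict(data, 23.0))  # idle
```

Appending new `(temperature, action)` pairs to `data` changes the machine’s future decisions without anyone touching its program code — the sense in which it “keeps its models up-to-date.”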
AI is not autonomously superior to humans
To clarify what AI isn’t capable of, we need to explain what it can do. While engineers can handcraft AI and supply all the intelligence, machine learning is increasingly important when creating AI systems. This is because machine learning promises to reduce manual engineering time while finding solutions unknown even to domain experts. However, in many cases, engineering time simply shifts from directly designing an AI to designing a machine learning algorithm that learns the solution itself. Human engineering is still very much needed.
At first glance, the above is the perfect solution. We create an AI capable of learning, show it how to learn the solution to a task, and subsequently, it will simply figure out a solution to any related problem, right? It would appear that big companies like Google, Microsoft, and Apple think so: they’re capitalizing on this intuitive expectation to persuade people that their AI systems will solve many of their customers’ problems. They’re investing heavily in AI and making big promises.
Over the last decade, learning systems have glamorously solved object recognition, speech recognition, speech synthesis, language translation, image creation, and gameplay. The algorithms’ abilities are advertised as groundbreaking, which they are. People without a deep technical background in machine learning often perceive machines’ improvements at specialized tasks like these as an AI’s rapidly growing set of combined abilities. This is not entirely true.
Every day, algorithms learn to solve new tasks and get better at others. Google DeepMind’s AlphaGo AI defeated Lee Sedol, one of the world’s best Go players. Upon learning this, a client with a background in engineering stated,
“We now have a general AI which has learned to outperform humans in Go — it could surely optimize the design of a car’s exhaust system.”
However, this reasoning is based on the assumption that once a machine learning algorithm has been developed to solve one problem, that same algorithm can be easily applied to solve a different problem. That’s not the case.
In reality, each of the above-mentioned breakthroughs was achieved by a highly specialized machine learning algorithm that took some of the smartest people on the planet years to develop. They were designed and fine-tuned with the specific goal of solving their specific task — and only that task.
There are some underlying methodologies, such as deep learning, that can be applied across various application domains. However, most applications require combining several machine learning methodologies. The resulting machine learning system needs to be tailored to fit the data from the specific application, and the training algorithms need to be tuned to find a high-performing solution. Each of those steps requires a machine learning expert (often more than one) supplemented by software engineers and domain experts.
It takes an army
AlphaGo was the result of a multi-year project with at least 17 people contributing to it, several of whom are leading experts in their respective fields of machine learning. According to third-party sources, AlphaGo is reported to have used 1920 CPUs and 280 GPUs during its match with Sedol.
Big AI companies have several teams of world-renowned machine learning experts paired with software engineers. In many cases, each team is dedicated to one specific application domain, with the goal of researching incremental improvements to the current best machine learning approach in that domain.
Modern AI is more like a sea slug than an omnipotent machine
Biology offers a good intuition of today’s AI capabilities. Biologists research ‘the mechanisms that cause an animal to change the way it responds to a particular circumstance after an experience alters the meaning of that circumstance’. Put in one word: learning.
A common research subject is the sea hare (a mollusc, or sea slug): specifically, scientists study the genes that define how its neurons fire. Depending on their genetic structure, two species of sea hares condition their behavior differently based on the same experience (i.e. the same data). Right now, machine learning operates at roughly this level: experts modify the program code of the learning algorithm (analogous to the gene code of the sea hare), changing its abilities and predispositions to adapt to various experiences. The developmental state of machine learning is probably closer to invertebrates, like the sea hare, than to the advanced cognitive abilities of mammals or humans.
During the past two years, researchers have started developing machine learning techniques that adapt autonomously to new tasks. However, these methodologies are only in their infancy. To put it in the words of a DeepMind scientist,
“recent work on memory, exploration, compositional representation, and processing architectures provides grounds for optimism.”
In other words, we have reason to believe that reaching the goal of a more general AI might be feasible.