The future of artificial intelligence

While future developments in artificial intelligence research are likely to yield useful industrial applications, in the medium term the technology may turn out to be less revolutionary than many expect.

During a Go contest in 2017, Google’s AlphaGo AI program defeated Ke Jie, the world’s top-ranking player, in all three matches. © Getty Images

In a nutshell

  • AI is still far behind human intelligence
  • There may be limits to what machines can learn
  • AI research could still bring significant benefits

Artificial intelligence (AI) can beat humans at complex tasks like chess and video games, but it still cannot reproduce behaviors that come naturally to people, like making small talk about the weather. 

In his 1994 book “The Language Instinct,” linguist and cognitive scientist Steven Pinker concluded: “The main lesson of thirty-five years of AI research is that the hard problems are easy, and the easy problems are hard.”

This paradox has led researchers to divide AI into two types: artificial general intelligence, or strong AI, and weak (or narrow) AI.

Intelligence and programming

Strong AI means the ability to learn any task that people can perform. In contrast, weak AI is not intended to have cognitive abilities; it is a program designed to solve a single problem, like computers that play chess.

There are already several approaches to programming AI, such as machine learning and its subfield deep learning, which relies on artificial neural networks. Programming, however, does not equal intelligence – it is only part of the input necessary to generate intelligence. Advances in programming have led to better and broader applications of weak AI. For example, Google’s AlphaGo is designed to play the board game Go using a mix of deep learning and statistical simulations. While it can outperform humans at the game, it cannot do any other task.
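
To make “statistical simulations” concrete, below is a minimal sketch of Monte Carlo move selection – estimating a move’s strength from random playouts – applied to a toy game of Nim. It illustrates the principle only; AlphaGo couples a far more sophisticated tree search with deep neural networks.

```python
import random

# Toy Nim: players alternately remove 1-3 stones; whoever takes the
# last stone wins. `playout` finishes a game with purely random moves.
def playout(stones, player):
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player        # taking the last stone wins
        player = 1 - player      # other player's turn

# Estimate each legal move's win rate from random playouts and pick the
# best -- the statistical core of Go programs, which systems like
# AlphaGo additionally guide with deep neural networks.
def best_move(stones, player, n_playouts=2000):
    def win_rate(take):
        remaining = stones - take
        if remaining == 0:
            return 1.0           # taking everything wins immediately
        wins = sum(playout(remaining, 1 - player) == player
                   for _ in range(n_playouts))
        return wins / n_playouts
    return max(range(1, min(3, stones) + 1), key=win_rate)

print(best_move(stones=10, player=0))  # usually 2, the optimal move
```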

Despite the growing sophistication of programming, strong AI is slow to develop. The bar for intelligence can be set at different levels: sentience, consciousness, acting as if understanding, or merely interacting. The lowest benchmark is coping with a task involving unforeseen parameters, like driving a car. And even by this measure, artificial general intelligence still has a long way to go.

Intuitive physics and psychology

The mechanics of driving are relatively easy: anticipating and executing maneuvers. However, the driver also needs two additional abilities: estimating physical interactions and predicting human movements. In other words, a driver needs intuitive physics and intuitive psychology.

Intuitive physics is the human capacity to understand how objects interact. Researchers believe that people have an instinctive knowledge of physics that allows them to navigate the world and make predictions even in entirely new situations. AI, however, cannot yet reproduce this behavior: various experiments have shown that it fails to predict and react to physical interactions within a 200-meter radius.
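
A minimal sketch of the underlying limitation: a model that merely interpolates familiar data, rather than representing physics, can fail badly in an entirely new situation. The falling-object task and all numbers below are invented for illustration.

```python
import numpy as np

# Invented task: predict how long an object takes to fall from height
# h. Ground truth (ignoring air resistance): t = sqrt(2h / g).
g = 9.81
train_h = np.linspace(1.0, 5.0, 50)        # "familiar" heights, 1-5 m
train_t = np.sqrt(2 * train_h / g)

# A straight-line fit matches the familiar range almost perfectly...
slope, intercept = np.polyfit(train_h, train_t, 1)

for h in (3.0, 50.0):                      # familiar vs. entirely new
    predicted = slope * h + intercept
    actual = (2 * h / g) ** 0.5
    print(f"h={h:4.0f} m  model: {predicted:5.2f} s  physics: {actual:5.2f} s")

# Close at h=3 m, far off at h=50 m: without a built-in notion of
# physics, the model can only interpolate what it has seen.
```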

A robot disinfects an ice rink at the Beijing 2022 Winter Olympics. Even if AI does not progress much beyond its current state, there could be benefits to replacing part of human labor with machines. © Getty Images

Intuitive psychology is the ability to gain insight into the motives of animate agents and to make predictions based on inferences. This does not mean reading minds, but rather understanding that other people have mental states like goals and beliefs. This intuition allows humans to anticipate other people’s actions and to plan their reactions. Here too, AI struggles; at this stage, many animals outperform it.
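
The kind of inference involved can be sketched as a toy Bayesian calculation: watch an agent move and update beliefs about which goal it is pursuing. The goals, positions and probabilities below are hypothetical.

```python
# Hypothetical scene: an agent on a line, with a door at position 0
# and a window at position 10. We observe it move 5 -> 4 -> 4 -> 3.
goals = {"door": 0, "window": 10}
positions = [5, 4, 4, 3]

def step_likelihood(prev, curr, goal):
    """Assume agents usually (p=0.8) step toward their goal."""
    toward = abs(goal - curr) < abs(goal - prev)
    return 0.8 if toward else 0.2

belief = {name: 1.0 for name in goals}          # uniform prior
for prev, curr in zip(positions, positions[1:]):
    for name, goal in goals.items():
        belief[name] *= step_likelihood(prev, curr, goal)

total = sum(belief.values())
for name, weight in belief.items():
    print(name, round(weight / total, 2))       # door 0.94, window 0.06
```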

A self-driving car requires both of these skills. It must judge whether it is safe to drive over an object on the road – whether the thing ahead is a plastic bag or a brick. And it must anticipate the intentions of human drivers – whether an approaching car is simply passing or is out of control and about to collide. Even in an environment where all drivers are automated, the problem of predicting physical interactions remains.
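
One hedged illustration of why the plastic-bag-or-brick judgment is hard: a planner cannot simply act on the most likely label; it must weigh the cost of being wrong. The classes, probabilities and cost numbers below are invented.

```python
# Hypothetical severity scores and probabilities, for illustration only.
HARM_IF_HIT = {"plastic_bag": 0.0, "brick": 1.0}

def should_brake(class_probs, brake_cost=0.1):
    """Brake when the expected harm of driving on exceeds braking's cost."""
    expected_harm = sum(p * HARM_IF_HIT[label]
                        for label, p in class_probs.items())
    return expected_harm > brake_cost

# Even a detector that is probably right can be forced to stop: a 20%
# chance of a brick outweighs the small cost of braking.
print(should_brake({"plastic_bag": 0.8, "brick": 0.2}))    # True
print(should_brake({"plastic_bag": 0.99, "brick": 0.01}))  # False
```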

Intuitive physics and psychology come easily to humans because they are acquired not through conscious learning but through adaptive behavior. It is difficult to teach AI similar skills because there is no learning “program” for them: researchers struggle to reproduce, for teaching purposes, the adaptive process that creates these abilities. And it is still unclear whether this is the right path – perhaps AI needs a different method to acquire these competencies.

Some analysts even claim that it is impossible for AI to learn these abilities because AI is not human and will therefore never be able to act and behave like people. AI, as its name indicates, is artificial. All it knows is, in principle, programmed by humans. And it is possible that part of human knowledge is simply not transferable to machines.


Scenarios

There are three basic scenarios for the medium-term development of AI. The first is a base scenario that can be combined with either of the other two.

The base scenario is that the development of weak AI will continue. Programming will lead to increasingly refined single-task feats. This type of AI can be used in manufacturing, healthcare, administration and some services, but its constraints are apparent. Once it is dedicated to a function, it cannot perform another. And it can only perform tasks with limited scope for unforeseen factors. Still, weak AI can free up human labor, perform tasks more precisely and potentially lower costs.

Building on the base scenario, one possibility is that strong AI develops along a concave curve: the first advances would bring larger gains than subsequent ones, with diminishing marginal returns on innovation over time. This scenario would confirm the claim that there are limits to what machines can learn. If there is a fundamental difference between human and nonhuman intelligence, then strong AI will develop only to a point. This is not necessarily bad news. While teaching computers, scientists could learn a great deal about human intelligence and discover new programming principles. These benefits could make it worth pursuing artificial general intelligence even if research falls short of creating strong AI. It could also lead to humans specializing in some areas and machines in others, which would bring welfare gains.

Strong AI could also develop along a convex curve: the first steps would be arduous, but later developments would accelerate. In this scenario, there would be no fundamental difference between human and machine intelligence, and therefore no limit to what strong AI can learn. Coming up with the first building blocks of artificial general intelligence is difficult because it involves discovering new programming principles and new hardware. But once those are in place, innovation could speed up. During the Industrial Revolution, building the first machines and connecting them to a power grid was difficult. But once the technology became widespread, new inventions multiplied. In this scenario, strong AI could usher in a new way of organizing human life – most likely for the better.
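
The difference between the two scenarios can be made concrete with made-up functional forms (illustrations, not forecasts): on a concave curve each research step adds less capability, on a convex curve each step adds more.

```python
# Made-up functional forms, not forecasts: "capability" as a function
# of research time t under the two scenarios.
def concave(t):      # diminishing returns: each step adds less
    return t ** 0.5

def convex(t):       # accelerating returns: each step adds more
    return t ** 2 / 100

for t in (2, 4, 6, 8):
    print(f"t={t}: concave={concave(t):.2f}  convex={convex(t):.2f}")
# concave: 1.41, 2.00, 2.45, 2.83 -- increments shrink
# convex:  0.04, 0.16, 0.36, 0.64 -- increments grow
```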

Hypothetically, research in AI could implode because of government intervention or spiraling costs. This is, however, extremely unlikely. Even if governments curb AI research, the potential rewards are high enough to attract resources.

It is also extremely unlikely that “the singularity” would occur, with artificial agents becoming more intelligent than humans and ultimately taking over. Even if artificial intelligence were to surpass human capabilities, machines would not necessarily conquer the world. Intelligence is only one aspect of life: some people are more intelligent than others, yet intelligence alone does not determine the social order.

Overall, AI and its implementations hold much promise. However, its proponents have been overpromising. Weak AI is the frontier of innovation at the moment, and the technology has more benefits than its name implies. Strong AI is more science fiction than reality for now. It is not even clear whether it can ever become reality. But this, too, is not bad news. Researching strong AI can still lead to many beneficial innovations.
