The Limits of AI Are Demonstrated by Mathematical Paradoxes

Humans are usually quite adept at recognizing their own mistakes, but artificial intelligence systems are not. According to a new study, AI suffers from inherent limitations as a result of a century-old mathematical paradox.

AI systems, like some people, often display a level of confidence that far exceeds their actual ability. And, like an overconfident person, many AI systems do not know when they are making errors. In some cases, it is even harder for an AI system to recognize that it is making a mistake than it is to produce the correct output.
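The gap between confidence and competence is easy to reproduce. As a minimal sketch (not taken from the study; the model and numbers are purely illustrative), the Python snippet below feeds pure noise to an untrained softmax classifier, which nonetheless reports a near-certain prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Turn raw scores into a probability distribution over classes."""
    e = np.exp(z - z.max())
    return e / e.sum()

# A toy 10-class "classifier": one random linear layer that has learned
# nothing at all, yet still produces confident-looking outputs.
W = rng.normal(size=(10, 64))

noise = rng.normal(size=64)            # pure noise, not a meaningful input
probs = softmax(W @ noise)

print(f"predicted class: {probs.argmax()}, confidence: {probs.max():.2f}")
# The reported "confidence" is typically close to 1.0 even though the
# input is garbage, so the score alone cannot reveal a mistake.
```

The point is that the confidence a network reports is just another function of its inputs, not an independent check on whether the answer is right.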

According to researchers from the University of Cambridge and the University of Oslo, instability is modern AI’s Achilles’ heel, and a mathematical paradox demonstrates AI’s limitations. Neural networks, the state-of-the-art tool in AI, loosely mimic the links between neurons in the brain. The researchers show that there are problems for which stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
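The kind of instability the researchers have in mind can be illustrated on even the simplest possible “network”. The sketch below is a standard construction, not the paper’s, and the weights and input are made up: for a linear classifier, a small perturbation aimed along the weight vector is always enough to flip the decision.

```python
import numpy as np

# A fixed linear "network": predict class 1 when w . x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.2, 0.05, 0.1])   # an input the model classifies as 1
margin = w @ x + b               # signed score: distance from the boundary

# The smallest-norm perturbation that crosses the decision boundary
# points along -w; scale it to just overshoot the boundary.
eps = 1.01 * margin / (w @ w)
x_adv = x - eps * w

print(predict(x), predict(x_adv))   # 1 0: the decision flips
print(np.linalg.norm(x_adv - x))    # for a perturbation of norm ~0.11
```

Deeper networks inherit the same sensitivity, which is one reason adversarial examples exist; the paper’s deeper point is that for some problems this instability cannot be trained away by any algorithm.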

The researchers propose a classification theory describing when, under particular conditions, neural networks can be trained to provide a trustworthy AI system. Their findings are published in the Proceedings of the National Academy of Sciences.

Deep learning, the leading AI technique for pattern recognition, has received a great deal of attention recently, with examples including diagnosing disease more accurately than physicians and preventing traffic accidents through autonomous driving. Yet many deep learning systems are untrustworthy and easy to fool.

“Many AI systems are unstable, and this is becoming a huge issue, particularly as they are being utilized in high-risk sectors like disease detection or autonomous vehicles,” said co-author Professor Anders Hansen of Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are employed in areas where they have the potential to cause significant harm if they go wrong, trust in those systems must be prioritized.”

The researchers traced the paradox back to two twentieth-century mathematical giants: Alan Turing and Kurt Gödel. At the turn of the twentieth century, mathematicians attempted to establish mathematics as the ultimate consistent language of science. Turing and Gödel, however, exposed a fundamental problem at its heart: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be solved with algorithms. Moreover, whenever a mathematical system is rich enough to describe the arithmetic we learn in school, it cannot prove its own consistency.
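Turing’s half of the argument can be sketched in a few lines of Python. Suppose, hypothetically, that a halting oracle halts(program) existed; one could then construct a program that contradicts whatever the oracle says about it. The always-answers-“halts” candidate below merely stands in for any proposed oracle:

```python
def make_diagonal(halts):
    """Given any claimed halting oracle, build a program it must misjudge."""
    def g():
        if halts(g):        # the oracle says g halts...
            while True:     # ...so g loops forever, refuting the oracle
                pass
        else:               # the oracle says g loops forever...
            return          # ...so g halts immediately, refuting it again
    return g

# Any concrete candidate (here: an oracle that always answers "halts")
# is defeated by its own diagonal program.
def candidate(program):
    return True

g = make_diagonal(candidate)
print("oracle's verdict on g:", candidate(g))  # True, i.e. "g halts"
# But by construction g loops forever whenever the verdict is True,
# so the verdict is wrong; the same trap catches every candidate oracle.
```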

Decades later, mathematician Steve Smale drew up a list of 18 unsolved mathematical problems for the twenty-first century. The 18th problem concerned the limits of intelligence in both humans and machines.

Mathematical paradoxes demonstrate the limits of AI

“The paradox first identified by Turing and Gödel has now been carried forward into the world of AI by Smale and others,” said co-author Dr. Matthew Colbrook, also of the Department of Applied Mathematics and Theoretical Physics. “There are fundamental limits inherent in mathematics, and similarly, AI algorithms cannot exist for certain problems.”

According to the researchers, this paradox means there are cases in which good neural networks exist, yet an inherently trustworthy one can never be built. “No matter how accurate your data is, you can never get the perfect information to build the required neural network,” said co-author Dr. Vegard Antun of the University of Oslo.

Moreover, there are cases where a good neural network exists yet can never be computed, regardless of the amount of training data: no algorithm, however much data it can access, will produce the desired network. “This is analogous to Turing’s argument: there are computational problems that cannot be solved regardless of processing power or runtime,” Hansen explained.

The researchers stress that not all AI is inherently flawed; rather, it is only reliable in specific areas, using specific methods. “The issue is in areas where you need a guarantee, because many AI systems are a black box,” said Colbrook. “It’s completely fine in some situations for an AI to make mistakes, but it needs to be honest about it. And that’s not what we’re seeing for many systems – there’s no way of knowing when they’re more confident or less confident about a decision.”

“At the moment, AI systems can have a touch of guesswork to them,” Hansen remarked. “You try something, and if it doesn’t work, you add more in the hope that it will. At some point you get tired of not getting what you want, and you try a different method. It is critical to understand the limitations of different approaches. We have reached a point where AI’s practical successes have far outpaced theory and understanding. A program on understanding the foundations of AI computing is needed to bridge this gap.”

“When 20th-century mathematicians identified different paradoxes, they didn’t stop studying mathematics. They just had to find new paths, because they understood the limitations,” said Colbrook. “For AI, it may be a case of changing paths or developing new ones to build systems that can solve problems in a trustworthy and transparent way, while understanding their limitations.”

The researchers’ next step is to combine approximation theory, numerical analysis, and the foundations of computation to determine which neural networks can be computed by algorithms, and which can be made stable and trustworthy. Just as the paradoxes of Gödel and Turing about the limits of mathematics and computation gave rise to rich foundational theories describing both the limitations and the possibilities of mathematics and computing, perhaps a similar theory of foundations will blossom in AI.