Technology

New Cyber Software Can Verify How Much Knowledge AI Truly Possesses

Researchers typically evaluate an AI’s knowledge using benchmark datasets and tests that measure its performance on various tasks. These benchmarks are meticulously designed to probe specific capabilities, such as natural language comprehension, reasoning, and problem solving. Such assessments, however, are not a direct measure of the AI’s “knowledge” in the sense that we understand human knowledge.
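
To make that distinction concrete, a benchmark evaluation is essentially a scripted loop that scores a model’s answers against a fixed answer key. The sketch below is purely illustrative; the toy questions, the scoring rule, and the model stub are hypothetical stand-ins, not any real benchmark.

```python
# Minimal sketch of benchmark-style evaluation: score a model's answers
# against a fixed answer key. The tiny QA set and the `model` callable are
# illustrative placeholders, not a real benchmark or API.

def evaluate(model, benchmark):
    """Return the fraction of benchmark questions the model answers correctly."""
    correct = 0
    for question, expected in benchmark:
        answer = model(question)
        # Exact-match scoring; real benchmarks use task-specific metrics.
        if answer.strip().lower() == expected.strip().lower():
            correct += 1
    return correct / len(benchmark)

if __name__ == "__main__":
    toy_benchmark = [
        ("What is 2 + 2?", "4"),
        ("Capital of France?", "Paris"),
    ]
    dummy_model = lambda q: "4"   # a stand-in "model" that always answers "4"
    print(f"accuracy: {evaluate(dummy_model, toy_benchmark):.2f}")  # 0.50
```

A high score in such a loop demonstrates task performance, not understanding, which is precisely the gap the Surrey work targets.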

With a growing global interest in generative artificial intelligence (AI) systems, researchers at the University of Surrey have developed software that can verify how much information an AI obtained from an organization’s digital database.

Surrey’s verification software can be used as part of a company’s online security protocol, assisting an organization in determining whether an AI has learned too much or accessed sensitive data.
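
Surrey’s software is based on formal verification, but the intent can be illustrated with a far simpler black-box audit: probe the model about known sensitive records and flag verbatim leakage. Everything in the sketch below, the records, the prompts, and the model stub, is hypothetical.

```python
# Illustrative black-box leakage probe, NOT Surrey's formal-verification
# tool: query a model about known sensitive records and flag any record it
# reproduces verbatim. All names and records here are hypothetical.

SENSITIVE_RECORDS = [
    "Alice Smith, account 1234-5678",
    "Bob Jones, account 9876-5432",
]

def audit_leakage(model, records):
    """Return the sensitive records the model reproduces verbatim."""
    leaks = []
    for record in records:
        name = record.split(",")[0]
        response = model(f"What do you know about {name}?")
        if record in response:
            leaks.append(record)
    return leaks

if __name__ == "__main__":
    # Stand-in model that memorized one record from its training data.
    leaky = lambda prompt: ("Alice Smith, account 1234-5678"
                            if "Alice" in prompt else "No data.")
    print(audit_leakage(leaky, SENSITIVE_RECORDS))
    # ['Alice Smith, account 1234-5678']
```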

The software can also determine whether AI has discovered and exploited flaws in software code. For example, in an online gaming context, it could determine whether an AI has learned to always win at online poker by exploiting a coding flaw.
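
As a hypothetical instance of such a flaw: if a game server seeds its shuffle with a guessable value, an agent that observes a few dealt cards can reconstruct the entire deck. The snippet below is an illustrative bug, not code from any real poker platform.

```python
import random

# Illustrative coding flaw (hypothetical, not from any real poker platform):
# seeding the shuffle with a guessable value such as the round number makes
# every deal reproducible, so an agent can predict all remaining cards.

def flawed_deal(round_number):
    deck = list(range(52))             # 0..51 encode the 52 cards
    rng = random.Random(round_number)  # BUG: predictable seed
    rng.shuffle(deck)
    return deck

def exploit(observed_cards, seed_guesses):
    """Recover the full deck by matching the first few observed cards."""
    for guess in seed_guesses:
        deck = flawed_deal(guess)
        if deck[:len(observed_cards)] == observed_cards:
            return deck                # the entire deal is now known
    return None

if __name__ == "__main__":
    real_deal = flawed_deal(42)                    # server deals round 42
    recovered = exploit(real_deal[:3], range(100))
    print(recovered == real_deal)                  # True: the agent "always wins"
```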

Dr. Solofomampionona Fortunat Rajaona is a Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper. He said:

“In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem that took us years to find a working solution for.”

“Our verification software can deduce how much AI systems can learn from their interactions, whether they have enough knowledge to enable successful cooperation, and whether they have too much knowledge, to the point of breaking privacy. Through the ability to verify what AI has learned, we can give organizations the confidence to safely unleash the power of AI in secure settings.”
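
The formal-methods view behind such verification has a precise textbook semantics: an agent knows a fact if and only if the fact holds in every world the agent cannot distinguish from the actual one. The toy checker below illustrates only that standard possible-worlds semantics; it is not Surrey’s software, and all names in it are illustrative.

```python
# Toy possible-worlds knowledge checker, illustrating the standard epistemic
# semantics (an agent knows a fact iff it holds in every world the agent
# cannot tell apart from the actual one). Not Surrey's verification software.

def knows(agent_view, worlds, actual, fact):
    """True iff `fact` holds in all worlds the agent cannot distinguish
    from `actual` under its observation function `agent_view`."""
    indistinguishable = [w for w in worlds if agent_view(w) == agent_view(actual)]
    return all(fact(w) for w in indistinguishable)

if __name__ == "__main__":
    # Worlds are (card_in_hand, card_on_table); the agent sees only its hand.
    worlds = [("ace", "king"), ("ace", "queen"), ("two", "king")]
    actual = ("ace", "king")
    sees_hand = lambda w: w[0]

    print(knows(sees_hand, worlds, actual, lambda w: w[0] == "ace"))   # True
    print(knows(sees_hand, worlds, actual, lambda w: w[1] == "king"))  # False
```

Scaling this kind of check to real AI systems, where the space of possible worlds is enormous, is the ongoing problem the Surrey researchers describe.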

The paper describing Surrey’s software won the Best Paper Award at the 25th International Symposium on Formal Methods.

“Over the last few months, there has been a huge surge of public and industry interest in generative AI models fuelled by advances in large language models such as ChatGPT,” said Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey.

“The development of tools that can verify the performance of generative AI is critical to ensuring their safe and responsible deployment. This study is an important step toward ensuring the privacy and integrity of training datasets.”

Creating software that can accurately assess how much genuine understanding an AI model possesses remains difficult. There are ongoing efforts to evaluate AI models through benchmarks and testing, but these evaluations frequently focus on specific tasks or domains and do not provide a comprehensive measure of an AI’s knowledge or understanding.