Sundar Pichai, the CEO of Alphabet, agreed to develop a voluntary “AI Pact” and spoke with top European Union officials on Wednesday (May 24, 2023) about election-related misinformation and Russia’s war in Ukraine.
During a meeting with Thierry Breton, the European commissioner for the internal market, Pichai said that Alphabet-owned Google would work with other companies on self-regulation to ensure that AI products and services are developed responsibly.
“Agreed with Google CEO @SundarPichai to work together with all major European and non-European #AI actors to already develop an “AI Pact” on a voluntary basis ahead of the legal deadline of the AI regulation,” Breton said in a tweet Wednesday afternoon.
“We expect technology in Europe to respect all of our rules, on data protection, online safety, and artificial intelligence. In Europe, it’s not pick and choose. I am pleased that @SundarPichai recognises this, and that he is committed to complying with all EU rules.”
The development indicates how leading technology executives are attempting to appease lawmakers and get ahead of impending legislation. Earlier this month, the European Parliament greenlighted a groundbreaking package of rules for AI, including provisions to ensure that the training data for tools like ChatGPT doesn’t violate copyright law.
The rules take a risk-based approach to AI: uses judged to pose an “unacceptable risk,” such as real-time facial recognition in public spaces, would be banned; “high risk” applications would face strict obligations; and “limited risk” uses would be subject to transparency requirements.
Regulators are growing increasingly concerned about some of the risks surrounding AI, and tech industry leaders, politicians and academics have raised alarms about recent advances in generative AI and the large language models that power it.
These tools allow users to generate new content, such as a poem in the style of William Wordsworth or a polished essay, simply by entering text prompts describing what they want.
They have sparked worry, not least because of the possibility of labor market disruption and their capacity for spreading misinformation.
ChatGPT, the most popular generative AI tool, has amassed more than 100 million users since it launched in November 2022. Google released Bard, its alternative to ChatGPT, in March, and unveiled an advanced new language model known as PaLM 2 earlier this month.
During a separate meeting with Vera Jourova, a vice president of the European Commission, Pichai committed to ensuring that Google’s AI products are developed with safety in mind.
Both Pichai and Jourova “agreed AI could have an impact on disinformation tools, and that everyone should be prepared for a new wave of AI generated threats,” according to a readout of the meeting that was shared with CNBC.
“Part of the efforts could go into marking or making transparent AI generated content. Mr. Pichai stressed that Google’s AI models already include safeguards, and that the company continues investing in this space to ensure a safe rollout of the new products.”
Tackling Russian propaganda
Pichai’s meeting with Jourova also focused on disinformation around Russia’s war on Ukraine and elections, according to a statement.
Jourova “shared her concern about the spread of pro-Kremlin war propaganda and disinformation, also on Google’s products and services,” according to a readout of the meeting. The EU official also discussed access to information in Russia.
Jourova asked Pichai to take “swift action” on the issues faced by independent Russian media outlets that are unable to monetize their content on YouTube in Russia. Pichai agreed to follow up on the issue, according to the readout.
In addition, Jourova “highlighted risks of disinformation for electoral processes in the EU and its Member States.”
The next elections for European Parliament will take place in 2024. There are also regional and national elections across the region this year and next.
Jourova praised Google’s “engagement” with the bloc’s Code of Practice on Disinformation, a self-regulatory framework released in 2018 and since revised, aimed at spurring online platforms to tackle false information. She added, though, that “more work is needed to improve reporting” under the framework.
Signatories of the code are required to report how they have implemented measures to tackle disinformation.