Large language models like ChatGPT can generate blocks of text that read as if they were written by a person. Companies like Microsoft, Snap, and Shopify are vying to integrate ChatGPT into their applications. But if Apple decides to bar ChatGPT-based apps from its App Store, the only place to get software for an iPhone, the trend could be put on hold.
Blix, an email app maker that has regularly clashed with Apple over its App Store rules, says it ran into that hurdle this week.
According to co-founder Ben Volach, quoted in the Wall Street Journal, Apple rejected an update to the BlueMail app because it included ChatGPT to assist with email writing but lacked content filtering for the chatbot’s output. Volach has also claimed on Twitter that Apple is “blocking” an AI update.
Apple said that without content filtering, the BlueMail chatbot could produce language that isn’t appropriate for children, and the email app would have to raise its recommended age rating to 17 and up, according to the report.
Apple is investigating and the developers can appeal the decision, a spokesperson told CNBC.
Still, the BlueMail episode isn’t a sign of an impending Apple crackdown on AI apps.
In fact, the Microsoft Bing app and Snapchat both include ChatGPT-powered features and are already available through the App Store. Other AI apps, like Lensa, have also been distributed successfully via the App Store.
Apple’s App Store Review Guidelines, the document outlining what Apple allows on the App Store, have no official AI or chatbot policy. Before approving any app or update, Apple has personnel in a division called App Review load it up and give it a quick test run.
Apple could add AI-specific guidelines in the future. With cryptocurrency, for example, Apple explicitly introduced a section in a 2018 update to the guidelines that allowed wallet apps and banned on-device mining. It introduced new rules about NFTs last year. The company often releases updates to its guidelines in June and October.
But the BlueMail episode does show that Apple’s App Store is strict about content generated at massive scale, whether by users (in social media apps, for example) or, more recently, by AI.
If an app can display content that infringes intellectual property, or messages that amount to cyberbullying, for example, then the app must have a way to filter that material and a way for users to report it, Apple says.
The content moderation rule was likely at the heart of a skirmish with Elon Musk’s Twitter late last year, and it was the reason Apple booted Parler from the App Store in 2021. Apple let Parler back on the App Store after it added content moderation.
Before it arrived on the iPhone in the Bing app, the ChatGPT-based AI in Bing produced some creepy conversations, including threats against users and pleas for help.
But Bing does have content moderation and filtering tools built in. Microsoft’s AI allows users to downvote harmful responses, and it includes a “safety system” with content filtering and abuse detection.
Microsoft has made changes to its Bing chatbot in recent weeks to curb those unsettling discussions; as a result, the chatbot now frequently avoids engaging in subjects that can send it off course.