ChatGPT enters the Apple App Store policy battleground

The news: Apple blocked an update to BlueMail, an email app developed by Blix, over its new ChatGPT-powered features.

  • The app uses OpenAI’s chatbot to automate email writing, drawing on users’ previous emails and calendar events to make its drafts sound more natural, per The Wall Street Journal.
  • Apple is concerned that the bot-based app could generate content inappropriate for children and wants either a 17+ age restriction or content filtering added. As of last week, the app carried a 4+ minimum age rating.
  • Microsoft’s updated version of Bing AI for mobile carries a 17+ age restriction on the App Store.

Rocky road ahead: The App Store has long been a battleground pitting Apple’s policies against tech companies’ desire for consumer reach. The latest disagreement over BlueMail’s ChatGPT integration is a sign that tensions may worsen with the advent of commercial generative AI.

  • Apple’s wariness of the tech is likely heightened by suggestions during the Supreme Court’s Section 230 deliberations that tech companies could be held liable for AI-generated content, just as the FTC threatens its own intervention.
  • Microsoft has been tweaking features and policies for Bing AI following reports of the bot’s disturbing responses.
  • The wide gap between what’s appropriate content for 4-year-olds and 17-year-olds means there’s room for nuance in content-filtering safeguards.

There’s a solution and a catch: With the technology still in its experimental phase, we’ll likely continue to see volatility in its commercial deployment.

  • Microsoft’s latest feature, which lets users switch between chatbot response styles, could be adapted for youth age categories to make the tech safer for minors.
  • As generative AI models become more advanced, their capacity for spontaneous learning means that they may continue to go off-script despite safety features.
  • What’s more, a study found that for $60, a bad actor could potentially make tiny changes to datasets used to train AI models.
  • Such tampering could go undetected yet have serious consequences for generative AI output, increasing companies’ liability risks.

This article originally appeared in Insider Intelligence's Connectivity & Tech Briefing, a daily recap of top stories reshaping the technology industry.