Google aims to beat OpenAI with chat and search bot features

The news: Google is testing generative AI prototypes for launches this year.

  • As part of its “code red” response to OpenAI’s ChatGPT, the tech giant’s Atlas project has a chatbot called Apprentice Bard that it’s testing in a Q&A format with employees, per Insider.
  • Apprentice Bard is based on Google’s LaMDA model, which a former company engineer thought was sentient.
  • The chatbot resembles ChatGPT but, unlike OpenAI’s product, can incorporate current events into its answers.
  • Meanwhile, another product unit is testing a search tool that would replace the “I’m Feeling Lucky” bar with five potential prompt options and return chatbot-style responses to queries.
  • Alongside more human-like responses, the tool would include suggested follow-up questions and the typical link-based results.

A product-development tightrope: There’s no release date for these products, and a Google spokesperson said the company wants to ensure the tech is helpful and safe before releasing it externally.

  • That caution is mixed with urgency over Microsoft’s plans to integrate AI into Bing: an internal memo instructed the LaMDA team to prioritize Atlas over other projects.
  • As the second mover, Google must balance safety concerns against the pressure to ship a product that can outdo ChatGPT.
  • But a formidable rival isn’t Google’s only challenge. The high compute costs of running generative AI models, coupled with languishing ad revenue, mean it’ll have to devise a more robust monetization strategy for its AI.

One potentially profitable pathway is designing more specialized generative AI tools.

  • OpenAI, for example, released an “imperfect” tool that detects machine-generated text, aimed at curbing plagiarism and cheating among students using ChatGPT.
  • With the education system scrambling to adapt to the upheaval generative AI has caused, there’s an opportunity for tech companies to build AI-powered edtech products that augment, rather than undermine, the learning process.

Atlas shrugged: Products like Google’s and OpenAI’s mark a watershed moment for AI, one that risks turbulence for society, including the further spread of misinformation, amplified bias, and disruption to education. We’re seeing the technology positioned as at least a partial replacement for human intelligence, and it might not be up to the task.

  • Despite a dearth of AI regulation, tech companies should watch out for their products running afoul of existing laws, such as copyright and anti-discrimination statutes.
  • Even if AI chatbots become less error-prone and more naturally conversant, human-like intelligence isn’t human intelligence.
  • We might see more advanced chatbots give responses that sound plausible and coherent but are ultimately incomplete or false.
  • Vast amounts of internet-scraped data used to train AI models can’t compete with the immeasurable data behind human history and evolution.

This article originally appeared in Insider Intelligence's Connectivity & Tech Briefing—a daily recap of top stories reshaping the technology industry.