Google’s PaLM-SayCan research might make cobots truly collaborative

The news: Everyday Robots, a Google subsidiary, is working with Google researchers to develop a robot that understands human desires.

  • One of its “moonshot” projects, PaLM-SayCan, is deployed in a prototype that combines AI chatbot and robotic technologies, per SiliconAngle.
  • The bot is helping out in a micro-kitchen at Google’s campus, grabbing snacks for workers, throwing away trash, and helping clean up spills.
  • PaLM, or Pathways Language Model, is Google’s advanced natural language processing (NLP) model, which the company claims has raised the robots’ success rate in responding to voice commands from 61% to 74%, per Reuters.

How it’s different: Despite the proliferation of chatbots, automation, and robotics in industry, a machine that combines voice command technology with the ability to perform general tasks safely and effectively remains elusive.

  • Google’s PaLM system helps the prototype bots respond to human requests by interpreting speech with a language model trained on data from human interactions, the internet, and books.
  • It then relies on Google’s SayCan technology to decide on an appropriate response to the request, even when it hasn’t been explicitly programmed for that task (a simplified sketch of this step follows the list below).
  • For example, when a researcher told the robot, “I’m hungry,” it responded by fetching a bag of chips, a reaction it derived from its language training rather than from explicit programming.
  • Although the example seems simple, getting AI bots to respond to what humans mean instead of what they say is challenging.
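For readers curious about the mechanics, the decision step can be pictured as a simple scoring loop: the language model rates how useful each candidate skill would be for the request (“say”), an affordance model rates how feasible that skill is in the robot’s current state (“can”), and the robot runs the skill with the highest combined score. The snippet below is a minimal, hypothetical sketch of that idea; the skill list, scoring functions, and numbers are illustrative stand-ins, not Google’s code or API.

```python
# Hypothetical sketch of a SayCan-style "say" x "can" scoring loop (not Google's code).

def choose_skill(instruction, skills, llm_score, affordance_score, state):
    """Return the skill that maximizes llm_score * affordance_score."""
    best_skill, best_score = None, float("-inf")
    for skill in skills:
        combined = llm_score(instruction, skill) * affordance_score(skill, state)
        if combined > best_score:
            best_skill, best_score = skill, combined
    return best_skill

# Toy stand-ins for the two models, just to make the sketch runnable.
skills = ["find a bag of chips", "bring the chips to the person", "wipe up the spill"]
toy_llm = lambda instr, skill: 0.9 if "hungry" in instr and "chips" in skill else 0.1
toy_affordance = lambda skill, state: 0.8  # pretend every skill is feasible right now

print(choose_skill("I'm hungry", skills, toy_llm, toy_affordance, state=None))
# -> find a bag of chips
```

In the real system, those two scores come from PaLM and from value functions trained on the robot’s individual skills, which is what lets an open-ended request like “I’m hungry” resolve to a concrete, currently doable action.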

Why it’s worth watching: Google’s PaLM-SayCan endeavor has the potential to solve AI’s King Midas problem. The essence of the problem is the danger of machines carrying out narrowly defined objectives without understanding the greater human context.

  • A classic thought experiment involves a machine programmed to manufacture paper clips so single-mindedly that it drains the world’s resources in pursuit of that one objective.
  • By helping robots understand what humans mean by their objectives, technologies like Google’s PaLM-SayCan could help avert disastrous outcomes.
  • Though there’s no timeline for commercial deployment, the technology underlying the prototype could be a game changer for collaborative robots, or cobots, enabling them to work alongside humans more safely and effectively.

The pitfall: Merely understanding what humans mean won’t automatically make robots consider broader human values.

  • Meta’s recently released BlenderBot 3, which is prone to making offensive, false, and contradictory statements, illustrates the risk of training AI systems on raw internet data.
  • Rather than being let loose online, robots, like children, should be given a sound educational foundation built on vetted information presented in context.
  • As chips aren’t necessarily the best way to satisfy hunger, researchers could also fine-tune bots to ask clarifying questions and get permission before taking action.

This article originally appeared in Insider Intelligence's Connectivity & Tech Briefing, a daily recap of top stories reshaping the technology industry.
