The trend: Calls for AI regulation are getting louder as the technology’s use expands.
- OpenAI CTO Mira Murati, who helped build ChatGPT, acknowledged that the chatbot isn’t always accurate and also called for government oversight of the technology.
- EU industry chief Thierry Breton is pushing for the passage of strict rules governing AI under the proposed AI Act.
- Beena Ammanath, executive director of the Global Deloitte AI Institute, has warned that a generative AI arms race, like the one between Microsoft and Google, could have “unintended consequences.”
- Ammanath also said that companies’ cavalier AI deployment is like “building Jurassic Park, putting some danger signs on fences, but leaving all the gates open.”
Use only as directed: OpenAI co-founder Sam Altman warned after the chatbot’s release last year that “it’s a mistake to be relying on it for anything important right now.” The problem is that the directive has been widely ignored.
- Last month, a judge in Colombia used ChatGPT to inform a ruling on a case involving health insurance coverage for a child with autism.
- The chatbot has an estimated IQ of 83, well below the US average of 97.43 reported by the Ulster Institute for Social Research, yet it is being used to draft scientific reports, craft legislation, and write code, among other uses in business and education.
- Users are circumventing OpenAI’s safeguards, getting the bot to say positive things about drug abuse and to give advice on how to smuggle drugs into Europe.
Moving fast and breaking things? AI chatbots don’t think. Their algorithms predict plausible sequences of words, stringing together information scraped from the web in creative ways. The result is interesting content that’s also laden with inaccuracies and bias.
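To see why fluent output isn’t the same as accurate output, consider a deliberately tiny sketch. This is not OpenAI’s architecture (ChatGPT is a large neural network, not a Markov chain); it’s a toy bigram model that “writes” by chaining words it has seen together, with no notion of whether the result is true. The corpus and function names here are invented for illustration.

```python
import random

# Toy training text (invented for illustration).
corpus = (
    "the chatbot writes fluent text "
    "the chatbot writes code "
    "fluent text is not always accurate "
    "code is not always correct"
).split()

# Map each word to the words that followed it in the training text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Chain words purely by 'what came next in the data' -- no reasoning,
    no fact-checking, just statistically plausible continuations."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every sentence this produces is locally fluent, because each word really did follow the previous one somewhere in the data; whether the whole claim is correct is never checked. Real chatbots are vastly more sophisticated, but the gap between plausibility and truth is the same.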