The trend: ChatGPT is one of many powerful AI systems released this year that are raising questions about the future of work, companies, and the ethics of commercial AI.
- The stir prompted Morgan Stanley to issue a report saying that the technology could “disrupt Google’s position as an entry point for people on the internet,” but that Google is still in a strong position with its Search upgrades and similar AI products on the way.
- The ongoing tech recession has weighed on the valuations and revenue growth of Big Tech companies, including Google, this year.
- Against that backdrop, concerned Google employees wonder whether commercializing systems like ChatGPT could help shore up the tech giant’s financial position.
- The technology is expected to transform a slew of creative occupations, as well as the work of SEO specialists and content marketers, who could use it to improve and speed up their work.
The formidable balancing act: Google faces greater reputational risk over product failures than a startup does, but there are also legal risks that should concern any tech company building AI products.
- AI-powered search tools provide authoritative-sounding answers to complicated questions without citing sources, as the sketch after this list illustrates.
- That means the bots could run afoul of legal and ethical standards if they give false or misleading answers on medical questions or other topics with public-safety implications.
- Impressive as they are, such bots have repeatedly shown that they will present fiction as fact and produce biased or offensive responses.
- Some point out that people make similar mistakes, but the difference is that our legal system is equipped to adjudicate human wrongdoing and hasn’t developed a framework for addressing generative AI’s culpability.
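The sketch below makes the citation problem concrete. It assumes the `openai` Python package (pre-1.0) and its `Completion` endpoint with the `text-davinci-003` model, which was the public API available when ChatGPT launched; the question and prompt are hypothetical. The point is only that the answer comes back as free-form text with no sources attached for a reader to verify.

```python
# A minimal sketch (not any vendor's production system) of how an AI-powered
# answer tool might query a large language model. Assumes the `openai` Python
# package (pre-1.0) and its Completion endpoint with the text-davinci-003
# model. The question and prompt below are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

question = "What is the recommended daily dose of vitamin D for adults?"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"Answer the following question:\n{question}",
    max_tokens=150,
    temperature=0.2,
)

# The reply arrives as fluent, authoritative-sounding free text. The response
# object contains no citations or source documents, so there is nothing
# attached that a reader, a regulator, or a court could check the answer
# against.
print(response["choices"][0]["text"].strip())
```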
Careful consideration: Google’s cautious approach could signal that it's waiting to see what happens with the class-action lawsuit filed against GitHub, Microsoft, and OpenAI over GitHub Copilot.
- Based on the tech giant’s AI investments, we can expect to see significant announcements from Google on the AI front in 2023.
- But the technology’s limitations, which include a lack of explainability, high compute costs, and monetization challenges, won’t be worked out within a matter of months.
- Going forward, tech companies may want to consider more carefully whether the publicity benefits of releasing half-baked AI systems for public testing are worth the risk and the expense.