The news: To make ChatGPT fit for public consumption, OpenAI outsourced data detoxification to Kenyan laborers, whose labeling work helped the AI learn to avoid toxic content.
- Beginning in late 2021, the startup paid the workers $1.32 to $2 per hour to classify and filter harmful content depicting horrific abuse, with one worker describing the task as “torture,” per Insider.
- San Francisco-based outsourcing firm Sama employed the Kenyan workers and has coordinated similar data detoxification on behalf of companies like Google, Microsoft, and Meta. Last May, Meta was sued over inhumane working conditions at Sama.
- After the ChatGPT detoxification work was complete, Sama shuttered its Nairobi office, terminating 200 content moderation jobs, per Quartz.
An ethical dissection: Generative AI has quickly become a sensational technology, with the potential to ignite trillions of dollars’ worth of economic activity. Its rise in controversy has been just as meteoric.
Moral depravity isn’t good for business: OpenAI’s ambitions are likely to draw significant scrutiny as it expects to earn $1 billion in revenue from its products by 2024, per Quartz. Similar attention could fall on Microsoft, which has already invested $3 billion in the startup and is considering investing another $10 billion.
- Whether OpenAI CEO Sam Altman’s vision of the tech’s “potential to shape the trajectory of humanity” plays out for good or ill depends on how AI companies choose to build and deploy their models.
- Tech firms could pay higher wages for outsourced data detoxification and still save money compared with hiring locally, but higher pay alone likely wouldn’t address the cumulative deleterious effects on the workers and the communities where they live.
- Generative AI carries a litany of potential costs to society that need to be weighed against its benefits before deployment.
- If the technology becomes a focal point of the global tech cold war, regulators might be slow to act for fear of handicapping domestic firms.
- Public outrage over social fallout from generative AI is a clear risk that should push tech firms to adopt a more cautious approach.