OpenAI Is Making the Mistakes Facebook Made
Zoë Hitzig writes that "OpenAI Is Making the Mistakes Facebook Made. I Quit."
The data trove OpenAI possesses is incredibly personal:
For several years, ChatGPT users have generated an archive of human candor that has no precedent […] they have revealed their most private thoughts. People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.
She notes:
In its early years, Facebook promised that users would control their data and be able to vote on policy changes. Those commitments eroded. The company eliminated public votes on policy. Privacy changes marketed as giving users more control over their data were found by the Federal Trade Commission to have done the opposite, and in fact made private information public. All of this happened gradually under pressure from an advertising model that rewarded engagement above all else.
She argues that a similar erosion of OpenAI's own principles, driven by the pressure to maximize engagement, may already be underway.
She wonders, as she leaves OpenAI:
[…] the real question is not ads or no ads. It is whether we can design structures that avoid both excluding people from using these tools, and potentially manipulating them as consumers.
— via Techmeme