Trust, security, and privacy guide every product and decision we make.
Each week, 800 million people use ChatGPT to think, learn, create, and handle some of the most personal parts of their lives. People entrust us with sensitive conversations, files, credentials, memories, searches, payment information, and AI agents that act on their behalf. We treat this data as among the most sensitive information in your digital life—and we’re building our privacy and security protections to match that responsibility.
Today, that responsibility is being tested.
The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations. They claim they might find examples of you using ChatGPT to try to get around their paywall.
This demand disregards long-standing privacy protections, breaks with common-sense security practices, and would force us to turn over tens of millions of highly personal conversations from people who have no connection to the Times’ baseless lawsuit against OpenAI.
They have tried this before. Originally, the Times wanted you to lose the ability to delete your private chats. We fought that and restored your right to remove them. Then they demanded we turn over 1.4 billion of your private ChatGPT conversations. We pushed back, and we’re pushing back again now. Your private conversations are yours—and they should not become collateral in a dispute over online content access.
We respect strong, independent journalism and partner with many publishers and newsrooms. Journalism has historically played a critical role in defending people’s right to privacy throughout the world. However, this demand from the New York Times does not live up to that legacy, and we’re asking the court to reject it. We will continue to explore every option available to protect our users’ privacy.
We are accelerating our security and privacy roadmap to protect your data. OpenAI is one of the most targeted organizations in the world. We have invested significant time and resources building systems to prevent unauthorized access to your data by adversaries ranging from organized criminal groups to state-sponsored intelligence services.
However, if the Times succeeds in its demand, we will be forced to hand over the very same data we’re protecting—your data—to third parties, including the Times’ lawyers and paid consultants.
Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We believe these features will help keep your private conversations private and inaccessible to anyone else, even OpenAI. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers. These security features are in active development, and we will share more details about them, along with other short-term mitigations, in the near future.
Privacy and security protections must become more powerful as AI becomes more deeply integrated into people’s lives. We are committed to a future where you can trust that your most personal AI conversations are safe, secure, and truly private.
—Dane Stuckey, Chief Information Security Officer, OpenAI
## Answers to your questions
**Why are The New York Times and other plaintiffs demanding this?**

**What led to this stage of the process?**

**Did you offer any other solutions to the Times?**

**Is the NYT obligated to keep this data private?**

**How are these 20 million chats selected?**

* The 20 million user conversations were randomly sampled from December 2022 to November 2024.

**Is my data potentially impacted?**

**Are business customers potentially impacted?**

* This does not impact ChatGPT Enterprise, ChatGPT Edu, ChatGPT Business (formerly “Team”) customers, or API customers.

**What are you doing to protect my personal information and privacy?**

**How will you store this data?**

**Who will be able to access this data?**

**Does this court order violate GDPR or my rights under European or other privacy laws?**

**Will you keep us updated?**

* Yes. We’re committed to transparency and will keep you informed. We’ll share meaningful updates, including any changes to the order or how it affects your data.