In June 2024, a group of current and former AI researchers published an open letter. They called it the Right to Warn. The argument was simple: the people who know the most about what frontier AI can do are the least free to talk about it. NDAs. Financial incentives. A regulatory environment that hasn't caught up. The letter asked for the right to speak without being destroyed for it.
Five months later, one of the most credible insiders in the AI copyright debate was found dead in his apartment.
His name was Suchir Balaji. He was 26. He had helped build GPT-4. His name is in the official technical report. And in the months before he died, he had published a detailed legal argument that OpenAI's training practices violated copyright law, agreed to testify in active litigation, and been formally named by New York Times lawyers as a witness with "unique and relevant documents."
The official finding is suicide. The investigation is closed.
But the story didn't close. It detonated.
What followed was a year of protests outside OpenAI's headquarters, his family suing the city for records, a wrongful death lawsuit against his apartment complex, a Tucker Carlson interview in which Sam Altman was asked directly about murder allegations, and a statewide ballot initiative filed by his mother to regulate the company her son helped build.
Most coverage focuses on the conspiracy theories. Whether he was really killed. Whether OpenAI had motive. That's the loudest version of the story.
It's not the most important one.
The more important question is structural. Frontier AI companies hold enormous amounts of non-public information about capabilities, risks, and what actually goes into these models. The people who hold that information are bound by NDAs. And traditional whistleblower law protects you only when you report illegal activity. If a risk isn't regulated yet, disclosing it breaks an NDA without triggering any legal protection. There is no protected channel, so the information doesn't flow. And when someone tries to speak anyway, they do it alone, carrying the full weight of what they know, with no institutional framework to catch them.
That's the accountability vacuum. It's not a conspiracy. It's a design flaw.
Suchir Balaji's arguments are still in court. His essay is still on suchir.net. His name is still in the filings. The warning survived. The question is whether the infrastructure to act on it will exist before the next one.
This week's video goes deeper into the full story: the SEC investigations, the NDA complaints, the legal cases, and what the courts are actually deciding about AI copyright.
[Watch the full video here → link]
