Mrinank Sharma, who led Anthropic’s safeguards research team, has resigned from the AI company, saying his last day was February 9. In a public exit letter shared on social media, Sharma wrote that the “world is in peril” and signalled a dramatic shift in direction: he wants to focus on writing and pursue a poetry degree.
A resignation note that set off alarm bells
Sharma’s post stood out for its tone and framing. He linked his decision to a wider sense of crisis, citing risks beyond AI alone, including bioweapons and what he described as interconnected pressures unfolding simultaneously.
He also pointed to a recurring tension he felt inside organisations. In his letter, he wrote about how difficult it can be to let stated values guide day-to-day actions when other pressures mount.
What Sharma says he worked on at Anthropic
In the same letter, Sharma said he had achieved what he set out to do at Anthropic. He referenced work on AI “sycophancy” and safety efforts aimed at reducing risks from AI-assisted bioterrorism, among other projects.
Anthropic has described the safeguards team as focused on areas such as jailbreak robustness, automated red-teaming and monitoring techniques to detect misuse and misalignment.
Why the poetry plan matters to the story
Sharma’s next step is unusual for a high-profile AI safety role. He said he wants to create space for writing that engages with the moment, and to place “poetic truth” alongside scientific work as a way of understanding what technology is doing to society.
He also said he hopes to explore a poetry degree and devote himself to what he called “courageous speech,” closing his letter with a poem by William Stafford.
What it indicates about the AI safety debate
The resignation is already being read as more than a personal career change. First, it highlights how emotionally and ethically taxing AI safety work can be, even for researchers who see the mission as urgent.
Second, it puts renewed focus on a core question facing AI labs: how closely internal incentives match public commitments on safety. Sharma did not provide detailed allegations, but he explicitly described feeling repeated pressure to set aside what matters most.
What Anthropic has said so far
Business Insider reported that neither Sharma nor Anthropic immediately responded to requests for comment about the departure.
For now, Sharma’s letter is doing the talking. It is part warning, part farewell, and part reinvention. And it lands at a moment when AI labs face intense scrutiny over whether their safety promises can keep pace with their releases.