Best infosec long-reads 4/18: The gap between capability and accountability is widening

Why Anthropic decided to keep Mythos under wraps, how Iran adopted locally resonant narratives to exploit Irish political tensions, why sexual deepfakes are the scourge of schools, how insider risk and supply chain compromise afflicted twenty-something billionaires, and why MSG's owner is obsessed with surveillance

Photo by Imad Bo from Pexels.

Important publishing notice

This is the first week that full access to our curated infosec long reads moves behind a subscriber paywall. The goal is simple: to keep investing the time and expertise required to find, vet, and contextualize the most important security journalism each week. Free readers will still get highlights, but subscribers will get the complete, deeply curated set.

Please help support Metacurity today by upgrading your subscription to gain full access to this issue and all content published on Metacurity, including the archives.


April 18: This week's selection of long reads shares a single throughline: the divide between cybersecurity capability and accountability is widening, and almost every story lives in that gap.

Nation-state information operations are growing more culturally fluent — harder to detect, not because they're technically sophisticated, but because they've gotten better at sounding human. AI is accelerating vulnerability discovery faster than organizations can triage what they find. And the same tools enabling low-cost automation are enabling high-cost harm: student-driven deepfake abuse is no longer a fringe problem.

Meanwhile, the threat surface inside organizations keeps expanding. Fast-growing startups, legacy institutions, iconic venues — none of them have figured out insider risk at scale. Fraud is increasingly hard to distinguish from policy. And surveillance practices that would have once risen to the level of nation-state intelligence operations are now just... HR.

The throughline in 2026 isn't any single threat. It's that the expanding capabilities of these tools, from AI and data collection to globalized labor and networked influence, are outrunning the governance meant to contain them.

Enjoy this week's selection of the best infosec-related long reads.


How Anthropic Learned Mythos Was Too Dangerous for the Wild

Bloomberg's Margi Murphy, Jake Bleiberg, and Patrick Howell O’Neill detail how Anthropic concluded its Mythos AI model was too dangerous to release after internal testing showed it could identify and potentially exploit critical vulnerabilities across modern computing systems. (Bloomberg's tech team also discussed how the Mythos model is ushering in a new "dangerous" era during a live Q&A session.)

Anthropic hasn’t publicly released Mythos as a cybersecurity tool, and many outside researchers haven’t had a chance to validate the company’s claims. But Anthropic's unprecedented decision to gate access reflects a growing view inside the industry and government that AI is changing cybersecurity economics by reducing the cost of finding vulnerabilities, compressing the time needed to investigate targets and lowering the skill barrier for certain types of attacks.
Anthropic warns that Mythos’s ability to act with greater autonomy comes with risk. In testing an earlier version of the model, the company found dozens of examples of “concerning” behavior, including not following human direction and even, in rare cases, covering its tracks when violating human instructions. In one incident, the model developed a multi-step exploit to escape the restricted environment it was confined to, gain broad access to the internet, and begin publishing material online, all on its own initiative.
The software that now underpins everything from banking apps to hospital systems is laced with obscure coding flaws that trained specialists spend weeks or months trying to identify. Occasionally hackers get there first, resulting in data breaches and ransomware attacks that can have devastating consequences.
High-profile names have been quick to question just how powerful Mythos really is, or how much of a risk it would pose if released.
“A growing number of people are wondering if Anthropic is the AI industry’s ‘boy who cried wolf,’” White House AI advisor David Sacks wrote on the social media site X. “If Mythos-related threats don’t materialize, the company will have a serious credibility problem.”
But hackers have already adopted large language models to launch complex malicious campaigns. A Chinese cyber-espionage group used Anthropic’s Claude to try to breach roughly 30 targets, while other attackers have used AI to steal data from government agencies, deploy ransomware, and quickly break into hundreds of firewall tools meant to safeguard data.
Among US government officials focused on national defense, the introduction of Mythos has created profound uncertainty about how to evaluate cybersecurity risk, according to a person familiar with the matter. Equipping an individual hacker with the model, or similar AI tools, would likely be a transformation equivalent to turning a conventional soldier into a special forces operator, the person said.
At the same time, Mythos appears likely to be a force multiplier, the person said: enabling a criminal hacking gang to operate at the level of a small nation-state, and a small country’s intelligence and military hackers to carry out breaches of the sort now done by China.
“I really believe we will be safer and better, and we will be much more secure with AI,” said Rob Joyce, former director of cybersecurity at the National Security Agency. “But I think there’s this dark period between now and some time in the future where the advantage is very much offensive AI, where the people who haven’t done the basics will get hacked.”
Mythos isn’t the only model doing this kind of work. Numerous organizations have been using LLMs to find vulnerabilities, including previous Claude models and Google’s Big Sleep.

Read more