Best infosec-related long reads for the week of 8/9/25
A twisted tale of how two men tortured someone for his crypto account passwords, Russia's cyber sector supports Putin's Ukraine war, A brain-reading implant requires a password, Social media algorithms didn't cause America's woes, The internet is really bad for children, more


Happy Saturday morning! Metacurity is pleased to offer our free and premium subscribers this weekly digest of the best long-form (and longish) infosec-related pieces we couldn't properly fit into our daily news crush. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com.
IMPORTANT PUBLISHING NOTICE: Metacurity will be on summer break starting August 18 and will resume publication on September 2. Stay safe out there, folks, and we'll see you in September!
Earn good infosec karma by helping Metacurity
Thank you so much for supporting Metacurity with your readership. But did you know that you could earn good infosec karma by stepping up your support so that Metacurity can continue to provide you with our weekday updates on the pressing infosec developments you need to know, alongside this weekly selection of infosec-related long reads?
Upgrading to a paid subscription will help us keep the lights on.
We also provide corporate subscription options, and soon we’ll be introducing affordable sponsorship opportunities—perfect for promoting your events or products to our audience of highly engaged, elite cybersecurity professionals. We'd be happy to promote your brand or product as a designated sponsor.
To learn more, feel free to reach out at cynthia@metacurity.com.
Thank you so much for being part of the Metacurity community.
The Crypto Maniacs and the Torture Townhouse
In New York Magazine, Ezra Marcus and Jen Wieczner, with additional reporting by Isabella Sepahban and Franziska Wild, tell the twisted story of how two new entrants to the high-end New York nightclub scene, William Duplessie and John Woeltz, held an Italian man, Michael Carturan, captive and tortured him for weeks for the passwords to his cryptocurrency accounts.
That fall, Duplessie began visiting a new friend, John Woeltz, at his home in Kentucky. The 150-acre property abutted the Ohio River in Smithland, a 200-person community four hours west of Lexington; the area was so off the grid that even locals referred to it as “the stix.”
The two made an unlikely pair. Woeltz was a mild-mannered and nerdy perfectionist with a round face and deep-set eyes; a cybersecurity obsessive, he’d begun mining bitcoin soon after its launch in 2009. By 2019, in his 30s living alone in San Francisco with his cat, he no longer had to work. “He used to live off Soylent or Huel or his own homemade vegan sludge,” said a friend. Now, back near his hometown with a net worth of over $100 million, he led a quiet life. He studied regenerative agriculture, burying the skeleton of a fish he’d caught in his garden and sending friends photos of snakes he’d shot on the property. He’d also become a patron of the small local blockchain scene, promoting statewide bitcoin-mining efforts and donating money to a start-up accelerator and co-working space in Paducah called Sprocket. It appeared to everyone who knew Woeltz that he intended to settle down and start a family with his girlfriend, Kayla Barbour, an aspiring actress and small-business owner from Lexington. Woeltz was supportive, if controlling. “In the two years we dated, I was not allowed to work and he controlled all of my finances,” Barbour would later attest. Still, they were making plans to hold a wedding in Hawaii in late 2024.
As an early bitcoiner, Woeltz had developed an online reputation as a security white hat, a kind of good-guy hacker who could easily identify vulnerabilities and to whom strangers would sometimes reach out for help — which was why, in 2020, Michael Carturan first got in touch with him. Carturan was a socially awkward permaculture enthusiast from the small Italian town of Rivoli, a technically proficient programmer who grew up on internet forums like 4chan. He believed deeply in bitcoin as a tool that could help build a new tech-focused world order and was working on a decentralized version of a virtual private network. As they got to know each other, Carturan described his expertise operating anonymous online trollbots and “running psyops” — in other words, using social-media bots to hype meme-coin projects, making them appear more popular than they actually were. In the beginning, he was reluctant to give his real name, introducing himself only as “Sergio.” “It was always some made-up Italian name,” says a mutual friend. Soon, the two were collaborating on a brand-new cryptocurrency project. Over the years, Carturan came to see Woeltz not just as a business partner but as a role model and protector. When Carturan worried a former colleague was trying to kill him over a business dispute, he asked Woeltz for help. “John saved my life,” he would later tell others.
Carturan knew Duplessie, too — Pangea had invested in his cryptocurrency project in 2021. And it was through Carturan that Duplessie and Woeltz eventually developed their own relationship. By December 2024, Duplessie could almost always be found at Woeltz’s Smithland cabin. “John and Will began displaying very paranoid, cultlike behavior,” Barbour later wrote in a restraining-order petition against Woeltz. His demeanor had shifted dramatically; he and Duplessie bought thousands of dollars’ worth of guns and “began wearing matching militant clothes,” patrolling the property “on the hunt for terrorists who they were convinced would be tracking us down to kill us.”
Hacking and Firewalls Under Siege: Russia’s Cyber Industry During the War on Ukraine
In this Center for Naval Analyses (CNA) paper, writer Justin Sherman examines how Russia's cybersecurity sector has supported the Russian government's war effort since the full-scale invasion of Ukraine in 2022, focusing specifically on three firms, Kaspersky, Security Code, and Positive Technologies, and how their functions tie into the Kremlin’s objectives.
Kaspersky is a global company that has been repeatedly accused of quietly supporting Russian government cyber operations—including by allegedly using its antivirus platform to exfiltrate classified and sensitive information from other countries’ systems. Security Code provides what appear to be principally defensive technologies and services to Russian customers, including the FSB, Ministry of Internal Affairs (MVD), Federal Protective Service (FSO), Russian Railways, Gazprom, and Sberbank. It also maintains educational partnerships with public and private institutions in Russia that train the future cyber workforce. Positive Technologies has been identified by the US government and in media reporting as a Russian intelligence contractor that supports offensive operations, reportedly by reverse engineering Western capabilities and turning vulnerabilities into exploits for offensive cyber operations. It also runs Russia’s largest security conference and capture-the-flag hacking competition—an annual event that the FSB and GRU use to recruit highly talented hackers into the intelligence services.
Since February 2022, the three companies have been subject to additional levels of scrutiny, but they have adapted relatively well. Kaspersky went from being banned on US federal government systems to being sanctioned by the United States. It was also banned from providing many cyber products and services to American consumers and businesses, and it was identified by Germany, Poland, and others as a potential national security threat. But it has opened “transparency centers” in Latin America and elsewhere, which—contrary to what some in the West might expect—have paid off greatly for the firm as it has expanded. The company’s marketing pitches seem to be landing well in many parts of the world, whether because of distrust of American technology post–Edward Snowden leaks, well-publicized abuses by Silicon Valley giants, or the mere fact that Kaspersky is a global firm with talented personnel. However, Kaspersky is now providing protections to a notorious Russian “bulletproof” web hosting provider for cybercriminals (meaning one that hides and refuses to disclose its customers, even to governments), marking a notable departure from its past efforts to portray itself as a trustworthy brand.
Security Code has been sanctioned by Ukraine and the United States but not by the European Union. It has also remained out of the Western press, perhaps because of its role in Russian cyberdefense rather than the much more headline-grabbing category of cyberoffense. In its 2024 financials, it disclosed that most of its clients are those protecting “critical information infrastructure,” a Russian legal term for entities handling information systems, networks, and technologies that are critical to the state’s security. As a result, most of Security Code’s clients ostensibly reside in Russia. It appears that the company’s bottom line is strengthening because of growing demands in Russia for cyberdefense amid the continued war.
Positive Technologies has been marketing itself as a way for entities in other countries to diversify their cybersecurity services. It does not suggest that countries forgo American, Chinese, or Israeli cyber providers; rather, it makes the case for adding a Russian vendor to avoid depending too much on one country for cyberdefenses. In addition, the company has launched new product offerings, and in-person attendance at its flagship conference (the event the FSB and GRU use to recruit personnel) has more than quintupled from 10,000 in 2022 to 55,000 in 2023, with another 100,000 tuning in online. All three of these companies—despite waves of Western sanctions, export controls, and technology isolation efforts—had their highest revenue figures ever in 2024.
A mind-reading brain implant that comes with password protection
A study led by Erin Kunz, a neural engineer at Stanford University in California, and published in Cell, found that an implanted brain–computer interface (BCI) can decode a person’s internal chatter, and that decoding can be gated behind a preset password the user merely has to think of, a safeguard against the device broadcasting thoughts never meant to be shared. (The formal study can be found here.)
BCI systems translate brain signals into text or audio and have become promising tools for restoring speech in people with paralysis or limited muscle control. Most devices require users to try to speak out loud, which can be exhausting and uncomfortable. Last year, neuroscientist Sarah Wandelt and her colleagues developed the first BCI for decoding internal speech, which relied on signals in the supramarginal gyrus, a brain region that plays a major part in speech and language.
But there’s a risk that these internal-speech BCIs could accidentally decode sentences users never intended to utter, says Erin Kunz, a neural engineer at Stanford University in California. “We wanted to investigate this robustly,” says Kunz, who co-authored the new study.
First, Kunz and her colleagues analysed brain signals collected by microelectrodes placed in the motor cortex — the region involved in voluntary movements — of four participants. All four have trouble speaking, one because of a stroke and three because of motor neuron disease, a degeneration of the nerves that leads to loss of muscle control. The researchers instructed participants to either attempt to say a set of words or imagine saying them.
Recordings of the participants’ brain activity showed that attempted and internal speech originated in the same brain region and generated similar neural signals, but those associated with internal speech were weaker.
Next, Kunz and her colleagues used this data to train artificial-intelligence models to recognize phonemes, the smallest units of speech, in the neural recordings. The team used language models to stitch these phonemes together to form words and sentences in real time, drawn from a vocabulary of 125,000 words.
The device correctly interpreted 74% of sentences imagined by two participants who were instructed to think of specific phrases. This level of accuracy is similar to that of the team’s earlier BCI for attempted speech, says Kunz.
In some cases, the device also decoded numbers that participants imagined when they silently counted pink rectangles shown on a screen, suggesting that the BCI can detect spontaneous self-talk.
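For readers curious about the mechanics, here is a minimal illustrative sketch in Python, not the study's actual code, of the two-stage pipeline described above: a classifier (assumed here, not shown) emits per-frame phoneme probabilities from the neural recordings, which are then collapsed into a phoneme sequence and stitched into words. The phoneme set, lexicon, and function names below are hypothetical stand-ins; the real system uses trained neural networks, beam search, and a large language model over a 125,000-word vocabulary, and the password protection described in the study would gate whether this decoding runs at all.

```python
# Toy two-stage inner-speech decoder (illustrative only; all values hypothetical).
PHONEMES = ["<blank>", "HH", "AH", "L", "OW", "W", "ER", "D"]

# Hypothetical pronunciation lexicon: word -> phoneme sequence.
LEXICON = {
    "hello": ("HH", "AH", "L", "OW"),
    "world": ("W", "ER", "L", "D"),
}

def collapse_frames(frame_probs):
    """CTC-style collapse: argmax each frame, merge repeats, drop blanks."""
    best = [max(range(len(p)), key=lambda i: p[i]) for p in frame_probs]
    phones, prev = [], None
    for idx in best:
        if idx != prev and PHONEMES[idx] != "<blank>":
            phones.append(PHONEMES[idx])
        prev = idx
    return phones

def phonemes_to_words(phones):
    """Greedy longest-match against the lexicon, standing in for the language
    model that stitches phonemes into words and sentences in real time."""
    words, i = [], 0
    while i < len(phones):
        for word, pron in sorted(LEXICON.items(), key=lambda kv: -len(kv[1])):
            if tuple(phones[i:i + len(pron)]) == pron:
                words.append(word)
                i += len(pron)
                break
        else:
            i += 1  # no word matched; skip this phoneme
    return words

# Fake per-frame phoneme probabilities (one-hot) spelling out "hello world".
def one_hot(idx, n=len(PHONEMES)):
    return [1.0 if i == idx else 0.0 for i in range(n)]

frames = [one_hot(PHONEMES.index(p)) for p in
          ["HH", "HH", "AH", "L", "L", "OW", "<blank>", "W", "ER", "L", "D"]]
print(phonemes_to_words(collapse_frames(frames)))  # -> ['hello', 'world']
```

The CTC-style collapse step reflects a common way speech decoders handle the fact that a single phoneme spans many recording frames; the study's actual decoding strategy may differ.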
Scapegoating the Algorithm
For Asterisk Magazine, Dan Williams, Assistant Professor in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge, argues that the epistemic crisis America is facing isn't, as the popular narrative goes, the product of social media algorithms that prioritize engagement over truth and leave people unable to distinguish fact from fiction, but is generated by more deeply rooted American problems.
To evaluate whether social media is responsible for America’s epistemic crisis, we must first clarify what that crisis is. And here, it is essential to note that many of America’s epistemic challenges are not new. Problems such as political ignorance, conspiracy theories, propaganda, and bitter intergroup conflict have plagued the country throughout its history.
Research in political science has consistently documented astonishingly high rates of political ignorance among American voters. A landmark 1964 study found that most voters were unaware of basic political facts, estimating that roughly 70% were unable to identify which party controlled Congress. Similarly, from the Salem witch trials in the late seventeenth century to the widespread Satanic panic of the late twentieth century, false rumors, misinformation, and widespread misperceptions have been ubiquitous throughout American history. As political scientist Brendan Nyhan writes, there was never a “golden age in which political debate was based on facts and truth,” and “no systematic evidence exists to demonstrate that the prevalence of misperceptions today (while worrisome) is worse than in the past.”
Political polarization and vicious intergroup conflict have been more intense at previous stages in American history, not least during the Civil War. Although there was little polarization between the parties in the mid-twentieth century, this was a historical anomaly. It was also partially due to the parties’ shared interests in upholding a system of racial apartheid in the South. This system was, in turn, supported by widespread lies, racist myths, and censorship, from “scientific” racism painting Black people as inferior to the suppression of anti-lynching journalism.
Elite-driven disinformation has also been a pervasive force throughout American history. Both the tobacco and fossil fuel industries waged sophisticated propaganda campaigns to deny the harms caused by their products. McCarthyism involved systematic political repression based on largely fabricated communist threats. And there is nothing new about catastrophic, elite-driven epistemic failures, including their role in events as recent as the Iraq War and the 2007-08 financial crisis.
Perhaps most surprisingly, there is little evidence to suggest that rates of conspiracy theorizing have increased in prevalence in the social media age. In a recent study, political scientist Joe Uscinski and colleagues conducted four separate analyses to test for possible changes over time. They conclude: “In no instance do we observe systematic evidence for an increase in conspiracism, however operationalized.”
How We Got the Internet All Wrong
Writing for the Dispatch, German-American political scientist and author Yascha Benjamin Mounk says he had tried to balance the notion that social media is a terrible development for children, rewiring their brains at the expense of in-person play, against the idea that skeptics tend to exaggerate and catastrophize the effects of every new technology, until he came across results from the Understanding America Study, a long-running panel survey run by the University of Southern California.
And then I came across a truly jaw-dropping chart.
That chart, published by Financial Times journalist John Burn-Murdoch and based on his analysis of data from the extensive Understanding America Study, shows how the traits measured by the personality test most widely used in academic psychology have changed over the past decade. The OCEAN test measures five things: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. Decades of research have demonstrated that some of these traits are highly predictive of life outcomes; in particular, conscientiousness (“the tendency to be organized, responsible, and hardworking”) predicts everything from greater professional success to a lower likelihood of getting divorced. Extraversion (a tendency to be “outgoing, gregarious, sociable, and openly expressive”) is associated with better mental health, broader social networks, and greater life satisfaction. Meanwhile, neuroticism (understood as a propensity toward anxiety, emotional instability, and negative emotion) is strongly correlated with negative outcomes, such as higher rates of depression, lower life satisfaction, and poorer overall mental health.
With these facts in mind, you will quickly realize why Burn-Murdoch’s chart demonstrates that something very, very concerning has been happening to young people.
What Burn-Murdoch shows is that the traits most strongly predictive of positive outcomes are in sharp decline. Young people, in particular, have become far less conscientious and extroverted over the past decade. Conversely, the trait most strongly associated with negative life outcomes, neuroticism, has sharply increased. To put it bluntly, the average 20-year-old today is less conscientious and more neurotic than 70 percent of all people were just a decade ago.
Hidden Links: Analyzing Secret Families of VPN Apps
Researchers from Arizona State University and the Citizen Lab identified and analyzed three families of VPN providers, introducing new methods for revealing how ostensibly unrelated VPN providers are connected and how they mislead users about their ownership, and showing that the providers even share their VPN servers’ cryptographic credentials, meaning the apps carry common sets of security vulnerabilities and mislead their users about how risky they are.
We identified three classes of problematic security and privacy issues with varying impacts on users. The undisclosed location collection issue is a major violation of user trust and privacy given the provider explicitly stated they did not collect such information. The client-side blind in/on-path attacks allow an attacker to infer with whom a VPN client is communicating. Most critically, on many of the VPNs we analyzed, a network eavesdropper between the VPN client and VPN server can use the hard-coded Shadowsocks password to decrypt all communications for all clients using the apps. These weaknesses nullify the privacy and security guarantees the providers claim to offer. These issues are even more concerning when accounting for the fact that the providers appear to be owned and operated by a Chinese company and have gone to great lengths to hide this fact from their 700+ million combined user bases.
The issues we identified affect users, providers, and app stores. At a minimum, VPN users who value privacy should avoid using Shadowsocks, including the apps from these developers, as Shadowsocks was not designed to facilitate privacy, merely censorship circumvention [11]. App store operators like Google face major challenges identifying and verifying ownership of apps on the Play Store, as well as ensuring Play Store apps are secure. Ownership identity verification and app security auditing is currently labor intensive and would require sophisticated, automated tools to achieve at scale. Google currently offers a security audit badge for VPN apps. Whether a similar badge for verified identity makes sense is debatable because there are valid reasons why a VPN provider might not want to reveal that information as it could expose them to legal or digital attack from a country or entity that opposes VPNs. Finally, VPN providers should avoid offering Shadowsocks to users or carefully explain the risks. The Shadowsocks protocol has no built-in asymmetric cryptography and requires the insecure use of hard-coded passwords, from which symmetric keys are deterministically derived, or for VPN providers to devise and implement a system for the secure distribution of these passwords. Prior work has found that home-rolled cryptographic systems commonly contain major flaws [18–20]. Thus, if not devised and maintained by experts, such a password distribution system would be liable for the introduction of additional security issues, and it may increase one’s vulnerability to network censorship if not carefully implemented.
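To make that last point concrete, here is a minimal sketch of how a Shadowsocks-style password deterministically becomes a traffic key, assuming the commonly documented construction (an OpenSSL-style MD5 KDF turns the password into a master key, then HKDF-SHA1 derives a per-session subkey from a salt that travels in the clear). The password value, key length, and variable names below are hypothetical, and individual apps may differ; the point is simply that anyone who extracts the hard-coded password from the app binary and observes the salt on the wire can reproduce the session key.

```python
import hashlib, hmac

def evp_bytes_to_key(password: bytes, key_len: int) -> bytes:
    """OpenSSL-style MD5 KDF used by Shadowsocks to turn a password into a
    master key. Purely deterministic: same password, same key, every time."""
    derived, block = b"", b""
    while len(derived) < key_len:
        block = hashlib.md5(block + password).digest()
        derived += block
    return derived[:key_len]

def hkdf_sha1(key: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-1, as used for per-session subkeys
    in Shadowsocks AEAD modes."""
    prk = hmac.new(salt, key, hashlib.sha1).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha1).digest()
        okm += block
        counter += 1
    return okm[:length]

HARDCODED_PASSWORD = b"example-password"               # hypothetical; shipped inside the app
master_key = evp_bytes_to_key(HARDCODED_PASSWORD, 32)  # e.g., for aes-256-gcm
observed_salt = bytes(32)                              # read from the first packet on the wire
session_key = hkdf_sha1(master_key, observed_salt, b"ss-subkey", 32)
print(session_key.hex())                               # same key the client and server derive
```

Because no asymmetric key exchange is involved, nothing ephemeral protects past sessions: recorded traffic stays decryptable for as long as the hard-coded password remains in circulation.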