Best Infosec-Related Long Reads for the Week, 9/9/23

China feeds security flaws to state hacking org, Digital tech reliance makes US military vulnerable, Questions surround software liability policies, Metadata fuels surveillance capitalism, more

Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. Tell us what you think, and feel free to share your favorite long reads via email; we’ll gladly credit you with a hat tip. Happy reading!

How China Demands Tech Firms Reveal Hackable Flaws in Their Products

Wired’s Andy Greenberg reports on a new Atlantic Council study of a 2021 Chinese law that requires tech companies to report security vulnerabilities in their products to the government within two days of discovery. The reported flaws feed a national vulnerability database shared with various government organizations, including China’s Ministry of State Security, the source of many state-sponsored hacking operations.

Given that patching vulnerabilities in technology products almost always takes far longer than the Chinese law’s two-day disclosure deadline, the Atlantic Council researchers argue that the law essentially puts any firm with China-based operations in an impossible position: Either leave China or give sensitive descriptions of vulnerabilities in the company’s products to a government that may well use that information for offensive hacking.

The researchers found, in fact, that some firms appear to be taking that second option. They point to a July 2022 document posted to the account of a research organization within the Ministry of Industry and Information Technologies on the Chinese-language social media service WeChat. The posted document lists members of the Vulnerability Information Sharing program that “passed examination,” possibly indicating that the listed companies complied with the law. The list, which happens to focus on industrial control system (or ICS) technology companies, includes six non-Chinese firms: Beckhoff, D-Link, KUKA, Omron, Phoenix Contact, and Schneider Electric.

WIRED asked all six firms if they are in fact complying with the law and sharing information about unpatched vulnerabilities in their products with the Chinese government. Only two, D-Link and Phoenix Contact, flatly denied giving information about unpatched vulnerabilities to Chinese authorities, though most of the others contended that they only offered relatively innocuous vulnerability information to the Chinese government and did so at the same time as giving that information to other countries’ governments or to their own customers.

America’s Digital Achilles’ Heel

Just ahead of the revelation that Elon Musk denied Starlink satellite service to Ukraine to thwart a Ukrainian attack in Crimea, Erica Lonergan, Assistant Professor of International and Public Affairs at Columbia University, and Jacquelyn Schneider, a Hoover Fellow at Stanford University’s Hoover Institution, penned this piece in Foreign Affairs warning that the US military’s reliance on sensitive digital technologies leaves it vulnerable to attack.

Today, this tension between the necessity of technology and its concurrent risks is on full display in Ukraine, where efforts to gain the upper hand in digital warfare are shaping the physical conflict. Technologies including GPS-guided artillery, small drones, and civilian cellphone videos have given Ukraine an edge against a much larger Russian military force, allowing Kyiv to more accurately strike Russian targets. But Moscow’s assaults have revealed the fragility—and potential unreliability—of such technology: Russian cyberattacks on satellite communications can cut Ukrainian troops off from commanders; attacks that jam GPS systems blunt the effectiveness of smart artillery; and electromagnetic assaults destroy up to 5,000 small drones a month. Ukraine—and other countries that are seeking to transform their armed capabilities in line with the future of digital warfare—must find a way to make these systems less vulnerable.

The United States has no choice but to work to insulate itself from digital attacks. Rejecting digital technologies wholesale in favor of analog ones is unrealistic and expensive. At the same time, no level of investment will guarantee absolute security; governments must accept that trying to strengthen digital capabilities could expose new vulnerabilities. Countries are currently making such tradeoffs with little guidance about how to manage and mitigate risks.

Three Questions on Software Liability

In Lawfare, Maia Hamin and Stewart Scott, both associate directors of the Atlantic Council’s Cyber Statecraft Initiative, and Trey Herr, director of the Initiative, discuss how three unknowns hamper real-world application of proposed US and European software liability policies meant to impose costs on inadequate software security, starting with uncertainty about how much security is enough.

There is surprisingly little consensus around measuring baseline questions about cybersecurity. Assuming things are worse than we would like, are they at least better than they were last year? Some journalists report that there are more incidents each year. By that measure, things are getting worse. Some scholars argue that there are also more computers, more people, more code, more vulnerabilities, more cyberspace, and more flowing through it each year—controlling for some or all of that might show things are stable or even better over time, like a digital per capita. Others contend that incident severity is paramount—incident distribution has undefined variance, so the rare but catastrophic is by far most important, and that class’s ever-growing potential tilts the scale toward worse as digital infrastructure underpins more critical functions every day. Troublingly, data is scarce. Congress has only just created reporting requirements for critical infrastructure, and no robust public incident reporting framework exists for industry writ large (though recent rulemaking by the Securities and Exchange Commission could help). Knowing how many incidents have been thwarted by good security is even more difficult, if not impossible.

A Radical Proposal for Protecting Privacy: Halt Industry’s Use of ‘Non-Content’

In Lawfare, Susan Landau, Bridge Professor at The Fletcher School and the Department of Computer Science in the Tufts University School of Engineering, summarizes a lengthy law review article she authored with Patricia Vargas Leon warning of the surveillance capitalism commercial organizations practice with the metadata they collect on everyone.

Many users know to shut off the collection of GPS location data if they want their destination private. But few users are aware that data from a combination of the accelerometers, gyroscopes, and magnetometers on their smartphones can still track them not only to the medical building, but also inside, revealing whether they went to the dermatologist’s office or the abortion clinic. Or that those technologies could also reveal whether two people left the bar together and ended up in the same bedroom later in the evening.

It’s hypothetically possible, but are communications metadata and software and device telemetry actually used in this way? Companies keep very close to the chest the methods they use for targeting ads, but there are some strong hints that metadata and telemetry information play a role. In 2020, the Norwegian Consumer Council contracted with a cybersecurity company, Mnemonic, which examined the data flow from 10 popular smartphone apps. Mnemonic found that the MyDays app, used by women to track their periods, tracked “detailed GPS location data, WiFi access point data, cell tower data, and Bluetooth properties from the MyDays app[.]” This allowed the data collector Places to determine a phone’s location to a particular floor within a building.

Invasive? Without question. Potentially harmful? Absolutely. And it is not just apps collecting potentially harmful information. The FTC report raised concerns about the data that ISPs were collecting on their users. Vargas Leon and I also found a number of industry patents that used smartphone metadata and telemetry information to do such things as monitor a user’s network accesses to recommend social networks (AT&T Mobile), their location and activity to know when to send them an update (eBay), or who’s traveling nearby them day after day to recommend exchanging contact information with “someone you may know” (Facebook).

Love, Loss, and Pig Butchering Scams

Wired’s Joel Khalili tells the heartbreaking story of 50-year-old Evelyn, who reentered the dating pool on Hinge after a long-term relationship ended, only to be scammed out of her life’s savings in a pig butchering romance scam by a man, “Bruce,” she met on the dating app.

The people interacting with the victim—the Bruces—are often themselves victims of a different kind, trafficked into Thailand, Myanmar, or Nigeria and taken into forced labor. Lured with false promises of employment, they are “forced to sit in call centers that are essentially prisons and contact people endlessly,” says Cheek. At any one time, they might be speaking to hundreds of different targets, following a set script. “It’s a continual cycle of victims,” says Cheek. “They might be starting today on a new victim but at the same time be about to close out another. It’s work, work, work.”

The double victimization can make it even more difficult for those whose money has been stolen to process what has happened to them because it robs them of a focal point for their rage. A faceless cybercriminal organization is far harder to hate than someone with a name, face, and dating profile. Evelyn says knowing that she might be up against a large, organized entity makes retaliation feel futile.

The likelihood that a victim will get their money back is also slim; the only hope lies in tracing the movement of cryptocurrency. If law enforcement is able to identify which exchange the criminals use to cash out into regular money, there is a chance they might identify the individual or group responsible for the scam.

An analysis of the movements of the crypto stolen from Evelyn conducted by Chainalysis, a firm whose tools are used by law enforcement to inform investigations, reveals the lengths scammers will go to cover their tracks. After Evelyn handed over the crypto, it was divided into different wallets (which can be thought of like bank accounts) and eventually cashed out via a selection of exchanges based in multiple countries. But along the way, it traveled through up to five different wallets, where it was blended with takings from other victims and converted on multiple occasions into different crypto tokens. Each of the hops is designed to further obfuscate the origin of the funds and limit the likelihood an exchange might identify wrongdoing and freeze the assets. “This indicates a high level of organization,” says Phil Larratt, director of cybersecurity investigations at Chainalysis. “But it’s become more common, in the last 12 months, for scammers to use this kind of methodology.”

Read more