Best Infosec-Related Long Reads of the Week, 3/11/23

China's attempts to spy through its diaspora, Wirecard's efforts to silence critics, Home surveillance privacy collision, Facial scanning in NYC, Rotterdam's suspicion machine, Biden's cloud problem

Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec pieces and related articles that we couldn’t properly fit into our daily crush of news. Tell us what you think, and feel free to share your favorite long reads via email. We’ll gladly credit you with a hat tip. Happy reading!


The Daring Ruse That Exposed China’s Campaign to Steal American Secrets

Yudhijit Bhattacharjee examines in the New York Times Magazine how, for decades, the Chinese government has conducted a damaging campaign to steal American trade secrets and intellectual property, targeting not only military secrets but also commercial technologies, in part by attempting to exploit the huge numbers of people of Chinese origin who have settled in the West.

Perhaps most unsettling is the way China has sought to exploit the huge numbers of people of Chinese origin who have settled in the West. The Ministry of State Security, along with other Chinese government-backed organizations, spends considerable effort recruiting spies from this diaspora. Chinese students and faculty members at American universities are a major target, as are employees at American corporations. The Chinese leadership “made the declaration early on that all Chinese belong to China, no matter what country they were born or living” in, James Gaylord, a retired counterintelligence agent with the F.B.I., told me. “They started making appeals to Chinese Americans saying there’s no conflict between you being American and sharing information with us. We’re not a threat. We just want to be able to compete and make the Chinese people proud. You’re Chinese, and therefore you must want to see the Chinese nation prosper.”

Stripped of its context and underlying intent, that message can carry a powerful resonance for Chinese Americans and expatriates keen to contribute to nation-building back home. Not all can foresee that their willingness to help China could lead them to break American laws. An even more troubling consequence of China’s exploitation of people it regards as Chinese is that it can lead to the undue scrutiny of employees in American industry and academia, subjecting them to unfair suspicions of disloyalty toward the United States.

Covert cameras and alleged hacking: how bust payments company Wirecard ‘hired spies and lawyers to silence critics’

The Guardian’s Jasper Jolly provides a glimpse into the world of corporate espionage and reputation management by examining how the now-defunct German payments company Wirecard enlisted the private investigations company Kroll and the prominent law firm Jones Day in extensive efforts to silence a critic, Matthew Earl, a founder and fund manager at Shadowfall, a hedge fund that focuses on short-selling.

A March 2016 report that was allegedly prepared for Wirecard by an unnamed investigations firm suggested even more extreme means for the company to find its critics, including the potentially illegal use of an international mobile subscriber identity (IMSI) catcher – a device that intercepts mobile phone data as it is sent to the network. The report said it “would be extremely valuable to get information from the cellphones.”

Earl claims Wirecard also directed operatives to hack his private communications. On 8 December 2016 many of these details were published online in what was claimed to be a report by a whistleblower inside Zatarra – despite Zatarra consisting of only Earl and a collaborator.

That so-called whistleblower report, entitled “Zatarra RIP”, allegedly contained verbatim extracts from Skype conversations between Earl and others, including journalists at Reuters and Bloomberg, as well as photographs of emails. An earlier email to Wirecard purporting to be from the anonymous Zatarra whistleblower claimed to have seen communications “on Skype, Twitter, Signal and by SMS.”

The privacy loophole in your doorbell

Alfred Ng reveals in Politico how a search warrant delivered to a customer of Amazon-owned home video surveillance company Ring illustrates a growing collision between the law and users’ expectation of privacy over the videos recorded by their cameras.

Questions of who owns private home security footage, and who can get access to it, have become a bigger issue in the national debate over digital privacy. And when law enforcement gets involved, even the slim existing legal protections evaporate.

“It really takes the control out of the hands of the homeowners, and I think that’s hugely problematic,” said Jennifer Lynch, the surveillance litigation director of the Electronic Frontier Foundation, a digital rights advocacy group.

In the debate over home surveillance, much of the concern has focused on Ring in particular, because of its popularity, as well as the company’s track record of cooperating closely with law enforcement agencies. The company offers a multitude of products such as indoor cameras or spotlight cameras for homes or businesses, recording videos based on motion activation, with the footage stored for up to 180 days on Ring’s servers.

They amount to a large and unregulated web of eyes on American communities — which can provide law enforcement valuable information in the event of a crime, but also create a 24/7 recording operation that even the owners of the cameras aren’t fully aware they’ve helped to build.

Which Stores Are Scanning Your Face? No One Knows.

The New York Times’ Kashmir Hill offers this entertaining report on how a stroll around Manhattan gave her some insight into the degree to which New York City merchants are abiding by a new law requiring any business that scans customers’ faces to post a sign saying it is doing so.

As I crossed 25th Street, and the pedometer on my iPhone hit nearly 14,000 steps, I finally spotted a sign at the gourmet grocer Fairway Market. A flimsy white piece of paper, titled “Biometric Identifier Information Disclosure,” was taped to a sliding-glass door.

“They use it for security, if people steal,” a Fairway employee told me. The store, he said, used a vendor called FaceFirst; its website promises to “stop grocery store violence and theft.” The employee, who asked not to be identified by name because he wasn’t authorized to speak to a reporter, said a man had been kicked out just that morning because he had previously stolen coffee.

Retail theft has been on the rise since the pandemic. Karen O’Shea, a spokeswoman for Wakefern, Fairway’s parent company, said the facial recognition system was put in place about a year ago.

“Retail theft and shoplifting has a high rate of repeat offense and drives up grocery costs for all customers,” she said. “Only trained asset protection associates use the system, which helps us focus attention on repeat shoplifters.”

After leaving Fairway, I ran into more signs just eight blocks away. When I walked into Macy’s on 34th Street, two fancy white signs were affixed to the gray marble wall, one in English and one in Spanish, informing customers that their “biometric identifier information” was collected for “asset protection purposes.”

A security guard said he didn’t know whether facial recognition was used there. “What signs? Where?” he said, looking around, seemingly confused.

Inside the Suspicion Machine

Eva Constantaras, Gabriel Geiger, Justin-Casimir Braun, Dhruv Mehrotra, and Htet Aung offer this piece, co-published by Wired and Lighthouse Reports, that delves into how obscure government algorithms make life-changing decisions by examining the city of Rotterdam’s welfare fraud algorithm and the data used to train it.

Rotterdam’s algorithm is best thought of as a suspicion machine. It judges people on many characteristics they cannot control (like gender and ethnicity). What might appear to a caseworker to be a vulnerability, such as a person showing signs of low self-esteem, is treated by the machine as grounds for suspicion when the caseworker enters a comment into the system. The data fed into the algorithm ranges from invasive (the length of someone’s last romantic relationship) and subjective (someone’s ability to convince and influence others) to banal (how many times someone has emailed the city) and seemingly irrelevant (whether someone plays sports). Despite the scale of the data used to calculate risk scores, the algorithm performs little better than random selection.
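To make the mechanics concrete, here is a purely illustrative sketch of the kind of feature-weighted risk scoring the investigation describes. The feature names, weights, and threshold below are invented for illustration and bear no relation to Rotterdam's actual (and far more complex) model; the point is only to show how personal attributes become additive "suspicion" inputs.

```python
# Hypothetical feature weights -- invented for illustration only.
WEIGHTS = {
    "is_female": 0.3,
    "months_since_last_relationship": 0.01,
    "low_self_esteem_noted": 0.5,   # a caseworker's subjective comment becomes a feature
    "emails_to_city": 0.02,
    "plays_sports": -0.1,
}

def risk_score(applicant: dict) -> float:
    """Sum weighted features into a single suspicion score."""
    return sum(WEIGHTS[k] * applicant.get(k, 0) for k in WEIGHTS)

def flag_for_investigation(applicant: dict, threshold: float = 0.6) -> bool:
    """Applicants whose score crosses the threshold are selected for a fraud check."""
    return risk_score(applicant) >= threshold

applicant = {
    "is_female": 1,
    "months_since_last_relationship": 6,
    "low_self_esteem_noted": 1,
    "emails_to_city": 3,
    "plays_sports": 0,
}
print(round(risk_score(applicant), 2))      # 0.92
print(flag_for_investigation(applicant))    # True
```

Even this toy version shows the structural problem the reporters identify: attributes the applicant cannot control, and a caseworker's subjective notes, feed directly into whether they are flagged.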

Machine learning algorithms like Rotterdam’s are being used to make more and more decisions about people’s lives, including what schools their children attend, who gets interviewed for jobs, and which family gets a loan. Millions of people are being scored and ranked as they go about their daily lives, with profound implications. The spread of risk-scoring models is presented as progress, promising mathematical objectivity and fairness. Yet citizens have no real way to understand or question the decisions such systems make.

Biden admin’s cloud security problem: ‘It could take down the internet like a stack of dominos’

Politico’s John Sakellariadis delves into how the White House is embarking on a comprehensive plan to regulate the security practices of cloud providers like Amazon, Microsoft, Google, and Oracle, which represent juicy targets for hackers because of their ability to compromise a wide range of victims all at once.

“A single cloud provider going down could take down the internet like a stack of dominos,” said Marc Rogers, chief security officer at hardware security firm Q-Net Security and former head of information security at the content delivery provider Cloudflare.

And cloud servers haven’t proved to be as secure as government officials had hoped. Hackers from nations such as Russia have used cloud servers from companies like Amazon and Microsoft as a springboard to launch attacks on other targets. Cybercriminal groups also regularly rent infrastructure from U.S. cloud providers to steal data or extort companies.

Among other steps, the Biden administration recently said it will require cloud providers to verify the identity of their users to prevent foreign hackers from renting space on U.S. cloud servers (implementing an idea first introduced in a Trump administration executive order). And last week the administration warned in its national cybersecurity strategy that more cloud regulations are coming — saying it plans to identify and close regulatory gaps over the industry.
