Best Infosec-Related Long Reads of the Week, 7/22/23

TheTruthSpy's web of fake sellers, China and Russia use intel agencies in cold war revisionism, Twitter's disinformation sewer, The 50-year crypto war, Video codec tool nails police abuse, more

Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email. We’ll gladly credit you with a hat tip. Happy reading!

Fake passports, real bank accounts: How TheTruthSpy stalkerware made its millions

TechCrunch’s Zack Whittaker painstakingly investigated how Vietnamese-owned 1Byte used a network of fake sellers to conceal the operators of TheTruthSpy, a collection of so-called “stalkerware” Android surveillance apps, including Copy9 and MxSpy, that compromised hundreds of thousands of people’s phones around the world.

For years, TheTruthSpy brought 1Byte tens of thousands of dollars in monthly PayPal transactions from customers. But its rising popularity brought new problems. Selling spyware is fraught with legal and reputational risks, especially in the United States, where the startup saw growing demand for TheTruthSpy. PayPal’s systems would periodically flag transactions and limit access to the spyware maker’s accounts and funds. Customers also wanted to pay by credit card, but that would require the startup to fill out stacks of applications and paperwork that would have outed the operation.

A TechCrunch investigation based on hundreds of leaked documents can now reveal how the spyware operation evaded detection for so long — details which have not been previously reported.

From its software house in Vietnam, 1Byte devised a network of fake identities with forged American passports to cash out customer payments into bank accounts they controlled. It seemed like the perfect scheme: This stateside expansion allowed the startup to keep its identity a secret while making at least $2 million in customer payments since 2016. And the fake sellers would take the heat if the authorities discovered, seized or shuttered the operation. (Not that the feds would find them, since they claimed to live at phantom addresses.)

The scheme exploited weaknesses present in tech and financial system safeguards against fraud, like “know your customer” checks for verifying a person’s identity, which are designed to block organized crime gangs and money launderers from opening fraudulent accounts or moving funds using forged or stolen documents.

The New Spy Wars: How China and Russia Use Intelligence Agencies to Undermine America

In Foreign Affairs, Calder Walton, Assistant Director of the Applied History Project and the Intelligence Project at the Harvard Kennedy School’s Belfer Center for Science and International Affairs, walks through how Russia, which believes the Cold War never ended, and China, which seeks to reverse the outcome of the Cold War, are using their spy agencies to advance their revisionist goals.

During the Cold War, both the United States and the Soviet Union industrialized intelligence collection, using computers to attack each other’s cryptology. Spying moved from land to deep under the sea, into the stratosphere, and then even into space. Today, Western governments are in a new cold war with Russia and China that is again transforming the nature of espionage. This new cold war is not a repeat of the last one, but it does have continuities and similarities, including a stark asymmetry in the East-West intelligence conflict. It was colossally difficult for Western clandestine services to collect reliable intelligence on closed police states behind the Iron Curtain; now it is even more difficult for them to operate effectively in Russia or China, with their Orwellian domestic surveillance systems. Meanwhile, it is relatively easy for Russia and China to steal secrets from the open, free, and democratic societies of the West, just as it was for the Soviets before them.

But the similarities between this superpower conflict and the last one should not blind us to their differences. China’s massive economic weight and integration into the global economy differentiate it from the Soviet Union. Today’s information landscape is also much different from that of even the recent past. Commercial satellite companies, for example, now offer capabilities that until recently would have been the preserve of governments. Open-source and commercial intelligence are transforming national security. In the last Cold War, approximately 80 percent of U.S. intelligence was derived from clandestine sources while 20 percent came from open sources. Today, those proportions are thought to be reversed. The future of Western intelligence lies not with governments but with the private sector. The challenge for Western governments is to harness the capabilities of commercial intelligence providers. This will require new public-private partnerships.

What Western governments need more than anything, however, is imagination when it comes to intelligence collection about closed police states. Imagination is what led the CIA to develop high-altitude U-2 planes that were capable of spying behind the Iron Curtain when other methods were impossible. Similar imagination is needed today in areas at the forefront of national security, including open-source intelligence gathering, the use of machine learning and artificial intelligence, and quantum computing. These will be the weapons of this century’s cold war—and those that will determine its outcome.

Elon Musk’s Twitter Is Becoming a Sewer of Disinformation

In Foreign Policy, Miah Hammond-Errey, the director of the Emerging Technology Program at the United States Studies Centre at the University of Sydney, argues that the changes Elon Musk has made to Twitter have made the platform safe for disinformation, extremism, and authoritarian regime propaganda, with the sale of blue checks fueling the spread of disinformation, especially about the Russia–Ukraine war, a topic on which Musk has repeatedly adopted Kremlin talking points.

Twitter users no longer need to actively seek out state-sponsored content to see it on the platform; the platform serves it to them. Many state-affiliated media outlets, particularly in authoritarian countries, exist to exert influence domestically and interfere abroad—including through coordinated disinformation campaigns during democratic elections. Without state media labels, these propaganda outlets with a clear mandate to spread disinformation operate without notification to users that the information is likely biased. No wonder RT’s Simonyan was so grateful.

Musk’s changes are a huge regression from the progress on disinformation and state interference made by Twitter and other social media platforms since the 2014 Russian invasion of Crimea and the 2016 U.S. presidential election. We know that what Musk is doing to Twitter flies in the face of existing research and practice on how to make social media resilient to hate, harassment, and authoritarian governments’ information operations. The new Twitter harms citizens of democracies and benefits autocratic governments.

What makes this even more disconcerting is that Musk has repeatedly adopted Kremlin narratives on Ukraine. Former U.S. National Security Council Russia specialist Fiona Hill wrote that, knowingly or unknowingly, “Elon Musk is transmitting a message for [Russian President Vladimir] Putin.” No stranger to autocratic talking points, Musk has also suggested that China be given partial control of Taiwan. He is not known for making any statements that are critical of the Chinese government.

That Musk calls the new Twitter Blue “verification” is an ingenious piece of disinformation itself.

Musk’s relationship with China is complicated by the fact that his companies (which include Tesla, Starlink, SpaceX, and Twitter) have various relationships, including financial ones, with the Chinese government. He has repeatedly heaped praise on China and the Chinese Communist Party. In his most recent visit to China in May and June 2023, Musk was uncharacteristically quiet on social media but visited Tesla’s factory in Shanghai and found time to compliment China’s technological development, continuing a long pattern of praise for China. He also described the economies of the United States and China as “conjoined twins” and stated his opposition to any moves that might undo that.

Almost 50 Years Into the Crypto Wars, Encryption’s Opponents Are Still Wrong

Wired’s Steven Levy reviews how the arguments in the almost 50-year-old war against encryption remain unchanged and how the need for encryption is more significant than ever.

Still, the foes of encryption keep fighting. In 2023, new battlefronts have emerged. The UK is proposing to amend its Investigatory Powers Act with a provision demanding that companies provide government with plaintext versions of communications on demand. That’s impossible without disabling end-to-end encryption. Apple has already threatened to pull iMessage and FaceTime out of the UK if the regulation passes, and other end-to-end providers may well follow, or find an alternative means to keep going. “I’m never going to willingly abandon the people in the UK who deserve privacy,” says Signal president Meredith Whittaker. “If the government blocks Signal, then we will set up proxy servers, like we did in Iran.”

And now the US Senate has opened its own front, with a proposed law called the Cooper Davis Act. It renews the crypto war under the guise of the perpetually failing war on drugs. In its own words, the legislation “require(s) electronic communication service providers and remote computing services to report to the Attorney General certain controlled substances violations.”

In other words, if someone is doing drug deals using your phone service or private messaging app, you must drop a dime on them. This, of course, requires knowing what people are saying in their personal messages, which is impossible if you provide them end-to-end encryption. But one draft of the bill sternly says that providing privacy to users is no excuse. A company’s deniability, reads the provision, applies only “as long as the provider does not deliberately blind itself to those violations.” No cryptanalysis is required to decode the real message: Put a back door into your system—or else.

Let me linger on the term “deliberate blindness.” Congress-critters, don’t you get that “deliberate blindness” is the essence of encryption? The makers and implementers of end-to-end privacy tools make a conscious effort—a deliberate one, if you will—to ensure that no one, including the system’s creators, can view the content that private citizens and businesses share among themselves. Signal’s Whittaker zeros in on the insidious irony of the nomenclature. “For many years, we've been talking about tech accountability and the need to enhance privacy and rein in the surveillance capacities of these companies,” she says. “But here you have a willful turn of phrase that frames the unwillingness to surveil as something negative.”

Right now, we need more end-to-end encryption. There’s little evidence that weakening encryption will make much of a dent on the fentanyl trafficking on our streets. But after the US Supreme Court’s Dobbs decision, end-to-end encryption is now a critical means of thwarting attempts to prosecute women who seek abortions in states where politicians lay claim to their major life choices. Last year, Meta turned over private messages from a Facebook user to Nebraska police that led to felony charges against a mother who aided her daughter in ending a pregnancy by abortion pills. If those messages had been protected by end-to-end encryption—as WhatsApp and Signal messages are—authorities would not have been able to read them. If “deliberate blindness” is banned, watch out for widespread snooping to find out who might be seeking abortions.

New Police Body Cam Data Exposes the True Scale of NYPD Violence Against Protesters

Wired’s Dhruv Mehrotra and Andrew Couts profile Codec, a little-known video categorization tool developed by the civil liberties-focused design agency SITU Research, which proved essential in the recent class action settlement requiring New York City to pay $9,950 each to around 1,380 George Floyd protesters.

Now an open source tool that’s free to use, Codec was first built for internal use only, according to Brad Samuels, a founding partner of SITU who oversees its research division. Codec categorizes video files using a range of criteria, including the location where a video was taken and the time and date it was recorded. Analysts can add their own parameters, such as whether a video shows police use of pepper spray, which enables them to more easily pinpoint clips that are relevant to an investigation.

Early on, the SITU team focused on international incidents, such as the 2014 Maidan Revolution in Ukraine and a 2019 uprising in Iraq in which security forces fired military-grade tear gas canisters directly at protesters’ heads. After the murder of George Floyd in May 2020, the US-based SITU team “pivoted hard” to investigate domestic incidents. The agency’s focus on the 2020 Black Lives Matter protests that followed resulted in investigations into police misconduct in Portland, Oregon, and Charlotte, North Carolina. Investigations in Mott Haven and across New York City are the latest to join that list.

The legal team behind the New York City-wide lawsuit, which operated through the National Lawyers Guild, a civil rights-focused nonprofit, obtained thousands of videos from the NYPD, the city, and social media feeds, most of which were unpublished until this week. Using Codec, analysts pinpointed four broad categories of what the legal team characterized as police misconduct: improper arrests, improper use of pepper spray, improper baton strikes, and excessive force.

While Codec greatly enhanced the legal team’s ability to analyze thousands of videos, it still took “a lot of human hours” to make the data usable and identify specific moments that were useful for their case, Rankin says. This included identifying the location of specific police interactions, categorizing the alleged misconduct, and adding accurate time stamps—all of which require ample hours of human work to complete, even with Codec’s help.

Examining a New Bill to Label Apps “Made in China”

Lawfare contributing editor Justin Sherman unpacks questions about mobile apps, data collection, and privacy and security risks through the lens of recently introduced, GOP-backed legislation called the Know Your App Act, which, while aimed at China, isn’t limited to that country but extends more broadly to “countries of concern,” highlighting how many members of Congress remain focused on the country of origin as a significant determinant of risk in the software and data spaces.

This relates to a broader issue with the bill’s scope. The bill’s sponsors clearly intend it to focus on China and what they would likely describe as authoritarian technology practices. But if one way to qualify as a country of concern is merely having state control over user data transfers, virtually any country with a consumer privacy regime could make the list. If interpreted literally, dozens and dozens of countries could have their privacy laws and regulations about the collection, transfer, storage, and processing of data fit under the definition laid out in the bill’s text—from the EU bloc with its General Data Protection Regulation and EU-U.S. data transfer controls to Brazil’s privacy law with controls for data transfers to foreign countries, Nigeria’s new Data Protection Act, and many more. The Information Technology and Innovation Foundation found that data localization measures jumped from 35 countries with 67 barriers in 2017 to 62 countries with 144 barriers in 2021; in 2022, McKinsey estimated that 75 percent of all countries had “some level of data localization rules.”

Of course, the counterargument might be that the U.S. government would not be “concerned” about many of those countries, like France or Germany. This is likely correct, but it’s still significant that the bill could make most countries with privacy laws qualify for the list. That seems to be a strange potential outcome from a piece of legislation that (while written broadly) is clearly focused on China. It also raises a point of potential contradiction within the bill itself. Government control over user data transfers in a country is a way to become a country of concern. Simultaneously, the label required in app stores for countries of concern does not say that a government has control over user data transfers but that, as described above, “data from the application could be accessed by a foreign government.” These are two distinct things. Countries don’t need a regulation over user data transfers per se to access data from apps; all they might need is a court order to compel the company to hand over data. Or, in countries like Russia, the state can simply use brute force to make companies comply. Control over user data transfers is one way in which increased state surveillance could potentially occur, but it is not the only way for it to occur.
