Best Infosec-Related Long Reads of the Week, 5/20/23

Police platform brings surveillance to the suburbs, Second-rate dating sites target lonely people, AI making steganography more secure, Vietnam's troll army, Can TikTok be banned?

Important publishing notice: Metacurity will be on a break starting Monday, May 22. We resume publication on Tuesday, May 30. Stay safe and sane out there!

Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email. We’ll gladly credit you with a hat tip. Happy reading!

Privacy or safety? U.S. brings 'surveillance city to the suburbs'

Reuters’ Avi Asher-Schapiro digs into Fusus, a police technology platform that merges public and private cameras with predictive policing and other surveillance tools. In 2022, Fusus networked more than 33,000 individual cameras across over 2,400 US locations operated by police departments, school districts, and sheriffs as part of public safety initiatives. Critics say Fusus tramples residents' privacy and gives the police power that can be abused.

"This surveillance is happening in areas that are already over-policed - and it does not keep us more safe," said Nia Sadler of advocacy group Triad Abolition Project, which campaigned unsuccessfully against the use of Fusus tech in Winston-Salem, North Carolina last year.

Sadler said the surveillance cameras in Winston-Salem tend to be concentrated in areas where Black residents like themselves live - a common critique among local activists who object to the use of Fusus tech.

Albert Fox Cahn, founder of the Surveillance Technology Oversight Project in New York, said Fusus is helping smaller American communities mimic the surveillance models of big cities, "bringing the surveillance city to the suburbs."

That, he and other critics say, brings with it myriad privacy and civil rights concerns.

"Fusus takes surveillance tools that are constitutional on their own, and aggregates them into the kind of persistent tracking that is blatantly unconstitutional (when used by government bodies)."

This Is Catfishing on an Industrial Scale

Laura Cole offers this heartbreaking story of how desperate freelancers are lured into becoming “virtuals”: remote workers who chat under fake personas for companies running second-rate dating and hookup sites, which bilk often lonely marks out of the coins they must buy to continue chatting.

Once a user is hooked into a conversation, the aim is always to stretch out the talking phase. “If a New York user asks to meet up with your virtual, the freelancer is to say ‘let me check my schedule and let you know,’” says Liam, even if the freelancer is actually writing from Budapest. If the user asks to move to a free messaging app, the freelancers must write through the virtuals “I prefer to stay in here until I know you better” or “I feel safer on this app until we are better acquainted,” and so on.

Alice says she saw chats where users said they had bought expensive gifts, “nevermind the weeks of money they had spent talking to the virtuals.” She spoke to elderly users in care homes, and others under protective conservatorships who asked virtuals to wait until they were given their next allowance. “I think a lot of the users are vulnerable enough to believe it is real,” she says.

One morning, Alice opened her chat to a new message:

“Please stop talking to my husband, he is spending money we do not have to talk to you,” read the chat line.

Secret Messages Can Hide in AI-Generated Media

Stephen Ornes in Quanta Magazine explores how steganography, a technique for hiding messages in images or other media (even, in ancient times, shaved heads) can be made more secure by large language models such as ChatGPT.

Sokota, Schroeder de Witt and their team had been trying to find ways to exploit the tool for new approaches to deep learning. But one day, Sokota recalled, their collaborator Martin Strohmeier mentioned that their work on minimum entropy coupling reminded him of the security issues around steganography.

Strohmeier was making a casual comment, but Sokota and Schroeder de Witt took it seriously. The group soon figured out how to use a minimum entropy coupling to design a steganographic procedure that met Cachin’s requirements for perfect security in the context of real-world machine learning systems.

“I was surprised to see that it has such a nice application in steganography,” said Murat Kocaoglu, an electrical and computer engineer at Purdue University. He doesn’t work with steganography, but he did help design one of the algorithms the team used in the paper. “This work really ties nicely back to minimum entropy coupling.”

Then the team went further, showing that for a steganography scheme to be as computationally efficient as possible, it must be based on a minimum entropy coupling. The new strategy lays out clear directions for how to achieve both security and efficiency — and suggests that the two go hand in hand.
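The core idea can be illustrated with a toy sketch. Below is a deliberately simplified Python example, assuming a uniform next-token distribution at each step: when the sender's model offers 2^k equally likely tokens, k secret bits can drive the token choice, and a receiver running the same model recovers them exactly, while the stegotext remains distributed like an ordinary sample. This is only an illustration of distribution-preserving steganography in general; the researchers' actual construction uses minimum entropy coupling to handle arbitrary (non-uniform) distributions with provable security, which this simplification does not attempt. The `embed`/`extract` functions and the hard-coded vocabularies are hypothetical, not from the paper.

```python
def vocab_bits(n):
    # Number of whole secret bits one uniform choice among n tokens can carry
    # (floor(log2 n)); only the first 2^k tokens are ever selected.
    return n.bit_length() - 1

def embed(bits, vocab_per_step):
    """Drive each token choice with secret bits. Because every step's vocab
    is assumed uniform, the resulting token sequence is distributed exactly
    like an ordinary sample from the model."""
    tokens, i = [], 0
    for vocab in vocab_per_step:
        k = vocab_bits(len(vocab))
        idx = int(bits[i:i + k] or "0", 2)  # next k secret bits pick the token
        tokens.append(vocab[idx])
        i += k
    return tokens

def extract(tokens, vocab_per_step):
    """Receiver side: rerun the same model steps and read the bits back
    from which token was chosen at each step."""
    bits = ""
    for tok, vocab in zip(tokens, vocab_per_step):
        k = vocab_bits(len(vocab))
        bits += format(vocab.index(tok), f"0{k}b")
    return bits

# Toy "model": two steps, four equally likely tokens each (2 bits per step).
vocabs = [["the", "a", "one", "my"], ["cat", "dog", "fox", "owl"]]
stego = embed("1001", vocabs)       # → ["one", "dog"]
recovered = extract(stego, vocabs)  # → "1001"
```

The design point the toy example makes is the one the article emphasizes: security comes from the stegotext being statistically indistinguishable from normal model output, which is trivial here only because the distribution is uniform; minimum entropy coupling is what extends that guarantee to real, skewed token distributions.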

The Vietnamese military has a troll army and Facebook is its weapon

Danielle Keeton-Olsen in Rest of World reveals how Force 47, a Vietnamese military group devoted to policing the country’s internet, operates freely across major platforms like Facebook and YouTube, harassing activists and journalists in an aggressive campaign to quell criticism of the government by mass-reporting their work as violations of community standards.

Khiet Ngan, the host of Vietnamese-language news and interview show V5TV, said she noticed that the repeated community-standards violations filed against her had effectively hidden her profile, and she suspects her page was demoted in the Facebook feed.

She first noticed her pages getting targeted around a year ago, when a swarm of trolls began calling the Australia-based vlogger a prostitute or a loser, but never commenting on the actual content of her show, she said.

Then her engagement on Facebook started to drop off, from fans and trolls alike. Her livestreams drew thousands of views in April 2022 but plummeted to a few hundred per stream this month. She said fans in Vietnam and Australia began to message her, saying they no longer get notifications when she goes live.

“If we continued to push back on these requests, it is highly likely our platforms would be blocked in their entirety.”

“You have to keep disputing [reports] while doing your work,” she said. “It’s very tiring actually but we have the confidence that what we’re doing is right and it gives us motivation, we know we’re doing a good job because if we’re not then they don’t bother.”

Should governments ban TikTok? Can they? A cybersecurity expert explains the risks the app poses and the challenges to blocking it

In The Conversation, Doug Jacobson, Professor of Electrical and Computer Engineering at Iowa State University, walks through the risks that TikTok poses and why it will be challenging to block it in the US, as the state of Montana has already done and as the US government contemplates doing.

The Montana law aims to use fines to coerce companies into enforcing its ban. It’s not clear if companies will comply, and it’s unlikely that this would deter users from finding workarounds.

Meanwhile, if the federal government comes to the conclusion that TikTok should be banned, is it even possible to ban it for all of its 150 million existing users? Any such ban would likely start with blocking the distribution of the app through Apple’s and Google’s app stores. This might keep many users off the platform, but there are other ways to download and install apps for people who are determined to use them.

A more drastic method would be to force Apple and Google to change their phones to prevent TikTok from running. While I’m not a lawyer, I think this effort would fail due to legal challenges, which include First Amendment concerns. The bottom line is that an absolute ban will be tough to enforce.

There are also questions about how effective a ban would be even if it were possible. By some estimates, the Chinese government has already collected personal information on at least 80% of the U.S. population via various means. So a ban might limit the damage going forward to some degree, but the Chinese government has already collected a significant amount of data. The Chinese government also has access – along with anyone else with money – to the large market for personal data, which fuels calls for stronger data privacy rules.
