Best Infosec-Related Long (and Longish) Reads for the Weeks of 12/23/23 and 12/30/23

Cars have become mobile tracking machines for domestic abusers, ChatGPT exposes sensitive information, China's MSS gets sharper at spying, How to stay anonymous online, NIST snags NYT good tech award

Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form (and longish) infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email. We’ll gladly credit you with a hat tip. Happy reading!

Your Car Is Tracking You. Abusive Partners May Be, Too.

The New York Times’s Kashmir Hill offers a deep look into how modern automobiles equipped with Internet-connected services and advanced data collection functions can become tracking machines for domestic abusers with little recourse available for many targeted victims.

Modern cars have been called “smartphones with wheels” because they are internet-connected and have myriad methods of data collection, from cameras and seat weight sensors to records of how hard you brake and corner. Most drivers don’t realize how much information their cars are collecting and who has access to it, said Jen Caltrider, a privacy researcher at Mozilla who reviewed the privacy policies of more than 25 car brands and found surprising disclosures, such as Nissan saying it might collect information about “sexual activity.”

“People think their car is private,” Ms. Caltrider said. “With a computer, you know where the camera is and you can put tape over it. Once you’ve bought a car and you find it is bad at privacy, what are you supposed to do?”

Privacy advocates are concerned by how car companies are using and sharing consumers’ data — with insurance companies, for example — and drivers’ inability to turn the data collection off. California’s privacy regulator is investigating the auto industry.

For car owners, the upside of this data-palooza has come in the form of smartphone apps that allow them to check a car’s location when, say, they forget where it is parked; to lock and unlock the vehicle remotely; and to turn it on or off. Some apps can even remotely set the car’s climate controls, make the horn honk or turn on its lights. After setting up the app, the car’s owner can grant access to a limited number of other drivers.

Domestic violence experts say that these convenience features are being weaponized in abusive relationships, and that car makers have not been willing to assist victims. This is particularly complicated when the victim is a co-owner of the car, or not named on the title.

How Strangers Got My Email Address From ChatGPT’s Model

New York Times graphics editor Jeremy White recounts how Rui Zhu, a Ph.D. candidate at Indiana University Bloomington, got his email address from GPT-3.5 Turbo, highlighting how ChatGPT and generative AI can reveal even more sensitive information as these technologies evolve.

When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has “learned” from reams of information — training data that was used to feed and develop the model — to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim.

In theory, the more data that is added to an L.L.M., the deeper the memories of the old information get buried in the recesses of the model. A process known as catastrophic forgetting can cause an L.L.M. to regard previously learned information as less relevant when new data is being added. That process can be beneficial when you want the model to “forget” things like personal information. However, Mr. Zhu and his colleagues — among others — have recently found that L.L.M.s’ memories, just like human ones, can be jogged.

Chinese Spy Agency Rising to Challenge the C.I.A.

Based on more than two dozen interviews and internal and public Chinese documents, Edward Wong, Julian E. Barnes, Muyi Xiao, and Chris Buckley in the New York Times profile how China’s Ministry of State Security, or MSS, is using better training, bigger budgets, and advanced technologies, especially artificial intelligence, to sharpen the country’s spying skills.

The competition between the American and Chinese spy agencies harks back to the K.G.B.-versus-C.I.A. rivalry of the Cold War. In that era, the Soviets built an agency that could pilfer America’s most closely held secrets and run covert operations while also producing formidable political leaders, including Vladimir V. Putin, the president of Russia.

But there is a notable difference. Because of China’s economic boom and industrial policies, the M.S.S. is able to use emerging technologies like A.I. to challenge American spymasters in a way the Soviets could not. And those technologies are top prizes in espionage efforts by China and the United States.

“For China in particular, exploiting the existing technology or trade secrets of others has become a popular shortcut encouraged by the government,” said Yun Sun, director of the China program at the Stimson Center, a Washington-based research institute. “The urgency and intensity of technological espionage have increased significantly.”

The M.S.S. has intensified its intelligence collection on American companies developing technology with both military and civilian uses, while the C.I.A., in a change from even a few years ago, is pouring resources into collecting data on Chinese companies developing A.I., quantum computing and other such tools.

Though the U.S. intelligence community has long collected economic intelligence, gathering detailed information on commercial technological advances outside of defense companies was once the kind of espionage the United States avoided.

How to Be More Anonymous Online

Wired’s Matt Burgess delves into the near-impossible task of remaining anonymous online in the face of tracking cookies and ubiquitous device identifiers and scans the array of solutions to this problem, including using Tor and VPNs, choosing privacy-oriented apps such as Signal, using burner phones, and more.

You’re constantly being tracked online. Often the main culprit is the advertising industry and the tech companies heavily reliant on advertising to make money (think: Google and Meta). Invisible trackers and cookies embedded in websites and apps can follow you around the web.

Start with your web browser. Ideally, you want to block invisible trackers and ads that have tracking tech embedded. Advertisers can also track you using fingerprinting, a sneaky profiling method where the settings of your browser and device (such as language, screen size, and many other details) are used to single you out. If you want to see how your current browser tracks you, the Electronic Frontier Foundation’s Cover Your Tracks tool can run a real-time test on your system. Using Chrome, the world’s most popular browser, neither tracking ads nor invisible trackers are blocked for me, and my browser has a unique fingerprint.
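The fingerprinting technique described above can be illustrated with a short sketch: a tracker collects individually mundane browser and device settings and combines them into an identifier that is often unique to one visitor. This is a hypothetical Python illustration with made-up attribute values, not the method of any actual tracking script (real fingerprinting runs in JavaScript in the browser and draws on many more signals, such as canvas rendering and installed fonts).

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Derive a stable identifier by hashing browser/device attributes.

    No single setting identifies anyone, but the combination of
    language, screen size, time zone, fonts, and so on is often
    unique enough to single a visitor out across sites.
    """
    # Serialize with sorted keys so identical attributes always
    # produce an identical hash, regardless of dict ordering.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical values of the kind a page script can read.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "language": "en-US",
    "screen": "1920x1080",
    "timezone": "America/New_York",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
}

print(browser_fingerprint(visitor))
```

Because the identifier is derived from the settings themselves rather than stored on the device, clearing cookies does nothing to it; the same configuration yields the same fingerprint on the next visit, which is why tools like Tor Browser work to make every user's settings look alike.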

For the most anonymity, the Tor Browser is best. Downloadable in the same way as any other browser, it encrypts your traffic by sending it through a number of servers and also deploys anti-censorship, anti-fingerprinting, and other privacy measures. Because of its advanced protections, however, Tor can sometimes be slower than other browsers. Several privacy-focused browsers, such as Firefox, the Mullvad Browser, and Brave, offer enhanced protections against trackers as well as further customizable privacy settings.

The 2023 Good Tech Awards

In his annual struggle to overcome his negativity bias, the New York Times’s Kevin Roose highlights the beneficial tech projects of 2023, giving a surprisingly big nod to, among others, the National Institute of Standards and Technology for its work on artificial intelligence.

One of the more surprising — and, to my mind, heartening — tech trends of 2023 was seeing governments around the world get involved in trying to understand and regulate A.I.

But all that involvement requires work — and in the United States, a lot of that work has fallen to the National Institute of Standards and Technology, a small federal agency that was previously better known for things like making sure clocks and scales were properly calibrated.

The Biden administration’s executive order on artificial intelligence, released in October, designated NIST as one of the primary federal agencies responsible for keeping tabs on A.I. progress and mitigating its risks. The order directs the agency to develop ways of testing A.I. systems for safety, come up with exercises to help A.I. companies identify potentially harmful uses of their products, and produce research and guidelines for watermarking A.I.-generated content, among other things.

NIST, which employs about 3,400 people and has an annual budget of $1.24 billion, is tiny compared with other federal agencies doing critical safety work. (For scale: The Department of Homeland Security has an annual budget of nearly $100 billion.) But it’s important that the government build up its own A.I. capabilities to effectively regulate the advances being made by private-sector A.I. labs, and we’ll need to invest more in the work being done by NIST and other agencies in order to give ourselves a fighting chance.

Read more