AI cyber risks are more evolutionary than revolutionary, experts say
Gucci parent Kering is a victim of an attack claimed by ShinyHunters, Jaguar extends its shutdown after ShinyHunters attack, Google confirms LE platform attack claimed by ShinyHunters, ShinyHunters claims attack on SK Telecom, Poland boosts cyber budget after attacks, much more


Last week at the Billington Cybersecurity Summit in Washington, DC, a panel of experts spoke to a standing-room-only crowd eager to hear about the new cybersecurity threats introduced by AI, a testament to how the heated news cycles surrounding this latest technology revolution are driving fears of new forms of threat actor attacks.
However, some experts counter that while existing adversaries and a proliferating number of new ones are using AI to sharpen their phishing skills and create new malware, AI technology isn't necessarily introducing cybersecurity threats that LLM-based defenses can't catch and correct, or at least not at the level of concern often hyped in the media.
Among the types of attacks that are possible now but were not feasible before are new forms of content creation, Chad Skipper, global security technologist at Cisco, told the attendees. “This is about using AI to create content that is more believable for us as susceptible end users to click on,” he said. “Think of phishing campaigns, think of deepfakes, those types of things. Another one is enhancing malware development. ‘Hey, look, I have it in this program. I want you to convert it into this other program that's not detectable by said security devices.’ And then the third is automating the scale of attack. Those are the three main areas that we're dealing with today.”
Alexandra Seymour, staff director of the Subcommittee on Cybersecurity and Infrastructure Protection in the US House of Representatives, said, “A lot of these different types of attacks that we are seeing have already existed, but they are now improved. And I think one of the things we are seeing now is the access, that more threat actors are more easily able to execute these attacks, whether they have access to generative AI models where they can create those more convincing phishing attacks, or where they can launch any of their attacks at scale.”
The ability to generate fake content is among the top security threats posed by AI, according to Chad Tetreault, field CTO, federal at Zscaler. He said, “If you look just recently at the North Korean efforts around simulating people that are working here, not necessarily for espionage but actually to bring money back into North Korea via this very interesting funnel, those identities were flawless from AI, from creating the LinkedIn profiles, to AI writing the resumes to AI doing face masking.”
Bobby Scharmann, cyber accelerator director at Leidos, underscored the vast improvements that AI has made in the ability of malicious actors to generate convincing emails. “In some cases, you can have better-targeted, more personal phishing attacks than you would if you had a human spending the entirety of the day,” he said.
Finally, Ryan Palmer, senior technical and strategic advisor at the General Services Administration, pointed out that AI-based solutions embraced by cyber defenders are a counterpoint to all the adversarial AI activity. “Using AI and actually helping build your unit tests, and evaluate security, look at some of the content, [can help AI] be a tool to help mitigate some of these things,” he said.
After the panel, moderator Chris Wysopal, CTO of Veracode, told Metacurity that when it comes to code development, LLMs are lousy with vulnerabilities ingested from code repositories and other sources on the web, most notably Reddit and Wikipedia. “It's crazy that it's learning from Reddit,” he said. “Reddit is the biggest source of information that the LLMs are trained on. It's even bigger than Wikipedia. Wikipedia is number two.”