Jump to content

Swiss DoD research arm releases a report on large language models chatbots based on them in cyber-security and cyber-defense


chiffa

Recommended Posts

Cyber-Defence Campus - the research arm of the Swiss DoD in everything cyber - have released yesterday their report regarding the impact they think LLMs and their conversational fine-tunes could have on cyber-security in the nearest future. Excluding the stuff specific to the military, ,their main concerns are:

  • Information operations, bots and harassment, especially with small models able to run on commodity hardware (LLaMA that leaked to 4chan is cited)
  • Private information leakage from the models that are being fine-tuned from interactions with users (eg ChatGPT and Bing Chat), given that's what models doing reinforcement from human feedback are doing
  • Ability to search internet deeper, faster and get better summaries. Notably getting feedback from models regarding malware structuring or ideas for attack chains (they cite WannaCry design fault as something that could have avoided if the author ran its ideas against a search model that would be possible in near future)
  • Phishing, notably with reinforcement through click-rate and usage of AIs with access to to the documents databases and mailboxes to get faster summaries/retrieve documents they need, but also write phishing emails that continue ongoing conversations naturally,  even if the attacker doesn't speak the language of the target.
  • Injection of vulnerabilities into generated code by the models, although given the quality of generated code, if its copied by beginner developers and put into production, it's already Swiss cheese of vulnerabilities.
  • Hijacking of front-ends based on LLM chatbots. They believe it is impossible to properly secure LLMs against jailbreaking because of the tech behind, so anything controlled by LLMs will remain a giant pile of SQL-injection - like vulnerabilities
  • Faster lateral movement and target documents acquisition/modifications once the attackers are in, leaving minimal time for incident response, especially when involving humans

 

https://arxiv.org/abs/2303.12132

  • Thanks 3
Link to comment
Share on other sites

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now
×
×
  • Create New...