The UK’s National Cyber Security Centre (NCSC) is warning that Artificial Intelligence tools are set to power a new wave of cybercrime. According to their predictions, AI tools will allow hackers of all abilities to ‘do’ more, creating a surge in attacks in the near term.
Experienced hackers get smarter with AI
Building on their existing knowledge of AI and cybersecurity, experienced hackers are expected to use artificial intelligence in most of their criminal enterprises. Perhaps more worrying is the prediction that there will be increased activity in virtually every cybersecurity threat area – particularly social engineering, new malware development and data theft.
The NCSC is also warning that well-resourced criminal gangs will be able to build their own AI models to generate malware that can evade detection by current security filters. However, because this requires access to quality exploit data and samples of existing malware to ‘train’ the system, these activities will likely be restricted to major players, like nation states engaging in cyber warfare.
Novice hackers get started with AI
One of the most useful aspects of generative AI tools and large language models (LLMs) like ChatGPT and DALL-E is that anyone can use them to produce good quality content. However, the same applies to malicious AI – virtually anyone can use these tools to create effective cybersecurity exploits.
The NCSC warning suggests that low-skill hackers, opportunists and hacktivists may begin using AI tools to engage in cybercrime. Of particular concern is the use of AI for social engineering attacks designed to steal passwords and other sensitive personal data. Experts caution that tools like ChatGPT can generate text for phishing emails, for instance, allowing virtually anyone to launch a moderately effective campaign for minimal cost.
It is at this low end of the scale where there is likely to be the greatest uplift in criminal activity between now and the end of 2025.
What about AI safeguards?
Most generative AI systems include safeguards to prevent users from generating malicious code or the like. You cannot use ChatGPT to write a ransomware exploit, for instance.
However, free and open-source artificial intelligence engines do exist, and highly skilled, well-funded hacking groups have already built their own safeguard-free AI models. With access to the ‘right’ training data, these models are more than capable of creating malware and the like.
It is important to realize that AI will not bring about a cybercrime apocalypse on its own. The tools used by hackers are unable to develop entirely new exploits; they can only use their training to refine and improve existing techniques. Most AI-powered attacks in the coming months will simply be updates to exploits we already encounter every day. Humans are still an integral part of identifying and building new threats.
Be prepared
There is likely to be a surge in attacks in the next year, so it pays to be prepared. Download a free trial of Panda Dome and ensure that your devices are protected against current and future threats today.