The media is full of stories warning about the threats AI poses to humanity. One of their favorite narratives is how cyber criminals are using artificial intelligence to create new attack techniques that threaten human existence.
Except that it’s not true.
AI still isn’t as clever as we think
The fact is, Artificial Intelligence systems are not truly ‘intelligent’. Applications like Google Bard and ChatGPT can help us perform common tasks more quickly and efficiently, but they still need human intervention to ‘work’.
This means that cyber criminals cannot simply tell an AI tool to ‘hack the Federal Reserve’ and expect the system to carry out a bank heist. However, they can ask AI to generate computer code to carry out specific tasks, some o which could be malicious.
How are criminals using AI?
That’s not to say that criminals are not using AI – they are. In most cases they are simply using the tools available to improve existing techniques.
Take phishing emails for instance. In the past, phishing messages could be quite easy to spot because of spelling errors and grammar mistakes. But by using ChatGPT, hackers can generate highly effective messages automatically – without spelling mistakes. It’s a very simple change, but it may make this technique marginally more effective.
As well as generating malicious code, criminals can also attack the Large Language Models that power public AI systems. Using a technique known as ‘prompt engineering’, hackers can trick the system into exposing sensitive personal data. This form of data theft is much easier than breaking into a properly protected corporate network. It also explains why everyone should avoid uploading personal information to AI models.
One other method to be aware of is AI ‘poisoning’. In this situation, criminals will attempt to subvert the AI by providing bad data to the system. The AI tool will process the bad data along with good – and this can lead to ‘hallucinations’ and other untrustworthy output.
Consider the problems Google had when they trained their Bard model on user generated data from Reddit. This led to Bard providing bad (potentially dangerous) advice to users, such as using glue to prevent cheese from sliding off their pizzas. Providing bad input data in this way has the potential to corrupt virtually any AI model.
Things may change
As you can see, AI has not yet revolutionized the cybercrime industry. However, as models become smarter and more powerful, there is a tiny (and highly unlikely) possibility that criminals will be able to create all-new exploits that really do threaten world order.
The good news is that AI developers are aware of these potential risks – and are already working to mitigate them before they can become a reality.
Read also: What Is Vishing (Voice Phishing)? Examples and Safeguards