Over the last few years, fake news has been a major worry. Fake news is believed to have played a significant role in major electoral processes such as the 2016 US presidential election and the Brexit referendum on the United Kingdom's withdrawal from the European Union that same year.
There is now another kind of fake that is causing concern: deepfakes (a portmanteau of "deep learning" and "fake"). A deepfake is a seemingly real video or audio recording of a person, created or edited using artificial intelligence. To do this, deepfakes use generative adversarial networks (GANs), a kind of algorithm that can create new data from existing datasets.
For example, a GAN can analyze thousands of recordings of a person’s voice, and from this analysis, create a totally new audio file that sounds the same, and uses the same speech patterns.
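To make the adversarial idea concrete, here is a minimal, illustrative sketch in plain NumPy: a tiny linear "generator" learns to mimic samples from a Gaussian distribution (a crude stand-in for a voice feature) by trying to fool a logistic "discriminator". Every name, number, and distribution here is invented for illustration; real deepfake systems use deep neural networks and vastly richer data.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy stand-in for real data (e.g., one acoustic feature of a voice):
# the generator must learn to mimic samples from N(4.0, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# both deliberately tiny; actual deepfake models are deep networks.
a, b = 0.1, 0.0
w, c = 0.0, 0.0
lr_d, lr_g = 0.1, 0.02

for step in range(2000):
    # Discriminator: push D(real) toward 1 and D(fake) toward 0
    # (several substeps, so it stays near-optimal for the current generator).
    for _ in range(5):
        for x, label in ((real_batch(64), 1.0),
                         (a * rng.normal(size=64) + b, 0.0)):
            d = sigmoid(w * x + c)
            g_logit = d - label        # gradient of logistic loss w.r.t. logit
            w -= lr_d * np.mean(g_logit * x)
            c -= lr_d * np.mean(g_logit)

    # Generator: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=64)
    d = sigmoid(w * (a * z + b) + c)
    g_logit = d - 1.0                  # generator wants the "real" label
    g_x = g_logit * w                  # chain rule through the logit
    a -= lr_g * np.mean(g_x * z)
    b -= lr_g * np.mean(g_x)

fake = a * rng.normal(size=1000) + b
print(f"fake mean ~ {np.mean(fake):.2f} (target 4.0)")
```

The key point of the adversarial setup is visible even at this scale: the generator never sees the real data directly; it only receives gradient signal through the discriminator's attempts to tell real from fake.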
The worries surrounding this technology lie in the possibility that it could be used to spread fake videos and recordings of politicians and other public figures. For example, a deepfake of a politician giving a racist speech could influence the outcome of an election, or even incite violence.
Deepfake technology and cybercrime
Although deepfakes have yet to cause any major problems in the world of politics, we have now seen their first use in the world of cybercrime.
In August this year, it was revealed that a cybercriminal had used deepfake technology to scam a company out of €220,000. The fraud began back in March 2019, when the scammer created a deepfake imitating the voice of the CEO of the victim’s parent company.
The victim, the CEO of an energy company, received a call that seemed to be from his boss. In the call, the chief executive asked for an ‘urgent’ transfer of £200,000 to a Hungarian provider, and told him he would be reimbursed. The victim was tricked into believing that the voice was his boss’s: it had a slight German accent, just like the real CEO’s, which made the scam all the more believable.
Once the transaction had been confirmed, the scammers called back, asking for another transfer. By this time, the CEO had begun to grow suspicious, and refused to make the transfer. The funds were reportedly sent from Hungary to Mexico, before being transferred to other locations.
Although so-called “voice fraud” is nothing new (vishing, or voice phishing, incidents grew 350% between 2013 and 2017), this incident is the first of its kind to use deepfake technology. Cybersecurity experts fear it could mark the start of a new cybercriminal trend of using artificial intelligence in this way.
Cyberscams: a growing threat
Cybercriminals’ efforts to scam companies have increased significantly. The amount of money lost in business email compromise (BEC) scams doubled between 2017 and 2018, and we regularly see headlines related to this cybercriminal tactic. Recently, 281 people were arrested for carrying out this kind of scam, and two weeks ago, Toyota announced that a subsidiary of the company had lost $37 million in this kind of fraud.
Artificial intelligence, for good and evil
Although the example we have seen here demonstrates that artificial intelligence can be used to carry out cybercrimes, it can also be used to stop them. Deep learning and machine learning play an important role in automating the detection of anomalies and cyberthreats that can endanger the IT systems of any organization.
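As a simple illustration of that idea, the sketch below flags unusual activity in synthetic traffic data using a robust z-score based on the median absolute deviation. The feature, the injected spikes, and the threshold are all invented for the example; production detection systems use far more sophisticated models, but the principle of "learn what normal looks like, then flag deviations" is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: requests per minute from a workstation.
traffic = rng.normal(loc=100, scale=10, size=500)

# Inject two anomalous spikes (e.g., sudden data-exfiltration bursts).
traffic[50] = 400
traffic[300] = 380

def zscore_anomalies(x, threshold=6.0):
    """Flag points far from the median, in robust standard-deviation units."""
    median = np.median(x)
    mad = np.median(np.abs(x - median))       # median absolute deviation
    scores = np.abs(x - median) / (1.4826 * mad)  # scale MAD to ~std units
    return np.where(scores > threshold)[0]

anomalies = zscore_anomalies(traffic)
print(anomalies)  # indices of the flagged spikes
```

Using the median and MAD rather than the mean and standard deviation keeps the baseline itself from being skewed by the very outliers we are trying to detect.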
Cybercriminals will never stop innovating in their search for new techniques to get onto organizations’ networks, steal company data, and make money. This is why it is vital to stay up to speed with all the latest cybersecurity trends.