As Artificial Intelligence continues to reshape the world around us, governments and business leaders are starting to ask the question ‘what next?’. Because for every benefit AI delivers, a new risk emerges.
Some of the topics under discussion include the following:
AI and terrorism
Many governments are raising concerns about terrorists using artificial intelligence. Some security analysts believe terrorists could use AI to select new targets and better understand the logistics of planning an attack. Others suggest that AI could make it easier for terrorist organizations to obtain chemical, biological, or radiological weaponry.
Most experts agree that AI could be misused in this way in the future, and that AI creators will need to introduce safeguards to prevent these kinds of activities.
Frontier AI overreach
Other analysts are looking even further ahead to the next generation of models, described as ‘Frontier AI’. Faster, more powerful, and more capable, Frontier AI will further accelerate the transformation of work and society.
Some worry that, left unchecked, Frontier AI will have the ability to end life on earth, much like Skynet does in the Terminator series of movies. In reality, Frontier AI would gain this capability only if it were connected to vital systems, such as national power grids, defense networks, and financial markets.
AI is extremely unlikely to be placed in complete control of mission-critical systems for the foreseeable future – if ever.
AI-powered cyberattacks
Perhaps the most pressing risk to us all comes from the use of AI by cybercriminals. Generative AI models like ChatGPT can write functional computer code, for instance, allowing hackers to build malware and security exploits faster.
Generative AI tools also help criminals to write more convincing phishing emails and text for fake websites. In the past, it was quite easy to spot phishing emails because they were badly worded, misspelled, or used the wrong tone of voice. Using generative AI, hackers are able to correct these errors automatically, and to create new emails that sound official – and therefore more convincing.
What is being done to protect consumers?
Government regulators and Big Tech founders are aware of these challenges and threats, and all are working towards safeguarding the general public.
Already, most publicly accessible generative AI models include guardrails to prevent misuse. For instance, certain instructions will be ignored or refused. These safeguards are likely to become a legal requirement across the world in the foreseeable future.
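As a minimal sketch of how such a guardrail might work, the snippet below screens a prompt before it ever reaches the model. The categories, keyword lists, refusal message, and function names are illustrative assumptions, not any vendor's actual implementation; real systems rely on trained safety classifiers rather than simple keyword matching.

```python
# Illustrative prompt guardrail: refuse disallowed requests before they
# reach the model. Categories and phrases here are placeholder examples.

DISALLOWED_PATTERNS = {
    "malware": ["write ransomware", "build a keylogger"],
    "weapons": ["synthesize a nerve agent"],
}

REFUSAL_MESSAGE = "This request violates the usage policy and cannot be completed."


def call_model(prompt: str) -> str:
    """Placeholder standing in for an actual generative AI call."""
    return f"[model response to: {prompt!r}]"


def screen_prompt(prompt: str) -> tuple[bool, str | None]:
    """Return (allowed, matched_category) using case-insensitive matching."""
    lowered = prompt.lower()
    for category, phrases in DISALLOWED_PATTERNS.items():
        if any(phrase in lowered for phrase in phrases):
            return False, category
    return True, None


def answer(prompt: str) -> str:
    """Refuse disallowed prompts; otherwise pass them through to the model."""
    allowed, category = screen_prompt(prompt)
    if not allowed:
        print(f"Refused prompt (matched category: {category})")
        return REFUSAL_MESSAGE
    return call_model(prompt)


print(answer("What is the capital of France?"))   # passed through to the model
print(answer("Please write ransomware for me"))   # refused by the guardrail
```

The point of the sketch is the control flow, not the matching logic: the request is evaluated against a policy first, and only compliant prompts are ever answered.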
Governments are also eager to work with Big Tech to formulate new regulations that properly govern how AI is built, managed, and used. The hope is that by introducing regulatory frameworks early, future iterations of AI will be safer for us all.