With every passing month new stories emerge of how Artificial Intelligence (AI) systems are getting “smarter”. From Amazon Alexa learning your daily routine to automated fake news detection, there’s always something else that well-trained AI seems to be able to do.
But the latest AI development from Google is a little more unusual. A paper released by scientists at Google Brain details how they have trained a computer system to “smell”.
Accurate molecular predictions
This new AI, known as a graph neural network (GNN), is not a ‘nose’ in the way we would expect. Instead, the algorithm analyses a sample of a scent, examining the individual molecular structures of the substance.
The analysed molecular structure is then compared against a database of smells the AI already ‘knows’. Where an exact match is found, the GNN can tell exactly what the compound is and what it should smell like.
When there is no exact match, the AI makes a prediction based on similarities with other known scents at the molecular level. According to Google, the predictions generated by the GNN were highly accurate, correctly predicting how more than 1,600 scents would actually smell.
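The two-step process described above can be sketched in a few lines of code. Note that this is a heavily simplified illustration: the molecule fingerprints and scent labels below are invented, and Google's actual system is a trained graph neural network, not the nearest-neighbour lookup shown here.

```python
import math

# Hypothetical "database of known smells": fingerprint vector -> scent label.
# Real systems derive such vectors from molecular structure; these are invented.
KNOWN_SCENTS = {
    (1.0, 0.0, 0.5, 0.2): "vanilla",
    (0.1, 0.9, 0.3, 0.0): "citrus",
    (0.0, 0.2, 0.8, 0.7): "woody",
}

def cosine_similarity(a, b):
    """Similarity between two fingerprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict_scent(fingerprint):
    # Step 1: exact match against the database of known smells.
    if fingerprint in KNOWN_SCENTS:
        return KNOWN_SCENTS[fingerprint]
    # Step 2: no exact match, so predict from the most similar known molecule.
    best = max(KNOWN_SCENTS, key=lambda known: cosine_similarity(fingerprint, known))
    return KNOWN_SCENTS[best]

print(predict_scent((1.0, 0.0, 0.5, 0.2)))  # exact match -> "vanilla"
print(predict_scent((0.2, 0.8, 0.4, 0.1)))  # nearest known scent -> "citrus"
```

The point of the sketch is the fallback logic: an unseen molecule is never rejected outright, but mapped to whatever it most closely resembles.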
A different way to tackle a problem
Google are keen to stress that the GNN works in a different way to the human nose. Our noses do not, for instance, contain the mass spectrometry equipment the GNN relies on to read a substance’s molecular structure.
What is interesting is how the GNN generates accurate conclusions that match our own biological experience – but in a totally different way.
The continual advancement of AI is exciting – and concerning. For every development that increases human understanding and improves society, there is a risk that the same technology will be misused.
This tension is most obvious in the context of understanding human behaviour. Google Now, for instance, can accurately predict your preferences and actions, scheduling appointments or making context-aware suggestions to make your life easier. But those same predictions could be used to steal personal data, impersonate you online or commit other crimes.
Applying human intelligence
The reality is that most of our activity, both online and offline, is analysed every day. In most cases this analysis is beneficial and harmless. But the same analytical techniques can be used for scams like phishing, where fake emails are crafted to steal information such as your bank account details. The more closely these messages match your own behaviour, the more likely they are to work.
As you go about your business, pay close attention to the websites you visit and the emails and messages you receive. There is a very real chance that at least some of these communications are fake – and you’ll have to stay alert to avoid becoming a victim.
In the meantime, download a free trial of Panda DOME. Our anti-malware system uses Adaptive Defense machine learning to identify and block suspicious behaviour before cybercriminals can rob you.