Artificial intelligence teaches computers to spot malicious tinkering with their own code.
Gone are the days of the hobbyist hacker – modern malware is a trillion-euro business.
Dr Eva Maia at VisionTechLab, a young cybersecurity firm in Matosinhos, Portugal, said that attacks on computer networks are not only multiplying but also growing sneakier.
‘Malware typically goes unnoticed for months by remaining dormant on infected computers,’ said Dr Maia. ‘This was recently the case in the Panama Papers attack, where no one knew that the network had been compromised until long after the damage was done.’
In the EU-funded SecTrap project, VisionTechLab has been studying the market for a new line of defence that could rob malicious software of its current hiding places.
Conventional antivirus programs and firewalls are trained like nightclub bouncers to block known suspects from entering the system. But new threats can be added to the wanted list only after causing trouble. If computers could instead be trained as detectives, snooping around their own circuits and identifying suspicious behaviour, hackers would have a harder time camouflaging their attacks.
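The bouncer analogy can be made concrete. A minimal sketch, not any vendor's actual code: signature-based scanning compares a file's cryptographic hash against a blocklist of previously seen malware, so a brand-new sample slips straight past until its signature is added.

```python
# Sketch of signature-based detection: match a file's SHA-256 digest
# against a blocklist of known malware hashes.
import hashlib

# Hypothetical blocklist; this entry is the SHA-256 digest of an empty file,
# standing in for a previously catalogued sample.
KNOWN_MALWARE_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_threat(payload: bytes) -> bool:
    """Return True only if the payload's hash is already on the wanted list."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MALWARE_HASHES

print(is_known_threat(b""))          # True: this sample is on the blocklist
print(is_known_threat(b"new worm"))  # False: an unseen sample passes the check
```

The second call shows the weakness the article describes: any sample not yet catalogued is waved through, however malicious its behaviour.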
The challenge is that machines have traditionally been built to follow orders, not recognise patterns or draw conclusions. Dr Maia is working on advances in artificial intelligence (AI) to change that.
‘We are seeing a boom in AI techniques,’ said Dr Maia. ‘Research that was previously theoretical is now moving from academic laboratories to industry at an unprecedented pace.’
Over the past few years, computers have started driving passenger cars, following voice commands and outmatching humans at identifying faces in photographs. These breakthroughs are the fruit of a new trend in AI based on mimicking living neural networks.
In the same way as our brains sort new information based on past experiences, enough practice data can teach computers to learn, categorise and generalise for themselves.
The number of examples needed to identify a trend can be astronomical. Fortunately, vast hoards of behavioural data are strewn every day across the internet by heedless bloggers, commentators and social media users.
Computers learnt their first cognitive functions by devouring terabytes of this online text, sound and images. With the help of recent computing power and the kind of algorithms developed by Dr Maia, they have become so good at identifying content that they now label some of it for us.
The challenge in cybersecurity is to use this ability to distinguish between innocent and malicious behaviour on a computer. For this, Alberto Pelliccione, chief executive of ReaQta, a cybersecurity venture in Valletta, Malta, has found an analogous way of educating by experience.
ReaQta runs millions of malware programs in a virtual testing environment known as a sandbox, so that algorithms can inspect their antics at leisure and in safety. It is not always necessary to know what they are trying to steal. Simply recording which applications they open and their patterns of operation can be enough.
So that the algorithms can learn what business as usual looks like, they then monitor the behaviour of legitimate software, healthy computers, and ultimately the servers of each new client. Their lesson never ends. The algorithms continue to learn from their users even after being put into operation.
In doing so, ReaQta’s algorithms can assess whether programs or computers are behaving unusually. If they are, they inform human operators, who can either shut them down or study the tactics of the malware infecting them. ‘The objective of the artificial intelligence is not to teach computers what we define as good or bad data, but to spot anomalies,’ said Pelliccione.
Nowhere to hide
This is welcome news for IT administrators. Cyber criminals typically attack the computer networks of large organisations by compromising the machines of less security-savvy users on their periphery and working their way through to the centre. A few weak links in a sprawling network are difficult to spot and can progressively put an entire company at risk.
To make matters worse, hackers install dormant access points on each machine that they compromise. If security analysts manage to block one, hackers return through another. Dormant access points are notoriously difficult to spot because they do nothing until hackers activate them.
As part of the European ProBOS project, ReaQta has developed software that can be nested at the very core of machines, between their operating system and hardware. Its role is to monitor daily operations in every corner of the system, allowing AI algorithms to sift through ubiquitous data and spot any malicious installation.
ReaQta is licensing the security platform across European and Southeast Asian markets this month. Its first clients are companies that operate over 500 computers simultaneously.
Next year, VisionTechLab plans to release its first AI security services for banks and governments. In the longer term, Dr Maia sees applications for individuals.
For all the benefits of mobile devices, social networking and cloud computing, these technologies are placing more private data at risk. While AI may not yet be capable of guaranteeing its safety, it can now shine a powerful searchlight on any attempts to steal it.
Cybersecurity is at the heart of the EU's strategy for the Digital Single Market.
The EU's cybersecurity strategy was created to embed cybersecurity into new policies in areas such as automated driving, make the EU a strong player in the cybersecurity market, and ensure that all Member States have similar capabilities to fight cyber crime.