AI, the Double Agent of Cybersecurity (Part I)

Is artificial intelligence a kind of double agent working for both attackers and cybersecurity defenders? (Credit: iStock)

The rise of artificial intelligence, along with increasingly sophisticated cloud technologies and computing capabilities, is creating an extra challenge for cybersecurity in general. AI can be used both to strengthen the cybersecurity of systems and to launch new cyberattacks that are more effective than ever before. Benoit Grunemwald is a cybersecurity expert at Eset, one of the leading providers of IT security software and services in Europe. He shared his insights into the dual use of AI in cybersecurity, with a focus on healthcare.

MedicalExpo e-magazine: Can we say that AI is a kind of double agent working for both attackers and cybersecurity defenders?

Benoit Grunemwald: Yes, absolutely. AI has been used for a very long time and, without claiming that its use is evenly split between attackers and defenders, there is no reason why either side should use it less than the other. AI can be used to prevent attacks as well as to create new, even more effective ones.

This is made possible by the evolution of technology in general, including innovations in the cloud and computational capabilities, or factors such as the decreasing cost of hardware and the increasing number of human resources focused on AI. These human resources are working on creating, defining and making available algorithms and hardware that are much more powerful than what we had before.

How has this evolved compared to 15 years ago?

Benoit Grunemwald: At Eset, we have been using AI since 1998, in our solutions and data centers. Of course, the AI we used in 1998 was nothing like the AI we use today, mainly because the volume of data we have to process has multiplied. Today, we process nearly 700 million samples per day. Not all of them are malicious; many are similar, but we still need to be able to process and classify them and, if necessary, create remedies in order to protect customers as quickly as possible.

"The AI we used in 1998 was nothing like the AI we use today, mainly because the volume of data we have to process has multiplied." (Credit: Shutterstock)
“The AI we used in 1998 was nothing like the AI we use today, mainly because the volume of data we have to process has multiplied.” (Credit: Shutterstock)

We are really in a cat-and-mouse game. This game has evolved with the rise of cloud technologies, because before, when an attacker created a threat, they could test it against our solutions, which were mostly intelligent on the desktop, and they could do it independently. Today, if attackers want to know whether they are at risk of being detected, they can’t just submit their sample to the desktop; they have to submit it to the cloud as well, and are therefore forced to reveal themselves before they even carry out their attack.

This is part of today’s detection technologies, which are now both on the desktop and more generally in the cloud. This is true for the entire profession: we are in an ecosystem that has moved from mostly desktop detection to more global detection.

What AI systems are being used to increase cybersecurity?

Benoit Grunemwald: As far as we’re concerned, we deploy AI technology in two main places: on customer endpoints and in our data centers.

The endpoint can be the smartphone, the computer, the server, or even the messaging system of a company or a hospital that uses our solutions. On this endpoint, we deploy hyper-optimized artificial intelligence, since workstations don’t have enormous computing power and users need to be able to keep working without slowdowns.

Then, in our data centers, we deploy an arsenal of artificial intelligence that is much broader and deeper. These AI models enable us to analyze very large volumes of samples on a daily basis. They also allow us to go back over earlier elements in order to improve the quality of detection and, sometimes, to find signals that help us better understand and contextualize certain threats that might slip past human detection, for example, but that would not escape an AI processing a very large amount of data.
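
To make that two-tier setup concrete, here is a minimal sketch of the idea: a lightweight check on the endpoint, with uncertain samples escalated to heavier analysis in the data center. All names, thresholds and scoring heuristics below are hypothetical illustrations of the principle, not Eset’s actual technology.

```python
# Minimal sketch of a two-tier detection pipeline: a lightweight model on the
# endpoint, a heavier analysis in the data center. All names, thresholds and
# scoring heuristics here are hypothetical illustrations, not Eset's design.

def endpoint_score(sample: bytes) -> float:
    """Cheap heuristic meant to run on the workstation without slowing it down."""
    suspicious = sample.count(b"\x90") + sample.count(b"eval(")
    return min(1.0, suspicious / max(len(sample), 1) * 5)

def cloud_score(sample: bytes) -> float:
    """Placeholder for the deeper data-center analysis (sandboxing, larger
    models, correlation with previously seen samples)."""
    return 0.9 if b"powershell -enc" in sample.lower() else 0.1

def triage(sample: bytes) -> str:
    score = endpoint_score(sample)
    if score >= 0.9:   # confidently malicious: block locally, no cloud round trip
        return "malicious"
    if score <= 0.1:   # confidently clean
        return "clean"
    # Uncertain samples are escalated to the cloud, where far more compute is available.
    return "malicious" if cloud_score(sample) >= 0.5 else "clean"

if __name__ == "__main__":
    print(triage(b"ordinary document text"))                    # clean, handled locally
    print(triage(b"\x90\x90\x90 powershell -enc SQBFAFgA..."))  # escalated, then flagged
```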

"This is where the human eye still has its place, both in the piloting and design of AI, but also in the horizontal cross-checking of information between the different algorithms and/or artificial intelligences." (Credit: SecureWeek)
“This is where the human eye still has its place, both in the piloting and design of AI, but also in the horizontal cross-checking of information between the different algorithms and/or artificial intelligences.” (Credit: SecureWeek)

But is the human eye still necessary in this process?

Benoit Grunemwald: Absolutely, the human eye is always necessary. AI can, through vertical models, discover certain elements that escape human vigilance. On the other hand, it will not necessarily be able to link together certain signals that are in silos, i.e. in artificial intelligence specialties. This is where the human eye still has its place, both in the piloting and design of AI, but also in the horizontal cross-checking of information between the different algorithms and/or artificial intelligences. 

What do you do when you receive samples to analyze?

Benoit Grunemwald: When samples are received, they can immediately be classified as malicious, healthy or in the middle—in a gray area. This is where the relationship between humans and artificial intelligence gets really interesting, because if this area is gray, it is because the AI did not get to the end of its analysis and classify the sample. The human being can therefore do two things here: reconfigure the AI or refine it in order to reach a reliable verdict on the sample.

It is important to understand that it is in this category of software called “gray” or “weird” that we will be able to extract the elements that will help us to make progress in terms of detection—particularly the detection of malicious groups that are trying to go under the radar.
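
As an illustration of this gray-zone workflow, the sketch below shows how a detector might hand off only the samples it cannot confidently classify to a human analyst, whose verdict is then kept as labeled data for refining the model. The thresholds, names and toy scoring function are assumptions made for the example, not Eset’s actual pipeline.

```python
# Minimal sketch of gray-zone triage with a human in the loop. The scoring
# function, thresholds and queue are hypothetical illustrations only.

from collections import deque

human_review_queue: deque = deque()  # samples the AI could not conclude on
labeled_feedback: list = []          # analyst verdicts, reused to refine the model

def maliciousness_score(sample: bytes) -> float:
    """Stand-in for the detector's confidence that a sample is malicious."""
    if b"ransom" in sample:
        return 0.99
    if b"packed" in sample:
        return 0.50   # ambiguous: looks odd, but not conclusively malicious
    return 0.01

def classify_sample(sample: bytes) -> str:
    score = maliciousness_score(sample)
    if score >= 0.95:
        return "malicious"
    if score <= 0.05:
        return "clean"
    human_review_queue.append(sample)  # gray zone: route to an analyst
    return "gray"

def record_analyst_verdict(sample: bytes, verdict: str) -> None:
    """The human decision becomes labeled data used to reconfigure or refine the AI."""
    labeled_feedback.append((sample, verdict))

if __name__ == "__main__":
    print(classify_sample(b"ransom note dropper"))   # malicious
    print(classify_sample(b"benign invoice"))        # clean
    print(classify_sample(b"packed installer"))      # gray, queued for review
    record_analyst_verdict(human_review_queue.popleft(), "clean")
```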

Are attackers using AI to get their malware to land in that very gray area that your own AI can’t handle? 

Benoit Grunemwald: Yes, their samples land in the gray area because attackers are using malware en masse. And in order to be able to create this software en masse, the attackers use AI. So they try to get their malware into the green zone (non-malicious) and we, with our own AI, try to spot it. Eventually the two AIs confront each other. It’s a kind of AI war.

Is this AI warfare reflected in the use of ChatGPT?

Benoit Grunemwald: Absolutely. For example, you are a student, you ask ChatGPT to write a text for you and then you polish it a bit so that the AI detector doesn’t notice that it was written by ChatGPT. The AIs try to fool one another. AI is a kind of double agent, used by both sides.

For attackers, there is a massive and industrial use of AI, especially in creating very large numbers of malware samples to avoid detection. And then there is also a more artisanal use of AI, when people use automatic translators or ChatGPT, for example.

Imagine you’re a cybercriminal and you want to get into a hospital’s information system with a phishing attack, but you don’t know the hospital’s jargon. In this case, you will ask ChatGPT to write an email to doctors as if you were a pharmaceutical company requesting an attachment for a specific blood test. You, as a cybercriminal, know nothing about this jargon, but ChatGPT will provide you with the language, the context and everything that will make your message seem more realistic when you send it. This is how ChatGPT is used for malicious purposes.

Can ChatGPT detect this?

Benoit Grunemwald: Yes, it can detect it, but only if you tell it to write you a phishing email to hack a hospital…

When ChatGPT came out, that’s the first thing I did: I asked it to write a phishing email as if I were a bank writing to a customer. It worked, and two days later I tried it again, but by then rules had been put in place to prohibit ChatGPT from responding to explicitly malicious questions or illegal requests.
