
IBM, AI And The Battle For Cybersecurity

As Artificial Intelligence (AI) becomes a bigger part of the IT landscape, cybersecurity is becoming an AI battlefield. The latest and most aggressive cybersecurity attacks now leverage AI to evade traditional security defenses and to counter adversarial responses. The cat-and-mouse game between attacker and defender is moving to a new level, where AI augments the human element. The future of cybersecurity will likely be AI versus AI.

Attackers can use AI in cybersecurity attacks to evade detection (evasive), hide in many locations without detection (pervasive), and automatically adapt to countermeasures (adaptive). IBM Research is using its expertise to help build the tools to defend against attacks of all kinds and to protect data privacy.

As enterprises experiment with AI services, the machine learning models that power AI have become so important that the models themselves are targets of intrusion attacks. These models are often built with hours or days of compute time and may contain proprietary information, making them valuable targets for theft. Models may be stolen for their monetary value or the intellectual property they contain, or in order to design countermeasures against them. Such countermeasures can fool AI-based systems into incorrect behavior by “poisoning” the training data or the deployed neural net. This can be dangerous if the AI model is essential to running an autonomous car, where hacking it puts the safety of the occupants at risk. A trained AI model may also be corrupted or exploited in other ways, such as inducing it to mis-categorize objects of interest.

The ART of the attack

Attackers can use AI in cyberattacks to disguise the attack. One goal of IBM Research’s work in security AI is to imagine possible attacks and create defensive strategies before actual black hats (criminals) launch them. One program IBM created is the Adversarial Robustness Toolbox (ART), an open-source tool for the AI and software community to use in this type of research.
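
To give a flavor of how ART is used, here is a minimal sketch (assuming a pip install of adversarial-robustness-toolbox) that wraps a toy scikit-learn classifier and attacks it with the Fast Gradient Method. The model and data below are illustrative stand-ins, not anything from IBM’s research.

```python
# Minimal sketch: probe a model's robustness with ART.
# The toy model and random data are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a toy two-class model on random features.
rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 10)).astype(np.float32)
y_train = (x_train[:, 0] > 0).astype(int)
model = LogisticRegression().fit(x_train, y_train)

# Wrap the model so ART can attack it, then craft adversarial
# inputs with the Fast Gradient Method and measure the damage.
classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_adv = attack.generate(x=x_train)

print(f"accuracy on clean inputs:       {model.score(x_train, y_train):.2f}")
print(f"accuracy on adversarial inputs: {model.score(x_adv, y_train):.2f}")
```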

IBM Research has been investigating ways to create and propagate sophisticated, stealthy malware that reveals and extracts itself only when it reaches its desired location, making it undetectable until it is too late.

Given IBM researchers’ interest in AI and security, they were determined to create such an attack as part of their countermeasure research. The resulting malware was called DeepLocker. The malware includes the encrypted payload, but not the key to decrypt it. The key is derived, and the malware activated, only when the malware determines it has reached its specific target. To all other defenses and all other potential targets, it looks benign and therefore eludes detection. With the use of AI, it becomes easy to mass-produce such malware customized for each specific target. Target identification might use neural networks to confirm the proper destination by recognizing images, voices, or other sounds.
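
DeepLocker itself is not public, but the “environmental keying” concept it demonstrated can be sketched in a few lines. In this hypothetical illustration, a benign demo payload ships encrypted and the key is never shipped at all; it is derived by hashing an attribute observed in the environment (DeepLocker reportedly derived it from a neural net’s output, such as a face-recognition match). All names here are invented for the example.

```python
# Conceptual sketch of environmental keying: the payload is
# encrypted, and the key is derived from an attribute of the
# target environment, so decryption succeeds only on the target.
# The attribute strings below are hypothetical placeholders.
import base64
import hashlib
from cryptography.fernet import Fernet, InvalidToken

def key_from_attribute(attribute: bytes) -> bytes:
    """Derive a symmetric key by hashing an environmental attribute."""
    return base64.urlsafe_b64encode(hashlib.sha256(attribute).digest())

# At build time: encrypt a benign demo payload under the target's attribute.
target_attribute = b"embedding-of-the-intended-target"
encrypted = Fernet(key_from_attribute(target_attribute)).encrypt(b"benign demo payload")

# At run time: re-derive the key from whatever is observed locally.
# On any non-target host the derived key is wrong, decryption fails,
# and defenders see only an opaque blob.
def try_unlock(observed_attribute: bytes):
    try:
        return Fernet(key_from_attribute(observed_attribute)).decrypt(encrypted)
    except InvalidToken:
        return None

print(try_unlock(b"some-other-host"))                   # None
print(try_unlock(b"embedding-of-the-intended-target"))  # b'benign demo payload'
```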

Another area of IBM Research investigation is providing tools for differential privacy. Differential privacy adds calibrated random noise to database query results, anonymizing the individual data without disrupting the aggregate results.

IBM Research Blog: IBM Differential Privacy Library: The single line of code that can protect your data
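
As a small example of the library described in that post (IBM’s open-source diffprivlib), the sketch below computes a differentially private mean over made-up salary data; the epsilon and bounds values are arbitrary choices for illustration.

```python
# Sketch using IBM's diffprivlib (pip install diffprivlib).
# The salary figures are made-up illustrative data.
import numpy as np
import diffprivlib.tools as dp

salaries = np.array([52_000, 61_000, 58_500, 75_000, 49_000], dtype=float)

# The ordinary mean leaks exact information about individuals.
print("true mean:   ", salaries.mean())

# The differentially private version adds calibrated Laplace noise;
# epsilon controls the privacy/accuracy trade-off, and bounds clip
# the data so the noise scale is well defined.
print("private mean:", dp.mean(salaries, epsilon=1.0, bounds=(40_000, 90_000)))
```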

AI Enhances Adversarial Attacks

While older adversarial attack patterns were algorithmic and easier to detect, new attacks add AI features such as natural language processing and more natural human-computer interaction to make malware more evasive, pervasive, and scalable. The malware uses AI to keep changing form in order to evade common detection techniques and rules. Automated techniques make the malware more scalable, and combined with AI it can move laterally through an enterprise and attack targets without human intervention.

The use of AI in cybersecurity attacks will likely become more pervasive. Better spam can be crafted that avoids detection, or that is personalized to a specific target as a form of spear-phishing attack, by using natural language processing to craft more human-like messages. In addition, malware can be smart enough to recognize when it is in a honeypot or sandbox and avoid malicious execution, looking benign so as not to tip off security defenses.

Adversarial AI also attacks the human element, using AI-augmented chatbots to disguise the attack with human-like emulation. This can escalate to the point where AI-powered voice synthesis fools people into believing that they are dealing with a real human within their organization.

The use of AI in cyberattacks is part of a continuum of AI applications designed to fool humans. For example, deepfake videos and images are useful for social engineering and can erode our trust in various media. Text is another leading-edge opportunity for AI, with programs such as OpenAI’s GPT-2 able to generate fake text that appears more human-like and is harder to detect.

AI Helps Humans Defend Against Adversarial Attacks

To defend against adversarial attacks, the GLTR (http://gltr.io/) tool detects automatically generated text by turning the very machine learning models used to create fake text back on themselves, checking whether a given text was likely created by ML. This is an example of using AI to defend against AI. But is this a war that we can win? Detection is getting harder as AIs become more sophisticated and models become more complex. It will get harder to find markers that distinguish between human and AI, and it may require specific watermarking, digital signatures, or other fingerprinting patterns to identify real content.
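
The core idea behind GLTR can be sketched with a language model from the Hugging Face transformers library: score each token of a text by whether it falls in the model’s top-k predictions, since machine-generated text tends to be dominated by highly ranked tokens while human text dips into unlikely ones. This is a simplified illustration of the technique, not GLTR’s actual code.

```python
# GLTR-style sketch: measure what fraction of a text's tokens fall
# in GPT-2's top-k predictions. High fractions suggest machine-
# generated text; human writing uses more low-ranked tokens.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def fraction_in_top_k(text: str, k: int = 10) -> float:
    """Fraction of tokens that fall in the model's top-k predictions."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    hits = 0
    # Logits at position i predict the token at position i + 1.
    for i in range(ids.shape[1] - 1):
        top_k = torch.topk(logits[0, i], k).indices
        hits += int(ids[0, i + 1] in top_k)
    return hits / (ids.shape[1] - 1)

print(fraction_in_top_k("The quick brown fox jumps over the lazy dog."))
```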

The use of AI in cyberattacks makes them much harder to defeat: while algorithmic code can be matched to an attack pattern relatively easily, attacks utilizing machine learning are much harder to recognize because they are less predictable. That is why machine learning defenses are needed to counter machine learning attacks.

The future of cybersecurity will need to utilize AI to counter the bad guys’ AI. The use of AI in cybersecurity is so important that DARPA created and ran the Cyber Grand Challenge program in 2016, pitting attacking AIs against defensive AIs in a battle to see which AI would win. The winner of the Cyber Grand Challenge used the opportunity to build a security business based on the technology developed for the challenge.

IBM Research and other security researchers are working on strategies by imagining the attack and creating defenses and counterattacks. It is a critical area of security investment as defenses need to keep pace with the attacks. This game of AI vs. AI in cybersecurity has only just begun.

The author and members of the Tirias Research staff do not hold equity positions in any of the companies mentioned. Tirias Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for IBM, and other companies throughout the AI and Security ecosystems.
