Stephen Hawking believes AI could be “the worst event in the history of our civilization.”
Elon Musk sees AI as a “fundamental risk to the existence of human civilization.”*

EXECUTIVE SUMMARY:

Clearly, we are all doomed. But until we surrender to the evil computers, one thing is for sure: everywhere you look, everyone is talking about artificial intelligence (AI). Ultimately, though, it comes down to the responsible and strategic use of AI. To get a better understanding of how and when it works, we talked with one of Check Point’s female visionaries: Orli Gan, Head of Threat Prevention at Check Point.

Q: Orli, can you set the context for where we’re at with AI today?

A: Artificial intelligence is already shaping up to be the next Industrial Revolution. Billions of dollars have been invested in AI technologies and the startups that build them. Personal assistants such as Siri, Cortana, and Alexa are still in their infancy, yet they are becoming actual companions, capable of human-like conversation.

Q: How much is AI integrated into our lives at this point?

A: Whether you realize it or not, AI technologies are already present virtually everywhere you look, addressing almost every aspect of our modern and not-so-modern lives. There’s speech recognition, image recognition, and autonomous cars that rely on AI technology to keep them safe. The financial sector is moving to AI-based insurance risk analysis, credit scores, and loan eligibility. We’re also seeing the emergence of AI-based robot lawyers and AI-based medical diagnostics and prognoses. And these are all just the beginning.

Orli Gan, Head of Threat Prevention

Q: It seems like AI has taken off all of a sudden. What’s behind that?

A: The way I see it, there are three key underlying technologies that have matured:

  1. Storage: We can now store enormous amounts of data at a fraction of what it used to cost.
  2. Compute Power: The capability now available lets us process mountains of data.
  3. Mathematics: Math and algorithms drive AI. Machine learning, deep learning, and big data analytics have all seen major breakthroughs in the past several years.

So now, AI technologies have moved from being purely a tool for academic research to something practical that companies can build into their commercial products. But the question to ask is: Can we trust AI to make the right choices?

Q: So, can we? Can we trust AI engines to make our decisions?

A: Well, it’s a mixed bag right now, in these early days. Let me give you some examples.
Tay was an AI-based Twitter chatbot from Microsoft that went online in March 2016. It took only a few hours of free chatting on the internet for it to learn the drill. Since the internet has all sorts of “teachers,” what this bot quickly learned and excelled at were profanity and racial bias. After 16 hours, Microsoft realized the catastrophe it had created and shut the bot down for good.

A few months ago, Mashable ran an article about another good example, this one involving Google Translate. Turkish is a gender-neutral language: there is no distinction between male and female forms, and the same pronoun, “o,” is used for both “he” and “she.” But when Turkish is translated to English through AI, the machine-driven algorithm shows bias: she is a cook, he is a doctor. She is a teacher, he is a soldier. And, seemingly apropos of nothing, he is happy, she is unhappy. It’s not that Google’s engineers are sexist. They simply fed their machines all the pre-existing texts they could find and let the tool reach its own conclusions.

It seems fair to say that we are still decades away from a magical engine that takes data in and spits the correct decision out.

Q: So is AI useless?

A: No, not at all. We’ve seen that AI is far from useless. For the right applications, it can make all the difference.

Q: What does it take to have a good and useful AI solution?

A: You need two things. First, data: lots and lots of data, covering the entire spectrum of the problem you’re trying to solve. Second, expertise: both in the mathematics that drives AI and in the specific domain being addressed.

Q: So what about in cybersecurity? Can AI be useful in that space?

A: Let’s start by looking at the limitations, which, no surprise, are identical to the prerequisites: not enough data and not enough expertise. Access to cybersecurity training data is anything but trivial. Furthermore, AI systems do not explain themselves, meaning you either have to manually validate each decision or blindly trust it, only to then realize that this technology is notorious for a fairly high false-classification rate. You can’t have that in cybersecurity; we all know that missed detections and false positives can have disastrous consequences.

But let’s go back to what these systems can do well. AI, machine learning, deep learning, and big data analytics are letting us mechanize tasks previously handled only by our scarcest resource: the smartest human analysts. They can make sense of our gigantic mountains of data logs. They are opening our eyes in places where we were previously blind.

Q: Can you provide a few examples to illustrate that point?

A: Absolutely. As Check Point thinks more and more about AI’s role in cybersecurity, we’ve begun to explore AI-based engines across our threat prevention platform. We’re already using them in a few different capacities.

The first one that’s worth mentioning is Campaign Hunting. The goal with this engine is to enhance our threat intelligence. So, say you have a human analyst looking at malicious elements. Typically, the analyst can trace the origins of those elements and incriminate similar instances: for example, domains registered by the same person, at the same time, with the same lexicographic pattern.

So now, by using AI technologies to emulate, and mechanize, an analyst’s intuition, Check Point’s algorithms can analyze millions of known indicators of compromise and hunt for additional, similar ones. As a result, we’re able to produce an additional threat intelligence feed that offers first-time prevention of attacks we’ve never seen before. More than 10 percent of the cyberattacks we block today are based on intelligence gained solely through Campaign Hunting.
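To make that idea concrete, here is a minimal sketch in Python. It assumes toy indicator records consisting of a domain name, registrant, and registration date, and it flags unknown domains that share registration metadata with a known-bad domain and follow a similar naming pattern. The field names, similarity measure, and threshold are illustrative assumptions, not Check Point’s actual implementation.

```python
# Hypothetical sketch of "campaign hunting": expand a feed of known-bad
# domains by flagging unknown domains that share registration metadata
# and a similar lexicographic pattern. All values are toy data.
from difflib import SequenceMatcher

# Toy indicators: (domain, registrant, registration_date)
KNOWN_BAD = [
    ("secure-login-update1.com", "mallory@example.net", "2018-01-14"),
    ("secure-login-update2.com", "mallory@example.net", "2018-01-14"),
]

CANDIDATES = [
    ("secure-login-update3.com", "mallory@example.net", "2018-01-14"),
    ("my-family-photos.org",     "alice@example.org",   "2016-06-02"),
]

def lexical_similarity(a: str, b: str) -> float:
    """Rough string similarity between two domain names (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

def looks_like_same_campaign(candidate, known, min_similarity=0.8) -> bool:
    """Flag a candidate that shares registrant and registration date with a
    known-bad domain and closely resembles its naming pattern."""
    c_dom, c_reg, c_date = candidate
    k_dom, k_reg, k_date = known
    return (
        c_reg == k_reg
        and c_date == k_date
        and lexical_similarity(c_dom, k_dom) >= min_similarity
    )

new_indicators = [
    c for c in CANDIDATES
    if any(looks_like_same_campaign(c, k) for k in KNOWN_BAD)
]
print(new_indicators)  # only the "secure-login-update3.com" record is flagged
```

A real engine would of course use far richer features and learned similarity rather than a hard-coded string ratio, but the intuition being mechanized is the same.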

A second engine, Huntress, looks for malicious executables, one of the toughest problems in cybersecurity. By nature, an executable can do anything when it’s running, right? It’s not breaching any boundaries, so it’s hard to figure out whether it’s trying to do something bad.

The good news, though, is that cyberattackers rarely, if ever, write everything from scratch. That means similarities to previously known malicious executables are likely to surface, except they are often hidden from the human eye.

But when we use a machine-driven algorithm, our scope of analysis broadens. Using a sandbox as a dynamic analysis platform, we let the executables run and collect hundreds of runtime parameters. Then we feed that data to the AI-based engine, previously trained on millions of known-good and known-bad executables, and ask it to categorize those executables.
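As an illustration only, the categorization step might look something like the simplified sketch below, using the scikit-learn library and a handful of made-up runtime features (counts of file writes, registry changes, network connections, and so on). The feature set, training samples, and model choice are assumptions made for the sake of the sketch, not the actual Huntress engine.

```python
# Toy sketch: classify an executable from sandbox runtime features.
# Feature columns and samples are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row: [files_written, registry_keys_modified, outbound_connections,
#            processes_spawned, api_calls_observed]
train_features = [
    [120, 45, 12, 8, 900],    # known bad
    [200, 60, 30, 15, 1500],  # known bad
    [3, 0, 1, 1, 40],         # known good
    [5, 2, 0, 2, 60],         # known good
]
train_labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(train_features, train_labels)

# Runtime parameters collected for a new, unknown executable
unknown_sample = [[150, 50, 20, 10, 1100]]
verdict = model.predict(unknown_sample)[0]
probability = model.predict_proba(unknown_sample)[0][verdict]
print("malicious" if verdict == 1 else "benign",
      f"(confidence ~{probability:.2f})")
```

In practice the engine would be trained on millions of samples and hundreds of parameters rather than four rows and five columns, but the shape of the problem, runtime behavior in, verdict out, is the same.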

The results are amazing. We end up with a dynamic engine, capable of detecting malicious executables beyond what antivirus and static analysis would find. In fact, 13 percent of the detected malicious executables are based on findings solely from this engine. If it weren’t for Huntress, we wouldn’t have known to block them.

I’ll give you one more example: CADET, which stands for Context-Aware Detection. Our platform gives us access and visibility into all parts of the IT infrastructure: networks, data centers, cloud environments, endpoint devices, mobile devices. This means that rather than inspecting isolated elements, we can look at the full session context. Like, did it come through email or as a web download? Was the link sent in an email or in a text message on a mobile device? Who is the sender? When was their domain registered, and by whom?

Essentially, we’re extracting thousands of parameters from the inspected element and its context. And, using the CADET AI engine, we can reach a single, accurate, context-informed verdict. It’s pretty cool.
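As a rough illustration, a context-informed verdict could be assembled along the lines of the sketch below: signals from the inspected file are combined with signals from the session that delivered it into a single score. The parameter names, weights, and threshold are invented for illustration; a production engine would learn them from labeled data rather than hard-coding them.

```python
# Hypothetical context-aware scoring: judge a file together with the
# session that delivered it, not in isolation. Weights are illustrative.
def context_aware_verdict(element: dict, context: dict, threshold: float = 0.6) -> str:
    score = 0.0
    # Signals from the inspected element itself
    if element.get("static_scan_suspicious"):
        score += 0.3
    if element.get("sandbox_flagged"):
        score += 0.4
    # Signals from the delivery context
    if context.get("channel") == "email_attachment":
        score += 0.1
    if context.get("sender_domain_age_days", 9999) < 30:
        score += 0.2  # freshly registered sender domains raise suspicion
    if context.get("link_sent_via_sms"):
        score += 0.1
    return "block" if score >= threshold else "allow"

print(context_aware_verdict(
    element={"static_scan_suspicious": False, "sandbox_flagged": True},
    context={"channel": "email_attachment", "sender_domain_age_days": 5},
))  # -> "block"
```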

So far, our testing shows a two-fold improvement in our missed-detection rate, and a staggering 10-fold reduction in the false-positive rate. You have to keep in mind: these are not just nice mathematical results. In real-life cybersecurity, engine accuracy is crucial.

Q: Wow, those examples are pretty compelling. So, then would you say AI is a critical component when it comes to cybersecurity?

A: Well, look, those examples I provided? They are the outcome of smart people coming up with the best approach to make cybersecurity practical, using the entire arsenal of available technologies. We combine AI with all of the other technologies we have in order to improve the metrics that actually matter. For now, we believe AI technologies are still not mature enough to be used on their own.

Q: So it seems that when it comes to cybersecurity, artificial intelligence needs natural intelligence alongside it to be effective.

A: That’s right. When AI is used as an additional layer, added to a mixture of expert engines designed to cover the entire attack landscape, that’s where we see it shine.

In cybersecurity, accuracy matters. If an engine is too noisy, it won’t be used in production, and it certainly won’t be used for prevention. The overhead and impact on productivity would simply be too great.

Cybersecurity must be practical. And as we move further along the AI continuum, these technologies are taking us toward smarter, more practical threat defense.

 

*CNBC, “Steve Wozniak explains why he used to agree with Elon Musk, Stephen Hawking on A.I. — but now he doesn’t,” February 23, 2018.