Machine learning evolves, and hackers stand to gain
Posted on: Apr 07, 2017

As government agencies are beginning to turn over security to automated systems that can teach themselves, the idea that hackers can sneakily influence those systems is becoming the latest (and perhaps the greatest) new concern for cybersecurity professionals.

Adversarial machine learning is a research field that “lies at the intersection of machine learning and computer security,” according to Wikipedia. “It aims to enable the safe adoption of machine-learning techniques in adversarial settings like spam filtering, malware detection and biometric recognition.” According to Nicolas Papernot, Google PhD Fellow in Security at Pennsylvania State University, AML seeks to better understand the behavior of machine-learning algorithms once they are deployed in adversarial settings -- that is, "any setting where the adversary has an incentive, be it financial or of some other nature, to force the machine-learning algorithms to misbehave.”

“Unfortunately, current machine-learning models have a large attack surface as they were designed and trained to have good average performance, but not necessarily worst-case performance, which is typically what is sought after from a security perspective,” Papernot said. As such, they are vulnerable to generic attacks, which often can be conducted regardless of the machine-learning model type or the task being solved.

Yevgeniy Vorobeychik, professor of electrical engineering and computer science at Vanderbilt University, pointed out that while some government agencies -- like the Defense Department and its research arm, DARPA -- are “reaching a level of sophistication that we [academics] do not have,” AML is just beginning to emerge in this sector. It is being “seriously considered” by many governments and affiliated groups, such as metropolitan and national law enforcement agencies, to forecast criminal activity, for example.

In the public sector, machine learning can be used in many applications, ranging from “techniques for defending against cyber attacks; for analyzing scientific data, such as astronomy observations or data from large scale experiments conducted by the Department of Energy; for biological and medical research; or for building crime-prediction models, used in parole and sentencing decisions,” according to Tudor A. Dumitras, assistant professor at the University of Maryland at College Park. These systems are all susceptible to AML attacks, he added.

To illustrate the problem, Dumitras pointed to cyber defense systems, which must classify artifacts or activities -- such as executable programs, network traffic or emails -- as benign or malicious. To do this, he said, machine-learning algorithms start from a small set of known benign and known malicious examples and use them to learn models of malicious activity without requiring a predetermined description of those activities.
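A minimal sketch of that idea, assuming scikit-learn and invented feature values (the library choice, the features and every number below are illustrative assumptions, not details from the article):

```python
# Minimal sketch of the kind of classifier Dumitras describes: a model
# learns to separate malicious from benign artifacts from a few labeled
# examples. Feature values here are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors for executables: [file size (KB), imported API count,
# entropy of the packed section] -- purely hypothetical features.
benign = [[120, 40, 3.1], [300, 85, 4.0], [90, 25, 2.8]]
malicious = [[45, 12, 7.6], [60, 9, 7.9], [38, 15, 7.2]]

X = benign + malicious
y = [0] * len(benign) + [1] * len(malicious)  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# The trained model now labels unseen artifacts without an explicit
# signature or rule describing what "malicious" looks like.
print(model.predict([[50, 10, 7.5]]))  # likely flagged as malicious
```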

“An intelligent adversary can subvert these techniques and cause them to produce the wrong outputs,” he said. Broadly, Dumitras said that there are three ways adversaries can do this:

  • Attack the trained model by crafting examples that cause the machine-learning algorithm to mislabel an instance or to learn a skewed model.
  • Attack the implementation by finding exploitable bugs in the code.
  • Exploit the fact that a machine-learning model is often a black box to the users.

“As a consequence, users may not realize that the model has a blind spot or that it is based on artifacts of the data rather than meaningful features,” Dumitras said, “as machine-learning models often produce malicious or benign determinations, but do not outline the reasoning behind these conclusions.”
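The first of those attack classes -- crafting inputs that the trained model mislabels -- can be sketched briefly. The toy logistic-regression detector, its weights and the perturbation size below are invented assumptions; the gradient-sign step mirrors the widely known fast-gradient-sign approach rather than any specific system named in the article:

```python
# A minimal sketch of an evasion attack: perturb an input so that a trained
# model mislabels it. The linear model and feature values are invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy logistic-regression detector; w and b would normally come from training.
w = np.array([0.9, -1.4, 2.1])
b = -0.5

x = np.array([0.2, 0.8, 1.5])   # an input the model labels as malicious
y = 1.0                          # true label
print("original score:", sigmoid(w @ x + b))   # close to 1 (malicious)

# Gradient of the cross-entropy loss with respect to the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Move each feature a small step in the direction that increases the loss,
# pushing the prediction away from the correct label.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)
print("perturbed score:", sigmoid(w @ x_adv + b))  # now closer to 0 (benign)
```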

AML rising

AML is becoming important in the public sector and law enforcement, because computer scientists “have reached sufficient maturity in machine-learning research for machine-learning models to perform very well on many challenging tasks, sometimes superseding human performance,” according to Papernot. “Hence, machine learning is becoming pervasive in many applications, and is increasingly a candidate for innovative cybersecurity solutions.” However, Papernot said that as long as vulnerabilities -- such as the ones identified with adversarial examples -- are not fully understood, the predictions made by machine-learning models will remain difficult to trust.

A large number of specific attacks against machine learning have been discovered over the past decade, Dumitras said. “While the problem that the attacker must solve is theoretically hard, it is becoming clear that it is possible to find practical attacks against most practical systems,” he said. For example, hackers already know how to evade machine learning-based detectors; how to poison the training phase so that the model produces the outputs they want; how to steal a proprietary machine-learning model by querying it repeatedly; and how to invert a model to learn private information about the users whose data it was trained on.
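The model-stealing attack Dumitras mentions can be sketched in a few lines: repeatedly query a black-box model and train a local surrogate on its answers. The victim model, the data and the query budget below are all toy assumptions, not details from the article:

```python
# A minimal sketch of "stealing" a model by repeated querying: train a local
# surrogate on the labels a black-box model returns. Everything here is a
# toy stand-in; the victim model and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Pretend this is a proprietary model hidden behind an API.
X_secret = rng.normal(size=(200, 4))
y_secret = (X_secret[:, 0] + 2 * X_secret[:, 2] > 0).astype(int)
victim = LogisticRegression().fit(X_secret, y_secret)

# The attacker only sends queries and records the answers.
queries = rng.normal(size=(1000, 4))
answers = victim.predict(queries)

# A surrogate trained on query/answer pairs approximates the victim's behavior.
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, answers)
test = rng.normal(size=(500, 4))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of new inputs")
```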

At the same time, defending against these attacks is largely an open question. “There are only a few known defenses,” Dumitras said, “which generally work only for specific attacks and lose their effectiveness when the adversary changes strategies.”

For example, he pointed to the spread of “fake news,” which can erode trust in the government. The proliferation of fake news -- especially on social media sites like Facebook, Twitter or Google -- is amplified by users clicking on, commenting on, or liking these fraudulent stories. This behavior constitutes “a form of poisoning, where the recommendation algorithms operate on unreliable inputs, and they are likely to promote more fake news,” he said.
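A toy sketch of that poisoning effect, with invented story names and engagement counts, shows how a recommender that ranks purely by observed engagement ends up promoting whatever gets inflated:

```python
# A minimal sketch of the poisoning effect Dumitras describes: a recommender
# that ranks stories by engagement will promote a fabricated story once
# enough fraudulent clicks and likes are injected. All numbers are invented.
engagement = {
    "city-budget-report": 480,
    "election-results-recap": 520,
    "fabricated-scandal": 35,
}

def top_stories(scores, k=2):
    # Rank stories purely by observed engagement -- the "unreliable input".
    return sorted(scores, key=scores.get, reverse=True)[:k]

print("before poisoning:", top_stories(engagement))

# An attacker (or a wave of duped users) inflates engagement on the fake story.
engagement["fabricated-scandal"] += 1000

print("after poisoning:", top_stories(engagement))
```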