Security Isn't Enough. Silicon Valley Needs 'Abusability' Testing

Former FTC chief technologist Ashkan Soltani argues it's time for companies to formalize and test not just a product's security, but how it can be abused.

Technology has never limited its effects to those its creators intended: It disrupts, reshapes, and backfires. And even as innovation's unintended consequences have accelerated in the 21st century, tech firms have often relegated thinking about those second-order effects to the occasional embarrassing congressional hearing, scrambling to contain unexpected abuses only after the harm is done. One Silicon Valley watchdog and former federal regulator argues that's officially no longer good enough.

At the USENIX Enigma security conference in Burlingame, California, on Monday, former Federal Trade Commission chief technologist Ashkan Soltani plans to give a talk centered on an overdue reckoning for move-fast-and-break-things tech firms. He says it's time for Silicon Valley to take the potential for unintended, malicious use of its products as seriously as it takes their security. From Russian disinformation on Facebook, Twitter, and Instagram to YouTube extremism to drones grounding air traffic, Soltani argues, tech companies need to think not just about protecting their own users but about what he calls abusability: the possibility that users could exploit their tech to harm others, or the world.

"There are hundreds of examples of people finding ways to use technology to harm themselves or other people, and the response from so many tech CEOs has been, 'We didn't expect our technology to be used this way,'" Soltani said in an interview ahead of his Enigma talk. "We need to try to think about the ways things can go wrong. Not just in ways that harm us as a company, but in ways that harm those using our platforms, and other groups, and society."


There's precedent for that kind of paradigm shift. Many software firms didn't invest heavily in security until the 2000s, when—led, Soltani notes, by Microsoft—they began taking the threat of hackers seriously. They started hiring security engineers and hackers of their own and elevated audits for hackable vulnerabilities in code to a core part of the software development process. Today, most serious tech firms not only try to break their code's security internally, they also bring in external red teams to attempt to hack it and even offer "bug bounty" rewards to anyone who warns them of a previously unknown security flaw.

"Security guys were once considered a cost center that got in the way of innovation," Soltani says, remembering his own pre-FTC experience as a security administrator working for Fortune 500 companies. "Fast forward 15 or 20 years, and we're in the C-suite now."

But when it comes to abusability, tech firms are only starting to make that shift. Yes, big tech companies like Facebook, Twitter, and Google have large counter-abuse teams. But those teams are often reactive, relying largely on users to report bad behavior. Most firms still don't put serious resources toward the problem, Soltani says, and even fewer bring in external consultants to assess their abusability. An outside perspective, Soltani argues, is critical to thinking through the possibilities for unintended uses and consequences that new technologies create.

Facebook's role as a disinformation megaphone in the 2016 election, he notes, demonstrates how it's possible to have a large team dedicated to stopping abuses and still remain blind to devastating ones. "Historically, abuse teams were focused on abuse on the platform itself," Soltani says. "Now we're talking about abuse to society and the culture at large, abuse to democracy. I would argue that Facebook and Google didn't start out with their abuse teams thinking about how their platforms can abuse democracy, and that's a new thing in the last two years. I want to formalize that."

Soltani says some tech companies are beginning to confront the issue—albeit often belatedly. Facebook and Twitter scrubbed thousands of disinformation accounts after 2016. WhatsApp, which has been used to spread calls for violence and false news from India to Brazil, finally put limits on mass message forwarding earlier this month. Dronemaker DJI has put geofencing limits on its drones to keep them out of sensitive airspaces, in an attempt to avoid fiascos like the paralysis of Heathrow and Newark airports due to nearby drones. Soltani argues those are all cases where companies managed to limit abuse without curtailing the freedoms of their users. Twitter didn't need to ban anonymous accounts, for instance, nor did WhatsApp need to weaken its end-to-end encryption.

Those sorts of lessons now need to be applied at every tech firm, Soltani says, just as security flaws are formally classified, checked for, and scrubbed out of code before it's released or exploited. "You need to define the problem space, the history, to build a compendium of different types of attack and classify them," Soltani says. And even more important, tech companies need to work to predict the next form of sociological harm their products might inflict before it happens, not after the fact.

That sort of prediction can be immensely complex, and Soltani suggests tech firms consult those who make it their job to foresee the unintended consequences of technology: academics, futurists, and even science fiction authors. "We can use art to think about the potential dystopias we want to avoid," Soltani says. "I think Black Mirror has done more to inform people on the potential pitfalls of AI than any White House policy paper."

In his time at the FTC—first as a staff technologist in 2010 and later as its chief technologist in 2014—Soltani was involved in the commission's investigations of privacy and security problems at Twitter, Google, Facebook, and MySpace, the sort of cases that have demonstrated the FTC's growing role as a Silicon Valley watchdog. In several of those cases, the FTC put those companies "under order" for deceptive claims or unfair trade practices, a kind of probation that's since led to tens of millions of dollars in fines for Google and will likely lead to far more for Facebook, as punishment for the company's latest privacy scandals.

But that kind of regulatory enforcement can't solve the abusability problem, Soltani says. The victims of the indirect abuse he's warning about often have no relationship with the company, so they can't level accusations of deception. But even without that immediate regulatory threat, Soltani argues, companies should still fear reputational damage or knee-jerk government overreactions to the next scandal. As an example of the latter, he points to the controversial FOSTA anti-sex-trafficking law passed in early 2018.

All of that means Silicon Valley needs to put the kind of thinking and resources into abusability that security—not to mention growth and revenue—has received for years. "There are opportunities in academia, in research, in science fiction, to at least inform some of the known knowns," Soltani says. "And potentially some of the unknown unknowns too."

