In the “good old days” of cybersecurity risk, we only had to worry about being hacked or downloading malware. But the stakes have ramped up considerably in the past decade, Berkman Klein Center for Internet & Society Directors James Mickens and Jonathan Zittrain ’95 told a Zoom audience on Monday. The more algorithms affect our lives, they suggested, the greater the chance for a security breach to have huge societal impact.

Their virtual conversation was given the title “Cybersecurity: How Far Up the Creek Are We?” But the talk actually went deeper, examining how murky the creek itself has become. “There’s never been a time at which cybersecurity has been great,” Mickens, a professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences, said at the outset. But with new algorithms determining everything from who should get job interviews to who should go to jail, and with the rise of the IoT (the “internet of things,” the growing web of internet-connected everyday objects like smartphones and appliances), security questions have become more complex. “It used to be, ‘Can my data be stolen?’ But as the technology becomes more pervasive, you have to start thinking about bigger questions: If they can access my system in some seemingly benign way, can they game or influence it?”

Zittrain, the George Bemis Professor of International Law at Harvard Law School and a co-founder of the Berkman Klein Center, identified a few large-scale threats, including individuals being spied on through their devices and college-admission systems being hacked. While “old-school” attacks had a relatively easy fix, such as anti-virus software, he asked what the equivalent was for modern attacks. Mickens responded that it is not something you can solve with a purchase from an app store; instead, he said, security needs to be built in during the design process.

“Trying to achieve security is something of a design attitude—where at every level in your system design, you are thinking about the possible things that can go wrong, the ways the system can be influenced, and what circuit-breakers you might have in place in case something unforeseen happens,” said Mickens. “That seems like a vague answer because it is: There isn’t a magic way to do it.”
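Mickens’s “circuit-breaker” metaphor maps loosely onto a familiar defensive pattern in software: wrap a risky operation so that repeated failures trip a breaker and block further calls until someone can investigate. The Python sketch below is only an illustration of that general idea, not anything presented in the talk; the risky_operation function, thresholds, and cooldown are hypothetical placeholders.

```python
import time

class CircuitBreaker:
    """Stop calling an unreliable or possibly compromised operation
    after too many failures, until a cooldown period has passed."""

    def __init__(self, max_failures=3, cooldown_seconds=60):
        self.max_failures = max_failures
        self.cooldown_seconds = cooldown_seconds
        self.failure_count = 0
        self.opened_at = None  # time the breaker tripped, if any

    def call(self, operation, *args, **kwargs):
        # If the breaker is open, refuse the call until the cooldown ends.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.cooldown_seconds:
                raise RuntimeError("circuit open: refusing to call operation")
            # Cooldown elapsed: cautiously allow one trial call.
            self.opened_at = None
            self.failure_count = 0

        try:
            result = operation(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker
            raise
        else:
            self.failure_count = 0  # a healthy call resets the count
            return result


def risky_operation():
    """Hypothetical stand-in for any call that might fail or be abused."""
    raise ConnectionError("upstream service misbehaving")


breaker = CircuitBreaker(max_failures=2, cooldown_seconds=30)
for _ in range(3):
    try:
        breaker.call(risky_operation)
    except Exception as exc:
        print(f"call blocked or failed: {exc}")
```

The point of the pattern is exactly the “circuit-breaker” attitude Mickens describes: the system assumes something unforeseen will go wrong and has a built-in way to stop the bleeding while humans sort it out.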

Designers, Mickens continued, might even need to consider the political or ethical mindset of the people using their system. “There’s no simple way to figure out if our system is going to be used ethically or not, because ethics itself is very poorly defined. And when we think about security, we need to have a similarly broad attitude, saying that there are fundamental questions which are ambiguous, and which have no clean answer—‘What is security and how do I make my product secure?’ As a result, we need to be more imaginative than we are right now.”

Thus, suggested Zittrain, the question has moved to the supply side: Consumers want safe products, and the onus is on designers to provide them. This, he said, opens an even thornier question: Does there need to be a regulatory board for people producing code, and if not, “What would incent the suppliers to worry about systematic risks that might not even be traced back to them?”

“The citizen side in me says yes: Certainly, we need regulation to force these tech companies to do the right thing,” Mickens said. But as a computer scientist, he wondered what such regulations would actually look like: Would they be technically sound and clearly written? As an example, he cited the EU’s GDPR (General Data Protection Regulation), on its surface a good set of rules that grants consumers ownership of their data. But he noted how quickly the legislation gets thorny in practice: Suppose one customer’s Fitbit data gets uploaded to a larger database, and the customer demands ownership of that data. How would it be isolated from the larger pool and returned? “When you look at how companies try to comply with the GDPR, a lot of it is prayer-based: They don’t understand exactly what the GDPR is asking of them, and there isn’t a lot of good tech support that would help people to comply with these laws. So, it’s a bit of a mess.”
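The Fitbit example turns on a practical detail: pooled data can only be returned or erased per person if every record was tagged with a stable subject identifier when it was collected. The sketch below illustrates that idea with a hypothetical in-memory pool; it is not real Fitbit data or GDPR-compliance tooling, and the field names are made up.

```python
# Hypothetical pooled dataset: every record carries the subject's ID,
# which is what makes later extraction or erasure possible at all.
pooled_records = [
    {"subject_id": "user-17", "date": "2024-03-01", "steps": 8214},
    {"subject_id": "user-42", "date": "2024-03-01", "steps": 10530},
    {"subject_id": "user-17", "date": "2024-03-02", "steps": 6402},
]

def export_subject_data(records, subject_id):
    """Return a copy of every record belonging to one data subject
    (a simplified 'right of access'-style query)."""
    return [dict(r) for r in records if r["subject_id"] == subject_id]

def erase_subject_data(records, subject_id):
    """Return the pool with one subject's records removed
    (a simplified 'right to erasure'-style operation)."""
    return [r for r in records if r["subject_id"] != subject_id]

print(export_subject_data(pooled_records, "user-17"))
pooled_records = erase_subject_data(pooled_records, "user-17")
print(len(pooled_records))  # one record left
```

If the records had instead been aggregated without per-subject identifiers, no such query would exist, which is part of why compliance so often ends up, in Mickens’s phrase, “prayer-based.”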

At the moment, Mickens said, even a trusted company might be providing bad apps. He said he had recently been using Duolingo, the language-learning app available in both the Google and Apple app stores. One of the ads that came up was for a game that looked suspect. “It had clearly been designed in a week and the reviews had to be written by bots. I’m looking at it and I’m saying, ‘That’s malware, a hundred percent of the time.’ So, when you allow an open app store, you allow a lot of vulnerabilities. In the Google world, and to some extent the Microsoft world, you can still load things that aren’t good for you.”
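Mickens’s hunch that the reviews “had to be written by bots” hints at one crude signal defenders do look for: fake reviews tend to be near-duplicates of one another. The toy heuristic below is purely illustrative, with made-up review text; real app-store detection systems are far more sophisticated and were not described in the talk.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical review texts for a suspicious-looking app.
reviews = [
    "Best game ever, five stars, download now!",
    "Best game ever five stars download now",
    "Best game ever, 5 stars, download now!!",
    "Crashed twice on my phone, uninstalled.",
]

def near_duplicate_ratio(texts, threshold=0.85):
    """Fraction of review pairs that are nearly identical.
    A high value is one weak signal of bot-generated reviews."""
    pairs = list(combinations(texts, 2))
    if not pairs:
        return 0.0
    similar = sum(
        1 for a, b in pairs
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
    )
    return similar / len(pairs)

print(f"near-duplicate pair ratio: {near_duplicate_ratio(reviews):.2f}")
```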

Zittrain said that many of his HLS students have wondered why there aren’t the same legal penalties for building vulnerable software as there might be for putting a dangerous product in the supermarket. “The weird answer is the happenstance of American common law, that purely economic damages usually aren’t recoverable from mere negligence. If it doesn’t hurt you physically and only makes you deeply upset, no case. We teach the exceptions, but those are rare.”

During the Q&A session afterward, the two identified a looming problem: with much of the world’s information consolidated in three major cloud systems (AWS, Azure, and Google Cloud), those platforms are natural targets for future security attacks. As Zittrain said, “It’s like the old Willie Sutton line. They asked him why he robbed banks and he said, ‘That’s where the money was.’”