The Next Big Gasoline Shortage Is Coming

If the pandemic has taught us anything, it’s that we cannot ignore the warning signs for future catastrophes.

[Illustration: a snake crawling out of a laptop. Paul Spella / Getty / The Atlantic]

In North Carolina, where I live, only about one-third of gas stations are currently reporting that they have any gas, and that’s after some improvement in availability. A ransomware attack shut down a key pipeline supplying these stations, an event that could, but likely won’t, serve as a wake-up call before we experience a true catastrophe.

Prior to the pandemic, I wrote a lot about digital security, or the lack thereof. I once compared our security status quo to “building skyscraper favelas in code—in earthquake zones.” Not much has changed since then, but we are starting to hear more rumbles.

The dynamics of digital insecurity, ransomware, and related threats are eerily similar to the global public health dynamics before the pandemic. Battlestar Galactica helps explain one key similarity: Networked systems are vulnerable. The premise of the series is that the battlestar Galactica, and only Galactica, survived an attack by the Cylons (humanoid robots) on the human fleet simply because it was old and had just been decommissioned, in the process of being turned into a museum. Being older, it had never been networked into the fleet’s systems. The “shutdown” command sent by the attackers never reached it, and it was thus spared.

In pandemic terms, Galactica was an island that no one could travel to.

Our software infrastructure is not built with security in mind. That’s partly because a lot of it depends on older layers, and partly because there has been little incentive to prioritize security. More operating systems could have been built from the start with features such as “sandboxing,” in which a program can play only in a defined, walled-off area called a “sandbox” that is unreachable by anything else. If that program is malicious, it can do damage only inside its sandbox. (This is analogous to “air gapping,” in which crucial parts of a network are kept physically disconnected from the rest of the network’s infrastructure.)
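That confinement principle is simple enough to sketch in a few lines. Below is a minimal illustration in Python (my own toy example, not any operating system’s actual sandboxing API; the SANDBOX_ROOT directory and the helper name are invented for this sketch) of a program that refuses to touch anything outside its walled-off directory.

```python
from pathlib import Path

# Hypothetical sandbox root: the only directory this program is allowed to touch.
SANDBOX_ROOT = Path("/tmp/sandbox").resolve()

def open_in_sandbox(requested: str, mode: str = "r"):
    """Open a file only if it resolves to a location inside SANDBOX_ROOT.

    Path.resolve() follows symlinks and normalizes "..", so an escape
    attempt such as "../../etc/passwd" is caught before any file is opened.
    """
    target = (SANDBOX_ROOT / requested).resolve()
    if target != SANDBOX_ROOT and SANDBOX_ROOT not in target.parents:
        raise PermissionError(f"{requested!r} escapes the sandbox")
    return open(target, mode)

# open_in_sandbox("notes.txt", "w")      # allowed: stays inside the walls
# open_in_sandbox("../../etc/passwd")    # refused: raises PermissionError
```

Real sandboxes enforce this kind of wall at the operating-system level, where a compromised program cannot simply skip the check, but the principle is the same: the damage stays inside the walls.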

Adding security after the fact to a digital system that wasn’t built for it is very hard. And we are also surrounded by “technical debt,” programs that work but were written quickly, sometimes decades ago, and were never meant to scale to the degree that they have. We don’t mess with these rickety layers, because it would be very expensive and difficult, and could cause everything else to crumble. That means there is a lot of duct tape in our code, holding various programs and their constituent parts together, and many parts of it are doing things they weren’t designed for.

Our global network isn’t built for digital security. As I wrote in 2018, the early internet was intended to connect people who already trusted one another, such as academic researchers and military personnel. It never had the robust security that today’s global network needs. As the internet went from a few thousand users to more than 3 billion, attempts to strengthen security were stymied by cost, shortsightedness, and competing interests.

Even putting aside the security of our networks, our ordinary devices are sometimes shipped with passwords drawn from a preexisting list that includes the very-hard-to-crack “password,” “1234,” and “default.” In 2019, I explained how vulnerable that leaves us, using the example of interlinked zombie baby monitors being used to cripple infrastructure (such as by bringing down cell-communication networks in Liberia) or to censor journalists:

Most of our gizmos rely on generic hardware, much of it produced in China, used in consumer products worldwide. To do their work, these devices run software and have user profiles that can be logged into to configure them. Unfortunately, a sizable number of manufacturers have chosen to allow simple and already widely known passwords like “password,” “pass,” “1234,” “admin,” “default” or “guest” to access the device. In a simple but devastating attack, someone put together a list of 61 such user name/password combinations and wrote a program that scans the Internet for products that use them. Once in, the software promptly installs itself and, in a devious twist, scans the device for other well-known malware and erases it, so that it can be the sole parasite. The malicious program, dubbed Mirai, then chains millions of these vulnerable devices together into a botnet—a network of infected computers. When giant hordes of zombie baby monitors, printers and cameras simultaneously ping their victim, the targeted site becomes overwhelmed and thus inaccessible unless it employs expensive protections.
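To see how little sophistication this requires, here is a minimal sketch in Python. The credential list is a short excerpt of the kind of widely published factory defaults that Mirai scanned for; the device inventory is hypothetical, and the sketch is framed as an owner’s audit of the idea rather than as attack tooling.

```python
# A short excerpt of the kind of widely published factory-default
# logins that Mirai-style worms scan for.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "password"),
    ("root", "1234"),
    ("guest", "guest"),
    ("user", "default"),
}

def is_vulnerable(username: str, password: str) -> bool:
    """True if this login pair appears on the well-known defaults list."""
    return (username, password) in DEFAULT_CREDENTIALS

# Audit a hypothetical inventory of home devices.
devices = [
    ("baby monitor", "admin", "admin"),      # factory default: exposed
    ("printer", "root", "1234"),             # factory default: exposed
    ("camera", "admin", "c0rrect-h0rse!"),   # changed password: safe
]
for name, user, pw in devices:
    print(f"{name}: {'VULNERABLE' if is_vulnerable(user, pw) else 'ok'}")
```

Mirai’s entire advantage was that essentially this check, pointed at the open internet, succeeded millions of times.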

Many problems like these aren’t fixed, because of what economists call “negative externalities”: Shipping insecure software or devices costs the company nothing (the harms fall on users and the public), while fixing the flaws is expensive and provides no immediate reward. It’s like telling factories that they can pollute as much as they want, dumping their waste into the air or a nearby river, or they can choose to install costly filtering systems, in a setup where the pollution isn’t quickly visible through smell or appearance. You can guess what happens: The companies don’t worry about it, because they don’t have to.

It’s actually surprising that digital hacks and ransomware attacks don’t happen more often, given how widespread these problems are. There has been hack after hack, thefts of profitable data (as in the Equifax breach), and devices chained together for denial-of-service attacks—and little to no accountability. And just as with the pandemic, our digital vulnerability is rooted in a connected network with coupled vulnerabilities: Like the biological viruses that travel when we do, malware and software viruses can travel through interconnected networks (which are now everywhere, as software eats the world). And in a coupled system, when one thing goes wrong, it usually drags other things down with it. Tightly coupled systems are prone to cascading failures, in which one failure essentially triggers an avalanche.
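That avalanche dynamic can be made concrete with a toy model (my own illustration, with made-up parameters, not a result from the security literature): wire nodes together at random, fail one, and let each failure jump across each link with some probability. Below a coupling threshold the failure stays local; above it, a single failure reliably engulfs much of the network, much like a pathogen whose reproduction number exceeds 1.

```python
import random

def cascade_size(n_nodes: int, n_links: int, coupling: float, rng: random.Random) -> int:
    """Wire n_nodes together with n_links random links, fail node 0, and
    let each failure spread across each attached link with probability
    `coupling`. Returns the total number of nodes that end up failed."""
    neighbors = [[] for _ in range(n_nodes)]
    for _ in range(n_links):
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        neighbors[a].append(b)
        neighbors[b].append(a)

    failed, frontier = {0}, [0]
    while frontier:
        node = frontier.pop()
        for nxt in neighbors[node]:
            if nxt not in failed and rng.random() < coupling:
                failed.add(nxt)
                frontier.append(nxt)
    return len(failed)

# Average degree here is 4, so the rough threshold is a coupling of 1/4:
# below it, cascades stay tiny; above it, they engulf the network.
rng = random.Random(42)
for coupling in (0.1, 0.25, 0.5, 0.9):
    runs = [cascade_size(2000, 4000, coupling, rng) for _ in range(100)]
    print(f"coupling={coupling}: largest cascade = {max(runs)} of 2000 nodes")
```

The unsettling part is how sharp the transition is: nothing about the initial failure changes, only how tightly the system is coupled.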

Before bitcoin, there was no obvious way to monetize all of this digital malfeasance. Despite its freewheeling appearance, the global financial sector is fairly heavily regulated. People may be deceived by how easily money can be transferred here or there within the system, but laundering large amounts of illicit gains from outside the system into the kind of money that can be spent freely in legal markets is not that easy if the sums are large enough and the regulators at a few choke points are dead set against it. Of course, such laundering happens all the time (by drug cartels, for example), but those are big, professional operations, and it’s not easy even for them. These choke points include the SWIFT money-transfer system, the U.S. Treasury’s Office of Foreign Assets Control, and the U.S. attorney for the Southern District of New York, where Wall Street is located.

Of course, bitcoin changes this calculus, or at least the temptation to try. It’s still not as easy as people might think to use bitcoin to move truly large amounts of money out of the system—to buy things with it, or to turn it into cash. Small amounts, sure. The kind of sums that would make large-scale fraud attractive? Those would be much harder to move without being traced. However, bitcoin sure makes it more tempting to try, even for small sums. A lot of ransomware attacks aren’t for huge sums, meaning that bitcoin and the cryptocurrency ecology have given ransomware a scalable business model, at least in the minds of its “entrepreneurs.”

This is a very costly problem to fix. A solution would require our government to shift its priorities, and it would require a regulatory environment that encourages, and forces, different practices and that devotes real resources to the issue. Programs would need to be more reliable, crucial functions would need to be isolated, and external audits would need to be commonplace.

Some of the steps we could take on the financial side—such as targeting the ways in which people can launder money out of the cryptocurrencies they have acquired through such illicit activities—may be practically easy, but they raise a lot of thorny questions too. Would that mean finally looking at regulations for cryptocurrencies? That would bring up how they have become speculative tools as well, which raises an issue that’s even more fundamental: how the global economy keeps producing asset bubbles and massive waves of speculation, like the one that led to the 2008 financial crisis. And that problem relates to the concentrated nature of global wealth chasing returns, and the lack of strong oversight of the implications of this chase. All of this is to say that, just as with technical debt, duct-taping our way out of the immediate crisis does not address the fundamental problems.

Addressing digital insecurity would also entail better regulation up and down the technical stack, so that the negative externalities become internal costs instead, and the companies are responsible for solving the problems they create.

The more likely scenario is that there will be moves on the financial side (making it harder to launder large sums from cryptocurrencies into the regular financial system) and on the state-sector side (you can disincentivize another government from hacking your infrastructure, but doing that with independent players is much harder). There may also be efforts to “make an example” of a few high-profile ransomware attempts: tracking down the perpetrators and handing down massive sentences. This isn’t as difficult as it sounds, but it requires resources. If ransomware attempts proliferate, though, punishment will not be as effective a deterrent, because most attackers will not be caught; there will simply be too many of them. That would set up a catastrophe lottery for the ransomware folks: Most of them probably will not be snared, but the few who are will be crushed.

Again, I’m reminded of our pre-pandemic era: We knew that a major threat was afoot, and that our infrastructure was lacking. We had the Ebola crisis of 2014 to 2016, during which we worried more about slight risks to ourselves than about strengthening our global response; we had SARS in 2003, which was only barely kept from becoming a pandemic; and we had the HIV/AIDS catastrophe beginning in the 1980s, which was marked by an inexcusable delay in providing access to affordable medications globally. Did we move to truly fix the things that those experiences revealed to be lacking? We did not. Meanwhile, my Honda Civic has half a tank of gas, so I’ll be fine for now, but I’m not so sure about the future of the networked world.

Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.