The Blockchain Solution to Our Deepfake Problems

Technology to hack videos will only keep getting better. A decentralized ledger might help us know when we're seeing the truth.

Blockchain has always seemed to me like a solution looking for a problem, which isn't a criticism. The laser, the transistor, and the integrated circuit all lingered, underutilized, until the technology evolved, a complementary technology matured, or some clever entrepreneur enabled their wide and disruptive adoption. Or take barcodes, which were first deployed with middling success to track train cars, and only took on the universality of their "UPC" name when cash registers became more than mechanical contraptions.

Currencies such as Bitcoin may well be only the first iteration of a blockchain-powered technology, but one that could disappear in a puff of smoke and a pool of investor tears if the speculative bubble pops. However, the notion of a decentralized, public ledger of consensus-driven facts about the world—which is what a blockchain fundamentally is—has a utility well beyond wild-eyed, crypto-anarchist dreams.

Take our latest truth crisis: near-photorealistic video editing and creation, otherwise known as deepfakes. Briefly, anyone’s face can now be superimposed on anyone else’s, creating uncannily authentic videos of just about anything. (Yes, including that.) The technology is getting so good that most anyone will be able to create arbitrary scenes of almost any variety, not just Hollywood studios creating fleeting resurrections of dead actors in franchise sequels. The last remaining artifact that we take as an accurate portrayal of physical reality—the moving image, captured in real time—will now be as plastic and mutable as a paint canvas, or the historical photos that the Soviet apparatus once clumsily modified (or we more slickly Photoshop now). Most fake news now is textual, with some photo hacking thrown in. But remember the hubbub that a few seconds of Clinton stumbling into a vehicle at a 9/11 event caused during the last election? Well, imagine dozens of versions of that, plus any flavor of Trumpian "pee tape" you might like.

My technofuturist credulity bar holds that no technology can be taken seriously until it’s used for either crime or porn (or both). Bitcoin certainly met the former bar by being the currency of choice for illicit online marketplaces like Silk Road. And deepfakes are, of course, now being used for the latter.

What problems powerful computation gives, powerful computation can (usually) also take away, at least partially. Gfycat, a popular video hosting and editing platform, is already running AI over submitted videos to pick out fakes. The problem is that these approaches tend to work only on celebrity faces, which offer lots of training data for the AI to learn from. Police bodycams, along with any flavor of amateur GoPros or nanny cams, are still fair game. Even in the case of celebrities, video hacking technology will get better—and the fakers will start gaming the automated defenses, as they now do with every programmatic hurdle that, say, Facebook throws up to fake news.

“What is truth?” asked a cynical Pontius Pilate in the Gospel of John, when tasked with adjudicating the contrasting accounts of a certain meddlesome Judean preacher. Courts have long engineered some legal version of reality, whether it's the two witnesses required in Deuteronomy to convict someone of a capital crime, or the DNA tests and footage often used today. Video, whether from a victim's smartphone or a police bodycam, is still the key make-or-break element of many a defense or prosecution, with no alternative in sight.

What then would be the ideal architecture of a video “truth” infrastructure, one that could send someone to prison for years, or exonerate someone from the same fate? Well, it would be decentralized (no single arbiter of truth) and public (we can all check it), which is precisely what Bitcoin’s blockchain provides for payments.

Can the greedy bubble of Bitcoin be repurposed toward a less monetary goal?

A three-year-old Austin, Texas-based company named Factom thinks so. Building on top of the existing Bitcoin infrastructure almost as if it were the network layer of a new truth web, Factom provides a streamlined way to assert the existence of a piece of data or document at a certain time. Since the blockchain isn’t designed to store reams of streaming data (e.g. a 24/7 security camera), Factom hashes and organizes incoming data to establish proof that some specific information exists. In practice, this would mean that, say, 10-minute blocks of video from a given camera would live inside the Factom data structure, and “truth” could be assured for that window of time, with one such assertion for each window in a chain stretching back however long the camera has been recording. Factom assures what’s known as “data integrity” in both senses of the word integrity: whole and honest. By combining that with a hardware solution that digitally signs and hashes the data instantly, right as the pixels are pulled off the camera, one can confidently claim that a video is “real” and was really taken by the camera that digitally signed the data.
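The hash-sign-and-chain idea above can be sketched in a few lines of Python. This is a minimal illustration, not Factom's actual data structure: the window contents, key, and field names are invented for the example, and an HMAC with a shared secret stands in for the asymmetric signature a real signing camera would compute in hardware.

```python
import hashlib
import hmac
import json

# Hypothetical device secret; a real signing camera would hold an
# asymmetric private key in hardware rather than a shared secret.
DEVICE_KEY = b"camera-0042-secret"

def sign_window(chunk: bytes, prev_sig: str, ts: float) -> dict:
    """Hash one window of video, link it to the previous entry, and sign it."""
    chunk_hash = hashlib.sha256(chunk).hexdigest()
    payload = json.dumps({"chunk": chunk_hash, "prev": prev_sig, "ts": ts},
                         sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"chunk": chunk_hash, "prev": prev_sig, "ts": ts, "sig": sig}

def verify_chain(chunks, entries) -> bool:
    """Re-hash every window and re-check every link and signature."""
    prev_sig = "genesis"
    for chunk, e in zip(chunks, entries):
        if e["prev"] != prev_sig:
            return False  # chain link broken: an entry was reordered or removed
        if e["chunk"] != hashlib.sha256(chunk).hexdigest():
            return False  # video bytes no longer match the recorded hash
        payload = json.dumps({"chunk": e["chunk"], "prev": e["prev"], "ts": e["ts"]},
                             sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(e["sig"], expected):
            return False  # signature forged or payload altered
        prev_sig = e["sig"]
    return True

# Three stand-ins for consecutive 10-minute windows of raw video.
windows = [b"window-0-bytes", b"window-1-bytes", b"window-2-bytes"]
entries, prev = [], "genesis"
for i, w in enumerate(windows):
    entry = sign_window(w, prev, 1700000000.0 + 600 * i)
    entries.append(entry)
    prev = entry["sig"]

assert verify_chain(windows, entries)                                   # untouched footage checks out
assert not verify_chain([windows[0], b"forged", windows[2]], entries)   # an edited window is caught
```

Because each entry commits to the previous one, altering any single window invalidates every entry after it; anchoring the newest signature into a public ledger then pins the whole chain in time.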

The Department of Homeland Security, which maintains an array of cameras and sensors along our country's southern border, is now testing Factom’s newfangled truth recorder. The fear is that those border cameras will be hacked by sophisticated smugglers (of the drug or human variety) who buy their own cameras, wire them to show whatever false scene, and then plug them back into the DHS network. The smugglers carry on while the border’s overseers stare at a contrived scene of false tranquility. Border videos can also be used as evidence in immigration trials—another legal showcase where the juridical definition of truth is key.

But that’s a court of law, not our social media awash in dueling versions of political “fake news.” Will all this sophisticated wiring together of hash functions and networked computers, the ballyhooed blockchain, really make a difference among the jury of voters and that important verdict they regularly issue—namely, an election result? A recent study about social media and the impact of bots on the dissemination of fake news does not bode well for our species. According to the results published by an MIT team in the journal Science, bots routinely spread as many true as false pieces of news, nullifying their net effect. It was humans who preferentially and nefariously spread pleasant-sounding (to some ears) lies. Such was their inverted selectivity that fake news traveled six times faster on Twitter than real news, proving scientifically what Jonathan Swift once stated satirically: "Falsehood flies, and truth comes limping after it." On Twitter, the falsehood zips around at the speed of Wi-Fi on 280-character wings, with Twitter’s operations team limping after it.

Our real political enemy isn’t Russian bots but human credulity, as well as a nagging psychological quirk known as cognitive dissonance. The former makes us want to believe whatever flatters our existing worldview; the latter makes us double down on those false beliefs, not despite credible evidence to the contrary, but precisely because of it.

Factom’s use of blockchain, assuming it gains wider adoption, may well help change how the law defines truth. Outside of a courtroom, though, we’ll still fall into the temptations of magical thinking. The truth could be sitting there right in front of us, as cryptographically secure and public as can be, and we’d all stubbornly refuse to see it. What is truth? Whatever we want it to be right then, and whatever we’re eager to preach as truth to others, at internet scale.

