What Happens To Privacy When The Internet Is In Everything?

This week Google’s Eric Schmidt was on a panel at the World Economic Forum in Davos, where he suggested that the future Internet will be, in one sense, invisible — because it will be embedded into everything we interact with.

“The Internet will disappear,” he predicted (via The Hollywood Reporter). “There will be so many IP addresses…so many devices, sensors, things that you are wearing, things that you are interacting with that you won’t even sense it. It will be part of your presence all the time.

“Imagine you walk into a room, and the room is dynamic. And with your permission and all of that, you are interacting with the things going on in the room.”

This is not an especially outlandish forecast, given the trajectory of connected devices. Analyst Gartner calculated there were some 3.8 billion such ‘smart objects’ in use last year, and forecast 4.9 billion this year — rising to 25 billion in circulation by 2020. (The global human population was estimated at around seven billion, at the last count.) In other words, the sensornet is here; it’s just not densely (or evenly) distributed yet.

Google already owns Nest, a maker of connected devices for the home, such as a smoke alarm and learning thermostat. Google-Nest also owns Dropcam, a Wi-Fi security camera maker. Mountain View is clearly making a bid to be the nexus of the ‘connected home’ — which, along with the ‘connected car’ (of course Google is also building driverless, Internet-tethered cars), is the early locus for the sensornet. See also: wearables (‘connected people’), and the fact that smartphones are gaining additional embedded sensors, turning our pervasive pocket computers into increasingly sensory mobile data nodes.

One of Davos’ more outlandish (perhaps) predictions for our increasingly connected future came from a group of Harvard professors who apparently sketched a scenario where mosquito-sized robots buzz around stealing samples of our DNA, as reported by Mail Online. “Privacy as we knew it in the past is no longer feasible,” computer science professor Margo Seltzer is quoted as saying. “How we conventionally think of privacy is dead.”

What Seltzer was actually arguing is that it takes no sneaky, DNA-stealing robo-mosquitos for connected technologies to violate our privacy. The point, she later told TechCrunch, is that we are already at a privacy-eroding tipping point — even with current-gen digital technologies, let alone anything so futuristic as robotic mosquitos.

“The high order message is that we don’t need pervasive sensor net technologies for this to be true. We merely have to use technologies that exist today: credit cards, debit card, the web, roads, highway transceivers, email, social networks, etc. We leave an enormous digital trail,” she added.

Seltzer was also not in fact arguing for giving up on privacy, even if the Mail’s article reads that way, but rather for regulating data and data usage instead of trying to outlaw particular technologies.

“Technology is neither good nor bad, it is a tool,” she said. “However, hammers are tools too. They are wonderful for pounding in nails. That doesn’t mean that someone can’t pick up a hammer and use it to commit murder. We have laws that say you shouldn’t murder; we don’t specialize the laws to call out hammers. Similarly, the laws surrounding privacy need to be laws about data and usage, not about the technology.”

With your permission

What especially stands out to me from Schmidt’s comments at Davos is his afterthought caveat — that this invisible, reactive, all-pervasive future sensornet will be pulling its invisible strings with your permission.

Perhaps he was paying lip-service to the warning from the FTC’s Chairwoman, Edith Ramirez, at CES earlier this month that building connected objects — the long-discussed ‘Internet of Things’ — demands a new responsibility from businesses and startups to bake security and privacy protections into their products right from the get-go.

“[The Internet of Things] has the potential to provide enormous benefits for consumers, but it also has significant privacy and security implications,” she warned. “Connected devices that provide increased convenience and improve health services are also collecting, transmitting, storing, and often sharing vast amounts of consumer data, some of it highly personal, thereby creating a number of privacy risks.”

Ramirez said that unless businesses adopt security by design, engage in data minimization rather than logging everything they can, and are transparent about the data they are collecting — and who else they want to share it with — by providing notifications and opt-outs to users, the risks to users’ privacy and security are enormous.

The problem with those well-meaning words from a consumer watchdog organization is that we are already struggling to achieve such rigorous privacy standards on the current Internet — let alone on a distributed sensornet where there’s no single, controllable entry point into our lives. The Internet and the mobile Internet can still be switched off, in extremis, by the user turning off their router and/or powering their phone down (and putting it in the fridge if you’re really paranoid, post-Snowden).

But once a distributed sensornet achieves a certain penetration tipping point in the objects that surround us, the sheer number of devices involved is going to take away our ability to trivially pull the plug. Unless, that is, some kind of regulatory layer is also erected to provide a framework for usage that works in the interests of privacy and consumer control.

Without such consumer-oriented controls embedded into this embedded Internet, the user effectively loses the ability to take themselves offline, given that the most basic level of computing control — the on/off switch — is being subducted beneath the grand, over-arching utility of an all-seeing, always-on sensornet. (Battery life constraints, in this context, might be viewed as a privacy safeguard, although low-power connectivity technologies, such as Bluetooth Low Energy, work to circumvent that limit.)

In parallel, a widely distributed Internet of Things likely demands greater levels of device automation and autonomy, given the sheer number of connected devices and the inexorable gains in complexity generated by a dense network of networked objects. And more automation, in turn, risks reducing user control.

Connected objects will be gathering environmental intelligence, talking to each other and talking to the cloud. Such a complex, interwoven web of real-time communications might well generate unique utility — as Schmidt evidently believes. But it also pulls in increased privacy concerns, given how many more data points are being connected and how all those puzzle pieces might slot together to form an ever more comprehensive, real-time representation of the actions and intentions of the people moving through this web.

Earlier-generation digital technologies like email were not engineered with far-sighted privacy protections in mind. Which is why they have been open to abuse — to being co-opted as part of a military-industrial surveillance complex, as the Snowden revelations have shown, offering a honeypot of metadata for government intelligence agencies to suck up. Imagine what kind of surveillance opportunities are opened up by an ‘invisible’ Internet — one that is everywhere yet perceptually nowhere, encouraging users to submit to its data-mining embrace without objection. After all, how can you resist what you can’t really see or properly control?

That is exactly the Internet that Schmidt wants to build, from his position atop Google’s ad sales empire. The more intelligence on web users Google can harvest, the more data it can package up and sell to companies who want to sell you stuff. Which, for all Google’s primary-colored, doodle-festooned branding, is the steely core of its business. Mountain View has long talked about wanting search to become predictive. Why? Because marketing becomes a perfect money-pipe if corporates can channel and influence your real-time intentions. That’s the Google end game.

Learning about human intention from the stuff people type into search engines is laughably crude compared to how much can be inferred from a sensornet that joins up myriad real-time data-dots and applies machine-learning data-mining algorithms dynamically. More dots are already being joined by Google, across multiple web products and its mobile platform Android — which brings it a rich location layer. Doing even more and deeper data mining is a natural evolution of its business model. (Related: Google acquired AI firm DeepMind last year — a maker of “general-purpose learning algorithms”.)

The core reality of the Internet of Things is that a distributed network of connected objects could be deliberately engineered to catch us in its web — triangulating our comings and goings as we brush past its myriad nodes. The more connected objects surround us, the more data points wink into existence to be leveraged by the Googles of the digital world to improve the accuracy and texture of their understanding of our intentions, whether we like it or not.

So while the future Internet may appear to fade into the background, as Schmidt suggests, that might just signify a correspondingly vast depth of activity going on behind the scenes: all the processing power required to knit together so many connections and weave a concealed map of who we are and what we do.

The risk here, clearly, is that our privacy is unpicked entirely. That an embedded ‘everywhere Internet’ becomes a highly efficient, hugely invasive machine analyzing us at every turn in order to package up every aspect of our existence as a marketing opportunity. That’s one possible future for the sensornet.

But it seems to me that this defeatist argument is also part of the spin that vested interests like Google, whose business models stand to benefit massively, engage in when they discuss the digital future they are trying to shape. Technology is a tool. Diverse applications are possible. And just because technology makes something possible does not mean it is inevitable.

As Seltzer says, we need to be thinking about how we want the data to flow or not flow, rather than throwing our hands up in horror or defeat. What is also clearly necessary — indeed, I would argue, imperative — is joined-up thinking from regulators to comprehend the scope of the privacy risks posed by increasingly dense networks of networked objects, and how the accumulation of data points can collectively erode consumer privacy. A clear-sighted strategy for ensuring end users can comprehend and control the processing of their personal data is paramount.

Without that, the risk for startup businesses playing in this space is that the rise of more and more connected devices will be mirrored by a parallel rise in human mistrust of increasingly invasive products and services.

In the hyper-personal realm of the Internet of Things, user trust is critical. So building a framework to regulate the data flows of connected devices now, while the sensornet is still in its infancy, is imperative for everyone involved.

In the offline world we have cars and roads. We also have speed limits — for a reason. The key task for regulators now, as we are propelled towards a more densely packed universe of connected devices, is coming up with the sensornet’s speed limits. And fast.