
The Privacy Threat From Always-On Microphones Like the Amazon Echo

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
January 13, 2017

A warrant from police in Arkansas seeking audio records of a man’s Amazon Echo has sparked an overdue conversation about the privacy implications of “always-on” recording devices. This story should serve as a giant wake-up call about the potential surveillance devices that many people are starting to allow into their own homes.

The Amazon Echo is not the only such device; others include personal assistants like Google Home, Google Now, Apple’s Siri, and Microsoft’s Cortana, as well as other devices including televisions, game consoles, cars, and toys. We can safely assume that the number of live microphones scattered throughout American homes will only increase to cover a wide range of “Internet of Things” (IoT) devices. (I will focus on microphones in this post, but these devices can include not just audio recorders but video as well, and the same considerations apply.)

The insecurity of a nearby mic

I was at a dinner party recently with close friends where the conversation turned to some entirely theoretical, screenplay-writing-type speculations about presidential assassinations—speculations that would be pretty dicey should certain outside parties, who did not know us or where we were coming from, be listening in. Realizing this as we spoke, the group thought of our host’s Amazon Echo, sitting on a side table with its little light on. The group’s conversation became self-conscious as we began joking about the Echo listening in. Joking or not, in short order our host walked over and unplugged it.

It is exactly this kind of self-consciousness and chilling effect that surveillance—or even the most remote threat of surveillance—casts over otherwise freewheeling private conversations, and it is the reason people need ironclad assurance that their devices will not—cannot—betray them.

Overall, digital assistants and other IoT devices create a triple threat to privacy: from government, corporations, and hackers.

It is a significant thing to allow a live microphone in your private space (just as it is to allow one in our public spaces). Once the hardware is in place, receiving electricity, and connected to the Internet, you’re reduced to placing your trust in two things that are unfortunately less than reliable these days: 1) software, and 2) policy.

Software, once a mic is in place, governs when that microphone is live, when the audio it captures is transmitted over the Internet, and to whom it goes. Many devices are programmed to keep their microphones on at all times but only record and transmit audio after hearing a trigger phrase—in the case of the Echo, for example, “Alexa.” Any device that is to be activated by voice alone must work this way. Other systems take a range of approaches. Samsung, after a privacy dust-up, assured the public that its smart televisions (like others) only record and transmit audio after the user presses a button on the remote control. The Hello Barbie toy only picks up and transmits audio when its user presses a button on the doll.
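
To make that gating concrete, here is a minimal sketch of a wake-word pipeline in Python. It is illustrative only: the mic and cloud objects, the buffer sizes, and the detect_wake_word stub are assumptions for demonstration, not any vendor’s actual code.

    import collections

    SAMPLE_RATE = 16000         # audio samples per second
    FRAME = SAMPLE_RATE // 10   # process audio in ~100 ms chunks
    PRE_ROLL_FRAMES = 10        # keep ~1 s of audio from just before the trigger

    def detect_wake_word(frame):
        """Hypothetical on-device detector; only this code ever 'hears' idle
        audio. Stubbed out here; a real device runs a small local model."""
        return False

    def listen_forever(mic, cloud):
        # The microphone is always live, but audio accumulates only in a short
        # rolling buffer; frames older than the buffer are discarded unheard.
        pre_roll = collections.deque(maxlen=PRE_ROLL_FRAMES)
        while True:
            frame = mic.read(FRAME)      # hypothetical audio-capture call
            pre_roll.append(frame)
            if detect_wake_word(frame):
                # Only now does anything leave the device: the buffered second
                # of audio plus the live request that follows it.
                cloud.stream(b"".join(pre_roll))
                pre_roll.clear()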

Software is invisible, however. Most companies do not make their code available for public inspection, and it can be hacked, or unscrupulous executives can lie about what it does (think Volkswagen), or government agencies might try to order companies to activate a device’s microphone as a surveillance tool.

The dumber and more straightforward a user’s control, the better. Depriving a microphone of electricity by unplugging it and/or removing any batteries provides ironclad assurance that it’s not recording. A hardware switch is nearly as good, provided there’s no software mediation that could be overcome by hackers. (Switches can be bought for just that purpose.) A verbal command is far less certain, and devices like Echo will sometimes misinterpret sounds as their “wake word” and record random snippets of conversation. It’s easy to see how a sentence such as “He was driving a Lexus in a way she said was dangerous” could be heard by an Echo as “Alexa: Sin away she said—was dangerous.” The constant potential for accidental recording means that users do not necessarily have complete control over what audio gets transmitted to the cloud.
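
That kind of misfire is easy to reproduce even with a crude text-level stand-in for acoustic matching. In the illustrative Python snippet below, the string-similarity measure and the 0.7 threshold are assumptions chosen for demonstration; real detectors score audio features, not spelling, but face the same false-accept tradeoff:

    import difflib

    WAKE_WORD = "alexa"
    THRESHOLD = 0.7   # assumed acceptance threshold, for illustration

    def wake_score(snippet):
        """Crude text-level stand-in for an acoustic wake-word match score."""
        letters = "".join(ch for ch in snippet.lower() if ch.isalpha())
        return difflib.SequenceMatcher(None, WAKE_WORD, letters).ratio()

    for snippet in ("Alexa", "a Lexus", "I'll ask her"):
        score = wake_score(snippet)
        verdict = "TRIGGER" if score >= THRESHOLD else "ignore"
        print(f"{snippet!r}: {score:.2f} -> {verdict}")

    # 'Alexa': 1.00 -> TRIGGER
    # 'a Lexus': 0.73 -> TRIGGER   (a false accept)
    # "I'll ask her": 0.29 -> ignore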

Once their audio is recorded and transmitted to a company, users depend for their privacy on good policies—how it is analyzed; how long and by whom it is stored, and in what form; how it is secured; who else it may be shared with; and any other purposes it may be used for. This includes corporate policies (caveat emptor), but also our nation’s laws and Constitution.

Access to recordings by law enforcement

We fear that some government agencies will try to argue that they do not need a warrant to access this kind of data. We believe the Constitution is clear, and that, at a minimum, law enforcement needs a warrant based on probable cause to access conversations recorded in the home using such devices. But more protections are needed. Congress, recognizing the extremely invasive nature of traditional wiretaps, enacted safeguards that go beyond what the courts had ruled the Constitution requires. These include requirements that wiretaps be used only for serious crimes, or be permitted only when other investigative procedures have failed or are unlikely to succeed. We think that these additional privacy protections should also apply to invasive digital devices in the home.

Unfortunately, the existing statutes governing the interception of voice communications are ridiculously tangled and confused, and it’s not clear whether or how data recorded by devices in the home is covered by them. (CDT’s Joe Jerome has written a good roundup of that law and how it might apply to always-on devices.)

When it comes to law enforcement access, the key issues for us as a legal matter are:

  • Breadth. Access needs to be no broader than necessary. Any warrant authorizing access to stored conversations should particularly and narrowly describe the data that law enforcement has probable cause to believe is related to a crime—for example, a specific time period, subject matter, and/or type of activity.
  • Minimization. There need to be protections in place to limit the collection of information that is ultimately irrelevant. In the wiretap context these include rules requiring the police to stop listening when a conversation is irrelevant, and analogous rules should be developed for IoT device data.
  • Notice. Historically, citizens served with search warrants have always received notice that their property is being searched—especially when that property is one’s home—and that practice should not be ended just because searching is moving into the electronic realm. (We discussed this issue in greater depth last year.) Notice of IoT searches should always be served on all affected parties.

In the Arkansas case, the police did serve Amazon with a warrant—but Amazon has fought it, apparently because of overbreadth. Our only information is what the company has said in a statement: that “Amazon objects to overbroad or otherwise inappropriate demands as a matter of course.”

A legal contradiction

Digital assistants, like smart meters and many other IoT devices, split open a contradiction between two legal doctrines that both sit at the core of privacy law:

  1. The sanctity of the home. The inside of the home has for centuries been sacred when it comes to privacy. The Supreme Court has refused to let police use thermal scanners on private homes, for example, despite government protests that the scanners merely measured the heat leaving the home. And although the Court has ruled that dog sniffs for drugs are not a search in cars or in public spaces, it refused to allow them near the home. As Justice Antonin Scalia pointed out in the thermal scanner case, “the Fourth Amendment draws a firm line at the entrance to the house.”
  2. The third-party doctrine. As strong as privacy jurisprudence has been in protecting the home, it has been very weak in another area. Under the Court’s so-called “third-party doctrine,” the Constitution does not require police to get a warrant to obtain people’s records from their bank, telephone company, internet service provider, Google, Amazon, or any other third party. As a result, law enforcement agencies have argued in recent years that they should be able to obtain information such as individuals’ cell phone records, location history, and even emails from companies without a warrant.

The contradiction arises when devices inside the home stream data about activities in that home to the servers of a third-party corporation. Because of the third-party doctrine, to give just one example, police have been obtaining home energy-use data from utilities without a warrant. In a home with a “smart meter,” that data can be so fine-grained that it reveals all kinds of details about what people are doing inside their homes—which appliances they use and when they use them—and even what television shows they watch (based on the patterns of light and dark in a show, which change a television set’s electricity draw). In fact, in the very same Arkansas murder case in which the police are seeking data from the suspect’s Amazon Echo, they built their case against him using warrantlessly obtained data from his smart water meter, which prosecutors say shows he used 140 gallons of water between 1 and 3 a.m. They allege he was trying to hose blood stains off his patio. (He says the AM/PM setting on the meter’s clock was wrong and that he used the water in the afternoon to fill his hot tub.)
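
To see how revealing even a handful of meter readings can be, here is a toy Python version of the matching step behind such inferences. The readings and appliance wattages are invented for illustration; real “non-intrusive load monitoring” techniques are far more sophisticated:

    # Hypothetical half-hourly meter readings, in watts.
    readings = [200, 210, 2210, 2220, 230, 1430, 1440, 220]

    # Assumed appliance power signatures an analyst might match against.
    signatures = {"electric kettle": 2000, "dishwasher": 1200}

    for i in range(1, len(readings)):
        jump = readings[i] - readings[i - 1]
        for appliance, watts in signatures.items():
            if abs(jump - watts) < 100:   # tolerance for measurement noise
                print(f"interval {i}: +{jump} W step suggests the {appliance} switched on")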

The solution to the contradiction between the sanctity of the home and the third-party doctrine is clear: the third-party doctrine must go. It is increasingly untenable in an era when much of the data created about people’s lives sits on the servers of international corporations—and the growth of IoT devices like digital assistants will make its inadequacy in the information age even clearer.

Recommendations

In addition to rigorously applying constitutional privacy protections as outlined above, the following steps should be applied to IoT microphones:

  • Speech fragments transmitted to companies should be retained only for the minimum period necessary, should not be shared absent a warrant, and should not be used for other purposes.
  • Companies should do whatever is necessary to ensure their users have a clear understanding of what data is kept and for how long. That means fine print buried in a click-through agreement is not enough.
  • Users should have access to any of their audio recordings that a company retains, and the option to delete them. Commendably, some companies (Google and Amazon, for example) already do this. It needs to become at minimum an expected, standard best practice.
  • It should become standard for microphones to feature a hardwired, non-software-modifiable LED indicator light that turns on whenever a mic is on (defined as transmitting electrical signals anywhere else). It might make sense for there to be another, separate indicator for when software is recording and/or transmitting signals to the Internet. The more transparency to the consumer, the better.
  • It should also become standard to build in a hardware power switch that physically cuts off electricity to a microphone so that consumers can stop it from recording. As much as possible, the power interruption effected by that switch should be tangible or even visible, so that customers can feel complete certainty that the microphone cannot record, akin to the certainty that comes from putting a bandage (or ACLU sticker) over the camera on one’s laptop.
  • To the greatest extent possible, the code governing the operation of microphones should be public. When people depend on software to protect their privacy, transparent code is the only way to give people assurance a device is doing what it’s supposed to and no more.
  • Special attention should be paid to any capability for remote activation of recording. Best for privacy is for no such activation to be possible. If there is a strong case that consumers may want such a capability, then it should, to the greatest extent possible, be designed so that only consumers themselves can activate it and so that they can permanently disable it if they wish. Consumers must also be given explicit warning where any such capabilities exist.
  • Companies and policymakers need to address the raft of issues around the stability of IoT devices, especially in-home devices with microphones or cameras. When not regularly updated, for example, such devices quickly become security threats. And what happens when the company providing those updates goes out of business or is acquired—or just changes its privacy policy? Devices that start out as private and secure can become a toxic presence inside the home as a result of things happening in the outside world, and right now, consumers are on their own in a Wild West.
  • One of the best things that can be done for privacy is for speech recognition capabilities to be embedded locally in a device, so there’s no need to send audio clips to servers across the Internet. While that can work now for some simple commands, experts say that good recognition of a broader array of speech still requires processing in the cloud. (A sketch of this local-first approach appears after this list.)
  • Legislative privacy protections are also needed. In addition to broad privacy rules governing corporate use of private data, which would help in this area as in so many others, Congress should lay out strong and precise standards for when the government can access data from these new devices. As with wiretaps, the privacy and public interests at stake may require protections beyond a warrant and notice requirement.
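
On the local-processing point above, the sketch below shows the shape of such a local-first design in Python. Everything in it is hypothetical: the command set, the stubbed recognize_locally function, and the fallback behavior illustrate the architecture, not any real assistant’s code.

    from typing import Optional

    # A small, fixed command grammar the device can handle entirely on-device.
    LOCAL_COMMANDS = {"lights on", "lights off", "set a timer", "stop"}

    def recognize_locally(audio: bytes) -> Optional[str]:
        """Hypothetical embedded recognizer limited to LOCAL_COMMANDS.
        Stubbed out here; a real one would run a small local model."""
        return None

    def handle_utterance(audio: bytes) -> str:
        text = recognize_locally(audio)
        if text in LOCAL_COMMANDS:
            return f"executed on-device: {text}"
        # A cloud-dependent assistant would upload `audio` at this point.
        # A local-first design degrades gracefully instead, so raw audio
        # never crosses the network.
        return "command not recognized; audio discarded on-device"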

Again, all of these principles should also apply to video that in-home devices may capture and potentially stream to the cloud, which carries the same threats and problems.

It is a healthy thing that this Arkansas story has sparked a public conversation about always-on devices. We and other privacy advocates have been watching this technology for some time—our allies at the privacy group EPIC wrote a letter to the FTC in 2015 requesting an investigation of always-on devices and their privacy implications. And the Future of Privacy Forum, a DC think tank, produced a helpful report on the issue last year.

But if microphones are going to be part of our daily lives in our intimate spaces, we need broader awareness of the issues they raise, and to settle on strong protections and best practices as soon as possible.
