
Op-Ed Contributors

Why Apple’s Stand Against the F.B.I. Hurts Its Own Customers

Jamil N. Jaffer and Daniel J. Rosenthal


Two weeks ago, privacy advocates across the country celebrated as the Federal Bureau of Investigation backed off its request for Apple to help gain access to the iPhone of Syed Farook, one of the terrorists who killed 14 people in San Bernardino, Calif., in December.

On Friday, the F.B.I. again sought Apple’s assistance — this time to help crack an iPhone belonging to a convicted drug dealer — by requesting that a federal judge overturn an earlier decision in Brooklyn supporting Apple.

Apple had argued in part that it didn’t want iPhone hacking by the government to become routine. That the F.B.I. so soon wants access to another iPhone would seem to prove Apple right and to vindicate the company’s principled resistance, right? Wrong.

Apple’s decision not to help in the Farook case was ultimately bad for the company and its customers. Apple has lost leverage in legal cases, and the average iPhone user is significantly more vulnerable — both to government access and to criminal hacking — than if Apple had assisted the government in the first place.

The F.B.I. has already found a company able to access Mr. Farook’s phone without Apple’s assistance, presumably taking advantage of a vulnerability that Apple has either not yet identified or not yet patched. That means that the F.B.I. should be able to access information on similar iPhones. (Even though it may have the technical means to access the Brooklyn phone, the Justice Department argued in its letter to the court that “the government continues to require Apple’s assistance in accessing the data that it is authorized to search by warrant.”)

Apple is now asking the F.B.I. to “responsibly” disclose the vulnerability so that Apple can rapidly patch it. Apple’s request is predicated on a White House process, created in 2014, that explicitly requires the government to weigh the trade-off between the security benefits of eliminating a vulnerability in a piece of consumer software and the intelligence benefits of continuing access.

Relying on this White House process may not help much, though. First, as Ben Wittes of the Brookings Institution has pointed out, the F.B.I. has no legal obligation to disclose the vulnerability. While the White House made clear that its process was “biased toward responsibly disclosing” vulnerabilities, it also went out of its way to note that it would not “completely forgo this tool as a way to conduct intelligence collection.”

Moreover, it’s certainly possible, indeed likely, that the company that cracked the iPhone made nondisclosure a condition of giving the government this information; after all, finding vulnerabilities is presumably a moneymaking endeavor for the company (and thus, this vulnerability is conceivably available in the marketplace to the highest bidder), and voluntarily letting Apple fix it doesn’t seem like a great business plan.

While the F.B.I. has refused to say whether it will ultimately share the vulnerability with Apple (and Senator Dianne Feinstein, Democrat of California and vice chairwoman of the Senate Intelligence Committee, has suggested that it should not), Apple’s previous position is almost certain to make the government more likely to withhold the information. Apple has made it clear that it will fight the government’s efforts to get lawful access tooth and nail, and will implement as many technical measures as necessary to limit such access.

Given that, it wouldn’t be surprising if the government decided in favor of leaving this back door open so it could continue to access iPhones.

Why does all this matter? Simple: The longer it takes Apple to patch this vulnerability (either because the government discloses it to Apple or because Apple figures it out on its own), the longer iPhone user data is at risk — both from the government and from criminals.

So privacy advocates should carefully weigh the costs and benefits of Apple’s decision for both the company and its customers. If Apple had complied with the California court’s order, it could have used a method to obtain access that was known only to the company. The vulnerability would be safe, or at least safer (hackers could eventually re-create the method).

For those of us concerned with government overreach and privacy, there would be three clear benefits to Apple’s cooperation. First, by forcing the government to come to it in the first instance, Apple retains the ability to litigate court orders — assuming its position is not to refuse to cooperate no matter the particular facts — and thereby challenge the government in specific cases of possible overreach.

Second, Apple could choose to protect the method for lawful access as closely as it wants, including by storing it in the same way that it stores its source code or its software signing keys. Third, and perhaps most important, Apple would leave it to the courts to do their job and determine whether the F.B.I. has probable cause for a search.

Ultimately, the question is this: For lawful access to material important to terrorism investigations, would we rather trust Apple itself under the close supervision of the courts, or the F.B.I. and some private company that makes money selling cellphone hacks? The latter, which is where we clearly are, seems bad for the privacy and security of iPhone users around the globe.

Jamil N. Jaffer, an adjunct professor at the George Mason University School of Law, was an associate counsel to President George W. Bush. Daniel J. Rosenthal, an adjunct professor at the University of Maryland, was director for counterterrorism with the National Security Council under President Obama.

