Tesla’s Autopilot Could Save the Lives of Millions, But It Will Kill Some People First

The complicated ethics of Elon Musk’s grand autonomous vehicle experiment.
A nonfatal crash in Laguna Beach, Calif., in May 2018, involving a Tesla in Autopilot mode and an unoccupied police cruiser.

Source: Laguna Beach Police Department/AP

On the last day of his life, Jeremy Banner woke before dawn for his morning commute. He climbed into his red Tesla Model 3 and headed south along the fringes of the Florida Everglades. Swamps and cropland whizzed past in a green blur.

Banner tapped a lever on the steering column, and a soft chime sounded. He’d activated the most complex and controversial auto-safety feature on the market: Tesla Autopilot, a computer system that handles much of the work of normal highway driving with little input from the driver. When the computer is in control, the car can speed up, change lanes, take exits, and—if it spots an obstacle ahead—hit the brakes.