What will it take for humans to trust self-driving cars?
Popular Science On March 18, 2018, Elaine Herzberg, 49, was crossing a road in Tempe, Arizona, when a Volvo SUV traveling at 39 miles per hour hit and killed her. Although she was one of thousands of U.S. pedestrians killed by vehicles every year, one distinctive—and highly modern—aspect set her death apart: Nobody was driving that Volvo. A computer was.

A fatality caused by a self-driving car might not be more tragic than any other, but it does encourage the wariness many of us feel about technology making life-and-death decisions. Twelve months later, a survey by AAA revealed that 71 percent of Americans were too scared to zip around in a totally autonomous ride—an eight-percentage-point increase from a similar poll taken before Herzberg’s death.

Self-driving cars are already cruising our streets, their spinning lasers and other sensors scanning the world around them. Some are from big companies such as Waymo—part of Google’s parent conglomerate Alphabet—or General Motors, while others are the work of outfits you might not have heard of, including Drive.ai or Aptiv. (Uber operated the Volvo involved in Arizona’s fatal crash and took its self-driving cars off the roads for about nine months afterward.)

But what makes some of us so wary of these robotic chauffeurs, and how can they earn our trust? To understand these questions, it first helps to consider what psychologists call the theory of mind. Put simply, it’s the recognition that other people have brains in their heads that are busy thinking, just like ours (usually) are. The theory comes in handy on the road. Before we venture into a crosswalk, we might first make eye contact with a driver and then think, He sees me, so I’m safe, or He doesn’t, so I’m not. It’s a technique we likely use more than we realize, both behind the wheel and on our feet. “We know how other people are going to act because we know how we would act,” explains Azim Shariff, an associate professor of psychology at the University of British Columbia, who has written about this issue in the journal Nature Human Behaviour.

But you can’t make eye contact with an algorithm. Autonomous cars generally have backup humans ready to take control if necessary, but when the car is in self-driving mode, the computer’s in charge. “We’re going to have to learn a theory of the machine mind,” Shariff says.

What that means in practice is that self-driving cars will need to provide clear signals—and not just turn signals—to let the public know what that machine mind is planning. One solution comes from Drive.ai, a company running self-driving vans in Texas. The bright-orange-and-blue vehicles have LED signs on all four sides that respond to the environment with messages. They can tell a pedestrian who wants to cross in front of the car, “Waiting for You.” Or they can warn them: “Going Now/Please Wait.” A related strategy is intended for passengers, not pedestrians: Screens in Waymo vehicles show car occupants a simple, animated version of what the autonomous vehicle is seeing. Those displays can also show what the car is doing, like if it’s pausing to allow a human to cross.

“Trust is the willingness to make yourself vulnerable to somebody else,” Shariff says. “We engage in it because we can pretty easily predict what the other person will do.” All of which means that if the cars are predictable and do what they say they will do, people will be more likely to trust them. Sound familiar?
Communicating with the machine mind is important, but that doesn’t mean we want it to mimic exactly how humans think and act while driving. In fact, the promise of traveling by autonomous car is that silicon brains won’t do dumb things such as text and drive, or drink and drive, or rocket down the highway while upset after a breakup. (Cars don’t date.) “I believe th