I can’t wait for autonomous cars. There are tons of reasons I feel this way, but I’ll be honest and tell you it’s mostly because I am a terrible driver. Ask any of my friends, and they will vouch for the fact that riding in the car with me is essentially putting your life at risk.
I’m not erratic or distracted, and it’s not because I’m texting and driving.
I’m just not a good driver.
For myself and people like me, autonomous cars will be phenomenal for several reasons. For one thing, autonomous vehicles could ease traffic congestion and drastically reduce pollution.
The most important benefit of driverless technology, however, is improved safety on the road.
Human error is responsible for an estimated 94% of all vehicle collisions.
In the United States, those collisions cause more than 33,000 deaths every year.
(That’s equivalent to a 747 crashing once a week for an entire year.)
Even if you are a good driver, autonomous vehicles are still safer. While I was researching a previous article on autonomous vehicles, I got into a conversation with a friend about the extent of human error in driving. My friend put it this way:
“Unless you’re Ayrton Senna, Ayrton Senna, or Ayrton Senna, you are not a better driver than autonomous cars.”
Driverless cars are equipped with an array of cameras, radars, and sensors: long range, short range, infrared, stereo optical, ultrasonic, and more.
This technology gives driverless cars constant awareness of their surroundings — up to 600 feet in all directions.
No human is capable of this.
It's a big part of why experts believe we could see a 90% reduction in auto accident rates.
The Google car has driven millions of miles with 10 accidents, all of which were caused by other drivers on the road.
Those accidents were all minor fender-benders, with minimal (if any) damage.
Autonomous vehicles are programmed to respond to most unexpected situations — like another driver cutting you off, abruptly braking, or running a red light.
Who knows? When the day arrives that everyone is in an autonomous car, those situations might just be a distant memory.
But what about those situations where damage and loss of life is unavoidable?
Just a few months ago, MIT Technology Review highlighted a study titled “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?”
In the study, researchers highlight three potential scenarios:
Scenario A: The car can stay on course and kill several pedestrians or swerve and kill one passerby.
Scenario B: The car can stay on course and kill one pedestrian or swerve and kill its own occupant.
Scenario C: The car can stay on course and kill several pedestrians or swerve and kill its own occupant.
The algorithms behind autonomous vehicles are designed to “reduce the death toll” in cases of unavoidable accidents. So in Scenario C above, the vehicle would choose to sacrifice itself and its occupant in order to spare the larger group.
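To make that logic concrete, here is a minimal sketch in Python of what a purely utilitarian “reduce the death toll” rule could look like. The function name, the maneuver labels, and the casualty counts are hypothetical illustrations for this post, not anything a manufacturer has published.

```python
# Hypothetical, purely utilitarian decision rule: given the predicted
# death toll for each available maneuver, pick the maneuver that kills
# the fewest people -- even if the car's own occupant is among them.

def choose_action(options):
    """options maps each maneuver to its predicted number of deaths."""
    return min(options, key=options.get)

# Scenario C: stay on course and kill several pedestrians,
# or swerve and kill the car's own occupant.
scenario_c = {"stay_on_course": 5, "swerve_into_barrier": 1}

print(choose_action(scenario_c))  # -> swerve_into_barrier
```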
There are thousands of scenarios like these, and they force us to question whether logic and rationality alone are really the best guides.
Based on MIT’s study, the majority of respondents had the moral fortitude to support “minimizing an accident’s death toll.”
It’s easy to say, “Sure, reducing loss of life is a good idea” when you hear it in a sentence or you’re filling out a form.
Absolutely, it makes logical sense to sacrifice one person in order to save the lives of 20 people.
(Personally, my decision might vary depending on whether those 20 people are a clan of neo-Nazis vs. a preschool class.)
But what if “reducing loss of life” means self-destruction?
What if it means terminating the driver and occupants?
What if the driver is you? What if the “occupants” are your children?
I don’t have the answers to these questions, and I can’t say what my emergency reflexes would tell me to do in any of the above situations.
That’s why we call them “accidents.” It’s why we call them tragedies. They cannot be avoided.
Another scenario is what experts call the “Tunnel Problem”:
“You are travelling along a single lane mountain road in an autonomous car that is approaching a narrow tunnel. You are the only passenger of the car. Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has only two options: continue straight, thereby hitting and killing the child, or swerve, thereby colliding into the wall on either side of the tunnel and killing you.”
In this situation, the death toll will be the same no matter how the driver responds. One life will end.
Should the reaction be based in survival tactics or altruism?
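Feed the Tunnel Problem into the same hypothetical death-toll rule sketched above and its limitation shows up immediately: both maneuvers predict exactly one death, so the arithmetic cannot tell the car whether to protect its occupant or the child. Whatever it does comes down to a tie-breaking rule someone has to choose, and that choice is exactly the value judgment in question.

```python
# Same hypothetical utilitarian rule as above, applied to the Tunnel Problem.
def choose_action(options):
    return min(options, key=options.get)

# One death either way: the numbers alone cannot break the tie.
tunnel_problem = {"continue_straight": 1, "swerve_into_wall": 1}

# min() simply returns whichever tied option happens to come first --
# an arbitrary outcome, not an ethical decision.
print(choose_action(tunnel_problem))
```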
For autonomous car manufacturers, settling on answers to these scenarios has proven difficult.
Unlike humans, computers can be programmed to make utilitarian decisions, and those decisions are instantaneous and calculated. The ultimate question for autonomous vehicles is whether to apply that calculation to matters of life and death.
Even further, who should have the power to decide the appropriate course of action?
Manufacturers and developers now worry about whether autonomous vehicle technology will be marketable at all, especially if customers know that the vehicle might be programmed to sacrifice its own occupants.
“It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear.”
In order to really find a solution to the “Machines and Morality” question, we need to look at larger efforts to make artificial intelligence “more human.”
Google has already acquired DeepMind, a British firm dedicated to creating software that helps computers think and react more like humans. One of DeepMind's founders called artificial intelligence the “number one risk for this century,” believing that unless controlled, it will play a central role in human extinction.
Earlier this year, researchers from Google and DeepMind joined Elon Musk, Stephen Hawking, and 150 other AI experts in signing an open letter urging caution in the development of artificial intelligence. Musk has gone so far as to compare building autonomous thinking machines to “summoning a demon.”
(Statements like that make me want to throw my iPhone out the window and douse it in holy water.)
However flawed we may be, we humans have a distinct tool that makes us stand out from the fleet of autonomous vehicles that should be hitting our roads in the next few years: a moral compass.
Each of us is equipped with a sense of right and wrong — a set of ethics that helps us make decisions in extreme or emergency situations.
Ultimately, these are issues of human values and feelings that machines are not capable of experiencing.
It’s something that no algorithm can solve… yet.