Let's talk about a good use case for exploring the narrative of right and wrong: full self-driving.
In the field of automation, when it comes to full self-driving, who is responsible for the decisions the system makes? Let's talk a little about why this is a problem around the world.
FSD as a concept is great, but.
The truth is that applying full self-driving is not the problem; it is already deployed in a lot of places. It is also statistically safer to have cars driving in full self-driving mode than to have humans drive: the number one cause of accidents is human error. But there are ethical dilemmas.
I read about this in a book once. One example is the Uber self-driving car that hit a pedestrian in 2018; she later died in the hospital. It was the first pedestrian death involving a self-driving car. Prosecutors ruled that the company was not criminally liable for her death; the blame fell on the backup safety driver, who had reportedly been distracted by her phone.
This is a small example of how scary it is. If you read a news story about a distracted human driver hitting a pedestrian, you wouldn't think about it the next day.
Yet when a machine does it, your core instinct is stirred. The cause of this person's death was not another human being. We tolerate human error all the time, but a machine making the same fatal error scares us. That's the core reason for the problem.
An impossible decision
Let's think through the ethical dilemma of a full self-driving car that is about to have an accident. Say it's at a crossroads where a collision is unavoidable and the car has to make the best possible choice.
Humans usually make that call in a split second. An AI decides just as quickly, but it decides statistically, not emotionally.
So it could choose, for example, to save an elderly person rather than a young person. That is not a problem in itself, but some people might not like it, as the sketch below illustrates.
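To make "statistically, not emotionally" concrete, here is a minimal sketch of a planner that scores each feasible maneuver by expected harm and picks the minimum. Everything in it, from the Maneuver class to the probabilities to the equal weighting of lives, is a hypothetical illustration, not any real FSD stack's actual policy.

```python
# Minimal sketch: pick the maneuver with the lowest expected harm.
# All names, numbers, and scenarios here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    # Probability that each person affected by this maneuver is fatally
    # injured if it is taken (hypothetical numbers).
    fatality_probs: list[float]


def expected_harm(m: Maneuver) -> float:
    # Weight every life equally and sum the expected fatalities.
    return sum(m.fatality_probs)


def choose(maneuvers: list[Maneuver]) -> Maneuver:
    # The "decision" is nothing more than an arg-min over the scores.
    return min(maneuvers, key=expected_harm)


# The crossroads dilemma in toy form: each swerve endangers one person.
options = [
    Maneuver("swerve_left", fatality_probs=[0.9]),   # toward the elderly pedestrian
    Maneuver("swerve_right", fatality_probs=[0.7]),  # toward the young pedestrian
]
print(choose(options).name)  # "swerve_right": the numbers decide, not morals
```

Notice that the entire moral controversy lives inside expected_harm: weight lives by age and the car "prefers" one group; weight them equally and it simply drives toward whichever option the numbers favor. Real systems plan over continuous trajectories and noisy perception, but the statistical character of the decision is the same.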
There was a research project called the Moral Machine that surveyed how people react to such situations. Preferences varied by culture: the tendency to spare the young over the elderly was much weaker in Eastern countries than in Western ones, while sparing women over men was preferred in most cultures.
This is mainly due to the cultural differences we have. There's no right or wrong in this sense.
So the truth is that people have biases and artificial intelligence does not share them. Faced with one child versus a group of 60-year-olds, the machine would generally make the numerically rational decision, the one that minimizes expected fatalities as in the sketch above: save the group of 60-year-olds and drive toward the child.
But you'll see it in the news the next day: a machine is responsible for the death of a child. It's scary, and for that reason it's a problem in many nations.
But it's working.
Yet full self-driving is thriving in the US. I believe China has also signed agreements along these lines and will follow.
A good use case is Waymo. It's essentially a robo-taxi operation, and they have started service in Phoenix and San Francisco. Quoting them: "we've begun fully autonomous freeway operations in Phoenix and San Francisco. These efforts have enabled us to provide over 100,000 paid weekly trips — a tenfold increase from last year."
So it is growing, and more people are accepting full self-driving. I personally don't think I would have a problem riding a robo-taxi, because I would be curious. If it does the job well, I'll keep trying it out. If I have a negative experience, I'll probably reconsider.
But the problem comes with global scaling, and this is the core issue for full self-driving. Cultures are different and diverse, and putting FSD vehicles everywhere in the world sounds utopian right now.
One question we will keep coming back to is who makes the driving decisions and who is responsible for them.