
Are you into technology? If you are, I’ve got a great read for you. Chances are you’ve heard about autonomous, or self-driving, cars: vehicles that don’t require a human driver.

This technology has been developing slowly but surely in recent years, with companies like Tesla making impressive progress. The idea of an autonomous vehicle still feels like something out of the movies, but the future may be closer than you think.

Toyota, Huawei, Tesla, General Motors, Ford, Mercedes-Benz – these are just some of the big names working on the development and perfection of this technology. They have invested millions of dollars into making it work, but so far the results have been mixed.

Self-driving vehicles are now being tested on public streets, and many of those runs have been successful. Some, however, didn’t go as planned. For example, an Uber self-driving car struck and killed a pedestrian earlier this year in Arizona. It was the first reported fatal crash involving an autonomous vehicle and a pedestrian in the U.S., and it provoked a strong public response.

There have also been reports of angry Californians attacking self-driving cars over their behavior on the road.

As this technology progresses and is adopted by more drivers, we need to face the fact that a self-driving car may end up in an emergency where the computer has to decide how to act. In some cases, harm to humans will be unavoidable, and the car will have to determine the outcome of the situation by itself.

What should it do if the loss of a human life is unavoidable?

Sure, humans can answer such ethical questions using intuition, but doing so isn’t nearly as simple for a machine.
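To see why, note that a car’s software must ultimately return one concrete action, which forces every trade-off to be spelled out as numbers. Here is a minimal, purely hypothetical Python sketch (the harm weights are invented for illustration and are not any automaker’s actual logic); choosing those numbers is precisely the ethical question a human answers by intuition:

```python
# Hypothetical sketch of an unavoidable-collision chooser.
# The expected_harm values are invented for illustration;
# assigning them is exactly the hard ethical question.

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str            # human-readable description of the maneuver
    expected_harm: float  # someone must quantify "harm" up front

def choose_maneuver(options: list) -> Outcome:
    """Return the option with the lowest expected harm.

    The math here is trivial; the ethics live entirely in
    how expected_harm was computed for each option.
    """
    return min(options, key=lambda o: o.expected_harm)

options = [
    Outcome("swerve left, risking occupant injury", expected_harm=0.4),
    Outcome("brake straight, risking pedestrian injury", expected_harm=0.7),
]
print(choose_maneuver(options).label)
```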

The Moral Machine Project

Clearly, humans still reason and make better decisions in complex situations than machines do, so it makes perfect sense to collect human feedback and teach machines to act more like us. That was the main goal of the Moral Machine project, recently completed by MIT.

The project asked participants to judge specific road situations, including fictional car crashes. To identify how humans would act in these emergencies, it collected responses along nine distinct factors, including preferences for sparing males or females, the young or the elderly, law-abiding pedestrians or jaywalkers, more lives or fewer, and others.

According to the researchers, the experiment has been running since 2016 and has gathered 40 million decisions, in 10 languages, from people in 233 countries and territories. The participants’ answers revealed a number of clear preferences.
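As a rough illustration of how such pairwise answers can be summarized (the study itself used more careful statistical methods; the Python below is only a toy sketch with invented data), one can count how often respondents chose to spare each attribute:

```python
# Toy sketch: summarizing Moral Machine-style pairwise choices.
# Every response records which side of a dilemma was spared.
# The data and attribute names here are invented for illustration.

from collections import Counter

# Each tuple: (attribute of the spared group, attribute of the sacrificed group)
responses = [
    ("human", "animal"),
    ("human", "animal"),
    ("young", "elderly"),
    ("more_lives", "fewer_lives"),
    ("young", "elderly"),
    ("animal", "human"),  # a minority choice
]

spared = Counter(s for s, _ in responses)
faced = Counter()  # how often each attribute appeared in a dilemma at all
for s, x in responses:
    faced[s] += 1
    faced[x] += 1

# Spare rate: the fraction of appearances in which the attribute was spared.
for attr in sorted(faced):
    print(f"{attr}: spared {spared[attr] / faced[attr]:.0%} of the time")
```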

For example, most participants thought that autonomous vehicles should be trained to prioritize humans over animals, the young over the elderly, and more lives over fewer. Some of the scenarios presented highly complex ethical dilemmas, including the one below.

Source: Moral Machine Project 
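If developers ever did translate findings like these into explicit rules (to be clear, neither the project nor any automaker has proposed this; the sketch below is hypothetical), one naive encoding would be a lexicographic priority, sparing humans before animals, more lives before fewer, and the young before the old:

```python
# Hypothetical sketch: a lexicographic encoding of the survey's
# three strongest preferences. The Group fields are invented here.

from dataclasses import dataclass

@dataclass
class Group:
    is_human: bool
    size: int
    mean_age: float

def spare_priority(g: Group) -> tuple:
    """Higher tuples get spared first: humans over animals,
    more lives over fewer, younger over older."""
    return (g.is_human, g.size, -g.mean_age)

pedestrians = Group(is_human=True, size=2, mean_age=30.0)
pets = Group(is_human=False, size=3, mean_age=5.0)

spared = max([pedestrians, pets], key=spare_priority)
print("spared:", "pedestrians" if spared is pedestrians else "pets")
```

A real system would face far messier inputs, of course, but even this toy shows how quickly hard value judgments get baked into code.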

Although the results of this project (which, by the way, were recently published in the journal Nature) may not be directly used by the developers of self-driving vehicles, they’re still important to have because they show how humans would act in these situations. “Hopefully, one day they will contribute to developing socially acceptable principles of machine ethics and the software for self-driving vehicles,” says Tom Meeks, a translator from The Word Point and a participant in the Moral Machine project. “Obviously, it’s the responsibility of humans to teach autonomous systems to act in a certain way in case of a driving emergency.”

Teaching Machines to Make Ethical Decisions

Since so many automakers are involved in the development of autonomous vehicles, chances are their test vehicles are already encountering complex ethical situations on the road. So far, officials representing these companies have either avoided the question or made controversial statements.

For example, Mercedes-Benz frustrated and even angered a lot of people with a statement that its autonomous vehicles would prioritize the safety of occupants over pedestrians.

“All of Mercedes-Benz’s future Level 4 and Level 5 autonomous cars will prioritize saving the people they carry,” Car and Driver quoted Christoph von Hugo, the German automaker’s manager of driver assistance systems and active safety, as saying.

The company quickly realized its mistake, and a second statement was released shortly after the first:

“There is no instance in which we’ve made a decision in favor of vehicle occupants. We continue to adhere to the principle of providing the highest possible level of safety for all road users.”

While the European Union is still planning its regulations for autonomous vehicles, the U.S. Department of Transportation issued its first-ever rules two years ago. The document, titled Federal Automated Vehicles Policy, can be found on the department’s website.

The policy covers a wide range of concerns related to self-driving cars, including potential ethical issues. The Department of Transportation requires automakers to clearly explain the design of their self-driving vehicles and how those vehicles operate on the road. If the department suspects a defect in a design, or decides that it presents an unreasonable risk to safety, it has the authority to prohibit the vehicles from being tested and driven on public roads.

The policy contains a separate section called Ethical Considerations, which describes the related concerns and requirements. There, the Department of Transportation calls on automakers to work cooperatively with regulators and other stakeholders, such as road users, so that ethical dilemmas are addressed and ethical decisions are made consciously and intentionally.

The policy also emphasizes the importance of creating machine learning algorithms transparently, with feedback from federal and state regulators and from road users such as pedestrians, drivers, and passengers.

It is difficult to overstate the importance of such legislation for shaping how self-driving cars will decide in complex ethical dilemmas; the vehicles need to be as prepared as possible to minimize harm and make ethically sound decisions. With the U.S. emerging as the world leader in such legislation, China and the EU are trying to catch up. For example, at the Financial Times’ Future of the Car Summit in London earlier this year, the European Commissioner for Transport announced plans to assemble a team of ethics experts to work through some of the dilemmas that autonomous driving systems may face on the road.

Final Thoughts

Researchers and automakers around the world have made tremendous progress in advancing self-driving technology. Clearly, we should be prepared for such an important technological leap, because it carries significant risks for drivers, passengers, pedestrians, and other vulnerable road users.

The preparation is clearly underway, with the U.S. leading the way in defining how autonomous vehicles should behave in case the unthinkable happens. Of course, deciding how a car should act, and who it should save, in certain road incidents is far from easy, but it’s something decision makers and the public around the world must do.

At this point, both the legislation and the autonomous driving industry are still in their infancy, but with so many carmakers actively involved, it’s safe to assume that we’re on the verge of a self-driving technology revolution. That’s why it’s critical to create regulations and laws now and make autonomous cars act more like humans in complex emergency situations on the road.
