Tesla is set to face a class-action lawsuit alleging that the automaker's Model S and Model X are prone to sudden, unintended acceleration (SUA). The case has also raised fresh questions about how a self-driving car should handle a crash dilemma.
According to a report by Ars Technica, the Silicon Valley company is now the defendant in a class-action lawsuit alleging that its vehicles are prone to SUA. In particular, the EV maker's Model X and Model S vehicles allegedly accelerated into garages or walls, whether through human or computer error. At least 23 such accounts of unintended acceleration are on record with the National Highway Traffic Safety Administration (NHTSA).
The lawsuit claims that 13 of the reports to NHTSA came from Model S drivers who experienced full-power acceleration while their vehicles were parked or traveling at low speed. The Model X was involved in another 13 nearly identical instances of sudden acceleration, ten of which ended in a crash.
The automaker contends that all 23 accounts involved human error, that it has no "legal duty to design a failsafe car," and that its cars are not defective in any way. It also argues that an algorithm eliminating full-throttle acceleration into fixed objects is something "no manufacturer has ever done" and is not covered by its warranty.
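To make the disputed idea concrete, here is a minimal sketch of what such an algorithm could look like. It is purely illustrative and not Tesla's control logic: the function name, thresholds, and sensor inputs (pedal position, speed, and a forward obstacle distance) are all assumptions for this example.

```python
# Hypothetical sketch only -- not Tesla's actual control logic. It illustrates
# the kind of "no full throttle into a fixed object" check described in the
# lawsuit, assuming the car exposes pedal position, speed, and a sensor-reported
# distance to the nearest obstacle ahead.

from typing import Optional

SAFE_DISTANCE_M = 3.0  # assumed threshold for "fixed object directly ahead"
LOW_SPEED_MPS = 2.0    # assumed parking/creeping speed threshold


def limit_throttle(pedal_request: float, speed_mps: float,
                   obstacle_distance_m: Optional[float]) -> float:
    """Return a throttle command in 0.0-1.0, possibly reduced for safety."""
    # No obstacle detected: pass the driver's request through unchanged.
    if obstacle_distance_m is None:
        return pedal_request

    # A fixed object is very close and the car is parked or creeping:
    # cap the throttle so a floored pedal cannot launch the car into it.
    if obstacle_distance_m < SAFE_DISTANCE_M and speed_mps < LOW_SPEED_MPS:
        return min(pedal_request, 0.1)

    return pedal_request


# Example: the pedal is floored while parked 1.5 m from a garage wall.
print(limit_throttle(pedal_request=1.0, speed_mps=0.0, obstacle_distance_m=1.5))  # 0.1
```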
A recent Forbes article expounded on Tesla's response, arguing that the lawsuit offers an important clue to how a self-driving car would handle the "trolley problem" in ethics. The trolley problem asks: if a runaway streetcar is hurtling toward five unsuspecting people, is it ethical to divert it onto another track where it will kill only one worker, or is it better to do nothing?
The article restated the question in terms of an autopilot:
"Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake. Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car. Did your robot car make the right decision?"
Both choices can be defended. A self-driving car could be programmed to take control and trade a larger accident for a smaller one, yet doing nothing is not obviously immoral either.
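For illustration only, here is a minimal sketch of how that policy choice might look in code. The option names, harm estimates, and the allow_override flag are invented assumptions for this example, not anything Tesla or the Forbes article describes.

```python
# Hypothetical sketch: one way a planner could encode "take control to trade a
# bigger accident for a smaller one" versus "do nothing". The option names and
# harm estimates are invented for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class Option:
    name: str
    expected_harm: int      # rough estimate, e.g. number of people likely hit
    overrides_driver: bool  # does this option take control away from the human?


def choose(options: List[Option], allow_override: bool) -> Option:
    """Pick the lowest-harm option, optionally excluding any that override the driver."""
    candidates = [o for o in options if allow_override or not o.overrides_driver]
    return min(candidates, key=lambda o: o.expected_harm)


stay_course = Option("do nothing", expected_harm=5, overrides_driver=False)
swerve = Option("swerve right", expected_harm=1, overrides_driver=True)

print(choose([stay_course, swerve], allow_override=True).name)   # swerve right
print(choose([stay_course, swerve], allow_override=False).name)  # do nothing
```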
Taking Tesla's response to the lawsuit into account, automakers have no legal duty to design a failsafe car, and the company headed by Elon Musk argues it is under no obligation to build an algorithm that prevents the car from driving into a fixed object while a human is in control. Self-driving cars may soon be commonplace, but they are not meant to replace human drivers. When a human takes the wheel, the autopilot cannot, and arguably should not, override what the driver is doing. Otherwise, that would be the beginning of the rise of the machines.
Tesla is doing what no other car manufacturer has done before. It may well win the self-driving car race, but in doing so, the company has already made it clear that its vehicles will not be failsafe. What do you think about Tesla's argument? Should self-driving cars be able to override a human driver in order to avoid greater harm? Share your thoughts and comments below.