Moral dilemmas were safely confined to philosophy for millennia. With “thinking machines”, at least one of them has escaped that safe domain. At the moment, every manufacturer developing self-driving cars faces the moral question that underlies almost every moral dilemma:
Would you decide to kill one person (a child, for instance) for certain, if the only other option was to most probably (but not necessarily) kill dozens?
The shift from personal experience and philosophy to technology lies in the fact that once you place a decision algorithm in a machine as an agent, you cannot avoid deciding a priori. In human reality, we as agents procrastinate until the last moment, trusting that our intuition will somehow “decide” instead of our reason, which avoids making such decisions. With self-driving cars, a human agent has to decide in advance, since we understand machines as agents that act only upon pre-designed, implanted instructions.
Mercedes has already decided that its drivers will be at the top of the priority list when lives are to be saved. Which makes sense, of course. But it makes sense only in situations where “the machine” can distinguish the safety of the passengers from the safety of everyone else. Such a decision will be futile in cases where the situation clearly presents no threat to the passengers but does to two pedestrians, or to one pedestrian and one cyclist, or to the driver of another car and a pedestrian, and so on.
What is quite clear from the present state of algorithm development is that the algorithm would have to cover all possible situations. “The machine” would have to decide upon a weighted evaluation of an enormously large number of different situations. Should it kill a 50-year-old man before a 50-year-old woman? Is an 80-year-old woman worth less than a 20-year-old student? What if that student is seriously ill and will die within six months? Is an 11-year-old child worth more or less than a 12-year-old child? What about a 30-year-old couple compared to an 18-year-old junkie?
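To see why such a weighted evaluation only relocates the dilemma rather than solving it, here is a minimal toy sketch. Every name, weight, and formula below is a hypothetical assumption for illustration, not any manufacturer's actual policy: the point is that any such function reduces the moral question to numbers a human programmer chose in advance.

```python
# Toy sketch of a "weighted evaluation" of victims. All weights here are
# hypothetical assumptions; each number encodes a moral judgement made by
# the programmer, which is exactly the problem the text describes.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Person:
    age: int
    role: str  # "passenger", "pedestrian", "cyclist", ...

def weight(p: Person) -> float:
    """Human-devised moral weight of one potential victim (arbitrary)."""
    score = 1.0
    if p.role == "passenger":
        score *= 1.5  # an assumed passenger-first priority, Mercedes-style
    score *= max(0.1, (90 - p.age) / 90)  # crude "years of life left" proxy
    return score

def choose(option_a: list[Person], option_b: list[Person]) -> str:
    """Pick the option whose victims carry the LOWER total weight."""
    cost_a = sum(weight(p) for p in option_a)
    cost_b = sum(weight(p) for p in option_b)
    return "A" if cost_a < cost_b else "B"

# One child versus two elderly pedestrians: the function answers instantly,
# but the answer is only as defensible as the arbitrary numbers above.
child = [Person(11, "pedestrian")]
elderly = [Person(80, "pedestrian"), Person(82, "pedestrian")]
print(choose(child, elderly))

# The situation space explodes: even a crude grid of ages, roles and group
# sizes yields more case pairs than any hand-written table could cover.
ages, roles, sizes = range(0, 91, 5), ["passenger", "pedestrian", "cyclist"], range(1, 6)
profiles = list(product(ages, roles, sizes))
print(len(profiles) ** 2)  # number of pairwise comparisons of crude profiles
```

Even this caricature already forces answers to the questions above (it spares the child here only because of the age factor), and the combinatorial count shows why enumerating situations by hand cannot scale.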
And there are other dilemmas, like the one just staged on German TV. Is Lars Koch guilty (or not) of killing 164 passengers in an aircraft in order to avoid the only other option, killing 70,000 people at the Allianz Arena watching a football game between Germany and England? Germany decided that he was not guilty. But this dilemma was comparatively easy, since the passengers would have died anyway.
What is quite apparent is that this moral dilemma is unsolvable through technology as long as technology works as a non-agent, as something that “decides” entirely by an algorithm developed and uploaded by a human agent, for it is in principle impossible to encode all possible situations in that algorithm. Not even a machine with the ability to learn is an option: it would merely enlarge its base of possible situations, while still resting on weights derived from one or another moral-dilemma solution devised by humans.
To overcome this gap, the “thinking machine”, the “rational machine”, would have to become a moral machine at the same time. That means such a machine should be able not only to understand a moral dilemma but to feel it. And we all know what makes us humans really feel a moral dilemma: the fear of death. So a machine that runs a self-driving car should not only be able to die; it should be conscious of that possibility and afraid of it. Such a machine would be a negative zombie: it would not look like a human, but it would feel like a human.
It is thus quite clear that such machines could only evolve if matter can really become alive and conscious through emergence, the underlying principle of evolution as we understand it at the present moment.
Should that not be possible, then we have an unsolvable problem not only in the development of self-driving cars but also in the present concept of evolution and in present concepts of how consciousness emerges from brain activity, for instance. For if the emergence of consciousness and life rests on sufficient complexity alone (the materialistic solution of Darwin, Dawkins, Dennett, etc.), then such thinking and feeling machines are possible. If not, then not only the car industry but materialism and evolutionary theory have an unsolvable problem.