Quote Originally Posted by Matticus
However, this was intended as more of a philosophical discussion than a technical one.
I think the only possible framework that would allow an AI to reason through and act on a problem as you described is to make the AI a first-class citizen with the same rights and responsibilities as all other humans. Note that in many of the situations you described, you can still be charged criminally regardless of your choice. And convicted. The ethical conundrum is only an ethical conundrum. Its applicability in real life is limited, and most of its practical manifestations will occur under less controlled circumstances, like the driver who has to make a split-second decision about where to swerve, without any certainty as to the outcome.

For an AI to be able to reason through a similar case, there would have to be laws in place that regulated trolley problems, laws that both humans and AIs had to obey. Even without such laws (and given the extreme unlikelihood of such laws ever existing), a human has long since learned to understand the problem of facing the consequences of his actions. For an AI to do the same, it would have to be held accountable in the same way humans are. And for that accountability to be on par with human accountability, an AI would have to share other properties with humans, like fear, pain, and love. Otherwise there would be no point in sending a machine to prison for three years for involuntary manslaughter.