Thread: Self-Driving Cars and Ethics

  1. #1
    Registered User
    Join Date
    Jun 2011
    Posts
    4,513

    Self-Driving Cars and Ethics

    I recently read an interesting article about self-driving cars and ethics.

    As self-driving car technology continues to be developed, important ethical questions are raised. For instance, at some point, a variation of the "trolley problem" thought experiment will become a statistically inevitable reality. The computer in such a machine will have to make a decision where any outcome may result in the injury or death of one or more people.

    Let's assume that some time in the future, this technology proliferates and is in common use. A car is speeding along in autopilot mode, when a careless pedestrian steps directly in front of the vehicle. There is no place for the car to safely move - there are objects on both sides. If the car swerves to avoid the pedestrian, it will crash and the driver will be killed. If the car does not swerve, the pedestrian will be struck and killed. How should such a decision be made?

    Should the computer take into account that the pedestrian is at fault, and allow them to be struck?

    What if there are two pedestrians and one person in the car? Should the computer sacrifice the driver since this choice would ultimately save two lives while costing only one? Would the driver even be aware beforehand that such a rule existed?

    What if the computer could not fully determine the cost of swerving the car? For instance, in swerving to save two pedestrians, the car has a head-on collision with another vehicle, resulting in even more deaths.

    What if there are two (adult) pedestrians, and two people in the car (one adult and one small child)? Should the life of a small child have any bearing on the ultimate decision made by the computer?

    What other potentially devastating decisions could such a machine be faced with?

    I have always been fascinated by ethical quandaries, and have used the trolley problem myself to spark discussion amongst friends in the past. While considering these problems from a human perspective is interesting, I wonder if considering them from a programming perspective is as interesting to anyone else here.

  2. #2
    Programming Wraith GReaper
    Join Date
    Apr 2009
    Location
    Greece
    Posts
    2,738
    Since we're already assuming, I'll go ahead and assume that future self-driving cars will be safe enough for their passengers to survive a collision.
    Devoted my life to programming...

  3. #3
    Registered User
    Join Date
    Oct 2006
    Posts
    3,445
    Quote Originally Posted by GReaper
    Since we're already assuming, I'll go ahead and assume that future self-driving cars will be safe enough for their passengers to survive a collision.
    Makes me think of the movie "Demolition Man," in which the cars would fill with foam on impact.
    What can this strange device be?
    When I touch it, it gives forth a sound
    It's got wires that vibrate and give music
    What can this thing be that I found?

  4. #4
    (?<!re)tired Mario F.
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Self-driving cars are an anomaly, I think. The technology itself is well reasoned in that self-driving cars aren't really self-driving, but rely instead on humans to make almost all decisions. Self-driving cars are really auto-pilot cars. The technology, however, is being presented as some form of self-driving on the false assumption that we have mastered AI development. There's a gap between what the technology really offers and what is being said about it. So it is very likely that (barring a world of fully automated roads) you won't get to witness those types of quandaries in your lifetime. Allowing an on-board computer to process and resolve a collision, when not even our best aeronautical computers do so, is not going to happen until we move past the seemingly impassable barrier of weak AI.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  5. #5
    Registered User
    Join Date
    Jun 2011
    Posts
    4,513
    Quote Originally Posted by Mario F.
    Self-driving cars are an anomaly, I think. The technology itself is well reasoned in that self-driving cars aren't really self-driving, but rely instead on humans to make almost all decisions.
    While proliferation of fully autonomous vehicles may not happen for a long time (if ever), the technology already exists (in its infancy) to self-drive vehicles with very little human interaction.

    However, this was intended as more of a philosophical discussion, rather than a technical one.

  6. #6
    (?<!re)tired Mario F.
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by Matticus
    However, this was intended as more of a philosophical discussion, rather than a technical one.
    I think the only possible background that would allow an AI to reason through and act on a problem as you described is to make the AI a first-class citizen with the same rights and responsibilities as all other humans. Note that in many situations like the ones you described, regardless of your choice, you can still be charged criminally. And convicted. The ethical conundrum is only an ethical conundrum. Its applicability in real life is limited, and most of its practical manifestations will occur under less controlled circumstances, like the driver who needs to make a split-second decision about where to swerve, without any certainty as to the outcome.

    For an AI to be able to reason through a similar case, there would have to be laws in place that regulated trolley problems: laws that both humans and AIs had to obey. Even without such laws (and such laws are extremely unlikely to ever exist), a human has long since learned to face the consequences of his actions. For an AI to do the same, it would have to be held accountable just as humans are. And for that accountability to be on par with human accountability, an AI would have to share other properties with humans, like fear, pain, love, etc. Otherwise there would be no point in sending a machine to prison for 3 years for involuntary manslaughter.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  7. #7
    Registered User
    Join Date
    Jun 2011
    Posts
    4,513
    Quote Originally Posted by Mario F.
    For an AI to be able to reason through a similar case, there would have to be laws in place that regulated trolley problems: laws that both humans and AIs had to obey. Even without such laws (and such laws are extremely unlikely to ever exist), a human has long since learned to face the consequences of his actions. For an AI to do the same, it would have to be held accountable just as humans are. And for that accountability to be on par with human accountability, an AI would have to share other properties with humans, like fear, pain, love, etc. Otherwise there would be no point in sending a machine to prison for 3 years for involuntary manslaughter.
    It's not a matter of a computer making decisions on the fly based on replication of human emotions and experience. It's about receiving data from its immediate environment and following pre-programmed actions depending on that data. While still extremely complex, it does not require AI way beyond the current capabilities of technology.

    For example (source):

    The video above shows the results. At one point you can see the car stopping at an intersection. After the light turns green, the car starts a left turn, but there are pedestrians crossing. No problem: It yields to the pedestrians, and even to a guy who decides to cross at the last minute.

    Sometimes, however, the car has to be more "aggressive." When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don't reciprocate, it advances a bit to show to the other drivers its intention. Without programming that kind of behavior, Urmson said, it would be impossible for the robot car to drive in the real world.
    Some of these computers are already programmed to avoid hitting pedestrians. The "interesting" part will be the edge cases where swerving or stopping presents its own dangers (which will become more likely as this technology spreads); hence, the original post.
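    As a crude illustration of the "receive data, follow pre-programmed rules" idea, here is a hypothetical sketch in C++ (fitting for this board). Every type, rule, and threshold in it is invented for illustration; it is not how any real car, Google's included, actually works:

    Code:
    #include <iostream>
    #include <vector>

    enum class Action { Proceed, Yield, CreepForward, BrakeHard };

    struct Sensed {
        bool pedestrianInPath;   // someone is crossing ahead of us
        bool atFourWayStop;      // stopped at a four-way intersection
        bool haveRightOfWay;     // road rules say it's our turn
        double secondsWaiting;   // how long we've waited for others to move
    };

    // Map a snapshot of sensor data to a pre-programmed action.
    // No emotions, no ethics: just fixed rules over the current input.
    Action decide(const Sensed& s) {
        if (s.pedestrianInPath)
            return Action::BrakeHard;        // always yield to pedestrians
        if (s.atFourWayStop) {
            if (!s.haveRightOfWay)
                return Action::Yield;        // follow the road rules first
            if (s.secondsWaiting > 3.0)
                return Action::CreepForward; // show intent if nobody reciprocates
            return Action::Proceed;
        }
        return Action::Proceed;
    }

    int main() {
        std::vector<Sensed> snapshots = {
            {true,  false, false, 0.0}, // pedestrian crosses at the last minute
            {false, true,  true,  4.5}, // our turn, but the other cars won't go
            {false, false, false, 0.0}, // open road
        };
        for (const auto& s : snapshots) {
            switch (decide(s)) {
                case Action::BrakeHard:    std::cout << "brake hard\n";    break;
                case Action::Yield:        std::cout << "yield\n";         break;
                case Action::CreepForward: std::cout << "creep forward\n"; break;
                case Action::Proceed:      std::cout << "proceed\n";       break;
            }
        }
    }
    The point is just that each behavior in the quote above (yielding, creeping forward to show intent) reduces to an explicit rule over sensor data; no strong AI required.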

    I do agree with the first sentence of your second paragraph - laws and regulations need to be in place governing how to deal with these cases. That's where I disagree with the original article; I don't think it will be up to the programmers or manufacturers to make these decisions. Instead, I expect government and/or regulatory bodies would be responsible for setting those guidelines.

    In fact, certain places are already beginning to introduce legislation pertaining to driverless vehicles. I just don't think trolley problem situations will be addressed until the technology becomes more mainstream.

  8. #8
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    I can't comment on what's right; there's no right answer. You haven't even considered the potential influence that such a death would have. For example, is the pedestrian you saved a genius, hence influencing the world in a good way? Or is the driver a genius? Or whatever. These kinds of decisions just can't be made in any sane manner. I'd be surprised if there were any sort of regulation for this sort of thing for a loooong time.

    So my guess is: the car brakes as hard as it possibly can such that it keeps its driver alive and healthy, and if that kills the pedestrian, so be it. Simple logic: keep the driver alive. Why? Because otherwise people would be scared of using driverless cars. People would be terrified at the thought that they might die in a driverless car because the car decides they should die so that someone else lives; the reverse logic, that someone else may die to keep you alive, is far easier to accept. Because that's just how our minds work. Put yourself before others. A basic instinct.
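    As a toy sketch of that "driver first" rule (the maneuvers and risk numbers are all made up, and a real system would be vastly more complex):

    Code:
    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Toy encoding of the "driver first" rule: discard any maneuver that
    // endangers the occupants, then pick whichever remaining one is least
    // dangerous to everyone else. All values below are invented.
    struct Maneuver {
        const char* name;
        double occupantRisk;  // 0 = safe for occupants, 1 = certain death
        double bystanderRisk; // same scale, for people outside the car
    };

    int main() {
        const std::vector<Maneuver> options = {
            {"brake hard in lane",            0.05, 0.60},
            {"swerve into barrier",           0.90, 0.00},
            {"swerve into oncoming traffic",  0.70, 0.80},
        };

        // Step 1: keep only maneuvers acceptably safe for the occupants.
        std::vector<Maneuver> safeForDriver;
        for (const auto& m : options)
            if (m.occupantRisk < 0.10)
                safeForDriver.push_back(m);

        // Step 2: among those, minimize harm to everyone else.
        const auto best = std::min_element(
            safeForDriver.begin(), safeForDriver.end(),
            [](const Maneuver& a, const Maneuver& b) {
                return a.bystanderRisk < b.bystanderRisk;
            });

        if (best != safeForDriver.end())
            std::cout << "chosen: " << best->name << "\n"; // brake hard in lane
    }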

    Hope that makes sense.
    Quote Originally Posted by Adak
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  9. #9
    (?<!re)tired Mario F.
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by Matticus
    In fact, certain places are already beginning to introduce legislation pertaining to driverless vehicles. I just don't think trolley problem situations will be addressed until the technology becomes more mainstream.
    Or any other type of technology, for that matter.

    Let's go back a little, so you understand what I mean.

    Currently the trolley problem, as a thought experiment, is unsolved. The debate about whether the lives of 5 people are more important than the life of one person is terribly nuanced. If we were to legislate on the matter, the complexity of the problem would present itself in full force: "Is it really lawful, or just, to say that for all cases the lives of 5 people are more important than the life of one person?" The ethical debate here is endless. What if that one person is Shakespeare? I don't know... can I even measure anyone's worth in numbers of people, Shakespeare or not? In other words, isn't that in itself ethically insane?

    Should I kill him before he publishes his works to save those 3 despots, 1 pedophile, and that other one who is an arsehole? What if he is already an old man and can't write anymore? And why exactly is the life of a child more important than that of a woman of childbearing age? Or the life of a productive man less important than that of a lazy kid?

    Conversely, if we know nothing about the people involved, how can we substantiate legislation, when our legal systems are based almost entirely on the minutiae needed to ensure fairness across a wide spectrum of possibilities through exceptions and addenda?

    So then one day we evolve our AIs. We are ready to give them the power to make autonomous decisions. We know they will always make the right decision; we just need to decide what exactly the right decision is, so we can input it. Where does this leave us?

    We are back to the drawing board, trying to solve the trolley problem: a problem for which you cannot find a good decision. So we cannot input any data of this sort into our AIs. And as long as we can't decide for ourselves, we can't have them decide for themselves who lives or dies. For that reason, self-driving vehicles that decide who lives or dies won't happen even if the technology presents itself.
    Last edited by Mario F.; 08-09-2016 at 03:10 PM.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  10. #10
    Lurking whiteflags
    Join Date
    Apr 2006
    Location
    United States
    Posts
    9,612
    I can't see this thought experiment becoming a real problem... at least not one that nations are going to work very hard to solve.

    First, recognize that the road was already taken away from pedestrians with the advent of the earliest cars. Walking illegally in the street is known as jaywalking, and it is mostly a civil offense, meant to protect people from accidents.

    I don't see much changing, to be honest. A determined person will step in front of an oncoming automatic car, and even if it slams on the brakes, physics will run you over. At a certain point, there is no decision for the car to make. I imagine when it happens it will be handled much like regular car accidents: the person who owns the vehicle is responsible for what happens in and around the vehicle at all times.
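    To put rough numbers on "physics will run you over", here's a back-of-the-envelope calculation using the textbook stopping-distance formula; the speed, reaction time, and friction coefficient are assumptions, not measurements:

    Code:
    #include <iostream>

    // Back-of-the-envelope stopping distance: d = v*t_react + v^2 / (2*mu*g).
    // The speed, reaction time, and friction coefficient below are assumed
    // values for illustration, not measurements of any real vehicle.
    int main() {
        const double v = 50.0 * 1000.0 / 3600.0; // 50 km/h in m/s (~13.9)
        const double tReact = 0.2;               // sensor-to-brake latency, s
        const double mu = 0.8;                   // tire friction, dry asphalt
        const double g = 9.81;                   // gravity, m/s^2

        const double d = v * tReact + (v * v) / (2.0 * mu * g);
        std::cout << "stopping distance at 50 km/h: about " << d << " m\n";
        // Prints roughly 15 m. Anyone who steps out closer than that gets
        // hit no matter what the software decides.
    }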

    EDIT: I'm glad I will die before automatic cars are dominant. I personally can't handle the transition. I'm too worried.
    Last edited by whiteflags; 08-10-2016 at 12:18 AM.

  11. #11
    Registered User
    Join Date
    Apr 2015
    Posts
    16
    A self-driving car is computer operated, and prospective users must be educated on the concept in order to understand it. It should be a solution for enhancing driver and passenger safety. The downside, though, is the need for reassurance that the technology is free of vulnerabilities and issues.
