Moral Decisions and the Brain

In the July/August issue of Discover Magazine, Kristin Ohlson’s article “The End of Morality” explains how our brains come up with answers to moral dilemmas, “…allowing logic to triumph over deep-rooted instinct.” The article goes into great depth on competing moral philosophies, as well as how we sometimes need to think harder, set aside our emotions, and override our automatic settings. But for this post I’m going to concentrate on the dilemmas.

One study mentioned measured blood oxygen levels in the brain as subjects were given a series of moral dilemmas. Kill a person or don’t kill a person. Easy-peasy. Very little increase in blood oxygen while judgment was being made.

The first true dilemma involved a runaway vendor’s cart. You’re at a bicycle race and witness a hot dog cart rolling down a hill. If you shove it across the road, three spectators will die. If you don’t, dozens of cyclists will die. What do you do? That scenario showed an increase in blood oxygen for the volunteers who opted to save more people. Someone—several someones—still had to die, and it still seems like a no-brainer to sacrifice three for the lives of several dozen, but to the scientists behind the study it appeared “…that reason was overriding an automatic, instinctual response” not to kill at all.

But two of the more intriguing moral dilemmas I read went a bit further into how we’re wired to make such decisions. The first had to do with pulling a switch to alter the course of a streetcar. If the streetcar stays on its current path, it will kill five people. But if you throw the switch, it will kill one. Most subjects chose to pull the switch and kill one person, thereby saving five. A tough choice, but still morally acceptable.

Then the scientists threw this little wrench in: What if the streetcar heading for the five people could be stopped if you pushed one man (in this case, a man large enough to stop the streetcar) off a bridge and into its path? Though the results would be the same, one dying versus five, the study showed high activity in the brain regions where emotion and social cognition are processed. In the switch-throwing scenario, by contrast, more activity appeared in the region associated with reasoning. It seems that whether or not our decisions require a “hands-on” approach makes us think harder about how we’ll deal emotionally and socially with our actions.

A similar hands-on difficult dilemma (and as one scientist in the article states, “A good dilemma is one that makes you go ugh.”) involves imagining yourself as a doctor. You have five transplant patients desperate for organs. You can save all five if you murder one man and harvest his organs. Would you do it?

Everyone has likely heard of Phineas Gage, the guy back in the 19th century who survived an iron rod going through his head. Sure, he lived, but the rod damaged the portion of his brain where well-reasoned decisions and future plans are made, as well as an area associated with emotion.

By studying his skull, as well as the function of other patients whose brain damage caused similar disruptions to personality, doctors learned that “…the decision-making process, long deemed rooted in reason, was guided by emotion as well.” Emotion is hardwired into our brains. Even young children “knew” it was permissible to throw the switch and kill one person, but not to actively push a man in front of the streetcar.

So where is all this leading me?

Artificial intelligence and those spiffy, lifelike robots that are being designed to teach school, care for patients, and perform a dozen other tasks. Sure, I know Asimov’s Laws of Robotics, but can we teach them morality? Can we teach them that it should be a more difficult decision to push a man to his death than to throw a switch? That in some circumstances, the needs of the one outweigh the needs of the many? Or that, as in I, Robot, it’s not always “right” to save one person over another just because their chance of survival is greater?
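Out of curiosity, the switch-versus-push asymmetry from the studies above can be sketched as a toy scoring function. Everything below is invented for illustration—the penalty weight especially—and it’s nothing like a real moral reasoner, which is rather the point:

```python
# Toy sketch of the switch-vs-push asymmetry. All weights are invented
# for illustration; no real AI ethics system works this way.

def moral_cost(lives_lost, personal_force=False):
    """Crude utilitarian cost, plus a penalty for 'hands-on' harm."""
    cost = lives_lost
    if personal_force:
        cost += 4  # emotional/social penalty for directly pushing someone
    return cost

def choose(option_a, option_b):
    """Return whichever option carries the lower moral cost."""
    return min(option_a, option_b, key=lambda o: moral_cost(**o))

# Streetcar: do nothing (5 die) vs. throw the switch (1 dies)
switch_case = choose({"lives_lost": 5}, {"lives_lost": 1})

# Footbridge: do nothing (5 die) vs. push the man (1 dies, hands-on)
push_case = choose({"lives_lost": 5},
                   {"lives_lost": 1, "personal_force": True})
```

Strip out the `personal_force` penalty and the model happily pushes the man off the bridge—a flat utilitarian every time. It’s that extra emotional weighting, the thing our brains apparently do automatically, that nobody quite knows how to program.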

Will we come up with a way to make a truly human brain? What other issues regarding the brain and morality can you come up with? What insidious scenarios involving the brain and morality can you envision?

Image: Actroid-DER (from Wikipedia)


10 comments

  1. ddpole

    Very interesting topic. And what if artificial intelligence, one day, comes up with a “different” morality than ours?

    1. That’s a very good question. And a bit scary 😛

      Thanks for stopping by!

  2. Interesting post! I have no real moral scenarios, and no opinions regarding AI, but I would be interested to know how the results would differ if the same experiment was undertaken with convicted murderers or other types of criminals. Or what about soldiers on active duty? Just me, pondering 🙂

    1. That would be an interesting experiment, KC. As far as an insidious use for the brain activity information, what if someone tweaked that area, creating people who lacked the process to consider those dilemmas any sort of dilemma at all?

  3. This is a really difficult thing for people to understand about robotics. I wonder sometimes how many mistakes will be made as we try to develop AIs with decision-making abilities, and just how bad those mistakes will be. How far-reaching.

    1. Excellent question, Lilly. I think our ability to weigh decisions with both logic and emotion is something that can’t readily be programmed into an AI.

  4. Fascinating blog, Cathy. And a little frightening.

    1. Thanks, Elise. I agree it can be a little frightening. How will we handle this sort of “programming”? *Can* we handle it? We are notorious for having science outpace our ability to deal with its consequences.

  5. When I’m facing a difficult decision I muse on it. I think about it and try to imagine the consequences of my various possible reactions. I can’t imagine a computer being able to muse, wonder, imagine. Maybe a cross between a computer and a human, if such a thing is possible, would be able to do this?

    1. It’s the emotional input that is the sticking point. Can you program emotion? Or can it be “pulled” from a human brain and integrated into a computer? I have my doubts.
