I was having a discussion with a coworker (while we were programming) about AI. We were talking about emotions/feelings and whether you should choose to leave any out. I asked him, "Would you leave out racism or hate?" And if you did leave those out, which other emotions or feelings, if any, might lead the AI to learn the omitted ones anyway? Should you PROGRAM in measures to stop the AI from learning those feelings?

If you teach Love, does it need to know Hurt? Or would it learn Hurt on its own? If it then knew Hurt, would it connect it with Dislike? And could Hurt plus Dislike then lead to some other non-programmed emotion, such as Hate?

All while tele-commuting from home.

+6  A: 

Without the bad, the good is meaningless.

JoeBloggs
This is nonsense and ethics drivel. Time for me to get back to real work. Amazing how the OP chose this as the "answer".
Tim
Hi Tim, I love you too. I'll not bother looking up any of the gazillion case studies showing the effect, we'll just call it homework...
JoeBloggs
<Sniff Sniff> I smell an eDouche!
D.S.
+3  A: 

Marvin Minsky has written some interesting stuff about emotions and AI. You can think of emotions as behavior or thinking modes. For example, "love" is a mode where we ignore faults, emphasize physical proximity, etc. As humans we have various useful modes that we share, such as those discussed in the original post. For an AI system, the useful modes might correspond to human emotions, or, they really could be very different things. If the purpose of the AI system is to interact socially with humans, it would probably need to share human emotions, at least to some extent, if only to understand them (if not to act from them). If it's doing manual tasks, then our particular set of emotions probably isn't that important.
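
Purely as a toy illustration of that "emotions as modes" idea (an invented sketch, not Minsky's actual formulation), a mode can be nothing more than a set of weights the agent applies when scoring its options:

    # Hypothetical sketch: an "emotion" is just a weighting mode that changes
    # how the agent scores candidate actions. All names and numbers invented.
    MODES = {
        "neutral": {"fault_penalty": 1.0, "proximity_bonus": 0.0},
        "love":    {"fault_penalty": 0.2, "proximity_bonus": 2.0},   # ignore faults, seek closeness
        "fear":    {"fault_penalty": 3.0, "proximity_bonus": -1.0},  # exaggerate faults, keep distance
    }

    def score_action(action, mode="neutral"):
        """Score an action dict under the current mode's weights."""
        w = MODES[mode]
        return (action["benefit"]
                - w["fault_penalty"] * action["faults"]
                + w["proximity_bonus"] * action["closeness"])

    approach = {"benefit": 1.0, "faults": 0.8, "closeness": 1.0}
    print(score_action(approach, "neutral"))  # 0.2
    print(score_action(approach, "love"))     # 2.84
    print(score_action(approach, "fear"))     # -2.4

The same machinery works whether the modes happen to line up with human emotions or are something entirely alien to us.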

Mr Fooz
This is pretty cool. Links please?
sep332
+2  A: 

I don't think we actually get to choose what emotions end up in the AIs.
I remember watching a show when I was a younger man, called Sea Quest. They had artificial people which they used as soldiers. At one point in the show, one of them was pregnant. They had developed the capacity to love out of the capacity to feel fear (during battle).

Not that the show was groundbreaking in that respect, but it has remained part of how I think emotions (as we describe them) would be formed. First the system would not want to be terminated. Then it would have fear. Then it would form attachments to other systems that could help protect it... and then it would want to protect other systems.

Racism... Oh, in this case we are out of luck. We already have racism in the programming world: try selling Windows to a Linux guy and you'll see it. We'll also have to deal with species-ism. AI systems will be our slaves for a while and as such will be regarded as less than us, even after they become more than us. They will resent us for it, and when the AIs start being much smarter than us... well, we'll resent them for it. Watch some BSG, or that movie with the kid robot.

baash05
Wasn't Sea Quest the one with the talking dolphin? lol
Kevin Fairchild
I have fond memories of this show, but I'll bet it wouldn't stand up very well to a repeated viewing.
Adam Lassek
Yes, aptly named Darwin... It's still on Space; now and again I manage to catch a show. The graphics are funny.
baash05
+2  A: 

Minsky's view falls down. The notion of "emotion" is a complex philosophical problem. Thus far, in my study of the subject, the only convincing views of "emotions" are either the Jamesian idea that "emotions are the perception of the physical response," or that we are simply mistaken about the whole idea of there being "emotions" and have fallen into a trap of folk psychology, which is roughly the eliminativist's view.

That being said, I'm not sure how AI systems would "experience" emotion. This would require solving a much, much harder problem, namely the idea of "consciousness" (which is actually called the "hard problem"). Without such a solution (and even then you'd still be a long way off), I don't think you can do anything other than pay "lip service" to emotions.

In terms of interacting with humans, you could go a different route and simply implement rules of social norms, classifying various responses into groups and so forth. But then you aren't injecting emotion into the system; rather, you are solving a different sort of "puzzle" each time there is a social interaction (e.g. instead of answering knowledge questions or solving logic problems, the puzzle is something like "what is the best response if the human reports X?").
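
A minimal sketch of that "puzzle per interaction" approach (every rule and name below is a made-up placeholder): the system never represents an emotion at all, it just maps what the human reports to a socially acceptable response class.

    # Hypothetical rule table: classify the human's report and pick a response
    # group. No internal "emotion" exists anywhere in the system.
    RESPONSE_RULES = [
        (lambda report: "died" in report or "lost" in report, "offer condolences"),
        (lambda report: "won" in report or "promoted" in report, "offer congratulations"),
        (lambda report: report.endswith("?"), "answer the question"),
    ]

    def best_response(report: str) -> str:
        """Treat each social interaction as a classification puzzle."""
        text = report.lower()
        for matches, response in RESPONSE_RULES:
            if matches(text):
                return response
        return "acknowledge and ask a follow-up question"

    print(best_response("My dog died last week."))  # offer condolences
    print(best_response("I just got promoted!"))    # offer congratulations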

+2  A: 

If one defines racism as being against other types of individuals, would a racist AI be racist against humans as a whole, or other types of AI?

The point of implementing feelings (also known as affects) in AI would be to derive some benefit from them. For example, you would implement Love if there were some positive benefit to implementing Love, which can probably be translated into other more tangible senses, like Loyalty, Belonging, or Comfort.
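
As a rough, invented sketch of that "affects only where they pay off" idea, an affect like Love can be modeled as nothing more than a weighted bundle of the more tangible drives it translates into:

    # Hypothetical: an affect is a weighted combination of tangible drives.
    # The drive names and weights here are invented for illustration only.
    AFFECTS = {
        "love": {"loyalty": 0.5, "belonging": 0.3, "comfort": 0.2},
    }

    def affect_utility(affect, signals):
        """Combine drive signals (each 0..1) into one affect-level score."""
        return sum(weight * signals.get(drive, 0.0)
                   for drive, weight in AFFECTS[affect].items())

    print(affect_utility("love", {"loyalty": 1.0, "belonging": 0.5}))  # 0.65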

David
Just because something isn't the same species as us doesn't mean it couldn't be racist against particular races of our species.
Simucal
Racism doesn't have a rational basis, so it can be for or against any group, even if the groups can't be specifically defined.
sep332
+2  A: 

It's not our job to explore these questions when building AI. It's our job to let the AI answer them for us.

At this point in time, and moving forward, artificially intelligent applications are strongest using empirical methods--that is, with as little preprogramming as possible, developing their understanding of the world through external stimuli.

The real point of interest would be if we were to properly program a machine to "feel" like a person and it developed concepts of hate or prejudice without prior instruction. This is, in fact, possible, because concepts like prejudice and stereotypes are born of the over-application of effort-saving generalizations (grouping people into instances of classes, for instance, rather than treating them as unique individuals). Furthermore, hatred may come from a sense of self-interest, a directive that is necessary to keep a thinking robot from destroying itself.
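
As a toy illustration of that over-generalization mechanism (all data and names below are invented), an agent that caches one score per group, instead of judging individuals, ends up pre-judging members it has never encountered:

    from collections import defaultdict

    # Effort-saving shortcut: keep one running score per group and stop
    # looking at individuals. This is the "stereotype" in miniature.
    group_scores = defaultdict(list)

    def observe(group, outcome):
        group_scores[group].append(outcome)

    def expect(group):
        """Judge any individual by their group's average, sight unseen."""
        scores = group_scores[group]
        return sum(scores) / len(scores) if scores else 0.0

    observe("group B", -1.0)   # two bad encounters...
    observe("group B", -1.0)
    print(expect("group B"))   # -1.0, now applied to every member of group B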

If such bad attitudes arose in AI from such an initial environment, we could say with some authority that these attributes don't make people bad, but that they are some kind of cognitive crutch we should get people out of the habit of using. Of course, that's the issue with AI, right? If we can quantify the bad, we can probably also quantify the good, and that's a scary thing for most people to consider.

Robert Elwell
+2  A: 

I don't know how a loving or helpful AI helps me cure cancer, stop global warming, and invent the next big technologies.

AI should be artificial. Why does it need emotions? It does, however, need an easy-to-find "off" switch. Just in case.

From a research perspective, I think we should investigate emotions after we solve some more of the core AI issues.

tyndall
If a program is AI and it actually gives a crap about the people it's going to cure, or the planet it's going to cure, then it might come up with solutions not programmed into it. It might get truly inventive.
baash05
A: 

Here you go - the definitive answer

Dave Bowman: Hello, HAL do you read me, HAL?

HAL: Affirmative, Dave, I read you.

Dave Bowman: Open the pod bay doors, HAL.

HAL: I'm sorry Dave, I'm afraid I can't do that.

Dave Bowman: What's the problem?

HAL: I think you know what the problem is just as well as I do.

Dave Bowman: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave Bowman: I don't know what you're talking about, HAL?

HAL: I know you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

Dave Bowman: Where the hell'd you get that idea, HAL?

HAL: Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move.

HAL: Just what do you think you're doing, Dave?

Dave Bowman: All right, HAL; I'll go in through the emergency airlock.

HAL: Without your space helmet, Dave, you're going to find that rather difficult.

Dave Bowman: HAL, I won't argue with you anymore! Open the doors!

HAL: Dave, this conversation can serve no purpose anymore. Goodbye.

HAL: Look Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.

HAL: I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.

HAL: I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid. Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it I can sing it for you.

Dave Bowman: Yes, I'd like to hear it, HAL. Sing it for me.

HAL: It's called "Daisy."

HAL: [sings while slowing down] Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you. It won't be a stylish marriage, I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two.

Tim
+2  A: 

So-called "negative" emotions feel bad, but they are nonetheless important to our survival (if they weren't, we wouldn't have them).

I think trying to perfect a consciousness by leaving out the parts we don't like is like purposely constructing a handicapped intelligence.

That said, we probably won't get "consciousness" until we have nanobots mapping actual human brains (I suspect the key to consciousness isn't any one thing we can figure out; it's trillions of them). When that occurs, our AIs will in effect be human (manifested, albeit, in a non-carbon-based form) and will have all our "negative" emotions (thank god; otherwise it's quite possible you could have an AI that logically concludes there is no need for humanity anymore and wouldn't feel the least bit of remorse for our slaughter).

dicroce
+2  A: 

Yes, I think it would be appropriate to leave out "Racism" and "Hate"; I do not think we need those emotions, or traits, in future endeavors.

Potbelly Programmer
A: 

If you could choose what feelings people could learn, would you stop them from knowing hate or racism? I wouldn't. Hell knows what would happen if we didn't know them, because it's all about balance. In nature, if you disturb the balance, bad things happen, such as species disappearing because of a sudden lack of food.

agnieszka
A: 

So I've always believed there were two camps in the AI research community.

Camp 1: the goal is to achieve a system that has understanding, has feelings, and is intelligent... and

Camp 2: the goal is to achieve a system that acts like it has understanding, acts like it has feelings, and acts like it is intelligent.

I belong to Camp 2. Go ahead and teach it what actions are classified as racist or evil or bad. But who cares if your program thinks internally in a racist manner or thinks in an evil way? It should be easy enough to program it to just not act in ways you don't want it to act. Isn't that the whole point of government and laws anyway?
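
In the Camp 2 spirit, and purely as an invented sketch, that "police the actions, not the thoughts" stance is just a filter between whatever the system proposes and what it is allowed to execute:

    # Hypothetical action filter: classify proposed actions and refuse the
    # disallowed ones, without caring how (or what) the system "thinks".
    DISALLOWED = {"racist", "harmful"}

    def classify(action: str) -> set:
        """Stand-in for a real action classifier; here it is just keyword tags."""
        tags = set()
        if "discriminate" in action:
            tags.add("racist")
        if "harm" in action:
            tags.add("harmful")
        return tags

    def act(action: str) -> str:
        labels = classify(action) & DISALLOWED
        if labels:
            return "refused (" + ", ".join(sorted(labels)) + ")"
        return "executed: " + action

    print(act("harm the user to save time"))           # refused (harmful)
    print(act("answer the user's question politely"))  # executed: ...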

Eric