
Lecture 14: Social Emotions

CSCI 534 (Affective Computing) – Lecture by Jonathan Gratch

Experiment

▪ We're going to play a game for some real $$

Reminder on economic vs. psychological research: deception is taboo in economic games

The experiment

1. Highest bidder wins $10

2. Bidding starts at $1 and proceeds in $1 increments. And, yes, this is for real money.

3. I will give all bidders fair warning before the auction ends.

4. Cartels and collusion among bidders are strictly prohibited. This means no communication, verbal or nonverbal, is allowed (other than bidding)

5. The highest bidder pays me what they bid and receives $10.

6. The second highest bidder pays me what they bid.

7. Only play if you are prepared to pay me

What should have happened

▪ What happened and how can we explain it? – The structure of the task can "hook" bidders into bidding high – E.g., to avoid a loss of $9, one can bid $11 and only lose $1 (if bidding stops)

▪ Why do people stop bidding? – When they realize they'd better cut their losses – Unfortunately, this is hard to recognize early in the game

▪ Why don’t people stop bidding? – People can get caught in “auction fever”: many factors conspire ▪ People tend to get excited when they bid ▪ Emotions increase when auction deadlines approach (Ku, Malhotra & Murnighan 2005) ▪ People want to avoid loss

▪ The task emphasizes the importance of Theory of Mind – Important to anticipate how others will respond – Important to shape others' beliefs about you (e.g., "I will never back down") – This reasoning is recursive and thus difficult

A strange game. The only winning move is not to play. – WarGames

Another game: the impunity game

▪ I give $10 to "Proposer" (P)
▪ Proposer can split with "Responder" (R): offer $X ∈ {$0 .. $10}
▪ R can accept or reject
▪ If R accepts, R gets $X, P gets $10−X (e.g., P keeps $7, R keeps $3)
▪ If R rejects, R gets $0, P gets $10−X (e.g., P keeps $7, R keeps $0)

▪ What offer X yields the most $ to the Proposer?
▪ What decision (accept/reject) yields the most $ for the Responder?
▪ Most people offer $2 or $3. Why?
▪ How much power does the Responder have to influence the Proposer?
▪ People often reject unfair offers. Why?

Yet another game: the ultimatum game (take it or leave it)

▪ I give $10 to "Proposer" (P)
▪ Proposer can offer $X ($0 to $10) to "Responder" (R)
▪ R can accept or reject
▪ If R accepts, R gets $X, P gets $10−X
▪ If R rejects, both get $0

▪ What offer yields the most $ to the Proposer?
▪ What decision yields the most $ for the Responder?
▪ Most people offer $4 or $5 (more than in the last game). Why?
▪ How much power does the Responder have over the Proposer? (see the sketch below)
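To make the contrast between the two games concrete, here is a minimal sketch (not from the lecture); it assumes a responder who credibly rejects any offer below a fixed threshold, and the function name and threshold value are illustrative:

```python
# Proposer's best offer against a responder who rejects offers below `reject_below`.
def proposer_best_offer(game: str, reject_below: int, pot: int = 10) -> int:
    best_offer, best_pay = 0, float("-inf")
    for offer in range(pot + 1):
        accepted = offer >= reject_below
        if game == "ultimatum":
            pay = (pot - offer) if accepted else 0   # rejection zeroes the proposer too
        elif game == "impunity":
            pay = pot - offer                        # proposer keeps pot - offer either way
        else:
            raise ValueError(game)
        if pay > best_pay:
            best_offer, best_pay = offer, pay
    return best_offer

# A threat to reject offers below $3 moves the ultimatum proposer,
# but carries no weight in the impunity game:
print(proposer_best_offer("impunity", reject_below=3))   # 0
print(proposer_best_offer("ultimatum", reject_below=3))  # 3
```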

How do Chimps play the Ultimatum Game?

▪ Chimpanzees behave according to rational analysis. They propose an unequal split and it is not rejected (Jensen, Call, Tomasello 2007)

(Slide borrowed from Edward Cartwright)

Overview: Emotions in social situations

▪ Preview next 3 lectures

▪ Introduce social rationality: – What is “proper” way to make social decisions? – Game theory

▪ Highlight departures from classical game theory

▪ Discuss "behavioral game theory" – Considers how to incorporate emotional influences – Discuss Fehr and Schmidt's Inequity Aversion Model

Decision Theory Reminder

Preview: Social Goals – "Do unto others…"

[Diagram: Situation and Goals feed Appraisal (desirability, expectedness, controllability, causal attribution), which produces Emotion and Action Tendencies; social decision-making sits alongside regulation and strategic emotions, emotion as social information vs. emotion as noise, emotion as evocative, and a signal → encoding → decoding → feedback loop.]

Theories of Social Decision-Making

Rational Choice Theory (Review)

▪ Developed over centuries

▪ Central foundation of economic decision-making

▪ Serves two basic purposes – Normative: how people (and machines) should act and think ▪ Helps us avoid confused, poor thinking ▪ Helps us analyze arguments ▪ Aids in design of “optimal” artificial decision-makers

– Descriptive: how people (and machines) actually act and think ▪ Fundamental postulate of economics: people act rationally ▪ (allows that individuals may not be rational, but this can be viewed as noise, so that the population acts rationally)

Variants of Rational Choice Theory

▪ Decision theory centers on cost-benefit calculations that individuals make without reference to anyone else's plans (Lecture 7)

▪ Game theory analyzes how people make choices based on what they expect other individuals to do. – We will discuss this when we consider social emotions

Do you think your behavior influenced the agent? – Emotions – Decisions

What if the agent plays a "fixed policy"? – Ignores your actions – Chooses Green 60% of the time – Chooses Blue 40% of the time

How should you play against such a policy?

This is decision theory – solve with reinforcement learning:

              Agent: Green (60%)   Agent: Blue (40%)
  You: Green        $5                   $2
  You: Blue         $7                   $4
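A minimal sketch of the decision-theoretic calculation, using the payoffs and probabilities from the table above (variable names are illustrative):

```python
# Expected value of each action against the fixed 60/40 policy above.
# With a fixed opponent this is plain decision theory: no strategic reasoning needed.
P_AGENT = {"green": 0.6, "blue": 0.4}                   # the agent's fixed policy
PAYOFF = {("green", "green"): 5, ("green", "blue"): 2,  # PAYOFF[(my_action, agent_action)]
          ("blue", "green"): 7, ("blue", "blue"): 4}

for my_action in ("green", "blue"):
    ev = sum(p * PAYOFF[(my_action, agent)] for agent, p in P_AGENT.items())
    print(my_action, ev)  # green 3.8, blue 5.8 -> always pick blue
```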

Do you think your behavior influenced the agent? – Emotions – Decisions

What if the agent plays tit-for-tat? – Green if you chose green on the last round – Blue if you chose blue on the last round

How should you play against such a policy?

This is Game Theory

CANNOT solve via naive reinforcement learning. Need to think about the opponent's responses to your actions (see the sketch below)
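To see why, here is a small sketch against the tit-for-tat policy above. It reuses the payoff numbers from the fixed-policy table, which is an assumption for illustration; the lecture does not give payoffs for this variant:

```python
# Against tit-for-tat, today's action determines the agent's next action,
# so myopically maximizing each round backfires.
PAYOFF = {("green", "green"): 5, ("green", "blue"): 2,  # PAYOFF[(my_action, agent_action)]
          ("blue", "green"): 7, ("blue", "blue"): 4}

def total_payoff_vs_tft(my_plan, agent_first="green"):
    agent, total = agent_first, 0
    for me in my_plan:
        total += PAYOFF[(me, agent)]
        agent = me  # tit-for-tat: the agent copies my move next round
    return total

print(total_payoff_vs_tft(["blue"] * 5))   # 23: the myopically "best" action loses
print(total_payoff_vs_tft(["green"] * 5))  # 25: cooperating does better
```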

How did the agent play?

2×2 mixed factorial design: strategy (within) × emotion expression (between)

de Melo and Terada. The interplay of emotion expressions and strategy in promoting cooperation in the iterated prisoner’s dilemma. Scientific Reports 2020

Game Theory Example: Send a signal

Assumption: my actions will influence others' actions. This is the essence of game theory

Example

Another approach

Imagine these are all driverless cars

Assumption: my actions cannot influence others' actions

These cars are just part of the environment

This is the essence of decision theory

FYI: game theory exercises different brain regions

▪ Compared to decision theory, people use different brain regions – MPFC (medial prefrontal cortex), associated with Theory of Mind reasoning – Insula, associated with emotion and activated when treated unfairly

▪ These regions are not activated when playing the same game against a computer (people treat people as special)

Alan G. Sanfey et al. Social Decision-Making: Insights from Game Theory and Neuroscience. Science 318, 598 (2007).

What is Game Theory?

▪ Game theory is a language for describing strategic interactions when what happens to one person is affected by another person

▪ A large number of situations that confront us in our day to day lives can be thought of as “games” with us as “players”

▪ And they can be analyzed using the tools of game theory

GT slides adapted from Ananish Chaudhuri, Department of Economics, University of Auckland

Games in everyday life

▪ Tennis players deciding whether to serve to the forehand or backhand of their opponent

▪ The local bakery offering a discounted price on pastries just before it closes

▪ Employees deciding how hard to work when the boss is away

▪ Pharmaceutical firms investing in research to develop a drug

▪ People bidding for stuff on eBay

▪ Airline companies trying to decide whether to cut prices

Pioneers of Game Theory

▪ Game theory enables us to understand and analyze the nature of the interaction between players in such games

▪ Foundations developed by von Neumann and Morgenstern

▪ Extended by John Nash (played by Russell Crowe in "A Beautiful Mind"), among others

▪ Used extensively in computer science, economics, biology, sociology, political science, and all branches of business- related disciplines such as management and marketing

Elements of A Game

▪ Players: who is interacting? N = {1, 2, …, n}

▪ Actions/Moves: what can the players do? Action set: $A_i = \{a_{i1}, a_{i2}, \ldots, a_{i l_i}\}$

▪ Payoff: what the players can get from the game: $u_i : \prod_{i=1}^{n} A_i \to \mathbb{R}$

The payoff is determined by the joint action

Strategy

▪ Strategy: complete plan of actions

▪ Mixed strategy: probability distribution over the pure strategies

$S_i = \left\{\, s_i = (s_{i1}, s_{i2}, \ldots, s_{i l_i}) \;:\; s_{ij} \ge 0,\; \textstyle\sum_{j=1}^{l_i} s_{ij} = 1 \,\right\}$

▪ Payoff: $u_\kappa(s_1, s_2) = \sum_i \sum_j s_{1i}\, s_{2j}\, u_\kappa(a_{1i}, a_{2j}), \quad \kappa = 1, 2$
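A minimal sketch of this payoff formula, checked against the rock-paper-scissors example on the next slide (the matrix layout is assumed):

```python
import numpy as np

# u1(s1, s2) = sum_i sum_j s1[i] * s2[j] * U1[i, j], i.e. the bilinear form s1^T U1 s2.
U1 = np.array([[ 0, -1,  1],   # row player's payoffs; rows/cols: rock, paper, scissors
               [ 1,  0, -1],
               [-1,  1,  0]])
s1 = np.array([1/3, 1/3, 1/3])  # uniform mixed strategy
s2 = np.array([0.0, 0.5, 0.5])

print(s1 @ U1 @ s2)  # ~0.0, matching the worked example on the next slide
```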

An Example: Rock-Paper-Scissors

▪ Players: A and B
▪ Actions/Moves: {rock, paper, scissors}
▪ Payoffs (A, B), with A as the row player:

              rock      paper     scissors
  rock        0, 0      −1, 1     1, −1
  paper       1, −1     0, 0      −1, 1
  scissors    −1, 1     1, −1     0, 0

  e.g., u1(rock, scissors) = 1, u2(rock, paper) = −1
▪ Mixed strategies:

s1 = (1/3, 1/3, 1/3), s2 = (0, 1/2, 1/2)

u1(s1, s2) = 1/3·(0·0 + 1/2·(−1) + 1/2·1) + 1/3·(0·1 + 1/2·0 + 1/2·(−1)) + 1/3·(0·(−1) + 1/2·1 + 1/2·0) = 0

What is the solution of the game?

Typical Assumptions

▪ Axiomatic assumptions on games

1. Assumes player is rationally self-interested In any given situation a decision-maker always chooses the action which maximizes own self-interest (i.e., maximize expected utility).

2. Assumes opponent is rationally self-interested

3. Assumes perfect knowledge: players know the structure of the game (moves, utilities, etc.)

4. Assumes communication only through actions: talk is "cheap" (since people can lie, there is no point listening to them)

5. Assumes nothing carries over to other games

Example: Prisoners' Dilemma (Split-Steal)

Payoffs are shown as (You, Opponent):

              Opponent: Green   Opponent: Blue
  You: Green      12, 12            0, 18
  You: Blue       18, 0             6, 6

If the opponent picks Green, your best move is Blue ($18 > $12); if the opponent picks Blue, your best move is again Blue ($6 > $0). Picking Blue is the dominant strategy: best regardless of what the other player does (see the sketch below).
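The dominance check can be done mechanically; a minimal sketch with the payoffs above (your payoffs only; names are illustrative):

```python
# Blue dominates green if it pays strictly more against every opponent action.
MY_PAYOFF = {("green", "green"): 12, ("green", "blue"): 0,   # MY_PAYOFF[(mine, theirs)]
             ("blue", "green"): 18, ("blue", "blue"): 6}

def strictly_dominates(a: str, b: str) -> bool:
    return all(MY_PAYOFF[(a, opp)] > MY_PAYOFF[(b, opp)] for opp in ("green", "blue"))

print(strictly_dominates("blue", "green"))  # True: blue is best whatever they do
```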

Example: Prisoners' Dilemma

And people typically do better than the "rational" solution. Why?

              Opponent: Green   Opponent: Blue
  You: Green      12, 12            0, 18
  You: Blue       18, 0             6, 6

Green/Green yields the highest joint return, but picking Blue is the dominant strategy (the "rational" solution): best regardless of what the other player does.

Iterated game

▪ You played a multi-round game. Does this change the reasoning?

▪ Assume a finite-horizon game (4 rounds) – Using the argument above, you can prove you should pick the non-cooperative choice (Blue) on the last round – Similarly, you can prove the opponent will pick this as well – Working backwards (backward induction), you can prove you should pick Blue on round 1

▪ If the horizon is unknown the analysis is more complicated but, given reasonable assumptions, the same conclusion follows (see the sketch below)
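A small simulation makes both points at once: the backward-induction strategy defects (Blue) every round, yet two tit-for-tat players who start with Green out-earn two "rational" players over the 4 rounds. The strategies are assumed for illustration:

```python
# Four-round iterated game with the payoff table above: PAYOFF[(mine, theirs)].
PAYOFF = {("green", "green"): 12, ("green", "blue"): 0,
          ("blue", "green"): 18, ("blue", "blue"): 6}

def play(strat_a, strat_b, rounds=4):
    """Each strategy maps the opponent's history of moves to an action."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

always_blue = lambda opp: "blue"                       # the backward-induction play
tit_for_tat = lambda opp: opp[-1] if opp else "green"  # cooperate first, then mirror

print(play(always_blue, always_blue))  # (24, 24): the "rational" outcome
print(play(tit_for_tat, tit_for_tat))  # (48, 48): mutual cooperation beats it
```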

Why do people beat the rational solution?

▪ Not always the case – Sometimes rational actors perform better – Depends on structure of game

▪ But clear that people depart from the rational model

▪ Thus, the rational model is a poor choice for predicting human social behavior, especially when situations evoke emotions

▪ To fix, models appeal to concepts that seem like emotion

Why don't people follow game theory?

▪ Axiomatic assumptions on games

1. Assumes player is rationally self-interested In any given situation a decision-maker always chooses the action which maximizes own self-interest (i.e., maximize expected utility).

2. Assumes opponent is rationally self-interested

3. Assumes perfect knowledge: players know the structure of the game (moves, utilities, etc.)

4. Assumes communication only through actions: talk is "cheap" (because people can lie)

5. Assumes nothing carries over to other games

One solution: Maybe talk isn't cheap

If we can predict opponent next action from words or emotions, we can do better (e.g., they have a “tell”)

              Opponent: Green   Opponent: Blue
  You: Green      12, 12            0, 18
  You: Blue       18, 0             6, 6

In terms of game theory, knowing opponent’s first move is a special case called a “Stackelberg Game”

One solution: Maybe talk isn't cheap

If we know in advance that the opponent will pick Green, we can pick our best response to Green; if we know they will pick Blue, we can best-respond to Blue.

One solution: Maybe talk isn't cheap

▪ The previous example shows some limits of our abilities, but there is evidence that people can predict cooperation

Brosig, J. (2002). Identifying cooperative behavior: some experimental results in a prisoner's dilemma game. Journal of Economic Behavior and Organization, 47, 275-290.

Frank, R. H., Gilovich, T., & Regan, D. T. (1993). The evolution of one-shot cooperation: an experiment. Ethology and Sociobiology, 14, 247-256.

▪ People interacted w/ partner for 5 min before playing

▪ Were better than chance at predicting who would cooperate

Affective computing approach

Identified nonverbal cues in human dyads that were associated with untrustworthiness

If a robot showed these cues before the game, people didn't trust it

One solution: relax the assumption that people are purely self-interested

Suppose we know in advance that the opponent will pick Green:

              Opponent: Green   Opponent: Blue
  You: Green      12, 12            0, 18
  You: Blue       18, 0             6, 6

Your best move (in purely self-interested terms) is Blue.

▪ What would you actually pick?

▪ Why?

▪ How would you feel?

One solution: relax the assumption that people are purely self-interested

▪ Recall: we can fix decision theory by maximizing expected emotion rather than expected utility

▪ Maybe we have emotions about other people? – We feel bad when we hurt others ▪ Feel guilt ▪ Try to repair relationships – We feel bad when others hurt us ▪ Feel anger ▪ Try to get even

Another solution: We are influenced by others' emotions

▪ Emotional signals reinforce prosocial motives – We feel bad when we hurt others (feel guilt) – We may feel worse if they show they are hurt (show anger)

Preview: Social Goals – "Do unto others…"

[Preview diagram repeated: Situation and Goals → Appraisal → Emotion → Action Tendency, alongside social decision-making, strategic emotions, and the signal encoding/decoding feedback loop.]

First (today): Social Goals – "Do unto others…"

[Diagram: Situation and Goals → Appraisal (desirability, expectedness, controllability, causal attribution) → Emotion → Action Tendency]

Social Goals

▪ The majority of economic and game-theoretic models are based on the assumption that agents have self-regarding preferences

▪ But people don't only care about themselves – We feel bad when we hurt others (guilt) – We feel bad when others hurt us (anger)

Examples of "other-regard"?

▪ Donating to charity

▪ Opening a door for someone carrying a heavy item

▪ Yielding to somebody who is trying to merge into rush hour traffic

▪ An eBay seller providing positive feedback on a buyer after the buyer provides positive feedback on the seller

▪ Repaying a favor

Examples of other-regarding behavior?

[Figure: Social Value Orientation ring – altruistic, fair, self-interested ("rational" choice), and competitive orientations]

▪ This personality difference is called Social Value Orientation (SVO)

▪ It's an example of an other-regarding preference

▪ Are all other-regarding preferences pro-social?

▪ And, actually, this is not inconsistent with rational theory – the axioms of decision theory don't say utility is self-interested

One model: Inequity Aversion (Fehr & Schmidt, 2006)

▪ Self-interest only considers our own outcomes – When receiving an offer in the ultimatum game, $1 is better than $0

▪ Fairness involves a social comparison – Hey! You got $4! – We feel bad when others gain more than us (envy) – We feel bad when we gain more compared with others (guilt)

▪ Just like decision-affect theory, we can change the utility function (see the sketch below):

Ume($me, $you) = $me − αme ∙ max{$you − $me, 0} − βme ∙ max{$me − $you, 0}

The first term is self-interest; the α term is envy (when you get more than me); the β term is guilt (when I get more than you).
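A direct implementation of this utility function; the demo values mirror the worked example on the next slides, and the parameter choices are illustrative:

```python
# Fehr-Schmidt inequity-aversion utility: self-interest minus envy minus guilt.
def fs_utility(mine: float, yours: float, alpha: float, beta: float) -> float:
    envy = alpha * max(yours - mine, 0)    # disadvantageous inequity
    guilt = beta * max(mine - yours, 0)    # advantageous inequity
    return mine - envy - guilt

# Receiver offered $1 while the sender keeps $4 (the example on the next slide):
# with alpha = 1 the offer is worth -2, while rejecting (both get $0) is worth 0.
print(fs_utility(1, 4, alpha=1.0, beta=0.5))  # -2.0 -> reject
print(fs_utility(0, 0, alpha=1.0, beta=0.5))  #  0.0
```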

Receiver's Perspective – Example: Ultimatum Game
α = envy parameter, β = guilt parameter

[Figure: sender and receiver split $5; the sender offers $1 and keeps $4; the receiver feels envy]

U($1, $4) = $1 − α × max{4 − 1, 0} − β × max{1 − 4, 0}
U($1, $4) = $1 − α × 3 − β × 0
U($1, $4) = $1 − 3 = −$2 (if α = 1); the Receiver will reject

Sender's Perspective – Example: Ultimatum Game
α = envy parameter, β = guilt parameter

[Figure: sender and receiver split $5; the sender offers $1 and keeps $4, and feels guilt]

U($4, $1) = $4 − α × max{1 − 4, 0} − β × max{4 − 1, 0}
U($4, $1) = $4 − α × 0 − β × 3
U($4, $1) = $4 − 3 = $1 (if β = 1); the Sender will make the offer (but feel guilty about it)

Limitations of Inequity Aversion

▪ Emphasizes relative fairness of outcomes – If outcome is unequal across multiple parties, seen as unfair

▪ Is outcome the only factor people care about in social situations?

Imagine this modified ultimatum game

▪ I give $10 to Proposer
▪ Proposer can share some money with Responder
▪ Responder can accept or reject
▪ If Responder rejects, both get nothing (i.e., ultimatum-game rules)
▪ What if the Proposer gives $2?
▪ What if you learned that the Proposer was only allowed to share $0 or $2?

Inequity aversion predicts rejection in both cases

Intentions matter

▪ What matters is how the other person has treated me relative to how they could have treated me – People are willing to sacrifice their own payoff to help those that they think have been kind to them – They are prepared to give up their own payoff to punish those that they think have been unkind

Rabin's Reciprocity Model

▪ Emphasizes “kindness” over outcomes

Ui(ai, bj, ci) = πi(ai, bj) + α · f̃j(bj, ci) · [1 + fi(ai, bj)]

– ai : player i's strategy (e.g., split or steal)

– bj : player i's belief about what player j's strategy will be

– ci : player i's beliefs about player j's beliefs about player i's strategy

– πi(ai, bj): player i's payoff if i plays ai and j plays bj

– fi(ai, bj): player i's "kindness" toward j, normalized by what the player could have given, e.g., ($2 − $0)/2 in the modified ultimatum game above

– f̃j(bj, ci): player i's belief about player j's kindness toward i

▪ Relies on beliefs about the other player's intentions – I believe; I believe that you believe (see the sketch below)
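A sketch of the kindness term, built from the slide's ($2 − $0)/2 hint: kindness compares what was given to the midpoint of what could have been given, normalized by the feasible range. This normalization is an assumption based on that hint:

```python
# Kindness of an offer relative to what the proposer *could* have offered.
def kindness(given: float, feasible: list[float]) -> float:
    best, worst = max(feasible), min(feasible)
    equitable = (best + worst) / 2          # midpoint of the feasible offers
    return (given - equitable) / (best - worst)

print(kindness(2, [0, 2]))   #  0.5: $2 is kind when $2 was the most possible
print(kindness(2, [0, 10]))  # -0.3: the same $2 is unkind when $10 was possible
print(kindness(0, [0, 2]))   # -0.5: offering nothing is unkind
```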

How do we form these beliefs?

▪ Playing randomly?

▪ Attending to my actions?

▪ Attending to my emotions?

▪ Do they care about fairness?

▪ More generally, what is my opponent's "type"?

▪ How did you figure it out?

General comments on other-regarding preferences

▪ People act as though they care about others

▪ Can incorporate these into utility function

▪ Improves fit to data

▪ Also allows us to model individual "types" – How does behavior change if we alter alpha and beta? (see the sketch below)

Ume($me, $you) = $me − αme ∙ max{$you − $me, 0} − βme ∙ max{$me − $you, 0}
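Sweeping alpha and beta shows how different "types" respond to the same unfair offer; the parameter values below are illustrative, not from the lecture:

```python
# How receiver behavior in the $1-of-$5 ultimatum example changes with alpha/beta.
def fs_utility(mine, yours, alpha, beta):
    return mine - alpha * max(yours - mine, 0) - beta * max(mine - yours, 0)

types = {"self-interested":  (0.0, 0.0),
         "mildly envious":   (0.3, 0.1),
         "strongly envious": (1.0, 0.25)}

for name, (alpha, beta) in types.items():
    u_accept = fs_utility(1, 4, alpha, beta)   # rejecting is worth 0
    print(f"{name}: {'rejects' if u_accept < 0 else 'accepts'} (u = {u_accept:.1f})")
```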

▪ Envy (α) is usually larger than guilt (β)

Individual differences

Ume($me, $you) = $me − αme ∙ max{$you − $me, 0} − βme ∙ max{$me − $you, 0}

▪ Social-value orientation – Some people are more individualistic, some more collaborative – Can be modeled with alpha and beta

▪ Culture – Some cultures are more collectivist (e.g., China) – More guilt for "in-group" members – Less guilt toward "out-group" members

Situational differences: e.g., Ultimatum Game

[Figure: sender and receiver split $5; the sender offers $1 and keeps $4, and feels guilt]

Psychological Distance (Trope and Liberman, 2010)

People show less other-regard as “social distance” increases

– Dictator game with another participant "like you"
– With a machine representing another participant
– With a machine representing the experimenter
– With the machine itself
– Throwing your $ in the trash

(increasing social distance from the "other")

de Melo, Carnevale, and Gratch. Mind Perception of Computers and Humans (in prep)

Social goals

▪ Up to now we have focused on fairness

▪ Being unfair is a type of social harm

Social harm

Another example

Mind perception explains "distance" effects

[Figure: humans, animals, and automation arranged by perceived autonomy]

▪ Research suggests social cognition is influenced by "mind perception"
▪ How we treat other entities depends on the extent to which we attribute them "a mind"
▪ People organize other minds along 2 broad dimensions: Do they think? Do they feel?

Mind perception explains "distance" effects

When treated unfairly by an entity perceived as intentional:
• Feel envy
• Reject unfair offers

When treated unfairly by an entity perceived as lacking intentionality:
• Don't feel envy
• Accept unfair offers

▪ These mind perceptions have consequences

de Melo, Carnevale and Gratch, Social categorization and cooperation with autonomous agents and avatars, in prep.

Mind perception explains "distance" effects

When treating others unfairly:

If the other is seen as meriting protection from harm:
• Feel guilty
• Avoid causing harm

If not:
• Feel no guilt
• Happily cause harm

de Melo, Carnevale and Gratch, Social categorization and cooperation with autonomous agents and avatars, in prep.

Also explains how we "dehumanize" others

▪ Animalistic dehumanization – Treat others "as though" they were animals (deny them intelligence) – Often done to minorities; colonialist attitudes (i.e., patronizing)

▪ Mechanistic dehumanization – Treat others "as though" they were machines or objects – e.g., doctors often dehumanize patients in this way

But emotion can move this around

Autonomous agent vs. avatar

How we treat other entities depends on the extent to which we attribute them "a mind". People organize other minds along 2 broad dimensions: Do they think? Do they feel?

Adding behaviors that convey emotion, or taking emotions away from the human, shifts these attributions:

▪ Machines that express emotion are treated "as if" they are human-controlled
▪ Humans that fail to express emotion are treated "as if" they are computer-controlled

For example

de Melo, Carnevale, and Gratch. Mind Perception of Computers and Humans (in prep)

▪ People play as the sender in an iterated ultimatum game for $
▪ With a purported human or computer opponent
▪ That does or does not exhibit emotions in response to offers

People are more fair towards other humans, but the effect vanishes if we control for emotion

▪ Suggests an important role for "emotion-like" behaviors in human-machine interaction

Social goals

▪ Up to now we have focused on fairness, harm

▪ Can you think of other social goals?

Subjective value inventory

▪ Feelings about the instrumental outcome – How satisfied are you with your own outcome – i.e., the extent to which the agreement benefited you

▪ Feelings about the self – Did you “lose face” (i.e., damage your sense of pride)

▪ Feelings about the process – Would you characterize the negotiation process as fair?

▪ Feelings about the relationship – How satisfied are you with your relationship with your counterpart(s) as a result of this negotiation

Curhan, J. R., Elfenbein, H. A., & Xu, A. (2006). What do people care about when they negotiate? Mapping the domain of subjective value in negotiation. Journal of Personality and Social Psychology, 91, 493–512

Moral foundations theory (Haidt)

A note on learning

▪ People (and algorithms) adapt to others' behavior

▪ Different types of algorithms – Focus only on own actions: Reinforcement learning

– Focus on other players' strategies: belief learning ▪ Fictitious Play (see the sketch below) ▪ Cournot Adjustment ▪ Experience-Weighted Attraction (EWA) learning

https://www.uni-heidelberg.de/md/awi/forschung/lecture_belief_based_learning.pdf
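A minimal fictitious-play sketch, one of the belief-learning algorithms listed above: each player best-responds to the empirical frequency of the opponent's past actions. The matching-pennies payoffs and uniform prior are illustrative choices:

```python
import numpy as np

# Fictitious play on matching pennies: U1 is the row player's payoff matrix,
# and the column player earns -U1 (zero-sum).
U1 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])
counts = [np.ones(2), np.ones(2)]   # counts[i] = observed action counts of player i

for t in range(5000):
    belief_about_p1 = counts[0] / counts[0].sum()
    belief_about_p2 = counts[1] / counts[1].sum()
    a1 = int(np.argmax(U1 @ belief_about_p2))      # P1 best-responds to its belief
    a2 = int(np.argmax(-U1.T @ belief_about_p1))   # P2 best-responds to its belief
    counts[0][a1] += 1
    counts[1][a2] += 1

print(counts[0] / counts[0].sum())  # ~[0.5, 0.5]: play approaches the mixed equilibrium
```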

Summary

▪ Game theory describes how people "should" act in social situations (a prescriptive, i.e., normative, theory)

▪ People fail to follow predictions from game theory

▪ One solution is to model social goals (e.g., other-regarding preferences)

▪ People vary in terms of other-regard – Based on individual differences (SVO) – Based on culture – Based on the situation and the nature of their partner – Based on "psychological distance" (animalistic and mechanistic dehumanization) – Based on moral framework

▪ Technology can influence these processes
