Week 11: Fairness vs. Social Welfare

DSAN 5450: Data Ethics and Policy
Spring 2024, Georgetown University

Jeff Jacobs

jj1088@georgetown.edu

Wednesday, April 3, 2024

Utility \(\rightarrow\) Social Welfare

Externalities

| Songs | Jeef | Keef | Total |
|------:|-----:|-----:|------:|
| 0     | 0    | 0    | 0     |
| 1     | 13   | -2   | 11    |
| 2     | 18   | -6   | 12    |
| 3     | 24   | -13  | 11    |
| 4     | 28   | -20  | 8     |
| 5     | 30   | -42  | -12   |
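The table can be checked directly: Jeef's privately optimal choice ignores the externality he imposes on Keef, while the socially optimal choice maximizes total utility. A minimal sketch using the numbers above:

```python
# Utility table from the slide: each additional song Jeef plays
# imposes a growing negative externality on Keef.
jeef = {0: 0, 1: 13, 2: 18, 3: 24, 4: 28, 5: 30}
keef = {0: 0, 1: -2, 2: -6, 3: -13, 4: -20, 5: -42}

# Jeef's privately optimal choice ignores the externality...
jeef_opt = max(jeef, key=lambda s: jeef[s])
# ...while the socially optimal choice maximizes total utility.
social_opt = max(jeef, key=lambda s: jeef[s] + keef[s])

print(jeef_opt)    # 5: Jeef alone would play all 5 songs
print(social_opt)  # 2: total utility peaks at 18 - 6 = 12
```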

So What’s the Issue?

  • These utility values are not observed
  • If we try to elicit them, both Jeef and Keef have strategic incentives to misreport (each exaggerating in his own direction)
  • Jeef maximizes his own utility by reporting \(u_j(s) = \infty\)
  • Keef maximizes his own utility by reporting \(u_k(s) = -\infty\)
  • (…Second price auctions tho)

Now with Scarce Resources

  • In a given week, Jeef and Keef have 14 meals and 7 aux hours to divide between them

\[ \begin{align*} \max_{m_1,m_2,a_1,a_2} \ & W(u_1(m_1,a_1), u_2(m_2,a_2)) \\ \text{s.t. } \ & m_1 + m_2 \leq 14 \\ & a_1 + a_2 \leq 7 \end{align*} \]

  • Let’s assume \(u_i(m_i, a_i) = m_i + a_i\) for both
  • \(\Rightarrow\) One solution: \(m_1 = 14, m_2 = 0, a_1 = 7, a_2 = 0\)
  • \(\Rightarrow\) Another: \(m_1 = 0, m_2 = 14, a_1 = 0, a_2 = 7\)
  • Who decides? Any decision implies welfare weights \(\omega_1, \omega_2\) (with \(\omega_1 + \omega_2 = 1\))
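The point about implied weights can be made concrete: with the linear utilities assumed above, maximizing the weighted sum \(W = \omega_1 u_1 + \omega_2 u_2\) always lands at a corner. A brute-force sketch (function names are mine, not from the slides):

```python
def u(m, a):
    # Assumed linear utility from the slide: u_i(m_i, a_i) = m_i + a_i
    return m + a

def best_allocation(w1, meals=14, aux=7):
    """Enumerate integer allocations and maximize W = w1*u1 + w2*u2."""
    w2 = 1 - w1
    return max(
        ((m1, meals - m1, a1, aux - a1)
         for m1 in range(meals + 1) for a1 in range(aux + 1)),
        key=lambda x: w1 * u(x[0], x[2]) + w2 * u(x[1], x[3]),
    )

# With linear utilities the optimum is a corner solution: whoever
# carries the larger welfare weight gets everything.
print(best_allocation(w1=0.6))  # (14, 0, 7, 0): all to person 1
print(best_allocation(w1=0.4))  # (0, 14, 0, 7): all to person 2
```

Either "solution" from the bullets above is optimal for *some* choice of weights, which is exactly why the choice of \(\omega_1, \omega_2\) is the real (political) decision.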

The Dark Secret Behind Fairness in AI

The Conveniently-Left-Out Detail

  • Recall predictive parity:

\[ \mathbb{E}[Y \mid D = 1, A = 1] = \mathbb{E}[Y \mid D = 1, A = 0] \]

  • Who decides which \(Y\) to pick?
  • Answer: Whoever picks the objective function!
  • For profit-maximizing firm: \(\mathbb{E}[D (Y - c)]\)
  • For welfare-maximizing society: \(W(u_1(D), \ldots, u_n(D))\)
  • Do these align? Sometimes yes, often not (affirmative action!)
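The two quantities in these bullets can both be computed on data. A sketch on synthetic data (the data-generating process is invented purely for illustration; only the two formulas come from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for illustration only: outcome Y, decision D,
# group A, and a per-decision cost c.
n, c = 10_000, 0.5
A = rng.integers(0, 2, n)
Y = rng.binomial(1, 0.4 + 0.2 * A)   # base rates differ by group
D = rng.binomial(1, 0.3 + 0.3 * Y)   # decisions correlate with Y

def predictive_parity_gap(Y, D, A):
    """| E[Y | D=1, A=1] - E[Y | D=1, A=0] |: zero under predictive parity."""
    return abs(Y[(D == 1) & (A == 1)].mean() - Y[(D == 1) & (A == 0)].mean())

def firm_profit(Y, D, c):
    """E[D (Y - c)]: the profit-maximizing firm's objective."""
    return (D * (Y - c)).mean()

print(predictive_parity_gap(Y, D, A))  # nonzero: parity fails here
print(firm_profit(Y, D, c))
```

Nothing forces the decision rule that maximizes the second quantity to drive the first to zero; that tension is the point of the slide.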

Remaining (Challenging) Details

  • Who gets included in the SWF?
  • People in one community? One state? One country?
  • People in the future?
  • Animals?

Let’s Talk Projects!