Week 1: Introduction to the Course

DSAN 5450: Data Ethics and Policy
Spring 2024, Georgetown University

Author: Jeff Jacobs

Published: January 17, 2024


Who Am I? Why Is Georgetown Having Me Teach This?

Prof. Jeff Introduction!

  • Born and raised in NW DC → high school in Rockville, MD
  • University of Maryland: Computer Science, Math, Economics (2008-2012)

Grad School

  • Studied abroad in Beijing (Peking University/北大) → internship with Huawei in Hong Kong (HKUST)
  • Stanford for MS in Computer Science (2012-2014)
  • Research Economist at UC Berkeley (2014-2015)

  • Columbia (NYC) for PhD[+Postdoc] in Political Science (2015-2023)

Dissertation (Political Science + History)

“Our Word is Our Weapon”: Text-Analyzing Wars of Ideas from the French Revolution to the First Intifada

Why Is Georgetown Having Me Teach This?

  • Quanty things, but then PhD major was Political Philosophy (concentration in International Relations)
  • What most interested me: unraveling history. It's easy to get lost in "present-day" details of, e.g., debiasing algorithms and fairness in AI, but these questions go back literally thousands of years!
  • Political philosophers distinguish the "ancients" from the "moderns" based on a crucial shift in perspective: the ancients sought to perfect human nature, while moderns like Rousseau (1762) "took men [sic] as they are, and laws as they could be".
import pandas as pd
import plotly.express as px
import plotly.io as pio
pio.renderers.default = "notebook"

# Colorblind-friendly palette, needed by color_discrete_map below
# (the original presumably defines cb_palette elsewhere; Okabe-Ito hues assumed here)
cb_palette = ['#E69F00', '#56B4E9', '#009E73']

year_df = pd.DataFrame({
  'field': ['Math<br>(BS)','CS<br>(BS,MS)','Pol Phil<br>(PhD Pt 1)','Econ<br>(BS+Job)','Pol Econ<br>(PhD Pt 2)'],
  'cat': ['Quant','Quant','Humanities','Social Sci','Social Sci'],
  'yrs': [4, 6, 3, 6, 5]
})
fig = px.sunburst(
    year_df, path=['cat','field'], values='yrs',
    width=450, height=400, color='cat',
    color_discrete_map={'Quant': cb_palette[0], 'Humanities': cb_palette[1], 'Social Sci': cb_palette[2]}
)
# Disable hover tooltips entirely
fig.update_traces(hovertemplate=None, hoverinfo='skip')
# Tight margins; see https://plotly.com/python/creating-and-updating-figures/
fig.update_layout(margin=dict(t=0, l=0, r=0, b=0))
fig.show()
Figure 1: Years spent questing in various dungeons of academia
  • But is separation of ethics from politics possible? (Bowles 2016) Should we accept “human nature” as immutable/eternal? My answer: yes AND no simultaneously…

Dialectics

My Background/Biases

  • Raised in religious Jewish, right-wing (Revisionist Zionist) Republican environment
  • “Encouraged” to emigrate to Israel for IDF service, but after learning history I renounced citizenship etc., family no longer big fans of me (Traumatic and scary to admit, ngl 🙈)
  • 2015-present: Teach CS + design thinking in refugee camps in West Bank and Gaza each summer (Code for Palestine)
  • Metaethics: Learn about the world, challenge+update prior beliefs (Bayes’ rule!); I hope to challenge+update them throughout semester, with your help 🙂

On the One Hand…

On the Other Hand…

Remembering Why It Matters

Rules of Thumb

  • Ask questions about power, about inequities and especially about structures that give rise to them!
  • “Philosophers have hitherto only tried to understand the world; the point, however, is to change it.” (Marx 1845)
  • Dialectical implication: the more we understand it the better we’ll be at changing it

Ethics as an Axiomatic System

Axiomatics

  • Popular understanding of math: Deals with Facts, statements are true or false
    • Ex: \(1 + 1 = 2\) is “true”
  • Reality: no statement in math is absolutely true; we can only prove conditional statements!
  • We cannot prove atomic statements \(q\), only implicational statements: \(p \implies q\) for some axiom(s) \(p\).
    • \(1 + 1 = 2\) is indeterminate without definitions of \(1\), \(+\), \(=\), and \(2\)!
    • (Easy counterexample for math/CS majors: \(1 + 1 = 0\) in \(\mathbb{Z}_2\))
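The \(\mathbb{Z}_2\) counterexample can be checked directly; a minimal Python sketch (the function name `add_mod2` is just illustrative):

```python
def add_mod2(a: int, b: int) -> int:
    """Addition in Z_2: every sum is reduced modulo 2."""
    return (a + b) % 2

# The "same" symbols under different axioms yield a different "fact":
print(add_mod2(1, 1))  # 0, not 2
```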

Steingart (2023)

Example: \(1 + 1 = 2\)

Whitehead and Russell (1910), p. 83

Proving \(1 + 1 = 2\)

(A non-formal proof that still captures the gist:)

  • Axiom 1: There is a type of thing that can hold other things, which we’ll call a set. We’ll represent it like: \(\{ \langle \text{\textit{stuff in the set}} \rangle \}\).
  • Axiom 2: Start with the set with nothing in it, \(\{\}\), and call it “\(0\)”.
  • Axiom 3: If we put this set \(0\) inside of an empty set, we get a new set \(\{0\} = \{\{\}\}\), which we’ll call “\(1\)”.
  • Axiom 4: If we put this set \(1\) inside of another set, we get another new set \(\{1\} = \{\{\{\}\}\}\), which we’ll call “\(2\)”.
  • Axiom 5: This operation (creating a “next number” by placing a given number inside an empty set) we’ll call succession: \(S(x) = \{x\}\)
  • Axiom 6: We’ll define addition, \(a + b\), as applying this succession operation \(S\) to \(a\), \(b\) times. Thus \(a + b = \underbrace{S(S(\cdots (S(}_{b\text{ times}}a))\cdots ))\)
  • Result: (Axioms 1-6) \(\implies 1 + 1 = S(1) = S(\{\{\}\}) = \{\{\{\}\}\} = 2. \; \blacksquare\)
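The six axioms translate almost line-for-line into code. A sketch using Python's `frozenset` as the stand-in for sets (Zermelo-style numerals, \(S(x) = \{x\}\), matching the construction above), with the simplifying assumption that the second addend is given as a plain count rather than a set-numeral:

```python
ZERO = frozenset()             # Axiom 2: 0 = {}

def S(x):                      # Axiom 5: succession, S(x) = {x}
    return frozenset([x])

ONE = S(ZERO)                  # Axiom 3: 1 = {{}}
TWO = S(ONE)                   # Axiom 4: 2 = {{{}}}

def add(a, b):
    # Axiom 6: a + b means "apply S to a, b times"
    for _ in range(b):
        a = S(a)
    return a

print(add(ONE, 1) == TWO)      # True: 1 + 1 = 2, given Axioms 1-6
```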

How Is This Relevant to Ethics?

(Thank you for bearing with me on that 😅)

  • Just as mathematicians slowly came to the realization that

\[ \textbf{mathematical results} \neq \textbf{(non-implicational) truths} \]

  • I hope to help you see how

\[ \textbf{ethical conclusions} \neq \textbf{(non-implicational) truths} \]

  • When someone says \(1 + 1 = 2\), you are allowed to question them, and ask, “On what basis? Please explain…”.
    • Here the only valid answer is a collection of axioms which entail \(1 + 1 = 2\)
  • When someone says Israel has the right to defend itself, you are allowed to question them, and ask, “On what basis? Please explain…”
    • Here the only valid answer is an ethical framework which entails that Israel has the right to defend itself.

Axiomatic Systems: Statements Can Be True And False

  • Let \(T\) be the sum of the interior angles of a triangle. We’re taught \(T = 180^\circ\) is a “rule”
  • Euclid’s Fifth Postulate \(P_5\): Given a line and a point not on it, exactly one line parallel to the given line can be drawn through the point.
\(P_5 \implies T = 180^\circ\)
(Euclidean Geometry)
\(\neg P_5 \implies T \neq 180^\circ\)
(Non-Euclidean Geometry)
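The non-Euclidean branch can be checked numerically: on the unit sphere (a surface where \(P_5\) fails), the geodesic triangle covering one octant has three \(90^\circ\) angles. A sketch, assuming NumPy:

```python
import numpy as np

def spherical_angle(A, B, C):
    """Angle at vertex A of the geodesic triangle ABC on the unit sphere."""
    u = B - np.dot(A, B) * A   # project B into the tangent plane at A
    v = C - np.dot(A, C) * A   # project C into the tangent plane at A
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# North pole plus two equator points 90 degrees apart: one octant of the sphere
A, B, C = np.eye(3)[2], np.eye(3)[0], np.eye(3)[1]
total = sum(spherical_angle(*t) for t in [(A, B, C), (B, C, A), (C, A, B)])
print(total)  # 270, not 180: T depends on whether P5 holds
```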

Ethical Systems: Promise-Keeping

  • Scenario: You just baked a pie, and you promised your friend you’d give them the pie. You’re walking over to the friend’s house to give them the pie.
  • Suddenly, you turn the corner to encounter a hostage situation: the hostage-taker is going to kill their hostage unless someone gives them a pie in the next 30 seconds
  • Do you give the hostage-taker the pie?
Consequentialist Ethics \(\implies\) Yes
  • To be ethical is to weigh consequences of your actions
  • The positive consequences of giving the pie to the hostage-taker (saving a life) outweigh the negative consequences (breaking your promise to your friend)
  • (Ex: Utilitarianism, associated with British philosopher Jeremy Bentham)
Deontological Ethics \(\implies\) No
  • To be ethical is to live by rules which you would want everyone to follow.
  • As a rule (a “categorical imperative”), you must not break promises. (Breaking this rule \(\implies\) others can also “pick and choose” when to honor promises to you)
  • (Ex: Kantian Ethics, associated with German philosopher Immanuel Kant)

Making and Evaluating Ethical Arguments

Descriptive vs. Normative

bin Laden (2005)
Descriptive Statement: "Bin Laden attacked us because we had been bombing Iraq for 10 years"
  • Descriptively True (empirically verifiable)
Normative Statement: "Bin Laden attacked us because we had been bombing Iraq for 10 years, and that is a good justification"
  • Normatively True (entailed by axioms + descriptive facts) in some ethical systems; Normatively False (not entailed by axioms + descriptive facts) in others

The Is-Ought Distinction

Hume on Is vs. Ought (Hume 1739)
"the author proceeds for some time in the ordinary way of reasoning … when, of a sudden, I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence."
Descriptive (Is) → Normative (Ought)
"Grass is green" (true) → "Grass ought to be green" (?)
"Grass is blue" (false) → "Grass ought to be blue" (?)

What Happens When We Confuse The Two?

  • Makes it impossible to “cross the boundary” between your own and others’ beliefs
  • Collective welfare: Bad on its own terms (see: wars, racism, etc.)
  • Self-interest: Prevents us from convincing other people of our arguments

Geertz (1973)

Collective vs. Self-Interest

  • Good for a collection of people \(\;\not\Rightarrow\;\) good for each individual person! (😰)
  • \(p\) = Unions improve everyone’s workplace conditions, whether or not they pay dues
  • \(q\) = Union dues are voluntary
  • \(p \wedge q \implies\) I can obtain benefits of unions without paying
  • \(\implies\) Individually rational to not pay dues
  • (Think also about how this applies to climate change policy) 🤔
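The free-rider logic above can be made concrete with toy payoffs (the numbers \(B\) and \(d\) are hypothetical, chosen only so that dues cost less than the union's benefit; this is a sketch of Olson's argument, not a formula from the slides):

```python
B, d = 100, 20   # B = benefit of union-won conditions, d = dues, with d < B

def payoff(i_pays: bool, union_exists: bool) -> int:
    # In a large group, one member's dues alone don't change
    # whether the union exists, so the two arguments are independent
    return (B if union_exists else 0) - (d if i_pays else 0)

# Not paying is a dominant strategy for individual i:
assert payoff(False, True) > payoff(True, True)      # 100 > 80
assert payoff(False, False) > payoff(True, False)    #   0 > -20
# ...but if everyone reasons this way, no one funds the union,
# and all end up with 0 rather than the 80 of a dues-paying member.
```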

Olson (1965)

Modeling Individual vs. Societal Outcomes

  • Individual Perspective: each individual \(i\) chooses whether or not to pay union dues, and (as above) not paying is individually rational

\(\implies\) Social Outcome: No Union

Schelling (1978)

Takeaway for Policy Whitepapers

  • You cannot (just) say, “doing \(x\) will be better for society”
  • You must also justify benefits to individuals, or at minimum, the individual organization and its stakeholders!
  • (Is this a normative or descriptive claim?)

Ethical Issues in Data Science

  • Data Science for Who?
  • Operationalization
  • Fair Comparisons
  • Implementation

Data Science for Who?

  • What are the processes by which data is measured, recorded, and distributed?

The Library of Missing Datasets. From D’Ignazio and Klein (2020)

Example: Measuring “Freedom” and “Human Rights”

Operationalization

  • Think about claims commonly made on the basis of “data”:
    • Markets create economic prosperity
    • A glass of wine in the evening prevents cancer
    • Policing makes communities safer
  • How exactly are “prosperity”, “preventing cancer”, “policing”, “community safety” being measured?

Stiglitz, Sen, and Fitoussi (2010)

What Is Being Compared?

  • Are countries with 1 billion people comparable to countries with 10 million people?
  • Are countries which were colonized comparable to the colonizing countries?
  • When did the colonized countries gain independence?

Drèze and Sen (1991)

Implementation

From D’Ignazio and Klein (2020), Ch. 6 (see also)

From Lerman and Weaver (2014)

Fairness… 🧐

Figure 2: From Lily Hu, Direct Effects: How Should We Measure Racial Discrimination?, Phenomenal World, 25 September 2020
Figure 3: From Kasy and Abebe (2021)

…And INVERSE Fairness 🤯

From Machine Learning What Policymakers Value (Björkegren, Blumenstock, and Knight 2022)

Ethical Issues in Applying Data Science

Facial Recognition Algorithms

Facia.ai (2023)

Wellcome Collection (1890)

Ouz (2023)

Wang and Kosinski (2018)

Large Language Models

Figure 4: From Schiebinger et al. (2020)
Figure 5: From DeepLearning.AI’s Deep Learning course

Military and Police Applications of AI

Ayyub (2019)

McNeil (2022)

Your Job: Policy Whitepaper

  • So… is technology/data science/machine learning “bad” in and of itself, or a tool to be wielded for both “good” and “bad” uses?
  • How can we curtail uses of some kinds and/or encourage other uses?
  • If only we had some sort of… institution… for governing its use in society… some sort of… govern… ment?

From Week 7 Onwards, You Work At A Think Tank

Morozov (2015)

From Ames (2014)

References

Ames, Mark. 2014. “The Techtopus: How Silicon Valley’s Most Celebrated CEOs Conspired to Drive down 100,000 Tech Engineers’ Wages,” January. http://web.archive.org/web/20200920042121/https://pando.com/2014/01/23/the-techtopus-how-silicon-valleys-most-celebrated-ceos-conspired-to-drive-down-100000-tech-engineers-wages/.
Ayyub, Rami. 2019. “App Aims to Help Palestinian Drivers Find Their Way Around Checkpoints.” The Times of Israel, August. https://www.timesofisrael.com/app-aims-to-help-palestinian-drivers-find-their-way-around-checkpoints/.
bin Laden, Osama. 2005. Messages to the World: The Statements of Osama Bin Laden. Verso Books.
Björkegren, Daniel, Joshua E. Blumenstock, and Samsun Knight. 2022. “(Machine) Learning What Policies Value.” arXiv. https://doi.org/10.48550/arXiv.2206.00727.
Bowles, Samuel. 2016. The Moral Economy: Why Good Incentives Are No Substitute for Good Citizens. Yale University Press.
D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. MIT Press.
Drèze, Jean, and Amartya Sen. 1991. “China and India.” In Hunger and Public Action, 0. Oxford University Press. https://doi.org/10.1093/0198283652.003.0011.
Facia.ai. 2023. “Facial Recognition Helps Vendors in Healthcare.” Facia.ai. https://facia.ai/blog/facial-recognition-healthcare/.
Geertz, Clifford. 1973. The Interpretation Of Cultures. Basic Books.
Hume, David. 1739. A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning Into Moral Subjects; and Dialogues Concerning Natural Religion. Longmans, Green.
Kasy, Maximilian, and Rediet Abebe. 2021. “Fairness, Equality, and Power in Algorithmic Decision-Making.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 576–86. FAccT ’21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445919.
Lerman, Amy E., and Vesla M. Weaver. 2014. Arresting Citizenship: The Democratic Consequences of American Crime Control. University of Chicago Press.
Marx, Karl. 1845. Thesen über Feuerbach. Stuttgart: J. H. W. Dietz. https://de.wikisource.org/wiki/Thesen_%C3%BCber_Feuerbach.
McNeil, Sam. 2022. “Israel Deploys Remote-Controlled Robotic Guns in West Bank.” AP News, November. https://apnews.com/article/technology-business-israel-robotics-west-bank-cfc889a120cbf59356f5044eb43d5b88.
Morozov, Evgeny. 2015. “Socialize the Data Centres!” New Left Review, no. 91 (February): 45–66.
Olson, Mancur. 1965. The Logic of Collective Action. Harvard University Press.
Ouz. 2023. “Google Pixel 8 Face Unlock Vulnerability Discovered, Allowing Others to Unlock Devices.” Gizmochina. https://www.gizmochina.com/2023/10/16/google-pixel-8-face-unlock/.
Rousseau, Jean-Jacques. 1762. The Social Contract. Geneva: J. M. Dent.
Schelling, Thomas C. 1978. Micromotives and Macrobehavior. Norton.
Schiebinger, Londa, Ineke Klinga, Hee Young Paik, Inés Sánchez de Madariaga, Martina Schraudner, and Marcia Stefanick. 2020. “Machine Translation: Gendered Innovations.” http://genderedinnovations.stanford.edu/case-studies/nlp.html#tabs-2.
Steingart, Alma. 2023. Axiomatics: Mathematical Thought and High Modernism. University of Chicago Press.
Stiglitz, Joseph E., Amartya Sen, and Jean-Paul Fitoussi. 2010. Mismeasuring Our Lives: Why GDP Doesn’t Add Up. The New Press.
Wang, Yilun, and Michal Kosinski. 2018. “Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images.” Journal of Personality and Social Psychology 114 (2): 246–57. https://doi.org/10.1037/pspa0000098.
Wellcome Collection. 1890. “Composite Photographs: "The Jewish Type".” https://wellcomecollection.org/works/ngq29vyw.
Whitehead, Alfred North, and Bertrand Russell. 1910. Principia Mathematica. Cambridge University Press.