Computational Understanding
of Police Officer Respect


Rob Voigt

(now) Northwestern Linguistics

robvoigt@northwestern.edu

with Nicholas P. Camp, Vinodkumar Prabhakaran, William L. Hamilton, Rebecca C. Hetey, Camilla M. Griffiths, David Jurgens,
Dan Jurafsky, and Jennifer L. Eberhardt

Police-Community Interaction


  • Media focus on explosive incidents,
    research focus on outcomes
  • But what's happening on the ground?
  • ~25% of U.S. adults interact with police in a given year
    (Eith and Durose 2011)

Procedural Justice


  • A person treated with respect:
    • has more trust in the officer’s fairness
    • and the fairness of the institution
    • and is more willing to support or
      cooperate with the police
      (Tyler 1990; Tyler and Huo 2002; Mazerolle et al. 2013)
  • Black community members report:
    • more negative experiences with the police
    • and being treated with less respect
      (Huo and Tyler 2001; Peffley and Hurwitz 2010; Epp et al. 2014)

Research Questions



How is respect instantiated linguistically in the police-community interactional context?


Are black community members treated with less respect by officers than white community members are?

Previous Work on Procedural Fairness


  • citizens’ recollection of past interactions
    (Epp et al 2014)
  • researcher observation of officer behavior
    (Mastrofski et al 2009; Dai et al 2011;
    Jonathan-Zamir et al 2015; Mastrofski et al 2016)
  • These methods are invaluable but indirect
  • ... and the presence of a researcher may influence police behavior
    (Mastrofski and Parks, 1990)

This Work: Footage as Data


  • Oakland PD has been wearing body cameras since 2010, usually used only as evidence
  • ... but, a window into everyday behavior!

Data


  • Focus on traffic stops,
    black and white community members
  • 981 stops by 245 officers in April 2014
  • 70% black drivers; 183 hours of footage
  • Professionally transcribed and diarized:
    36,738 officer utterances, 350k+ words

"Thin-slice" Human Social Evaluations


  • Formal, Friendly, Polite, Respectful, Impartial
  • 414 utterances,
    10 raters each
  • High consistency: Cronbach's α = 0.73–0.91
  • White-directed utterances rated higher in all categories
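As a sketch of the consistency check above, Cronbach's α can be computed directly from an (utterances × raters) score matrix; the data below is a synthetic stand-in, not the actual ratings:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_utterances, n_raters) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of raters
    rater_vars = scores.var(axis=0, ddof=1)      # per-rater variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)
```

Perfectly consistent raters (columns that differ only by a constant shift) yield α = 1; values in the 0.73–0.91 range indicate the ten raters largely agreed.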

The Latent Space of Respect



PCA Loadings

Component   Variance  Formal  Friendly  Polite  Respectful  Impartial
Respect       71%      0.27     0.47     0.49      0.47       0.50
Formality     22%      0.91    -0.39    -0.04      0.03      -0.11
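Loadings like these come from a PCA over the five per-utterance rating dimensions. A minimal sketch via SVD, using synthetic stand-in ratings in place of the real 414 × 5 matrix of mean ratings:

```python
import numpy as np

# Hypothetical stand-in for the 414 x 5 matrix of mean ratings
# (Formal, Friendly, Polite, Respectful, Impartial per utterance).
rng = np.random.default_rng(0)
ratings = rng.normal(size=(414, 5))

# Standardize each dimension, then PCA via SVD.
X = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

variance_explained = S**2 / (S**2).sum()  # sorted in decreasing order
loadings = Vt                             # rows = components, cols = rated dimensions
```

On the real ratings, the first component (71% of variance) loads roughly evenly on all five dimensions and is interpreted as Respect; the second (22%) is dominated by Formal and is interpreted as Formality.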

Computational Modeling of Respect


    Use human judgments as training data
    for a machine-learned model of respect
    by operationalizing theories of politeness

  • Linguistic theories of politeness from pragmatics focus on requests
    • Requesting that you do something is face-threatening
      (Goffman 1967; Lakoff 1973; Culpepper 1976; Brown and Levinson 1978)

Features: Operationalizing Politeness


Negative Politeness
(attending to the hearer's freedom of action)

  • Minimize my request
  • Put the imposition on record (vs. ignoring its impact on you)

    e.g. apologizing, gratitude, reassurance ("it's okay"), hedges

Positive Politeness
(attending to the hearer's self-image)

  • Emphasize your value
  • Emphasize my good relationship with you

    e.g. formal vs. informal titles ("sir" vs. "bro"), introductions, mentioning safety

Brown and Ford (1961), Brown and Levinson (1978), Culpepper (1976),
Pennebaker et al. (2007), Prabhakaran et al. (2012), Danescu-Niculescu-Mizil et al. (2013)

Feature Engineering


32 hand-engineered features

  • Lexicons

    formal titles: sir, ma'am, vs. informal titles: bro, dude

  • Regular expressions

    introductions: (i|my name).+officer|officer.+(oakland|opd)

  • Dependency-based rules

    adverbial "just": "just" appears as head of advmod relation

  • More complex functions

    bald commands: sentence starts with bare verb
    (e.g. "stand", "give", "wait")
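A minimal sketch of a few of these feature extractors in Python. The introduction pattern is the regular expression from this slide; the title and verb lists are illustrative subsets, not the paper's actual lexicons, and the bald-command check uses a word list rather than a real part-of-speech tagger:

```python
import re

INTRO_RE = re.compile(r"(i|my name).+officer|officer.+(oakland|opd)", re.I)
FORMAL_TITLES = {"sir", "ma'am"}
INFORMAL_TITLES = {"bro", "dude", "man"}          # illustrative subset
BARE_VERBS = {"stand", "give", "wait", "stop", "turn", "hand"}  # illustrative subset

def extract_features(utterance):
    """Return counts/indicators for a handful of politeness features."""
    tokens = re.findall(r"[\w']+", utterance.lower())
    return {
        "introduction": int(bool(INTRO_RE.search(utterance))),
        "formal_title": sum(t in FORMAL_TITLES for t in tokens),
        "informal_title": sum(t in INFORMAL_TITLES for t in tokens),
        "bald_command": int(bool(tokens) and tokens[0] in BARE_VERBS),
    }
```

Dependency-based rules (e.g. adverbial "just" as the head of an advmod relation) would additionally require a parser such as spaCy, omitted here.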

Computational Model


  • Linear regression predicting Respect
  • log-transformed counts of features per utterance
  • stepwise removal of uninformative features
  • Reasonable R²: 0.258
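The modeling step above can be sketched as ordinary least squares over log-transformed feature counts; the counts and Respect scores here are synthetic stand-ins, and the stepwise feature-removal step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n_utterances, n_features = 1000, 32

# Synthetic per-utterance feature counts and Respect scores.
counts = rng.poisson(1.0, size=(n_utterances, n_features))
respect = rng.normal(size=n_utterances)

X = np.log(1 + counts)                            # log-transformed counts
X = np.column_stack([np.ones(n_utterances), X])   # intercept column

beta, *_ = np.linalg.lstsq(X, respect, rcond=None)
pred = X @ beta

ss_res = ((respect - pred) ** 2).sum()
ss_tot = ((respect - respect.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```

On the real human judgments and features, this style of model reaches R² = 0.258.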

What does the model learn?


Example utterances and model-estimated respect scores:

   0.84  "Sorry to stop you. My name's Officer [name] with the Police Department."
   1.21  "There you go, ma'am. Drive safe, please."
   2.07  "It just says that, uh, you've fixed it. No problem. Thank you very much, sir."
  -0.57  "Where are you guys coming from? Don't lie."
  -1.03  "So let me see that registration stuff, bro."

Results Across the Entire Dataset


  • Estimate respect for all 37k officer utterances
  • Hierarchical mixed-effects model
    • predict respect from contextual and social variables
    • controlling for officer-level and stop-level variation
  • Results - officers are more respectful:
    • with older community members
    • when a citation is issued
    • with white community members
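A hedged sketch of this kind of hierarchical model with statsmodels, on synthetic stand-in data. For brevity it fits only a random intercept per officer; the full analysis also accounts for stop-level variation and further covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data: one row per officer utterance.
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "respect": rng.normal(size=n),
    "driver_age": rng.integers(18, 80, size=n),
    "driver_black": rng.integers(0, 2, size=n),   # 1 = black community member
    "citation": rng.integers(0, 2, size=n),       # 1 = citation issued
    "officer": rng.integers(0, 20, size=n),       # grouping factor
})

# Mixed-effects regression: fixed effects for the contextual/social
# variables, random intercept per officer.
model = smf.mixedlm("respect ~ driver_age + driver_black + citation",
                    df, groups=df["officer"])
result = model.fit()
```

In the fitted model on the real data, the coefficient on community-member race is negative for black drivers even after the officer-level and stop-level controls.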

Respect Through the Interaction


Controls


  • Only “everyday” interactions
    (no arrest, no search)
  • Crime rate in the area
  • Density of businesses in the area
  • Officer race
  • Officer years of experience
  • Severity of the reason for the stop

Computation and Impact


  • Provides concrete strategies for officers
  • Cooperation with Oakland to integrate results into procedural justice training
  • ... and we can measure impact

Intervention Results (preliminary)


  • 315 pre- and
    305 post-training stops
  • 22.7k officer utterances
  • Overall increase in respect post-training; disparity not reduced

Impact and Meaning in Society


  • Many social problems are problems
    of linguistic and social meaning

    How do we interpret one another's behavior?
    ... recognize one another's intentions?
    ... validate one another's personhood?

Computational Linguists are
uniquely positioned to contribute!


Questions?

robvoigt@northwestern.edu
full paper