Big Data, Data Science, and Civil Rights

Solon Barocas, Elizabeth Bradley, Vasant Honavar, Foster Provost

Research output: Book/Report › Commissioned report


Abstract

Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determines how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness, in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well.
Original language: Undefined/Unknown
Publisher: Computing Community Consortium
State: Published - Jun 9, 2017

Cite this

Barocas, S., Bradley, E., Honavar, V., & Provost, F. (2017). Big Data, Data Science, and Civil Rights. Computing Community Consortium.
Note: A Computing Community Consortium (CCC) white paper, 8 pages.
