May 17

Designing an Ethical Web



The Web, for much of its existence, has evolved with limited attention to its interaction with social and political values. Over the last several years, researchers, practitioners, and policymakers have become increasingly aware that technical designs impact human rights. Issues of security, accessibility, freedom of expression, and privacy have received growing attention in standard-setting bodies, and the technical choices of companies, as well as their business models and policies, are being closely scrutinized for their interaction with human rights and other public interests. In this track, we'll hear from a diverse set of speakers about the impact of technical design (standards, implementations, and interfaces) on human rights, about current efforts and pressing challenges, and about visions for building a more just and ethical web.

Organizers

Sharad Goel • Stanford University

Deirdre K. Mulligan • UC Berkeley School of Information

Schedule

Opening remarks

10:30 - 10:35

Invited talk

Towards Transparency in AI: Methods and Challenges

Timnit Gebru • Google


Automated decision-making tools are currently used in high-stakes scenarios. From natural language processing tools used to automatically determine one’s suitability for a job, to health diagnostic systems trained to determine a patient’s outcome, machine learning models are used to make decisions that can have serious consequences for people’s lives. In spite of the consequential nature of these use cases, vendors of such models are not required to perform specific tests showing the suitability of their models for a given task, nor are they required to provide documentation describing the characteristics of their models or to disclose the results of algorithmic audits showing that certain groups are not unfairly treated. I will show some examples examining the dire consequences of basing decisions entirely on machine learning based systems, and discuss work on auditing and exposing the gender and skin-tone bias found in commercial gender classification systems. I will end with the concepts of datasheets for datasets and model cards for model reporting, which standardize information about datasets and pre-trained models in order to push the field as a whole towards transparency and accountability. Recently, we have seen many powerful entities in academia and industry announce initiatives related to AI ethics. I will spend some time in this talk discussing how we can learn from the mistakes and evolution of other disciplines that have performed, and continue to perform, what some call parachute research: research that uses the pain of marginalized communities without centering their voices or benefiting them.

Timnit Gebru is a research scientist on the Ethical AI team at Google and recently finished her postdoc in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research, New York. Prior to that, she was a PhD student in the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and in working on computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. She is currently studying the ethical considerations underlying any data mining project, and methods of auditing and mitigating bias in sociotechnical systems. As a cofounder of the group Black in AI, she works both to increase diversity in the field and to reduce the negative impacts of racial bias in training data used for human-centric machine learning models.

10:35 - 11:05

Invited talk

When Users Control the Algorithms: Values Expressed in Practices on Twitter

Jenna Burrell • Associate Professor, UC Berkeley


Recent interest in ethical AI has brought a slew of values, including fairness, into conversations about technology design. Research in the area of algorithmic fairness tends to be rooted in questions of distribution that can be subject to precise formalism and technical implementation. We seek to expand this conversation to include the experiences of people subject to algorithmic classification and decision-making. By examining tweets about the "Twitter algorithm," we consider the wide range of concerns and desires Twitter users express. We find that a concern with fairness (narrowly construed) is present, particularly in the ways users complain that the platform enacts a political bias against conservatives. However, we find another important category of concern, evident in attempts to exert control over the algorithm. Twitter users who seek control do so for a variety of reasons, many well justified. We argue for better and clearer definitions of what constitutes legitimate and illegitimate control over algorithmic processes, and for supporting users who wish to enact their own collective choices.

Jenna Burrell is an Associate Professor in the School of Information at UC Berkeley and co-director of the Algorithmic Fairness and Opacity Working Group. Her first book, Invisible Users: Youth in the Internet Cafes of Urban Ghana (The MIT Press), came out in May 2012, and she is currently working on a second book about rural communities that host critical Internet infrastructure such as fiber-optic cables and data centers. She has a PhD in Sociology from the London School of Economics. Her research focuses on how marginalized communities adapt digital technologies to meet their needs and to pursue their goals and ideals.

11:05 - 11:35

Plenary

Engineers and the Just Web

Deirdre K. Mulligan • UC Berkeley School of Information


Contrary to today's portrayal of engineers as ethically deficient, in this talk I highlight Internet architects' and engineers' decades-long concern with the social and political values embedded in their technical choices. While Internet standard-setting organizations such as the W3C and the IETF initially eschewed policy and politics, over time they accepted that protocols embed values, and they developed tools and methods for considering the impact of protocols on societal goals. Computer science research on values such as privacy and fairness has exploded. And, perhaps most dramatically, engineers within companies are opposing the use of systems they design to undermine human rights. At least some engineers are interested in protecting human rights through their design and engineering choices. But building technical standards, products, and socio-technical systems that align with human rights is a complicated task. It requires identifying relevant rights, prioritizing among them, and figuring out how best to distribute responsibility for protecting them across human and machine components, and across public and private actors. In this talk, I outline the guidance and support that human rights law offers engineers, and I offer a conceptual model for reasoning about the distribution of responsibility for protecting rights in socio-technical systems consistent with norms of democratic governance.

Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty member of the Hewlett-funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries conducted with UC Berkeley Law Professor Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. She is a member of the Defense Advanced Research Projects Agency's Information Science and Technology study group (ISAT) and a member of the National Academy of Sciences Forum on Cyber Resilience.

11:35 - 12:30

Lunch

12:30 - 14:00

Panel

The Role of Technical Standards for Human Rights

Panelists:

Alan Davidson • VP of Global Policy, Trust and Security at Mozilla

Arvind Narayanan • Associate Professor of Computer Science, Princeton University

Katie Shilton • Associate Professor, University of Maryland

14:00 - 15:20

Closing remarks

15:20 - 15:30
