Keynote: Data Aren’t Always What They Seem: Examining Problems in Data Infrastructure

Presented by danah boyd

There are always invisible politics to data. What data we choose to collect, how we structure our data, how we clean our data. It’s easy to think of data as “real” or “fake,” but data are much messier than that. Drawing on her ethnographic work exploring how U.S. census data are made and then rendered legitimate, danah will peel back the layers of democracy’s data infrastructure to speak more generally about the politics of data. She will argue that those invested in data science and AI must avoid propagating statistical illusions (let alone fantasies that data will solve all.the.problems™). Uncertainty is at the heart of data science and must be negotiated, not obfuscated. Understanding the limitations and politics of data can help strengthen the data ecosystem.

Keynote: Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People

Presented by Sorelle Friedler

Last week, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights. It is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence. Developed through extensive consultation with the American public, these principles are a blueprint for building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy. In this talk, we will discuss the Blueprint for an AI Bill of Rights and its Technical Companion, which provides examples and concrete steps for communities, industry, governments, and others to take in order to build these protections into policy, practice, or the technological design process.

Panel 1: AI governance frameworks: challenges and opportunities for equity

Coordinated by Laura Cabrera, Associate Professor of Engineering Science and Mechanics, and Philosophy; and Dorothy Foehr Huck and J. Lloyd Huck Career Chair in Neuroethics

About: The increasing use of artificial intelligence in different applications and domains creates substantial ethical, social, and legal challenges. Various frameworks for the governance of AI have been suggested to help navigate these challenges. In this interdisciplinary panel discussion, panelists will explore and discuss these governance mechanisms, with particular attention given to the challenges of — and opportunities for — integrating equity considerations.

Panelists include:

Victoria Dean, Ph.D. Student, Robotics Institute, Carnegie Mellon University
danah boyd, Partner Researcher, Microsoft, and Founder, Data & Society
S. Shyam Sundar, James P. Jimirro Professor of Media Effects, Penn State

Panel 1 is co-organized by IEEE Tech Ethics.

Panel 2: The Future of Compute

Coordinated by Todd Price, Corporate Relations Director for Research, ICDS

About: ICDS will moderate an industry-led discussion highlighting where companies see their future in the high-performance computing (HPC) arena. Panelists will discuss their clients’ needs and what companies need from researchers to help get there.


Panelists include:

Subramanian Kartik, Vice President of Systems Engineering, VAST Data
Matthew Klos, Senior Solutions Architect, IBM
Matt McIntire, Senior Technical Staff Member, AMD
Ben Crist, SE Manager, WEKA

Panel 3: Fireside Chat: Responsible AI & AI Bill of Rights

Co-coordinated by Hadi Hosseini, Assistant Professor of Information Sciences and Technology, and Jennifer Wagner, Assistant Professor of Law, Policy, and Engineering

About: Advances in AI have accelerated technological progress in fields ranging from healthcare and education to business and military applications. The astonishing pace of AI’s progress and adoption calls for immediate attention to the responsible use of AI-powered technologies and systems, which in turn requires discussion of fairness, privacy, access, and possible restrictions on these technologies. In this fireside chat, we will sit down with Vincent Conitzer to discuss technical challenges, best practices, ethical considerations, and principles for the design, development, and deployment of AI technologies, and delve into self-enforced rules and societal guidelines for instituting responsible practices and perhaps synthesizing a universal AI bill of rights.


Vincent Conitzer, Professor of Computer Science, Carnegie Mellon University

Questions for the fireside chat may be submitted through October 7.

Panel 4: AI/ML for prediction and mitigation of natural disasters

Coordinated by Xiaofeng Liu, Associate Professor of Civil and Environmental Engineering, and ICDS Co-Hire

About: Natural disasters, such as floods, hurricanes, earthquakes, volcanic eruptions, and wildfires, can cause devastating damage and losses. Predicting their occurrence and severity enables better planning and response. Increasingly available high-quality, high-resolution data, in conjunction with innovative AI/ML technologies, provide tremendous opportunities to gain insight and make better predictions. This panel will discuss various aspects of using AI/ML for natural disaster prediction.

Panelists include:

Guido Cervone, Professor of Geography and Meteorology and Atmospheric Science, and Associate Director, ICDS
Matthew Farthing, Research Hydraulic Engineer, U.S. Army Corps of Engineers
Steven Greybush, Associate Professor of Meteorology and Atmospheric Science, and ICDS Co-Hire

Special Distinguished AI Seminar: “Designing Agents’ Preferences, Beliefs, and Identities”

Abstract: In artificial intelligence, we often think of the systems we design as agents. We generally assume that each agent has a well-defined identity, well-defined preferences over outcomes, and well-defined beliefs about the world. However, when designing agents, we in fact need to specify where the boundaries between one agent and another in the system lie, what objective functions these agents aim to maximize, and to some extent even what belief formation processes they use. What is the right way to do so? As more and more AI systems are deployed in the world, this question becomes increasingly important. In this talk, I will show how it can be approached from the perspectives of decision theory, game theory, social choice theory, and the algorithmic and computational aspects of these fields. (No previous background required.)

Vincent Conitzer, Professor of Computer Science, Carnegie Mellon University

ICDS Symposium Sponsors