The Risk of Digital Discrimination: Exploring AI Bias

Friday September 18, 2020

AI applications are ubiquitous – and so is their potential to exhibit unintended bias. Algorithmic bias, automation bias, and algorithm aversion all plague the human-AI partnership, eroding trust between people and machines that learn.

But can bias be eradicated from AI?

AI systems learn to make decisions based on training data, which can include biased human decisions and reflect historical or social inequities, resulting in algorithmic bias. The situation is exacerbated when employees uncritically accept the decisions made by their artificial partners. Equally problematic is when workers categorically mistrust these decisions.
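The mechanism is easy to demonstrate: a model fit to historical decisions will faithfully reproduce whatever disparities those decisions contain. Below is a minimal sketch using entirely synthetic data; the groups, approval rates, and the simple majority-vote "model" are all assumptions for illustration, not a real system.

```python
import random

# Hypothetical illustration: synthetic "historical hiring" data in which
# equally qualified candidates from group B were approved less often.
# The groups and rates are invented for this sketch.
random.seed(0)

def historical_decision(group):
    # Biased past process: group A approved ~70% of the time, group B ~40%.
    return random.random() < (0.7 if group == "A" else 0.4)

data = [(g, historical_decision(g)) for g in ["A", "B"] * 5000]

def train_majority_model(rows):
    # A naive "model" that predicts the majority past outcome per group.
    # Trained only on the biased labels, it inherits the disparity directly.
    counts = {}
    for group, approved in rows:
        yes, total = counts.get(group, (0, 0))
        counts[group] = (yes + approved, total + 1)
    return {g: (yes / total) > 0.5 for g, (yes, total) in counts.items()}

model = train_majority_model(data)
print(model)  # the learned rule approves group A and rejects group B
```

Nothing in the training step is "wrong" in a technical sense; the model optimizes faithfully against its labels. The bias enters through the data, which is why auditing training data matters as much as auditing the algorithm.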

Our panel of industry and academic leaders shared their technological, legal, organizational and social expertise to answer the questions raised by emerging artificial intelligence capabilities.

Dr. Fay Cobb Payton

Professor of Information Systems/Technology, North Carolina State University; Program Director, Division of Computer and Network Systems, National Science Foundation


Timnit Gebru

Research Scientist and Co-lead, Ethical Artificial Intelligence Team, Google; Co-founder, Black in AI

Brenda Leong

Senior Counsel and Director of Artificial Intelligence and Ethics, Future of Privacy Forum

Professor Mohammad Jarrahi

Associate Professor, School of Information and Library Science, University of North Carolina at Chapel Hill

Chris Wicher

AI Research Fellow, Kenan Institute of Private Enterprise; former Director of AI Research, KPMG AI Center of Excellence; former Vice President, Watson Engineering, IBM