Is solving bias in artificial intelligence distracting us from the real issues?

Published in: Artificial Intelligence

by Pisana Ferrari – cApStAn Ambassador to the Global Village

“Fixing” the bias problem in AI may simply be a “seductive diversion” from more pressing questions about the technology. Not surprisingly, say the authors of this article, many of those sounding the alarm on bias do so with the blessing and support of the big tech companies. “The endgame is always to fix AI systems, never to use a different system or no system at all.” Accepting this narrative means resigning ourselves to the “normalization of massive data capture, the one-way transfer to technology companies, and the application of automated, predictive solutions to every societal problem.”

A radical reappraisal of who controls the data is long overdue: governments should act to “disincentivize and devalue data hoarding with creative policies, including carefully defined bans, levies, mandated data sharing, and community benefit policies.” More fundamental questions also need to be addressed: “Should we be building these systems at all? Which systems really deserve to be built? Who is best placed to build them? And who decides?” Genuine accountability mechanisms should be set up, external to companies and accessible to citizens and representatives of the public interest. And there must always be the possibility of stopping the use of automated systems with high societal costs, just as there is with any other technology. Food for thought.
“The seductive diversion of solving bias in artificial intelligence”

Author: Julia Powles, Research Fellow at the Information Law Institute, New York University, and 2018 Poynter Fellow at Yale University.

Co-authored with: Helen Nissenbaum, Director, Digital Life Initiative (DLI), Cornell Tech.

Link: https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53

Photo credit: chuttersnap/Unsplash