We Didn’t Prove Prejudice Is True: Why and When Machines Have Human Bias
With Dr. Joanna Bryson
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. In a 2017 article with colleagues Aylin Caliskan and Arvind Narayanan, I showed that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names.
In the abstract to our article, we assert that “Our methods hold promise for identifying and addressing sources of bias in culture, including technology.” In this talk I will first present our results, then discuss what our research on machine bias demonstrates concerning the origins of human biases, stereotypes, and prejudices. Finally, I will turn to the extent to which implicit and explicit human bias accounts for bias in AI, and how and whether we can address such bias, perhaps using AI.
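The kind of measurement described above can be sketched in a few lines. The study compared word-embedding associations between target and attribute word sets; the following is a minimal illustration of that idea using tiny made-up 3-dimensional vectors. The vectors, word sets, and function names here are hypothetical, not the embeddings or test materials used in the actual article.

```python
# Sketch of a word-embedding association score in the spirit of the
# Implicit Association Test replication described above. All vectors
# below are toy values chosen for illustration only.
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, A, B):
    # Mean similarity of word vector w to attribute set A,
    # minus its mean similarity to attribute set B.
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def bias_score(X, Y, A, B):
    # Difference in mean association between target sets X and Y
    # with respect to attribute sets A and B. A positive score means
    # X leans toward A more than Y does.
    return (sum(association(x, A, B) for x in X) / len(X)
            - sum(association(y, A, B) for y in Y) / len(Y))

# Toy example: "flowers" vs. "insects" against "pleasant" vs. "unpleasant".
flowers    = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
insects    = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1]]
pleasant   = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]]
unpleasant = [[0.0, 1.0, 0.0], [0.1, 0.9, 0.0]]

score = bias_score(flowers, insects, pleasant, unpleasant)
print(round(score, 3))  # positive: the toy "flowers" lean "pleasant"
```

In the study itself, such scores were computed over real embeddings trained on a large web corpus; the point of the sketch is only that the measurement is purely statistical, with no hand-coded notion of prejudice anywhere in it.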
Date(s) - March 1, 2018
5:30 pm - 7:30 pm