Spring 2021 GRASP SFI: “Hunting for Unknown Unknowns: AI and Ethics in Society”
February 24 at 3:00 PM - 4:00 PM
Abstract: Homo sapiens is considered a “hyper-cooperative species,” and this aptitude for cooperation may be responsible for our dominance over the Earth. Cooperation promises great benefits, but each participant is vulnerable to exploitation by their partners. Successful cooperation requires trust: acceptance of vulnerability, with confidence that it will not be exploited. The culture of any society includes ethical principles specifying how to be trustworthy, to whom trustworthiness is owed, and how to recognize who is likely to be trustworthy. The continued viability of a society depends on how well this mechanism does its job.

Deployments of AI systems for autonomous vehicles, facial recognition, medical diagnosis, decisions about credit or parole, and other domains have raised questions about their trustworthiness. These questions apply not only to robotic and AI systems based on digital computers, but also to institutional structures such as governments and corporations. Trust failures arise when a carefully designed decision mechanism confronts a situation outside its comprehension: an “unknown unknown.”

Science, engineering, economics, law, and public policy all depend on models to cope with the unbounded complexity of the real world. A model specifies a limited set of elements and relations that support inferences relevant to the purpose of the model; everything else is treated as negligible for that purpose. If some of these unknown unknowns stop being negligible, the model can fail, possibly with serious consequences. Game-theoretic reasoning, maximizing expected utility, can be a powerful decision tool in multi-agent settings, but its validity depends critically on the quality of the model, especially the definition of utility.
A particular failure mode, when the utility measure is oblivious to trust and trustworthiness, is to encourage each participant to optimize expected utility by exploiting the vulnerabilities of the other participants. Trust and cooperation are thereby discouraged. Widespread loss of trust and cooperation can become an existential threat to the society. Our task in AI is to identify potentially dangerous unknown unknowns, and find appropriate ways to incorporate them into our models, supporting trust and cooperation in our society.
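The failure mode described above can be illustrated with the classic Prisoner's Dilemma (this sketch is not from the talk itself; the payoff values are the standard textbook ones): when each agent's utility ignores trust, maximizing expected utility drives every agent to defect, and all end up worse off than if they had cooperated.

```python
# Illustrative sketch: a one-shot Prisoner's Dilemma, showing how agents
# that maximize an expected utility oblivious to trust are driven to
# mutual defection, discouraging cooperation.

# PAYOFFS[(my_move, their_move)] -> (my_payoff, their_payoff)
# Standard values: temptation=5, reward=3, punishment=1, sucker=0.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(belief_other_cooperates: float) -> str:
    """Return the utility-maximizing move, given a belief (probability)
    that the partner will cooperate."""
    p = belief_other_cooperates
    eu_coop = (p * PAYOFFS[("cooperate", "cooperate")][0]
               + (1 - p) * PAYOFFS[("cooperate", "defect")][0])
    eu_defect = (p * PAYOFFS[("defect", "cooperate")][0]
                 + (1 - p) * PAYOFFS[("defect", "defect")][0])
    return "cooperate" if eu_coop > eu_defect else "defect"

# Defection maximizes expected utility for ANY belief about the partner,
# so both agents defect and each receives 1 instead of the mutual 3.
for p in (0.0, 0.5, 1.0):
    print(p, best_response(p))  # "defect" in every case
```

Because the utility measure here places no value on trustworthiness, exploiting the partner's vulnerability is always the "rational" choice, which is precisely the dynamic the abstract warns can erode cooperation society-wide.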
Benjamin Kuipers, Professor of Computer Science and Engineering, University of Michigan
Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan. He was previously at the University of Texas at Austin, where he held an endowed professorship and served as Computer Science department chair. He received his B.A. from Swarthmore College and his Ph.D. from MIT, and he is a Fellow of AAAI, IEEE, and AAAS. His research in artificial intelligence and robotics has focused on the representation, learning, and use of foundational domains of commonsense knowledge, including knowledge of space, dynamical change, objects, and actions. He is currently investigating ethics as a foundational domain of knowledge for robots and other AIs that may act as members of human society.