Interdisciplinary Thinkpiece Panel I with Michele Gilman, Katina Michael, and Tae Wan Kim
Friday, February 19, 2021, 11:00am
Check here for the video recording of the talk
- Michele Gilman, Venable Professor of Law at the University of Baltimore, director of the Saul Ewing Civil Advocacy Clinic
“Poverty Lawgorithms: The Economic Injustices of Automated Decision-Making”
As a result of automated decision-making systems, low-income people can find themselves excluded from mainstream opportunities, such as jobs, education, and housing; targeted for predatory services and products; and surveilled by systems in their neighborhoods, workplaces, and schools. These dynamics impede people's economic security and potential for social mobility, and yet the law provides scant recourse. Thus, we must consider how to challenge opaque and unfair algorithmic systems as part of an economic justice agenda.
Michele Gilman is the Venable Professor of Law at the University of Baltimore School of Law. Professor Gilman teaches in the Civil Advocacy Clinic, where she supervises students representing low-income individuals in a wide range of litigation and law reform matters. She also teaches evidence and federal administrative law. Professor Gilman writes extensively about privacy, poverty, and social welfare issues, and her articles have appeared in journals including the California Law Review, the Vanderbilt Law Review, and the Washington University Law Review. In 2019-2020, she was a Faculty Fellow at Data & Society, where she researched the intersection of privacy law, data-centric technologies, and low-income communities.
Optional Reading: "Poverty Lawgorithms"
- Katina Michael, Professor in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering at Arizona State University
“Misdirected Dreams? Trusting in AI: the hopes, the needs, and the challenges.”
Technology is edging ever closer to interfacing with the human, or even brazenly replacing the human function. As we pursue dreams of automation through artificial intelligence, the question is whether we are engaged in a process of deep techno-utopian distraction, or whether we are in fact on the right path to addressing our critical global needs. What are the challenges? How will we ensure a sustainable future?
Katina Michael is a professor at Arizona State University, holding a joint appointment in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering. She is also the director of the Society Policy Engineering Collective (SPEC) and the Founding Editor-in-Chief of the IEEE Transactions on Technology and Society. Katina is a senior member of the IEEE and a Public Interest Technology advocate who studies the social implications of technology.
Optional Reading: "Big Data: New Opportunities and New Challenges"
- Tae Wan Kim, Associate Professor of Business Ethics and Xerox Junior Faculty Chair, Carnegie Mellon University
“Flawed Like Us and the Starry Moral Law: A Critical Perspective to Artificial Intelligence.”
AI is an imitation game. “What is a good AI system?” is the same question as “What is a good human being?” In this talk, engaging with Ian McEwan’s Machines Like Me, I invite the audience to rethink what it means to be human in the age of AI.
Tae Wan Kim is Associate Professor of Business Ethics and Xerox Junior Faculty Chair at Carnegie Mellon’s Tepper School of Business. Kim is a faculty member of the Block Center for Technology and Society at Heinz College, and of CyLab at Carnegie Mellon’s School of Computer Science. Prior to joining Tepper's faculty in 2012, Kim earned his PhD in the Department of Legal Studies and Business Ethics at The Wharton School, University of Pennsylvania. Kim is on the editorial boards of Business Ethics Quarterly, the Journal of Business Ethics, and Business & Society Review.
Optional Reading: "Taking Principles Seriously: A Hybrid Approach to Value Alignment"
- Visit Critical AI for the full list of events in this series
- For a list of upcoming events, check here.