2021 Australasian Association of Philosophy Conference: Might A.I. Agents be Blameworthy? by Oisin Deery

Activity: Professional Development: Conference Attendance

Description

A.I. systems increasingly behave in ways that approximate human agency; examples include self-driving cars and autonomous financial systems. As A.I. systems pursue goals in such high-stakes contexts, they can cause harm. When human agents act in comparable contexts, they typically act freely and are morally responsible, and so are blameworthy for the harms they cause. Might A.I. agents also be blameworthy? Even if the answer is no, we need a principled way of reaching it. This need is made more urgent by the so-called "responsibility gap": insofar as an A.I. system acts autonomously, its designers or owners seem not to be fully responsible for what it does, yet if the system itself bears no share of the responsibility, the total human responsibility seems insufficient to account for the harms. Existing theories of free agency cannot help. We need a new theory, one that either closes the responsibility gap or, if the gap cannot be closed, explains why not. I maintain that viewing free and responsible agency through the lens of intelligent behavior provides such a theory. I compare my view with Christian List's recent position, on which A.I. systems might be responsible by analogy with the way group agents are responsible.
Period: 15 Jul 2021