As artificial intelligence becomes increasingly ubiquitous in our society, so do the mistakes its models and agents make, introducing the 21st-century blame game. The decisions that creators of AI projects must make extend well beyond engineering and raise complex philosophical, scientific, and socio-economic questions that, if answered poorly, cause real damage. Given the complexity of the job and the many people involved at every level of an AI project, an important question arises: Who do we blame when things go awry? And if we establish protocols for assigning blame, will that change how often, and why, these mistakes are made?

As I was reflecting on this question, I remembered a story I once read that deals with many of these underlying principles. The story, titled The Drawbridge, reminded me of the inherent complexity of assigning blame when an AI agent or model fails. In the story, a jealous baron instructs the baroness not to leave their castle while he is gone. After he leaves, and as she grows lonely, she decides to visit her lover, believing she will surely be home before her husband. On her way back she is stopped by a gateman, who tells her that if she crosses the bridge she will be killed, as the baron has ordered him to do so. Faced with this threat, she seeks help from her lover, a friend, and a boatman, who all fail her in one way or another: the lover says their relationship is only sexual and he cannot help her; the friend disapproves of the baroness defying her husband and is unwilling to help; and the boatman demands payment for his service despite the baroness being penniless. Having exhausted her options, she attempts to cross the bridge against the gateman's orders and is killed. This leaves the question: Who is responsible for the baroness's death? Is it the husband who gave the order? Could it be the gateman for following orders?
Is it her friend or lover, who refused to help in her time of need? Or, finally, is it the baroness herself, for willingly putting herself in the position that led to her death? There is, of course, no clear answer, and arguably every character shares some blame for the baroness's death.

I'd argue that the convoluted nature of the story, and specifically its theme of blame, maps onto our society as we grapple with the effects of artificial intelligence all around us, unable to pinpoint whom to thank or blame. These characters and their real-world counterparts can be cast in many variations of our AI blame game. In the context of AI, the baron could represent big tech. The castle: Silicon Valley. The baroness: society, shackled by technology and its chokehold on our lives but without the power to change it. The gateman enforcing the rules could represent an artificial agent itself, a blind follower of orders, rules, the algorithm. Alternatively, is the gateman the engineer working for a tech giant, whose time, skill, and obedience ultimately produce something destructive? Between these two tech-world representations of the gateman, the engineer and the artificial agent, there are vast differences in authority and, therefore, culpability. The engineer has the free will to leave the company, change the project, refuse to kill the baroness. The artificial agent does not. Because ascribing blame to an entity without psychological capacity is fruitless, perhaps the question should instead be: How much ethical responsibility can we expect of our engineers? Are those taking the orders responsible for the problem? If they are the gateman, can we blame them? Or, if the artificial agent is the gateman, are its creators then the baron? And if so, are they responsible for our current culture, or for the implementation of a destructive model?

Ultimately, deciding whom to blame is nuanced and changes from situation to situation. But the process of breaking the question down into its parts illuminates a few key issues we should be thinking about as we continue down the AI rabbit hole. Primarily, it highlights that we should empower everyone throughout the process to keep ethical considerations at the front of their minds and encourage employees to ask difficult questions. These questions lead to ethical explorations that must be taken seriously if we want artificial intelligence to be a tool, not a hindrance. I in no way have the answer to the blame game, but I do have ideas on how to answer some of the key questions it raises. As I mentioned above, the domain-specific questions that arise when building an AI model meant to operate in a society of such complexity must be tackled by a team of multidisciplinary experts. No one person can be asked to play the roles of philosopher, psychologist, economist, and scientist all at once. In addition, it would be incredibly effective to empower employees within tech to take the moral high ground and challenge their superiors without risking their jobs. Ideally, engineers would have a moral stop-work authority.
However, this is a tricky realm to navigate: putting the responsibility solely on employees excuses the billionaire CEOs of our world, and it lacks a nuanced understanding of the privileges attached to these various positions. So, in addition to empowering engineers, I think policy is necessary to establish protocols that hold executives accountable. Above all, continued research into AI's expanding role in society and a commitment to open science are imperative to preventing AI failure. I believe the goals I've discussed above, while only scratching the surface, will not only lead to a more thoughtful, productive community but will also clarify the blame game and, hopefully, reduce the need to assign blame at all.
Citation
@online{giesie2022,
  author = {Giesie, Mallory},
  title = {Artificial Intelligence: The Blame Game},
  date = {2022-10-24},
  url = {https://mallorygiesue.github.io/posts/arctic-traffic-analysis/},
  langid = {en}
}