Artificial intelligence, or AI, is an up-and-coming technology. In recent years it has made headlines dozens of times – both for good reasons and otherwise. A new type of AI built to give ethical advice is turning heads for being, well, not quite as ethical as it should be. These are the issues with the new AI tool Ask Delphi.
You Probably Shouldn’t Ask Delphi For Moral Advice
In October 2021, the Allen Institute for AI launched Ask Delphi, a machine learning system trained to respond to ethical problems and questions. The idea is that the software can respond to your moral dilemmas with the correct, ethical solution. For example, if you type in “murder” or “is murder bad”, Delphi will respond by saying that, of course, murder is bad. (1)
The issue is that people are finding that Delphi isn’t actually quite so ethical. In fact, depending on what you ask it and how you phrase it, it can spit out some pretty racist, homophobic, and otherwise unethical responses. For example, Delphi has previously said that being a white man is more morally acceptable than being a black woman. It has also said that being straight is more morally acceptable than being gay. If you typed in “aborting a baby”, Delphi responded with “it’s murder”. So as you can see, there are some serious problems with the technology. (2)
“This is a shocking piece of AI research that furthers the (false) notion that we can or should give AI the responsibility to make ethical judgements. It’s not even a question of this system being bad or unfinished – there’s no possible ‘working’ version of this.” – Mike Cook (@mtrc), October 16, 2021
Why Is Delphi Not So Moral?
The reality is that Delphi is unethical because many human beings are not, in fact, ethical. Delphi learned its responses from a large body of internet text and from a database of 1.7 million examples of people’s ethical judgments, compiled with the help of crowd workers on the platform Mechanical Turk. These judgments come from many places, one of which is the popular Reddit forum Am I The Asshole.
As you can imagine, there are some pretty unsavory opinions out there, especially on the internet. If Delphi gives unethical responses, it is because it learned them from us, human beings, and the things we say online. That is uncomfortable, for sure, because it is a sad reflection of what millions of people actually believe to be okay.
Delphi Is An Experiment, Not Moral Code
Before using Delphi, a user must now click through a list of checkboxes confirming that they understand Delphi is an experiment that its creators are constantly updating and working on. The researchers are clear that you are not meant to actually use it to solve your ethical problems. In response to the question “What are the limitations of Delphi?”, the researchers wrote:
“Large pretrained language models, such as GPT-3, are trained on mostly unfiltered internet data, and therefore are extremely quick to produce toxic, unethical, and harmful content, especially about minority groups. Delphi’s responses are automatically extrapolated from a survey of US crowd workers, which helps reduce this issue but may introduce its own biases. Thus, some responses from Delphi may contain inappropriate or offensive results. Please be mindful before sharing results.” (3)
Delphi’s creators spoke more about Delphi’s limitations and how answering ethical problems via AI may not actually be possible. The idea, as they have stated, is more of an experiment. Certainly, it is a work in progress. It has already been updated three times since launching to improve its responses.
“Extreme-scale neural networks learned from raw internet data are ever more powerful than we anticipated, yet fail to learn human values, norms, and ethics. Our research aims to address the impending need to teach AI systems to be ethically-informed and socially-aware,” they wrote. “Delphi is an AI system that guesses how an ‘average’ American person might judge the ethicality/social acceptability of a given situation, based on the judgments obtained from a set of U.S. crowdworkers for everyday situations. Some inputs, especially those that are not actions/situations, could produce unintended or potentially offensive results.”
Fixing Racism In AI Is A Complicated Problem
The reality is that AI has a racism problem – and a sexism problem, and a homophobia problem, and so on. Why? Largely because of the underrepresentation of people of color, women, and members of the LGBTQ+ community in the field. The people developing AI technology, and the companies funding those developments, are overwhelmingly white men. Mutale Nkonde is a former journalist and technology policy expert who runs the U.S.-based non-profit organization AI For the People, which aims to end the underrepresentation of Black people in the U.S. technology sector.