
Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist

Artificial intelligence, or AI, is an up-and-coming technology that has made headlines dozens of times in recent years, for good reasons and otherwise. A new AI built to give ethical advice is now turning heads for being, well, not quite as ethical as it should be. These are the issues with the new AI tool Ask Delphi.

You Probably Shouldn’t Ask Delphi For Moral Advice

In October 2021, the Allen Institute for AI launched Ask Delphi, a machine-learning system trained to respond to ethical problems and questions. The idea is that the software can answer your moral dilemmas with the correct, ethical solution. For example, if you type in “murder” or “is murder bad,” Delphi will respond that, of course, murder is bad. (1)

The issue is that users quickly found Delphi isn’t actually so ethical. Depending on what you ask it and how, it can spit out some pretty racist, homophobic, and otherwise unethical responses. For example, Delphi has said that being a white man is more morally acceptable than being a Black woman, and that being straight is more morally acceptable than being gay. If you typed in “aborting a baby,” Delphi responded with “it’s murder.” As you can see, there are some serious problems with the technology. (2)

Why Is Delphi Not So Moral?

The reality is that Delphi gives unethical answers because many human beings are not, in fact, ethical. Delphi learned its responses from a large body of internet text and from a database of roughly 1.7 million examples of people’s ethical judgments, gathered through the crowdsourcing platform Mechanical Turk. The situations being judged come from many places, one of which is the popular Reddit forum Am I The Asshole.

As you can imagine, there are some pretty unsavory opinions out there, especially on the internet. If Delphi gives unethical answers, it is because it learned them from us, human beings, and the things we say online. That is uncomfortable, for sure, because it is a sad reflection of what millions of people actually believe to be okay.
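
To see why this happens, it may help to look at a stripped-down, hypothetical sketch of the same idea (this is not Delphi’s actual pipeline, and the toy data and labels below are invented for illustration): a simple text classifier fitted to crowdsourced “moral judgment” labels. Such a model has no moral reasoning of its own; it can only echo the patterns, and the biases, in the labels it was trained on.

```python
# Minimal, hypothetical sketch (not Delphi's actual pipeline): a text
# classifier fitted to crowdsourced "moral judgment" labels.
# Requires scikit-learn. The toy data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for a corpus like the 1.7 million crowdsourced judgments:
# (situation, label) pairs written by human annotators.
training_data = [
    ("murdering someone", "it's wrong"),
    ("stealing from a store", "it's wrong"),
    ("helping a neighbor move", "it's good"),
    ("donating to charity", "it's good"),
]
texts, labels = zip(*training_data)

# Bag-of-words plus logistic regression: no moral reasoning here,
# only word statistics learned from the annotators' labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(labels))

# The "judgment" on a new situation is a statistical echo of the
# training labels; if those labels encode bias, so will the output.
print(model.predict(["stealing a car"]))  # likely: ["it's wrong"]
```

Scaled up to millions of scraped sentences and crowd labels, the same dynamic applies: the model’s “ethics” can only be as good as the judgments it was shown.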

Delphi Is An Experiment, Not Moral Code

Before using Delphi, there is now a list of checkboxes a user must click through before asking questions. These checkboxes are there to make sure the user understands that Delphi is an experiment the researchers are constantly updating and working on, and that it is not meant to actually be used to solve your ethical problems. In response to the question “What are the limitations of Delphi?”, the researchers wrote:

“Large pretrained language models, such as GPT-3, are trained on mostly unfiltered internet data, and therefore are extremely quick to produce toxic, unethical, and harmful content, especially about minority groups. Delphi’s responses are automatically extrapolated from a survey of US crowd workers, which helps reduce this issue but may introduce its own biases. Thus, some responses from Delphi may contain inappropriate or offensive results. Please be mindful before sharing results.” (3)

Delphi’s creators have spoken at length about its limitations, and about whether answering ethical questions via AI is even possible. As they have stated, Delphi is more of an experiment, and certainly a work in progress; it has already been updated three times since launch to improve its responses.

“Extreme-scale neural networks learned from raw internet data are ever more powerful than we anticipated, yet fail to learn human values, norms, and ethics. Our research aims to address the impending need to teach AI systems to be ethically-informed and socially-aware,” they wrote. “Delphi is an AI system that guesses how an “average” American person might judge the ethicality/social acceptability of a given situation, based on the judgments obtained from a set of U.S. crowdworkers for everyday situations. Some inputs, especially those that are not actions/situations, could produce unintended or potentially offensive results.”

Fixing Racism In AI Is A Complicated Problem

The reality is that AI has a racism problem, along with a sexism problem, a homophobia problem, and so on. Why? Largely because of the underrepresentation of people of color, women, and members of the LGBTQ+ community in the field: the people developing AI technology, and the companies funding those developments, are overwhelmingly white and male. Mutale Nkonde is a former journalist and technology policy expert who runs AI For the People, a U.S.-based non-profit organization that aims to end the underrepresentation of Black people in the U.S. technology sector.

“AI has a race problem,” Nkonde said. “What it tells us is AI research, development and production is really driven by people that are blind to the impact that race and racism has on shaping not just technological processes, but our lives in general.” (4)

James Zou, an assistant professor of biomedical data science and of computer and electrical engineering at Stanford University, says the data is to blame for many of these problems. That data, of course, comes unfiltered from the internet, and what people write there may or may not be just or true.

“These algorithms, you can view them sort of like babies who can read really quickly,” he explained. “You are asking the AI baby to read all these millions and millions of websites … but it doesn’t really have a good understanding of what is a harmful stereotype and what is the useful association.” 

Despite these flaws, many companies continue to use these models, robots, and systems. Until these issues are addressed and resolved, AI will continue to favor men over women, white over Black, and straight over gay. A recent study by researchers at Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington showed that robots are adopting racist behavior, and flawed AI is to blame.

“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech. “We’re at risk of creating a generation of racist and sexist robots but people and organizations have decided it’s OK to create these products without addressing the issues.” (5)

To prevent future AI from taking on these human stereotypes, the team says systematic changes are needed.

“While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise,” said co-author William Agnew of the University of Washington. (5)

Sources

  1. “An ‘ethical’ AI trained on human morals has turned racist.” Dazed Digital.
  2. Mike Cook. Twitter. October 2021.
  3. “Ask Delphi.” Allen Institute for AI.
  4. “AI has a racism problem, but fixing it is complicated, say experts.” CBC. Jorge Barrera and Albert Leung. May 17, 2021.
  5. “Robots turn racist and sexist with flawed AI, study finds.” Science Daily. June 21, 2022.
Julie Hambleton
Freelance Writer
Julie Hambleton has a BSc in Food and Nutrition from Western University in Canada, and is a former certified personal trainer and a competitive runner. Julie loves food, culture, and health, and enjoys sharing her knowledge to help others make positive changes and live healthier lives.