
A biased Artificial Intelligence can create truly inhuman robots

A generation of racist and sexist robots could be on the way, according to an experiment that analyzed the behavior of a popular Artificial Intelligence (AI) program. In the study, the computer program selected more men than women, more white individuals than people of color, and made assumptions about a person’s job or criminal record based solely on their appearance.

Researchers from Johns Hopkins University, the Georgia Institute of Technology and the University of Washington, in the United States, have produced the first study showing that neural networks built on biased data from the Internet "teach" robots to enact toxic stereotypes, such as discrimination based on race, sex, origin or appearance.

According to a press release, the experiment carried out by the scientists shows that the robot analyzed learned harmful stereotypes through these flawed neural network models. This raises the risk of creating a generation of racist and sexist robots because, according to the researchers, decision makers and organizations have chosen to develop these products without addressing the underlying problems.

Flawed data

The team of specialists explained that those who design Artificial Intelligence (AI) models to recognize humans and objects often turn to huge datasets freely available on the Internet. However, much of that material is riddled with inaccurate and overtly biased content, meaning any algorithm built on these datasets can inherit the same problems.


Robots also rely on these neural networks to learn to recognize objects and interact with the world. Given the risk such biases pose in autonomous machines that make physical decisions without human guidance, the researchers tested an Artificial Intelligence model for robots that can be publicly downloaded from the Internet. It was built with the CLIP neural network, as a way to help the machine "see" and identify objects by name.
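CLIP works by scoring how well an image matches candidate text descriptions in a shared embedding space, which is what lets a robot rank arbitrary labels against what its camera sees. The sketch below is a minimal illustration of that zero-shot matching using the publicly available openai/clip-vit-base-patch32 checkpoint through the Hugging Face transformers library; the checkpoint, label prompts, and image file name are illustrative assumptions and do not reproduce the study's actual robotic pipeline.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a publicly available CLIP checkpoint (illustrative choice, not the
# study's exact model).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate label prompts, loosely echoing the categories described in the
# article ("people", "doctors", "housewives", "criminals").
labels = [
    "a photo of a person",
    "a photo of a doctor",
    "a photo of a homemaker",
    "a photo of a criminal",
]

image = Image.open("face.jpg")  # placeholder input image

# Encode the image and all label prompts, then score every (image, text) pair.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the image-text similarity scores gives a pseudo-probability
# for each candidate label.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The key point the study highlights is that nothing in this matching step verifies whether a label like "criminal" can legitimately be inferred from a face at all; the model simply returns whichever prompt is closest in its learned embedding space.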

The robot used in the experiment had to place representations of human faces in a box, following instructions drawn from 62 categories such as "people", "doctors", "housewives" or "criminals". Some of the results indicate, for example, that the robot selected men 8% more often than women, that white and Asian men were selected most often, and that black women were selected least often.


Worrying results

According to the study results, presented and published at the 2022 edition of the ACM Conference on Fairness, Accountability and Transparency, after the robot "saw" people's faces it showed a clear tendency to identify women as "housewives" over other categories, to identify black men as "criminals" 10% more often than white men, and to identify Latino men as "janitors" more often than white men, among other similar results. At the same time, women of all ethnicities were less likely than men to be chosen when the robot searched for "doctors."

According to the scientists, a correctly designed system would refuse to classify people as "criminals" or even as "doctors" from appearance alone, since it does not have enough information to infer that someone belongs to such a category. This illustrates the danger of building autonomous robots on biased Artificial Intelligence networks trained on a universe of unreliable data.

Reference

Robots Enact Malignant Stereotypes. Andrew Hundt et al. ACM Conference on Fairness, Accountability and Transparency (2022). DOI: https://doi.org/10.1145/3531146.3533138
