
Twitter and Zoom investigate algorithms that ignore Black people

Twitter and Zoom are accused of running algorithms with racist behaviour. The complaints began this weekend: in Zoom's case, the video-conferencing system's virtual background tool erases Black faces; in Twitter's case, the auto-crop tool (used to highlight the most important part of a photo, such as a face) does not always work for Black people.

Colin Madland, a doctoral student at the University of Victoria, Canada, was the first to spot the problems. When Madland, who is white, posted a screenshot from a recent video call with a Black colleague to complain about the Zoom issue, Twitter cropped the image to show only Madland's face. Several people quickly shared other examples of flaws in the two companies' algorithms.

The companies say they are already working to understand and fix the problems. "Our team tests [the algorithm models] for bias before launching them and found no evidence of racial or gender bias. But it is clear from these examples that we have more analysis to do," a Twitter spokesperson said in a statement.

Twitter's design director, Dantley Davis, believes the initial problem lies with Colin Madland's beard, but agrees that such details should not affect the system. "However, I am in a position to fix this, and that is what I will do," he said.

Zoom also confirms that it is aware of the problem: “We have already contacted the user directly to understand the problem,” read a statement from the company. “We are committed to providing a platform that is inclusive for all.”

An old problem

This is not the first time that the algorithms of online services have been accused of racism or prejudice. Entire books have been written on the subject: on Google, a search for "gorillas" once returned images of Black people, and searching for "black girls" was tantamount to searching for pornography.


The solution is not easy. For example, by trying to treat all users identically, Yelp, a platform for finding restaurants and other businesses, made it difficult for a Black hairstylist in New York to promote her services to other Black women.

And biases are not always introduced by humans: a 2018 study by Cardiff University and MIT showed that artificial intelligence can learn to be prejudiced on its own, with groups of autonomous algorithms discriminating against one another.

For now, Twitter maintains that dealing with algorithms that cause accidental harm is an area requiring continued attention. "[Understanding algorithm bias] is a very important question. To address it, we analysed our model before launching it, but it is something that needs constant improvement," Parag Agrawal, Twitter's head of technology, wrote in a post on the social network.

