
Twitter is scrambling to figure out why its photo preview algorithm seems racist




Photo: Leon Neal (Getty Images)

The neural network Twitter uses to generate photo previews is a mysterious beast. When it debuted the smart cropping tool back in 2018, Twitter said the algorithm determines the most “salient” part of an image, i.e. what your eyes are drawn to first, to use as a preview image, but what exactly that involves has been the subject of frequent speculation.
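Twitter has never published the model itself, so the best anyone outside the company can do is sketch the general idea. A minimal illustration of saliency-based cropping, assuming OpenCV’s off-the-shelf spectral-residual saliency detector rather than Twitter’s actual neural network, might look like this:

# Rough sketch of saliency-based cropping, NOT Twitter's actual model.
# Assumes opencv-contrib-python, which provides cv2.saliency.
import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Return a crop_w x crop_h window centered on the most salient pixel."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Coordinates of the most "salient" pixel, i.e. where the eye is
    # presumed to be drawn first.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

    # Center the preview window on that point, clamped to the image bounds.
    h, w = image.shape[:2]
    x0 = int(np.clip(x - crop_w // 2, 0, max(w - crop_w, 0)))
    y0 = int(np.clip(y - crop_h // 2, 0, max(h - crop_h, 0)))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]

# Example: preview = saliency_crop(cv2.imread("photo.jpg"), 600, 335)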

Faces are the obvious answer, of course, but what about smiling versus non-smiling faces? Or dimly lit versus brightly lit faces? I’ve seen plenty of informal experiments on my timeline where people try to figure out Twitter’s secret sauce, and some have even used the algorithm as an involuntary punchline-delivery system, but the latest viral experiment exposes a very real problem: Twitter’s auto-crop tool appears to favor white faces over Black faces far too often.

Several Twitter users demonstrated as much over the weekend with images containing both a white face and a Black face. The white faces showed up as previews far more often, even when the images were controlled for size, background color, and other variables that could conceivably influence the algorithm. One especially viral thread used a photo of former President Barack Obama and Senator Mitch McConnell (already the subject of plenty of bad press for his insensitive response to the death of Justice Ruth Bader Ginsburg) as an example. When the two were shown together in the same image, Twitter’s algorithm served up a preview of that grinning turtle time and time again, effectively deeming McConnell the most “salient” part of the picture.

(Click the embedded tweet below and click on his face to see what I mean.)

The trend started after a user tried to tweet about a problem with Zoom’s facial recognition algorithm on Friday. Zoom’s systems weren’t detecting his Black colleague’s head, and when he uploaded screenshots of the problem to Twitter, he found that Twitter’s auto-crop tool also defaulted to his own face, not his colleague’s, in the preview images.

This issue was evidently news to Twitter as well. In response to the Zoom thread, chief design officer Dantley Davis ran some informal experiments of his own on Friday with mixed results, tweeting, “I’m annoyed by this, like everyone else.” The platform’s chief technology officer, Parag Agrawal, also addressed the issue via tweet, adding that while Twitter’s algorithm had been tested, it still needed “continuous improvement” and he was “eager to learn” from rigorous user testing.

“Our team tested for bias before shipping the model and found no evidence of racial or gender bias in our testing. But it’s clear from these examples that we have more analysis to do,” Twitter spokesperson Liz Kelly told Gizmodo. “We will open source our work so others can review and replicate it.”

Reached via email, Kelly was unable to comment on a timeline for Twitter’s planned review. On Sunday she also tweeted about the issue, thanking the users who brought it to Twitter’s attention.

Vinay Prabhu, a chief scientist at Carnegie Mellon University, also conducted an independent analysis of Twitter’s auto-cropping tendencies and published his findings on Sunday. You can read more about his methodology here, but in essence he tested the theory by posting a series of photos from the Chicago Face Database, a public repository of standardized photographs of male and female faces controlled for several factors, including facial position, lighting, and expression.

Surprisingly, the experiment showed that Twitter’s algorithm slightly favored darker skin in its previews, cropping to the Black faces in 52 of the 92 images he posted. Of course, given the volume of evidence to the contrary found through more informal experiments, Twitter apparently still has some tweaking to do on its auto-crop tool. Still, Prabhu’s findings should prove useful in helping the Twitter team isolate the problem.
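For a sense of scale, 52 out of 92 is about 56.5 percent, a tilt that a quick back-of-the-envelope check (my own arithmetic, not part of Prabhu’s write-up) suggests is within the range a fair coin could plausibly produce over that many trials:

# Back-of-the-envelope check of Prabhu's tally against a 50/50 null.
# This is an illustrative calculation, not part of his published analysis.
from scipy.stats import binomtest

result = binomtest(k=52, n=92, p=0.5, alternative="two-sided")
print(f"crops favoring darker skin: {52 / 92:.1%}")        # ~56.5%
print(f"two-sided p-value vs. 50/50: {result.pvalue:.2f}")  # roughly 0.25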

It should be noted that when it comes to machine learning and artificial intelligence, predictive algorithms don’t have to be explicitly designed to be racist in order to be racist. Facial recognition technology has a long and frustrating history of unexpected racial bias, and commercial facial recognition software has repeatedly been shown to be less accurate on people with darker skin. That’s because no system exists in a vacuum. Intentionally or not, technology reflects the biases of the people who build it, so much so that experts have a term for the phenomenon: algorithmic bias.

That’s why facial recognition needs to undergo further scrutiny before institutions that deal with civil rights issues on a daily basis add it to their arsenal: the evidence shows it disproportionately discriminates against people of color. Of course, Twitter’s biased auto-cropping is a comparatively harmless problem (one that still needs to be addressed quickly, don’t get me wrong). What rightly worries civil rights advocates is when a cop relies on AI to track down a suspect or a hospital uses an automated system to triage patients; that’s when algorithmic bias can potentially lead to a life-or-death decision.



