
YouTube and other social networks are radicalizing white men. Big tech could be doing more



There is still much we do not know about the suspect and his background. But before anything else was known about him, anyone who has studied or covered extremism and these kinds of attacks could have given you an educated guess about what kind of person he was: Male. Probably in his 20s. Decent chance of at least a minor criminal record. More than likely a history of hate toward or violence against women. Oh, and one more thing – probably spent a fair amount of time on the internet.

People could easily be radicalized before social media. Many are still radicalized without it. But social media, often in combination with other factors, has proven to be an efficient radicalizer, partly because it allows for the easy formation of communities and partly because of its algorithms, used to convince people to stay a little longer, watch one more video.

Combine those algorithms with men who are disaffected, who may feel that the world owes them more, and you have a recipe for creating extremism of any stripe.

9659004] "They're picking up an ideology that helps them justify their rage, their disappointment, and it's something available," Jessica Stern, research professor at Boston University's Pardee School of Global Studies and co-author of ISIS: The State of Terror, "told CNN Business Friday. "Isis ideology was an attractive way for some of these men to express their fury and disappointment. "

For all the much-deserved criticism they've recently received over the things they've failed to act on, social networks did step up and take real, impressive action when faced with a deluge of ISIS supporters and content.

"The issue on mainstream sites is for the most part there has been an aggressive takedown" of ISIS-related content, Seamus Hughes, the deputy director of the program on Extremism at George Washington University, said. "The same dynamics did not happen when it came to white supremacy."


The companies could take action against white supremacists now. Indeed, they could go on forever like that, playing whac-a-mole with different movements that pop up and begin radicalizing their users, moving against them after enough people have been killed. It would be easier for them to do that than to actually deal with the underlying problem of those algorithms designed to keep people around.

"It makes sense from a marketing perspective, if you like Pepsi then you're going to watch more Pepsi videos … but you take that to the logical extreme with white supremacy videos, "Hughes said. "They're going to have to figure out how to not completely scrap a system that has brought them hundreds of millions of dollars in ad revenue while not promoting someone's radicalization or recruitment."

Perhaps the most disheartening aspect of this is that the companies have been told, over and over again, that they have a problem. Ben Collins, a reporter with NBC News, tweeted Friday: "Extremism researchers and journalists (including me) warned the company in emails, on the phone, and to employees' faces after the last terror attack that the next one would show signs of YouTube radicalization again, but the outcome would be worse. I was literally scoffed at."

So what should the platforms do now?

Asked that question, Bill Braniff, the director of the National Consortium for the Study of Terrorism and Responses to Terrorism (START) and a professor of practice at the University of Maryland, said: "What I think we should be asking them to do is continue to minimize the salience or reach of violent extremist propaganda that calls for violence … but not limit itself to just content takedowns as the way to do that."

Content takedowns alone can both contribute to narratives of persecution and drive people to smaller, more radical sites, Braniff noted.


"We know that people can actually be addressed through counseling [and] mentorship," he said. "If instead of directing people who might be flirting with extremism to support, if you censor them and remove them from these platforms you lose … the ability to provide them with an off-ramp."

While noting that platforms should still take down content that explicitly calls for violence, which also violates their terms of service, Braniff said, "There is some content that does not violate the terms of use, and so the question is, can you make sure that information is contextualized with videos before and after it on the feed?"

The comprehensive solution he sees is a change to the algorithms so that they point people to different views or, in some cases, to support such as counseling.

"Algorithms can either foster groupthink and reinforcement or they can drive discussion," he said. "Right now the tailored content tends to be, 'I think you'll like more of the same,' and, unfortunately, that's an ideal scenario for not just violent extremism but polarization … We're only sharing subsets of information and (19459025) The massive part of violent extremism is polarization, and it's really dangerous. "
