YouTube Sets New Policies to Curb Extremist Videos

Google said that YouTube videos that do not clearly violate its policies but contain inflammatory religious or supremacist content will now appear behind a warning and will not be monetized, recommended or eligible for comments.

“That means these videos will have less engagement and be harder to find,” Kent Walker, Google’s general counsel and senior vice president, wrote in a company blog post on Sunday. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”

Google, which has relied on computer-based video analysis for the removal of most of its terrorism-related content, said it would devote more engineering resources to help identify and remove potentially problematic videos. It also said it would enlist experts from nongovernmental organizations to help determine which videos were violent propaganda and which were religious or newsworthy speech.

Google said it would rely on the specialized knowledge of groups with experts on issues like hate speech, self-harm and terrorism. The company also said it planned to work with counter-extremist groups to help identify content aimed at radicalizing or recruiting extremists.

By allowing anyone to upload videos to YouTube, Google has created a thriving video platform that appeals to people with a wide range of interests. But it has also become a magnet for extremist groups that can reach a wide audience for their racist or intolerant views. Google has long wrestled with how to curb that type of content while not inhibiting the freedom that makes YouTube popular.

Photo: Khuram Shazad Butt, who was identified as one of the London Bridge attackers. He is said to have been influenced by sermons on YouTube from an Islamic cleric in Michigan. (Credit: Metropolitan Police)

Part of the challenge is the sheer volume of videos uploaded to YouTube. The company has said that more than 400 hours of video content is uploaded to the site every minute, and YouTube has been unable to police that content in real time. Users flag offensive videos for review, while the company’s algorithms comb the site for potential problems. Videos with nudity, graphic violence or copyrighted material are usually taken down quickly.

Companies throughout the tech industry are grappling with how to keep platforms for user-generated content open without letting them become dens of extremism. Like YouTube, other social media companies have found that policing content is a never-ending challenge. Last week, Facebook said it would use artificial intelligence combined with human moderators to root out extremist content from its social network. Twitter said it suspended 377,000 accounts in the second half of 2016 for violations related to the “promotion of terrorism.”

In the aftermath of terror attacks in Manchester and London, Prime Minister Theresa May of Britain criticized large internet companies for providing the “safe space” that allows radical ideologies to spread. According to news media reports, friends and relatives of Khuram Shazad Butt, identified as one of the three knife-wielding attackers on London Bridge, were worried about the influence of YouTube videos of sermons by Ahmad Musa Jibril, an Islamic cleric from Dearborn, Mich.

Mr. Jibril’s sermons demonstrate YouTube’s quandary because he “does not explicitly call to violent jihad, but supports individual foreign fighters and justifies the Syrian conflict in highly emotive terms,” according to a report by the International Center for the Study of Radicalization and Political Violence.

A spokesman for YouTube said the new policies were not the result of any single violent episode but were part of an effort to improve the service. Google did not respond to a question about whether Mr. Jibril’s videos would fall under its guidelines for videos that contain inflammatory language but do not violate its policies. Mr. Jibril still has videos on YouTube, though they no longer carry ads.

In its blog post, Google acknowledged that “more needs to be done” to remove terrorism-related content from its service. YouTube said it would do more in “counter-radicalization” efforts, including targeting potential Islamic State recruits with videos that could change their minds about joining the organization. Google said that in previous counter-radicalization attempts, users clicked on ads at an “unusually high rate” to watch videos that debunk terrorism recruitment messages.

Google also announced a series of measures aimed at identifying extremist videos more quickly, an effort that the company started this year as YouTube tries to assure advertisers that its platform is safe for their marketing dollars.

YouTube came under fire this year when The Times of London and other news outlets found examples of brands that inadvertently funded extremist groups through automated advertising — a byproduct of YouTube’s revenue-sharing model that provides content creators a portion of ad dollars.

Brands such as AT&T and Enterprise Rent-A-Car pulled ads from YouTube. Google responded by changing the types of videos that can carry advertising, blocking ads on videos with hate speech or discriminatory content. Google also created a system that allows advertisers to exclude specific sites and channels on YouTube and across Google’s display network.
