Facebook, Twitter, YouTube explain what they’re doing to stop terrorism

Wednesday, January 17th, 2018 (WASHINGTON) – On October 31, 2017, a man drove a rental van into a bike lane in Manhattan, killing eight people and injuring over a dozen more. As investigators delved into the attacker’s history, they found a virtual stockpile of Islamic State propaganda, including dozens of ISIS videos the attacker had downloaded to his smartphone. In the following days, ISIS propagandists claimed credit for the deadly attack.

At the time, John Miller, a top counterterrorism official at the New York City Police Department, said of the attacker, “He appears to have followed, almost exactly to a ‘T,’ the instructions that ISIS has put out in its social media channels.”

Even as social media revolutionizes the way people communicate, connect and interact with the world around them, it has also played an undeniable role in enabling terrorist and extremist groups to spread hateful propaganda and inspire violent attacks across the globe.

Faced with this problem, representatives from the largest social media companies, Facebook, Twitter and YouTube, were called before Congress on Wednesday to account for what they have done to stop their platforms from being exploited to spread violent propaganda and inspire acts of terrorism. Members of the Senate Commerce Committee pressed the top tech officials about what they have done to identify and remove extremist content, sometimes even before that content is ever seen by another user.

FACEBOOK, TWITTER, YOUTUBE COUNTERTERRORISM ACTIVITIES

“The issues we’re discussing here today are of the utmost importance,” Monika Bickert, the head of Facebook’s product policy and counterterrorism department, told lawmakers. “We share your concerns about terrorists’ use of the internet,” she stressed, explaining that this is why Facebook has been working “proactively” to quickly identify terrorist propaganda, and even to prevent it from ever being uploaded.

Facebook has built a team of nearly 200 counterterrorism experts, former law enforcement and intelligence agents, analysts and engineers dedicated to the social media giant’s anti-terrorism mission. Contributing to that team are approximately 10,000 content reviewers, a number that will soon double to 20,000, working across the world in dozens of languages to identify security threats.

According to the company, more than 99 percent of the ISIS and al-Qaeda propaganda removed from Facebook is identified by the company before a user flags it. Bickert explained that by using a database of video and still-image hashes shared openly across the tech industry, the “digital fingerprints” of identified terrorist content, Facebook has been able to automate a significant amount of its counterterrorism work. The rest falls to its team of experts, engineers and content reviewers.
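Bickert did not walk through the matching mechanics, and production systems rely on perceptual hashes that survive re-encoding and cropping rather than exact digests. Still, a minimal sketch of the hash-lookup idea (in Python, with invented data and an exact SHA-256 digest purely for illustration) shows how a re-uploaded copy of known propaganda can be blocked before it is ever published:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hex digest used as the file's 'digital fingerprint'."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database: fingerprints of files any participating
# company has already identified as terrorist propaganda.
known_propaganda = {fingerprint(b"<bytes of a previously flagged video>")}

def screen_upload(data: bytes) -> bool:
    """Return True if the upload may proceed, False if it is blocked."""
    if fingerprint(data) in known_propaganda:
        return False  # match against the shared database: block before publication
    return True  # no match; content may still go to classifiers or human review

# A re-upload of previously flagged bytes is blocked pre-publication;
# novel content passes this stage.
assert screen_upload(b"<bytes of a previously flagged video>") is False
assert screen_upload(b"some new, unrelated video bytes") is True
```

In practice an exact digest breaks the moment a file is re-encoded, which is why the industry’s shared database stores perceptual hashes designed to match visually similar copies.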

Twitter and YouTube have also employed artificial intelligence and machine learning to block known content from being uploaded and shared, and to prevent repeat offenders from exploiting their platforms.

According to Juniper Downs, YouTube’s director of public policy, machine learning has allowed the platform to remove five times as many videos as before. In June 2017, YouTube’s algorithms were identifying only 40 percent of the videos ultimately taken down for promoting violent extremism. Today, she said, that number is 98 percent.

“Our advances in machine learning let us now take down nearly 70 percent of violent extremism content within 8 hours of upload and nearly half of it in 2 hours,” Downs said. It’s not an easy filtration process, either, considering users upload 400 hours of video every minute to YouTube.

Similarly, Twitter is employing algorithms and AI in its counter-extremism efforts, which have resulted in the suspension of more than 1.1 million Twitter accounts since mid-2015.

As of last year, Twitter’s proprietary technology identified 90 percent of the accounts the company suspended before anyone in the community flagged them. Moreover, three-quarters of those suspended accounts were cut off before they had a chance to send a single tweet, explained Carlos Monje, Twitter’s director of public policy and philanthropy.

Despite their advances, Monje admitted that “there is no magic algorithm for identifying terrorist content,” and Twitter is constantly having to evolve as the threat evolves.

Currently, the company is working to improve technology that prevents suspended users from simply opening new accounts to replace the ones they lost. Twitter is also working on a tool to prevent the distribution of propaganda in the aftermath of attacks, Monje said.
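Monje did not describe how that detection works. One plausible heuristic, sketched below with invented signal names and thresholds, is to score each new registration by how many signals (hashed contact details, device fingerprints, network ranges) it shares with previously suspended accounts:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signup:
    email_hash: str  # hashed contact details (hypothetical signal)
    device_id: str   # device fingerprint (hypothetical signal)
    ip_block: str    # coarse network range (hypothetical signal)

# Signals previously harvested from suspended accounts (invented values).
suspended_signals = {
    ("email_hash", "a1b2c3"),
    ("device_id", "dev-42"),
    ("ip_block", "198.51.100.0/24"),
}

def recidivism_score(s: Signup) -> int:
    """Count how many of a new signup's signals match suspended accounts."""
    signals = {
        ("email_hash", s.email_hash),
        ("device_id", s.device_id),
        ("ip_block", s.ip_block),
    }
    return len(signals & suspended_signals)

def flag_signup(s: Signup, threshold: int = 2) -> bool:
    """Flag for review when enough signals overlap; a production system
    would weight signals rather than count them equally."""
    return recidivism_score(s) >= threshold

# Two overlapping signals (device and network) trip the review flag.
suspect = Signup(email_hash="fresh", device_id="dev-42", ip_block="198.51.100.0/24")
assert flag_signup(suspect) is True
```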

LAWMAKERS AND EXPERTS SAY THESE EFFORTS ARE NOT GOOD ENOUGH

While the social media giants presented compelling descriptions of their programs, a number of lawmakers expressed concerns that they are still not doing enough to cooperate with the government.

Sen. Roger Wicker, R-Miss., asked Twitter’s Carlos Monje why the company refused to share information about its users with U.S. intelligence and law enforcement agencies, even though it makes that information available for purchase by third parties.

Monje explained that Twitter, as a policy, does not allow “the persistent surveillance” of its users, stressing the importance of protecting user privacy.

This is a policy that Wicker believes “should be revisited.”

“I think when a terrorist group is using a public platform, like Facebook or Twitter, then to me, they’re waiving their right to privacy,” Wicker told Sinclair Broadcast Group.

He continued that “certainly, the people who are trying to protect Americans ought to be able to surveil those sites and see what they can find out to make us safer and more secure.”

The ranking Democrat on the Commerce Committee, Sen. Bill Nelson of Florida, is hoping the social media giants will become even more transparent and more willing to cooperate with members of Congress and the government in general.

“I think they’re doing things in the right direction, but this is where you need 100 percent cooperation,” he said.

Nelson, who saw the carnage of the deadly 2016 Pulse nightclub terrorist attack in Orlando, remains concerned about ISIS’ continued ability to inspire terrorist attacks through social media, even as the group has lost almost all of its physical territory in Syria and Iraq.

“They’ve adapted,” he warned. “At the end of the day, they’re still doing great harm to us and we’ve got to get the cooperation of the social media engines in order to protect our people and our country.”

Experts, too, were skeptical, arguing that the social media companies’ actions are simply not commensurate with the threat.

Clint Watts, a Foreign Policy Research Institute expert in intelligence and in countering state and non-state actors, warned that the companies “get beat by a new terrorist group every few years,” essentially because they rely on information about the last attack or the last threat rather than anticipating the emerging threats on the horizon.

“AI and machine learning, even with its advances, can only detect what’s already been seen before,” he explained. “The problem is you’re always trailing whatever the threat actor is, you’re not getting out in front of it.”

Moreover, Watts noted a troubling trend among terrorist and extremist groups, which are migrating to smaller platforms with stronger encryption, like Telegram or WhatsApp, making their activities even more difficult to detect.

Other experts warned that despite the social media companies claiming they can identify and remove extremist content within hours of it being posted, they are still not moving fast enough.

According to the non-profit Counter Extremism Project, Google has yet to take down a bomb-making manual that was used in a thwarted terror attack in the U.K.

As of Tuesday, the YouTube page of extremist Abu Haleema was still accessible. Haleema has suspected ties to one of the terrorists in the June 2017 London Bridge attack and was previously arrested for attempting to travel to Syria to join an extremist group, the Counter Extremism Project reported.

In the past, lawmakers have taken a largely hands-off approach to imposing specific requirements on the internet giants, which have thrived in a regulation-free environment. Ultimately, the companies have made a compelling case that it is in their own best interest to ensure terrorists and extremists of all stripes have no place on their platforms, that their interests and the government’s are aligned, and that there is no reason to compel further cooperation or impose new regulations.

Senate Commerce Committee chairman John Thune of South Dakota anticipates that the government’s “light-touch” regulatory policy will continue, and the big tech companies will be allowed to continue self-regulating, even with the security concerns.

“We know how important these platforms are to extremist groups, to recruit and radicalize folks that will commit violent acts against Americans,” Thune told reporters. “We just want to make sure we’re staying on top of that issue and staying on top of what these companies, that have such powerful platforms, are doing to prevent that kind of activity.”

Overall, Thune said he was satisfied with the steps Facebook, Twitter and YouTube are taking to respond to the threat. “I don’t know at this point if it requires or necessitates additional action,” the senator added, noting the dialogue will continue.

PERMANENT LINK: http://komonews.com/news/nation-world/facebook-twitter-youtube-explain-what-theyre-doing-to-stop-terrorism
