San Francisco — Social media giant Facebook announced on Thursday, May 23, that it took down more than 3 billion fake accounts from October 2018 to March 2019, twice the number it removed in the previous six months.

Facebook said, however, that almost every fake account was caught before it even had the chance to become an active user on the social network.

The company reported that it has seen a “steep increase” in fake and abusive accounts over the last six months. Even though most of these accounts were blocked within minutes of being created, Facebook says the sheer volume of “automated attacks” by bad actors means that some accounts may have escaped detection.

Moreover, Facebook said that about 5 percent of the platform’s 2.4 billion monthly active users are now fake accounts. Before this most recent six-month period, the figure had stood at 3 to 4 percent.

This also indicates that while Facebook has gotten better at detecting and taking down accounts created by computers to spread online falsehoods, spam, and other questionable content, the creators of these false accounts have stepped up their efforts and may have become more successful.

But according to Alex Schultz, the vice president of Analytics for Facebook, “The number for fake accounts actioned is very skewed by simplistic attacks, which don’t represent real harm or even a real risk of harm. If an unsophisticated, bad actor tries to mount an attack and create a hundred million fake accounts — and we remove them as soon as they are created — that’s one hundred million fake accounts actioned. But no one is exposed to these accounts and, hence, we haven’t prevented any harm to our users. Because we remove these accounts so quickly, they are never considered active and we don’t count them as monthly active users.”

This is just the latest in a series of challenges the social media giant has faced, from fake news, to its role in election interference, to hate speech that may have contributed to violence in India, Myanmar, and the US, and even to the live-streaming on Facebook Live of a massacre at two mosques in New Zealand in March.

Facebook also announced that it had removed more than 7 million pictures, posts, and other pieces of content that violated the company’s rules against hate speech.

While Facebook employs thousands of people around the globe for the express purpose of evaluating content for possible violations, and uses artificial intelligence for the same purpose, errors in detection, both human and automated, have been made.

Former Facebook employee and White House tech policy adviser Dipayan Ghosh said that without greater transparency from the social media network, there is no way for the public to know whether the improved automated detection systems are more effective at solving the problems of disinformation and hate speech.

Mr Ghosh said, “We lack public transparency into the scale of disinformation operations on Facebook in the first place.”

He wondered how, if even only 5 million accounts escaped detection by the human and AI content evaluators, it would be possible to determine the amount of hate speech and disinformation bad actors are spreading through bots “that subvert the democratic process by injecting chaos into our political discourse?”

“The only way to address this problem in the long term is for government to intervene and compel transparency into these platform operations and privacy for the end consumer,” Mr Ghosh said. /TISG
