A report revealed that Facebook failed to crack down on extremist content


A new report claims that Facebook’s own tools labelled images of beheadings and violent hate speech from ISIS and the Taliban as “insightful” and “engaging”.

Extremists have turned to the social media platform as a weapon to “promote their hate-filled agenda and mobilize supporters” in hundreds of groups, according to a review of activity between April and December of this year.

The review found that these groups have mushroomed across the platform over the past 18 months and have varied in size from a few hundred to tens of thousands of members.

One pro-Taliban group was created this spring and grew to 107,000 members before it was deleted, according to a review published by Politico.

In general, extremist content “routinely passes through the network,” despite claims by Meta – the company that owns Facebook – that it is cracking down on extremists.

A new report said there were “dozens of groups” allowed to operate on Facebook that were supporting the Islamic State or the Taliban.

Meta’s stated position on terrorist groups

“We do not allow individuals or organizations involved in organized crime, including those designated by the United States government as specially designated narcotics trafficking kingpins (SDNTKs); hate; or terrorism, including entities designated by the United States government as foreign terrorist organizations (FTOs) or specially designated global terrorists (SDGTs), to have a presence on the platform. We also do not allow other people to represent these entities.

“We do not allow leaders or prominent members of these organizations to have a presence on the platform, symbols that represent them to be used on the platform, or content that praises them or their acts. Additionally, we remove any coordination of substantive support for these individuals and organizations.”

Taken from Meta’s Transparency Center

The groups were discovered by Moustafa Ayad, an executive director at the Institute for Strategic Dialogue, a think tank that tracks online extremism.

MailOnline has contacted Meta – the company led by CEO Mark Zuckerberg that owns several social media platforms including Facebook – for comment.

“It’s very easy to find these things online,” said Ayad, who shared his findings with Politico. “What happens in real life happens in the Facebook world.”

Reporting the content is essentially a form of trolling – it annoys group members and occasionally prompts a moderator to take notice, but the groups themselves are often not removed.

“This is what happens when there is a lack of moderation of content.”

Some offensive posts were labelled “insightful” and “engaging” by new Facebook tools released in November that were meant to boost community interactions.

Politico found that the posts defended the violence of Islamist extremists in Iraq and Afghanistan, including videos of suicide bombings, and “calls to attack opponents across the region and in the West.”

In several groups, rival Sunni and Shiite militias were reported to be trolling each other by posting pornographic images, while Islamic State supporters shared links to terrorist propaganda sites and “insulting memes” attacking their rivals.

Meta – known as Facebook until the end of October – is led by Mark Zuckerberg (pictured)

Facebook changes its name to Meta to distance itself from scandals

In October, Facebook (the company, not the product) changed its name to “Meta”.

The name reflects Zuckerberg’s new ambition to build the “metaverse” – a shared virtual space featuring avatars of real people.

But the decision was seen as an attempt by CEO Mark Zuckerberg to distance his company from escalating scandals, after leaked whistleblower documents alleged that its platforms harmed users and fueled anger.

The company has been mired in deep crisis since whistleblower Frances Haugen leaked internal documents and claimed it was “putting profit over people” by knowingly harming teens with its content and fueling anger among users.

Meta removed the Facebook groups promoting extremist Islamist content after they were reported by Politico.

However, dozens of pieces of ISIS and Taliban content still appeared on the platform, which Politico says points to Facebook’s failure “to prevent extremists from exploiting the platform”.

In response, Meta said it has invested heavily in artificial intelligence (AI) tools to automatically remove extremist content and hate speech in more than 50 languages.

“We recognize that our enforcement is not always perfect, which is why we are reviewing a range of options to address these challenges,” Meta spokesperson Ben Walters said in a statement.

The problem is that much extremist Islamist content is written in local languages, which is difficult for Meta’s predominantly English-speaking staff and its English-trained detection algorithms to catch.

“In Afghanistan, where nearly five million people log on to the platform every month, the company had few local language speakers to monitor content,” Politico reports.

“Because of this local staff shortage, less than 1 percent of hate speech has been removed.”

Adam Hadley, director of the non-profit Tech Against Terrorism, said he’s not surprised Facebook struggles to detect extremist content, because its automated filters aren’t sophisticated enough to flag hate speech in Arabic, Pashto or Dari.

“When it comes to non-English content, there is a failure to focus enough machine-learning algorithm resources on combating it,” Hadley told Politico.

Meta previously said it had “identified a wide range of groups as terrorist organizations based on their behavior rather than their ideologies.”

“We don’t allow them to be on our services,” the company says.

In July of this year, Facebook started sending users notifications asking if their friends were “radicalised”.

Screenshots shared on Twitter showed one notification asking: “Are you concerned that someone you know is becoming an extremist?”

In July, Facebook users started receiving scary notifications asking if their friends were “radicalised”.

Other users were warned: “You may have been exposed to harmful extremist content recently.” Both notifications included links to “Get Support”.

Meta said at the time that the small test was running only in the United States, as a pilot for a global approach to preventing radicalization on the site.

The world’s largest social media network has long been under pressure from lawmakers and civil rights groups to combat extremism on its platforms.

That pressure escalated in 2021, following the January 6 riot at the US Capitol, when groups supporting former President Donald Trump tried to prevent the US Congress from certifying Joe Biden’s victory in the November election.

Facebook’s privacy disasters

April 2021: Hackers leaked the phone numbers and personal data of 533 million Facebook users online.

July 2019: Facebook data scandal: Social network fined $5 billion for ‘inappropriate’ sharing of users’ personal information

March 2019: Facebook CEO Mark Zuckerberg promised to rebuild based on six “privacy-focused” principles:

  • Private interactions
  • Encryption
  • Reducing permanence
  • Safety
  • Interoperability
  • Secure data storage

Zuckerberg promised end-to-end encryption for all messaging services, which would be integrated in a way that would allow users to communicate via WhatsApp, Instagram Direct and Facebook Messenger.

December 2018: Facebook came under fire after a bombshell report found that the company had allowed more than 150 companies, including Netflix, Spotify and Bing, to access unprecedented amounts of user data, such as private messages.

Some of these “partners” had the ability to read, write, and delete private messages of Facebook users and see all participants in a thread.

It also allowed Microsoft’s search engine, Bing, to see the names of all Facebook users’ friends without their consent.

Amazon was allowed to obtain users’ names and contact information through their friends, and Yahoo could view streams of friends’ posts.

September 2018: Facebook revealed it had suffered its worst ever data breach, affecting 50 million users – including Zuckerberg himself and COO Sheryl Sandberg.

The attackers took advantage of the site’s “view as” feature, which allows people to see what their profiles look like to other users.

Anonymous attackers took advantage of a feature in the code known as “access tokens” to take over people’s accounts, which could have given them access to private messages, photos and posts – although Facebook said there was no evidence of this happening.

The hackers also tried to collect people’s information, including name, gender, and city, from Facebook’s systems.

Zuckerberg assured users that passwords and credit card information were not accessed.

As a result of the breach, the company logged nearly 90 million people out of their accounts as a security measure.

March 2018: Facebook made headlines after the data of 87 million users was improperly accessed by Cambridge Analytica, a political consulting firm.

This disclosure prompted government investigations into the company’s privacy practices around the world, and led to the “#deleteFacebook” movement among consumers.

The communications firm Cambridge Analytica had offices in London, New York and Washington, as well as in Brazil and Malaysia.

The company prided itself on being able to “find your voters and move them to action” through data-driven campaigns and a team that included data scientists and behavioral psychologists.

Cambridge Analytica claimed on its website that “within the United States alone, we have played a pivotal role in winning presidential races and congressional and state elections,” with data on more than 230 million American voters.

The company took advantage of a feature that allowed apps to ask for permission to access not only a user’s private data but also that of all their Facebook friends.

This meant the company was able to mine the information of 87 million Facebook users, even though only 270,000 people gave it permission to do so.

This was designed to help the firm create software that could predict and influence voters’ choices at the ballot box.

The data company suspended its chief executive, Alexander Nix, after recordings emerged of him making a series of controversial claims, including bragging that Cambridge Analytica was pivotal in the election of Donald Trump.

This information is said to have been used to help the UK’s Brexit campaign.


