March 27, 2021

Facebook’s tech regulation idea isn’t as transparent as it looks

Facebook Inc. Chief Executive Officer Mark Zuckerberg pushed his idea this week that Big Tech can self-police content by publishing reports and data on how well the industry removes objectionable posts. The problem is Facebook has a system in place already that’s done little to improve accountability, according to outside experts.

“Transparency can help hold the companies accountable as to what accuracy and effectiveness they’re achieving,” Zuckerberg told Congress on Thursday. Facebook wouldn’t have to change much if such a system were the industry norm, he added. “As a model, Facebook has been doing something to this effect for every quarter.”

Zuckerberg has pushed his proposal many times amid widening calls to make social media companies more responsible for the content users post. As tech platforms come under fire for an increase in harmful posts, from hate speech to threats of violence, US lawmakers are debating how to reform Section 230 of the Communications Decency Act, which shields companies from liability for user-generated content.

While a crackdown on Big Tech has been deliberated for years, the call for renewed action comes after social media companies were criticized for playing a part in spreading misinformation that fueled the Capitol riots in January and false claims about Covid-19. Thursday’s hearing brought Congress no closer to a legislative solution, giving Facebook an opportunity to influence the outcome.

“If one company does something, it at least allows the discussion to move forward," said Jenny Lee, a partner at Arent Fox LLP who has represented technology clients on Section 230.

However, the self-reported numbers aren’t as transparent as they sound. Facebook, for instance, reported in February that more than 97% of content categorized as hate speech was detected by its software before being reported by a user, and that it acted on 49% of bullying and harassing content on its main social network in the fourth quarter before it was flagged by users, up from 26% in the third quarter. But the denominator of those percentages is the content Facebook removed, not the total amount of harmful content on the platform. And Facebook doesn’t share how many people viewed the posts before they were removed, or how long they stayed up.

“It was a bit shocking and frustrating that Zuckerberg was mentioning that report as something that the industry should aspire to," said Fadi Quran, campaigns director at Avaaz, which tracks misinformation and other harmful content on Facebook. When the social media company disclosed how much violent content it removes, “did they take it down within minutes or within days?" he added.

The report focuses on AI, which means it doesn’t disclose how much content Facebook’s users flag as policy violations, what share of that content is removed once reported, or whether those reports are reviewed at all.

A system like Facebook’s, which relies on machine learning, has significant flaws when applied broadly, according to Emma Llansó, a director at the Center for Democracy & Technology. “You really start increasing the risk that the automated systems are going to miss something by having false negatives, and have false positives where totally acceptable speech is taken down in error."

The pitfalls of Facebook’s reliance on AI were outlined earlier this year by the company’s external oversight board, an independent panel that Facebook created to review its most contentious content decisions. The board recently overturned Instagram’s decision to remove an image raising awareness for breast cancer symptoms, even though breast cancer awareness was an allowed exception to the company’s nudity policy.

“The incorrect removal of this post indicates the lack of proper human oversight which raises human rights concerns,” the board wrote. “The detection and removal of this post was entirely automated.” The panel recommended that users be notified when their content is taken down by AI and given the option to have a human review an appeal.

Facebook has said giving users that kind of option would be operationally difficult. Its services have more than 3 billion users and about 15,000 content moderators, some of whom are working from home because of the pandemic and, for legal reasons, can’t review the most sensitive content outside the office.

The shortage of human staff, along with AI that is still in development, poses particular challenges for Facebook’s global network. “We need to build systems that handle this content in more than 150 languages,” Zuckerberg said Thursday. “And we need to do it quickly. And unfortunately, there are some mistakes in trying to do this quickly and effectively.”

The content transparency reports contain no data about the languages or geography of the posts Facebook enforces its rules against. They also say nothing about misinformation, another key area of concern for lawmakers.

“That transparency report gives almost zero transparency," Quran said.

This story has been published from a wire agency feed without modifications to the text.

Source: Livemint
