Google and Facebook executives were grilled Tuesday over their inability to police white nationalist content on their platforms, but they gave few answers about why their algorithms are dinging conservative content.
Neil Potts, Facebook’s director of public policy, and Alexandria Walden, counsel for free expression and human rights at Google, spoke to the House Judiciary Committee alongside activists from the Anti-Defamation League, among other groups. House lawmakers asked the two executives about the effectiveness of their companies’ artificial intelligence.
“That’s why hate speech and violent extremism have no place on YouTube,” Walden said during the House Judiciary Committee hearing, noting what Google is doing to combat white nationalism on its platform. Walden and Potts also noted that their algorithms sometimes have a difficult time teasing out the difference between legitimate forms of speech and language that is not permitted.
“We don’t and we won’t always get it right, but we’ve improved significantly,” Potts added, referring to Facebook’s new stance on nixing white nationalism. He noted that the company is not prohibiting people from expressing their love for country and community, but it does not permit bigotry and hatred. Walden made similar comments.
“Hate speech removals can be particularly complex compared to other types of content,” she said. “Hate speech, because it often relies on spoken rather than visual cues, is sometimes harder to detect than some forms of branded terrorist propaganda. It’s intensely context specific.” Neither executive gave a detailed explanation as to whether their companies are capable of making these distinctions.
Conservatives meanwhile argue that Facebook is targeting them because of their politics. President Donald Trump’s social media director Dan Scavino Jr., for instance, was temporarily blocked in March from making public Facebook comments.
The notification claimed that “some of your comments have been reported as spam,” and that “to avoid getting blocked again,” he should “make sure your posts are in line with the Facebook Community Standards.” Trump assured his supporters in a March 19 tweet that he “will be looking into this!”
Tech experts have expressed concerns that Facebook and Google are not up to the task of distinguishing between legitimate content and hate speech. Emily Williams, a data scientist and founder of Whole Systems Enterprises, for one, argues that Facebook’s lack of transparency about the frailties of its AI and deep learning tools makes it difficult for people to understand why and how content is being throttled.
“I think that is a very big stretch,” she told The Daily Caller News Foundation in March, referring to media reports that Facebook might be using code designed to deboost suicide content as a way to target conservatives. “If Facebook came out and was transparent, that would be one thing, but a lot of people are arrogant and don’t want to admit their algorithms are imperfect,” Williams said.
She added: “If I were trying to weed out extremists I would not use this code. These codes are very good at what they are trained for but not very good at anything else.” Facebook is a profit-driven corporation, so even if it wanted to target conservatives or liberals, it would probably not repurpose code for something other than what it was designed to do, Williams noted.
Williams believes that Facebook’s algorithm likely has about a 70 percent success rate, which means that roughly 30 percent of the time the company’s moderators are nixing conservatives who are sharing provocative content that is not actually prohibited by the Silicon Valley company.
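As a rough back-of-the-envelope illustration of what such an error rate would imply (the 70 percent figure is Williams’ estimate; the flagged-post volume and the simplified framing below are hypothetical, not Facebook’s actual metrics), a system that is right about 70 percent of the time would mistakenly remove roughly 300 of every 1,000 posts it flags:

    # Hypothetical sketch of Williams' estimate; everything other than the
    # 70 percent figure is illustrative, not a real Facebook metric.
    success_rate = 0.70    # Williams' rough estimate of the algorithm's accuracy
    flagged_posts = 1000   # hypothetical number of posts flagged in some period

    wrongly_flagged = round(flagged_posts * (1 - success_rate))
    print(f"Of {flagged_posts} flagged posts, roughly {wrongly_flagged} "
          f"could be provocative but permitted content removed in error.")
    # -> Of 1000 flagged posts, roughly 300 could be provocative but permitted
    #    content removed in error.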
Walden and Potts were often asked during the congressional hearing, which was chaired by Democratic Rep. Jerry Nadler of New York, about their handling of the New Zealand shooting. Facebook and Twitter struggled to remove video of the shootings at two mosques in March, and some analysts criticized the companies’ inability to immediately ding such content.
Follow Chris White on Facebook and Twitter
All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact [email protected].