Facebook announced Thursday that it will start labeling political ads featured on the platform so that users can better understand where such content comes from.
Election-related and issue ads on Facebook and Instagram will carry a “Paid for by” disclosure, which will link to further information about the source of the sponsorship.
“For example, the campaign budget associated with an individual ad and how many people saw it – including their age, location and gender,” Rob Leathern, Facebook’s director of product management, wrote in a company blog post. “People visiting the archive can see and search ads with political or issue content an advertiser has run in the US for up to seven years.”
On a phone call with the media, Facebook executives said that the ability to see an ad’s reach, including the demographics of the people it reached, is just one of several important aspects of the change.
Also, any advertisers who apply to run political ads in the U.S. will be required to provide and verify their location and identity.
“We believe that increased transparency will lead to increased accountability and responsibility over time – not just for Facebook but advertisers as well,” Leathern continued, adding that people should report any political ads that are not labeled. “We’re investing heavily in more people and better technology to proactively identify abuse.”
Leathern doesn’t explicitly describe past events and revelations, but he alludes to “interference in elections on Facebook,” arguing that’s “why [these changes] are so important.”
In a separate but concurrently published post, however, Katie Harbath, Facebook’s global politics and government outreach director, and Steve Satterfield, director of public policy, are more direct.
“Political advertising serves an important purpose. It helps candidates share their views with the public more broadly, and it can help encourage people to get involved in the political process,” they explained in the blog post Thursday. “But political ads can also stoke partisanship or fear as well as manipulate and deceive — all of which we experienced with the Russian-backed ads on Facebook before, during and after the 2016 US presidential election.”
Facebook admitted to congressional investigators in September that it sold political ads to a Russian firm during the run-up to the 2016 presidential election, a disclosure that contributed to lawmakers’ decision to request that CEO Mark Zuckerberg (and, at other points, executives of other social media companies) testify at an official hearing.
Some have argued that Kremlin-bought Facebook ads helped sway the election in favor of President Donald Trump, since many appeared supportive of the then-Republican candidate or of Vermont Sen. Bernie Sanders, former Secretary of State Hillary Clinton’s primary opponent. Many other ads, however, seemed bent simply on sowing divisiveness (as Harbath and Satterfield noted) among an already divided American populace.
Nevertheless, Facebook — like others in the industry — allegedly pushed hard for advertisers to spend ad money without much apparent concern for who was doing so. The tech giant helped advertisers with targeting during the 2016 election season by offering a detailed template of how it perceives the U.S. electorate to be ideologically split, according to BuzzFeed News.
Facebook felt the need to explain that banning political ads altogether would limit the ability of less prominent and local candidates to get their message out, since they often “can’t afford larger media buys.”
Harbath and Satterfield also said that being able to raise awareness of issues is important, leading them to agree “that the benefits outweighed the potential harm.”
“Key here will be transparency. Because greater transparency will lead to increased responsibility and accountability over time for advertisers,” they wrote.
Facebook defines “political” ads as those related to either elections or issues. It is defining “issues” with a preliminary list that is likely open to additions down the road, and it has a policy that attempts to clarify when a post is addressing or taking a position on one of those issues.
“To enforce the policy, we’ll check both the images and text in an ad, and who is being targeted. And, if the ad sends a person to an outside website, we’ll check the landing page as well,” Harbath and Satterfield explained, adding that the company’s technology (artificially intelligent algorithms) will try to remove ads from advertisers that have not been authorized. “We won’t always get it right. We know we’ll miss some ads and in other cases we’ll identify some we shouldn’t.”
Facebook also said during the media briefing that it plans to hire 3,000 to 4,000 more human content moderators to bolster its ever-developing algorithms.
While the blog posts make no mention of it, the reforms seem designed with new rules under consideration at the Federal Election Commission in mind. The FEC’s draft notice of proposed rulemaking, approved unanimously by the commission, asks for feedback before a June 27 hearing on the issue and primarily aims to extend the agency’s political disclaimer rules to websites and any “internet-enabled device or application.”
Harbath, in response to a question from The Daily Caller News Foundation during the conference call, said the reforms have been in the works for a while but picked up pace in April. She and another executive said they are “aware” of such developments from regulators and of prospective legislation, and in some way “take it into account,” but are not waiting for changes to be forced upon them.