But the potential misuse of those tools has become a national concern in the past year, particularly after Facebook disclosed last week that fake accounts based in Russia had purchased more than $100,000 worth of ads on divisive issues in the lead-up to the presidential election.
“It’s shocking because it’s illustrating the degree of targeting that’s possible,” said Eli Pariser, the author of “The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think.” “But I think the critical piece of context is this is happening when we know that a foreign country used targeted Facebook ads to influence opinion around an election.”
He added: “Before all of this, you could see the rise of targeted advertising, you could see the rise of social politics, but the conjunction of the two in this way feels new.”
Facebook’s self-service ad-buying platform allowed advertisers to direct ads to the news feeds of about 2,300 people who said they were interested in anti-Semitic subjects, according to the article by ProPublica. Facebook’s algorithms automatically generated the categories from users’ profiles.
Reporters from ProPublica tested Facebook advertising categories to see whether they could buy ads aimed at people who expressed interest in topics like “Jew hater,” “How to burn jews,” and “History of ‘why jews ruin the world.’” The reporters paid $30 to promote ProPublica posts to the people affiliated with the anti-Semitic categories to ensure they were real options, according to the investigation, which noted that Facebook had approved the posts within 15 minutes.
Facebook said in a statement that users had entered the terms in the “employer” or “education” fields of their profiles. Doing so violated the company’s policies, Facebook said, and led to the terms’ appearance in the ad-buying tool.
The company said it would remove targeting by such self-reported fields “until we have the right processes in place to help prevent this issue.” It added that “hate speech and discriminatory advertising have no place on our platform.”
After the ProPublica report, BuzzFeed conducted a similar test on Google, where ads are purchased based on potential search terms. The site reported that upon entering terms like “why do Jews ruin everything” and “white people ruin,” the automated system suggested long lists of offensive “keyword ideas” like “black people ruin neighborhoods” and “Jewish parasites.” It then allowed the purchase of some of the terms for ads.
Google said that it notified advertisers when their ads were rejected as offensive, and that not all suggested keywords were eligible for purchase.
“In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions,” Sridhar Ramaswamy, Google’s senior vice president of ads, said in a statement. “That’s not good enough, and we’re not making excuses. We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again.”
The Daily Beast noted on Friday that Twitter was also allowing people to target ads based on some racial slurs. But the greater scrutiny is on Facebook and Google, given their sheer size and dominance of the online advertising business, which brings each company tens of billions of dollars in revenue a year.
Last week, Facebook representatives briefed the Senate and House Intelligence Committees, which are investigating Russian intervention in the election, about ads on the site. The company told congressional investigators that it had identified more than $100,000 worth of ads on hot-button issues that were traced back to a Russian company with links to the Kremlin.
The ads — about 3,000 of them — focused on divisive topics like gay rights, gun control, race and immigration, and they were linked to 470 fake accounts and pages that Facebook subsequently took down, according to its chief security officer. Facebook has not released copies of the ads to the public.
Last fall, Facebook came under fire after ProPublica reported that advertisers could use its targeting to exclude certain races, or what the social network called “ethnic affinities,” from housing and employment ads, a potential violation of the Fair Housing Act of 1968 and the Civil Rights Act of 1964. Facebook, which assigns the updated term “multicultural affinity” to certain users based on their interests and activities on the site, no longer allows it to be used in ads for housing, employment or credit.
This series of advertising problems makes the company look unprepared to handle the power of its ad system, said Benjamin Edelman, an associate professor at Harvard Business School.
“They’ve created a very complicated ad platform — it has all kinds of options and doodads and things working automatically and manually, and they don’t know what they built,” Professor Edelman said. “The machine has a mind of its own.”
Mr. Pariser said the types of targeting reported this week made a strong argument for increased disclosure of the funding behind political ads online, especially on Facebook. The Federal Election Commission voted on Thursday to seek public comment on disclosure requirements around online political ads, which advocates hope will lead to rules requiring more disclaimers revealing who paid for online content.
“This is drawing a new level of public awareness to how targeted advertising can be used to manipulate and affect politics and political conversation in ways that didn’t used to be feasible at all or easy,” Mr. Pariser said.