Twitter sorry for helping advertisers reach homophobes

Twitter has apologized for making it possible for advertisers to target users with anti-LGBTQ, racist, or other bigoted views.

The apology came after an investigation by the BBC. The broadcaster “found it possible to target users who had shown an interest in keywords including ‘transphobic,’ ‘white supremacists’ and ‘anti-gay,’” the BBC reports on its website.

Twitter, like other social media sites, provides user data to advertisers wishing to reach a specific audience. Keywords provide a way for ads to be targeted even more specifically, and Twitter estimates how many users would be reached by a certain keyword.

“For example, a car website wanting to reach people using the term ‘petrolhead’ would be told that the potential audience is between 140,000 and 172,000 people,” the BBC reports. But the broadcaster also found “that it was possible to advertise to people using the term ‘neo-Nazi,’” according to its account.

“The ad tool had indicated that in the UK, this would target a potential audience of 67,000 to 81,000 people,” the BBC notes. “Other more offensive terms were also an option.” The BBC created a generic ad using an anonymous account and targeted it with three different offensive terms — it did not specify which — and the ad was approved. “Targeting an advert using other problematic keywords seemed to be just as easy to do,” the BBC continues, explaining that the ad tool OK’d the word “Islamophobia” and also made it possible to direct ads toward vulnerable groups, such as people with eating disorders.

Hope Not Hate, a U.K. antiracism group, saw danger in the use of such terms for ad targeting. “I can see this being used to promote engagement and deepen the conviction of individuals who have indicated some or partial agreement with intolerant causes or ideas,” Patrik Hermansson, Hope Not Hate’s social media researcher, told the BBC.

Twitter officials told the BBC that the company had policies prohibiting the use of keywords that play into bigotry, but that those policies had not been enforced properly. “[Our] preventative measures include banning certain sensitive or discriminatory terms, which we update on a continuous basis,” a statement from the company said. “In this instance, some of these terms were permitted for targeting purposes. This was an error. We’re very sorry this happened and as soon as we were made aware of the issue, we rectified it.”