Dismissing this increased vulnerability or openness as a price to pay for access — that taking on this risk is part-and-parcel of being a platform user — isn’t an adequate response, if for no other reason than that tradeoff is rarely, if ever, explicitly put to us when we sign up. Instead, we’re offered something quite different.
“We’d like to bring the nuance and richness of real-life sharing to software. We want to make Google better by including you, your relationships, and your interests,” Google explained when it launched Google+ in 2011. “You and over a billion others trust Google, and we don’t take this lightly. In fact, we’ve focused on the user for over a decade: liberating data, working for an open Internet, and respecting people’s freedom to be who they want to be.” Facebook’s language — even now — is similar. Facebook, CEO Mark Zuckerberg wrote in 2017, “stands for bringing us closer together and building a global community.”
Even at times when ad-based platforms have helped people be who they want to be, or build communities, that vulnerability — to the influence of foreign actors, to persuasion, or to security lapse — never disappears. In some cases, those expressions of self and those communities only increase it.
Between 2015 and 2017, the Internet Research Agency (IRA), the Russian "troll farm" linked to targeted misinformation campaigns on Facebook in the run-up to the 2016 election, focused heavily on Black Lives Matter (BLM).
Social platforms like Facebook helped the movement gain support and attention, granting a voice and an audience to activists and supporters and connecting them with like-minded people in their cities and states. Yet that activism and connection made the people who joined BLM groups on Facebook vulnerable — not necessarily to authorities like police agencies, but to other, unexpected forces, like the IRA.
As an investigation by the U.S. House Intelligence Committee revealed earlier this year, BLM activists and groups were frequently targeted by the IRA. The trolls sought to exploit the movement’s legitimate goals in order to deepen ideological discord between its members and other sectors of American society.
As April Glaser wrote at Slate, the IRA was using Facebook's "ad-targeting tools just as they were intended: to reach specific groups of people with specific interests, as revealed through their Facebook likes and listed enthusiasms." And it was able to do that because, behind the scenes, algorithms could work away at testing those groups (which messages they liked most, which ones were more likely to provoke them) while simultaneously testing other groups, such as people who might hate Black Lives Matter, on the same metrics.
Facebook's ad-targeting setup "can be exploited by anyone looking to target people based on negative stereotypes, racial profiling, and extremely specific points of interest, hitting people with just the right kind of messaging that will provoke a reaction," Glaser wrote. Eventually, all kinds of users, not just those who voluntarily joined BLM groups or activities on Facebook, might have been susceptible to messaging framed around the movement. That messaging could have shifted their viewpoints without them even knowing how.