Predictive analytics came into politics in a big way in 2008, when the Obama campaign used big data to target likely voters and build “personality” models, among many other things.

As analytics in political campaigns has expanded, individual voters of both parties can now expect messages tailor-made just for them: flyers and other materials pitched to their particular personalities and beliefs.

This new era of “fake news,” however, raises questions about how big data can be used to elicit certain behavior from voters via less-than-truthful content.

Alexander Nix, CEO of Cambridge Analytica, talked about this issue at last year’s Concordia Summit.

Scientific American summarizes the presentation:

Using the example of gun rights, Nix described how messages will be crafted to appeal specifically to you, based on your personality profile. Are you highly neurotic and conscientious? Nix suggests the image of a sinister gloved hand reaching through a broken window.

In his presentation, Nix noted that the goal is to induce behavior, not communicate ideas. So where does truth fit in? Johan Ugander, Assistant Professor of Management Science at Stanford, suggests that, for Nix and Cambridge Analytica, it doesn’t. In counseling the hypothetical owner of a private beach how to keep people off his property, Nix eschews the merely factual “Private Beach” sign, advocating instead a lie: “Sharks sighted.” Ugander, in his critique, cautions all data scientists against “building tools for unscrupulous targeting.”
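To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of profile-matched message selection Nix describes. Every name and number below is invented for illustration; a real system would rely on trained response models rather than hand-tuned trait weights, and nothing here reflects Cambridge Analytica’s actual code.

```python
# Hypothetical sketch of psychographic ad targeting: pick the message
# variant whose trait profile best matches a voter's Big Five ("OCEAN")
# scores. All variants, traits, and weights are invented for illustration.

# Ad variants, each keyed to the personality traits it is meant to exploit.
AD_VARIANTS = {
    "threat": {"neuroticism": 0.9, "conscientiousness": 0.8},  # gloved hand at the window
    "tradition": {"conscientiousness": 0.9, "openness": 0.2},  # heritage framing
    "liberty": {"openness": 0.8, "extraversion": 0.6},         # abstract rights framing
}

def score(variant_profile: dict, voter_profile: dict) -> float:
    """Similarity between an ad's target traits and a voter's trait scores.
    Higher means the ad is predicted to resonate more."""
    shared = set(variant_profile) & set(voter_profile)
    # Simple dot product over shared traits; a real system would use a
    # trained response model instead of a hand-tuned heuristic.
    return sum(variant_profile[t] * voter_profile[t] for t in shared)

def pick_ad(voter_profile: dict) -> str:
    """Return the ad variant with the highest predicted resonance."""
    return max(AD_VARIANTS, key=lambda name: score(AD_VARIANTS[name], voter_profile))

# A "highly neurotic and conscientious" voter, per Nix's example:
voter = {"neuroticism": 0.85, "conscientiousness": 0.9,
         "openness": 0.3, "extraversion": 0.4}
print(pick_ad(voter))  # -> "threat"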

The warning is needed, but it may come too late. What Nix described in his presentation involved carefully crafted messages aimed at his target personalities. Those messages pulled subtly on psychological strings to manipulate us, and they obeyed no boundary of truth, but they still required humans to create them. The next phase will be the gradual replacement of human “craftsmanship” with machine learning algorithms that can supply targeted voters with a steady stream of content (from whatever source, true or false) designed to elicit desired behavior. Cognizant of the Pandora’s box that data scientists have opened, the scholarly journal Big Data has issued a call for papers for a future issue devoted to “Computational Propaganda.”
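One way to picture that next phase is a feedback loop that optimizes content purely for response. The epsilon-greedy bandit below is a minimal sketch under that assumption; the content names, segment, and response rates are all invented, and no real campaign’s or platform’s system is depicted. Note that nothing in the objective rewards truth.

```python
# Minimal epsilon-greedy bandit sketch: an algorithm that learns, per voter
# segment, which content items (true or false alike) best elicit a target
# behavior such as a click or a share. Everything here is illustrative.
import random
from collections import defaultdict

CONTENT_POOL = ["story_A", "story_B", "story_C"]  # sources unvetted by design
EPSILON = 0.1  # fraction of traffic spent exploring other content

# Running statistics: impressions and observed responses per (segment, item).
shows = defaultdict(int)
responses = defaultdict(int)

def choose(segment: str) -> str:
    """Pick content for a voter segment: usually the best performer so far,
    occasionally a random item to keep exploring."""
    if random.random() < EPSILON:
        return random.choice(CONTENT_POOL)
    # Nothing in this objective rewards accuracy, only response rate.
    return max(CONTENT_POOL,
               key=lambda c: responses[(segment, c)] / max(shows[(segment, c)], 1))

def record(segment: str, item: str, responded: bool) -> None:
    """Update statistics after observing whether the voter took the bait."""
    shows[(segment, item)] += 1
    responses[(segment, item)] += int(responded)

def simulate_response(item: str) -> bool:
    # Hypothetical ground-truth response rates for one segment.
    rates = {"story_A": 0.02, "story_B": 0.05, "story_C": 0.11}
    return random.random() < rates[item]

# Simulated feedback loop: the bandit converges on whatever content
# "works" for the segment, truthful or not.
for _ in range(5000):
    item = choose("neurotic_conscientious")
    record("neurotic_conscientious", item, simulate_response(item))

print(choose("neurotic_conscientious"))  # almost always "story_C"
```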

Stanford Professor Johan Ugander seized on Nix’s beach-sign anecdote as the encapsulation of an ethical question that data scientists may soon have to ask themselves. He had this to say in a post on Medium:

In fewer words: crafting lies, and then targeting them.

And therein lies a major ethical problem: it’s one thing to personalize content, “telling the story that’s most persuasive for a given individual,” which itself raises important ethical questions (cf. ideas around libertarian paternalism). But it’s another matter entirely to “tell the lie that’s most persuasive.” It’s not clear that Cambridge Analytica has actually crossed into that territory — my complaint above was first and foremost a reaction to Nix’s unscrupulous metaphor — but the mash-up of personalization and “alternative facts” is a dark side of the force that I hadn’t considered until recently, and I really hope that we academics can train thoughtful data scientists who reject such applications of their skills.

Photo by Angela N via Flickr CC License