By Wil Forbis
Several weeks ago, I discovered an interesting post on reddit.com. A Venezuelan twentysomething was planning on moving from his increasingly dangerous native country to Spain and he needed advice on finding work and living quarters in a new country.
I read through the sympathetic responses to his questions with interest. Then something struck me: what if this post wasn’t real? What if no beleaguered Venezuelan existed, but rather this was part of a psyops operation orchestrated by a political agency, perhaps an anti-government group in Venezuela or the CIA?
Did I really think the post was a fake? Of course not; I presumed it to be authentic. The crux of my thought was that I had no way of definitively knowing whether it was real. And that’s because it’s hard to know what is real or fake on the Internet.
Let’s explore the possibility this post was a fraud perpetrated by the U.S. Government. What would be the point? To help destabilize the Venezuelan government. Such an act would have little effect on its own, but a campaign of fake posts/blogs/web site articles could be powerful. Psyops is a battle for hearts and minds. Imagine an average fellow (we’ll call him Dave) reading a blog about Venezuelans struggling against their government. He goes to work and his left-leaning cubicle mate comments on the appeal of communism as practiced by countries like Cuba and Venezuela. Dave volleys back using the information in the blog as ammunition. Maybe he wins the argument, or at least gets his coworker to concede that not everything is fine and dandy in commie-land. Repeat that process, that conversation, a half million times and opinions shift all over the world.
I used to roll my eyes a bit when people would theorize about these kinds of psyop practices. The idea of a CIA spook sitting at a computer, typing blog posts and creating fake Facebook accounts just sounded so ineffective. Now… not so much. These operations could be bloodless tools for effecting national or foreign policies. (This Guardian article points to the U.S. Military performing such operations, though purportedly aimed only at non-U.S. citizens.)
Governments aren’t the only ones who could do this. Let’s say someone goes to a health message board on the web and asks how to end their acne. Several people respond and one poster in particular, a frequent commenter on the site, says, “I totally cured my acne with Brand X.” How do we know this person isn’t an employee of Brand X? Such a profile would be termed a sock puppet: a person on the Internet pretending to be someone else to advance their own interests. In the world of fiction writing, several authors have been caught using sock puppet profiles to inform the world of their genius while damning competitors.
I’ll raise an obvious question. With the rise of chatbots and A.I. programs, how long until computer programs themselves are the fakers? Software could create profiles on message boards and social media and then drop the desired opinions into conversations in a seemingly natural way. (“Funny you mention prune juice, Bob. I’m convinced it was prune juice combined with my Brand X skin cream that cured my psoriasis.”)
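How hard would such a bot be to build? Not very. Here’s a toy sketch (everything in it is invented for illustration, not taken from any real botnet): a reply generator that stays quiet unless a post mentions one of its target topics, then fills in a canned testimonial so the plug appears conversational.

```python
import random

# Hypothetical shill bot. Topics, templates, and the brand are all invented.
TARGET_TOPICS = {"acne", "psoriasis", "eczema"}
TEMPLATES = [
    "Funny you mention {topic}, {author}. Brand X cleared up my {topic} in a week.",
    "I struggled with {topic} for years until I found Brand X. Worth a look.",
]

def shill_reply(author, post):
    """Return a 'natural'-sounding plug if the post mentions a target topic."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    hits = TARGET_TOPICS & words
    if not hits:
        return None  # stay silent; an off-topic plug would look robotic
    topic = hits.pop()
    return random.choice(TEMPLATES).format(topic=topic, author=author)

print(shill_reply("Bob", "Any advice for stubborn acne?"))
```

The unsettling part is that this crude keyword matcher is the floor, not the ceiling; a modern language model could generate the reply from scratch, with no templates to give it away.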
So far, I’ve talked only about the danger posed by fake personas online. What about the web environment and the way search results and interesting news stories fall into our lap? Is that deceitful?
Much is made of the fact that Google tailors your search results based on your past behavior. If you tend to click on web sites with a particular ideological viewpoint, then Google will increase the prevalence of those web sites in your search results. Many people fear this creates a filter bubble that exposes people only to views they already hold. As a result, internet users get a skewed view of the opinions of their fellow citizens.
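The basic mechanism is simple enough to sketch. This is a minimal toy version, not Google’s actual algorithm (their ranking signals are proprietary; every name and number here is an assumption): results from sites you’ve clicked before get their relevance boosted, so they float to the top.

```python
from collections import Counter

def personalized_rank(results, click_history, boost=2.0):
    """Re-rank results so sites the user has clicked before score higher.

    results: list of (site, base_relevance) pairs
    click_history: list of sites the user clicked in the past
    """
    clicks = Counter(click_history)
    def score(item):
        site, relevance = item
        # Each past click on a site multiplies its effective relevance.
        return relevance * (1 + boost * clicks[site])
    return sorted(results, key=score, reverse=True)

# A user who habitually clicks leftnews.example sees it float to the top,
# even when its base relevance is lower than the competition's.
results = [("rightnews.example", 0.9), ("leftnews.example", 0.8)]
history = ["leftnews.example", "leftnews.example", "leftnews.example"]
print(personalized_rank(results, history))  # leftnews.example ranks first
```

Run the loop long enough and the feedback is self-reinforcing: you click what you see, and you see more of what you click.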
One could argue the filter bubble helped push Donald Trump to victory. Passive liberals, emboldened by eight years of Obama, turned to Google and Facebook and saw only people who shared their opinions and none of the simmering rage of the flyover states. (This article argues that journalists missed the coming Trump victory because they spent their time in the ideologically stratified environment of Twitter.)
With filter bubbles, there’s no malevolent intent. Google, Facebook and other searchable sites create filter bubbles to increase the “stickiness” of their content, not to dupe people. But can malicious forces deliberately manipulate the web environment to plant specific emotions and ideas in people’s heads?
British journalist Carole Cadwalladr of The Guardian has written an interesting series of articles that point to that possibility. She’s reported on a communications firm, Cambridge Analytica, that tracks people’s web activities and targets them with specific advertisements.
…Jonathan Albright, a professor of communications at Elon University, North Carolina, who had mapped the news ecosystem and found millions of links between rightwing sites “strangling” the mainstream media, told me that trackers from sites like Breitbart could also be used by companies like Cambridge Analytica to follow people around the web and then, via Facebook, target them with ads.
On its website, Cambridge Analytica makes the astonishing boast that it has psychological profiles based on 5,000 separate pieces of data on 220 million American voters — its USP is to use this data to understand people’s deepest emotions and then target them accordingly. The system, according to Albright, amounted to a “propaganda machine”.
Facebook was the key to the entire campaign, Wigmore explained. A Facebook “like”, he said, was their most “potent weapon”. “Because using artificial intelligence, as we did, tells you all sorts of things about that individual and how to convince them with what sort of advert. And you knew there would also be other people in their network who liked what they liked, so you could spread. And then you follow them. The computer never stops learning and it never stops monitoring.”
Think about what a person’s web activities and Facebook likes reveal. Look at that guy over there who frequents the Huffington Post and “likes” the Black Lives Matter page. A bleeding heart liberal no doubt. How about the gal who hovers over the NRA blog and likes Sean Hannity’s page? You get the picture.
What can a political campaign do with this knowledge? Let’s say you’re a candidate running for President. Your strategists inform you that you must win several small counties in Ohio to take the state’s electoral votes. Cambridge Analytica hands you data that helps you identify specific members of those communities and target them with ads. You also find voters who support your opponent and inundate them with ads that, in some way, generate political apathy, thus suppressing their urge to vote. (This Bloomberg article describes how Donald Trump used Facebook advertisements to suppress the vote of certain populations.)
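To see how little machinery this takes, here’s a toy sketch (all names, pages, and weights are invented; real firms claim thousands of data points per voter, not four pages): assign each liked page a crude left/right weight, sum them into a lean score, and flag the voters who lean toward your opponent for the discouragement ads.

```python
# Toy lean weights: negative leans left, positive leans right. Invented values.
LEAN = {
    "Black Lives Matter": -1, "Huffington Post": -1,
    "NRA": +1, "Sean Hannity": +1,
}

def lean_score(likes):
    """Sum the lean weights of every page a voter has liked."""
    return sum(LEAN.get(page, 0) for page in likes)

def pick_suppression_targets(voters, my_lean=+1):
    """Voters whose lean opposes the campaign's get the apathy ads."""
    return [name for name, likes in voters.items()
            if lean_score(likes) * my_lean < 0]

voters = {
    "Dave": ["NRA", "Sean Hannity"],
    "Alice": ["Huffington Post", "Black Lives Matter"],
}
print(pick_suppression_targets(voters))  # → ['Alice']
```

A ten-line filter over a likes database, scaled to 220 million profiles, is the whole “propaganda machine” in miniature.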
We tend to think of the web as a kind of natural environment, controlled by no conscious manipulators. We feel that we “surf” from one web site to another with no one guiding our hand. But, in fact, we may be seeing exactly what someone, somewhere wants us to see.
What do you think? Leave your comments on the Guestbook!
Wil Forbis is a well known international playboy who lives a fast paced life attending chic parties, performing feats of derring-do and making love to the world's most beautiful women. Together with his partner, Scrotum-Boy, he is making the world safe for democracy. Email - firstname.lastname@example.org. Visit Wil's web log, The Wil Forbis Blog, and receive complete enlightenment.