Russia Was a Latecomer to the Cyberwar Game

The US, not Russia, pioneered the use of state-sponsored social media manipulation.

Staff Sgt. Alek Albrecht participates in a Network War Bridge Course at the 39th Information Operations Squadron Sept. 19, 2014, Hurlburt Field, Fla. US Air Force / Airman 1st Class Krystal Ardrey

Before 2016, the public’s biggest anxiety around social media was that it could be used to beam reams of information about us straight to the prying eyes of faceless spies. Now, our chief fear is that those same spies will be the ones beaming information to us.

The ongoing revelations surrounding the Russian cyber-disinformation campaign in 2016 and beyond, which included everything from the use of paid trolls and online bots to spread propaganda to the dissemination of fake news to unwitting readers, have spurred a continuing panic about the effects of such campaigns and the Kremlin’s ability to wage them. This disinformation campaign has been widely labeled “cyber warfare,” a term that traditionally referred to attacks on computers or information networks carried out with viruses and denial-of-service attacks. Russian intelligence agencies have been dubbed “masters” of such a “cyber foreign policy,” their work likened to “the world of mind control imagined by George Orwell.”

As a result, the response from embattled social media companies tends to focus on the dangers of cyber-disinformation originating in Russia. Facebook is building a tool that tells users whether they’ve interacted with a Facebook page or Instagram account created by the recently indicted Internet Research Agency (IRA). In response to a report that content from IRA-linked websites was shared on Reddit, that company’s co-founder insisted Reddit was doing what it could about it, adding that “the biggest risk we face as Americans is our own ability to discern reality from nonsense.”

This current laser-like focus on Russia’s “mastery” of cyber-disinformation obscures the full context of the history of such campaigns. Looked at with a wider lens, the affair resembles less a singular act by the Kremlin than the latest episode of a global arms race — often led by Western, democratic countries.

Learning From Democracies

The explosion of cyber-disinformation campaigns by governments around the world can be traced back to a US-funded program started eight years ago.

“Authoritarian regimes tend to learn from democracies, and really all this stuff started with the United States in 2010,” says Samantha Bradshaw, a researcher on the Computational Propaganda Project at Oxford University, who co-authored a report last year titled “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation,” documenting such manipulation by governments in twenty-eight countries. (Russia’s Internet Research Agency was founded three years later, in 2013.)

“It was DARPA that put money into studying how messages go viral on social media and how to generate movement around particular issues. That research has now made its way back to politics,” she says.

DARPA, the Pentagon’s internal research arm, put $8.9 million toward its Social Media in Strategic Communication (SMISC) program, funding a variety of studies that tracked social media content, individuals’ online behavior, and how information spreads on the web. In 2014, the Guardian reported on this research, which studied Lady Gaga and Justin Bieber’s tweets, the social interactions of 2,400 Twitter users located in the Middle East, and online discussions of fracking and other controversial topics. Researchers even interacted with users online to figure out which of them were the most effective influencers.

The studies had a number of potential implications. Some involved the question of how best to propagate information online, while others looked at how to target the right users to promote particular government-approved campaigns and messages. Several were linked to automated analyses of how well people knew each other based on their social media interactions, in line with the work of intelligence agencies like the National Security Agency, which often uses computers to analyze the vast stores of metadata it collects about individuals.
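To make that kind of metadata analysis concrete: inferring how well two people know each other can be done from interaction records alone, with no message content at all. The sketch below is a minimal illustration using invented data and a scoring heuristic of my own; it is not code from, or a description of, any of the DARPA-funded studies.

```python
from collections import Counter
from itertools import combinations

# Hypothetical interaction log: (sender, receiver) pairs harvested from
# mentions, replies, and retweets. Metadata only; no message content.
interactions = [
    ("alice", "bob"), ("bob", "alice"), ("alice", "bob"),
    ("alice", "carol"), ("dave", "bob"),
]
counts = Counter(interactions)

def tie_strength(a: str, b: str) -> int:
    """Crude symmetric tie strength: total contact, with a bonus for
    reciprocated contact (an assumed heuristic, not an established metric)."""
    forward, backward = counts[(a, b)], counts[(b, a)]
    return forward + backward + min(forward, backward)

# Rank every pair of users by inferred closeness.
users = {u for pair in interactions for u in pair}
for a, b in sorted(combinations(sorted(users), 2),
                   key=lambda p: tie_strength(*p), reverse=True):
    print(f"{a} <-> {b}: strength {tie_strength(a, b)}")
```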

“We demonstrate on Twitter data collected for thousands of users that content transfer is able to capture non-trivial, predictive relationships even for pairs of users not linked in the follower or mention graph,” boasted one DARPA-funded study, explaining that the findings make “large quantities of previously under-utilized social media content accessible to rigorous statistical causal analysis.”
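The “content transfer” that study describes can be pictured with a toy example: count how often terms from one user’s posts reappear in another user’s later posts, regardless of whether the two are linked in the follower or mention graph. The sketch below is only a crude proxy for that signal, on invented data; the actual study applied rigorous statistical causal analysis that this does not attempt to reproduce.

```python
# Toy proxy for "content transfer": how often do terms from one user's posts
# reappear in another user's *later* posts? Each post is (timestamp, terms).
posts_a = [(1, {"fracking", "ban"}), (3, {"pipeline", "protest"})]
posts_b = [(2, {"fracking", "dinner"}), (4, {"pipeline", "protest", "rally"})]

def transfer_score(source, target):
    """Fraction of source terms that show up in any strictly later target post."""
    transferred = total = 0
    for t_src, terms in source:
        later = set().union(*(ts for t_tgt, ts in target if t_tgt > t_src))
        transferred += len(terms & later)
        total += len(terms)
    return transferred / total if total else 0.0

print(f"A -> B transfer: {transfer_score(posts_a, posts_b):.2f}")  # 0.75
print(f"B -> A transfer: {transfer_score(posts_b, posts_a):.2f}")  # 0.00
```

The asymmetry in the two scores is the point: B echoes A’s terms but not vice versa, which is weak evidence of influence flowing from A to B even with no explicit link between them.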

DARPA no longer hosts the list of studies on its site, but an archived version can still be accessed, and the papers are available online.

SMISC continued to produce studies in subsequent years. One looked at the nature of “social contagion” on social media platforms, while another examined how the ordering of content on platforms like Reddit, Facebook, and Twitter affected peer recommendation and the ability to focus users’ attention on particular content.
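The intuition behind that ordering research is easy to simulate. If the top-ranked slot captures most of the attention, the initial ordering of two equally appealing items largely decides which one snowballs into the “popular” choice. The simulation below is a minimal sketch under assumed parameters (70 percent of attention on the top slot, identical item quality), not a reconstruction of any SMISC experiment.

```python
import random

def simulate(item0_starts_on_top: bool, steps: int = 1000, seed: int = 0):
    """Two items of identical quality; whichever ranks on top at each step
    receives 70% of user attention (an assumed figure). Returns final votes."""
    random.seed(seed)
    votes = [1, 1]
    order = [0, 1] if item0_starts_on_top else [1, 0]
    for _ in range(steps):
        order.sort(key=lambda i: -votes[i])   # re-rank by votes; stable on ties
        seen = order[0] if random.random() < 0.7 else order[1]
        if random.random() < 0.5:             # equal intrinsic appeal
            votes[seen] += 1
    return votes

print("item 0 on top initially:", simulate(True))
print("item 1 on top initially:", simulate(False))
```

Run with the same seed, the only difference between the two calls is the starting order, yet the item that begins on top typically ends with the large majority of votes: a rich-get-richer dynamic driven purely by presentation.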

Propaganda and Disinformation

DARPA presented the research as defensive, meant to help the military detect and counter the spread of disinformation or otherwise unwelcome content, particularly in areas where US troops are fighting. Bradshaw agrees, saying the efforts are very different from what the Kremlin has been accused of doing.

“We weren’t trying to say democracies are doing disinformation,” she says. “It’s a lot more about spreading good information.”

But it’s not hard to see more troubling implications behind the DARPA research. In 2015, Rand Waltzman, the DARPA program director who commissioned SMISC, wrote about the importance of having an effective US propaganda program in place to combat foreign social media disinformation. Propaganda wasn’t always negative, he explained; originally it referred to Pope Gregory XV’s attempt to combat the spread of Protestantism and “help people follow the ‘true’ path.” He approvingly quoted Edward Bernays, considered the father of public relations, who wrote that the “conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society.”

Waltzman went on to lament the fact that “the US is unable to effectively take advantage of social media and the internet due to poorly conceived US policies and antiquated laws,” such as those barring the intelligence community from influencing domestic politics. Because of the diffuse nature of the internet, there was no way to guarantee Americans wouldn’t “be inadvertently exposed to information operations that are not intended for them.”

Other governments have no qualms about using such tools to manipulate both their own and other countries’ populations, China and Russia foremost among them. In the Philippines, meanwhile, President Rodrigo Duterte is notorious for heading a virtual army that uses Facebook to promote him and attack his critics, a tactic made especially potent by social media’s outsized role in the country.

But are democratic, Western countries really just bystanders in this game?

For the Greater Good

Part of the reason we seem to know so much about the Russian effort is that there’s a good chance it was never meant to be all that secret. The DNC hackers, for instance, left behind clues that were either deliberate or incredibly sloppy. And a number of current and former IRA employees have spoken to Western journalists about their work, sometimes on the record, as was the case two years ago when New York Times Magazine reporter Adrian Chen was able to stride into the agency’s St. Petersburg headquarters and speak to one of its top officials.

Western governments, by contrast, are unlikely to publicize similar efforts for the sake of keeping up appearances. Even so, evidence of similar activities has trickled out over the past eight years.

The best evidence we have for this comes from the NSA files leaked by Edward Snowden. In a series of stories in 2014, the Intercept’s Glenn Greenwald exposed the array of capabilities at the disposal of UK and US intelligence agencies. A “menu” of cyber tools used by GCHQ’s Joint Threat Research Intelligence Group (JTRIG), for instance, included a system for using complaints to sites like YouTube about offensive content to get material removed, the ability to change the outcome of online polls, the manipulation of a website’s internet traffic and search ranking, and the ability to “masquerade Facebook Wall Posts for individuals or entire countries,” among much else. The latter two are particularly notable, given the SMISC-funded research into the effects of different orderings of content on social media and the web.

Other reports by Greenwald detailed operations that involved “pushing stories,” and what the agency itself called “propaganda” and “deception” through various social media platforms; discrediting targets by writing blog posts purporting to be from their victims and posting negative information about them on forums; uploading YouTube videos “containing ‘persuasive’ communications” to “discredit, promote distrust, dissuade, deter, delay or disrupt”; setting up Facebook groups, forums, blogs, and Twitter accounts, as well as “spoof[ing] online resources such as magazines and books that provide inaccurate information” and establishing online aliases to boost such messages.

Even at the time, this was not an exhaustive list of capabilities. JTRIG cautioned GCHQ employees that “if you don’t see it here, it doesn’t mean we can’t build it.” Given that the document was last modified in 2012, there’s no way to tell how these capabilities have expanded since.

A forty-two-page internal document published by the Intercept outlined the nature of JTRIG’s work, which “targets a range of individual, group, and state actors across the globe who pose criminal, security and defence threats.” It was split into three operational groups: Support to Military Operations, Counter-Terrorism, and Rest of the World. The latter was divided into six further teams, ranging from “cybercrime” and “serious crime” to “Iran” and “global,” the last of which, according to the paper, focused on the Middle East, Africa, Argentina, Russia, and China.

It’s clear that this work, pointedly listed separately from counterterrorism and crime, is more political in nature. Besides criminals and extremists, the document states, its operations also target “the general population (e.g., Iranians), or regimes (e.g., Zanu PF),” referring to Robert Mugabe’s regime. It goes on in more detail: “two of the Global team’s current aims are regime change in Zimbabwe by discrediting the present regime, and preventing Argentina from taking over the Falkland Islands.” It also explains that its Iran team was focused on “discrediting the Iranian leadership and it’s [sic] nuclear programme.”

This has been fleshed out more recently by Mustafa Al-Bassam, an Iraqi-born former hacker who now works for a British payments firm and was named to Forbes’s 2016 “30 Under 30” list. In a 2016 Vice report and a talk he gave last year in Germany, Al-Bassam explained how JTRIG set up fake Twitter accounts (along with a counterfeit URL-shortening service the accounts used for surveillance) to foment unrest during protests in Iran, Syria, and Bahrain. Although this particular campaign was small in scale, to say the least, it set a precedent.

There’s evidence that the United States is engaged in similar activities. In 2014, the Associated Press revealed a secret effort by the US government to foment anti-government unrest in Cuba by creating ZunZuneo, a clandestine, Twitter-like service that ran on mobile-phone networks. The plan was to first build up a critical mass of subscribers by promoting “noncontroversial content,” then start introducing political material that would provoke Cubans into organizing protests and, eventually, a mass uprising.

That same year, the Air Force Research Laboratory funded a paper that looked at how human behavior could be manipulated through social networks. The paper examined how “peer pressure from social leaders affects consensus beliefs,” such as opinions, political affiliations, or emotional states. The ultimate goal was to develop a “decentralized influence algorithm” that would drive behavior on social networks “to a desired end.”
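The mechanics that paper describes, steering a network’s consensus through a few influential nodes, map onto standard opinion-dynamics models. Below is a minimal DeGroot-style sketch (my framing, not the paper’s actual algorithm): ordinary users repeatedly average their neighbors’ opinions, while one stubborn “leader” node holds a fixed target opinion and gradually drags the whole network toward it.

```python
# DeGroot-style opinion dynamics with one stubborn "leader" node.
# A hedged illustration of the paper's premise, not its actual algorithm.
neighbors = {
    "leader": [],                 # empty neighbor list = never updates
    "u1": ["leader", "u2"],
    "u2": ["u1", "u3"],
    "u3": ["u2", "u1"],
}
opinion = {"leader": 1.0, "u1": 0.0, "u2": 0.0, "u3": 0.0}

for _ in range(50):
    updated = {}
    for node, nbrs in neighbors.items():
        if not nbrs:              # the stubborn node holds its opinion fixed
            updated[node] = opinion[node]
        else:                     # others blend their view with neighbors' average
            avg = sum(opinion[n] for n in nbrs) / len(nbrs)
            updated[node] = 0.5 * opinion[node] + 0.5 * avg
    opinion = updated

print({k: round(v, 3) for k, v in opinion.items()})
# Every user drifts toward the leader's fixed opinion of 1.0.
```

Make the leader an ordinary averaging node and the same network simply settles wherever its initial opinions balance; the single committed node is what gives the system a steerable “desired end.”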

In 2013, the Washington Post reported on the Defense Department’s plans to conduct a psyops campaign targeting Somalis around the world, in order to combat the influence of the terrorist group al-Shabab. It noted that the Special Operations Command had military information support teams in twenty-two countries doing similar work, including operating news websites whose connection to the US military was not made clear to readers.

Two years before that, the Guardian reported that the US military was developing software for the use of “sock puppets” (fake online personas, often several controlled by a single person) to manipulate social media and influence online conversations. The software was believed to be part of a program named Operation Earnest Voice (OEV), which began as a psychological warfare operation in Iraq to combat the online influence of those opposed to the coalition’s presence there. Presciently, the report warned that this operation “could also encourage other governments, private companies and non-government organisations to do the same.”

Government officials would likely argue that these capabilities are used simply against “bad guys” like terrorists, or to counter dangerous ideologies, rather than to manipulate another country’s politics as Russia is alleged to have done. But while that’s true of some known cases, it’s not true of others, including ZunZuneo, JTRIG’s activities in Iran and elsewhere, and a long history of similar US government initiatives through the decades. Besides, the public has only a keyhole view into such operations, the full details of which are kept secret. When one person filed a FOIA request for records on Operation Earnest Voice in 2013, for instance, US Central Command informed him the records were “exempt from disclosure.”

We probably won’t learn the full scope of “our” side’s operations for a while. But clearly Russia’s disinformation campaign, whatever its actual impact, didn’t come out of nowhere. It emerged from an environment in which, for years, powerful states have been ramping up their capability to “weaponize” social media platforms, whether for political ends or on the elastic basis of national defense. Those states include not just autocracies like Russia and China but also Western, democratic ones like the UK and the United States, which pioneered the techniques we’re seeing now.