Abstract

Are cyber-enabled information warfare (IW) campaigns uniquely threatening when compared with traditional influence operations undertaken by state actors? Or is the recent “hacking” of Western democracies simply old wine in new—but fundamentally similar—bottles? This article draws on classical theories of democratic functionality from the political science and communications studies fields to deconstruct the aims and effects of cyber-enabled IW. I describe democracies as information systems wherein the moderating functions of democratic discourse and policy deliberation rely on robust mechanisms for asserting the credibility, origination, and quality of information. Whereas the institutions of democracy are often compromised in ways that force failures of the system’s moderating dynamics, influence operations in the digital age act to subvert the traditional mechanisms of democratic functionality in new ways. Sophisticated digital age information operations create a multifaceted attribution challenge for the target state that amounts to unprecedented uncertainty about the nature and scope of the threat. However, the promise of cyber-enabled IW emerges more from the rewiring of modern democratic information environments around new media platforms than it does from the cyber conflict capabilities of state actors. Instead, cyber operations act as an adjunct modifier of IW, allowing belligerent governments to secure new sources of private information, to divert attention from other pillars of IW campaigns, to compromise the capabilities of domestic counterintelligence assets, and to tacitly coerce important members of society.

Since the end of the Cold War, few threats to national security have so completely captured the attention of pundits, politicians, and citizens in Western countries as have the ongoing cyber-enabled influence operations prosecuted by the Russian Federation, Iran, and other belligerent foreign powers. In particular, concern over election “hacking” and the prospect of compromised democratic processes have together prompted extraordinary efforts across North America and Europe to better secure critical information infrastructure, develop systems to counter disinformation proliferation and—in some cases—politicize the nature and intentions of foreign actors.

At the heart of fear, outrage and political tension on the subject of foreign intervention into Western democratic processes is the role of cyber tools and techniques in enabling campaigns where information is the primary weapon of choice. Many prominent voices have explicitly and consistently labeled interference in 2016 and beyond as acts of cyber warfare. Senator Dianne Feinstein, for instance, notably stated that “[w]hat we’re talking about is a cataclysmic change. What we’re talking about is the beginning of cyber warfare” [1]. Likewise, in 2017, former Vice President Dick Cheney argued that “Putin’s conduct … has to do with cyber warfare, cyberattack on the United States—the fact that he used his capabilities in the cyber area to try to influence our election … Putin and his government, his organization, interfere[d] in major ways with our basic fundamental democratic processes. In some quarters, that would be considered an act of war” [2]. And prominent recent scholarship that directly takes up the question of foreign influence in the American democracy labeled the manipulative use of social media by a foreign power “a cyberattack” [3].

And yet, whether the ultimate shape of such interference is the direct result of cyber conflict processes is clearly up for debate [4–11]. From at least one perspective, the problem certainly “is” part and parcel of the broader challenges posed by the rising tides of international cyber conflict wherein the Internet has brought with it a series of new opportunities for prospective foreign meddlers and a plethora of problems—including challenges of protecting significant national security assets that are privately owned—for national security planners to address. But the vulnerabilities that most concern pundits, scholars, and policymakers alike when it comes to election hacking are invariably not those inherent in network systems. Beyond the obvious cybersecurity challenges bound up in protecting critical infrastructure and political institutions from unwarranted assault, the problem is also about the agency and function of those innumerable information platforms that now form the substantive backbone of democratic discourse [12].

Cyber-enabled information operations are clearly of unique concern for modern democratic societies [13–17]. However, in spite of consensus among many military, intelligence, and security policy professionals that cyber is simply a new method of delivery and dissemination, the exact nature of the threat has been grappled with by scholars in a surprisingly limited fashion.1 Belligerent cyber attackers and the information systems upon which they operate are clearly related elements of the problem, but there remains a pressing need to better unpack the relationship between conflictual digital actions and the national security implications of how core societal functions are increasingly defined by the broad-scoped integration of evolving digital technologies. Are cyber-supported information campaigns uniquely threatening vis-à-vis traditional influence operations undertaken by state actors? If so, are cyber operations themselves the determining factor? Or, conversely, is the “hacking” of Western democracies simply old wine in new—but fundamentally similar—bottles?

In this article, I link theoretical frameworks prominent in studies of democratic function and political communication to notions of information assurance common in computer science to explain why information warfare (IW) in its most recent, cyber-enabled manifestations poses a unique threat to modern digital democracies. Specifically, I argue that the integrity of democratic systems of discourse and governance can be thought of in much the same way that cybersecurity specialists and computer engineers think of the security of information systems. For democracy to function in an effective fashion, information needs to be made available to the public in such a manner that adjudication on the credibility, origination, and quality thereof is possible. While such an argument may seem to largely emerge from the metaphor of the “marketplace of ideas” (MOI) that guides public opinion toward prudent—if not necessarily accurate or “truthful”—deliberative outcomes, this broad conceptualization of democratic functionality gels with several modern, prominent theories of democracy at work in political science [22–24]. In the “marketplace of ideas” metaphor, countervailing institutions—from a free press to elected representatives, public experts and declassified intelligence products—frame, interpret and disseminate information in such a way that the policy outputs of deliberation on any given issue are invariably characterized by some degree of moderation. This idea lies at the heart of many theories of domestic and international politics, not least theories on democratic peace and policy prudence [25–28]. In better-grounded empirical theories of democratic functionality—such as Achen and Bartels’ group theory of democracy—such forces play a significant role in determining the degree to which social identity or retrospective analyses of leadership play out at the ballot box. Thus, the notional significance of such information conditions to the targeting strategies of sophisticated IW operators seems well-founded.

This essay argues that the information revolution based around Internet-enabled platforms and technologies has transformed the mechanisms of information assurance in democratic states and effectively degraded the natural resiliency of democratic processes to manipulative information operations. The “hacking” of democratic elections and discourse today is not simply the use of computer network exploitation techniques to enhance the traditional toolkit of foreign interference; rather, it primarily involves the widespread manipulation and compromise of the algorithmic nature of information platforms that, today, dictate the circulation of information within democracies. In addition to the muting of democracies’ mechanisms of information assurance, the intended result is to generate an attribution challenge within the target state wherein the nature of disruption is difficult to ascertain. This effect is similar to what information security researchers refer to as a “Byzantine fault” in an information system, where the fact of systemic compromise is difficult to ascertain and appears arbitrary to the casual observer [29]. Such a dynamic dampens the capacity of the democratic MOI for self-correction and opens space for a foreign belligerent to shape sociopolitical outcomes.
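
The intuition behind the Byzantine fault analogy can be made concrete with a stylized sketch of my own (it is not drawn from the cited literature): when one of several information sources is compromised and equivocates—telling different observers different things—each observer’s local view remains internally plausible, so the compromise registers as noise rather than as an attack.

```python
# Toy illustration of an equivocating ("Byzantine") source among honest ones.
sources = ["s1", "s2", "s3"]            # s3 is silently compromised
observers = ["alice", "bob"]

def report(source: str, observer: str) -> str:
    if source in ("s1", "s2"):          # honest sources say the same thing to everyone
        return "A"
    return "A" if observer == "alice" else "B"   # the compromised source equivocates

for obs in observers:
    votes = [report(s, obs) for s in sources]
    majority = max(set(votes), key=votes.count)
    print(obs, votes, "->", majority)

# alice ['A', 'A', 'A'] -> A   (sees unanimity; nothing looks wrong)
# bob   ['A', 'A', 'B'] -> A   (sees one outlier; it looks like random noise)
```

Neither observer can tell, from their own vantage point alone, that s3 is compromised; in classical fault-tolerance results, masking f such equivocating faults requires at least 3f + 1 independent replicas.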

The remaining sections of this article describe how influence operations have been deployed to hack democratic electorates—not elections themselves, necessarily—by muting the inoculating effect of countervailing mechanisms of democratic moderation in the West in recent years, focusing particularly on the case of Russian interference during the 2015–16 presidential election season in the USA. Within the theoretical framework posed, the degree to which “democracy hacking” is a function of cyber operations is clear. Though broad-scoped use of private information to saturate information channels and shape media narratives “did” significantly benefit from the outputs of cyber operations, the grounds for success of Russia’s interference campaigns were ultimately the information systems that are today part and parcel of the substrate of democratic discourse. Much as has been argued of cyberwar in this and other forums, computer network operations are simply an adjunct modifier of conflict that enhances the broader effort to influence Western politics. They do so in four ways.

First, cyber operations provide unprecedented access to a multitude of private information taken from the servers of political operators, government personnel, and private citizens. Such material clearly extended the frontier of possibilities for those responsible for the design of influence attacks conducted across the West over the past half decade. Second, such actions divert popular attention from other tools of IW at work. Cyber operations are often executed and framed so as to play on enduring fears in both public and expert settings about the gravity of cyber issues. In practice, information about cyber operations is regularly leaked and framed so as to encourage the proliferation of narratives of digital doom that mask the appearance of other forms of threat to national security. Often, this involves the inception of diversionary threats to the target state, such as a potential disruption of national election infrastructures. In this way, the value of cyber operations is the constitution of symbolic threats rather than threats that significantly promise to aid broader political warfare campaigns. At the same time, such operations in this context perpetuate scholarly overemphasis on cyber doom, which is then reflected in the public domain in a series of ways. Third, cyber operations might distract and consume the resources of security entities tasked with combating digital aggression, particularly where lines of communication and responsibility for countering hybrid threats are unclear within national defense establishments. Finally, and perhaps most importantly, cyber operations help belligerent foreign actors attack the credibility of democracies’ information assurance mechanisms. They do so in at least two ways. First, disruptive cyber operations are often a critical element of efforts to prevent prominent democratic voices from taking a direct role in combating disinformation and digital antagonism, not least because data breaches can naturally constitute a tacit coercive signal. For the general public, nonspecific knowledge of such external influence can diminish trust in important stakeholders. Second, both cyber operations and the image of broad-scoped vulnerability they encourage aid in the facilitation of popular information malaise wherein public trust in the origination and quality of information being forwarded by pundits, experts, and fellow citizens alike diminishes in line with the intensity and diversity of apparent sources of interference. With cyber operations specifically, implied or demonstrated compromise of systems critical to electoral integrity—up to and including election systems themselves—might achieve such an effect.

What follows is divided into six sections. After outlining the broad shape of “election hacking” and “democracy hacking,” I put forward the model of democratic functionality as that of an information system and draw on classical theories of democratic operation to suggest a theoretical framework for understanding the aims and impacts of sophisticated cyber-enabled IW efforts. Then, I deconstruct the multipronged case of Russian interference in Western democracies since 2014 via reference to the various mechanisms of marketplace information assurance proposed by the framework. Thereafter, I use this understanding of the nature and form of cyber-enabled IW efforts to adjudicate on the operational and strategic significance of cyber operations amidst other digital developments, before concluding with implications for scholars and practitioners.

The “hacking” of democracies

In assessing what many have called “election hacking” [30], “electoral interference” [16, 31], and “foreign meddling” [32] in just the past few years, scholars regularly fall back on the use of labels like “hybrid warfare”2 or “active measures” [37]. Broadly construed, these terms describe the use of various elements of a state’s arsenal of foreign interference and influence in service of strategic objectives, including the employment of limited military force, utilization of intelligence assets, and disruption of both social and political processes via propaganda [38–43]. To simply say that hybrid warfare involves a blended effort to manipulate state competitors, however, is analytically insufficient here insofar as the statement does little to impart understanding of what within democracies is being hacked and how. When pundits, scholars, and laymen invoke the concept of “hacking” Western democracy, they are clearly referring to something more than network-centric warfare [44, 45] or intelligence operations. Rather, this “hacking” is clearly linked to the manner in which information technologies have transformed the ideational landscape of European and North American political systems in recent years.

At the same time, it also deals with something more particular in its manifestation than those national efforts to use media to influence foreign citizenry that are the common focus of scholarly study, such as Voice of America [46], Japanese propagandizing during the Pacific War [47], or direct-to-media propaganda spinners like Hanoi Hannah or Baghdad Betty. A recent volume edited by Samuel Woolley and Philip Howard calls the modern manifestation of interference activities under review here “computational propaganda,” a phenomenon of manipulation and obfuscation of social and political processes enabled by the Internet and related technologies [48]. Adding to a growing body of work in both the information systems and intelligence studies fields, Woolley and Howard’s volume illustrates both the novelty of Internet-aided attempts to influence nations’ politics and the manner in which the underlying technologies have already meant a remarkable diversification in the tactics and attack vectors used by propagandists. And yet these experiences are not universal. National experiences are driving policy reactions across democratic states that reflect experiences unique in both technological and national terms. In Taiwan, human-centric manipulation of social media discourse has been cast in terms of the all-encompassing struggle against mainland influence and has inspired an almost authoritarian focus on national literacy in the determination of factualness [49]. In countries like Poland, familiarity with Russian methods of propaganda—for modern computational influence campaigns truly are just a continuation, albeit with novel characteristics, of age-old propaganda techniques practiced by the Russians [50, 51]—has led to concern about interference manifesting alongside acceptance of the value of new age media manipulation for political purposes by internal actors [52]. In yet other cases, national media and scholarly discourse remain anchored to assessments of cybersecurity challenges, with citation of high-profile cyber intrusions in more than a dozen cases of Russian interference since 2014 often given as a rationalization for such focus. To many, it is clearly the case that cyber conflict is a critical element of what makes modern cyber-enabled IW so apparently potent. But to read Woolley, Howard, and others that describe the diversifying landscape of digital interference without reference to network warfare, it is not clear why this should be the case.

The sections below explore this relationship—often left implicit in the invocation of “information warfare”—between cyber operations and digital age propaganda in its multiform expressions. The case study sections examine Russian interference in the American democracy as a paradigm case to which we might apply appropriate theoretical lessons. However, it is worth noting here that this form of IW—i.e., cyber-enabled influence operations and propaganda dissemination efforts—is a new (if fundamentally similar in broad shape to subversive campaigns prosecuted for centuries) normal of operation for states around the world, not just the Russian Federation. Fake news content has been found in circulation in most countries in Western and Central Europe in recent years and bot activity is a growing problem to be addressed by social media platform operators [53–55]. Elsewhere in the world, as noted above, Taiwanese communities on social media platforms have been demonstrably susceptible to infiltration by fake bot accounts that are often likely deployed by the government in Beijing [56]. Likewise, Western communities have been subjected to disinformation and propagandizing efforts on the part of Iran, with almost 1000 bot accounts active between 2016 and 2018 identified in recent assessments of IW efforts targeting American persons via Twitter [57]. And in just the last few months, similar efforts—though as yet understudied, as data are not yet fully marshaled—have been observed in attempts to skew discourse pertaining to the 2020 presidential election season in the USA and, in numerous countries, to the global pandemic crisis emerging from the spread of COVID-19. Given such commonplace manifestations of democracy under manipulative assault, better unpacking of the utility of different IW elements seems critical.

The attack surface of democratic vulnerability to IW in the digital age

Any attempt to adjudicate on the nature of the threat from cyber-enabled IW and influence operations should focus on the function and functionality of the target system, rather than principally on the technical nature of tactics or methods employed in such campaigns. Just as assessments of cyber conflict have increasingly incorporated analysis of sociopolitical context in recent years, so too should any study of IW be first and foremost based on an understanding of target systems. In doing so, we might generate knowledge of the attack surface of such campaigns complete with understanding of discrete attack vectors. Here, as Lin and others have recently noted, with IW this means national populations and the social and political systems within which democratic discourse occurs. As such, this section moves to link theoretical frameworks prominent in studies of democratic function and political communication to notions of information assurance common in computer science to better deconstruct the aims and effects of sophisticated cyber-enabled IW campaigns.

Information, contestation, and the MOIs

Social scientists regularly invoke the metaphor of a “marketplace of ideas” (MOI) to describe democratic discourse.3 In doing so, they have often intended to describe how democracies are an idealized forum for public debate [64, 65]. The MOI, much like its traditional economic parallel, embraces a “libertarian, laissez-faire approach to speech” where the “same ‘invisible hand’ that guides unregulated economic markets to maximum efficiency will guide unregulated markets in ideas to maximum discovery of truth” [66]. In other settings, scholars have embraced the idea of the marketplace in variable formats as a functional set of processes that determine how democratic societies politick, policy-make, and prosecute conflict.4

The metaphor of the marketplace is, in reality, simply a description of an information system. Whether a marketplace of human interaction is concerned primarily with commodities or ideas, the underlying shape of the thing is that of a system designed to channel informational inputs toward particular outcomes. If one accepts that the metaphor of the MOI holds some basis in fact, that outcome is moderation via the representation and interaction of a democracy’s citizenry [71]. Here, it is helpful to think of the MOI in Bayesian terms [72]. Social understanding of a particular issue begins at a baseline. Over time, with the introduction of new information and continued deliberation, society updates its beliefs. In this way, democratic societies tend to exhibit a prudence and a resilience to outlandish sensationalism, inflicted or otherwise, that others do not. In theory, given enough time and unfettered competition of arguments, “correct” ideas—or, at least, prudent ones with a basis in observable and verifiable facts—triumph over falsehood and society’s understanding comes to reflect an underlying reality.
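
Stated in its simplest form—as an illustration of the intuition rather than a formal model advanced in the works cited here—each new piece of evidence E shifts society’s belief in a hypothesis H according to Bayes’ rule:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}$$

So long as evidence is, on average, more likely to surface when H is true than when it is false, repeated rounds of updating push the collective posterior toward the underlying reality; the moderating claim of the MOI metaphor is, in effect, a claim about the conditions under which that convergence can occur.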

Democratic functionality, however, actually depends on more than just the availability of information and a willingness to discuss the issues of the day. A generation of literature in political communication has argued that there may be systematic sociopolitical forces—natural “bugs in the system”—undermining the market [73–76]. In that literature, assessment of the significance of these “bugs” pivots on the shape of key stakeholders within the discursive system, including a free and vigilant media, diverse experts, loyal and disloyal opposition, and a responsible executive. Marketplace failure occurs when one or more of these stakeholders fail to do their jobs, either because they are motivated to shirk their duties or are denied the opportunity to do so.

Information assurance and the function of democratic information systems

The argument being forwarded here is simple: that the traditional view of democratic systems as marketplaces of ideational competition effectively describes the functioning of an information system and that the targeting of IW efforts against democracies can be best understood in the context of the MOI’s mechanical components. This provides a robust basis for not only understanding the elements of modern IW, but also the role that cyber operations play therein. This section outlines four areas relevant to the effective function of democratic information environments—the origination of information, the credibility of discourse, the ability to determine information quality, and a consensus belief in the freedom of information—and describes the nature of each so as to provide a bridge between political theory and an operative analysis of recent cyber-enabled IW.

First, it is important to note that the position this article stakes out gels with modern empirical works on the nature and function of modern democracies within political science, works that rarely suggest that the classical imagining of the MOI reflects reality in a predictive fashion. While many historians and political scientists might admit a link between countervailing institutions at work and the relative health of a given democracy, modern scholarship emphasizes theories of democratic functionality centered on predicting outcomes at the ballot box. Conventional theories along these lines hold that individuals either (i) vote for representation or in referenda to directly support their interests [77] or (ii) behave as retrospective judges to either punish or reward representatives for their past performance in public office.5 A more recent contribution to this literature, Achen and Bartels’ group theory of democracy, holds that voters do not hold preferences defined beyond their self-identification as a member of a group [83]. Preferences and beliefs then follow and are reinforced entirely based on this identity.

Even from the perspective of such theories—theories that move significantly beyond the simple MOI metaphor—the targeting scheme being suggested in this section still holds substantial logical weight. Regardless of the nature of voter decision-making, a healthy information environment is still the basis of personal social and political literacy. Actions aimed at the mechanisms of the marketplace described below nonetheless act to detach the individual from the aspirational and logical functions of democracy. For a proponent of the explanatory potential of the metaphor of the MOI, such actions prevent moderating effects. For an advocate of group theory, the approach achieves similar eventual objectives by instead delinking individuals from systemic context and influences so that they then fall back on core identity motivations, which are themselves the target of attempts at narrative co-option. The relevance of the targeting strategy being suggested here to both perspectives stems from a reality of computational propaganda increasingly expounded upon by scholars who study IW—that influence campaigns aim to subvert and distort the constitutive underpinnings of political activity rather than to directly alter the structural outcomes thereof.

The origination of information

At the most basic level, for discourse to be democratic, there must exist methods by which it is reasonably easy to know where information comes from. This means two things. First, it means that social and political interlocutors should not be able to entirely hide their identity as it relates to public speech. While a degree of obfuscation of agency is to be expected in democratic systems in the form of corporations or interest groups “speaking” on behalf of individuals affiliated therewith, it should be the case that a reasonable investigation of any mouthpiece affords a robust understanding of where speech is coming from. Second, it means that the factual sources of information should be observable by a reasonable deconstruction of surveyed rhetoric, opinion, and reporting. Regardless of the “spin” offered in punditry or advocacy, democratic functionality requires that the average reasonable effort to unpack the facts and voices involved in any given conversation will be successful. Significantly, “spin” itself does not work against democracy “unless” the origination of information is difficult to ascertain. Opinion journalism and advocacy, even where it appears extreme, can be beneficial to the overall function of the democratic information system as it works to process and update priors so as to produce a moderating information effect [84]. Only when the underpinnings of such elements of discourse are obscured or manipulated does the marketplace suffer. In this way, the parallels to be drawn between information assurance of democratic functionality and computer systems break down in a single sense—to work, democracies do not require that information remain free of tampering. Rather, they only require that such tampering be reasonably discoverable.

The credibility of discourse

More generally, democracies depend on a consensus view that discourse “is” discourse. Democratic populations must believe that—given an allowable amount of manipulation of national conversation by special interests—social and political discourse is not a façade. Here, the parallel to how many authoritarian states effect social control is highly relevant. In Russia, a series of well-documented steps has been taken since the late 1990s by the governments of Vladimir Putin and Dmitry Medvedev to diminish the credibility of democratic processes as significant for ensuring good governance and stability in civil society [85]. The idea is that nonbelief in liberal democratic political mechanisms as guarantors of benefits to the citizenry leads to a rising tolerance of, among other things, corruption and limitations on social freedoms [86–88]. Rigged elections are met with limited protest and nepotistic promotion of political cronies is accepted because democratic process is not seen as a credible alternative [89], particularly in the context of at least some degree of responsible stewardship of society on the part of the autocrat. In China, censorship has increasingly taken the form of targeted limitations on attempts to assemble, with only selective constraints set on speech itself [90, 91]. The idea is that the appearance of a vibrant civil society debate wherein much criticism of authority is allowed produces a pressure valve that keeps a population away from the urge to protest [92]. In reality, the conversation is largely artificial, with astroturfing and disinformation being common features of the landscape [93, 94].

The quality of information

A third requirement of effective function of democratic systems is information quality sufficient to allow a reasonable degree of parsing signal from noise via diverse, widespread discourse.6 Though this requirement is in many ways secondary to the underlying imperatives of ensuring the attributable nature and credibility of information (because quality cannot be assessed unless that information is known), it is nevertheless critical to the realization of democracies’ moderating processes. Even where limited information on a given issue is known, repetitive reporting from independent sources and contextualization in both media analyses and social settings help identify and define meaningful elements of the foci of debate. Inhibition of a reasonably complex ecosystem that enables such repetitive investigation and interpretation leads to the domination of few perspectives without a societal capacity for exploring issue nuance.

The freedom of information

Finally, democratic systems’ functionality requires a secondary belief in the credibility of information. As opposed to the need to ensure that discourse itself is seen to be credible, however, this final requirement concerns the relative freedom of information that can be brought to bear in discourse. Marketplace participants must not be conditioned to think that certain types of opinion or certain objects of empirical observation are off-limits, at least beyond a small subset of extreme positions that find themselves normatively juxtaposed to prevailing societal sentiment. Even in those instances, citizens must feel able to employ any fact or argument in discourse without incurring a predetermined penalty based, for instance, on a prevailing political or civic mood. The violation of this requirement of marketplace functionality sits alongside a broader argument about presidential threat inflation at the heart of Kaufmann’s seminal description of marketplace failure in the lead-up to the 2003 US invasion of Iraq [23]. Quieted by fear of backlash from an electorate highly mobilized in the aftermath of the 11 September 2001 terror attacks, countervailing forces of democratic discourse in the media, among opposition politicians in Congress and in expert communities failed to effectively problematize and investigate claims of Iraqi WMD being forwarded by the administration of George W. Bush. Clearly, limitations on the freedom of relevant information can have dramatic effects on the functionality of democratic information systems.

Foreign interference during the 2016 US presidential election season

Few episodes illustrate the concept of “democracy hacking”—including the murkiness of the link between impactful cyber operations and resultant sociopolitical outcomes—as well as do Russian efforts to manipulate social and political processes in the lead-up to the American presidential election in 2016 [95]. In this case, Russian disinformation efforts in the USA appear to have principally involved the use of cyber instruments for the purposes of gaining access to proprietary data [96, 97]. Russian and Russian-affiliated hackers gained access to private systems, stole relevant information to be distributed to a range of sources, and then utilized various social media and related resources to amplify narrative messaging aimed at exacerbating sociopolitical divisions and harming specific political targets. The campaign began in 2015 with an initial dissemination of thousands of phishing emails (about 1 in 40 of which were opened and accessed) that attempted to collect private information from unsuspecting political operatives and government employees [98].

The result of initial intrusion attempts, discovered in their fullest extent about a year later, was both access to unclassified systems across elements of the US executive branch and the acquisition of various private materials from several political institutions. The most significant of these was the Democratic National Committee (though the Republican counterpart organization was also attacked) and, relatedly, the campaign of then-presidential candidate Hillary Clinton [99]. Two groups of Russian operators—known by a range of names, but most colloquially as Cozy Bear and Fancy Bear—managed to steal emails from the political campaign alongside internal documents detailing various plans and designs held by the Democratic Party on upcoming political events [100].

What followed was many months of timed information releases and dumps, some of which were coordinated via the Internet Research Agency (IRA), a Russian firm owned and operated by an oligarch with close ties to the Kremlin [101–103]. The IRA utilized thousands of bot accounts on social media platforms [104] and online communities like those on the social news aggregator site Reddit [105] to release snippets of information to the Western world over the course of the election season. Elsewhere, WikiLeaks publicly sponsored the release of yet larger troves of data about the Democratic Party and Hillary Clinton, often timed—possibly strategically—so as to maximize the ensuing disruption to regular political processes [106].

The IRA and a number of individuals affiliated with Kremlin-aligned oligarchs also acted thereafter to amplify the effect of information disruptions in 2015 and 2016. In particular, bots controlled by the IRA inserted themselves extensively into virtual conversations taking place on, primarily, Twitter and Reddit [103]. On Facebook, it was revealed in 2017 that Russian actors had purchased some thousands of dollars’ worth of ad space and had published content targeted at demographic groups and localities where divisive rhetoric might have the most effect [107, 108]. Recently, evidence has emerged that click fraud, a common feature of information manipulation on social community sites and in ad monetization schemes, was used to promote these topics above other content viewers might otherwise see. As one newspaper estimates, a minimum of 23 million Americans were exposed to this content over almost a 2-year period; this number may have been much higher [109].

Unpacking Russian IW in 2016: market forces and democratic context

Though arguably neither substantially successful nor insignificant in impact, the prospects and outcomes of Russian IW in 2015–16 linked with the presidential election were determined by five crucial factors: (i) the algorithmic predictability and limited content oversight of large-scale social media platform operators (which enabled unique forms of content dissemination); (ii) a cyber-enhanced ability to retrieve specialized private information matched with extensive opportunities for rapid, timed release; (iii) a substantial degree of either pseudonymity or anonymity afforded to malicious agents in constructing or modifying discourse; (iv) the risk aversion and general conservatism—shaped at least to some degree by the uncertainty of the battlespace—of traditional countervailing institutions of the marketplace, including the incumbent executive, media organizations, and experts; and (v) the willful threat obfuscation of the prevailing candidate (who benefited from a unique authority advantage given the context of the election season). A sixth factor, the willingness of the belligerent actor (Russia) to seize opportunities for meddling in what was an under-realized space of operation, is relevant to the episode; however, this factor is only linked to the outcomes of IW efforts insofar as it pertains to the fact of meddling itself.

In this section, I discuss each of these factors’ impact on the function of democratic information systems. Functionally, the idea here is to draw out and describe what is different between the real experience of foreign digital age meddling in American political processes and a hypothetical predigital equivalent scenario where, say, Russian operatives physically break into the offices of an American political party and then selectively distribute private information for publication to media outlets and civic organizations. What, if anything, makes IW in the digital age uniquely threatening vis-à-vis traditional influence operations undertaken by state actors?

The algorithmic shaping of contestation and social engagement

Few subjects have received so much new scrutiny since 2016 as has the question of algorithmic design and data security in the context of major social media and information aggregation platforms. Firms like Facebook, Google, and Twitter face criticism on numerous fronts over the degree to which their platforms—either via inherent function or underlying management and design decisions—allow for the manipulation of information framing and dissemination [110, 111]. To consider the degree to which such platforms matter—at least at the margins—in the context of foreign IW meddling, one need only acknowledge the scope of the population that uses them. In 2018, there were an estimated 2.2 billion Facebook accounts and more than 336 million active Twitter users.

Beyond just having broad adoption across the globe, however, it is more important to consider the degree to which such platforms act as non-neutral forces on sociopolitical discourse. One 2012 study concluded that a Facebook campaign to get more people to vote likely had a small but meaningful effect in boosting turnout among young users of the platform [112]. In 2014, one newspaper noted that there was an almost 20-point positive delta between support for a statewide measure in counties in Florida that were the subject of intensive Facebook ad placement campaigns versus those that received none. And in 2018, Twitter’s CEO admitted that algorithmic reduction of the visibility of some messaging and accounts certainly leads to the censorship of major democratic stakeholder voices, though the firm insists that no political ideology is considered in the algorithm’s functionality [113].

In fairness to technology companies, it would be disingenuous to suggest that the potential for information inequities and manipulation emerge simply in how platforms link users to other users and their speech. Much of what concerns policymakers in the West is evident only in the outputs of undergirding algorithms for content collection, generation, and presentation. Though tangential to the focus of this essay, a 2015 incident in which an image recognition algorithm employed by Google identified dark-skinned individuals as gorillas in image searches serves as a robust example of how, as Schelling might have put it, microfoundational design characteristics can result in deviant or suboptimal macro outcomes [114]. Google quickly rectified the situation but the extreme example portrays the degree to which commonplace information services depend on classification subroutines that determine the social and political valence of content that is presented to users. The example also illustrates a characteristic of such algorithmic foundations that bears significantly on recent IW campaigns prosecuted by foreign belligerents—given a basic degree of dissection, such protocols are systematically predictable. Much in the same way that some terrorists have been seen to target content for recruitment and advocacy purposes based on an understanding of how Facebook recirculates differently themed content, it is entirely possible for IW operators to design channels of content distribution based on a robust understanding of platform operation.
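
A minimal sketch may help fix the point (it is entirely hypothetical—no platform’s actual ranking code is assumed or reproduced here): if a feed ranker rewards stable, externally legible signals such as early engagement and recency, then an operator able to supply those signals through coordinated accounts can reliably out-rank organic content.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how closely the viewer follows the author (0-1)
    likes: int
    shares: int
    age_hours: float

def rank_score(post: Post) -> float:
    engagement = post.likes + 3 * post.shares      # shares weighted above likes
    recency = 1.0 / (1.0 + post.age_hours)         # newer content decays less
    return post.author_affinity * engagement * recency

organic = Post(author_affinity=0.9, likes=40, shares=5, age_hours=6.0)
seeded = Post(author_affinity=0.2, likes=300, shares=120, age_hours=1.0)   # bot-amplified

print(round(rank_score(organic), 1), round(rank_score(seeded), 1))   # 7.1 66.0
```

The seeded post outranks the organic one despite a far weaker tie to the viewer; the “attack” consists of nothing more exotic than supplying the inputs the ranker is known to reward.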

Though the exact impact remains unclear at time of writing, it is obvious that the algorithmic function of platforms like Twitter and Facebook was a significant factor in determining the sophistication of recent Russian-backed IW. Distinct from a counterfactual campaign in which propaganda and disinformation might be disseminated in foreign countries in such a way that impact would be determined by newspaper circulation and television or radio viewership, Russian efforts from 2014 onwards used both intentional and nascent mechanisms of social network and media platforms to make sure content was presented to specific audiences. Facebook ads espousing antifactual narratives and promoting fake advocacy organizations were targeted, via use of tools provided by the social networking company to ad buyers, specifically to users who supported particular candidates (e.g., Hillary Clinton, Donald Trump, or Jill Stein), aligned with particular figures (e.g., Martin Luther King Jr.) or were in areas of the American South [1]. Ad content—which is most effectively spread as Facebook presents content to users’ friend networks based on the success thereof—was thereafter targeted by efforts to tailor content presentation, in this case by reference to user interest in “Dixie” and “Black Lives Matter” among other topics [1]. At the same time, platform design characteristics ensured that much content circulation and resultant discourse would be segregated along virtual community lines [115–117]. Targeting algorithms ensure a degree of such segregation as a matter of default (i.e., insofar as divisive issues produce semantic polarization at the level of the individual rather than cross-perspective engagement). Moreover, as recent research has concluded, narratives pushed into circulation in different virtual communities did not lead to active engagement among communities unless antipropaganda activists organized to call out bot activity. Much of this can be attributed to a lack of intentional promotion of content or engagement by bot accounts on Twitter beyond targeted networks so as to ensure the construction of echo chambers without opportunity for compromise [118–120].

Design of IW efforts to fit the contours of these platforms’ algorithmic functions had several effects. First, it naturally helped mask the origin of certain information presented to consumers of social media and then, by means of media recirculation, to a broader audience [121]. This masking effect comes from more than simply the injection of fake or modified content into otherwise natural discourse among virtual communities, as is discussed below. Rather, understanding of content redistribution and information search algorithms allows the sophisticated operator to force an attribution problem of degrees. Naturally, fake bot and troll accounts present a challenge for the median voter who is not trained for or focused on discerning false identification online. Moreover, recirculation on social media platforms inevitably moves information away from the initial voice or voices on a given topic and nests discourse in a community setting designed by algorithms. Even where such protocols for encouraging interconnectivity and availability seem benign, the effect can still be greater ease of obfuscation for an actor focused on conversation manipulation [121]. And, as has been regularly noted by political communications scholars, political messaging becomes more effective when received from a member of one’s own community [122]. In such a context, recipients of messaging are less inclined to inquire about the validity of offered facts or opinions, the source of underlying assertions, or the incentives of the messenger.

Second, and relatedly, the obvious consumer-focused functions of account setup and platform use further provide tactics that can mask the origin of both factual information and narratives in discourse. During the 2014 World Cup in Brazil, Russian bots sought to rile up human rights advocates by pushing inflammatory rhetoric about the upcoming Qatar event in 2022 [123], perhaps as a means for testing out new methods of interference ahead of election seasons in the USA and Europe. In doing so, a common tactic involved the deletion of troll accounts after the initial injection of rhetoric or falsified content had successfully led to recirculation by genuine Twitter users. The result, at least to the public eye, was a conversation with broad engagement but unclear beginnings. That is not, however, to say that such conversation was accepted as legitimate in all quarters. Much as happened in the USA with pockets of outcry about alleged foreign involvement in social and political discourse online, reporting of bot activities around the Brazilian World Cup prompted a confused situation wherein (i) some limited conversation was clearly the result of bot manipulation, (ii) much countervailing reporting and discourse considered a number of such conversations as illegitimate, and (iii) yet other voices pointed at criticisms of Qatar as legitimate regardless of the details of the genesis of the conversation. In short, such tactics not only helped IW operators hide the source of injected rhetoric and information, but also benefited from limited attribution that caused a nested crisis of credibility in the nature of conversation.

The strategic redistribution of information

Though much content created for or spread to diverse virtual communities via troll personas, bot accounts, and advertisements was thematically divisive and targeted elite voices, some usage of information as a weapon of Russian-backed external subversion7 from 2014 took the form of doxxing—the strategic obtaining and publishing of private information about people, organizations, and/or the details of their affairs. In this element of Russian-sponsored efforts, cyber operations played their greatest part. As described above, efforts by affiliates of the Fancy Bear and Cozy Bear threat actors—linked by Western intelligence and cybersecurity firms to the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (colloquially, the GRU) and the Foreign Intelligence Service (SVR) [125]—to steal valuable information on the machinations of political organizations and the specifics of political figures’ lives clearly furnished pro-Russian elements with some of the most polarizing content pushed and dissected by bot and troll personas across the election period.

In particular, there is evidence of efforts by hackers to break into the servers of the Open Society Foundations, a George Soros-backed philanthropic grant-making organization, in order to steal and then release information doctored to imply a collusion of sorts between the institution and the Russian government [126]. Of most significance in the case of Russian interference in the 2016 US presidential campaign—aside perhaps from suspicious attempts to intrude into Pentagon, White House and State Department networks in 2014 and 2015—were the 2016 hacks of the Democratic Congressional Campaign Committee (DCCC) and the DNC [127–129]. In both cases, successful spear phishing attacks compromised the access credentials of key Democratic staffers and yielded Russian threat actors a treasure trove of email and other document data on the machinations of Democratic campaigns, nomination processes, and personnel details. With the DNC, in particular, lax defensive measures and hygiene practices led to the compromise of high-level personnel accounts [99, 130].

Once this information was obtained, Russian agents strategically set about releasing it for manipulative purposes in several distinct ways. Perhaps the most interesting prima facie evidence of the strategic calculations involved in efforts to dox Democratic processes to achieve favorable outcomes is the fact that cyber operations were clearly undertaken against Republican targets as well—specifically, according to intelligence officials, a hack of the Republican National Committee (RNC)—without subsequent follow-through in the release of stolen content [131]. While such action is likely demonstrative of the evolving tactical focus of Russian efforts over time as conditions on the ground made the denigration of Democratic persons and positions a desirable option, it is also clearly suggestive of the manner in which the contours of the battlespace determine the evolving nature of information treatment and employment.

From the spring of 2016 onwards—the start of the period in which the breaches of the DCCC, DNC, and RNC took place—over 80 individual leaks of private political information occurred. Data release was split among three distinct sources. The first was WikiLeaks [132], a nonprofit organization dedicated to the publication of secret intelligence and other classified media that some in Western intelligence circles have famously labeled a “non-state hostile intelligence service” [133]. The second was a website, mocked up to look like a Beltway insider scoop and leaks aggregator, called DCLeaks [134]. The third was a hacker persona calling itself Guccifer 2.0 [134]. Perhaps more significant than the division of doxxing labor among different release sources is that each took a different tack in its attempts to introduce information into the environment. WikiLeaks is a well-known source of government and other private information with a network of sources—admittedly hard to determine given the submission mechanisms put in place to protect leaker identities—for obtaining such data and a liberal position on the legality and morality of its release. By contrast, the DCLeaks and Guccifer 2.0 personas represented contrasting perspectives on the publication of private information as, respectively, a somewhat amateurish entrant to the constellation of extreme conservative media entities and a hacker disillusioned by a system they supposedly deemed to be rigged [135].

The manner in which information was strategically released to the American public was significant once again for masking the origins of information that would thereafter circulate in broad-scoped discourse. Though the release of information through WikiLeaks would likely encourage much skepticism among mainstream media outlets and their audiences, reports from the nonprofit had much traction among the many fringe right-wing communities on sites like Reddit and 4chan whose membership overlapped with the more traditional hard right audiences and supporter bases of conservative pundits like Rush Limbaugh or Ann Coulter [136]. By contrast, the other sources of leaked information were set up so as to appear cast in the whistleblower role that is often highly, if controversially, valued by median voters in Western democracies. With insider scoops corroborated by ostensibly independent sources, recipients of leaked information would be more likely to at least engage with the substance, something further encouraged in the promotion of injected narratives and facts by prominent voices within conservative political communities and in connection with the Republican presidential campaign. Since engagement and subsequent recirculation, rather than persuasion, are the relevant minimal barriers for success, such a tactic was undoubtedly effective and had follow-on impacts on the ability of citizens to accurately assess the quality of relevant information. Much as is noted above, limited attribution of interference, combined with division on the nature and significance thereof, created a crisis around the credibility of information, a condition that was aided by the development of a general malaise surrounding the evolution of campaign narratives and talking points over the course of 2015 and 2016 [137].

No time for updating

An element of Russian IW in the USA and elsewhere through the mid-2010s that extends beyond the adoption of a strategy for releasing stolen information in a semantically-meaningful fashion is the unique temporal nature of the effort. On behalf of Russian political forces and security services, the IRA not only timed doxxing efforts so as to maximize their media effects, but also directed its troll and bot legions in such a way that democratic updating and cross-sectional discourse were difficult. This is perhaps clearest in the American case in the context of positive coverage for the Democratic candidate, Hillary Clinton. On the Friday prior to the Democratic National Convention that would see Clinton elevated to the party nominee, for instance, WikiLeaks released thousands of emails that had been stolen from the DNC by Russian hackers the previous month [138]. Particularly given the emergence of a scandal in which Democratic staffers were seen to conspire against Bernie Sanders’ candidacy in favor of Clinton, the release captured several news cycles and overshadowed what would otherwise have been a momentous occasion: the first nomination of a woman as the official selection of a major American political party [139]. Elsewhere, however, the timing of manipulative efforts is also clear in the actions of IRA-controlled accounts on social media sites that swarmed quickly around emerging social and political issues in an effort to hijack and direct conversation. Troll account activity swelled in the wake of Black Lives Matter protests [140] and major speeches by Barack Obama in support of Clinton or against Republican candidates.

Though the efficacy of such efforts is yet to be determined in full, the value to be prospectively added to the Russian effort is not difficult to surmise in retrospect. Both in the case of information releases and of direct efforts to manipulate conversation, the manner in which attribution hurdles were constructed by IRA operators in their development of IW infrastructure (i.e., fake accounts, American-based servers, proxy sources for leaked information) allowed for a remarkably nuanced form of marketplace manipulation. In particular, the infrastructure of the IRA’s effort allowed it to directly attack the Bayesian updating functions of democratic information systems, wherein time and continued discourse naturally lead to moderation as more data are handled by the citizenry.
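
That logic can be rendered concrete in a stylized simulation (again my own illustration, not an empirical model drawn from the sources cited here): when each genuine signal is promptly met with a manufactured counter-signal, the public’s posterior never moves off its baseline before the decision point arrives.

```python
def update(prior: float, signal_true: bool, accuracy: float = 0.7) -> float:
    """One Bayes update of a binary hypothesis given a binary signal."""
    p_if_h = accuracy if signal_true else 1 - accuracy
    p_if_not_h = 1 - accuracy if signal_true else accuracy
    return (p_if_h * prior) / (p_if_h * prior + p_if_not_h * (1 - prior))

def final_belief(signals):
    belief = 0.5
    for s in signals:
        belief = update(belief, s)
    return round(belief, 2)

organic = [True] * 8 + [False] * 2   # mostly accurate reporting, with some noise
flooded = [True, False] * 5          # every genuine signal met with a counter-signal

print(final_belief(organic))   # ~0.99: given time, belief converges toward the truth
print(final_belief(flooded))   # 0.5: belief never moves off its baseline
```

The point is not that any single voter reasons this way, but that timed, contradictory releases degrade the aggregate updating process on which the MOI metaphor relies.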

Though it is unlikely we will ever know to what extent, it seems likely that the marketplace—faced with regular spates of new information—was made inefficient, at least in part, in its ability to adjudicate on the public policy and political value of any one new development that might be harmful to Russian designs. In addition to leaking stolen information, bot accounts pushed demonizing rhetoric that specifically sought to diminish the power of countervailing voices in the system when they otherwise might be crucial to fair treatment of an actor deemed undesirable by the IRA. James Comey, for instance, prior to his October surprise reopening of the investigation into Clinton, was broadly demonized by troll accounts as a Democratic patsy when his earlier statements appeared to authoritatively counter the anti-Clinton rhetoric around her use of a private email server heard—at this juncture—primarily at Republican events [141]. Likewise, when Donald Trump achieved notable victories on the campaign trail and around his nomination by the Republican Party, IRA-controlled accounts made clear efforts to moderate conversation aimed at middle-of-the-road voters and to simultaneously misconstrue Democratic performance. Perhaps the most notable examples of this occurred around the three presidential debates from late summer to October of 2016. Specifically, against the conventional assumption that Clinton’s actions during each debate had constituted a winning performance, pollsters were shocked to discover a diminishing expression of support among middle-of-the-spectrum viewers articulated as a feeling that the candidate’s public persona did not match her actual demeanor [126]. In subsequent polling work and other research, it seems clearer here than at almost any other juncture during the election season that WikiLeaks’ release of stolen information on Clinton’s dealings with big banks had muddied the waters around her for many potential voters [142]. Then, in perhaps the greatest surge of activity during the year, bot accounts trumpeted news of a Trump victory and attacked Clinton’s performance as deceptive. Caught between conventional media coverage and unprecedented fluctuation in narratives circulating in virtual and related physical communities, the resulting marketplace dynamic—characterized by increasingly siloed information environments in select online settings and incessant, contradictory information releases elsewhere—was one that worked against the natural tendency of the system to unpack and debate in a reasonably open fashion.

Faking discourse

Of course, beyond just the supply chain and strategic release considerations bound up in Russia’s IW efforts since 2014, the content of their messaging also mattered a great deal. As most researchers focused on issues of Russian interference (and IW more broadly) have noted, content and narrative discussion from both troll and bot accounts were focused on building agendas among virtual communities—which correspond, of course, to both dispersed and concentrated real-world communities—that accentuated and codified emergent fault lines in national discourse [54, 143]. Directed by individuals within the IRA, fake accounts pushed content and insinuated themselves into conversations across Twitter, Facebook, Instagram, Reddit, and Google. According to Facebook, over 126 million people were exposed to fake or leading content, and more than a thousand hours’ worth of YouTube videos were identified by Google as linked directly to the Russian effort [144]. Twitter identified nearly 40 000 accounts set up to push fake reporting and other messaging produced by the IRA and affiliated entities [145]. These accounts tweeted more than 1.5 million times directly about election-related issues and millions more times about social and political issues of all kinds. And the social news aggregation site Reddit identified at least 944 accounts believed to be linked with the IRA’s effort, alongside much more significant evidence of fake or doctored content within certain subreddit communities on the platform [146].

Categorically, content pushed by these accounts took three different forms—opinion, blatant disinformation, and the modification of factual content. By far, opinion is the simplest to understand. On Twitter, for instance, research has demonstrated a degree of functional differentiation among troll/bot accounts identified as part of the IRA’s effort [103]. Many accounts tweeted relatively little substantive content, instead apparently dedicated to blasting out links to third-party content to either general or more specifically targeted audiences. Most, however, blended forms of information delivery so as to construct the profile of a median engaged Twitter user [147]. In essence, these accounts offered narrative spin either by directly introducing new perspectives or by commenting on retweeted content, mostly in the context of the different communities that they were attempting to engage. A smaller number still took on more noticeable roles as public firebrands, targeting persons significant to ongoing discourse (though almost exclusively at the level of distinct virtual communities and rarely at the level of national debate).

Arguably much more significant than the conveyance of opinion by Russian-created accounts, however, was the fabricated and modified content that underwrote messaging efforts. Political communications scholars have regularly demonstrated not only that most persuasion works to activate undecided voters rather than to change the decisions of committed members of an electorate, but also that messaging for that purpose is most successful when embedded within accessible media like political videos, memes, op-eds, and other reporting [147]. Coupled with the sizable degree of recirculation from trusted community sources achieved by Russian efforts, the IRA was able to create and target content that specifically worked to activate the hardest targets of the Republican candidate’s public-facing election campaign: churchgoing evangelicals who objected in large part to Donald Trump’s history as a multiple divorcé and callous rhetorician with little seeming respect for organized Christianity [147]. In response, IRA accounts targeted evangelicals with fake ad content designed to demonize Clinton through negative pseudo-religious portrayals and to promote Trump. In many cases, such content simply asked users to retweet or like content “… if you want Jesus to win …” or for other, similar reasons [147]. To recapture veterans who were being demobilized by Trump’s comments about war hero John McCain or about a Gold Star family that spoke at the Democratic National Convention, content was created that spun a narrative of Clinton as complicit in the deaths of American servicemen in Libya [147]. Following the release of private information stolen from the DNC that linked Hillary Clinton to a series of speeches made to Wall Street bankers, troll accounts created and circulated a portfolio of fake accusations that evolved from clear-cut suggestions of elitist nepotism by the Clintons to the existence of a “deep state” that ran the American government as a rigged system [148].

The utility of so much fake content employed in relatively sophisticated fashion clearly lies in the degree to which the credibility and quality of discourse suffers. It is not abnormal for political campaigns to attempt to bloody opponents’ messaging both in the lead-up to and following debates. As noted above, even extreme journalism (or “advocacy journalism”) spun to social or political effect can be beneficial to the overall function of the democratic information system. “Spin” helps citizens update their priors so as to produce a moderating information effect. However, balance and time are required for this to happen. In conventional electoral settings, political spin from two or more sides generally cancels out opposing spin. Particularly over days and weeks, the median voter tends to better contextualize spin as part and parcel of broader discourse, even though little change in position may be experienced. In 2016, that balance was lost. Though the purpose of this essay is not to adjudicate on case-specific effects, herein lies the clearest case that Russian interference had a significant effect on the 2016 presidential election—not in terms of direct voting manipulation or outputs, but in the massaging of voter preferences and priors through manipulation of the information environment. Specifically, efforts to strategically counter the mechanisms of democratic discourse were molded toward support for one political faction, with the result that the time-and-place injection of fake content and extreme rhetoric denied the opposing side reasonable control of news cycles. An immense number of citizens were thus prompted to reconcile real-world developments with a constructed reality made more tangible by a deluge of counternarratives and seemingly factual information from multiple sources. Others, by contrast and given emerging details of Russian involvement in interference, questioned the credibility of the process and in turn fed further IRA-supported claims on the right about a rigged system [149].

The trauma of unprecedented conditions

Naturally, it would be inappropriate to consider Russian efforts during the 2015–16 presidential campaign without further contextualization centered on the countervailing mechanisms of America’s democracy. In reality, there are substantial parallels to be drawn between the case of the 2016 election season and the paradigmatic debate that preceded the decision by the Bush administration to invade Iraq in 2003. In both situations, the introduction of a set of narratives designed to capture otherwise unconvinced citizens skewed the preferences of median voters, creating support for a previously disadvantaged position while also neutralizing incentives for the conventional countervailing voices of the marketplace to perform their normal functions. In this way, the role of these countervailing voices in bringing about unusual marketplace conditions in 2016 is both noteworthy and secondary to the role of meddling Russian forces.

Kaufmann, in his seminal deconstruction of the health of the MOIs in the lead-up to the 2003 Iraq war, perhaps best describes the dynamic that sometimes leads to the skewed function of countervailing voices in democracies in his invocation of Arrow’s Impossibility Theorem [23, 150]. The Theorem quite simply demonstrates that there is no way that a majoritarian system of democratic governance can consistently produce results that reflect the will of a majority of voters if there exist at least three policy or candidate alternatives that are meaningfully different (i.e., in ideological, moral, or other dimensions). In that situation, democracy becomes vulnerable to manipulations that will cause some citizens to vote against their interests simply to avoid a less desirable alternative. This matters even in cases where it appears, on paper, that there are two choices available to voters, because dilution of the issues that define the existing split in voter preferences is always possible [23]. Specifically, where there is a clearly dominant position, an obvious strategy that the opposing force can pursue is the generation of a set of issues that appears external to the existing debate. The purpose is to draw interest from supporters of the prevailing policy position via subversion. Much as National Socialist leaders famously did in the interwar years in the successful sale of their manifesto to the German people by emphasizing issues of national sovereignty, prosperity, and cultural integrity, tying core voter interests to an alternative and apparently independent position forces an electorate to split its support. The result, as Ralph Nader might attest, can be a fundamental shift in outcomes toward the otherwise disadvantaged policy or candidate, entirely in spite of static voter preferences.
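
The vote-splitting logic can be made concrete with a deliberately stylized sketch (the candidate labels and vote shares below are invented for illustration and are not drawn from any election discussed here): a fixed electorate whose majority prefers one candidate head-to-head can see the plurality outcome flip once an apparently independent alternative is introduced, with no voter changing their underlying ranking.

    # Illustrative sketch only: invented preference shares, plurality rule.
    from collections import Counter

    # Rankings run most- to least-preferred; values are percentage shares of a fixed electorate.
    electorate = {
        ("A", "C", "B"): 35,  # A-first voters who would accept alternative C
        ("C", "A", "B"): 20,  # A-leaning voters peeled off by the newly injected issue set
        ("B", "A", "C"): 45,  # B-first voters
    }

    def plurality_winner(ballots, on_ballot):
        """Tally first choices among the candidates actually on the ballot."""
        tally = Counter()
        for ranking, share in ballots.items():
            tally[next(c for c in ranking if c in on_ballot)] += share
        return tally.most_common(1)[0][0], dict(tally)

    print(plurality_winner(electorate, {"A", "B"}))       # ('A', {'A': 55, 'B': 45})
    print(plurality_winner(electorate, {"A", "B", "C"}))  # ('B', {'A': 35, 'C': 20, 'B': 45})

Nothing about the electorate changes between the two tallies except the presence of the engineered alternative; the outcome changes anyway, which is precisely the opening that issue generation exploits.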

In 2016, prominent countervailing voices of the marketplace failed to play their role on several fronts. While in some cases Russian efforts sought specifically to discredit experts as they entered the public eye linked with particular issues, such as the fake intelligence leaked regarding Attorney General Loretta Lynch that was aimed at casting her as a corrupt partisan [151], both the watchdog media and the loyal opposition of significant government figures regularly failed to act as would normally be expected given the unusual emergent information conditions. With regard to the media, the context of the election meant that much airtime was dedicated to coverage of rhetoric from candidates and, later, nominees as the founts of authority on national politics, over and above the incumbent executive, who could not stand for reelection. This amplified the authority of those candidates, as will be discussed below. But this dynamic also meant that much attention was given to narratives and positions being espoused as a set of alternatives to both the Republican and Democratic opponents deemed to hold the positional advantage over upstarts like Donald Trump and Bernie Sanders. Due to the conditions of the election, in other words, mainstream media outlets were especially prone to recirculating the narrative portfolio of positions being pushed by Russian trolls and recursively amplified by at least one nominee. The result was the muting of the watchdog functions of the media—not because criticism of seemingly deviant perspectives was uncommon, but because the media nonetheless recirculated new issues that were being injected into national discourse strategically for the purpose of undermining prevailing mainstream perspectives.

The effect of new narrative positions that endeared previously uncertain voters to outsider candidates further hampered the function of the marketplace by encouraging conservatism among both the incumbent executive and prominent legislative stakeholders. Republican lawmakers and party figureheads, uncertain about the wave of support flowing toward outsider positions that nevertheless aligned with the party institution, exercised extreme restraint when determining whether or not to check the misleading rhetoric of candidate Trump [152]. Moreover, evidence of interference from abroad was met with paralysis and with suspicion that informing the broader electorate of the uncovered details of meddling would upset what was at least partially assumed to be a favorable, if unexpected, turn in support of the Republican platform [130].

For the incumbent executive, President Obama, the manipulative introduction of an alternative set of discursive dynamics in the American electoral process produced a form of forced paralysis emerging from a natural logical tension centered on the responsibility to protect the nation [130]. On the one hand, evidence of foreign meddling presented an unprecedented threat to the American polity that had to be addressed. On the other hand, Obama feared that the impact of his intervention would be deleterious to the nation on at least two fronts. First, and most obviously, he feared that the election season meant he no longer commanded the traditional authority advantage held by presidents. Traditionally, executives have unique access to national intelligence and command a singular pulpit from which to hold national attention [153]. As such, they maintain unique abilities to steer national debate. In the context of the election, however, Obama feared that his involvement would be seen as partisan absent pre-existing bipartisan support.

Second, according to many reports, Obama was concerned about the precedent involved in any kind of retaliatory action that quickly and harshly called Russia out on its interference. Here, the clash of interests emerged in the context of the cybersecurity policy pursued by the Obama administration, particularly during the later years of his time in office. In contrast to recent developments, administration policy before 2017–18 emphasized defense, the hardening of targets deemed critical to national security, and the support of security operations in domains other than cyberspace [154]. Taking action by, say, effectively destroying the servers being used by the IRA to coordinate social media fraud might cross an as-yet-unclarified red line and provoke a severe response. Particularly given Russia’s clear willingness to act aggressively in the space through the prosecution of cyber-supported political warfare, rapid response carried obvious risks and unknown variables. The result of this and of political considerations was simple: the administration attempted to galvanize bipartisan support for a response on any grounds via high-level briefings given to, and discussions had with, Congressional leaders from both major parties. Seemingly due largely to the above-noted reluctance of Republican lawmakers to act rashly, the executive response was neither quick to materialize nor provocative when it did come following the election. The key countervailing mechanisms of democratic information assurance failed, in other words, through a logical onset of conservatism and the manifestation of a seemingly unprecedented set of rhetorical conditions on the ground.

The Trump factor

It would not do to conclude any discussion of Russian interference in the 2016 presidential campaign in the USA without considering the role of candidates (and then nominees) Donald Trump and Hillary Clinton. While not in elected positions, the frontrunner candidates in a major election for national leadership do benefit by proxy from the traditional authoritative advantages held by the president. In part, this comes from the fact that they become the focus of most national coverage for the month-long duration of the campaign’s most intense events. The advantage also emerges from the access nominees are given to limited intelligence resources and information. And in the particular case where the incumbent executive is not running for reelection, both nominees benefit from the fact that the incumbent is incentivized to avoid the appearance of partisanship where possible.

The misfortunes of candidate Clinton are described to some degree above. By far the more relevant discussion to be had, however, pivots on the rhetoric of Donald Trump, first as an upstart challenger to the initial field of Republican contenders for the nomination and then as the nominee of the party. As so many pundits and analysts have argued in the context of claims that Russian interference was a deciding issue in the 2016 election, there is a robust argument that the idiosyncrasies of candidate Trump themselves explain a shift of voter preferences sufficient to ensure the unique conditions needed for victory on election day in 2016 [155, 156]. And certainly, there is no doubt that the impact of candidate Trump’s rhetoric on national political discourse was unprecedented on a number of fronts. The question, at least with regard to the need to better unpack the processes and outputs of IW in the digital age, is how likely his rhetoric would have been to resonate with a substantial enough portion of the voting population in relevant states in the absence of Russia’s actions via the IRA and threat actors like Fancy Bear.

Here, an answer must inevitably pivot on whether or not IRA-directed forces were critical to the construction of an alternative position sufficient to capture voters who would otherwise have supported the candidacy of Hillary Clinton, an alternative Republican nominee, or a third-party candidate. As might be expected given its gravity, this essay is not the first to consider the question. One study by researchers from the University of California at Berkeley and Swansea University found that bot activity generating alternative narratives of political reality through 2016 likely produced a minor change in voter preferences as expressed at the ballot box, but one significant enough to potentially alter the outcome of an election in which the presidency was won by less than 80 000 votes [157]. This result mirrors related work by the National Bureau of Economic Research showing similar findings with regard to Russian interference in domestic debate focused on Brexit in the UK. And it is almost unarguable that Trump’s prolific social media activities intersected with Russian efforts, given that more than half of all followers of the candidate’s account have been found to be bots or propaganda-related accounts.

Regardless of whether or not Russian influence was specifically able to produce an electoral win for the favored candidate, the role of IRA-directed bots in amplifying the candidate’s voice is generally possible to surmise. In addition to the construction of counter-Democratic narratives adapted to fit the context of siloed communities of skeptical and independent voters, Russian recirculation of Trump’s social media content massively outpaced what might be expected for a conventional candidate. Over the course of 2016, Trump tweets were retweeted almost half a million times by Russian accounts, a 1000% increase over the amount of retweeting of Clinton’s own statements on the platform. Added to an additional 2.12 million bot tweets on election-related issues, retweets of Trump’s words collectively received almost half a billion impressions within a single week of posting, implying strongly that the candidate’s rhetoric was being massively and strategically disseminated off the back of Russian-supported activities.

We may never have a concrete answer to the question of Russia’s role in electing Donald Trump to the office of President of the USA. Nevertheless, it is clear not only that Trump and Clinton both took on unique roles as authoritative voices in the MOIs akin to that of the national leader, but also that, beyond that time-and-place dynamic, Trump’s voice was magnified by the machinations of the IRA, which both amplified his speech and supported the narrative conditions within which he acted to reshape democratic debate in an almost unprecedented fashion. Though the candidate was noteworthy and in some ways rhetorically unprecedented in his own right, it is likely that his actions on the campaign trail would have received more robust criticism, or that Russian interference might have been more effectively countered, had the IRA not created conditions that simultaneously masked foreign involvement and manipulated issues sufficiently to alter the prevailing payoff structure of the median voter.

The myth of IW transformed by cyber operations

In the recent experience of the USA with foreign-based cyber-enabled IW, the important role of quiet countervailing institutions and of an executive proxy in candidate Trump, whose rhetorical approach to politics embraced sensationalism, cannot be overlooked. Nevertheless, it seems clear that the design and use of modern Internet-enabled media platforms, coupled with a limited ability among relevant stakeholders and citizenry to attribute and validate information consumed thereon, are the critical factors that make the threat of IW in the digital age novel. Specifically, the ability of meddling foreign threat actors to covertly enter domestic conversations via fake accounts, to spread false narratives and facts in a manner that is generally hard to track for the average citizen, and to strategically inject information to counter the moderating effect of time on national deliberations creates an attribution challenge for the MOIs that opens space for Byzantine failures of the system. Byzantine faults are a specific class of systems failure characterized by an inability to effectually determine their source. Here, the scale and complexity of the techno-societal systems being targeted virtually dictate the likelihood of such faults. Moreover, regardless of whether or not such failures took place as a result of Russian IW from 2014 onwards, it seems clear that a lack of oversight of the design characteristics of new information dissemination platforms, together with the unfamiliarity of elites and media actors with discourse channeled through such mediums, particularly magnifies the potential for their occurrence. Simply put, though the failure of traditional marketplace mechanisms is still substantially needed for major disruptions to democratic process to occur, the confluence of circumstances brought about by new environmental conditions clearly creates new space within which information attribution and subsequent assurance are unprecedentedly difficult.
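
The borrowed term rewards a brief gloss, offered here as illustration rather than as a formal claim about electorates. In the distributed-systems literature from which the metaphor is taken, a standard result (for the classic, unauthenticated-message setting) is that a system of n components can reach reliable agreement only if the number f of arbitrarily misbehaving components satisfies

    n \ge 3f + 1, \qquad \text{equivalently} \qquad f \le \lfloor (n - 1)/3 \rfloor.

In an information system composed of millions of accounts, outlets, and intermediaries whose honesty cannot be individually verified, the tolerable share of covert malicious components is easily exceeded, and the system has no reliable way of even identifying which components have failed.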

It is tempting to say that cyber operations play a limited role in augmenting IW campaigns insofar as their effectiveness does not significantly emerge from the characteristics of either defensive or offensive cyber conflict dynamics. Certainly, the application of cyber instruments to compromise foreign networks and to either modify or steal data is a force multiplier of sorts for IW efforts. But the landscape of issues bound up in the prosecution of cyber operations campaigns and in the security of systems critical to the function of civil society actors in democratic states does not substantially intersect with the issues that lie at the core of democratic vulnerability to manipulation. Nevertheless, cyber operations have a robust adjunct effect on IW campaigns as a force modifier, and so it is perhaps more appropriate to say that cyber operations play a “lateral” (rather than a limited) role in augmenting IW campaigns. Specifically, the adjunct effects of cyber operations on IW have been seen in recent years in a few particular ways.

First, and most obviously, the ability to intrude upon the networked computer systems of individuals, organizations, and institutions relevant to an IW effort does significantly enhance the potential for effective disinformation and propaganda delivered via both traditional and nonconventional mediums. It often does so directly along two lines. First, cyber actions can enhance the reach of influence operations. With the use of social media by Russian IW operators, for instance, we saw a limited attempt to spread malware—specifically in the form of an application called “FaceMusic”—that infected users’ browsers and directed them toward fabricated content so as to artificially inflate its visibility through fraudulent “upvoting.” Second, it does so by enriching the information content available to IW operators. In the USA, information retrieved from the servers of the DCCC and DNC could easily have been used as the basis of IRA attempts to influence domestic discourse in several ways. Perhaps as important, “supposed” cyberattacks resulting in information theft were presented via mouthpieces like Guccifer 2.0 to domestic contacts and communities as the source of credible information on the intransigence of various political actors. Whether intrusions actually occurred or were fabricated under circumstances where corroboration of the act would be difficult, the value of actions taken by threat actors like Fancy Bear is clear—cyber operations can provide information more useful to the task of subverting regular discourse than might otherwise be available to a foreign belligerent force. And there is potential for enhancement over and above what might otherwise be possible in the hypothetical case of foreign operatives physically breaking into an organization’s premises to steal documents. The added value of cyber operations, in particular, comes in large part from the relative deniability involved in their employment, the lack of need for physical access to documentation, and the greater opportunity for sophisticated cyber actors to overcome defensive measures set in cyberspace and in the hygiene practices of employees. Arguably even more significantly, the value of cyber operations employed or implied in IW efforts comes from the opacity of their use. The well-documented attribution challenge in the space magnifies the utility of the method, as various actor-level incentives toward nonreporting combine with inevitable challenges in linking technical observations with sociopolitical priors to generally obscure the details of cyber conflict actions [158, 159].

Second, cyber operations often result in sociopsychological effects at the level of national citizenry and security institutions. For one, they divert popular attention from the shape of a broader IW effort. Mechanically, this happens as attention is drawn to punctuated actions in the form of cyberattacks that take news cycle precedence over the thousand cuts of information manipulation that are the bread and butter of IW campaigns. Perhaps more notably, however, cyberattacks made public act as a foil to popular narratives that might otherwise effectively consider the nature and prospects of the democratic vulnerability outlined in the sections of this essay above. Though they may be of dubious legality in the minds of citizens, cyberattacks that expose the information of elite domestic actors “do,” in the minds of at least some, contribute to a higher quality of available information in national discourse [160, 161]. Likewise, cyberattacks are often employed against symbolically significant political or national targets. This has been observed to have several potential effects. Most simply, citizens observing attacks on targets of national significance—such as the networks of critical infrastructure or of governmental defense establishments—often seize on the notion that foreign threats are being directed against the conventional infrastructure of national security and prosperity, rather than the generally harder-to-conceive terrain of democratic functionality [162]. Such conceptualizations of threat are then frozen, as is demonstrated in much research on the tendency toward doomsayer rhetoric on cyber conflict issues [162], by the manner in which practitioners, pundits, and scholars regularly portray threats emerging from cyberspace. The effect can be particularly useful for a belligerent actor insofar as the crosscutting nature of cyber threats prompts varying reactions among citizens who associate digital insecurity with different degrees of threat to personal, societal, or national political welfare. In this way, cyber operations undertaken in support of broad-scoped contestation in other domains act to effect a form of diversionary warfare.

Relatedly, cyber operations necessarily consume the responsive capacity of defensive institutions tasked with tackling cyber aggression at the local, state, and national levels. Given enough activity, this consumption can be debilitating. From intelligence organizations tasked with providing relevant expertise to civilian government agencies for federal network defense, to state institutions whose purview for information assurance extends narrowly to—for instance—election infrastructure, most critical national security entities will initially respond to intrusion in line with preset incident response protocols. Beyond that, democratic governments are perennially at risk of failing to connect the dots when threats to national security emerge in complex, nontraditional formats. Even where information sharing is effective and can be channeled through organizations like the Federal Bureau of Investigation or the Office of the Director of National Intelligence, interpretation is often a function of the political considerations of elected leadership or of bureaucratic positioning.

Over and above the strategic particulars of a given campaign in which cyber operations are employed to multiply force effects, repeated cyber incidents also directly work against the credibility of the processes of democratic systems. Significantly, amidst a broader campaign of interference composed of cyber incidents and the weaponization of stolen, fabricated, and narrative content to achieve manipulative effects, such operations work as tacit coercive signals to individual stakeholders.8 Repeated compromise of some targets and subsequent doxxing or other use of private information for political gain allows a foreign state to hold additional targets at risk, either via cyber intrusions that have not yet led to such action or by implication of the possibility. Political operators who fear the compromise of their personal or organizational secrets may be less likely to perform their role in helping assure the proper handling of information in democratic discourse. In the case of Russian interference in the USA, a recurring narrative among opinion journalists has been that Republican Party voices were held relatively in check at various points by the knowledge that cyber probing of the servers of both the RNC and various primary campaigns might mean future compromise. Whether or not this was true, the example demonstrates the obvious coercive signal that some amount of cyber compromise sends to other concerned parties. And because the signals involved are either visible or inferable beyond the affected parties, the logical result in some cases is diminished trust in previously credible countervailing voices, stemming from uncertainty regarding potential compromise.

Moreover, cyber operations also more broadly contribute to popular malaise regarding insecurity in the digital age [137]. This is, naturally, similar to the aforementioned instillation of panic that often extends from an individual cyber incident or a few clearly linked ones. And yet, it is worth noting that the issue of malaise differs in its basis from the issue of panic and misinterpretation linked to seizing and freezing on the shape of cyber warfare instead of an influence campaign. Here, at least part of the issue is that cyber operations campaigns are multifaceted efforts that involve actions of differing levels of sophistication. Particularly where cyber capabilities are employed for political warfare purposes, the degree of sophistication required of foreign threat actors in compromising contextually valuable targets is often limited. Attacks against the DCCC and DNC involved relatively simple efforts to secure login credentials at the former in order to breach the latter. This was followed, as has quite often been the case in attacks across the West prosecuted by threat actors like Fancy Bear in recent years, by systems compromise that primarily made use of preconstructed malware resources. Such variability of effects is one of several factors that builds complexity into the preventive missions of law enforcement, intelligence, and other government entities. The resulting failure to rapidly respond to threats of a cyber flavor in many instances—as well as regular confusion about who bears responsibility for response and cost mitigation—marries with the naturally high incidence of such events to generate siege narratives across national, local, and social reporting. As has been seen in communities regularly affected by natural disasters, such a result can lead to diminished trust in the public sector and active distrust of services that are particularly prone to exogenous shocks [165]. In the context of IW, of course, the implication is that cyber operations provide additional value to belligerents by degrading the credibility of national processes tasked with digital security on several fronts. Specifically, the selection of targets whose integrity is not readily assessable and that are significant to the integrity of the system—as in the case of internal political party nomination processes or of electoral infrastructure—can particularly act to further diminish the credibility of the whole.

Cyber-enabled IW, escalation, and future democratic vulnerability

As information revolutions transform global society, they naturally open up new space for contestation and conflict. Much new vulnerability surrounding major developments in communications technologies, as the late Paul Virilio noted, emerges from the need of social and political systems to adapt to new dynamics and speeds of information usage [166]. While some implications of new information technologies might be clear and opportunities may be quickly realizable, the holistic gear shift of society toward entirely new modes of operation rarely comes about in short order. With the IW threat to systems of national discourse and sociopolitical participation, it is clear that we remain in such a period characterized by mismatched information technology-based opportunities and vulnerabilities. New media systems and the rewiring of global society premised on Internet-enabled technologies create clear potential for interference and, as has been argued above, the possibility of Byzantine failures of democratic functionality.

However, the “hacking” of democracies that is the substance of so much punditry and practitioner reporting in recent years has relatively little to do with the direct employment of cyber instruments to disrupt, degrade, or spy. Rather, the threat to democratic political systems emerges from the mismatch of new systems that now undergird discourse and those regulations and norms of behavior that must be adopted in years to come to safeguard the integrity of national polities. At present, new media services and underlying algorithmic design conditions enable failures of democratic process that, regardless of whether or not they manifest as severe subversion of national systems, are difficult to detect and appear arbitrary to the median observer. Cyber operations aid the manipulative use of new media features primarily as a new, enhanced method for obtaining sensitive information that can be harnessed and directed for negative effects by a foreign power. Secondarily, cyber conflict serves as a diversionary tool of political warfare on several fronts.

It is worth noting a final issue with the role of cyber operations as an adjunct modifier of both IW and other exercises of state power, however. As alluded to several times above, digital age vulnerabilities of democratic information systems abound but are likely not insoluble. But the resolution or minimization of IW threats via addressing the underlying challenges of information assurance in democratic systems will not fundamentally neutralize all potential negative externalities of cyber instruments used to enhance political warfare. Particularly given the sociopsychological impact of cyber conflict outlined above, there is likely to remain an enduring threat of escalation of interstate tension from the use of digital instruments for harm and espionage. As evidenced particularly by the personal nature of the campaign waged against certain political actors in the USA during the 2016 election season and in support of others, there is a strong possibility that cyberattacks employed as political warfare might prompt disproportionate response in cyberspace or in other domains. Though the role of cyber operations in ensuring the success of democracy hacking may be minimal, the incentive to use cyber assets is strong. In many cases, such capabilities are aimed at unsophisticated targets. They are, as a result, not use-and-lose assets—often, where the goal is not the advanced degradation of foreign security capacity, they are either cheap or already developed.

Moreover, such capabilities also benefit from well-documented attribution difficulties. Future meddlers should bear in mind, however, that when used specifically to attack political figures or organizations, cyber acts of limited added value to an IW effort might well result in retaliation anchored in the rationale of subjective importance. Victim states or institutions may strike back with some force in order to set deterrent red lines around the compromise of specific persons or organizations deemed important to national interests. Particularly where foreign threat actors are not one and the same as a foreign leader or government in an operational sense, this potential sets up the conditions for further escalation. Thus, though the added value of cyber operations in the “hacking” of democracies may be functionally minimal, the danger in their employment should not be dismissed, and researchers would do well to further study the cross-level interaction of adjunct applications of cyber instruments and interstate conflict.

Footnotes

1. Perhaps the best of which is Farrell and Schneier [18]. Farrell and Schneier take initial steps in line with the arguments made in this paper. Nevertheless, their description of democracies as information systems is limited by the broad characterization of democratic information treatment as being generically about consensus. Other initial-but-limited efforts include Refs [19], [20], and [21].

2. It would not do here to proceed without recognizing that hybrid warfare is a term often used interchangeably and with a degree of conceptual confusion alongside others such as active measures, political warfare, irregular warfare, more recently as a description of hybrid force employed in network-dense environments, and IW. For work on hybrid warfare, see inter alia [33–36].

3. The concept of a marketplace of democratic discourse is a commonplace reference point for studies that examine drivers of public and foreign policy, broadly writ. Beyond the earliest cohesive articulations of the metaphor by John Stuart Mill and others, the model has been widely applied in prominent work on the determinants of great power conflict, on issues in conflict resolution and intervention studies, and more. For seminal uses, see inter alia [58–63].

4. Prominent work in this vein inevitably includes the program of work on the democratic or liberal peace in international relations. Examples include Refs [22, 64–70].

5. Laid out as a criticism to the conventional view of voter behavior in Ref. [78]. Thereafter taken up by a broad body of work on retrospective voting. For recent entries see, among others [79–82].

6. This is perhaps the most commonly cited requirement of democratic information systems metaphorically posed as marketplaces of ideas found in the most prominent treatments using the model in IR.

7. External subversion describes the actions of states attempting to subversively influence conditions abroad. External subversion is a common tool of statecraft and is often used to achieve ancillary aims for states (or specific rulers) interested in affecting political change abroad through more traditional means, including conquest and the securing of favorable treaty arrangements. See Ref. [124].

8. This effect has yet to be considered in the burgeoning literature on cyber coercion, notable entrants to which are Refs [96, 163, 164].

References

1

Demirjian
K.
Russian ads, now publicly released, show sophistication of influence campaign. Washington Post, 1 November
2017
.

2

Chalfant
M.
Cheney: Russian election interference could be ‘act of war’. The Hill, 27 March
2017
.

3

Jamieson
KH.
Cyberwar: How Russian Hackers and Trolls Helped Elect a President What We Don't, Can't, and Do Know
.
Oxford University Press
, Oxford,
2018
.

4

Arquilla
J.
The Advent of Netwar. Rand Corporation, Santa Monica, CA, New York, NY,
1996
.

5

Deibert
R.
Black code: censorship, surveillance, and militarization of cyberspace
.
Millennium J Int Stud
2003
;
32
:
501
30
.

6

Libicki
MC.
Conquest in Cyberspace: National Security and Information Warfare
.
New York
:
Cambridge University Press
,
2007
.

7

Reveron
DS (
ed).
Cyberspace and National Security: Threats, Opportunities, and Power in a Virtual World
.
Washington, DC
:
Georgetown University Press
,
2012
.

8

Rid
T.
Cyber war will not take place
.
J Strat Stud
2012
;
35
:
5
32
.

9

Lindsay
JR.
Stuxnet and the limits of cyber warfare
.
Secur Stud
2013
;
22
:
365
404
.

10

Gartzke
E.
The myth of cyberwar: bringing war in cyberspace back down to earth
.
Int Sec
2013
;
38
:
41
73
.

11

Brandon
V
Maness
RC.
Cyber War versus Cyber Realities: Cyber Conflict in the International System
.
Oxford University Press
, New York, NY,
2015
.

12

Chertoff
M
Rasmussen
AF.
The Unhackable Election: What It Takes to Defend Democracy
.
Foreign Aff
2019
;
98
:
156
.

13

Rid
T
Buchanan
B.
Hacking democracy
.
SAIS Rev Int Aff
2018
;
38
:
3
16
.

14

Omand
D.
The threats from modern digital subversion and sedition
.
J Cyber Policy
2018
;
3
:
5
23
.

15

Pope
AE.
Cyber-securing our elections
.
J Cyber Policy
2018
;
3
:
24
38
.

16

Hansen
I
Lim
DJ.
Doxing democracy: influencing elections via cyber voter interference
.
Contemp Pol
2019
;
25
:
150
71
.

17

Mansfield-Devine
S.
Hacking democracy: abusing the Internet for political gain
.
Network Secur
2018
;
2018
:
15
9
.

18

Farrell
HJ
Schneier
B.
Common-Knowledge Attacks on Democracy. Berkman Klein Center Research Publication No. 2018-7, October
2018
.

19

Smeets
M
Lin
HS.
Offensive cyber capabilities: to what ends? In: 2018 10th International Conference on Cyber Conflict (CyCon), pp.
55
72
. IEEE, Tallinn, Estonia,
2018
.

20

Lin
H
Kerr
J.
On cyber-enabled information/influence warfare and manipulation. In:
Cornish
P
(ed.),
Oxford Handbook of Cyber Security
.
New York
:
Oxford University Press
,
2018
. https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/ #121.

21

Paul
C
Matthews
M.
The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Options to Counter It
.
Santa Monica, CA
:
Rand Corporation
,
2016
.

22

Snyder
J
Ballentine
K.
Nationalism and the marketplace of ideas
.
Int Secur
1996
;
21
:
5
40
.

23

Kaufmann
C.
Threat inflation and the failure of the marketplace of ideas: the selling of the Iraq war
.
Int Secur
2004
;
29
:
5
48
.

24

Thrall
AT.
A bear in the woods? Threat framing and the marketplace of values
.
Secur Stud
2007
;
16
:
452
88
.

25

Paris
R.
Peacebuilding and the limits of liberal internationalism
.
Int Secur
1997
;
22
:
54
89
.

26

Reiter
D
Stam
AC.
Democracy, war initiation, and victory
.
Am Pol Sci Rev
1998
;
92
:
377
89
.

27

Cramer
JK.
Militarized patriotism: why the US marketplace of ideas failed before the Iraq War
.
Secur Stud
2007
;
16
:
489
524
.

28

Mansfield
ED
Snyder
J.
Democratization and the danger of war
.
Int Secur
1995
;
20
:
5
38
.

29

Driscoll
K
Hall
B
Sivencrona
H
Zumsteg
P.
Byzantine fault tolerance, from theory to reality. In:
International Conference on Computer Safety, Reliability, and Security
, pp.
235
48
.
Berlin, Heidelberg
:
Springer
,
2003
.

30

Sanger
DE.
Obama strikes back at Russia for election hacking
.
The New York Times
,
2016
, 29 December
2-16
.

31

Tomz
M
Weeks
JL.
Public Opinion and Foreign Electoral Intervention, Annu Rev Polit Sci 2019;1–18.

32

Fabre
C.
The case for foreign electoral subversion
.
Ethics Int Aff
2018
;
32
:
283
92
.

33

Monaghan
A.
The ‘war’ in Russia's ‘hybrid warfare’
.
Parameters
2015
;
45
:
65
.

34

Lanoszka
A.
Russian hybrid warfare and extended deterrence in eastern Europe
.
Int Aff
2016
;
92
:
175
95
.

35

Renz
B.
Russia and ‘hybrid warfare’
.
Contemp Pol
2016
;
22
:
283
300
.

36

Chivvis
CS.
Understanding Russian “Hybrid Warfare”. The RAND Corporation, Santa Monica, CA,
2017
, pp.
2
4
.

37

Gioe
DV
Lovering
R
Pachesny
T.
The soviet legacy of Russian active measures: new vodka from old stills?
.
Int J Intell Counterintell
2020
;
33
:
1
26
.

38

Armistead
L
(ed.),
Information Operations: Warfare and the Hard Reality of Soft Power
.
Potomac Books, Inc
., Lincoln, NE,
2004
.

39

Hutchinson
W
Warren
M.
Principles of information warfare
.
J Inf Warf
2001
;
1
:
1
6
.

40

Kopp
C.
Shannon, hypergames and information warfare
.
J Inf Warf
2003
;
2
:
108
18
.

41

Anderson
EA
Irvine
CE
Schell
RR.
Subversion as a threat in information warfare
.
J Inf Warf
2004
;
3
:
51
64
.

42

Jormakka
J
Mölsä
JV.
Modelling information warfare as a game
.
J Inf Warf
2005
;
12
25
.

43

Pegues
J.
Kompromat: How Russia Undermined American Democracy
.
Rowman & Littlefield
, Lanham, MD,
2018
.

44

Cebrowski
AK
Garstka
JJ.
Network-centric warfare: its origin and future
.
US Naval Instit Proc
1998
;
124
:
28
35
.

45

Moffat
J.
Complexity Theory and Network Centric Warfare
.
DIANE Publishing
, Collingdale, PA,
2010
.

46

Heil
AL.
Voice of America: A History
.
Columbia University Press
,
2003
.

47

Kushner
B.
The Thought War: Japanese Imperial Propaganda
.
University of Hawaii Press
, Honolulu, HI,
2007
.

48

Woolley
SC
Howard
PN
(eds).
Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media
.
Oxford University Press
,
2018
.

49

Monaco
NJ.
Computational propaganda in Taiwan: where digital democracy meets automated autocracy. In: Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media,
2017
.

50

Fitzgerald
CW
Brantly
AF.
Subverting reality: the role of propaganda in 21st century intelligence
.
Int J Intell Counterintell
2017
;
30
:
215
40
.

51

Gioe
DV
Goodman
MS
Frey
DS.
Unforgiven: Russian intelligence vengeance as political theater and strategic messaging
.
Intell Natl Secur
2019
;
34
:
561
75
.

52

Gorwa
R.
Computational Propaganda in Poland: False Amplifiers and the Digital Public Sphere. Samuel Woolley and Philip N. Howard, eds. Working Paper 2017.2. Oxford, UK: Project on Computational Propaganda. comprop.oii.ox.ac.uk. 32.

53

Kucharski
A.
Post-truth: study epidemiology of fake news
.
Nature
2016
;
540
:
525
.

54

Lazer
DM
Baum
MA
Benkler
Y
et al.
The science of fake news
.
Science
2018
;
359
:
1094
6
.

55

Fletcher
R
Cornia
A
Graves
L
et al. Measuring the reach of “fake news” and online disinformation in Europe. Factsheets Reuters Inst
2018
:
1
10
.

56

Leetaru
K.
Twitter's great bot purge and can we really trust active user numbers? Forbes, 21 July
2018
.

57

Romm
T.
Iranians masqueraded as foreign journalists to push political messages online, new Twitter data shows. Washington Post, 17 October
2018
.

58

Downs
A.
An Economic Theory of Democracy
.
New York
:
Harper
,
1957
.

59

Snyder
J.
Myths of Empire: Domestic Politics and Political Ambition
.
Ithaca, NY
:
Cornell University Press
,
1991
.

60

Russett
B.
Grasping the Democratic Peace: Principles for a Post–Cold War World
.
Princeton, NJ
:
Princeton University Press
,
1993
.

61

Van Evera
S.
The Causes of War.
Ithaca, NY
:
Cornell University Press
,
1999
.

62

Snyder
J.
From Voting to Violence: Democratization and Nationalist Conflict.
New York
:
W.W. Norton
,
2000
.

63

Reiter
D
Stam
AC
III
.
Democracies at War
.
Princeton, NJ
:
Princeton University Press
,
2002
.

64

Milton
J.
Areopagitica: 1644. Alex. Murray,
1868
;1.

65

Mill
JS.
On liberty. In: John R. 
A Selection of His Works
.
London
:
Palgrave
,
1966
,
1
147
.

66

Coplan
KS.
Climate change, political truth, and the marketplace of ideas
.
Utah Law Review
2012
:
545
.

67

Owen
JM.
How liberalism produces democratic peace
.
Int Secur
1994
;
19
:
87
125
.

68

Layne
C.
Kant or cant: the myth of the democratic peace
.
Int Secur
1994
;
19
:
5
49
.

69

Russett
B
Layne
C
Spiro
DE
et al.
The democratic peace
.
Int Secur
1995
;
19
:
164
84
.

70

Kinsella
D.
No rest for the democratic peace
.
Am Pol Sci Rev
2005
;
99
:
453
7
.

71

McCombs
ME
Shaw
DL.
The evolution of agenda‐setting research: twenty‐five years in the marketplace of ideas
.
J Commun
1993
;
43
:
58
67
.

72

Mirowski
P.
A visible hand in the marketplace of ideas: precision measurement as arbitage
.
Sci Context
1994
;
7
:
563
89
.

73

Krebs
RR
Kaufmann
C.
Selling the market short? The marketplace of ideas and the Iraq war
.
Int Secur
2005
;
29
:
196
207
.

74

Dunne
T.
Liberalism, international terrorism, and democratic wars
.
Int Relat
2009
;
23
:
107
14
.

75

Cramer
JK
Thrall
AT.
Introduction: understanding threat inflation. In: Jane C. and A. Trevor Thrall eds
American Foreign Policy and the Politics of Fear
.
Routledge
, Oxford, Oxfordshire,
2009
,
19
33
.

76

Drezner
DW.
The Ideas Industry
.
Oxford University Press
, New York, NY
2017
.

77

Dahl
RA.
On Democracy
.
Yale university press
, New Haven, CT,
2008
.

78

Schumpeter
JA.
Capitalism, Socialism and Democracy
.
Routledge
, London, UK,  
2013
.

79

Kousser
T.
Retrospective voting and strategic behavior in European Parliament elections
.
Elect Stud
2004
;
23
:
1
21
.

80

Berry
CR
Howell
WG.
Accountability and local elections: rethinking retrospective voting
.
J Polit
2007
;
69
:
844
58
.

81

Woon
J.
Democratic accountability and retrospective voting: a laboratory experiment
.
Am J Polit Sci
2012
;
56
:
913
30
.

82

Healy
A
Malhotra
N.
Retrospective voting reconsidered
.
Annu Rev Polit Sci
2013
;
16
:
285
306
.

83

Christopher
HA
Bartels
LM.
Democracy for Realists: Why Elections Do Not Produce Responsive Government
, Vol.
4
.
Princeton University Press
, Princeton, NJ,
2017
.

84

Schmuhl
R
Picard
RG.
The marketplace of ideas
.
The press. Institutions of American democracy series
2005
:
141
55
.

85

Burnell
P
Schlumberger
O.
Promoting democracy–promoting autocracy? International politics and national political regimes
.
Contemp Pol
2010
;
16
:
1
15
.

86

Klyamkin
I
Timofeev
L.
Tenevaya Rossiya [Shadow Russia]. Economic and sociological research. Russian State Humanities University, Moscow, Russia,
2000
.
224
5

87

Golosov
GV.
The 2012 political reform in Russia: the interplay of liberalizing concessions and authoritarian corrections
.
Probl Post-Communism
2012
;
59
:
3
14
.

88

Osipian
AL.
Loyalty as rent: corruption and politicization of Russian universities
.
Int J Sociol Soc Pol
2012
;
32
:
153
67
.

89

Calingaert
D.
Election rigging and how to fight it
.
J Democ
2006
;
17
:
138
51
.

90

King
G
Pan
J
Roberts
ME.
How censorship in China allows government criticism but silences collective expression
.
Am Polit Sci Rev
2013
;
107
:
326
43
.

91

King
G
Pan
J
Roberts
ME.
Reverse-engineering censorship in China: randomized experimentation and participant observation
.
Science
2014
;
345
:
1251722
.

92

Hassid
J.
Safety valve or pressure cooker? Blogs in Chinese political life
.
J Commun
2012
;
62
:
212
30
.

93

MacKinnon
R.
Networked authoritarianism in China and beyond: implications for global internet freedom. Liberation Technology in Authoritarian Regimes, Stanford University, Palo Alto, CA,
2010
.

94

Han
R.
Adaptive persuasion in cyberspace: the ‘Fifty Cents Army’ in China, APSA 2013 Annual Meeting Paper, New York, NY,
2013
. https://ssrn.com/abstract=2299744.

95

Shane
S
Mazetti
M.
The plot to subvert an election: unraveling the Russia story so far. New York Times, 20 September
2017
.

96

Valeriano
B
Jensen
BM
Maness
RC.
Cyber Strategy: The Evolving Character of Power and Coercion
.
Oxford University Press
,
2018
.

97

Mazzetti
M
Benner
K.
U.S. Indicts 12 Russian Agents in 2016 Election Hacking. New York Times. 5 September
2018
.

98

Rid
T.
Disinformation: A Primer in Russian Active Measures and Influence Campaigns. U.S. Senate Select Committee on Intelligence Hearing, 30 March
2017
. https://www.intelligence.senate.gov/sites/default/les/documents/os-trid-033017.pdf (1 October 2017, date last accessed).

99

Lipton
E
Sanger
DE
Shane
S.
The Perfect Weapon: How Russian Cyberpower Invaded the U.S. New York Times, 13 December
2016
. https://www.nytimes.com/2016/12/13/us/politics/russia-hack-election-dnc.html?_r=0 (7 July 2017, date last accessed).

100

Sanger
DE
Corasaniti
N.
DNC Says Russian Hackers Penetrated Its Files, Including Dossier on Donald Trump. New York Times, 14 June
2016
. https://www.nytimes.com/2016/06/15/us/politics/russian-hackers-dnc-trump.html? mcubz=3&_r=0 (29 September 2017, date last accessed).

101

Farkas
J
Bastos
M.
State propaganda in the age of social media: examining strategies of the internet research agency. In: 7th European Communication Conference (ECC),
2018
.

102

Boatwright
BC
Linvill
DL
Warren
PL.
Troll factories: the internet research agency and state-sponsored agenda building. Resource Centre on Media Freedom in Europe,
2018
.

103

Linvill
DL
Warren
PL.
Troll factories: the internet research agency and state-sponsored agenda building,
2018
. pwarren.people.clemson.edu/Linvill_Warren_TrollFactory.pdf.

104

Farkas
J
Bastos
M.
IRA propaganda on Twitter: stoking antagonism and tweeting local news. In: Proceedings of the 9th International Conference on Social Media and Society, pp.
281
5
. ACM,
2018
.

105

Farwell
JP.
Countering Russian Meddling in US Political Processes
.
Parameters
2018
;
48
:
37
47
.

106

Satter
R
Donn
J
Day
C.
Inside story: how Russians hacked the democrats' emails: how did Russian hackers pry into Clinton campaign emails? Huge effort made quick work. US News, 4 November
2017
. Associated Press.

107

Dean
K
, et al. , The Facebook Ads Russians Targeted at Different Groups, Washington Post, 1 November
2017
. https://www.washingtonpost.com/graphics/2017/business/russian-ads-facebook-targeting/? nore direct=on&utm_term=.eec213460940.

108

Politico Staff
. The Social Media Ads Russia Wanted Americans to See, Politico, 1 November
2017
. https://www.politico.com/story/2017/11/01/social-media-ads-russia-wanted-americans-to-see-244423.

109

Posner
S.
What Facebook can tell us about Russian sabotage of our election. Washington Post, 27 September
2017
.

111. Kim YM, Hsu J, Neiman D, et al. The stealth media? Groups and targets behind divisive issue campaigns on Facebook. Polit Commun 2018;35:515–41.

112. Bond RM, Fariss CJ, Jones JJ, et al. A 61-million-person experiment in social influence and political mobilization. Nature 2012;489:295–8.

113. Phillips C. Twitter CEO Jack Dorsey admits ‘left-leaning’ bias but says it doesn’t influence company policy. Washington Post, 19 August 2018.

114. Amanda S. Google apologizes for mis-tagging photos of African Americans. CBS News, 1 July 2015.

115. Stefanidis A, Cotnoir A, Croitoru A, et al. Demarcating new boundaries: mapping virtual polycentric communities through social media content. Cartogr Geogr Inf Sci 2013;40:116–29.

116. Hemsley JJ, Eckert J. Examining the role of “place” in Twitter networks through the lens of contentious politics. In: 2014 47th Hawaii International Conference on System Sciences (HICSS), pp. 1844–53. IEEE, Honolulu, HI, 2014.

117. Anselin L, Williams S. Digital neighborhoods. J Urban 2016;9:305–28.

118. Michael K. Bots trending now: disinformation and calculated manipulation of the masses. IEEE Technol Soc Mag 2017;36:6–11.

119. Keller TR, Klinger U. Social bots in election campaigns: theoretical, empirical, and methodological implications. Polit Commun 2019;36:171–89.

120. Manor I. The specter of echo chambers—public diplomacy in the age of disinformation. In: The Digitalization of Public Diplomacy. Cham: Palgrave Macmillan, 2019, 135–76.

121. Woolley SC. Automating power: social bot interference in global politics. First Monday 2016;21.

122. Jamieson K. Could Russian trolls have helped elect Donald Trump? Washington Post, 10 November 2017.

123. Davlashyan N, Charlton A. Russian bots, trolls test waters ahead of US midterms. Associated Press, 15 July 2018.

124. Laurence B. Power Through Subversion. Washington, DC: Public Affairs Press, 1972.

125. Department of Homeland Security. Joint Statement on Election Security, 7 October 2016. https://www.dhs.gov/news/2016/10/07/joint-statement-department-homeland-security-and-office-director-national.

126. Mayer J. How Russia helped swing the election for Trump. New Yorker, 1 October 2018.

127. Sanger DE, Savage C. US says Russia directed hacks to influence elections. New York Times, 7 October 2016.

128. Nakashima E. Russian government hackers penetrated DNC, stole opposition research on Trump. Washington Post, 14 June 2016.

129. Rid T. How Russia pulled off the biggest election hack in US history. Esquire, 20 October 2016.

130. Sanger DE. The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age, 2018.

131. Gaouette N. FBI's Comey: Republicans also hacked by Russia. CNN, 10 January 2017.

133. Bennett B. CIA chief calls WikiLeaks ‘non-state hostile intelligence service’. Los Angeles Times, 13 April 2017.

134. Satter R. Inside story: how Russians hacked the Democrats’ emails. Associated Press, 4 November 2017. https://www.apnews.com/dea73efc01594839957c3c9a6c962b8a.

135. Berzon A, Barry R. How alleged Russian hacker teamed up with Florida GOP operative. The Wall Street Journal, 25 May 2017.

136. Heikkilä N. Online antagonism of the alt-right in the 2016 election. Eur J Am Stud 2017;12. https://journals.openedition.org/ejas/12140.

137. Gillies J. Introduction: the 2016 US presidential election. In: Political Marketing in the 2016 US Presidential Election. Cham: Palgrave Macmillan, 2018, 1–9.

138. 18 revelations from Wikileaks' hacked Clinton emails. BBC, 27 October 2016.

139. Shear M. Released emails suggest the D.N.C. derided the Sanders campaign. New York Times, 22 July 2016.

140. Clifton D. Russian trolls stoked anger over Black Lives Matter more than was previously known. Mother Jones, 30 January 2018.

141. Bessi A, Ferrara E. Social bots distort the 2016 US Presidential election online discussion. First Monday 2016;21(11).

142. Smialek J. Twitter bots may have boosted Donald Trump’s votes by 3.23%, researchers say. Time, 21 May 2018.

143. Vargo CJ, Guo L, Amazeen MA. The agenda-setting power of fake news: a big data analysis of the online media landscape from 2014 to 2016. New Media Soc 2018;20:2028–49.

144. Allcott H, Gentzkow M. Social media and fake news in the 2016 election. J Econ Perspect 2017;31:211–36.

145. Update on Twitter’s review of the 2016 US election. Twitter Public Policy, 19 January 2018.

146. McKay T. Reddit: we've found 1,000 suspected Russian troll accounts, but most of them sucked at getting upvotes. Gizmodo, 10 April 2018.

147. Kim D, Graham T, Wan Z, et al. Tracking the digital traces of Russian trolls: distinguishing the roles and strategy of trolls on Twitter. arXiv preprint arXiv:1901.05228, 2019.

148. Inkster N. Information warfare and the US presidential election. Survival 2016;58:23–32.

149. Sinclair B, Smith SS, Tucker PD. “It’s largely a rigged system”: voter confidence and the winner effect in 2016. Polit Res Q 2018;71:854–68.

150. Arrow KJ. Social Choice and Individual Values, 2nd edn. New York: Wiley, 1963.

151. Atkinson R. Content analysis of Russian trolls' tweets circa the 2016 United States presidential election. 2018.

152. Many in the GOP were wary of Trump. They’re coming around. Washington Post, 3 June 2016.

153. Muir WK. The bully pulpit. Pres Stud Q 1995;25:13–7.

154. Flowers A, Zeadally S. US policy on active cyber defense. J Homel Secur Emerg Manag 2014;11:289–308.

155. Fortunato D, Hibbing MV, Mondak JJ. The Trump draw: voter personality and support for Donald Trump in the 2016 Republican nomination campaign. Am Polit Res 2018;46:785–810.

156. Williams EA, Pillai R, Deptula BJ, et al. Did charisma “Trump” narcissism in 2016? Leader narcissism, attributed charisma, value congruence and voter choice. Pers Individ Dif 2018;130:11–7.

157. Gorodnichenko Y, Pham T, Talavera O. Social Media, Sentiment and Public Opinions: Evidence from #Brexit and #USElection, No. w24631. National Bureau of Economic Research, 2018.

158. Rid T, Buchanan B. Attributing cyber attacks. J Strateg Stud 2015;38:4–37.

159. Tsagourias N. Cyber attacks, self-defence and the problem of attribution. J Confl Secur Law 2012;17:229–44.

160. Beyer JL. The emergence of a freedom of information movement: Anonymous, WikiLeaks, the Pirate Party, and Iceland. J Comput Mediat Commun 2014;19:141–54.

161. Pieterse JN. Leaking superpower: WikiLeaks and the contradictions of democracy. Third World Q 2012;33:1909–24.

162. Gomez MAN. Sound the alarm! Updating beliefs and degradative cyber operations. Eur J Int Secur 2019;4:190–208.

163. Whyte C. Ending cyber coercion: computer network attack, exploitation and the case of North Korea. Comp Strat 2016;35:93–102.

164. Lindsay JR, Gartzke E. Coercion through cyberspace: the stability-instability paradox revisited. In: Greenhill KM, Krause PJP (eds), The Power to Hurt: Coercion in Theory and in Practice. New York: Oxford University Press, 2016, 179–203.

165. Fleming DA, Chong A, Bejarano HD. Trust and reciprocity in the aftermath of natural disasters. J Dev Stud 2014;50:1482–93.

166. Virilio P. Speed and information: cyberspace alarm! CTheory 1995:8–27.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.