<img height="1" width="1" src="https://www.facebook.com/tr?id=154003588595255&amp;ev=PageView &amp;noscript=1">
Shift Media has now merged with EditShare! Learn more.     

     

Disinformation Breakdown: What Brands (and Democracies) Need to Know

Disinformation campaigns go beyond the realm of politics

September 11, 2019


[Image courtesy of Nike]

Brands deal with customer feedback online every day, much of it negative. Complaints can even present an opportunity: quickly addressing negative feedback can improve customers’ perception, their loyalty, and ultimately a brand’s bottom line. But what about the extreme case, when a wave of rabid anger erupts out of the wild stretches of the Internet?

Take the case of Nike when it chose Colin Kaepernick for its famous Just Do It campaign. The initial wave of reaction online was authentic: real people expressing real opinions, pro and con. But the next phase of furious debate, on social media and eventually throughout the press, was the work of bad actors who seized the opportunity to feed conflict and exacerbate Americans’ polarization. In a further attempt to sow distrust, 4chan trolls created a fake QR-coded coupon for African-American customers that, when scanned by a Nike store clerk, would read as a robbery threat. Fortunately for everyone, the ploy was a complete failure.

Disinformation is the name for this type of campaign, one that deliberately spreads false information with the intent of damaging its target. To be clear, it’s not the same as misinformation, which is simply false information spread unintentionally. Disinformation has many facets. (Heard about Russia’s interference in the 2016 presidential election? That’s a drop in the bucket: a fat, very sinister drop, to be sure.) Nor is it a brand-new phenomenon. But the Internet, the many ways social media can be exploited, and reactionary politics have given disinformation new potency, one that now extends to attacks on brand integrity. “We don’t believe that brand value is determined as much by brand anymore as it is by word of mouth and this digital, social conversation,” says Paul Michaud, VP at Sprinklr, a social media management company. “[Bad actors] have recognized this power.”

The shape of disinformation

The barrier to entry is low. Disinformation campaigns cost little to nothing and, on a basic level, don’t require particular skills like coding or hacking. At the same time, such campaigns increasingly deploy bots and malware to spread the deception more readily. So while producing disinformation is cheap, its cost to corporations can reach millions of dollars: a successful campaign erodes consumer trust, which depresses sales and sometimes stock prices. According to the 2019 Brand Disinformation Study by Sebring Web Solutions, 30 percent of respondents said they lose trust in a brand altogether after a “reputation-damaging event,” and 42 percent said it takes three years or more for a brand to earn back their trust.


Into this morass wades Jonathon Morgan, CEO of the information integrity company New Knowledge. He and the tech and security experts behind the four-year-old business found themselves in a unique position to assist brands in fighting disinformation after spending years in the field of online extremism — i.e., counterterrorism, believe it or not. Studying recruitment tactics by groups like ISIS honed their skills in digital content distribution (like social media and online ads), online communities, and machine learning. As Morgan puts it, they learned “how to measure radicalization or strong, sharp changes in ideology, which is similar, in a weird way, to brand affinity. [T]he stakes are lower — no one’s dying when we talk about erosion of brand affinity. But the mechanics are the same. [T]his problem we were focused on for a long time was very applicable in [the] new world of information integrity as it relates to brands.”


Basically, the artificial intelligence (AI) developed by New Knowledge examines how an idea spreads across a given site and classifies that spread as organic or artificial. When an idea or belief moves organically online (an authentic shift in opinion), the spread looks one way. When a group is secretly coordinating what looks like a building consensus (typically made to appear much larger than it is through the use of bots), the spread looks different. The AI identifies signs of that coordination, detecting people (or, more often, bots) who are, in Morgan’s words, “describing a similar idea at a similar time in a way that is unnaturally consistent based on what we could expect the Internet to do.”
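New Knowledge’s models are proprietary, so the sketch below is only a toy illustration of the underlying idea Morgan describes: flag pairs of accounts whose posts are “unnaturally consistent” in both wording and timing. The Post structure, thresholds, and string-similarity measure are assumptions chosen for readability, not anything the company has published.

```python
# Toy coordination check (illustrative only, not New Knowledge's system):
# flag posts from different accounts that are near-identical in text
# and land within a short time window of each other.
from dataclasses import dataclass
from datetime import datetime, timedelta
from difflib import SequenceMatcher
from itertools import combinations

@dataclass
class Post:
    account: str
    text: str
    time: datetime

WINDOW = timedelta(minutes=10)  # "a similar time" -- illustrative threshold
SIMILARITY = 0.8                # "a similar idea" -- illustrative threshold

def suspicious_pairs(posts):
    """Return pairs of posts from different accounts that are alike
    in both wording and timing."""
    flagged = []
    for a, b in combinations(posts, 2):
        if a.account == b.account:
            continue
        close_in_time = abs(a.time - b.time) <= WINDOW
        alike = SequenceMatcher(
            None, a.text.lower(), b.text.lower()
        ).ratio() >= SIMILARITY
        if close_in_time and alike:
            flagged.append((a, b))
    return flagged

posts = [
    Post("user_a", "Boycott BrandX now, they hate their customers!", datetime(2019, 9, 1, 12, 0)),
    Post("user_b", "Boycott BrandX now, they hate their customers", datetime(2019, 9, 1, 12, 3)),
    Post("user_c", "Just tried BrandX's new shoes, pretty comfy.", datetime(2019, 9, 1, 12, 5)),
]
for a, b in suspicious_pairs(posts):
    print(f"{a.account} and {b.account} posted near-duplicates minutes apart")
```

A production system would use learned text embeddings and account-network structure rather than raw string matching, but even this crude check surfaces the telltale pattern: distinct accounts saying the same thing at the same moment.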


Covert coordination is a key part of disinformation: a small set of people creates the false impression of a far larger number of unrelated individuals acting spontaneously. It’s how unsuspecting Internet users get ensnared in online turmoil that’s largely faked. (Remember the 2016 election? Exercising healthy skepticism online could really help us in 2020, or Every. Single. Day, for that matter.)

The bleed between real and fake

Although the field of online brand integrity is growing, Morgan describes New Knowledge as unique in its focus on context to detect disinformation. In contrast, UK company Factmata focuses on content (the words in an article, for instance) via AI and natural language processing. Factmata even offers an app that scores any online English-language news article for political bias and hate speech. Factmata’s CEO, Dhruv Ghulati, writes in an email, “A lot of bots use specific language to spread propagandist messages — we can add that [to our AI] as a signal.” And he advises brands, “What you really want is an analyst doing the research on the key opinions, claims, and arguments people are making . . . not just analyzing reputation risk and responding to crisis, but literally knowing what people think about something (which might also be helpful for product research).”
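Factmata hasn’t published its models either, but content-side scoring in the broadest sense looks like the hypothetical sketch below: train a text classifier on labeled examples, then ask it for a probability on a new article. The tiny training set, the labels, and the “propaganda-likeness” framing are all invented for illustration; scikit-learn’s standard text pipeline stands in for the real thing.

```python
# Toy content-based scorer (not Factmata's model): a bag-of-words
# classifier that assigns a "propaganda-likeness" probability to text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set; a real system needs thousands of labeled examples.
texts = [
    "SHARE before they DELETE this! The truth THEY don't want you to see!",
    "Wake up! The mainstream media is LYING to you about everything!",
    "The company reported quarterly earnings in line with analyst estimates.",
    "The city council approved the new transit budget on Tuesday.",
]
labels = [1, 1, 0, 0]  # 1 = propagandist style, 0 = neutral reporting

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

article = "They are HIDING the truth! Share this before it gets deleted!"
score = model.predict_proba([article])[0][1]  # probability of class 1
print(f"propaganda-likeness: {score:.2f}")
```

The contrast with the previous sketch is the point: this one only reads the words on the page, while a context-based system ignores the words and watches how, when, and by whom they spread.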

Needless to say, the growing threat presents brands with both a steep learning curve and an urgent need to distinguish real customers from bad actors, and a PR crisis from a successful disinformation campaign. With the right data, companies can allocate resources appropriately in each case and minimize the bleed between real and fake. As Morgan says, better analysis helps companies “get the most out of the social media environment, as opposed to being victimized.”

Sprinklr VP Michaud reminds brands, “You’ve got to be listening constantly, and you need AI to help sort it and route it to people when things go viral.” In some situations, he says, the best option is restraint: “Don’t fuel the fire when you’re in a brand crisis, and you believe that your ads may only exacerbate that conversation. [W]e warn brands about the need to have good governance [and the controls] to be able to instantaneously shut things off.” This is of particular concern when brands use different agencies for every content and media channel. “The more companies between you and that channel, the slower it’s going to be. [Y]ou need the underlying technology to help you do it fast, to literally hit a button and say, ‘Stop,’” Michaud cautions.
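Michaud’s “listening constantly” boils down to monitoring mention volume and routing anomalies to humans. A minimal sketch of that kind of alert, assuming hourly mention counts from whatever listening tool a brand already uses (the window, threshold, and function name here are illustrative, not Sprinklr’s product):

```python
# Toy "always listening" alert (illustrative, not Sprinklr's product):
# flag when the latest hour's brand mentions far exceed the rolling baseline.
from statistics import mean

def spike_alert(hourly_mentions, window=24, factor=3.0):
    """True if the latest hour exceeds `factor` times the trailing average."""
    if len(hourly_mentions) <= window:
        return False  # not enough history to establish a baseline
    baseline = mean(hourly_mentions[-window - 1:-1])  # previous `window` hours
    return hourly_mentions[-1] > factor * max(baseline, 1.0)

history = [40, 38, 41, 45, 39, 42] * 4 + [950]  # sudden surge in the last hour
if spike_alert(history):
    print("Mention volume spiked: route to a human for review")
```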

The information ecosystem

It turns out transparency can be a strength for brands grappling with disinformation. Back to Sebring’s 2019 Brand Disinformation Study: 45 percent of respondents said they’d trust a brand more if it publicly announced that it’s fighting disinformation. You might even gain customers’ assistance; according to the 2019 Brand Disinformation Impact Study commissioned by New Knowledge, 18 percent of consumers surveyed said they would take action to defend a brand they trust when it’s the target of disinformation. On another level, more cross-industry collaboration (companies sharing reports, for instance, as opposed to the current norm of “every man for himself”) would improve the detection of disinformation.

It must be said, though, that issues of online deception, manipulation, and deepening distrust affect far more than corporate performance. Many perpetrators of disinformation seek to undermine the social fabric, trust in institutions overall, and democracy itself. Maintaining the health of the information ecosystem (i.e., the Internet) is in everyone’s interest; there’s far more at stake than any brand’s popularity. Social media platforms, legislators, and individuals all have a role. Morgan urges Internet users to note the context of information spreading online, rather than fixating, as we now tend to, on a specific “fake news” article or doctored image. “It’s about the sharing of it, not about the content itself,” he says, suggesting that a platform like Facebook could make a huge difference just by tagging an article with a note to the effect of: “There seems to be a coordinated effort to ensure this item is popular.”

As advice for everyone else, Michaud points out, “[Y]ou should only have a handful of trusted sources. If you don’t know the source, don’t share it, don’t believe it.”

Elizabeth Jackson is a fiction writer, classical violinist, and punk rock bassist. She’s currently revising her novel, Behind and Past and Front and Ahead, about a fantastical narcissist, his arsonist child, and the fallout of white supremacy. A longtime resident of Austin, Texas, Elizabeth has origins in Mississippi that inform her sense of stories, language, and weirdos.