Combatting Disinformation Campaigns Using AI-Based Threat Intelligence

April 12, 2021
Disinformation campaigns, spread via social media, aim to sow uncertainty, spread conspiracy theories, or sway public opinion. They can also threaten public safety.

Local law enforcement agencies face a number of challenges when attempting to monitor, investigate, and take action against disinformation events. Threat actors can easily conceal their identities on social media platforms using fake accounts. Identifying an individual threat actor can prove difficult enough, but investigators must also contend with false information spreading through many-layered, but loosely structured, networks. Such networks sometimes use misdirection to hide their origins. A group of propagandists in New York, for example, could spread information through a news site based in Europe to make it appear to come from a disinterested source.

Complex disinformation networks conceal the motives and objectives of the individuals involved. The task for investigators is to untangle the networks, determine the threat actors' game plan and take steps to disrupt their activities. 

Those activities can occur across a wide spectrum: Disinformation events range from nation-state initiatives that destabilize entire countries to false narratives that incite lone wolf threat actors. Regardless of the source or scale, such events can inspire individuals or groups to make and carry out threats. 

A force multiplier 

With much at stake, and frequently limited investigative resources, law enforcement agencies need a force multiplier to scour the web for threat actors and stay ahead of potential threats to lives and property. AI-driven threat intelligence can provide the leverage investigators need to successfully expose a disinformation campaign before deceptive words lead to violent actions. 

Conducting a successful investigation without automated assistance is close to impossible. The social media accounts that could potentially serve as part of a dissemination network number in the millions. Websites trafficking in conspiracy theories occupy not only the familiar "surface" web but also its dark web counterpart. The problem of searching the vastness of the web for signs of trouble compounds when one considers the multitude of hashtags, and clusters of related hashtags, that could indicate trouble ahead. The scope of continuous monitoring is simply too enormous for investigators to manage through manual means alone.

Automation also offers the benefit of speed, the pivotal element of any kind of investigation. Timely access to trustworthy and actionable threat intelligence puts law enforcement agencies in a position to deal with disinformation before it metastasizes into a dangerous incident. 

Indeed, intervention at an early stage can prove critical. Disinformation efforts are crafted to spark the curiosity of recipients and, if successful, draw them deeper and deeper into a web of untruths. Such campaigns appeal to an individual's confirmation biases, creating an echo chamber that cuts off other sources of information and delegitimizes differing points of view. In addition, the social media outlets that serve as key conduits for disinformation also cultivate communities of like-minded individuals. These communities employ social pressure to reinforce an individual's belief in fake news and conspiracy theories.

The threat actors or networks behind disinformation may ultimately seek to radicalize recipients. This final step in the escalation process can result in physical violence or online threat-making and abuse of perceived enemies.

The role of AI in threat intelligence

Disinformation relies on the psychology of group identity, but technology provides the tools of this deceptive trade.  The Library of Congress' Law Library cites "the use of technological tools and techniques, including bots, big data, trolling, deep-fakes, and others, enabling those intending to manipulate public opinion…"

Disinformation's technology foundation requires a technology-driven response. Automated web intelligence, coupled with AI and machine learning, can play a key role in providing threat intelligence before, during, and after an event.

This capability monitors the online environment -- including social media sites, posts, and comments. That's no small endeavor given the vast population of social media users, which Statista estimates to exceed 3.6 billion. Many of those users have multiple accounts, complicating matters. Automation and AI are the only practical way to sift through the social media-generated data and zero in on the actionable intelligence. A wholly manual investigation would occupy a bevy of human analysts for untenable amounts of time.

Armed with AI, investigators can make an online sweep using search parameters dictated by the case they are working on, which could include geospatial data, keywords, images, and hashtags. Law enforcement agencies can employ this technology to identify the early warning signs of a potential civil disturbance. If a group is planning a protest within an agency's jurisdiction, for instance, investigators could construct a social media probe to search for the names of cities, specific neighborhoods, or streets and adjacent keywords such as "kill," "gun" or "attack." 
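As a rough illustration of such a probe, the sketch below flags posts that mention a watched location alongside a watched keyword. The post structure, keyword list, and place names are invented for illustration and do not reflect any particular platform's or vendor's API.

```python
# Hypothetical sketch of a keyword/location probe over social media posts.
# WATCH_KEYWORDS, WATCH_PLACES, and the sample posts are illustrative
# assumptions, not real data or a real product interface.

WATCH_KEYWORDS = {"kill", "gun", "attack"}
WATCH_PLACES = {"main street", "elm street"}

def flag_post(post: dict) -> bool:
    """True if the post pairs a watched place with a watched keyword."""
    text = post.get("text", "").lower()
    return (any(place in text for place in WATCH_PLACES)
            and any(word in text for word in WATCH_KEYWORDS))

posts = [
    {"text": "Peaceful rally on Main Street this Saturday"},
    {"text": "Bring a gun to the Main Street rally"},
]
flagged = [p for p in posts if flag_post(p)]
```

A real deployment would of course draw on far richer signals (geotags, images, account metadata) than plain substring matching, but the filtering logic follows the same shape.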

In today’s challenging investigative environment, law enforcement agents need an AI-driven capability to explore the so-called dark web, as well as popular surface websites and social media platforms. Dark websites are unindexed by conventional search engines and can only be accessed via anonymizing browsers. Sites on the dark web harbor extremist groups and serve as incubators for conspiracy theories, many of which eventually make the rounds on mainstream websites and social media outlets. So, the ability to search the dark web provides an opportunity to investigate disinformation efforts at an early stage. 

But the anonymous nature of web activity -- fake social media accounts, bots, and the dark web -- helps threat actors conceal themselves. As a disinformation event unfolds, investigators must penetrate the anonymity to pinpoint the individuals and groups behind the stealthy campaigns. Here, AI and machine learning help investigators pull together diverse pieces of data that even the most calculating threat actors will end up leaving behind. The aggregation and correlation capabilities of those technologies tie up the loose ends. Investigators can associate a threat actor's online "handle," for example, with other bits of information such as a mainstream social media account linked to a dark website or an encryption key assigned to a conventional email account. They can quickly deanonymize and identify a threat actor.

Uncovering networks, revealing objectives 

AI can also paint the bigger picture for investigators, conducting social media network analyses that may uncover additional individuals connected to a particular threat actor. As more individuals are uncovered, the outlines of disinformation networks begin to emerge. 
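A toy version of that network expansion, using a hand-built follow graph with fictitious accounts, looks like this (real analyses would run over platform-scale data):

```python
# Illustrative breadth-first expansion from one known threat actor through a
# follow graph. Edge list and account names are entirely made up.

follows = {
    ("actor_a", "actor_b"),
    ("actor_b", "actor_c"),
    ("actor_c", "actor_a"),
    ("bystander", "actor_a"),
}

def neighbors(account):
    """Accounts connected to `account` in either direction."""
    return ({b for a, b in follows if a == account}
            | {a for a, b in follows if b == account})

def expand(seed, hops):
    """Collect every account within `hops` links of the seed."""
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {n for acct in frontier for n in neighbors(acct)} - seen
        seen |= frontier
    return seen

network = expand("actor_a", 2)
```

Each hop outward surfaces further candidates, which is how the outlines of a disinformation network emerge from a single identified actor.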

Such networks are important to unearth since the participants coordinate activities to amplify their messages. Hashtags have become a key mechanism for disseminating disinformation and targeting campaigns toward specific audiences. Networks may seize upon opportunities to add their hashtags to those publicizing an upcoming protest. While the anchor hashtag could be neutral -- #ProtestOnMainStreet -- the piggybacking hashtags may be incendiary, urging violent action.

Such strategic use of hashtags helps threat actors and networks steer recipients slowly and deliberately through a process of escalation. As a result, what was ostensibly a peaceful protest could become a riot.  Law enforcement agencies equipped with AI-based investigative tools can monitor social media to discover hashtag trends moving in an ominous direction.
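One simple way to surface piggybacking hashtags is to count which tags co-occur with a neutral anchor tag. A sketch with made-up posts and tag names:

```python
# Count hashtags that ride along with an anchor hashtag. Posts and tags are
# fabricated; a real monitor would stream these from platform APIs.
from collections import Counter

def cooccurring_tags(posts, anchor):
    """Tally tags appearing in the same post as the anchor (case-folded)."""
    counts = Counter()
    anchor = anchor.lower()
    for tags in posts:
        tagset = {t.lower() for t in tags}
        if anchor in tagset:
            counts.update(tagset - {anchor})
    return counts

posts = [
    ["#ProtestOnMainStreet", "#BurnItDown"],
    ["#ProtestOnMainStreet", "#BurnItDown", "#Peace"],
    ["#WeekendPlans"],
]
ranked = cooccurring_tags(posts, "#ProtestOnMainStreet").most_common()
```

A rising count for an incendiary tag alongside a neutral anchor is exactly the "ominous direction" an analyst would want flagged.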

Threat actor networks -- and their followers -- can prove quite fluid over the course of a disinformation campaign. Adherents to a particular conspiracy theory may decamp en masse to a fringe site when a popular social media platform expels them or censors their activities. To track such migrations, investigators need a wide-ranging web intelligence (WEBINT) capability that spans both established sites and emerging, or obscure, properties that address narrower audiences.

Using AI to unravel the identities of threat actors and networks also provides valuable insight into their motivations and objectives. A law enforcement agency could determine that a disinformation campaign targeting a municipal government stems from individuals or groups unhappy with a pending development project. City managers might then decide to seek compromise in a bid to forestall any additional tumult. 

In general, the more investigators know about the architects of disinformation the faster they can implement effective intervention strategies. 

AI and machine learning can also contribute to post-event analyses. If a disinformation campaign did spiral out of control, analysts will want to learn why and how. Continuous vetting models, built on AI and machine learning, can help, enabling investigators to apply lessons learned. Those models, focusing on the important hashtags, geolocations, keywords, and URLs, can help law enforcement agencies keep watch over disinformation fires that may reignite. The post-event debrief, supported by AI and machine learning, can also help agencies shore up policies and processes.

The case for AI

As advanced technology can help prime agencies for future investigations, it can also help them collate data on threat actors that can be used in an eventual prosecution. AI plays a couple of roles in this process. First, social media network analysis can identify connections, and assess the strength of those linkages, among threat actors participating in a disinformation network. That kind of deep analysis can help investigators build a case. 

Second, AI can manage the information returned when a law enforcement agency subpoenas a social media platform. Data dumps numbering tens of thousands of pages aren't out of the question. AI offers agencies the ability to work with big data. The technology can ingest flat files for analysis and facilitate the batch processing of large datasets -- phone numbers obtained in a Title III investigation, for instance. The end goal is to make otherwise unwieldy data searchable.
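As a hypothetical illustration of making such a dump searchable, the sketch below builds a simple inverted index over lines of a flat file; the records are invented and the tokenization is deliberately naive.

```python
# Build an inverted index (token -> record IDs) over flat-file records, so an
# analyst can query a large dump instead of reading it page by page.
# Sample records are fabricated for illustration.
from collections import defaultdict

records = [
    "2021-03-01 +1-860-555-0101 message: meet at the bridge",
    "2021-03-02 +1-860-555-0102 message: nothing to report",
]

index = defaultdict(set)
for record_id, line in enumerate(records):
    for token in line.lower().split():
        index[token].add(record_id)

def search(term):
    """Return sorted IDs of records containing the term."""
    return sorted(index.get(term.lower(), set()))

hits = search("bridge")
```

Production systems layer entity extraction and fuzzy matching on top, but the core move is the same: index once, then query in constant time instead of rereading thousands of pages.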

From the initial investigative inquiry to case-building challenges, AI and machine learning deliver critical value to a law enforcement agency. When integrated with WEBINT, those technologies provide a proactive threat intelligence tool that can help analysts overcome the shadowy nature of disinformation. 

AI can comb through big data, find telling correlations among pieces of information and identify threat actors and broader disinformation networks. As digital insurgences become a daily occurrence, and one that shows no signs of abating, enlisting AI and machine learning can lend agencies a much-needed edge. 

Johnmichael O'Hare is the sales and business development director of Cobwebs Technologies (www.cobwebs.com). He is the former Commander of the Vice, Intelligence, and Narcotics Division for the Hartford (Connecticut) Police Department. Prior to that, he was the Project Developer for the City of Hartford's Capital City Command Center (C4), a Real-Time Crime Center (RTCC) that reaches throughout Hartford County and beyond. C4 provided real-time and investigative support for local, state, and federal law enforcement partners utilizing multiple layers of forensic tools, coupled with data resources, and real-time intelligence. Contact him at [email protected]

Cobwebs Technologies is a worldwide leader in web intelligence. Our innovative solutions are tailored to the operational needs of national security agencies and the private sector, identifying threats with just one click. Cobwebs solutions were designed by our intelligence and security experts as vital tools for the collection and analysis of data from all web layers: social media, open, deep, and dark web. Our web intelligence platform monitors these vast sources of data to reveal hidden leads and generate insights. Our exclusive technology extracts targeted intelligence from big data using the latest machine learning algorithms, automatically generating intelligent insights. For more information: https://www.cobwebs.com/
