This past week, I got a tremendous opportunity to participate in a Thomson Reuters Leadership Institute conference titled “Those Darkest Hours,” to not only commemorate the 20th anniversary of the September 11 attacks on our nation, but also examine where we are in our efforts to protect the United States from terrorist attacks.
Yes, we have definitely made progress in the effort to mitigate the threat of terrorism, but the landscape has changed, and 20 years after some of the most sensational and horrifying attacks on our country and people, the United States needs to adjust to new, evolving threats.
That was the discussion on the panel in which I participated, along with two brilliant former Homeland Security Department officials and the Thomson Reuters manager of enterprise content as moderator.
So where are we? Has the definition of the term “terrorist” evolved? Has domestic terrorism surpassed international terrorism as a threat against our nation? Does it really matter when working to mitigate the threat? And is there a link between the international and domestic terrorism landscape?
Terrorism, as a general rule, is terrorism. A terrorist is defined broadly as “a person who uses unlawful violence and intimidation, especially against civilians, in the pursuit of political aims.” An international terrorist pledges allegiance to a foreign terrorist organization (FTO) or a state sponsor (we often describe Iran as a state sponsor of terrorism because it continues to fund, train, and direct terrorist proxy groups in the region to conduct attacks against Western interests).
Domestic terrorism is ideologically motivated – whether political, religious, social, racial, or environmental – and the domestic terrorist neither has ties to nor pledges allegiance to an FTO.
The distinctions are meaningful for government actors working to stop these groups, because the legal authorities used to investigate and stop them are different. But to a victim… why should it matter?
When we discuss these differences, however, it is critical not only for government officials to understand the distinctions, but also for private firms, companies, internet platforms, and financial institutions to understand them and to share information about them.
If we look at homegrown jihadists in the United States, we see that converts are more likely to commit terrorist acts inspired by jihadism than those born into the Muslim faith. They are not uneducated: 35 percent have at least attended college, if not graduated, and only 16.5 percent dropped out of high school. The median radicalization age for these terrorists is 21-22, and although we treat social media as a major culprit, online activity only sometimes prompted the radicalization process.
That age has dropped in recent years. The shift to social media recruitment tactics coincided with a noticeable uptick in the number of teenagers involved in terrorism-related crimes. Social media sometimes fueled the codependent radicalization of a peer group or of close relatives; in one case, three siblings from Chicago radicalized together, and only one of them was tried as an adult.
The domestic terrorists who are emerging as a major threat, according to the federal government, are a bit older. The average age of racially and ethnically motivated violent extremists (RMVEs) and other domestic violent extremists (DVEs) is around 37, and the January 6 riot participants at the US Capitol this year tended to be older still, aged 40-42. Although some groups with an organizational structure, such as the Proud Boys or the Oath Keepers, participated in the riot in DC, most rioters were either part of small, tight-knit groups – friends, acquaintances, and family members who jointly planned their trip and activities – or inspired believers who were not connected to any group and planned their participation independently.
And guess what! It’s those latter groups that are much harder to track and trace. A bank, travel agency, or airline is not going to flag a family or an individual flying to our nation’s capital as suspicious. A credit card company will not block a transaction when a small group or an individual buys gas or pays for a hotel room on the way to DC. And asking them to justify their travel would be intrusive and viewed as meddling.
Although there are specific red flags that come into play when financial institutions and other companies monitor transactions to prevent money-laundering and the financing of terrorism, almost none of them apply to domestic extremists. Transactions in jurisdictions known for terrorist activity generally warrant closer scrutiny. Complicated funds transfers to hide the source and intended use of the funds will generally draw the attention of compliance professionals, as will unusual cash activity in foreign bank accounts.
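To make the contrast concrete, here is a minimal sketch of how a compliance team might encode those classic red flags as rules. The transaction fields, country codes, and thresholds are hypothetical, not any institution’s actual system.

```python
# Minimal, hypothetical sketch of rule-based AML red flags.
# Field names and thresholds are illustrative only.
HIGH_RISK_JURISDICTIONS = {"IR", "KP", "SY"}  # example ISO country codes

def aml_red_flags(txn: dict) -> list[str]:
    """Return the classic AML/CFT red flags a transaction trips, if any."""
    flags = []
    # Transactions touching jurisdictions known for terrorist activity
    if txn["counterparty_country"] in HIGH_RISK_JURISDICTIONS:
        flags.append("high-risk jurisdiction")
    # Layered transfers that obscure the source and intended use of funds
    if txn["transfer_hops"] > 3:
        flags.append("complicated funds transfer")
    # Unusual cash activity in a foreign bank account
    if txn["is_cash"] and txn["account_country"] != "US" and txn["amount"] > 10_000:
        flags.append("unusual foreign cash activity")
    return flags
```

A domestic extremist paying for gas, hotel rooms, or merchandise with his own money trips none of these rules.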
These are not red flags that firms or financial institutions can use to track domestic extremists. ADL’s examination of domestic terrorism and extremism funding found that criminal activity is not a major source of funding for RMVEs and DVEs, so money-laundering flags would not be triggered; there is no predicate offense that necessitates obscuring the origins of the cash. Domestic terrorists are mostly self-funding. They use crowdfunding platforms to raise money (although these companies appear to be catching on and are closely scanning who is using their platforms to raise funds), but mostly they still default to traditional methods – selling wares through online platforms and stores, concerts, and in-person fundraisers – obtaining resources from supporters via money orders and checks.
Because white supremacists often face “de-platforming” (banning users who violate terms of service) and exclusion from mainstream online methods of raising or transferring money, they have become particularly assiduous at exploiting new methods of fundraising, often seeking out platforms that have not yet realized how extremists can exploit them and have not developed policies or measures to counter such exploitation. When a new fundraising method or platform emerges, white supremacists can find a window of opportunity.
Financial institutions and fintech platforms need to understand the financial ecosystem in which these individuals and groups operate. How do they move money? What methodologies do they use? How will new payment methods, such as cryptocurrencies, impact their ability to conduct financial transactions?
I would say that transaction monitoring and knowing your customers, along with information-sharing with other companies and banks, will be the most helpful tools until more effective methodologies are developed. Is a longtime customer suddenly making regular deposits inconsistent with their normal financial activity? Is a client suddenly receiving funds from groups abroad? Are those foreign groups known to be extremist? What kinds of organizations or individuals are sending the cash?
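Those questions boil down to comparing a customer’s recent activity against their own historical baseline. A minimal sketch, assuming a hypothetical list of past deposit amounts per customer; real transaction-monitoring systems use far richer features and would queue hits for human review rather than act automatically.

```python
from statistics import mean, stdev

def inconsistent_with_baseline(history: list[float], recent: list[float],
                               z_threshold: float = 3.0) -> bool:
    """Flag recent deposits that deviate sharply from a customer's own history."""
    if len(history) < 12:   # too little data to establish a baseline
        return False
    mu, sigma = mean(history), stdev(history)
    sigma = sigma or 1.0    # avoid division by zero on perfectly flat histories
    # A deposit more than z_threshold standard deviations above the
    # customer's norm is "inconsistent with their regular activity."
    return any((amount - mu) / sigma > z_threshold for amount in recent)

# A longtime customer's ~$500 deposits suddenly jump to $9,000:
print(inconsistent_with_baseline([500, 520, 480] * 4, [9_000]))  # True
```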
My firm, FiveBy Solutions, helps clients mitigate these risks – not just to ensure compliance with US sanctions and other laws, but also to manage reputational risk. We provide background and history, including any known links to extremist organizations and possible connections to questionable groups; research on leadership and on corporate ownership and control structures; and strategic glances around the corner at upcoming designations, regulations, and restrictions, so clients can be forward-leaning in their efforts to prevent malign actors from using their products and technologies.
At the same time, information-sharing among tech platforms about objectionable content—especially with smaller companies that may not have the resources to perform content monitoring—becomes vital.
The Global Internet Forum to Counter Terrorism (GIFCT) was created in 2017 to prevent terrorists and violent extremists from exploiting digital platforms. The NGO was founded by Facebook, Microsoft, Twitter, and YouTube “to foster technical collaboration among member companies, advance relevant research, and share knowledge with smaller platforms.”
Member companies share hashes – digital signatures of an image or video – that allow them to identify visually similar content. They share URLs, helping GIFCT partners remove terrorist-connected links. And no, for those worried about privacy, they do not share user data. They use the technical tools at their disposal and share relevant information with smaller entities.
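The member platforms’ actual hashing algorithms are proprietary, but the matching concept is straightforward: perceptual hashes of visually similar images differ in only a few bits, so a small Hamming distance signals a likely match. A sketch, assuming 64-bit perceptual hashes of the kind produced by common image-hashing libraries:

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Count the differing bits between two 64-bit perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def matches_shared_hash(candidate: int, shared_hashes: set[int],
                        max_distance: int = 5) -> bool:
    # Only hashes are compared -- no user data or original media changes
    # hands -- yet near-duplicate images and videos can still be found.
    return any(hamming_distance(candidate, h) <= max_distance
               for h in shared_hashes)
```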
As GIFCT explains: “When our members review the content they have identified by hashes, they have the option to feed back to the system and tell us whether they agree or disagree that any one hash relates to terrorist activity, and to rate its severity. At GIFCT, we respect that each member might operate a little differently. We don’t tell our members how to use the hashes or how to apply their own policies. Rather, we are here to help our members collaborate, and together we can make terrorists ineffective online.”
This last bit is important.
RMVEs and DVEs tend to meet in internet chat forums and on social media platforms. They often feel disenchanted with the current national climate, disenfranchised, isolated, and targeted. In real life, face-to-face encounters may show them that their ideas and conspiracy theories are outside the norm, but when they meet like-minded individuals online, their biases are confirmed and they no longer feel alone. They are embraced as part of an internet community – the family dynamic they lack in real life.
Many of them will never embark on the path to violence, but censoring their ideas—regardless of how extreme they may sound—only reinforces their perception that they are targeted, and that their freedom of expression is being violated.
The National Strategy for Countering Domestic Terrorism rightfully says that people should not be targeted based on their political views, but the challenge lies in how to determine whether unsavory talk is on its way to becoming a violent act.
It is critical that we condemn and confront domestic terrorism regardless of the particular ideology that motivates individuals to violence. The definition of “domestic terrorism” in our law makes no distinction based on political views – left, right, or center – and neither should we.
Our panel held a robust discussion about strategies. My view is that these people already feel isolated, marginalized, and discriminated against, and government messaging has to be careful not to push them further away from their communities, lest their isolation become a self-fulfilling prophecy. The more their views are marginalized, the more they will perceive such messaging as personal attack and censorship.
So instead of punishing BadThink, counter it.
Tailor messaging to provide alternate viewpoints, rather than condemning opinions. Censorship will merely confirm a victim mentality, but offering alternatives tells individuals that they are intelligent and open-minded enough to examine other views and come to conclusions on their own.
In the Danish city of Aarhus, a deradicalization program in existence since 2013 seems to have had some success in helping reintegrate individuals with extreme views back into society, and it has been adopted nationally.
Instead of threatening to arrest and imprison young people who want to join extremist groups — and those returning from war zones — Danish authorities provide the would-be fighters with housing, healthcare, help finishing school and finding work.
[…]
The Danish program’s organizers say that while it is difficult to judge exactly how successful they've been, the number of people traveling from Aarhus to fight dropped from 31 in 2013 to just one last year. An estimated 115 Danes have gone to Syria and Iraq, making the country second only to Belgium in Europe in terms of the per capita number of foreign fighters it has sent to the Middle East.
Would a similar program work in the United States? Maybe not. We are a large and diverse nation, and the strategy seems to work best in homogeneous societies such as Denmark.
But I think the idea is sound.
Instead of isolating these people, bring them back into their communities with open minds and hearts.
Expose them to disparate views without condemning them as Nazis, fascists, etc.
Provide social support to draw them into society.
As I said on my panel, people who care about their communities are less likely to try to kill those who live in them.
The strategy requires cooperation among local law enforcement, schools, guidance counselors, teachers, mentors, and employers willing to take a chance on someone whose views may not align with their own. It’s a whole-of-community approach that may mitigate some of the distrust RMVEs and DVEs have of state and local authorities.
And finally, a word about content monitoring.
Artificial intelligence and machine learning are incredible tools. They save time and resources, which is particularly important when gargantuan tech platforms have to deal with reams and reams of data. Unfortunately, these are still machines, and they have no real understanding of nuance, satire, and other mitigating factors.
I remember getting banned from Facebook because I made an “Achmed the Dead Terrorist” joke, quoting a routine by comedian Jeff Dunham.
Why? Because the joke used the word “kill,” and the AI automatically flagged it as problematic, issuing a ban for several days.
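For illustration, here is a crude sketch of the kind of context-blind keyword matching that produces bans like this. Production moderation models are far more sophisticated, but the failure mode, no grasp of quotation or satire, is the same.

```python
# Illustrative only: a naive keyword filter with no notion of context.
BANNED_KEYWORDS = {"kill", "bomb", "attack"}

def naive_flag(post: str) -> bool:
    words = {word.strip(".,!?\"'").lower() for word in post.split()}
    return not BANNED_KEYWORDS.isdisjoint(words)

print(naive_flag("Silence! I kill you!"))    # True: the Dunham joke is flagged
print(naive_flag("That comedy set killed"))  # False: the same idea slips through
```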
Worse yet, when I appealed the decision, I got an automated message that said Facebook just did not have the resources to examine every appeal, so the decision stood.
Now, I’m not a person who will automatically think I am being censored for my views just because stupid AI somehow glommed on to the word “kill,” and since it does not understand humor, banned me from a tech platform. However, someone who is already feeling persecuted and marginalized will think exactly that—Big Tech is censoring them because of their politics.
So spend the resources. For a company like Facebook and the other tech giants, the human analysts needed to review this content cost what amounts to couch change, but those extra pennies could direct someone away from the path to radicalization by eliminating their perception that they are being targeted for their views. AI does not understand nuance. It has no comprehension of humor or satire. It does not grok regional and cultural references and colloquialisms. Human expert analysts do, and they can assess the content to ensure that legitimate views are not censored because an algorithm matched on a few words.
Yes, it will likely cost a bit more, but in the end, if a company can prevent a violent attack or even turn someone away from embarking on that road by eliminating their confirmation bias, I think it would be worth it.
In the end, it’s about balance. We need to balance the efforts to prevent violent attacks with the efforts to protect diverse views—even views we might find abhorrent. There is a difference between holding extreme views and acting on them.
It is the latter that we need to avert.