Meta (Facebook’s parent company) this week published a report about its efforts to remove what it calls “coordinated inauthentic behavior” from its platforms.
One man’s “coordinated inauthentic behavior” is another’s “disinformation campaign,” and Meta removed Chinese and Russian campaigns that were ostensibly unrelated to each other (or maybe not).
The Chinese-origin influence operation ran across multiple social media platforms and was the first such network to target US domestic politics ahead of the 2022 midterms, as well as Czechia’s foreign policy toward China and Ukraine.
The Russian network — the largest of its kind we’ve disrupted since the war in Ukraine began — targeted primarily Germany, France, Italy, Ukraine and the UK with narratives focused on the war and its impact through a sprawling network of over 60 websites impersonating legitimate news organizations.
China’s influence operation was relatively small and targeted US voters of both major parties. In Czechia, the campaign focused on criticizing the government’s support of Ukraine after Russia’s invasion. In the United States, the fake accounts focused on divisive issues in swing states, such as abortion and Covid. According to Meta, the operation failed to gain traction or a real following.
It also lacked sophistication: some fake accounts paired English female names with male profile photos.
The Russian campaign was more comprehensive and targeted multiple countries. It was also more sophisticated, carefully recreating the websites of legitimate media organizations and using them to promote Russian falsehoods.
We took down a large network that originated in Russia and targeted primarily Germany, and also France, Italy, Ukraine and the United Kingdom. The operation began in May of this year and centered around a sprawling network of over 60 websites carefully impersonating legitimate news organizations in Europe, including Spiegel, The Guardian, Bild and ANSA. There, they would post original articles that criticized Ukraine and Ukrainian refugees, praised Russia and argued that Western sanctions on Russia would backfire. They would then promote these articles and also original memes and YouTube videos across many internet services, including Facebook, Instagram, Telegram, Twitter, petitions websites Change.org and Avaaz, and even LiveJournal. …They operated primarily in German, English, French, Italian, Spanish, Russian and Ukrainian. On a few occasions, the operation's content was amplified by Russian embassies in Europe and Asia.
Facebook says this was the largest and most sophisticated Russian-origin campaign it has disrupted since the war in Ukraine began. The fake sites operated in multiple languages, and examining them required linguistic expertise, most likely from human analysts, whose importance I have been highlighting for a long time now.
Early on, the Russians even created their own “reliable” news sources, which Russian embassies around the world then tried to amplify (certainly a red flag that the news and the accounts are fake).
The fake sites themselves appear to have been built with care, emulating real media websites and even linking to genuine news stories published by the outlets, interspersed with lies, with only minor mistakes that most readers would not have noticed. For example, the spoofed Guardian website linked to the paper’s genuine news ticker and front page, but it omitted the usual appeal for reader support, and some of its links did not work.
I doubt the Chinese and Russian campaigns are unrelated; there are too many similarities. Russia has been engaging in this behavior for decades, targeting all political parties, sowing discord within US society, and weakening the public’s confidence in democratic processes, US infrastructure, and national leadership.
Chinese disinformation efforts look much the same. In fact, the State Department earlier this year flagged similarities, and a growing alignment, between Chinese and Russian disinformation.
The narratives overlap: both push the discredited theory of US bioweapons in Ukraine, which the Russians continue to promote to justify their messaging that the US and Ukraine are engaged in provocations and threaten the Russian Federation, and both allege US “global interference.”
Although these tactics are not new, I have a sneaking suspicion—based on the limited information available and my gut feeling—that there was at least some coordination going on between the Chinese and Russian actors. China cannot overtly support the mass slaughter, rape, and other war crimes committed by the Russians in Ukraine, but it can certainly provide tacit backing via disinformation efforts.
Facebook’s efforts to remove these accounts are useful, but don’t think for a moment that the malign actors will stop making accounts and spreading disinformation. Stop and check your source before accepting the information as truth.
Is the website authentic? Is it using homoglyph domain spoofing to imitate the real media source? Is the web address www.theguardian.com or www.theguardіan.com (written with a Cyrillic “і”), and can you tell the difference? Will you immediately catch a different top-level domain: .cam, .ltd, or even .ru? Will you catch that “.co” before the domain name, where it normally doesn’t appear?
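If you want to automate that check, here is a minimal Python sketch using only the standard library; the domains below are just illustrations. It flags any character outside plain ASCII and names it, which exposes a Cyrillic “і” instantly:

```python
# Minimal sketch: flag any character in a domain that falls outside plain ASCII
# and print its Unicode name. The domains below are illustrative examples.
import unicodedata

def suspicious_chars(domain: str):
    """Return (character, Unicode name) pairs for every non-ASCII character."""
    return [(ch, unicodedata.name(ch, "UNKNOWN")) for ch in domain if ord(ch) > 127]

for domain in ["www.theguardian.com", "www.theguardіan.com"]:
    hits = suspicious_chars(domain)
    if hits:
        print(f"{domain} -> suspicious characters: {hits}")
    else:
        print(f"{domain} -> plain ASCII, nothing flagged")
```

The second domain triggers a hit on CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I, even though the two addresses look identical to the eye.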
Does the content lean too much toward a particular point of view? Of course, most media reports will lean one way or another, but dropping all pretense of objectivity in regular news reporting—especially if the media outlet has not done so previously—should definitely raise a red flag.
Does the report use words meant to evoke a specific emotion? “Horrifying,” “bloody,” “agonizing,” and “deadly” are trigger words meant to provoke a negative response. Descriptors such as “outstanding,” “spellbinding,” or “euphoric” tend to evoke the opposite. This red flag, coupled with a banner that is perhaps only slightly different from what you’re accustomed to—maybe a different font or a missing appeal for support that usually graces the top of The Guardian’s website—should set off your Spidey senses.
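If you want a rough, automated feel for how emotionally loaded a piece of text is, a tiny word-counting sketch will do. The word lists below come straight from the examples above; a real lexicon would be far larger, so treat this only as an illustration of the idea:

```python
# Minimal sketch: count emotionally loaded "trigger words" in a piece of text.
# The NEGATIVE and POSITIVE sets are just the examples from this post.
import re
from collections import Counter

NEGATIVE = {"horrifying", "bloody", "agonizing", "deadly"}
POSITIVE = {"outstanding", "spellbinding", "euphoric"}

def emotional_load(text: str) -> Counter:
    counts = Counter()
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in NEGATIVE:
            counts["negative"] += 1
        elif word in POSITIVE:
            counts["positive"] += 1
    return counts

print(emotional_load("A horrifying, bloody assault pushed the deadly toll higher."))
# Counter({'negative': 3})
```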
Are the accounts sharing a particular story or headline doing so in a synchronized manner and using identical, word-for-word language? Red flag.
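That copy-paste pattern is easy to surface once you have the posts in hand. Here is a minimal sketch that groups posts whose text is identical after light normalization; the `posts` list is hypothetical sample data, and in practice you would pull it from an export or an API you have access to:

```python
# Minimal sketch: group posts that share the same text after normalization,
# so trivial edits (case, punctuation) don't hide verbatim copies.
# The `posts` list below is made-up sample data.
import re
from collections import defaultdict

posts = [
    {"account": "patriot8841", "text": "BREAKING: Sanctions will destroy Europe, not Russia!"},
    {"account": "maria_news_77", "text": "breaking: sanctions will destroy europe, not russia"},
    {"account": "localreader", "text": "Interesting report on sanctions policy today."},
]

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^a-z0-9 ]+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

groups = defaultdict(list)
for post in posts:
    groups[normalize(post["text"])].append(post["account"])

for text, accounts in groups.items():
    if len(accounts) > 1:
        print(f"{len(accounts)} accounts posted the same text: {accounts}")
```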
Keep an eye out for fake personas. On Twitter, I’m always leery of recently created accounts that have no personal information, no avatar or cover photo, and sport common names with a string of random numbers attached. I will run such accounts through Bot Sentinel, which gives me a rating of how disruptive the account is, its activity, frequently used phrases in its tweets, and other indicators that may point to an inauthentic account.
ADDED: There are apparently some issues with Bot Sentinel. Luckily, it’s not my only go-to when I look at the possibility of bot account engagement. Thanks to a friend for pointing me to this video.
I’ve been running some tests on another site, Botometer, and it seems to be pretty decent.
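Botometer can also be queried programmatically via the botometer-python package, which may be handy for checking accounts in bulk. A minimal sketch, assuming you have installed the package and obtained your own RapidAPI and Twitter credentials (everything below is a placeholder, including the account handle):

```python
# Minimal sketch with the botometer-python package (pip install botometer).
# All keys and the account name are placeholders; you need your own
# RapidAPI key and Twitter API credentials for this to run.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account by handle; the result includes bot-likelihood scores.
result = bom.check_account("@suspicious_account_8841")
print(result)
```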
If a Russian or Chinese embassy on Twitter amplifies one of these messages, spread by obvious bots, you can be sure the message is fake and meant to spread disinformation.
And I will run every image through a reverse image search, such as tineye.com. Knowing where your photos come from is as critical as understanding who shares them and why.
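TinEye is a manual lookup, but if you already have a candidate original on disk, a perceptual-hash comparison can tell you whether a “new” photo is just a lightly edited copy. This is a purely local technique, not the TinEye API, and the file names below are hypothetical; a minimal sketch with the Pillow and ImageHash packages:

```python
# Minimal sketch: compare a suspect image against a known original using a
# perceptual hash (pip install Pillow imagehash). File names are hypothetical.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("known_original.jpg"))
suspect = imagehash.phash(Image.open("suspect_repost.jpg"))

# A small Hamming distance means the images almost certainly share the same
# underlying photo, even after re-compression, resizing, or minor edits.
distance = original - suspect
print(f"Hamming distance: {distance}")
if distance <= 8:
    print("Likely the same image, possibly recycled from an older event.")
```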
Point is, taking down these actors is often like playing whack-a-mole. They continue to create new accounts and share the same lies and misinformation.
Read the report and watch your six, because foreign malign actors are out there, and they are targeting US society: you.