I wanted to flag this for anyone who hasn’t seen it: the piece on domestic terrorism I wrote last month after participating in the “Those Darkest Hours” conference.
I have always believed that tech tools are not a substitute for human expertise, and that human-driven analysis is absolutely necessary to interpret data and recognize the linguistic, cultural, and other nuances in online content.
It’s not enough to have AI flag content simply because someone mentioned the word “kill” or other objectionable language. A word that is an extremely offensive slur for a gay man in the United States means something completely different in the UK. A joke that lands in Russia may not translate well in the United States. Linguistic and cultural experts can recognize these subtleties, pick up on tone and timbre, and understand how language is actually used in specific regions.
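To make this concrete, here is a minimal, purely illustrative sketch in Python of why keyword-only flagging over-flags. The keyword list and messages are hypothetical examples of mine, not drawn from any real moderation system: a naive word match treats a harmless idiom exactly the same as a genuine threat.

# Purely illustrative sketch: keyword-only flagging ignores context.
# FLAGGED_KEYWORDS and the sample messages are hypothetical.
FLAGGED_KEYWORDS = {"kill", "attack"}

def naive_flag(message: str) -> bool:
    """Flag a message if it contains any keyword, with no regard for context."""
    words = message.lower().split()
    return any(keyword in words for keyword in FLAGGED_KEYWORDS)

messages = [
    "That comedian will absolutely kill at the show tonight",  # harmless idiom
    "The goalkeeper stopped every attack in the second half",  # sports commentary
]

for msg in messages:
    print(naive_flag(msg), "->", msg)

# Both messages print True: the matcher cannot distinguish idiom or
# sports talk from a real threat. That judgment is exactly the regional,
# linguistic, and cultural nuance a human analyst supplies.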
That’s why I am so grateful to the Thomson Reuters Institute for running this piece, which first appeared as an advisory on FiveBy Solutions’ website.
Individuals who hold extremist views may never embark on the path to violence, but censoring their ideas, regardless of how extreme they may sound, only reinforces their perception that they are being targeted and oppressed. Challenging those notions requires a human touch.
AI tools excel at gathering data and surfacing red flags. Human experts supplement those tools with regional, linguistic, and cultural knowledge, helping ensure that ideas and views are not censored and that the individuals who hold them are not marginalized and vilified, which would only push them further down the road to extremism.
Thank you, Thomson Reuters, for being such a terrific partner!