AI not the answer to fighting fake news just yet

What: Today’s AI algorithms aren’t yet effective at identifying fake news, a new MIT study shows.

The authors of the paper, titled “Are We Safe Yet? The Limitations of Distributional Features for Fake News Detection”, describe automatic fake news detection as “a long outstanding and largely unsolved problem”.

Why: The advent of neural networks like GPT-2, dubbed “too dangerous to release”, means that AI can now churn out disturbingly realistic news that is totally made up. And it only takes a click of a button. Kind of. Plus extensive training on a large body of text before that click.

Given false information’s potentially harmful impact on politics and society, and the difficulty of flagging it as such, the question of how to build a reliable system to recognize fake news is ever more pressing.

More precisely: The researchers showed that AI can effectively detect fake text provided it is auto-generated (“produced by a language model”), using what they call a “stylometry-based provenance” model: an algorithm that traces a text’s writing style back to its source.
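To give a rough flavor of the idea (this is a minimal sketch, not the paper’s actual model), a stylometry-based detector learns surface-level writing-style features and classifies a text’s provenance as human or machine. The toy data, labels, and feature choices below are all illustrative assumptions:

```python
# Sketch of a stylometry-based provenance classifier: tell
# machine-generated text apart from human-written text by writing
# style alone. Illustrative only; not the study's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data, labeled by provenance:
# 0 = written by a human, 1 = produced by a language model.
texts = [
    "The senator announced a sweeping new infrastructure bill on Tuesday.",
    "In a stunning turn of events, scientists have discovered a new planet.",
]
labels = [0, 1]

# Character n-grams capture stylistic fingerprints (punctuation habits,
# function-word patterns) rather than the topic of the text.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Classify the provenance of an unseen article.
print(detector.predict(["Officials confirmed the report late on Tuesday."]))
```

Note that such a classifier judges only who (or what) likely wrote the text, not whether its claims are true, which is exactly the gap the study points to.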

Since malicious texts can be written by humans, however, just as legitimate ones can be auto-generated, the above approach is not a reliable defense against malicious text attacks.

“Our findings highlight the importance of assessing the veracity of the text rather than solely relying on its style or source,” the researchers said.