A new AI-powered Chrome extension judges content on its ‘trustworthiness’
Called Unpartial, it’s an offshoot of natural language processing previously developed by Recognant for cancer researchers and market testers.
An artificial intelligence company this week launched what it says is the first autonomous AI tool for determining the “trustworthiness” of a story.
Recognant has described the tool as a “fake news” detector, but it is more precisely a “trustworthiness” detector. It doesn’t check facts or validate the source; instead, it uses generated rules to evaluate the internal validity of a story. It is appropriate only for content of about 300 words or more, which excludes most tweets and Facebook posts.
The tool looks at such factors as whether the article’s conclusions are too biased based on the information presented, if the claims are backed up and if the math makes sense. It also considers the correctness of the grammar, the density of facts and the presence of subjective statements. To employ the plugin, a user clicks the “UN” button on the browser and sees these kinds of overlays:
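Recognant has not published how it scores these factors, but two of them, the density of facts and the presence of subjective statements, can be illustrated with toy heuristics. The word list and regular expressions below are hypothetical stand-ins, not Unpartial’s actual rules:

```python
import re

# Hypothetical opinion vocabulary; Unpartial's real rule set is proprietary.
SUBJECTIVE_WORDS = {"amazing", "terrible", "obviously", "clearly", "shocking"}

def subjectivity_ratio(text):
    """Fraction of words drawn from a subjective/opinion vocabulary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in SUBJECTIVE_WORDS for w in words) / len(words)

def fact_density(text):
    """Crude proxy for fact density: the share of sentences that
    contain a number or a direct quotation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    factual = sum(bool(re.search(r'\d|"', s)) for s in sentences)
    return factual / len(sentences)
```

A real system would combine many such signals, weighted by learned or generated rules, rather than any single ratio.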
On the left is a story that Unpartial has rated as “Seems Legit,” and on the right is one rated not credible (“Fake News”). The summary description consists of sentences from the story, rendered by another Recognant tool called TLDR: Summarize Anything.
Ratings categories are: “Seems Legit,” “Consider a More Reputable Source,” “Seems Sketchy,” “Super Shady” and “Fake News.”
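Recognant has not disclosed how an internal score maps to these five labels, but the mechanism is presumably a simple threshold lookup. The cutoff values below are invented for illustration:

```python
# Hypothetical cutoffs on a 0-1 trustworthiness score; Recognant has
# not published its scoring scale.
LABELS = [
    (0.8, "Seems Legit"),
    (0.6, "Consider a More Reputable Source"),
    (0.4, "Seems Sketchy"),
    (0.2, "Super Shady"),
    (0.0, "Fake News"),
]

def label_for(score):
    """Map a trustworthiness score in [0, 1] to one of the five ratings."""
    for threshold, label in LABELS:
        if score >= threshold:
            return label
    return LABELS[-1][1]
```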
Founded in 2016, Recognant develops AI that is primarily used for natural language processing (NLP). Its software has been employed by market researchers to see in real time how a TV series is doing on social media, and by cancer research organizations to peruse medical journals and find alternative treatments based on responses to protein signals. It has also been used to determine which online ads were promoting human trafficking.
Wirtz said that untrustworthy content can have serious impacts on his company’s clients, such as spreading misleading info about how to spot victims of human trafficking or presenting cancer remedies that are not based on valid science. Brands, of course, are increasingly keen on keeping their ads away from questionable content, lest the association damage their hard-earned reputations.
Wirtz said his company decided to create this extension because “we had all the pieces laying around.”
Instead of the neural net computing that IBM’s Watson largely employs, he said, Unpartial uses a form of AI called mind simulation, where a seeded, initial set of rules is then supplemented by the software’s generation of additional rules.
One rule, he said, might specify that, if a negative word modifies an object, the sentiment expressed is negative. “A bad movie,” for instance, is negative, while “the movie’s characters were full of bad thoughts” is not necessarily so.
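A toy version of that rule might check whether a negative adjective directly modifies the object in question, rather than merely appearing somewhere in the sentence. This sketch uses naive adjacency in place of a real dependency parse, and the adjective list is an assumption:

```python
# Hypothetical negative-adjective list for illustration only.
NEGATIVE_ADJECTIVES = {"bad", "awful", "terrible"}

def negatively_modified(text, target):
    """Return True if a negative adjective immediately precedes `target`
    (e.g. 'a bad movie'), approximating 'negative word modifies object'.
    A production system would use a dependency parse instead of adjacency."""
    words = text.lower().split()
    for prev, word in zip(words, words[1:]):
        if word.strip(".,") == target and prev in NEGATIVE_ADJECTIVES:
            return True
    return False
```

Under this rule, `negatively_modified("a bad movie", "movie")` is true, while in “the movie’s characters were full of bad thoughts” the word “bad” modifies “thoughts,” so the sentence is not judged negative toward the movie itself.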
This approach to NLP, Wirtz said, is more transparent than neural nets, because someone can go in and see why a particular judgment was rendered. It’s slower to learn the rules initially, he added, but, eventually, the processing becomes faster than neural nets.
To test its accuracy, Recognant fed the system 10,000 articles whose trustworthiness had previously been rated by humans. Wirtz said the system matched the human ratings 99 percent of the time.