Microsoft Used AI to Make a Bot That Comments on News

Researchers at Beihang University and Microsoft have developed a bot that can read news articles published online and write in their comment sections. The bot is called "DeepCom", short for "Deep Commenter". If you are a developer curious about how the program works, the code is available on GitHub.

The developers explain that DeepCom is built from two neural networks: a "reader" that comprehends the article and identifies its significant points, and a "writer" that composes a comment based on the title and that content. The system is designed to mirror how people engage with news online: a reader skims the title, notes the key points, and writes a comment on that basis. Such comments usually express a personal opinion, endorse the piece, or point out a flaw in it.
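To make that two-network split concrete, here is a minimal, illustrative sketch of the reader/writer pattern. It is not the authors' actual implementation (their real code is the GitHub release mentioned above); the class names, vocabulary size, and layer dimensions below are all hypothetical.

```python
# Illustrative sketch only -- not the DeepCom authors' code.
# A "reading" network digests the article and scores what matters;
# a "writing" network generates a comment conditioned on that summary.
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, HID_DIM = 10_000, 128, 256  # hypothetical sizes


class ReadingNetwork(nn.Module):
    """Encodes the article and returns a weighted summary of salient tokens."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.encoder = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.salience = nn.Linear(HID_DIM, 1)  # scores each token's importance

    def forward(self, article_ids):
        states, _ = self.encoder(self.embed(article_ids))      # (B, T, H)
        weights = torch.softmax(self.salience(states), dim=1)  # (B, T, 1)
        summary = (weights * states).sum(dim=1)                # (B, H)
        return summary


class WritingNetwork(nn.Module):
    """Generates a comment token by token, conditioned on the reader's summary."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.decoder = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def forward(self, comment_ids, summary):
        h0 = summary.unsqueeze(0)                               # article summary as initial state
        states, _ = self.decoder(self.embed(comment_ids), h0)   # (B, T, H)
        return self.out(states)                                 # per-token vocabulary logits


# Toy forward pass with random token ids standing in for a real article and comment.
reader, writer = ReadingNetwork(), WritingNetwork()
article = torch.randint(0, VOCAB_SIZE, (1, 50))   # fake 50-token article
comment = torch.randint(0, VOCAB_SIZE, (1, 12))   # fake 12-token comment
logits = writer(comment, reader(article))
print(logits.shape)  # torch.Size([1, 12, 10000])
```

In a real system the writer would be trained to maximize the likelihood of human-written comments given the article, then decoded token by token at inference time; this sketch only shows the shape of the pipeline.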

DeepCom works in exactly the same way, without any manual intervention; everything is automatic. So what is the purpose of building DeepCom? The answer is simple: it is designed to attract readers and thereby increase traffic to news websites, since articles that already carry comments are more likely to catch a reader's attention than those that don't.

So far, two versions of the DeepCom paper have been released: the first on September 26, 2019, and the second on October 1, 2019. In the first version, the developers made their aim clear: "Encouraging users to browse the comment stream, to share new information, and to debate with one another. With the prevalence of online news articles with comments, it is of great interest to build an automatic news commenting system with data-driven approaches."

Read in that light, the constructive view of DeepCom is that, even though it amounts to generating fake engagement, it boosts the readership of news content and thereby benefits humans.

In the second version, however, the researchers acknowledge that using DeepCom could be risky. The reason they give is that DeepCom runs on AI, and AI cannot be equated with human intelligence.

Consider an example the researchers give in the paper. After reading a news article about the FIFA rankings, DeepCom posts two comments: "If it's heavily based on the 2018 WC, hence England leaping up the rankings, how is Brazil at 3?" and "England above Spain, Portugal, and Germany. Interesting."

Popular social media platforms such as Facebook and Twitter have already faced serious trouble with fake accounts and botnets (networks of interconnected bots working together). These have proved dangerous as vehicles for political propaganda and the rampant spread of destructive political views.

"While there are risks with this kind of AI research, we believe that developing and demonstrating such techniques is important for understanding valuable and potentially troubling applications of the technology," the researchers wrote in the version of the paper released on October 1, 2019.

Bots like DeepCom can also introduce bias that affects other AI systems. So far, DeepCom has been trained on two datasets: one built by crawling Tencent News, a Chinese website that publishes news and review articles, and another built from Yahoo News. All of the articles in these datasets were written by humans and are biased in one way or another, and DeepCom does nothing to address the potential bias in that data.

Whitney Phillips, an assistant professor of Communications, Culture & Digital Technologies at Syracuse University, puts it this way: "Some risks just don't occur to some people, because they've never had to think about that or worry about those things impacting them."
