The NY Times Turns to Artificial Intelligence

How often do you comment on an article? If you're like most people, you've probably been deterred from commenting thanks to internet trolls (those people who comment with the sole aim of provoking others). If you've been subjected to these types of comments before, you might be reluctant to post a comment at all. You may be thinking, 'what's the point?'

That's what The New York Times is thinking too. Kind of. Today, the Times told the press that it would be using artificial intelligence to cut back on useless comments. A special algorithm developed for the paper will weed out comments that aren't constructive. Here's how it will be done.

The Beauty of AI

An algorithm of this kind learns. Or, rather, it can be taught. Programmers will teach the algorithm to look for certain types of comments or phrases. In the case of the NY Times, that means anything containing hate speech, anything off-topic, or any comment that aims only to provoke.

To teach an algorithm what to look for, though, you first have to spell it out yourself. The Times has a specific list of words and phrases that should trigger the algorithm to trash comments that aren't constructive or don't add to a conversation.
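To make that concrete, here is a minimal sketch of the kind of keyword-triggered filter the article describes. The trigger phrases, function name, and matching logic are all illustrative assumptions, not the Times' actual list or algorithm.

```python
# Minimal sketch of a keyword-triggered comment filter.
# The trigger list below is hypothetical, invented for illustration.

TRIGGER_PHRASES = {
    "you people",       # hypothetical provocation marker
    "wake up sheeple",  # hypothetical trolling phrase
    "idiot",            # hypothetical insult
}

def is_constructive(comment: str) -> bool:
    """Return False if the comment contains any trigger phrase."""
    text = comment.lower()
    return not any(phrase in text for phrase in TRIGGER_PHRASES)

comments = [
    "Great reporting, though I'd like a source for the third claim.",
    "Wake up sheeple, this is all fake!",
]
for c in comments:
    status = "kept" if is_constructive(c) else "trashed"
    print(f"{status}: {c}")
```

A real system would almost certainly use something more sophisticated than substring matching, but the basic idea is the same: a human-curated list of signals decides which comments get discarded.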

The main point is twofold: to encourage people who have something to say to leave a comment, and to let the Times (like other publications) develop further content based on the comments people leave. It will also cut back on the work that moderators have to do.

Robots Aren’t Perfect

Algorithms like this one won't replace human moderators completely. What they will do is make a moderator's job easier. With the algorithm weeding out comments that don't belong on a page, moderators can focus on the comments that should be approved. So while moderators won't be replaced (and the Times notes that this is the case), the workload will be cut back.
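One plausible way to split the work, assuming the algorithm produces some kind of confidence score, is to auto-handle the clear-cut cases and queue everything in between for a human. The score range and thresholds below are invented for illustration.

```python
# Sketch of human-in-the-loop triage, assuming the algorithm assigns
# each comment a "provocation score" between 0 and 1. The thresholds
# are hypothetical.

def triage(comment: str, score: float) -> str:
    if score >= 0.9:   # near-certain troll: reject automatically
        return "auto-rejected"
    if score <= 0.1:   # near-certain constructive: publish
        return "auto-approved"
    return "sent to human moderator"  # everything in between

for comment, score in [
    ("Thanks for the deep dive.", 0.05),
    ("You are all morons.", 0.95),
    ("This argument is garbage, and here's why...", 0.50),
]:
    print(triage(comment, score), "->", comment)
```

Under a scheme like this, the machine never has the final word on ambiguous comments; it just shrinks the pile a human has to read.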

There's also the robot factor. Robots simply aren't perfect. And you have to tell an algorithm what to look for, which isn't easy to do. It's hard to think of everything someone might say to cause a problem in the comments section of an article. Programmers are bound to miss a few things that an algorithm should learn.

But that’s also the beauty of AI.

A Continuous Process

Moderators, I presume, will keep adding to the list of words and phrases the algorithm should look for. Eventually, most of the things that deter people from commenting will be weeded out. Of course, that's also the downside. Sometimes a word or phrase belongs to a legitimate comment, but the algorithm may automatically toss that comment aside. So there are two sides to this coin, though both lead to the preservation of human moderators.
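That false-positive problem is easy to reproduce with the keyword sketch from earlier: a blocked word can appear inside a perfectly legitimate comment, such as one quoting the very insult it criticizes. Again, the trigger word is a hypothetical example.

```python
# Continuing the earlier sketch: naive substring matching trashes a
# legitimate comment because a trigger word appears inside a quote.

TRIGGER_PHRASES = {"idiot"}  # hypothetical trigger word

def is_constructive(comment: str) -> bool:
    text = comment.lower()
    return not any(phrase in text for phrase in TRIGGER_PHRASES)

legit = 'Calling opponents an "idiot" is exactly the tone we should avoid.'
print(is_constructive(legit))  # False: a constructive comment gets tossed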

The machines aren't going to take over moderating any time soon. What these new algorithms will do, however, is make sure that people with a real voice have a chance to stand up, while people who act solely to provoke are left in the shadows where they belong.