Trolls on the internet – no one loves them, no one wants them, but everyone tolerates them because you can’t punch those childish twerps in the face. The only way to deal with a troll effectively is to ignore it; eventually the troll loses interest and moves on to another subreddit to find a different victim. Another option is banning them, but how do you know when someone is a troll? How can you preemptively ban them before they become a total pain in the ass? Researchers at Cornell University think they have the answer in the form of an algorithm that can be used to auto-ban trolls on the internet.

The researchers’ discovery is the result of 18 months spent studying trolls on the internet and interviewing them and the moderators of three high-traffic online communities: CNN, Breitbart and IGN. The study was partly funded by Google and conducted in conjunction with Disqus. The algorithm is about 80% effective at identifying users who are trolls, but it has no associated plugin for any forum or online community just yet. So, if you’re hoping to get rid of the “lol, noob” comments on the Battlefield forums or the “rtfm, noob” comments on the Ubuntu forums, we’re still a ways from that.
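For the curious, the general shape of such a system is simple to sketch: compute a few signals from a new user’s first posts and train a classifier to flag likely Future Banned Users. Here’s a minimal sketch in Python, assuming an off-the-shelf scikit-learn model and entirely made-up features and data – not the paper’s actual model or dataset:

```python
# A minimal sketch of the kind of classifier the study describes: predict
# from a user's first few posts whether they will eventually be banned.
# The feature set, probabilities and synthetic data below are illustrative
# assumptions, NOT the paper's actual features, dataset or model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def early_post_features(posts):
    """Three toy signals from a user's early posts: average word length
    (a crude literacy proxy), fraction of posts deleted by moderators,
    and how concentrated their activity is in a few threads."""
    avg_word_len = np.mean([np.mean([len(w) for w in p["text"].split()])
                            for p in posts])
    deleted_frac = np.mean([p["deleted"] for p in posts])
    threads = [p["thread"] for p in posts]
    concentration = max(threads.count(t) for t in set(threads)) / len(threads)
    return [avg_word_len, deleted_frac, concentration]

def fake_user(is_fbu):
    """Synthetic user history: FBUs write shorter words, get more posts
    deleted, and cluster their posts in fewer threads than NBUs."""
    return [{
        "text": "lol u noob gg" if is_fbu else "a genuinely reasonable comment",
        "deleted": bool(rng.random() < (0.4 if is_fbu else 0.05)),
        "thread": int(rng.integers(0, 3 if is_fbu else 10)),
    } for _ in range(10)]

users = [fake_user(i % 2 == 1) for i in range(200)]
X = np.array([early_post_features(u) for u in users])
y = np.array([i % 2 for i in range(200)])  # 1 = FBU, 0 = NBU

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("P(ban):", clf.predict_proba([early_post_features(fake_user(True))])[0, 1])
```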

The study is titled “Antisocial Behavior in Online Discussion Communities” and was carried out by researchers Justin Cheng, Cristian Danescu-Niculescu-Mizil and Jure Leskovec. It compared antisocial users in these communities, referred to as Future Banned Users (FBUs), with users who are highly unlikely to ever be banned, referred to as Never Banned Users (NBUs). Of the roughly 10,000 FBUs studied, almost all began commenting at a lower standard of literacy than NBUs, and they tended to cling to specific comment threads before their final ban – engaging in one last flame war, if you will.
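How would you even quantify “a lower standard of literacy”? One common proxy is a readability formula; the sketch below uses the Automated Readability Index, purely as an illustrative stand-in for whatever text-quality measure the researchers actually used:

```python
# One way to put a number on "standard of literacy": a readability
# formula. The Automated Readability Index (ARI) here is an illustrative
# choice, not necessarily the exact metric the paper used.
import re

def automated_readability_index(text):
    """ARI = 4.71 * (chars/words) + 0.5 * (words/sentences) - 21.43.
    Higher scores roughly track more sophisticated writing."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = text.split()
    chars = sum(len(w.strip(".,!?;:")) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / sentences) - 21.43

print(automated_readability_index("lol noob. u suck."))    # well below zero
print(automated_readability_index(
    "The matchmaking changes in the latest patch have noticeably "
    "improved queue times for most players."))              # comfortably higher
```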

The study also found that FBUs started more new threads and introduction posts than NBUs on CNN’s community, while on Breitbart and IGN they were more likely to join existing threads and slowly derail the conversation there. This makes sense to me – CNN is a news portal, so people are more likely to post up news threads about topics that they want to discuss, end up engaging in flame wars and become trolls themselves.

There’s also an interesting finding in the study about how users who are censored excessively early on when joining a forum are more likely to turn into trolls themselves. Communities that actively deal with trolls every day also become more hostile towards FBUs, deleting their posts more frequently and handing out temp bans more freely. This creates a snowball effect in the FBU’s behaviour, making them even more antisocial and trollish until they are eventually permabanned. In fact, the researchers list this as one of the potential downsides of using the algorithm to identify trolls in these communities.

“While we present effective mechanisms for identifying and potentially weeding antisocial users out of a community, taking extreme action against small infractions can exacerbate antisocial behavior (e.g., unfairness can cause users to write worse),” the trio write in their conclusion.

So, how do you get around it? Well, since the algorithm only identifies trollish behaviour when it’s fairly consistent, evading detection may be as simple as spacing out the trollish posts with some relatively normal ones in between (see the toy sketch below). The researchers may eventually close that loophole, but until they do, we’re left to deal with these cretins on our own. Thanks, Obama.
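To see why that dilution trick would work, here’s a toy consistency-based detector – the rolling window, ban threshold and per-post toxicity scores are all invented for illustration:

```python
# Toy model of a consistency-based detector: average a per-post toxicity
# score over a rolling window and ban anyone whose average crosses a
# threshold. Window size, threshold and scores are all invented here.
from collections import deque

def would_ban(post_scores, window=5, threshold=0.6):
    recent = deque(maxlen=window)
    for score in post_scores:
        recent.append(score)
        if len(recent) == window and sum(recent) / window >= threshold:
            return True
    return False

consistent_troll = [0.9] * 10                     # toxic every single post
spaced_out_troll = [0.9, 0.1, 0.1, 0.9, 0.1] * 4  # same venom, diluted

print(would_ban(consistent_troll))  # True  -- gets banned
print(would_ban(spaced_out_troll))  # False -- slips under the threshold
```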

Source: Antisocial Behavior in Online Discussion Communities
