[Discussion] Do you run a community website? How do you ban?

Jeff Atwood’s write-up, Suspension, Ban or Hellban?, is probably the best I’ve seen yet, and it gives some cool ideas for methods.

But this is something that’s always kind of caught my attention when creating user-based or community-based systems, mostly because it’s very, very hard to put anything in place that actually works even a little bit without a massive amount of enforcement. When you implement the bans that Jeff talked about and the users know, or find out, they start getting paranoid and thinking that any error, lag, or bad post is because they were banned. But if you do an outright-denial style of banning, then people get mad and you run into DDoS attacks, multiple malicious accounts, or anything else they can think of.

The thing that really surprises me is how little it seems to be talked about in blogs and discussions. I think that might just be the “if we ignore it, maybe it will go away” mentality, and I don’t think that’s the way to go about things. As this site has seen, controlling content is very important. When I first came here, every post I made had to be approved. I understand the reasons for doing that, and that options were limited, but it’s a very bad way to go about things because it discourages new members and free discussion… heck, I don’t even know if this post has to be approved before it can be seen.

So, what methods do you use to ban? Do you do outright banning or some sort of soft-banning on the Coding Horror blog?

What about identifying users? VPNs are cheap and private browsing is very effective. Do you just make it as inconvenient as possible? Or do you take extra steps and add cookies or track IPs? Or do you go even further and implement something like cookieless cookies?
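
For anyone who hasn’t come across the “cookieless cookies” trick: it abuses HTTP caching by minting a unique ETag per visitor, which the browser echoes back on every revalidation, so the ID survives cleared cookies and private browsing. Here’s a minimal sketch assuming a bare Python server; all the names and details are illustrative, not any particular site’s implementation:

```python
# Minimal sketch of ETag-based "cookieless cookie" tracking.
# Everything here is illustrative, not any real site's implementation.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# 1x1 transparent GIF served as the "tracking pixel".
TRACKING_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
    b"\x00\x02\x02D\x01\x00;"
)

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        etag = self.headers.get("If-None-Match")
        if etag:
            # Returning visitor: the cached ETag is their ID, even with
            # cookies cleared. 304 keeps the cached copy (and ID) alive.
            print("seen visitor", etag)
            self.send_response(304)
            self.send_header("ETag", etag)
            self.end_headers()
        else:
            etag = '"%s"' % uuid.uuid4().hex  # first visit: mint an ID
            self.send_response(200)
            self.send_header("ETag", etag)
            # Force revalidation so the browser sends If-None-Match
            # back to us on every page view.
            self.send_header("Cache-Control", "private, max-age=0, must-revalidate")
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(TRACKING_GIF)))
            self.end_headers()
            self.wfile.write(TRACKING_GIF)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
```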

I don’t think I’m giving away any secrets here, but these are some of the things that we do here.

Spam and fake/bot accounts are banned on the spot, permanently.

For other breaches of the rules (e.g. fluff posting, advertising/self-promotion (different from spam), personal abuse), we have a system of warnings and infractions. For your first offence, you get a warning. After that, you get infractions worth a certain number of points, which may expire after a set period or remain on your account forever, both depending on the nature and severity of the offence. If you reach a total of 8 points then you are banned, but if some of those points subsequently expire and bring you back below 8, then you can use the forums again. This gives people a fair chance to mend their ways, but also gives us a robust audit trail for banning people after a series of lesser offences. (We also operate a “double jeopardy” system where you can’t be penalised twice for the same type of offence within 24 hours.)
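
To put that in concrete terms, the heart of a system like ours is just a sum over unexpired infraction points. Here is a minimal sketch; the 8-point threshold and 24-hour double-jeopardy window are as described above, but the data model and names are made up for illustration:

```python
# Minimal sketch of the points-based infraction system described above.
# The 8-point threshold and 24-hour "double jeopardy" window come from
# the post; the data model and names are made up for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

BAN_THRESHOLD = 8
DOUBLE_JEOPARDY_WINDOW = timedelta(hours=24)

@dataclass
class Infraction:
    offence_type: str            # e.g. "fluff", "self-promo", "abuse"
    points: int
    issued: datetime
    expires: Optional[datetime]  # None = stays on the account forever

@dataclass
class Member:
    name: str
    infractions: list = field(default_factory=list)

    def active_points(self, now: datetime) -> int:
        # Expired infractions stop counting, so a ban lifts itself
        # once enough points fall away.
        return sum(i.points for i in self.infractions
                   if i.expires is None or i.expires > now)

    def is_banned(self, now: datetime) -> bool:
        return self.active_points(now) >= BAN_THRESHOLD

    def penalise(self, infraction: Infraction) -> bool:
        # "Double jeopardy": no second penalty for the same type of
        # offence within 24 hours.
        for i in self.infractions:
            if (i.offence_type == infraction.offence_type and
                    infraction.issued - i.issued < DOUBLE_JEOPARDY_WINDOW):
                return False
        self.infractions.append(infraction)
        return True
```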

When we give warnings and infractions, we do tell people what they have done wrong and what the deal is, but in a lot of cases people don’t read/understand it.

For the most part it works well. Occasionally we do come across Rasputin-type characters who just won’t stay dead, and who keep signing up under a new name the instant we ban their last one, but it just takes perseverance and eventually they give up.

A couple of other points:

We are lucky that we have a large volunteer staff, so we can have reasonably complicated systems, but it is still a lot of work to keep on top of problem posts.

We did look at a similar feature to “hellban”, but decided against it. I don’t remember exactly what the reason was, but I suspect uncertainty around how robust the plug-in was may have played a part! The reason for considering this option was that one or two members were causing us so much trouble by defying bans and signing up under new names, and we wondered whether it would be a more effective way to stop them wreaking havoc. A downside would have been that it would have made it more difficult to spot the new incarnations. This was only ever considered as a practical measure.

We have never, as far as I’m aware, considered anything along the lines of “slowban”, where we would deliberately sabotage users’ experience of the site. For me personally, that just seems too much like taunting people on an ego-mad power trip. I’m not sure what it solves, other than vengeance. It isn’t transparent, so it’s unlikely to have a corrective effect. That isn’t the kind of culture that we want to foster!

Third time lucky - the other thing I was going to add is about posts requiring approval before they appear publicly. There are three aspects to this here:

  1. Any new threads in the “reviews” section are automatically put in the moderation queue, because we have particular rules and requirements that people have to satisfy before they are eligible to ask for a review, so we need to be able to check they have done that.

  2. We have a watch list of common words or phrases used by spammers that are unlikely to appear in any legitimate posts. Any posts that fall foul of this are put in the moderation queue so that we can delete them without them ever being publicly visible. Occasionally we get false positives from that, but not often. (A rough sketch of this kind of filter appears after this list.)

  3. The forum software has its own set of heuristics that it uses to flag posts that might be unsuitable. Unfortunately we have limited scope to customise this, and we do get a lot of false positives from that, particularly where people have included code samples. That’s a nuisance with the kind of subject matter that we have on here, but we would rather have a few false positives that we need to approve manually than open the floodgates and remove the automatic protection.
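
On point 2, a watch-list filter can be as simple as one case-insensitive regular expression built from the list of phrases. A rough sketch follows; the phrases and names are invented examples, not our actual list:

```python
# Rough sketch of a spam watch-list filter as described in point 2.
# The phrases and function names are invented for illustration.
import re

# Words/phrases that are unlikely to appear in legitimate posts.
WATCH_LIST = [
    "cheap replica watches",
    "buy followers",
    "work from home and earn",
]

# Compile once: a single case-insensitive alternation over the list.
_WATCH_RE = re.compile(
    "|".join(re.escape(phrase) for phrase in WATCH_LIST),
    re.IGNORECASE,
)

def needs_moderation(post_body: str) -> bool:
    """True if the post should go to the moderation queue unseen."""
    return _WATCH_RE.search(post_body) is not None

assert needs_moderation("Get CHEAP replica watches here!!!")
assert not needs_moderation("How do I centre a div?")
```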

Apart from the specific case of 1., and actual spam, we do our best to minimise the number of posts that have to be manually approved, but we are restricted by a combination of third-party software and the trade-off with managing spam.

I find the post linked to a bit misleading. The title is “Suspension, Ban or Hellban?” but it seems to be more about “secret suspension”, i.e. being hellbanned, slowbanned, or errorbanned.

OK, I see his point: dealing with problem posts and accounts does drain time and effort away from more constructive pursuits.

And as a programmer, I can understand the appeal of solving the problem programmatically.

I also get the impression that Jeff Atwood has very little tolerance for spam and other similar problems, and relies on outright banning based on IP.

I have reservations about this “one bad apple spoils the lot” approach.

And I am strongly against what I consider to be “playing games” with problem accounts.

IMHO:
- State up front what the policies and rules are.
- In many cases, be fair and give them a chance to correct their behavior. To this end, it is important that they know exactly what they did wrong.
- If they persist, then inch them towards a ban.
- If they still haven’t gotten the message and persist, ban them.

While being fair does take more time and effort, it is, errmm, only fair.

That’s kind of a luxury. It takes time and effort to get to know people and get to the point where you can trust them with such things, especially for a small development team with no room for any real community involvement, at least not involvement at that level.

Add to that that both SitePoint and StackOverflow are for programmers. They naturally draw a more technically competent crowd than sites that attract the general public. It’s amazing how much instruction it takes to teach some people how to perform simple functions or even follow a simple process flow.

I’ve had problems with these types of members becoming lazy* and untrustworthy over time. Then, if you don’t have the size or the ability to do the democratic thing talked about in the blog, you run into the issues of:

How do I remove this person without it looking bad for me or for them?
How do I remove them without making bad blood?

And those issues suck.

*By lazy, I mean they don’t actually do what they’re supposed to do. You can’t expect too much from volunteers… but I’m talking about people who are still participating frequently and watching things happen that shouldn’t, without doing anything.

I actually quite like the slowban and the errorban. The hellban seems kind of drastic to me, because people could figure it out much faster than the other two. If I want to remove someone from a community, it’s because I see them as a rotten apple, something that isn’t conducive to what I’m trying to foster in a community. Other than that, I don’t care about them; I just want them to go away, and go away quietly. I can see the hellban working well for spammers, though, just not actual users.

Both of these make them leave on their own. Their experience becomes annoying, not really worth their time, and just downright frustrating, so they leave. No hard feelings. No multiple accounts. And most importantly, no risk of them coming back with a DDoS… which is insanely hard and expensive to stop, extremely detrimental, and, even worse, stupidly easy to initiate.
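
Mechanically, both are cheap to bolt on: a thin layer in front of the app that adds random delays for slowbanned users, or sprinkles in fake errors for errorbanned ones. Here’s a minimal WSGI-style sketch, assuming you identify people by IP; the addresses, odds, and timings are all made up:

```python
# Minimal WSGI middleware sketch of slowban/errorban. The ban lists,
# odds, and delays are illustrative, not any real forum's code.
import random
import time

SLOWBANNED = {"203.0.113.7"}    # example addresses only
ERRORBANNED = {"198.51.100.9"}

class SoftBanMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        addr = environ.get("REMOTE_ADDR", "")

        if addr in SLOWBANNED:
            # Degrade the experience: random multi-second delays.
            time.sleep(random.uniform(2.0, 8.0))

        if addr in ERRORBANNED and random.random() < 0.3:
            # Intermittent fake failures are harder to diagnose
            # than consistent ones.
            start_response("500 Internal Server Error",
                           [("Content-Type", "text/plain")])
            return [b"Something went wrong. Please try again later.\n"]

        return self.app(environ, start_response)
```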

These things become more important the lighter your registration requirements are, because it becomes easier for people to come back. My experience is mostly with anonymous and quick-register type sites.

Also, please don’t take this as criticizing this site in particular in any way whatsoever! Spam and bad users are hard as hell to stop. There are really no perfect ways to do it, and not very many “good” ways either, which is why I wanted to start this discussion. :smiley: Plus, you’re further limited, and more exposed, when using widely used and ageing systems. I’m ready to see Discourse in action :slight_smile:

Stevie and Mittineague have done a great job of telling you what we do here, so I won’t repeat them, but I’ll address a couple of your points.

I’m surprised about that as well, because I think it’s very important to be transparent about your guidelines when it comes to banning.

Hmmm. We don’t have that policy for just the reasons that you say. It sounds like something is going a bit wrong with our automoderation. Unless you hit the trigger list you shouldn’t go into the mod queue. Apologies for that.

You’re right to a degree, but it’s not impossible. I’m a freelance Community Manager, so I manage a number of communities. One is small, as I started it from scratch a couple of months ago. I don’t have a team of mods, so I rely on the built-in tools that forum platforms provide. Discourse is good in that particular respect because it utilises crowd-sourced moderation.

The easiest way is to give them the opportunity to save face and walk away. I always contact people privately and explain to them what they’re doing wrong. I give them the opportunity to change. If they don’t, I give them the opportunity to walk. If they don’t, I demote them. That is firm and fair. Ultimately communities are NOT democracies. You take advice and listen to feedback, but at the end of the day it is the job of the CM to run a community that is healthy and enjoyable for everyone to participate in. And that means that everyone has to carry their share of the weight.

I don’t agree… You set guidelines and people take on the role based on those. You can expect them to fulfil those commitments, provided they are clearly communicated.

So are we.

And no problems, I understand that you’re not criticising us and I’m happy to discuss this more if you have other questions. :slight_smile:

[ot]

As a moderator, I can confirm that something has been amiss with the automoderator for some time. :frowning: It periodically takes the strunts against perfectly innocent members and starts putting all their posts into moderation, for no apparent reason. I realise this must be very frustrating, and I (and others) try to approve them promptly, but there are times when staff are thin on the ground.

Unfortunately, there’s no point trying to fix it now, when we’re about to move to Discourse. Hopefully the move will solve a lot of the current software frustrations. [/ot]