Community moderation is no small task, especially in video games. People crack jokes about 12-year-olds yelling racial slurs at them on Xbox Live, but that's because it's a thing that constantly happens. And while it sometimes seems like game developers and publishers don't give a crap, generally speaking they do. The problem is, it's not easy. Manual, human moderation is time-consuming in ways few tasks are, and automated moderation has far too many holes. For example, let's take a look at Rainbow Six Siege, which may be the greatest example of automated moderation to date. It's great because not only are people mad about it (for the most obvious reasons), but an unexpected side effect has made the whole thing hilarious to boot.
Here's the story. Some unfortunate individual yelled at the official Rainbow Six Siege Twitter account, saying they were banned for using the n-word. Of course, some letter substitutions were used, because that's how public, online cowardice works. The account responded, in effect, by saying good, that's exactly what should happen. Then a massive storm followed, as the community realized that Rainbow Six Siege was automatically banning people who used certain words in text chat. Initial bans last just over a full day, with subsequent violations leading to longer and longer suspensions, until a permanent ban kicks in.
Now, I'm of the camp that believes using racial slurs is a bad thing, and doing so in a public forum is especially bad. I also think imposing serious consequences for using them in a private space, such as someone else's video game servers, is a good thing. It's not censorship or thought policing; it's consequences for subhuman behavior. But that's not the end of the story here.
As I mentioned before, automated moderation has its issues. While it makes sense that a giant video game like Rainbow Six Siege can't have a dedicated team of moderators watching player activity and responding to bad actors, it's always risky to introduce bots. That's because bots are ones and zeros that do literally what they're told, nothing more and nothing less. This means that in some cases people can be unjustly punished, and in others, players can find ways to work around or abuse the auto-mods.
This is one of those cases. Not long after the gaming community learned of the slur bans, the system was weaponized in an incredible display of human psychology. Now that players knew certain words triggered automatic bans, all it took was heated tempers for people to get goaded into saying them. Players would also use leading conversations and deliberately edgy humor to trick opponents. On the Rainbow Six Siege Reddit community, players have admitted to leveraging this system to get others banned at their convenience. That it actually works is the hilarious part. Arguably, this is the system working as intended, just to a much greater effect.
This whole Rainbow Six Siege drama is another chapter in the book of toxicity in gaming. People have, for too long, viewed online gaming as the Wild West. They consider it a realm in which they can do whatever they want, up to and including using language that would get them punched in the face in a public setting. It's really satisfying to see a company not only admit to a new zero-tolerance policy, but loudly defend and boast about it. The weaponization thing? That's just icing on the cake. A slur is a slur, and even if someone eggs you on until you say the word, you're still the person who had it tucked away in your vocabulary. Bye!