Games Don't Do Enough to Combat Toxicity at Launch

Riot Games has cutting-edge moderation tools at its disposal. Few of them are present in Valorant, which launched this week.
Characters from the game Valorant (image courtesy of Riot Games)
Valorant launched earlier this week, but its anti-toxicity features lag.

“I do not use voice chat if I’m going in alone,” says Valorant’s executive producer, Anna Donlon. Playing games like the first-person shooter Valorant, Donlon has always received comments about her gender. “It was a huge wake-up call for me when I experienced it in my own game. That sucked.”

In early May, when Valorant was in closed beta, publisher Riot Games said it would make an effort to curb toxicity after several of the company’s female developers spoke out about harassment they’d received in-game. On Twitter, a UX designer for another Riot game, Teamfight Tactics, posted a video of a teammate calling her a “thot” after she turned down his advances over voice chat. Wrote one senior game designer in the replies to the tweet, “It's fucked up, but this is why i added the #RIOT tag to my handle. I've noticed a significant decline in voice comms harassment since adding it.” Said Donlon, “Gross, this is creepy as hell. This is why I can't solo. I'm so sorry.”

Top Twitch streamer Imane “Pokimane” Anys had something to say about it, too: “please implement anonymous mode. i've asked since playtesting alpha + am tired of people calling me a skank, thot, or saying other rude and vulgar things EVEN when i don't use my mic. 🥺🙏”

Valorant launched earlier this week, but its anti-toxicity features lag. Like Overwatch, which didn’t get a function to report abuse on console until more than a year post-launch, and Apex Legends, which launched without a report feature entirely, Valorant is not on track to keep up with gamers’ tendency to harass women and minorities. It launched before developers implemented a robust system to combat toxicity: strict in-game messaging about what is not OK; incentives for prosocial behavior; and stern punishments for repeat offenders. (Last week, Amazon’s first big videogame, Crucible, launched without voice or text chat because, developers say, they were not equipped to mitigate toxicity.) Apex Legends publisher EA did not respond to WIRED’s request for comment. In 2017, Overwatch developer Blizzard described the implementation of a console reporting system as “extremely challenging.”

“I don’t think we were prepared nearly enough for games plagued by disruptive behavior—what a lot of people would refer to as harassment or toxicity in games,” says Donlon, a veteran of Call of Duty studio Treyarch. Based on my personal experience playing Valorant’s closed beta—an admittedly curated audience—I’d agree; in nearly every game, a teammate used a racist or homophobic slur. Since release Tuesday, the games I’ve played have not fared much better.

Valorant is Riot Games’ first big game in 11 years, but it’s not the studio’s first time hosting a toxic community under its roof. League of Legends famously has a reputation for being unfriendly to women, minorities, and new or unskilled players. In one 2020 community survey with 3,784 respondents, 79 percent of League of Legends players said they were harassed after a match ended. For a decade, studies and articles written by and for female League of Legends players have described the game’s rampant sexism. In 2018, a Kotaku exposé revealed endemic sexism at the company itself, which resulted in a gender discrimination lawsuit.

At a minimum, online multiplayer games launch with, or eventually add, reporting systems, so if anyone’s being toxic in voice or text chat, or intentionally sabotaging the game, teammates and opponents can alert moderators. A player who gets reported enough times might be silenced, suspended, or banned. Today’s biggest game publishers now recognize that’s not enough, though, and Riot Games has received ample praise for “leading the charge” to implement more rigorous systems. In 2015, the studio introduced “reform cards” to League of Legends, which told players punished for poor behavior exactly why they were being reprimanded. In 2017, League of Legends added a system in which players could receive in-game rewards by leveling up “honor” with their prosocial behavior. In March, Riot Games publicized its new “Player Dynamics Discipline” team, which would leverage neuroscience, sociology, and more toward building a welcoming community.

Today, anyone logging in to Valorant for the first time will see a message with vague language around being a good community player: “Compete to win, together”; “Commit to respect and empathy”; “Protect my community.” The screen does not ask players to refrain from racist, sexist, or homophobic behavior. Valorant does have a “ping” system, so players don’t have to communicate using voice or text chat, which could reveal their demographics. And compared to the closed beta, Valorant’s reporting system now offers more options to explain how a player was toxic; in the reporting interface, you can say exactly why you think a teammate should be punished. But reported players receive only team chat and voice restrictions. Many of the technologies developed over a decade of League of Legends didn’t make it into this game.

When asked what lessons from combating toxicity in League of Legends would be applied to Valorant, game director Joe Ziegler said “that became a big discussion at the very beginning of this beta.” While the Valorant team knew toxicity would be a big problem, he said, “it’s one of those things where a thing gets bigger than you expected it to be and is now a bigger problem than it was before, because there’s more people there.” Ziegler says Valorant will develop back-end technology that detects what’s being said and reduces jerk players’ exposure to other players.

Kat Lo, a researcher and consultant who studies toxicity in videogames and user interfaces, says that trust and safety measures are too often “thought to be a layer on top of the game and not essential to gameplay.” In her view, good reporting systems at launch include explicit, in-client messaging about unacceptable behavior; in-client acknowledgement that reports have been acted upon; and prosocial encouragement systems, like League of Legends’ honor system. “When you have an embedded culture it is very, very hard to change that as opposed to shaping it from the beginning to be nontoxic,” Lo says.

Why wouldn’t a game company hold off on launching until it had those systems in place? Lo thinks, in part, it’s because game companies are focused on KPIs—key performance indicators—and good community feel isn’t as easily measured and conveyed to executives as the number of players you have or how much they spend. “It’s difficult to legitimize it as a core feature or an important thing to optimize for within the company,” Lo says.

WIRED followed up with Riot Games after Valorant’s release to ask why the launch wasn’t delayed until a stronger anti-toxicity system was ready, and why Riot Games didn’t implement some of the features that had proven successful in League of Legends. We did not receive specific answers, but Donlon says that Riot learned a lot about what players want in closed beta, that they “want more ways to drive action against disruptive behavior,” and that the team is “making consistent progress, year over year, as the community evolves.”

“We’re in a state currently where our goal is to protect all players from online harassment,” says Ziegler, noting that the systems in-game are “somewhat incomplete.”

Game launches are opportunities for game publishers to set the tone. Without robust systems in place to hold people accountable for their behavior online, toxicity proliferates and reinforces itself. Riot Games’ own data found that new players who encounter toxicity are 320 percent more likely to quit the game; this is how online gaming communities alienate women and minorities. And it erects barriers to skilling up when, as Donlon puts it, teammates “believe that [my voice/gender] is putting them at a competitive disadvantage.”

“Toxicity, harassment, and disruptive behavior in voice chat have always been the hardest of the problems,” says Donlon, “and it’s definitely the problem I think we’re going most aggressively at to find solutions for.”

