Another week, another plan by Twitter to combat abuse. It's easy to be cynical about the company's chances of overcoming a problem that has plagued its platform for years. But this time feels a little different.
To be sure, the new features themselves still seem incremental. Twitter said it plans to keep banned users from creating multiple new accounts via a combination of human and algorithmic intervention. It will introduce a “safe search” feature that hides sensitive content from search results. (Users can choose to opt out.) And it will hide “low-quality” replies so users don’t get incessantly pinged with abusive tweets.
But the changes are coming at a time when the internet's biggest companies are starting to acknowledge, at least implicitly, that they can't really stay neutral when it comes to harmful content on their platforms. In trying to stake out neutral territory, they're finding there's really no ground to stand on.
"In the wake of the election, it feels like there’s a tide change around social media platforms and their responsibility," says Tarleton Gillespie, who studies the impact of social media on public discourse at Microsoft Research.
It’s been a few years since former Twitter CEO Dick Costolo admitted in an internal memo that Twitter sucks at dealing with abuse. Yet the company’s progress on mitigating harassment—especially for women and people of color—has been slow. Twitter has in some ways become so synonymous with harassment that it was reportedly seen as a liability for potential acquirers of the service. Late last month, CEO Jack Dorsey tweeted that the company is taking a "completely new approach to abuse." But any such effort is overshadowed by a president who has no problem using Twitter to unleash personal attacks.
"There’s a very dark mood in Silicon Valley right now," says Leslie Miley, a former Twitter engineering manager. "I think the thing people are grappling with overall is their part in this tragic black comedy that is Donald Trump administration."
Google and Facebook also seem to be feeling a need to grapple with the role they have played. Both have undertaken highly visible initiatives to curb fake news. Now, through a project called CrossCheck, they've committed to helping French newsrooms combat fake news ahead of the country's presidential election, which itself is being upended by a right-wing populist. Pressure on companies to fix their problems is coming from the public. But especially where politics is concerned, it's also coming from within. "I think there is a lot of worry and guilt among many tech employees at platform companies about their complicity in their platforms being used in ways that are destructive to society," says James Grimmelmann, a professor of law who studies social networks at Cornell Tech.
As for whether Twitter's latest efforts will work to stamp out abuse, it's hard to say, in part because the company has shared few details about the technology involved. In an ideal world, Grimmelmann says, Twitter would allow its users to train an artificially intelligent system to recognize which kinds of comments they do and don't mind seeing. Twitter says it can’t describe exactly how the process works for fear that abusive users will use the information to game the system.
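To make Grimmelmann’s idea concrete, here is a minimal sketch of how such a user-trained filter might work, built on entirely hypothetical example data and a simple bag-of-words model (it is an illustration, not Twitter’s actual system): a user labels a handful of mentions as fine or unwanted, a classifier learns from those labels, and new mentions are scored before they reach the user’s notifications.

```python
# Illustrative sketch only: a per-user "unwanted mention" filter of the
# kind Grimmelmann describes. The data and labels below are hypothetical;
# this is not how Twitter's system actually works.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training examples one user might label.
tweets = [
    "great thread, thanks for sharing",
    "interesting point, I hadn't considered that",
    "you are an idiot and should log off forever",
    "nobody wants you here, delete your account",
]
labels = ["fine", "fine", "unwanted", "unwanted"]

# Turn the raw text into word-count vectors.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)

# Fit a Naive Bayes classifier on this one user's labels.
model = MultinomialNB()
model.fit(X, labels)

# Score an incoming mention before it reaches the user's notifications.
incoming = ["log off, nobody wants your takes"]
prediction = model.predict(vectorizer.transform(incoming))[0]
print(prediction)  # likely "unwanted" for this toy training set
```

The appeal of this design is that each user, not the platform, decides where the line sits; the obvious cost is that every user has to supply labeled examples before the filter knows anything.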
Tools have long existed to block IP addresses linked to abusers. And harassment bots typically use the same wording over and over, which in theory should make them easy to detect. But critics of Twitter's abuse reporting process say it's as much a process problem as a tech problem, especially when it comes to understanding why Twitter lets cases of reported harassment slide. There is no appeals process through which a human would take a second look at a reported abuser. "There’s nothing I can do to ask a human to get involved," says Brianna Wu, an engineer and video game developer who’s been targeted by the Gamergate movement. At the moment, it's still humans who have to figure out how to stop the tech they've created from doing any more harm.