Twitter announced an important update today to its abuse-prevention product along with changes to its user policy. Before I get to my opinion on these changes, I'd like to present the closing paragraph from a New York Times book review, as it's rather timely to this discussion.
Last week the Times reviewed a book called "So You've Been Publicly Shamed," which discusses the phenomenon of online public shaming: "from time to time, it seems as if every user of social media rises up as one to denounce, shame and remove an apparently deserving victim." The book's reviewer, Choire Sicha, noticed something the author didn't really highlight: that men who have been publicly shamed fare much better post-shaming than women in similar situations. Mr. Sicha says it better than I ever could: "For women — and for all gender offenders, from gays to trans people — insult and the threat of murder are issued simultaneously." He ends his review with this: "The actual problem is that none of the men running those bazillion-dollar Internet companies can think of one single thing to do about all the men who send women death threats." It's not that these companies never display empathy, but until now Twitter has been largely silent on abuse and harassment. As CEO Dick Costolo admitted earlier this year: "We suck at dealing with abuse and trolls on the platform and we've sucked at it for years." The reporting process was unwieldy and, worse, failed to get results.
So, back to today's announcement. Twitter wisely attacked this problem from two different angles: product and policy. Product changes include, according to Twitter, an algorithm that automatically recognizes harassment. "This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive." It's an interesting product that will probably only improve with time as it "learns" how to automatically recognize and filter abusive statements.
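To make the idea concrete, here is a minimal sketch of what a signal-based filter like the one Twitter describes might look like. Everything here is invented for illustration: Twitter has not published its model, and the signals (account age, similarity to known-abusive tweets), weights, and thresholds below are assumptions, not their implementation.

```python
# Hypothetical sketch of a signal-based abuse filter, loosely modeled on
# Twitter's public description: blend account age with similarity to
# content previously judged abusive. All weights/thresholds are invented.

def jaccard(a: str, b: str) -> float:
    """Crude token-set similarity between two tweets (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def abuse_score(tweet: str, account_age_days: int,
                known_abusive: list) -> float:
    """Combine two signals: very new accounts and text resembling
    known-abusive examples both raise the score. The 0.4/0.6 weights
    are illustrative only."""
    age_signal = 1.0 if account_age_days < 7 else 0.0
    similarity = max((jaccard(tweet, ex) for ex in known_abusive),
                     default=0.0)
    return 0.4 * age_signal + 0.6 * similarity

def should_filter(tweet: str, account_age_days: int,
                  known_abusive: list, threshold: float = 0.5) -> bool:
    """Flag the tweet if the blended score clears the threshold."""
    return abuse_score(tweet, account_age_days, known_abusive) >= threshold
```

A real system would presumably use a trained classifier over many more signals, and would retrain as the safety team labels new examples, which is what makes the "learns over time" framing plausible.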
Twitter has also decided on an interesting product tactic: limiting the reach of suspicious tweets. It wisely recognized a distinction between publishing a tweet and viewing that tweet. If a user tweets and nobody sees it, is the tweet even public? I'm not trying to make this too philosophical, but look at it from the product perspective. Changing the rules of when and if a tweet shows up in a person's mentions is an interesting approach. Of course, the proof will be in the results: will women see fewer abusive tweets?
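The publish-versus-view distinction can be sketched in a few lines. This is purely an illustration of the concept, not Twitter's data model: the `Tweet` class and `flagged_as_abusive` field are hypothetical names I've made up for the example.

```python
# Hypothetical illustration of decoupling publication from visibility:
# a flagged tweet stays on its author's timeline, but is dropped from
# the recipient's mentions. The data model is invented for this sketch.

from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str
    flagged_as_abusive: bool = False  # set upstream by a detection pipeline

def mentions_timeline(user, tweets):
    """Return tweets mentioning `user`, hiding flagged ones. The tweet
    is still technically public; it just never reaches the target."""
    return [t for t in tweets
            if f"@{user}" in t.text and not t.flagged_as_abusive]
```

The design choice is notable: rather than deleting speech (a heavy, appealable action), the platform quietly reduces its audience, which is cheaper to apply and easier to reverse.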
The second aspect of Twitter's changes is policy. Twitter says it has updated its violent threats policy so that the prohibition is no longer limited to 'direct, specific threats of violence against others' but now extends to 'threats of violence against others or promoting violence against others.' They say the change will let them respond in new ways, such as requiring additional verification, locking accounts for a limited period, and locking accounts until offending tweets are removed. While Twitter says this "option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people," I think it also points to its weak point: decisions on difficult topics will need to be made by humans, not algorithms. This means Twitter will need to beef up its support staff and build a team that understands what constitutes abuse not just in the US but globally. That team needs to be culturally sensitive and globally aware, and such a team is not easy to build. Neither is it cheap.
This generation of tech companies is all about building something wonderful and offering it for free. This great quote from TechCrunch conveys the sentiment beautifully: "Uber, the world's largest taxi company, owns no vehicles. Facebook, the world's most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world's largest accommodation provider, owns no real estate." When users of these platforms play nice, the world is full of rainbows. Yet when they don't, and there will always be at least one person who abuses the system, these platforms rarely offer a call-in support line, are slow to respond, and, in the end, do nothing.
Can Twitter's new policy and product changes make a difference? It's hard to tell at this point. Much will depend on the smarts and learning ability of the detection algorithm, and on the responsiveness and effectiveness of the human support team. Bottom line, much will depend on how many resources Twitter is willing to throw at this problem. I hope, for all our sakes, not just women's, that these two changes make a difference, and that they are the first of many steps.