Why we can’t have nice things: planning for abuse

As a product manager, it’s really easy to plan for good behavior. Think of a scenario, map the flow, design the product. It can be that simple. Of course, along the way assumptions are made about how users will interact with the product. Some of those assumptions reflect the PM’s personal experience; others are based on usage data or user feedback that simply doesn’t represent every possibility. Regardless of the road taken to get there, be it willful ignorance or unintentional oversight, many social products don’t anticipate the ways abusers can use them to harass others.

I thought about this last week after the launch of Peeple, an app that calls itself the “Yelp for people.” Back in October, when it was first announced, it generated an incredible backlash. The Register called it “slander-as-a-service” and described it as “an app that lets people rate other people, whether they like it or not.” The Washington Post, no less, said: “It’s inherently invasive, even when complimentary. And it’s objectifying and reductive in the manner of all online reviews. One does not have to stretch far to imagine the distress and anxiety that such a system would cause even a slightly self-conscious person; it’s not merely the anxiety of being harassed or maligned on the platform — but of being watched and judged, at all times, by an objectifying gaze to which you did not consent.” The co-founders seemed to shrug off all the criticism, ignoring the potential for abuse, and said that Peeple’s goal is “making the world more positive.”

At the time, Ella Dawson wrote an incredibly detailed post on the many ways Peeple could be used to harass people, especially non-members. “All it takes is for my ex to create a profile for me, or even one of his friends who he gives my [cellphone] number to, and a horde of abusers in online communities can send an avalanche of terror directly to my cellphone.” There were, at the time, no real anti-abuse tools, just the admonition that these users would be “violating the terms of use.” At the launch last week, Ms Dawson did note that some changes had been made: the app is now opt-in, “meaning people need to join the app and consent to reviews being posted about them,” and users “now have the ability to hide their negative reviews.” However, it turns out that Peeple is planning to sell a “truth license” that allows paying subscribers (such as potential employers or universities) to see “hidden reviews the individual has not elected to share. Meaning my ex’s grievances get some airtime. Meaning I do not actually have control of my profile.” It is not yet clear whether that “truth license” will allow access to profiles of people who haven’t opted in to Peeple, which would be horrendous. Ms Dawson ended with a request: “Do not download the app. Every download pushes the rating of the app higher in the app store. Every download validates the app—a hate download is still a download.”

TechCrunch also thinks that Peeple is not completely opt-in. From last week’s review: “In other words, even if you’re not participating, someone could write your review. Sure, that review might not be public, but it exists in a digital format on the company’s servers.” The reviewer, Sarah Perez, also said: “it appears the plan is to reactively handle abuse claims, much like larger social services like Twitter do (and struggle with) today. But for a service that involves providing a blank slate for the sole purpose of letting users write people recommendations, not having some basic, automated moderation system in place to at least block profanity and other keywords is either a glaring oversight or an intentional (and callous) decision. If the latter, it’s likely one that’s designed to beef up the company’s private database of bad reviews marked for sale.” Ms Perez’s final words: “Peeple is live on the iOS App Store for the time being. (TechCrunch is choosing to not provide a direct link.)”

The emphasis in Ms Perez’s review is mine. Twitter is a great place to engage, to find like-minded people, and to keep up with live events. Yet, as Ms Perez said, Twitter is struggling with how it handles harassment claims. Much of what it does is reactive, and it is often ineffective at stopping abuse as it happens. Twitter’s challenge is to stop harassment without changing the features that make it great.

Randi Harper wrote three great posts on Medium about privacy and design. The first is about Facebook’s Real Name policy, whose goal is to eliminate harassment, though it takes away the advantages that anonymity provides. She also points out that “the design of Facebook itself does not give as much positive feedback [as Twitter] to those seeking to harm.” The second lists feature suggestions for Twitter to better protect users from abuse, which I liked because most of them, such as user verification and mechanisms for blocking users and hashtags, can do a lot of good and seem like they wouldn’t harm the “essence” of Twitter. Finally, there is a post with ideas for cleaning up YouTube’s comments, something I think is impossible even with these tweaks. Ms Harper says: “Is this a big departure from what YouTube is doing now? Probably. Does this have a high engineering cost? Most definitely. Would it drastically improve the quality of content everyone sees on YouTube? Absolutely.”

My point in quoting Ms Dawson’s and Ms Harper’s reviews of these social apps is to second their opinion that abuse of a service and its users needs to be considered at the planning stage. PMs need to avoid the magical thinking that a product will only be used “for good.” It’s not a question of if a social platform will harbor harassment, it’s when. At that point, policies and product need to be ready for action. Easy? No, it’s extremely difficult: no platform seems to have solved it, whether it requires real names, allows anonymity, or sits somewhere in between. Sadly, what Peeple has done is make abuse far too easy.
