As a product manager, it's easy to plan for good behavior. Think of a scenario, map the flow, design the product. It can be that simple. Of course, along the way assumptions are made about how users interact with the product. Some of those assumptions reflect the PM's personal experience; others are based on usage data or user feedback that simply doesn't represent every possibility. Regardless of how it happens, be it willful ignorance or unintentional oversight, many social products don't anticipate the ways abusers can use them to harass others.
I thought about this last week after the launch of Peeple, an app that calls itself the "Yelp for people." Back in October, when it was first announced, it generated an incredible backlash. The Register called it "slander-as-a-service" and described it as "an app that lets people rate other people, whether they like it or not." The Washington Post, no less, said: "It's inherently invasive, even when complimentary. And it's objectifying and reductive in the manner of all online reviews. One does not have to stretch far to imagine the distress and anxiety that such a system would cause even a slightly self-conscious person; it's not merely the anxiety of being harassed or maligned on the platform — but of being watched and judged, at all times, by an objectifying gaze to which you did not consent." The co-founders seemed to shrug off all the criticism, ignoring the potential for abuse, and said that Peeple's goal is "making the world more positive."
TechCrunch also points out that Peeple is not completely opt-in. From last week's review: "In other words, even if you're not participating, someone could write your review. Sure, that review might not be public, but it exists in a digital format on the company's servers." The reviewer, Sarah Perez, also said: "it appears the plan is to reactively handle abuse claims, much like larger social services like Twitter do (and struggle with) today. But for a service that involves providing a blank slate for the sole purpose of letting users write people recommendations, not having some basic, automated moderation system in place to at least block profanity and other keywords is either a glaring oversight or an intentional (and callous) decision. If the latter, it's likely one that's designed to beef up the company's private database of bad reviews marked for sale." Ms Perez's final words: "Peeple is live on the iOS App Store for the time being. (TechCrunch is choosing to not provide a direct link.)"
The emphasis in Ms Perez's review is mine. Twitter is a great place to engage, to find like-minded people, and to keep up with live events. Yet, as Ms Perez said, Twitter is struggling with how it handles harassment claims. Much of what it does is reactive, and it is often ineffective at stopping abuse as it happens. Twitter's challenge is to stop harassment without changing the features that make it great.
Randi Harper wrote three great posts on Medium about privacy and design. The first is about Facebook's Real Name policy: its goal is to eliminate harassment, but it also takes away the advantages that anonymity provides. She also points out that "the design of Facebook itself does not give as much positive feedback [as Twitter] to those seeking to harm." The second lists feature suggestions for Twitter to better protect users from abuse, which I liked because most of them, such as user verification and mechanisms for blocking users and hashtags, could do a lot of good and seem like they wouldn't harm the "essence" of Twitter. Finally, a post with ideas for cleaning up YouTube's comments, something I think is impossible even with these tweaks. Ms Harper says: "Is this a big departure from what YouTube is doing now? Probably. Does this have a high engineering cost? Most definitely. Would it drastically improve the quality of content everyone sees on YouTube? Absolutely."
My point in quoting Ms Perez's and Ms Harper's reviews of these social services is to second their opinion that abuse, of the service and of its users, needs to be considered at the planning stage. PMs need to avoid the magical thinking that a product will only be used "for good." It's not a question of if a social platform will harbor harassment, but when. At that point, policies and product need to be ready for action. Easy? No, extremely difficult, as no platform seems to have solved it, with real names, anonymity, or anything in between. Sadly, what Peeple has done is make abuse far too easy.