It’s a given in social networks (and practically every digital product) that while the majority of users will use them positively, there will always be those who abuse the network and other users. That abuse ranges from minor nuisances, such as spam, to harassment that can (and has) culminated in physical harm. At some point, steps will need to be taken to address the misuse, whether it is a minor annoyance or a major problem.
A few weeks ago, Dick Costolo, CEO of Twitter, admitted that Twitter wasn’t doing enough to prevent harassment and vowed to do more. Twitter has some blocking mechanisms for abuse, but users are rarely penalized in the long term, since they can always open another account. Facebook, which has a better reputation for preventing harassment, owing partly to the nature of the network (the mutuality of friendship) and partly, by its own account, to its Real Names policy, is now dealing with some of the drawbacks of that policy. On most networks, fighting abusive behavior is a game of whack-a-mole: for every abusive user who is blocked, two take their place. The challenge for networks is that if the “legitimate” users (loosely defined as those who use the network in the positive way its creators intended) feel that there is too much abuse, they will leave, harming the network in the long run. Product managers should look not only at how they’d like users to use their product, but also at how it can potentially be misused.
Curious about best practices, I decided to look at what LinkedIn does, as it doesn’t get called out for abuse often. Most of the spam I’ve seen on LinkedIn is in its Groups feature. Unlike other forms of engagement on LinkedIn, Groups rely on community moderators; other modes of communication require either a previous connection between users or a paid service such as InMail. Groups, on the other hand, are free, and joining most of them doesn’t require moderator approval. When I see spam in my groups, it’s usually from users whose profiles are easy to classify: fewer than five connections (sometimes just one or two), a single place of employment, usually without any job detail, and, often, a stock photo.
The “easy to classify” description bothers me. If spammers on LinkedIn are so easy to classify, with common characteristics that define them, why doesn’t LinkedIn do more to stop them? Groups are an attractive target for spammers because members cannot opt out of receiving some form of email notification (instant, daily, or weekly) that a post has been made. It usually takes LinkedIn at least 24 hours to delete such posts and remove the fake accounts, but by then the email has gone out to the group members and the post has been seen. It’s a great way for spammers to reach real people, and judging by the fact that I keep seeing variations of the same messages, the conversion rate must make it worth their time. Yet it cannot be this easy to note the similarity in abuse cases, determine a pattern, and block those who fit it; otherwise LinkedIn would have blocked this path already, right?
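To make the point concrete, the profile traits described above could feed even a naive rule-based filter. Here’s a minimal sketch in Python; the field names (`connections`, `positions`, `photo_is_stock`) are hypothetical stand-ins for whatever a real network stores internally, not LinkedIn’s actual data model, and a production system would obviously weigh many more signals:

```python
# Toy rule-based spam-profile filter based on the traits observed above:
# very few connections, a single bare-bones employment entry, and a stock
# photo. All field names are illustrative, not a real platform's schema.

def spam_score(profile: dict) -> int:
    """Count how many spam indicators a profile matches (0-4)."""
    score = 0
    if profile.get("connections", 0) < 5:
        score += 1
    if len(profile.get("positions", [])) <= 1:
        score += 1
    # A position with no description suggests a filler entry.
    if any(not p.get("description") for p in profile.get("positions", [])):
        score += 1
    if profile.get("photo_is_stock", False):
        score += 1
    return score

def looks_like_spammer(profile: dict, threshold: int = 3) -> bool:
    """Flag profiles that match at least `threshold` indicators."""
    return spam_score(profile) >= threshold
```

A profile with two connections, one title-only job, and a stock photo would be flagged, while an established profile with hundreds of connections and a detailed work history would pass. The interesting question isn’t whether such a filter can be written, but where the platform sets the threshold to avoid punishing legitimate new users who also start with few connections.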
I started this post by looking at LinkedIn because, on the surface, it seemed to have the least abuse, so it must be doing something right. While its defenses must be more sophisticated than what I observed, it seems there should be ways to identify and block the patterns that even this quick analysis surfaced. What am I missing?