Why we can’t have nice things: Facebook and the false news story

Fewer hoaxes in my newsfeed? Hopefully! Source: Facebook

Facebook announced today that it is trying to crack down on hoax posts by letting users tag them in their feed (I haven’t seen this yet, as I optimistically believe that my Facebook friends are too smart to share these hoaxes). The new option lets users report a “false news story” in the post-reporting dialog. The end result, however, won’t be to remove false stories from the newsfeed but rather to mark them as “suspicious.” As Gizmodo reported: “Interestingly enough, though, Facebook claims that it won’t be ‘removing stories people report as false’ nor will it ‘review content and make a determination on its accuracy.’ In other words, anyone with enough tech savvy will probably be able to find a way to game the system, since Facebook won’t actually be making its own value judgements.”

This “hands-off” approach surprised me. The Newsfeed is a major Facebook product, and “getting it right” is a top priority for the company. News itself is also an area Facebook wants to concentrate on, and Facebook is becoming a greater source of referrals for news sites. The launch of Facebook’s standalone Paper app a few months ago demonstrated that Facebook is serious about being the first place readers go for news. Given those two product goals alone, shouldn’t Facebook invest more in weeding out and, yes, blocking fake stories, even at the cost of raising the ire of free-speech advocates? Additionally, some of these fake stories can cause real harm, not just to readers’ sensibilities. Graham Cluley’s security blog explains: “In this way, scam messages can spread very quickly and help drive traffic to websites on behalf of fraudsters.

Typically money is earned through affiliate schemes, tricking users into completing online surveys or signing up for premium rate mobile phone services in the belief that they might win a prize.”

Where should Facebook’s Newsfeed stand, as a product, on the spectrum between free speech and harmful content?

Gmail phishing alert notice. The “Dear Friend” should have tipped me off.

Take a look, for example, at Gmail’s spam filtering, especially for emails that could be phishing scams. Google clearly marks those and sends them to languish in the Spam folder, where they are automatically deleted after 30 days unless the user purges them earlier. Can Gmail’s filtering be called censorship? Or is Google doing users a service by identifying problematic emails? Google decided that the damage these scams can cause unsuspecting users is reason enough to intervene: weighing potential harm against benefit, it tweaked the product to protect them. That big red box is enough for me to ignore the email, if I even happen to see it in the first place.

Facebook’s equivalent of this warning is a light blue notice at the top of the post saying “Many people on Facebook have reported that this story contains false information.” That’s rather mild when compared with Gmail’s “Be careful with this message. It contains content that’s typically used to steal personal information.”
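Mechanically, what Facebook describes is a labeling pipeline, not a removal pipeline: accumulate “false news story” reports and, past some threshold, attach the warning banner while leaving the post in place. Here is a minimal sketch of that logic; the threshold, names, and data model are my assumptions, since Facebook hasn’t published any of them:

```python
from dataclasses import dataclass

# Hypothetical threshold -- Facebook has not said how many reports
# are needed before the banner appears.
REPORT_THRESHOLD = 100

WARNING = ("Many people on Facebook have reported that this story "
           "contains false information.")

@dataclass
class Post:
    post_id: str
    content: str
    false_news_reports: int = 0

def report_false_news(post: Post) -> None:
    """Record one 'false news story' report against a post."""
    post.false_news_reports += 1

def render(post: Post) -> str:
    """Show the post, labeled past the threshold but never removed.

    Per Facebook, reported stories are neither deleted nor judged
    for accuracy; the only possible outcome is the warning banner.
    """
    if post.false_news_reports >= REPORT_THRESHOLD:
        return f"[{WARNING}]\n{post.content}"
    return post.content
```

The design choice is visible in the code: report volume only ever changes presentation, never visibility, which is precisely what makes the system easy to game in both directions (hoaxers shrugging off the banner, brigades mass-reporting true stories).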

LinkedIn Groups: the moderators are responsible for content, and the LinkedIn product is harmed.

But perhaps Facebook’s product teams are thinking “we’re only the platform” and that it’s the user’s responsibility to decide whether to read a story. Well, LinkedIn has adopted that approach with its Groups product, to the point where “lose weight fast” and “get rich quick” offers show up in my feed almost every day. When I once asked LinkedIn why it doesn’t do more to prevent such garbage content from reaching users (usually it’s posted by members who recently joined the group and have few or no connections, a pattern simple enough to filter automatically, as sketched below), the reply I received was that group owners are responsible for moderating content in their groups. That’s an easy product approach to take, but the end result is that LinkedIn Groups have become practically useless to me, which is a shame: valuable content with good accompanying discussion is occasionally shared there, and I no longer bother to look for it. In the end, it’s not just the specific group’s value that is diminished; it’s the overall LinkedIn product.
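The signal I described is cheap to compute, which is what makes LinkedIn’s hands-off answer frustrating. A toy version of the filter I was hoping for follows; the thresholds and field names are mine, and LinkedIn’s real data model is certainly richer:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical cutoffs -- these would need tuning on real group data.
MIN_CONNECTIONS = 10
MIN_MEMBERSHIP_AGE = timedelta(days=14)

@dataclass
class GroupMember:
    name: str
    connections: int
    joined_group_on: date

def needs_review(author: GroupMember, today: date) -> bool:
    """Flag posts by brand-new members with few or no connections,
    the profile that (in my experience) posts most of the spam."""
    too_new = (today - author.joined_group_on) < MIN_MEMBERSHIP_AGE
    too_isolated = author.connections < MIN_CONNECTIONS
    return too_new and too_isolated
```

Flagged posts could be held for the group owner’s moderation queue instead of going straight to members’ feeds; owners would still be responsible for content, but the product would stop the obvious garbage before it does its damage.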

Improving their news product and protecting their users should motivate Facebook, and maybe, in the future, we will see more product tweaks in support of that goal. Perhaps reported posts will be filtered more aggressively, showing up in fewer timelines. Perhaps they will be flagged in red and clearly labeled as a “potential hoax.” Or perhaps, to avoid the “censorship” label, Facebook will create a “quarantined” newsfeed for suspected hoaxes only, similar to its rarely viewed “Other” message box, which is designed to protect users from harmful content. One thing is certain: if Facebook wants to be taken seriously as the leader in news, it needs to take these hoaxes seriously.

 
