Another reason why developing for iOS has to come first

Having fun with Prisma, which launched on iOS first and on Android a week later.

Product Hunt, one of the best (if not the best) sites for surfacing new products, announced its finalists for the various Golden Kitty awards this week. I gravitated to the “mobile app of the year” award page to take a look at the nominees. Aside from spending time trying a few of them out (how did I not download Prisma before today?), I wanted to use this list to answer an important question: for successful, recently launched apps, what came first, Android or iOS?

As an aside, the finalists were chosen “through a mix of community suggestion, data, and PH secret sauce,” meaning the process isn’t entirely transparent, but the Product Hunt community has surfaced great stuff in the past, so these finalists are probably, at worst, a good representation of popular new apps.

These are the finalists for mobile app of the year and their launch dates on iOS and Android:

| App (ranked by current upvotes) | In their words… | iOS launch date | Android launch date |
| --- | --- | --- | --- |
| Prisma | AI that turns your photos into artwork in seconds | 7/8/2016 | 7/15/2016 |
| Hardbound | Stories for curious minds | 9/13/2016 | No |
| Polymail iOS | Simple, beautiful, powerful email for iOS | 2/26/2016 | No |
| Ulli | Self-driving Internet. The first AI-powered mobile browser | 10/13/2016 | Planned |
| Houseparty | If FaceTime was built as a social network (launched by Meerkat team) | 9/28/2016 | 9/28/2016 |
| Tinycards | The future of learning with flashcards, by Duolingo | 7/19/2016 | No |
| Notion | Artificial intelligence-powered email | 10/19/2016 | 10/19/2016 |
| Whale | Video Q&A with influencers and experts | 10/13/2016 | No |
| Anchor | Record bite-sized podcasts that anyone can join | 2/9/2016 | 8/25/2016 |
| INKHUNTER | Try tattoos in real-time with augmented reality | 4/15/2016 | 10/6/2016 |
| Dropbox Paper Mobile | Organize your team’s knowledge in a single place | 8/3/2016 | 8/3/2016 |
| Nexar | Turn your phone into an AI Dashcam | 2/12/2016 | 8/31/2016 |
| Bobby | Keep track of your subscriptions | 3/30/2016 | No |
| To Round | Task manager designed for visual thinkers | 3/14/2016 | 3/14/2016 |
| Airmail for iOS | One of the best Mac email clients, now on iPhone | 2/1/2016 | No |
| Listen | A smart phone number | 11/17/2016 | No |
| Front for iOS | The first inbox for teams | 9/30/2016 | No |
| Lemon | Know where your money’s going | 6/16/2016 | No |
| Winnie | Great activities and destinations for families | 6/9/2016 | No |
| Hop | The new face of email is fast, elegant, powerful, expressive | 11/14/2016 | 11/29/2016 |

Out of these 20 apps, considered the best/most popular of 2016, every one launched on iOS first. Of the 20, 6 launched an Android version on the same day or within two weeks of the iOS launch, 3 followed about half a year later, and 11 don’t have an Android version yet (though, to be fair, those launched later in the year might still intend to ship an Android app soon).
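As a sanity check on those buckets, the gaps are easy to compute from the table above. Here’s a minimal Python sketch with a few of the rows transcribed and the rest elided:

```python
from datetime import date

# (app, iOS launch, Android launch or None), transcribed from the table above;
# the remaining rows are elided for brevity.
apps = [
    ("Prisma",     date(2016, 7, 8),  date(2016, 7, 15)),
    ("Houseparty", date(2016, 9, 28), date(2016, 9, 28)),
    ("Anchor",     date(2016, 2, 9),  date(2016, 8, 25)),
    ("INKHUNTER",  date(2016, 4, 15), date(2016, 10, 6)),
    ("Hardbound",  date(2016, 9, 13), None),
]

for name, ios, android in apps:
    if android is None:
        print(f"{name}: no Android version yet")
    else:
        print(f"{name}: Android followed iOS by {(android - ios).days} days")
```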

To decide which platform to develop for first, Adam Sinicki compares Android and iOS across five criteria: development platform (tie), design (Android guidelines are better), fragmentation (iOS wins easily), publishing restrictions (Android wins), and profits (iOS). To those five I add the marketing angle. Even though the worldwide market share for Android is now at 88%, with iOS at only 12.1%, in the US iOS market share is 40.5%, and a two-year-old study claims that iOS market share in San Francisco is over 80%. What this means is that almost everybody in the tech community, especially the early adopters who love trying new apps and who can generate enough awareness and hype to get traction, has an iPhone. To get their attention, iOS development has to come first, unless the app is targeted at an audience outside the US and Europe. End of discussion?

It’s the most wonderful time of the year… new adventures in shopping

It’s the first week of December, just another week in the busiest shopping season of the year for many American retailers, for whom holiday sales account for “as much as 30 percent of annual sales.” Retailers will jump through hoops to get shoppers to complete purchases, offering freebies, sales, and “doorbusters” just to get customers into the store, and helpers on the floor to help shoppers find the gifts they need. Yet the checkout process has stayed the same (aside from chip cards slowing it down), with long lines of shoppers waiting to pay still a common sight at many stores.

"What if we could weave the most adavnace machine learning, computer vision, and AI into the fabric of a store?" Source: Amazon Go video

“What if we could weave the most advance machine learning, computer vision, and AI into the fabric of a store?”
Source: Amazon Go video

Long lines lead shoppers to abandon their purchases, which is one reason that Amazon’s announcement today of a checkout-free store seems like a great step forward. Amazon Go is a store without a checkout line, where shoppers do not need to stop, unload their cart, and pay. All they need to do is install an app, scan it as they walk into the store, and start shopping. Says Amazon: “Our Just Walk Out technology automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart.” Everything picked up in the store is charged to the shopper’s Amazon account when they leave.
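Amazon hasn’t published how Just Walk Out works internally, but the virtual cart it describes behaves like a simple event log: take events add an item, return events remove it, and exiting the store triggers the charge. Here’s a toy sketch of that flow, with all names my own invention:

```python
from collections import Counter

class VirtualCart:
    """Toy model of a 'Just Walk Out' virtual cart; all names are hypothetical."""

    def __init__(self, shopper_id):
        self.shopper_id = shopper_id
        self.items = Counter()

    def on_take(self, sku):
        # Shelf sensors / computer vision detected the shopper taking a product.
        self.items[sku] += 1

    def on_return(self, sku):
        # The shopper put the product back on the shelf.
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def on_exit(self):
        # Leaving the store charges the shopper's account for what's left.
        return {sku: count for sku, count in self.items.items() if count > 0}

cart = VirtualCart("shopper-123")
cart.on_take("milk-1l")
cart.on_take("bread")
cart.on_return("bread")
print(cart.on_exit())  # {'milk-1l': 1}
```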

There are two things about this launch that I find interesting:

First is the realization that brick-and-mortar stores offer value. This from Amazon itself, pioneer of the online superstore and a longtime believer that the best and only way to shop is online. Whether it’s the brand experience, the ability to touch products, try on clothes, and see the size and actual color of the product, or maybe the instant gratification of an in-store purchase, malls and stores are not going away anytime soon.

Second is the admission that there are processes that can be improved in the retail experience that haven’t been changed for decades. It’s the same process now as it was in the last century: browse, gather, and pay. Maybe it’s time to improve parts of that process.

It’s not that stores haven’t tried. All have added an online presence with ecommerce capabilities. They have mobile apps for on-the-go shopping. They’ve adopted the flash-sale phenomenon and implemented in-store pick-up of online orders for faster gratification, guaranteed product availability, less wandering around the store looking for products, and shorter lines, and they’ve partnered with on-demand services like Postmates and Google Express to compete with Amazon’s fastest delivery options. Yet none of that has fundamentally changed the in-store process.

By launching no-checkout stores, Amazon has tackled a common pain point for many shoppers. That said, it comes at a privacy price some consumers might not be willing to pay. When Amazon detects which products are removed from and returned to shelves, it gains valuable insight into what the shopper considered buying, not just what they eventually bought. Considering the scourge of ads that retarget shoppers with products they have already bought, knowing what shoppers considered but did not buy is valuable information.

Finally, responses to Amazon Go have included a lament that this is another case of technology taking away jobs, in this case, cashier jobs. I’m convinced that the best brick-and-mortar experience will still include human interaction, just not at the cash register. For example, our local hardware store has a greeter who asks everyone walking in what they’re looking for and points them in the right direction. (Yes, I know a robot could do this, but considering the current state of voice interaction as implemented on Google Home, it will take time before a robot greeter can answer detailed questions without requiring the asker to use a specific template.) I’ve also had great shopping experiences buying clothes that have to fit just right, such as jeans and bathing suits, that were made easier by extremely competent salespeople. Salespeople who are familiar with their product line, how different models fit, what works for which body type, and where to find everything do more to create a positive experience and generate a sale than any cashier. It’s not about eliminating humans from the shopping experience; it’s about eliminating a specific frustration. I can’t wait to have it everywhere.

The personalization scale: we don’t have to turn it up to 11

Many of the post-election op-ed pieces focused on analyzing and criticizing Facebook and its handling of its prize property: the newsfeed. I assume that at some point in ancient Facebook history, the goal of the newsfeed was to show users everything their friends shared, chronologically, somewhat like Twitter still does today. That changed because there were too many updates and important ones were missed. So Facebook decided to personalize its newsfeed: look at select engagement signals, analyze those signals across users to understand what each user likes, and optimize that process continuously.

Today’s newsfeed is a curated feed of updates from friends and followed organizations, selected by an algorithm that optimizes for engagement. People tend to click and like links and updates that match their existing opinions, prompting Facebook to show them more of those, generating, in turn, more likes. This ends up creating a filter bubble, an “epistemic closure that comes from only seeing material you agree with on social platforms.” This is personalization taken to an extreme.
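That feedback loop is easy to caricature. The toy simulation below (nothing like Facebook’s actual ranker, just an illustration of the reinforcement dynamic) ranks two slants by past engagement and shows how a mild preference snowballs:

```python
import random

random.seed(0)

weights = {"A": 1.0, "B": 1.0}   # the ranker's per-slant engagement scores
user_prefers = "A"               # the user mildly agrees with slant A

for _ in range(1000):
    total = weights["A"] + weights["B"]
    shown = "A" if random.random() < weights["A"] / total else "B"
    # Users click what matches their existing opinions more often...
    clicked = random.random() < (0.6 if shown == user_prefers else 0.2)
    if clicked:
        weights[shown] += 1.0    # ...and every click reinforces that slant.

share_a = weights["A"] / sum(weights.values())
print(f"After 1,000 impressions, slant A fills {share_a:.0%} of the feed")
```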

Volume up to 11.
Source: Wikipedia on This Is Spinal Tap

This doesn’t mean that personalization is bad. In today’s “world of infinite information and limited attention” it is necessary in many products, especially those that deal in news, music, video, user-generated content, and almost any kind of update. The question is to what degree, and for that I propose thinking about a personalization scale, where zero is no personalization and 10 is nothing but personalized content.

Think of Facebook as 11.

Yesterday’s topics, based on an interest I have shown in Hamilton, Lin-Manuel Miranda, and Mountain View, but nothing about the tragic story of Brazilian football team Chapecoense.

Let’s look at news as an example. A zero is the New York Times homepage: nothing is personalized; everything on that page was picked by a human editor with no mandate to maximize engagement. It’s the old media model, for the most part. On the other extreme, Google Now updates seem to be a 10. Google Now shows users a short list of news stories, each one driven by an interest the user has displayed in the past. Sadly, this turns out to be a very short list of topics, very limited in scope, that keep repeating. Not only do I see repeating topics, I know that clicking an article will just reinforce that topic in Google Now’s algorithm, bringing me more of the same. That said, it’s easy to dissect Google Now because it kindly tells me which interest I’ve exhibited prompted each story’s inclusion in the feed. It’s also very easy to customize the newsfeed and to remove topics and sources.

The point is that for any product that curates content, product managers need to find a sweet spot somewhere in the middle and remember that a product doesn’t have to choose only one place on the scale. For music, for example, listeners sometimes want to listen to their chosen artists or specific songs, sometimes to what they’ve liked before (a 10), sometimes to something just in the style of what they’ve liked before (around an 8), sometimes to a genre they often prefer (closer to a 5), and sometimes to whatever is on a DJ-curated playlist (a 1).

For video/TV it’s the same: sometimes viewers want to watch their favorite team no matter what (a 9), sometimes it’s what’s new from YouTubers they like (an 8?), sometimes whatever was popular today (a 4), and sometimes they just want to see what’s on now (a 0).
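In product terms, one way to read this scale is as a mixing weight between a personalized ranker and an editorially curated list. A minimal sketch, where the scale-to-weight mapping is my own invention:

```python
def build_feed(personalized, editorial, scale, size=10):
    """Mix two ranked lists; scale 0 = all editorial, 10 = all personalized."""
    n_personal = round(size * max(0, min(scale, 10)) / 10)
    return personalized[:n_personal] + editorial[:size - n_personal]

personalized = [f"for-you-{i}" for i in range(10)]
editorial = [f"editors-pick-{i}" for i in range(10)]

print(build_feed(personalized, editorial, scale=0))  # the NYT-homepage end
print(build_feed(personalized, editorial, scale=8))  # mostly "more of what you liked"
```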

A final note: I don’t think it’s important to give each feature a precise number, but rather to realize where the offering sits on the scale, what combination users are offered, and whether that is what they want. Even giving users more control may not be the best solution. Even when they are actively customizing their selection, there may be times when they want content that is editorially curated and has nothing to do with their personal preferences.

They’re human, after all.

Welcome to my home, Google

Sitting pretty on the mantel: Google Home.

As a family, we’ve been playing with Google Home for the past few weeks, and one thing I can say right off the bat: it fits right in. Within a short period of time, Home has become our central music player, our question answerer and fact-verifier, and even our own personal game-show host. I wasn’t sure we needed an always-on digital personal assistant sitting pretty (and it is pretty!) in our living room, but it turns out that we do.

Here are a few of the features I like:

  1. Music. At the launch event in October, Google touted Home’s speaker quality and emphasized that music would be one of its most-used features. It’s powered by Google search, so you can search by song name and artist, an album, or a playlist (for Google Music). It also works with other streaming providers such as Spotify, but I’m not sure how well voice search is integrated with those services yet. Google Music, however, was great at understanding our requests most of the time. For example, when we asked to play “Independent Ladies by Beyonce,” Home replied: “Playing Destiny’s Child, Independent Women Part One.” The more specific we were with our requests, the better Home understood them. Home is also great at creating a playlist on the go based on the first song requested. One minor quibble is that Home doesn’t admit when a song isn’t in its library, instead playing some lesser-known variation of that song or artist, or even nothing remotely related. For example, when I asked to play Beyonce’s Sorry (which isn’t available on Google Music), I got the lullaby version: cute, but not really what I wanted to hear.
  2. General knowledge via Google Assistant. This has been perhaps the most fun aspect of having Home located strategically between the kitchen, dining, and living rooms, where it can pick up questions from the farthest reaches of those rooms. Questions at our house have ranged from which US presidents have been shot, to the capacity of World Series ballparks, to the length of the National Mall, to when a TV special was on (Hairspray Live, if you must know). Assistant had an answer for each. That said, there were questions it couldn’t understand, saying that “it’s still learning,” but overall it provided the correct answer almost all the time. I also liked that Assistant named the source before providing each answer, though most answers began with “according to Wikipedia…” What Assistant doesn’t yet know is how to connect questions and understand the context between them. When asked “how tall is Curry?” it gave the correct answer, but when the follow-up was “how old is he?” Assistant had no idea who we were asking about (a toy sketch of the missing context carryover follows this list). That said, Google claims that its Knowledge Graph, the easily accessible information that pops up under the search bar for certain queries, now encompasses 70 billion facts. So hopefully it will find the right answer most, if not all, of the time, depending on whether it understood the question, which brings me to…
  3. Understanding speech. Over the Thanksgiving weekend we had guests with varying accents, and while Home occasionally misunderstood a sentence as a whole, overall its success rate was high. That said, we have learned that questions must be asked in a very particular format to ensure understanding, and we (the humans) are training ourselves to ask questions correctly. I call these “magic phrases”: preset sentences that users know Assistant will recognize. This reminds me of the time in ancient tech history when we all learned a different way to write the alphabet because that was what the device understood. I assume that, just as touch-screen navigation improved over the years, we will see improvements in speech recognition, both words and phrases, in the coming years.
  4. Localized answers for topics such as weather and news: all we need to ask is “what is the weather,” without a location, to get the local forecast. Home can also answer traffic questions based on location.
  5. Personal answers. Here’s where Home is limited. Asking “what is my day like?” is a magic phrase (see item 3) that reads back items from a calendar. But whose calendar? Home only supports one account at the moment, which is limiting for personalized responses. For us, we added the account that also pays for Google Music, but this leaves out the rest of the household. Google Home is truly a family device, serving everyone in the home (and guests!), so it’s a shame to limit features based on a single-user use case.
  6. Smart devices. We haven’t yet connected smart devices to Google Home. It would be nice to turn on the lights with a simple voice command.
  7. Productivity. We enjoy using the shopping-list feature on Home. Saying “add flour to my shopping list” adds it to a Google Keep checklist. To work around the fact that the list is owned by the single Home account, we shared it with the other household shoppers. The nice thing about the Keep list is that it can be checked at the store on a mobile device.
  8. External services: it’s possible to connect an Uber account to Home, but not yet possible to order directly from an ecommerce site.
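Going back to the context problem from item 2: what Assistant was missing is a tiny bit of conversational state. A toy sketch of the idea (nothing like Google’s real implementation) that remembers the last named entity and substitutes it for pronouns:

```python
def resolve(question, context):
    """Replace pronouns with the last entity mentioned, then update the context."""
    pronouns = {"he", "she", "it", "they"}
    words = question.rstrip("?").split()
    resolved = [context.get("last_entity", w) if w.lower() in pronouns else w
                for w in words]
    # Naively treat any capitalized non-leading word as the new entity in focus.
    for w in words[1:]:
        if w[0].isupper():
            context["last_entity"] = w
    return " ".join(resolved) + "?"

ctx = {}
print(resolve("How tall is Curry", ctx))  # How tall is Curry?
print(resolve("How old is he", ctx))      # How old is Curry?
```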

Why did we adopt Google Home so fast? It turns out that efficiency wins: it’s much easier to use a voice command than to pick up a device, unlock it, bring up a browser or the Google app, and type in a query. As Home improves word and phrase recognition, requiring fewer magic phrases and more contextual association, and adds third-party services beyond Uber (such as food delivery, shopping, and travel reservations), it will become the preferred way of interacting with these services. Brian Roemmele says websites will have to learn how to interact with voice devices such as Home and Amazon’s Echo. He adds that “by the end of 2017 5% of consumer-facing websites have a voice first interface” and, even better, “by 2020, 30% of web browsing sessions [will be] done without a screen.” Even at the level of voice interaction Google Assistant demonstrates today, it’s clear that in private spaces such as the home, voice becomes the interface of choice.

Google’s new PhotoScan app and the trade-off of quality vs time

Yesterday Google launched PhotoScan, a new app to help people bring their old printed photo collections into the digital world faster and more efficiently.

PhotoScan is an app made for people like me: proud archivists of our families’ old photos, who spend hours, days, weeks trying to find, scan, categorize, and share these photos. Google says PhotoScan “gets you great looking digital copies in seconds – it detects edges, straightens the image, rotates it to the correct orientation, and removes glare.” Google adds that it also saves the scanned photos to Google Photos “to be organized, searchable, shared, and safely backed up at high quality.” So, scan and organize are the goals. Great.

A few years ago I used an old flatbed scanner to scan all the photos of my grandparents and their families, to ensure that they were preserved forever and that all the cousins had digital copies. I went through about 200 photos, and for each I did a pre-scan, adjusted the scanning parameters to expand the contrast range, pre-cropped (i.e., scanned only the photo, not the entire page), and did a final scan. Each photo took me about 5-10 minutes to get right, because most were small. In my collection, around 70% were 2” by 3”, 25% were 3” by 5”, and 5%, mostly group photos, were 5” by 7”. I’ll get to the significance of these stats later.

Today I downloaded the PhotoScan app and played around with it. It is extremely easy to use and, relative to a flatbed scanner, really, really fast! Here’s how it works:

  1. Position the photo in the frame. This works even if it’s placed carelessly, crooked, with other photos on the page, or behind a glass or plastic film.
  2. Align the phone with the four corner dots in the app. PhotoScan then processes the photo for a few seconds.
  3. If necessary, adjust the outline by dragging its four corners to match the photo.

That’s it, the photo is saved directly to Google Photos.

Position, click at four corners, and adjust corners if necessary.
Note the tiny zoom + crosshairs aid for adjustment.

If PhotoScan manages to grab the photo correctly, the overall time to “scan” a photo is around 30 seconds. If the corners need to be adjusted, it takes closer to a minute, because the adjustment is frustrating to do with touch controls. Even though the app provides a zoomed-in crosshairs view for each corner as it’s grabbed, this is a task better done with taps than with drags: the control is too sensitive and each movement too large. This makes the whole corner-adjusting process slow and frustrating, especially when scanning multiple photos. That said, the image processing that detects, straightens, unskews, and “rectangularizes” a photo is amazingly good. And if the photo sits alone in the initial frame, with the camera facing it straight on, PhotoScan detects it correctly more often.
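Google hasn’t said exactly how the “rectangularizing” works, but the classic computer-vision version of it is a four-point perspective warp: take the four detected (or hand-adjusted) corners and map them onto an upright rectangle. A minimal OpenCV sketch, with made-up corner coordinates and file names:

```python
import cv2
import numpy as np

def rectangularize(image, corners, out_w=1050, out_h=1500):
    """Warp the quadrilateral `corners` (TL, TR, BR, BL order) to a rectangle."""
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]],
                   dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))

frame = cv2.imread("scan_frame.jpg")  # hypothetical camera frame
# In PhotoScan these corners come from automatic detection plus the manual
# adjustment described above; here they are invented for the example.
fixed = rectangularize(frame, [(120, 80), (900, 110), (880, 1250), (100, 1200)])
cv2.imwrite("rectified.jpg", fixed)
```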

My test photo: PhotoScan’s version.

This brings us to the big trade-off: time vs. quality. The photo I used to test PhotoScan is small, 3.5” by 5”. When I scanned it on my flatbed scanner, it came out as a 15.1MB bitmap. Converted to a high-quality JPG in Photoshop, it was 3.58MB. The advantage of this size is that every fraction of the photo is represented, down to the texture of the paper it was printed on. Since I scanned at such a high resolution, I could enlarge the photo, show it on a big screen, and print it at double or even quadruple its original size, and it still looked good. For group photos, the higher resolution let me zoom in on the faces, which was an advantage. PhotoScan saved the same image as a 293KB JPG.
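A rough back-of-the-envelope check shows what those sizes imply about resolution (assuming the flatbed bitmap is uncompressed 24-bit color; the numbers are approximate):

```python
import math

width_in, height_in = 3.5, 5.0      # photo dimensions in inches
bitmap_bytes = 15.1 * 1024 ** 2     # flatbed scan, uncompressed bitmap
pixels = bitmap_bytes / 3           # 3 bytes per pixel at 24-bit color
dpi = math.sqrt(pixels / (width_in * height_in))
print(f"flatbed scan ≈ {dpi:.0f} dpi")  # roughly 550 dpi
```

The 293KB JPG can’t be converted to dpi quite so directly because of compression, but the two-orders-of-magnitude gap in file size is the point.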

Now, you could argue that the time saved is worth the reduction in quality: optimally, 30 seconds on PhotoScan versus 5 minutes on a flatbed (granted, my scanning was done a few years ago, and I assume the process would be faster today, but I digress). You would be right if you had thousands of photos to scan. The use case, however, tells a different story.

Photos from the first half of the last century were rare and infrequent. My grandparents and their ancestors didn’t own a camera; they waited for a traveling photographer to reach their town and take a handful of photos of them, their family, and their friends. In the second half of the century, when personal cameras were more prevalent, the limitations of a roll of film, with either 24 or 36 exposures, and the hassle and cost involved in developing and printing a roll kept the number of photos down. I’d be curious to find a stat, but even in the 1980s taking more than, say, 10 photos at a single event was extravagant and rare. For my project, 200 photos spanned six decades. So, while it can be daunting to look at entire albums and shoeboxes of photos, there just weren’t that many, especially in the first part of the century. Even if using PhotoScan is faster, for the truly important photos the lower quality is a deterrent. After all, the goal is to do this once, for posterity, right? Quality is important.

My second gripe is organization. We rely a lot on Google’s amazing photo-recognition skills, and they are usually mind-blowing. Every time Google recognizes faces and differentiates between siblings across time, I am amazed anew. Yet right now, PhotoScan saves a photo without any info. It named my test photo with a meaningless sequence of 45 letters and numbers, whereas I had named my scan with the names of the people in it and the year it was taken. It would be great if PhotoScan popped up a quick dialog to enter at least the date and the names of the people in the photo. That way it would also have more information for detecting those faces in other photos.

Front and back of a photo of my great-aunt and her husband.
On the back, a dedication to her aunt, their names, the date, and the occasion, all in Polish.

My third gripe, though less important than the first two, is only for the crazy archivists who like to scan the back of every historical photo as well as the front, because the back usually holds a lot of important information: the subjects of the photo, the occasion and date it was taken, and often the reason for sending it. Some of my favorite discoveries have been love poems behind a few innocent-looking photos. There needs to be a way to tie the front and back scans together as one photo.

Note: the text on the back of the photo above was written in Polish, which I do not understand. It took me a few weeks to find a speaker with the patience to untangle the handwriting and translate it for me. Sadly, she couldn’t decipher all the words. This would be something Google AI would be great at!

Finally, would I use PhotoScan instead of my old flatbed scanner if I were doing the project described above today? Probably not, simply because of the quality issue. Sure, I’d save time, but we’re talking about preserving these relatively few photos for future generations. Were tagging and organizing given more attention, I’d reconsider, but as of now, quality wins.


Facebook, part 3: Look at where we are, look at where we started

I know it’s my third Facebook post in less than a week. I also know that this debate (what is tech, what is media, what is censorship, what makes a community, and what are the limits of user-generated content) contains the most interesting product discussions happening after this election.

Yes, even more than really fun glasses.

It’s not looking good for Facebook today. After Mark Zuckerberg’s denial last week, Gizmodo revealed today that some Facebook executives were aware of the fake-news problem and tried to solve it: “One source said high-ranking officials were briefed on a planned News Feed update that would have identified fake or hoax news stories, but disproportionately impacted right-wing news sites by downgrading or removing that content from people’s feeds. According to the source, the update was shelved and never released to the public.” It’s unknown whether this imbalance was simply the result of there being more fake stories with a conservative slant than a liberal one.

Second, as I write this, Buzzfeed is reporting that another group of Facebook employees has been secretly meeting to try to solve the problem: “The employees declined to provide many details on the task force. One employee said ‘more than dozens’ of employees were involved, and that they had met twice in the last six days. At the moment, they are meeting in secret, to allow members of the group to speak freely and without fear of condemnation from senior management. The group plans to formalize its meetings and eventually make a list of recommendations to Facebook’s senior management.”

That it has come to this is a shame, and it feels unnecessary. At some point Facebook, in the interest of optimizing the newsfeed to increase engagement and time spent on the site, made a wrong turn or two. These changes took the newsfeed from a string of personal events to a much less appealing mix of sponsored content, something that someone liked, a share by a friend of a friend, an update to a group, and, yes, maybe a few personal updates from friends.

When users started engaging less with the new mix of content, Facebook made sharing easier, but the result of reducing friction was the sharing of less personal content, which in turn created less of an incentive for users to log in and see what their friends shared, leading to less sharing on their part, and so on.

Yet the social connections on Facebook are still relevant and unique. Sheryl Sandberg understands this. In June she said: “When you think about the connections you make on Facebook, people think about strong ties and weak ties. Those strong ties include family and close friends that you contact regularly. What Facebook does is let us hear from more people on a daily basis. What you might call your weaker ties: the people you went to school with, the people not in your current company but your last job, the people from your hometown … so what you get are just more abilities to keep in touch with more people, and over time, we believe, more diversity.” Here she was trying to prove that Facebook users are exposed to different viewpoints, but that’s not the point. The point is that among every other social service, from email to Snapchat, it is Facebook that has the most personal social graph. These “weak ties” are the ones users find important to stay in touch with and passively see updates from. Getting updates on Facebook absolves users from more “strenuous” forms of communication such as email, phone calls, or (gasp!) sending holiday cards.

I don’t know how Facebook’s newsfeed algorithm works. I don’t know how it chooses what to show users. I do know that friends don’t always see my updates and that I don’t see every update from my friends. Instead, I see perhaps the more “popular” shares from my network, but these aren’t the personal ones. And this is frustrating. So despite Facebook being the only place where I can hear from my friends, I no longer enjoy browsing through my newsfeed because I no longer see content I care about. I no longer share because I have little faith my friends will see it. Anecdotally, I’ve heard the same from other Facebook users.

John Oliver on President-elect Trump
Source: YouTube

I wish I had the data and research to write definitively on what made the newsfeed go from the best place to connect with my “weak ties” to a place that John Oliver called a “cesspool of nonsense.” It didn’t happen overnight and started before 2016. Hopefully, it can still be fixed.

Facebook part 2: the data and the denial

So it seems I was not alone in my hot take about Facebook’s responsibility in fanning the flames of hatred in this election by circulating false stories and creating echo chambers for politically like-minded users. Here are a few of the responses I found most interesting:

First up, Sam Biddle at The Intercept took perhaps the most critical tone, with a post titled “Facebook, I’m begging you, please make yourself better.” Says Mr Biddle: “confirmation bias doesn’t begin to describe what Facebook offers partisans in both directions: a limitless, on-demand narrative fix, occasionally punctuated by articles grounded in actual world events, when those suit their preferences. But it was the Trump camp more than its opponent that encouraged this social media story time, because theirs was a candidate who was willing to stand at a podium and recite things he knew to be false, day after day.” He was also harsh about Facebook’s motivation: “the cynical explanation here is the most plausible: People will click on and share things they want to believe are true, and the more this happens, all the better for Facebook’s share price. The extent to which Facebook rambles about algorithmic oversight and a commitment to neutrality is only a means of ditching responsibility.”

Second, Emily Bell at the Columbia Journalism Review took a more temperate position but still called Facebook out for being the main distributor of fake news that is hard to counter: “Facebook, now the most influential and powerful publisher in the world, is becoming the ‘I didn’t do it’ boy of global media. Clinton supporters and Trump detractors are searching for reasons why a candidate who lied so frequently and so flagrantly could have made it to the highest office in the land. News organizations, particularly cable news, are shouldering part of the blame for failing to report these lies for what they were. But a largely hidden sphere of propagandistic pages that target and populate the outer reaches of political Facebook are arguably even more responsible.” Perhaps most interesting from the product perspective is her comment that “the quality of journalism (or even the veracity of information) does not guarantee financial success. Fake news and real news are not different types of news; they are completely different categories of activity. But in Facebook’s News Feed, they look the same.” It also turned out to be very profitable to create these false stories: “Ad sales are all automated, and based on demographic data. Publishers that generate those data for traffic are not rewarded for quality.”

She also quotes John Lloyd, who “draws a clear parallel between the rise of the social Web and the migration away from truth by those who publish there.” He links this shift in attention to lower print readership and the decline of newspapers in physical form: their passing on to the internet “puts them on all fours with the vast flows of information, fantasy, leaks, conspiracy theories, expressions of benevolence and hatred. There they have to live or die.” And like I said on Wednesday, this leveling of the playing field is really not fair to the media organizations supporting large, expensive newsrooms. Finally, Ms Bell agrees that the first step for Facebook is admitting that there is a problem: “Until the company and Zuckerberg specifically acknowledge that this ecosystem is a problem, nothing will happen. The large numbers of policy people Facebook has working on issues such as extremist recruitment, hate speech, and terrorism are effectively already editing the platform. But the system for moderating the site’s content is largely obscure, the echo chambers concealed, and the fake news out of control.”

Third, Zeynep Tufekci, an academic studying the intersection of social media and politics, wrote an op-ed in the New York Times back in March of this year on how she tried to understand “the power of the Trump social media echo chamber… It’s a world of wild falsehoods and some truth that you see only rarely in mainstream news outlets, or hear spoken among party elites.” Yesterday she compiled a few interesting sources that provide the numbers behind the claims every other post was making. She added: “Facebook’s algorithm is central to how news & information is consumed in the world today, and no historian will write about 2016 without it… 2016 was a close election where filter bubbles & algorithmic funneling was weaponized for spreading misinformation.” She took a harsher tone in her demands from Facebook and the other “tech” companies: “it may seem trivial, but it’s my corner: tech companies should immediately go to end-to-end encryption and ponder alternative financial models. FB algorithms have clear bents: filter-bubble, clicky or quarrelsome content. It builds on human tendencies. It greatly amplifies them.” These “click-bait algorithms fuel misinformation.”

"People who believe Trump was sent by God will take anything on faith." Source: Daily Edge

“People who believe Trump was sent by God will take anything on faith.”
Source: Daily Edge

Finally, the denial. Yesterday at the Techonomy conference, Mark Zuckerberg said that the “small amount” of fake news that spread on Facebook did not influence the outcome. “To think it influenced the election in any way is a pretty crazy idea,” he said, adding that “there’s a profound lack of empathy in asserting that the only reason someone could’ve voted the way they did is because they saw fake news,” which doesn’t really address the evidence presented in the posts above. Adam Mosseri, Facebook’s vice president of product management, was a bit more conciliatory, saying “we value authentic communication, and hear consistently from those who use Facebook that they prefer not to see misinformation. Despite Facebook’s efforts, we understand there’s so much more we need to do.”

No kidding.