The inherent conflict between Facebook and its Safety Check

There’s been a bit of a flurry this week around the negative aspects of Facebook’s Safety Check, mostly in reaction to the London fire. The problem, as TechCrunch reports, is that Safety Check causes unnecessary stress because, for one, it’s not geographically specific enough, asking users miles away if they were OK, and two, it can be triggered by many events, not all of which should be considered dangerous. TechCrunch makes the interesting point that not all disasters affect the people around them in equal ways. A terrorist bombing such as the one in Manchester could involve people from all over the city, but a fire in a tower is unlikely to involve people not living there.

Users in the perceived area of a tragedy are asked to say they’re safe. They can answer either that they’re safe or that they’re not in the area.
Source: Facebook

Finally, Safety Check causes distress because “by making Safety Check a default expectation Facebook flips the norms of societal behavior and suddenly no one can feel safe unless everyone has manually checked the Facebook box marked ‘safe’.”

In a series of tweets, Zeynep Tufekci‏ adds that Safety Check “can be comforting—but it is also adding to the fear-mongering around the world. People check-in safe even when never in danger. Humans are already bad at estimating risk/danger. We already have sensationalist media stoking fear; social media options matter a lot. For both mass and social media fear-mongering is engaging. Pageview/ratings driven mass/social media can converge on sensationalism.” So instead of being a helpful tool to tell people that they are safe, Safety Check stokes hysteria.

The tool originally made sense, and in some ways still does. A check-in to tell friends and family that one is OK when a local tragedy occurs is not necessarily a bad idea. Consider the total unavailability of Bay Area phone lines in the hours and even days after the 1989 Loma Prieta earthquake. Even before the widespread adoption of cell phones, telecommunication systems were designed to handle an average load, not the maximum possible load that occurs when almost every subscriber is trying to use the system at once, on top of people calling in from outside. Add to that the probable power outages that take parts of the system offline directly after an event, reducing capacity further.
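This capacity crunch is exactly what the classic Erlang B formula models: the probability that a call is blocked, given the offered load and the number of circuits. A minimal sketch, with illustrative numbers only (the formula is standard, the traffic figures are made up):

```python
def erlang_b(offered_load, trunks):
    """Blocking probability for `offered_load` (in erlangs) offered to
    `trunks` circuits, via the numerically stable recurrence
    B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1))."""
    b = 1.0
    for k in range(1, trunks + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# A system sized for average load handles it easily...
print(f"normal day:  {erlang_b(10, 20):.4f}")   # well under 1% of calls blocked
# ...but a disaster-driven 10x spike in call attempts blocks most of them.
print(f"after a quake: {erlang_b(100, 20):.4f}")  # the vast majority blocked
```

The steep nonlinearity is the point: a tenfold jump in call attempts doesn’t make the network ten times slower, it makes most calls fail outright, which is why a low-bandwidth check-in beats everyone dialing at once.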

It makes sense to have an asymmetrical check-in product where people affected by an event can quickly (and with little connectivity) say that they are OK while any of their friends can check that notification whenever they want. It also makes sense that Facebook would create such a tool: it has the ability to share information with the people who matter. Add to that the incredibly high percentage of the population already on Facebook, almost guaranteeing that everyone who needs to see the check-in will. Creating such a product also fits Facebook’s goal of creating social value. So where did it go wrong?

There is another problem, though, beyond the geographical inaccuracy, the definition of a check-in-worthy event, and the ensuing fear-mongering and stress. Hosting the feature on Facebook creates a perception that the company profits from a check-in, even if that wasn’t Facebook’s original intent. Consider the new features added just this week, a personal note and fundraising: one pushes engagement and the other monetization, and both can create unease.

So what can be done? TechCrunch suggested using “Facebook to post a status update saying they’re fine if they feel the need to — or indeed, use Facebook (or WhatsApp or email etc) to reach out directly to friends to ask if they’re okay — again if they feel the need to.” The problem with that is that people no longer trust that their important family and friends will even see an update they post. If it doesn’t have the importance conferred by an official check-in, the newsfeed algorithm might not deem it important enough to show. An email might be too cumbersome and time-consuming to send.

WhatsApp, however, is a much better option. By updating a chosen group or two, users can notify the important people in their lives and only those people. Others, not in the group, can assume that if they were not updated, the person was not close to the disaster. It can reduce stress for the affected people, their friends and family, and people who weren’t in any danger to begin with. The only disadvantage of that solution is that Americans aren’t big users of WhatsApp, meaning there is no single app that users can go to to update friends, and that’s a shame.

The takeaway is to realize that even a feature with the best of intentions can have negative consequences, and to always strive for better. Corny, I know, but it has been a tough week.



Tech, climate change, big data, and making a difference

A while ago I wrote about the challenges of writing a tech blog about apps and gadgets when world-altering events are going on. This came into focus this week after the president’s withdrawal from the Paris Accord and the ensuing conversation. Then, surprisingly, commitments to support the Accord poured in from cities, states, universities and companies around the US. Michael Bloomberg pledged to make up some of the $2 billion in lost funds toward climate action programs. He’s also “leading a coalition, made up of three states, dozens of cities, and 80 university presidents, that vows to uphold the Paris Agreement.”

Listening to a talk with Paul Hawken on this topic this week educated me a bit more about what the Paris Accord really means and what Princeton’s Carbon Mitigation Initiative set out to do. By adopting 15 different strategies aimed at reducing carbon emissions, and meeting the goals set out in them, we could avoid some of the more disastrous consequences of global warming. Yet of the 15, says Mr Hawken, 11 are aimed at large corporations and utilities. The only actions relevant to individuals were to drive less and install solar power. This is what he set out to change, and he came up with a way to “map, measure and model the 100 most substantive ways to reduce global warming.”

Mapping air quality at a block-by-block level.
Source: Google

The interesting takeaway from this for me is that maybe there is more that tech can do with the “map, measure, and model” part of the equation. After all, collecting and analyzing data is their bread and butter. Google’s new pollution mapping initiative seems to be a step in the right direction. By attaching relatively cheap sensors to its Street View cars, which were out and about on city streets anyway, Google was able to create a street-by-street, block-by-block map of pollution levels in three cities, including Oakland. It then took a closer look at what the data points exposed on the map. In Oakland, the analysis revealed quieter residential streets exposed to higher levels of pollution because of wind direction, and spots where vehicles accelerate. This gives the City of Oakland a way to understand how to prioritize public works projects if reducing pollution levels for residents is a priority. Says Google: “With nearly 3 million measurements and 14,000 miles captured in the course of a year, this is one of the largest air quality datasets ever published, and demonstrates the potential of neighborhood-level air quality mapping. This map makes the invisible, visible, so that we can breathe better and live healthier. It helps us understand how clean (or not clean) our air is, so that we can make changes to improve it.”
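At its core, block-level mapping is just bucketing drive-by sensor readings by road segment and aggregating them. A minimal sketch of the idea; the street names and readings below are invented for illustration and have nothing to do with Google’s actual data or pipeline:

```python
from collections import defaultdict
from statistics import mean

# (road segment, NO2 reading in ppb) pairs collected as a car drives past.
# Invented numbers, illustration only.
readings = [
    ("7th St / Adeline", 38.1), ("7th St / Adeline", 41.7),
    ("7th St / Adeline", 45.2), ("Quiet Residential Ln", 12.3),
    ("Quiet Residential Ln", 33.9),  # wind carries freeway exhaust over
    ("Quiet Residential Ln", 11.8),
]

# Bucket raw readings by segment.
by_segment = defaultdict(list)
for segment, value in readings:
    by_segment[segment].append(value)

# Aggregate per segment (a median would resist outliers like the gust
# above; the mean keeps the sketch simple).
segment_means = {seg: mean(vals) for seg, vals in by_segment.items()}
for seg, avg in sorted(segment_means.items(), key=lambda kv: -kv[1]):
    print(f"{seg}: {avg:.1f} ppb")
```

With enough passes per segment, outliers like the single windy-day spike average out, which is why repeated drive-bys over a year produce a usable block-by-block picture.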

In light of political change, it will be up to local entities, not the federal government, to take action on global warming. To do so they’ll need to collect and analyze many data points. Google’s mapping initiative shows that tech companies, especially those driven by location and mapping data, can relatively inexpensively help with this component, proving, perhaps, that change is possible from the bottom after all.

Apple enters the smart speaker fray – a bit late and a bit short

Apple is hosting its developers conference this week and kicked things off yesterday with a keynote containing all its product announcements. Presented last, but predicted by many Apple followers, was its new smart speaker, called HomePod. Despite the precedent set by Google and Amazon, Apple’s focus is different and starts with music.

Phil Schiller introducing HomePod’s “musicologist” features.
Source: Apple keynote

Apple’s Phil Schiller said that Apple wants to “reinvent home music” and to that end focused on creating a high-quality speaker that can “rock the house.” Mr Schiller went into a lot of detail on the audio features, song availability, playlists, smart user interaction, and expanded music-related query understanding. Aside from music, HomePod will answer queries on 13 other topics, some very limited in scope, though Home control seems promising. Also, right now there are no third-party apps, so if you’re a Spotify fan, you’re out of luck.

HomePod’s other areas of expertise, from unit conversion to smart home control.
Source: Apple keynote

Interestingly, when explaining the pricing for the HomePod, Mr Schiller presented it as a mix of two products, a WiFi speaker at $300-$500 and a smart speaker at $100-$200, making the HomePod’s $349 price a good deal. That said, a Google Home at $129 paired with the recommended Sonos Play:1 at $200 comes out cheaper at $329 and is easily expandable. Even though voice-activated handoff of music playback from Home to other speakers is still a bit buggy, this is something Google is sure to solve soon.

I liked what Vox had to say about the higher price point in relation to HomePod’s current feature set: “Amazon and Google’s smart speakers play a supporting role in the companies’ larger business strategies. Amazon’s goal is to make the Echo ubiquitous to help sell Amazon Prime subscriptions and other digital content. Google wants to get users hooked on as many different Google services as possible to support its advertising business. For both companies, the priority is to attract as many customers as possible, without worrying too much about making a profit from each one.” This is true, but it’s not just about getting us hooked. Google services complement my Home’s feature set, making personalized information that I need available via a quick interaction.

Apple says this is the first high-quality smart speaker (hence the higher price), but its success beyond Apple fans will depend on a few future improvements:

  1. How well will HomePod understand voice commands? A comparison of Siri, Google Assistant, Alexa, and Cortana from a few months ago found Google to be the best at understanding and executing various commands. Apple’s closed garden was detrimental to Siri’s performance in that test, and HomePod isn’t any more open. 
  2. How good the voice music interactions really are. Spotify has many fans, especially for playlists and recommendations. Google does a great job at identifying which songs the snippets of lyrics I ask for belong to. My favorite Google Music feature, though, is the way it creates a playlist on the fly based on one song I ask it to play, and those playlists are spot on in terms of genre, style, and a mix of stuff I’m familiar with and have never heard before. Apple boasts about its musical understanding, knowledge, and music catalog, so HomePod should succeed on this front.
  3. How big a role will personalization play and what Apple products will support it.
  4. When and if third-party apps will be allowed to launch on HomePod, opening up the speaker to more smart functionality. Google gave out Home devices at I/O specifically to boost third-party apps for its Assistant.
  5. Understanding speech and parsing words across a room is a different skill than the same task on a phone held at arm’s length. It becomes frustrating very quickly when it doesn’t work, and both Google and Amazon have a head start on Apple here.

It’s interesting that Apple chose to enter this field much later than its competitors, and did so with a reduced feature set and hardware that won’t be available for another half a year. But since many Apple fans already own a wide range of Apple products and are happy within the Apple world, they won’t be bothered by HomePod’s limitations. I’m doubtful that it will become an entry point to the ecosystem for non-Apple users at its current feature set and price point. That said, it’s Apple. Customers buy its products for features that are less important to me, such as design, or perhaps the sound quality really will blow other connected speakers away. It will be interesting to see come the holiday season how successful HomePod is and how its feature set grows over the next year or two.


Notifications in Android O: what to expect

Notifications are tricky but essential when building a mobile app. Nir Eyal, who wrote the book on building habit-forming products, says “[notifications] are the Pavlovian bell of the 21st century and they get us to check our tech incessantly.” They are the triggers that bring a user back into an app, and they do the job. That said, adds Mr Eyal, “as powerful as these psychological cues are, people are not drooling dogs. Your product’s users can easily uninstall or turn off notifications that annoy them.” That’s been the tradeoff up until now: app developers had to strike the right balance, sending enough notifications to keep users engaged but not so many that users turn notifications off completely or, even worse, uninstall the app.

At one of the more interesting sessions at Google I/O last week, UX designers on the Android team presented their findings from research they conducted into what users currently think about notifications. Unsurprisingly, they found that the general gist was that “phone notifications were a major source of stress”: users were “hyper vigilant” about receiving notifications because they were afraid to miss something important, yet most notifications were unnecessary. It’s easy to see why that combination causes stress.

Another interesting insight from the research is that users want to receive some notifications from an app, but not all. That kind of granularity rarely exists in notification settings, and even when it does, users don’t want the hassle of customizing it; it’s an all-or-nothing approach. The notifications they’d like to receive depend on the person behind the notification, especially their VIPs, and reminders to get stuff done, with the caveat: when it’s relevant.

Different notification Channels defined for a fictional airline app. Users can decide, per Channel, what importance level to assign, whether to receive notifications, or whether to turn them all off.

The study’s results led the UX team to define a new notification framework, called Channels, in the still-unnamed Android O. Channels let developers group notifications by their own criteria, though a Channel should share a similar subject matter, importance to users, and urgency. Users can then select how they want to receive notifications from each Channel. So for the fictional airline in their example, Channels could include notifications for the loyalty program, deals, and specific flight updates.

From the user’s perspective, Android O wanted to meet the most common use case, which is “I don’t want this type of notification from this app.” To make that kind of control accessible, users get notification controls directly from the notification itself via a long press. Users can also access notifications in the app settings, where they can control all the app’s notification Channels in one place and change their behavior if they want.

Notification settings per channel per app.

Android O also adds more user control by allowing one of four importance levels per Channel (min, low, default, high), where the importance level determines the set of behaviors the notification will have. The behaviors are fixed per importance level and include appearance on the lockscreen and status bar, making a sound, peeking on screen when it is on, and waking the screen when it is off. The only customizable aspect is vibration. The designers said that they are “intentionally trading flexibility for simplicity.” This means that users will need to understand what each level means, but on the other hand they can rely on the fact that all notifications at the same level behave the same. Every Channel from every app will have its own settings page, consistent across apps, where users will be able to block the Channel, change its importance, and customize some characteristics such as sound and whether to vibrate (which, to me, reintroduces complexity, but I understand the need for that flexibility).
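The “trading flexibility for simplicity” idea can be sketched in a few lines. This is a conceptual Python model, not the actual Android API (which exposes this via `NotificationChannel`), and the behavior sets below are paraphrased from the talk, not an exact spec:

```python
from enum import IntEnum

class Importance(IntEnum):
    MIN = 1      # collapsed, silent
    LOW = 2      # shown, silent
    DEFAULT = 3  # shown, makes a sound
    HIGH = 4     # peeks on screen, makes a sound

# Fixed behaviors per level -- the same for every app, by design.
BEHAVIORS = {
    Importance.MIN:     {"status_bar": False, "sound": False, "peek": False},
    Importance.LOW:     {"status_bar": True,  "sound": False, "peek": False},
    Importance.DEFAULT: {"status_bar": True,  "sound": True,  "peek": False},
    Importance.HIGH:    {"status_bar": True,  "sound": True,  "peek": True},
}

class Channel:
    """One notification stream of an app, e.g. 'flight updates'."""
    def __init__(self, name, importance, vibrate=False):
        self.name = name
        self.importance = importance
        self.vibrate = vibrate  # the one per-Channel customization

    def behaviors(self):
        # Same level => same behavior, across every app.
        return {**BEHAVIORS[self.importance], "vibrate": self.vibrate}

# The fictional airline from the talk:
deals = Channel("deals", Importance.LOW)
flight_updates = Channel("flight updates", Importance.HIGH, vibrate=True)
```

The design choice is visible in the structure: behaviors hang off the level, not the Channel, so a user who learns what “high” means learns it once for every app.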

Clearer hierarchy in the notification shade.

This consistency will let users gauge the importance of a notification before digging deeper to understand it. Another change in Android O tries to make the hierarchy of notifications in the shade clearer, with four distinct buckets. They are ordered by importance, distinguished by color and height, and grouped by app within the buckets. The top bucket is “major ongoing,” such as music and directions. Below that is “people to people,” which the research deemed the most important kind of notification to users, and below that “general” and “by the way.”

This new organization of notifications, with Channels and the shade hierarchy, aims to reduce user stress by focusing on what’s important to users, but its success will depend on two things. The first is whether users will grasp the ease of the new Channel settings and set importance levels according to their needs, or revert to the “all or nothing” approach and delete the app. One of the findings was that many people don’t adjust their settings even when they know they can. Will the simplicity of a long press on a notification be a discoverable way for users to figure out how easy it is to change notification settings, or will the full page of options deter them?

The second depends on how app developers use the new notifications and whether they set importance levels that don’t match user expectations. If this “bad” behavior spreads, users might just go back to deleting the apps that irritate them without taking the time to adjust notification settings. I’m hoping developers strive to get this right, to avoid deletion if nothing else, and I’ll be on the lookout to see who does once Android O officially launches.

Update 5/30/17: As I was catching up with my post-weekend reading, I came across this plea from Nir Eyal, quoted at the beginning of this post, the guy who literally wrote the book on creating addictive products. He is speaking out against hooking users (now that really does sound like something from the drug world), saying that, unsurprisingly, “making things more engaging also makes them more potentially addictive.” He’d like tech companies and app creators to take a stand for the health of their users and “identify, message, and assist people who want to moderate use.”

Back to the topic of this post, Mr Eyal talks about notifications, and tacitly recognizes the role they play in addicting and irritating users: “rather than making it so fiendishly difficult to figure out how to turn off notifications from particularly addictive apps, Apple and Android could proactively ask certain users if they’d like to turn off or limit these triggers.” I’d like to think that with the changes in O, Android has taken a big step in helping users manage and control notifications. Perhaps, as Mr Eyal suggests, the next step should be proactive assistance. But then, wouldn’t proactive assistance be just another push notification and we’d be right back where we started?


Straight from I/O – new sharing features in Google Photos that you really need

Last week I had the opportunity to attend Google I/O for three days. It’s the conference where Google announces new products and features while providing new guidelines for developers to support those products. A week after the keynote, the themes I remember are that machine learning is for everything and Assistant is your friend, whether proactively via push reminders or reactively via voice. Google Lens, the new image-driven AI app, seems like one of the coolest of the new machine learning implementations; it uses visual cues in your photos to provide more info about special events and businesses, and to identify things such as flowers. This will be cool to test, but as of now Google says it is “coming soon.”

Anil Sabharwal at the Google I/O keynote, introducing Suggested Sharing from the photo-taker’s side, with selected photos shared with the people in them.

Many, many summaries have been written about everything Google announced at the conference, but two announcements stood out for me: Google Photos’ new sharing feature set and the new notification settings.

In Google Photos, the first new sharing tool is called Suggested Sharing, which helps users share photos with the people who are in them. Photos starts by recognizing that a group of photos belongs to a certain event, a “meaningful moment” per Google. It then groups the best photos from that event and notifies the user that they are ready to be shared, along with a list of suggested people to share them with. The user has the final word and can customize which photos to include and whom to share them with, and off it goes, either via the Photos app, or by email or text if the recipient doesn’t have the app.

Suggested Sharing – recipients are encouraged to add their photos to the album.

Another nice touch is that recipients are asked to share their photos of the event, if the app finds some on their phone. Those photos are then added to the shared album. The entire process seems easy and frictionless, leaving the user very much in control of what is shared with whom.

Sharing an entire library with a close person, but choosing exactly what to share.

The second feature goes beyond suggestion and automates the entire sharing process. Called Shared Libraries, it allows users to set up automatic photo sharing with certain, special contacts only. This doesn’t mean that the selected contact gets every photo; that would be too broad. Users can set up sharing so that a photo goes to the selected contact only if a certain person appears in it. On one hand, it’s built for an extremely specific scenario, the one demoed in the keynote, where a user shares every photo of their child with their spouse. On the other hand, that seems like a common and flexible use case with many possible combinations, and I can see users setting it up with their parents, grandparents, kids, friends, and even coworkers.

The key, just as with the Suggested Sharing feature, is complete and granular control by the sharer. For now, Shared Libraries lets a user share with specified contacts and then choose whether to share only photos of a specific person (which, lest we ignore the complexity of this, is an amazing feature). The recipient, the shared-with contact, can either manually check whether photos were shared and save them to their library, or select settings so that photos with, again, specific people are automatically saved. It might be nice, in the future, to enable more sharing parameters to expand the use case. For example, “from a certain location” could apply to a corporate event or conference if date doesn’t achieve the right granularity, or a range of dates could encompass a family trip. All in all, I love both of these new sharing features because they automate a common sharing process, ensuring that it gets used more often and that I finally get the photos I’m in, not just those I have taken.
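Conceptually, a Shared Library rule is just a predicate applied to the set of people recognized in each new photo. A toy sketch of that idea; the names and data structures are invented for illustration and have nothing to do with Google Photos’ real implementation:

```python
def make_share_rule(recipient, only_if_person=None):
    """Returns (recipient, filter): share every photo, or only photos
    in which `only_if_person` was recognized."""
    def should_share(photo_people):
        if only_if_person is None:
            return True  # share the whole library
        return only_if_person in photo_people
    return recipient, should_share

# The keynote scenario: share every photo of the kid with the spouse.
recipient, rule = make_share_rule("spouse", only_if_person="our kid")

photos = [
    {"id": 1, "people": {"our kid", "grandma"}},
    {"id": 2, "people": {"coworker"}},
    {"id": 3, "people": {"our kid"}},
]
shared = [p["id"] for p in photos if rule(p["people"])]
print(f"auto-shared with {recipient}: {shared}")  # photos 1 and 3
```

Extra parameters like location or a date range would simply become additional conditions inside the predicate, which is why the use case feels so extensible.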

The last Photos feature announced at I/O, Photo Books, is perhaps less appealing to many because, if I’m honest, printed albums, as much as I love them, are a dying format. That said, the way Photos creates an album by selecting the highest-quality photos representing significant events with important people is a really, really nice trick and one I’m going to use often. I’ve spent hours creating photo books, and the challenge was exactly this: selecting the “right” photos.

One last thing about Photos. In a talk about Assistant and Lens, Google introduced the ability to make sense of some of the photos we take in order to remind us to do something later on: business cards, where information from the card is entered into contacts, and concert flyers, where the dates of the performance and of tickets going on sale are added to the calendar. They didn’t give too many examples, but the use case should include handwritten notes, presentation slides, receipts, attendee badges, flyers, billboards, etc. My point is that the photos from which this information is extracted really should be automatically archived or deleted afterwards. The chances that they’ll be significant a few months from now are slim, and users don’t really need them in their photo collection.

The second announcement I am excited about, more as a product manager than as a user, is mobile notifications, on which I’ll post later this week.

Finally, Android Go, a “lite” version of the (still unnamed!) Android O made for lower-end phones in connectivity-challenged areas where battery life is precious, seems like a great step toward connecting the next billion. It’s easy to plan cute sharing apps for user stories that are similar to your own, and therefore easy to empathize with. It’s harder to design for users whose phones have much less processing power, are not always connected to high-speed internet, and are quite often offline, and for whom battery life is an issue. Those scenarios are harder to design for when sitting in the heart of well-connected, fully charged Silicon Valley, but that is exactly why Android Go could be a game changer. This is one to follow.


What’s next for Citymapper? A bus?!

Loyal readers know by now that Citymapper is one of my all-time favorite apps. It offers the best public transport route planning I’ve seen, with delightful features such as recommending which car to board so you’re best situated at the end station, and which exit to use at the station closest to your final destination.

Citymapper GO: accompanying users on every step of their ride, including a “get off” alert.

Last month, using New York’s subway, I discovered a new feature, the GO button, which breaks the route down into easy-to-follow steps. Each step has its own detailed description, map, and notification card, visible from the lock screen. For the riding part of the journey there is also the option to receive a “get off” alert, a great feature for tourists. Citymapper also added a GO dashboard that keeps a tally of calories burned and trees and money saved per trip versus driving a car. The gamification is a bit redundant as a feature, but cute nevertheless.

Citymapper GO dashboard: the complete journey from point A to point B.

Little did I realize that one reason for the new feature is to enable data collection based on real travel. By knowing full routes, not just where a rider got on a bus or train, Citymapper can better understand actual transportation needs, not just the compromises riders make due to route availability. With this more accurate data, Citymapper hopes to optimize public transportation routes where the data shows a need. It can also show what new routes might work. To test that hypothesis, it launched a short bus service in London today, saying that “while the service will not get passengers very far, it is seen as a test of the technology that could lead to something much bigger.”
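With full journeys rather than just boarding points, spotting an underserved route boils down to counting origin–destination pairs and checking them against existing direct service. A toy sketch; the stops, trips, and threshold below are invented for illustration, not Citymapper’s actual method:

```python
from collections import Counter

# Full observed journeys: (origin, destination) of each rider.
trips = [
    ("Docks", "Tech Park"), ("Docks", "Tech Park"),
    ("Docks", "Tech Park"), ("Old Town", "Station"),
    ("Docks", "Tech Park"), ("Old Town", "Station"),
]

# Origin-destination pairs already served by a direct route.
direct_routes = {("Old Town", "Station")}

demand = Counter(trips)
# Popular pairs with no direct route are candidates for a new bus.
candidates = [
    (od, n) for od, n in demand.most_common()
    if od not in direct_routes and n >= 3
]
print(candidates)  # [(('Docks', 'Tech Park'), 4)]
```

The interesting part is what the GO data adds: without full journeys, the Docks riders would just look like boardings on whatever two-bus compromise they were forced into, and the real Docks-to-Tech-Park demand would stay invisible.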

The implications of such a service are interesting. First, the flexibility and dynamic capabilities: transportation authorities could decide spontaneously to add a route based on an event they didn’t know about or a problem they didn’t anticipate. They don’t have to reprint the transit map and hang signs; they only have to integrate with Citymapper so the app can suggest the new route to travelers who need it. Users trust Citymapper enough that if the app suggests a route, it must exist. It’s a much quicker way to set up alternative, temporary transportation when things go wrong.

Citymapper said it best: “We’ve helped people figure out which bus to take. When it arrives. How long it takes. When to get off. Now it’s inevitable that we help make them work better. We don’t have to do it all ourselves, we’re glad to partner with others. We built an easy to use app by being users ourselves. So we feel the best way to build software for buses is to run buses ourselves. And learn from running some public experiments.” The first part I agree with: they’ve built a better app for existing transport systems than any of those systems themselves (I’ve tried New York, the Bay Area, and Berlin). Whether they need to be the ones running buses remains to be seen, though I appreciate that they are running these London ones to prove a point.

Bus routes are based on historical data and political decisions. While transport authorities might have current usage data for existing routes, they don’t know much about the routes riders actually need outside their system. That’s why we see companies running private shuttles to and from their offices: those routes were needed and were not served by existing public transportation, necessitating private vehicles. As a point of comparison, our local transit agency, the VTA, is also planning a route overhaul. It has asked for input from college and high school students, commuters, and other community members, is holding community town halls, and is trying to get feedback from as many entities as possible about the changes. This is honorable, but it falls short of getting actual usage and, better yet, desired-usage data such as Citymapper has.

Can Citymapper leverage its fantastic app to make a real difference in how city transportation works? Can they anticipate need and help cities plan accordingly? Will they tie their system into private transportation such as the carpool options on ridesharing apps and help everyone travel faster? Regardless, it’s a very interesting first step and I’m waiting to see how it works in London and other cities around the world and what cities will embrace Citymapper’s help.

AR for the rest of us

“Making the camera the first AR platform”
Source: Facebook

Augmented reality (AR) was touted at Facebook’s F8 with a place of honor on Facebook’s 10-year roadmap as one of three technology areas Facebook is going to focus on. Mark Zuckerberg stated in his short keynote that Facebook is “making the camera the first augmented reality platform,” which, with Facebook’s strengths in machine learning and the social graph, might make for some very powerful tools. That said, there are many smaller, more focused applications that are better suited for widespread adoption of AR, even on today’s hardware.

Take, for example, the Vivino app. I know very little about wine, but I do appreciate a good glass every once in a while. Vivino gives instant access to rankings for almost 11 million wines from a community of 23 million users who care about wine enough to rank it and write reviews. Its wine list implementation is extremely helpful: all users need to do is take a photo of the list, and the app provides a ranking for each wine on a handy color scale.

This is one of the first times I have seen a useful implementation of AR, aside from Google’s Translate app, and it got me thinking: what makes an AR app practical?

  1. Immediate: does it use the phone camera, with either a live view or a photo, or does it require dedicated hardware? A phone is much easier to access. Also, how many steps are needed before the information is added?
  2. Saves time: does the app replace a search or, even better, several searches? Does it automate data entry by recognizing the text? Does it replace a task often done on the go? For me, that’s usually searching undecipherable menu items in hipstery restaurants.
  3. Visually simple: is the additional information presented in a way that isn’t too complex to understand at a glance? It shouldn’t be a complicated infographic, just a few additional data points.
  4. Adds value: does the added layer of data add value? Does it provide actionable data? Too much data or irrelevant data can be a nuisance, especially for AR apps on the go.
  5. Doing the math: can the additional data be manipulated to provide more value? For Vivino, there was a suggestion to calculate points per dollar/euro so that users can quickly choose the best wine their money can buy. For other applications, say a grocery app, a photo of products could provide initial value with a health ranking (similar to Fooducate) and a cost per serving.
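The “doing the math” point above is easy to illustrate: given each wine’s community rating and list price, rank by rating points per currency unit. A minimal sketch; the wines, ratings, and prices are invented, not Vivino data:

```python
wines = [
    {"name": "House Red",        "rating": 3.6, "price": 18.0},
    {"name": "Reserve Cabernet", "rating": 4.3, "price": 95.0},
    {"name": "Sunday Rosé",      "rating": 4.0, "price": 24.0},
]

def value_score(wine):
    # Rating points per dollar -- crude (it favors cheap bottles),
    # but good enough for a quick glance at a wine list.
    return wine["rating"] / wine["price"]

best_value = max(wines, key=value_score)
print(best_value["name"])  # House Red: 0.20 points/$ beats the rest
```

A real app might weight the score so that a 4.3 isn’t swamped by a bargain 3.6, but even this crude ratio is a computation no diner would do by hand across a 40-bottle list, which is exactly the kind of value an AR overlay can add.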

Finally, one of the cooler uses of AR discussed at F8 was the mesh between facial recognition and the social graph, where, via a pair of AR glasses, names pop up above people relevant to you in a crowd. For people such as me, who are better at faces than at names, that would be amazing. Till then, I’m hoping to discover a few more practical ones.