A few musings on notifications in iOS by a long-time Android user

This week, after my Android phone died again, I decided to venture into the world of Apple and borrowed an old iPhone 6 from a friend. I’ve spent the last few days fighting with my muscle memory (where do I swipe?) and Googling “how do I…” when I can’t find an essential setting. Since my Android died abruptly, I couldn’t use Apple’s Move to iOS app, so I’m installing apps only as I remember that I need them. This is actually turning into an interesting purge, as there were apps on my Android that, it turns out, I really haven’t been using. But I digress.

Aside from the UI changes, which were predictably hard to get used to (and I don’t hold a grudge against Apple for that), I’m truly exhausted by one thing: notifications.

  1. iOS alerts: so many, and each its own large-ish message.

    The sheer number: there are so many alerts, and each one is its own message. Android encourages notification bundling, but iOS seems to do nothing of the sort and doesn’t seem to offer it in its settings. This means I get an alert for every news item, every email, every Twitter like, and every new post in every WhatsApp group, including multiple posts in the same group.

  2. They pop up everywhere, repeatedly, in multiple locations. Whereas in Android they appear only on the lock screen, in the same format as in the notification drawer and requiring the same action, on iOS they appear on the lock screen, in the Notification Center, as alerts, as temporary banners, and as badges. Speaking of the variety…
  3. At every app install, the app requests permission to send notifications. I agreed to most, especially the messaging and mail apps, as these are notifications I want to receive. I was then summarily overwhelmed. Going into Settings, I realized that there were many different aspects of notifications I could control, such as badges, banners, sounds, and alerts. After looking up what they meant, I realized that I have lots of control, but not over what I need: minimizing and grouping alerts.
  4. Dismissing a notification takes a swipe, to reveal the available actions, and a tap, to choose view or clear. Why not just a swipe? Each notification has to be dismissed individually; there’s no dismissing per app or per group. Update: notifications can be dismissed for the entire day, which I had missed.
  5. The badges stick until actively dismissed, which drives me crazy. Mail, for example, keeps telling me I have 38 unread emails. Yes, I know, but these are not new unreads; they are ones I already know about and decided not to read or delete for now. The badge is useless, which is why I downloaded the Gmail app, where badges count only new unreads rather than the total unread. I get that this is decided on an app-by-app basis, but it doesn’t make sense to me, especially for iOS’s default Mail app.
  6. Choosing channels in Apple’s News app – why are so many on by default?

    There is no hierarchy of notifications in the general settings, only within some individual apps, which mostly don’t offer additional options. Gmail is one exception: it allows notification settings for different types of email, so I can choose to be notified only for Primary emails, not Updates or Social. That helps minimize notifications without the risk of missing important messages. I also can’t set how many notifications I’d like to receive from an app daily (Nuzzel does this well). The good news is that some apps, such as Apple News, do offer customization so that I can choose which channels I want to hear from. The drawback is that this lives inside the News app and isn’t reachable from the News notification settings in Settings, so it has to be discovered independently.

So, is it just a question of me getting used to a new UI or are iOS notifications really that stressful? I’m in the process of customizing and maybe, by next week, I’ll feel more at home in this new OS. Meanwhile, I am more appreciative of the changes that Android has in store for notifications in O: more granular control and the realization that too many notifications cause anxiety and that not all notifications are equal.

Finally, I love how every iOS app puts the back button in a different place, but that’s a post for another day.

Citymapper integrated bike-sharing and it looks great

Going on vacation is my way of seeing what Citymapper has been up to. It’s pretty much the only app that continues to surprise me every time I update it with some new, useful feature. Last time it was the journey companion (hitting Go on a chosen route provides cards throughout the trip and reminders of when to get off), and this time around it was integration with the local bike-sharing service.

 

Paris, like many cities, has a bike-share program with docks throughout the city. Like many of its counterparts around the world, the Paris bike-share system has its own app, which shows docks with their available bikes and empty slots. But just as a complete route-planning app has the advantage over a static Metro map, bike-only apps are good at providing bike information, not at telling you which transportation option is better overall given your preferences. Citymapper offers not only bike routes but also combines bike segments with other public transport options like the Metro. It also has up-to-date information on the number of bikes available at the pickup dock and the number of empty slots at the destination dock.

What I don’t know, because I didn’t actually use the bike-sharing option in Paris, is whether the app can change the destination bike dock en route, based on the closest dock to the user’s final destination that actually has free slots for checking in a bike. On several occasions in Paris I saw riders cycle up to a dock only to discover that there were no free slots, pull out their phones, and then cycle away in frustration. This is one situation where Citymapper can offer more value by changing the planned route when conditions change along the way, as Waze and other navigation apps do.
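To make the idea concrete, here’s a minimal sketch of the logic I have in mind. It isn’t Citymapper’s actual code or API; the Dock model and function names are invented for illustration.

```kotlin
import kotlin.math.cos
import kotlin.math.sqrt

// Hypothetical data model; a real app would get this from the bike-share feed.
data class Dock(val id: String, val lat: Double, val lon: Double, val freeSlots: Int)

// Rough equirectangular distance in meters, good enough for ranking nearby docks.
fun distanceMeters(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val earthRadius = 6_371_000.0
    val x = Math.toRadians(lon2 - lon1) * cos(Math.toRadians((lat1 + lat2) / 2))
    val y = Math.toRadians(lat2 - lat1)
    return sqrt(x * x + y * y) * earthRadius
}

// Re-run this whenever fresh availability data arrives: pick the dock closest to the
// rider's final destination that still has at least one free slot to check the bike into.
fun pickDropOffDock(docks: List<Dock>, destLat: Double, destLon: Double): Dock? =
    docks.filter { it.freeSlots > 0 }
        .minByOrNull { distanceMeters(it.lat, it.lon, destLat, destLon) }
```

If the chosen dock fills up mid-ride, the next refresh simply returns a different dock and the app can reroute, which is exactly the Waze-style behavior I was hoping for.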

One other feature I could see a use for is predicting availability ahead of time by studying docking patterns throughout the day and week, similar to how Google Maps considers historical traffic patterns when predicting times and routes for future travel. This would be most useful during rush hour, when bike traffic is more unbalanced and demand for bikes and open slots is higher in certain locations.
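A crude version of that prediction doesn’t need anything fancy: just average historical observations bucketed by dock, day of week, and hour. Again, a sketch with invented names, not anything Citymapper actually exposes.

```kotlin
// Hypothetical historical record of how many free slots a dock had at a given time.
data class Observation(val dockId: String, val dayOfWeek: Int, val hour: Int, val freeSlots: Int)

// Predict availability for a future time by averaging past observations in the same
// (dock, day-of-week, hour) bucket, similar in spirit to historical traffic estimates.
fun predictedFreeSlots(history: List<Observation>, dockId: String, dayOfWeek: Int, hour: Int): Double? =
    history
        .filter { it.dockId == dockId && it.dayOfWeek == dayOfWeek && it.hour == hour }
        .map { it.freeSlots }
        .takeIf { it.isNotEmpty() }
        ?.average()
```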

Finally, and I know this isn’t a simple request, offline route planning would be so useful for users with no international data plan (ahem) who need to plan routes on the go. The Paris Metro has dense, crisscrossing lines in the city center, with trains arriving every few minutes. Helping tourists find just one optimal route even when they are not connected could be extremely helpful. Another lesson learned in Paris is that some stations (Châtelet-les-Halles being the most obvious) should really be avoided for transfers between lines because they are so huge. I haven’t seen Citymapper offer routes that specifically avoid large stations; it could be a helpful option, and doable, seeing as the app already offers routes that aim to be rain-safe.

The inherent conflict between Facebook and its Safety Check

There’s a bit of a flurry around the negative aspects of Facebook’s Safety Check this week, mostly based on the reaction to the London fire. The problem, as TechCrunch reports, is that Safety Check causes unnecessary stress because, for one, it’s not geographically specific enough, asking users miles away if they were OK, and, for another, it can be triggered by many kinds of events, not all of which should be considered dangerous. TechCrunch makes the interesting point that not all disasters affect the people around them in equal ways. A terrorist bombing such as the one in Manchester could involve people from all over the city, but a fire in a tower is unlikely to involve people not living there.

Users in the perceived area of a tragedy are asked to say they’re safe. They can answer either that they’re safe or that they’re not in the area.
Source: Facebook

Finally, Safety Check causes distress because, as TechCrunch puts it, “by making Safety Check a default expectation Facebook flips the norms of societal behavior and suddenly no one can feel safe unless everyone has manually checked the Facebook box marked ‘safe’.”

In a series of tweets, Zeynep Tufekci adds that Safety Check “can be comforting—but it is also adding to the fear-mongering around the world. People check-in safe even when never in danger. Humans are already bad at estimating risk/danger. We already have sensationalist media stoking fear; social media options matter a lot. For both mass and social media fear-mongering is engaging. Pageview/ratings driven mass/social media can converge on sensationalism.” So instead of being a helpful tool to tell people that they are safe, Safety Check stokes hysteria.

The tool originally made sense, and in some ways still does. A check-in to tell friends and family that one is OK when a local tragedy occurs is not necessarily a bad idea. Consider the total unavailability of Bay Area phone lines during the hours and even days after the 1989 Loma Prieta earthquake. Even before the widespread adoption of cell phones, telecommunication systems were designed to handle an average load, not the peak that occurs when almost every subscriber is trying to use the system at once, on top of everyone calling in from outside. Add to that the probable power outages that take parts of the system offline directly after an event, causing a further reduction in capacity.

It makes sense to have an asymmetrical check-in product where people affected by an event can quickly (and with little connectivity) say that they are OK, while any of their friends can check that notification whenever they want. It also makes sense that Facebook would create such a tool: it has the ability to share information with the people who matter. Add to that the incredibly high percentage of the population already on Facebook, almost guaranteeing that everyone who needs to see the check-in will. Creating such a product also fits with Facebook’s goal of creating social value. So where did it go wrong?

There is another problem, though, beyond the geographical inaccuracy, the definition of a check-in-worthy event, and the ensuing fear-mongering and stress. Hosting the feature on Facebook creates a perception that the company profits from a check-in, even if that wasn’t Facebook’s original intent. Consider the new features added just this week, a personal note and fundraising: one pushes engagement and the other monetization, and both can create unease.

So what can be done? TechCrunch suggested using “Facebook to post a status update saying they’re fine if they feel the need to — or indeed, use Facebook (or WhatsApp or email etc) to reach out directly to friends to ask if they’re okay — again if they feel the need to.” The problem with that is that people no longer trust that their important family and friends will even see an update they post. If it doesn’t carry the importance conferred by an official check-in, the newsfeed algorithm might not deem it important enough to show. An email might be too cumbersome and time-consuming to send.

WhatsApp, however, is a much better option. By updating a chosen group or two, users can notify the important people in their lives and only those people. Others, not in the group, can assume that if they were not updated, the person was not close to the disaster. That reduces stress for the affected people, their friends and family, and people who were never in any danger to begin with. The only disadvantage of this solution is that Americans aren’t big users of WhatsApp, meaning there isn’t one app everyone can go to to update friends, and that’s a shame.

The takeaway is to realize that even a feature with the best of intentions can have negative consequences, and to always strive for better. Corny, I know, but it has been a tough week.

 

Tech, climate change, big data, and making a difference

A while ago I wrote about the challenges of writing a tech blog about apps and gadgets when world-altering events are going on. This came into focus this week after the president’s withdrawal from the Paris Accord and the ensuing conversation. Then, surprisingly, commitments to support the Accord poured in from cities, states, universities, and companies around the US. Michael Bloomberg pledged to make up some of the $2 billion in lost funds for climate action programs. He’s also “leading a coalition, made up of three states, dozens of cities, and 80 university presidents, that vows to uphold the Paris Agreement.”

Listening to a talk with Paul Hawken on this topic this week educated me a bit more about what the Paris Accord really means and what Princeton’s Carbon Mitigation Initiative set out to do. By adopting 15 different strategies aimed at reducing carbon emissions, and meeting the goals set out in them, we could avoid some of the more disastrous consequences of global warming. Yet out of the 15, says Mr Hawken, 11 are aimed at larger corporations and utilities. The only actions relevant to individuals were to drive less and install solar power. This is what he set out to change, coming up with a way to “map, measure and model the 100 most substantive ways to reduce global warming.”

Mapping air quality at a block-by-block level.
Source: Google

The interesting takeaway from this, for me, is that maybe there is more tech can do with the “map, measure, and model” part of the equation. After all, collecting and analyzing data is its bread and butter. Google’s new pollution-mapping initiative seems like a step in the right direction. By attaching relatively cheap sensors to its street-mapping cars, which were out and about on city streets anyway, Google was able to create a street-by-street, block-by-block map of pollution levels in three cities, including Oakland. It then took a closer look at what those data points show on the map. In Oakland, the analysis revealed areas where quieter residential streets suffer higher levels of pollution because of wind direction, and spots where vehicles accelerate. This gives the City of Oakland a way to understand how to prioritize public works projects if reducing pollution levels for residents is a priority. Says Google: “With nearly 3 million measurements and 14,000 miles captured in the course of a year, this is one of the largest air quality datasets ever published, and demonstrates the potential of neighborhood-level air quality mapping. This map makes the invisible, visible, so that we can breathe better and live healthier. It helps us understand how clean (or not clean) our air is, so that we can make changes to improve it.”

In light of political change, it will be up to local entities, not the federal government, to take action on global warming. To do so they’ll need to collect and analyze many data points. Google’s mapping initiative shows that tech companies, especially those driven by location and mapping data, can help with this component relatively inexpensively, proving, perhaps, that change is possible from the bottom up after all.

Apple enters the smart speaker fray – a bit late and a bit short

Apple is hosting its developer conference this week and kicked things off yesterday with a keynote full of product announcements. Presented last, but predicted by many Apple followers, was its new smart speaker, called HomePod. Despite the precedent set by Google and Amazon, Apple’s focus is different and starts with music.

Phil Schiller introducing HomePod’s “musicologist” features.
Source: Apple keynote

Apple’s Phil Schiller said that Apple wants to “reinvent home music,” and to that end it focused on creating a high-quality speaker that can “rock the house.” Mr Schiller went into a lot of detail on the audio features, song availability, playlists, smart user interaction, and expanded understanding of music-related queries. Aside from music, HomePod will answer queries on 13 other topics, some very limited in scope, though home control seems promising. Also, right now there are no third-party apps, so if you’re a Spotify fan, you’re out of luck.

HomePod’s other areas of expertise, from unit conversion to smart home control.
Source: Apple keynote

Interestingly, when explaining the pricing for the HomePod, Mr Schiller presented it as a mix of two products, a WiFi speaker at $300-$500 and a smart speaker at $100-$200, making the HomePod’s $349 price a good deal. That said, a Google Home at $129 paired with the recommended Sonos Play:1 at $200 comes out cheaper and is easily expandable. Even though voice-activated handoff of music playback from Home to other speakers is still a bit buggy, this is something Google is sure to solve soon.

I liked what Vox had to say about the higher price point in relation to HomePod’s current feature set: “Amazon and Google’s smart speakers play a supporting role in the companies’ larger business strategies. Amazon’s goal is to make the Echo ubiquitous to help sell Amazon Prime subscriptions and other digital content. Google wants to get users hooked on as many different Google services as possible to support its advertising business. For both companies, the priority is to attract as many customers as possible, without worrying too much about making a profit from each one.” This is true, but it’s not just about getting us hooked. Google services complement my Home’s feature set, making personalized information that I need available via a quick interaction.

Apple says this is the first high-quality smart speaker (hence the higher price), but its success beyond Apple fans will depend on a few future improvements:

  1. How well HomePod will understand voice commands. A comparison of Siri, Google Assistant, Alexa, and Cortana from a few months ago found Google to be the best at understanding and executing various commands. Apple’s walled garden was detrimental to Siri’s performance in that test, and HomePod isn’t any more open.
  2. How good voice-driven music interactions really are. Spotify has many fans, especially for playlists and recommendations. Google does a great job of figuring out which songs the snippets of lyrics I ask about belong to. My favorite Google Music feature, though, is the way it creates a playlist on the fly based on one song I ask it to play, and those playlists are spot on in terms of genre, style, and a mix of stuff I’m familiar with and stuff I’ve never heard before. Apple boasts about its musical understanding, knowledge, and music catalog, so HomePod should succeed on this front.
  3. How big a role personalization will play and which Apple products will support it.
  4. When, and if, third-party apps will be allowed to launch on HomePod, opening up the speaker to more smart functionality. Google gave out Home devices at I/O specifically to boost third-party apps for its Assistant.
  5. Understanding speech and parsing words from across a room is a different skill than doing the same on a phone held at arm’s length. It becomes frustrating very quickly when it doesn’t work, and both Google and Amazon have a head start on Apple here.

It’s interesting that Apple chose to enter this field much later than its competitors, and did so with a reduced feature set and hardware that won’t be available for another half a year. But since many Apple fans already own a wide range of Apple products and are happy within the Apple world, they won’t be bothered by HomePod’s limitations. I’m doubtful that it will become an entry point for non-Apple users into the ecosystem with its current feature set and price point. That said, it’s Apple. Customers buy its products based on features that are less important to me, such as design, or perhaps the sound quality really will blow other connected speakers away. It will be interesting to see, come the holiday season, how successful HomePod is and how its feature set grows in the next year or two.

 

Notifications in Android O: what to expect

Notifications are tricky but essential when building a mobile app. Nir Eyal, who wrote the book on building habit-forming products, says “[notifications] are the Pavlovian bell of the 21st century and they get us to check our tech incessantly.” They are the triggers that bring a user back into an app, and they do the job. That said, adds Mr Eyal, “as powerful as these psychological cues are, people are not drooling dogs. Your product’s users can easily uninstall or turn off notifications that annoy them.” That’s been the tradeoff up until now: app developers had to find the right balance, sending enough notifications to keep users engaged but not so many that users turn notifications off completely or, even worse, uninstall the app.

At one of the more interesting sessions at Google I/O last week, UX designers on the Android team presented findings from research they conducted into what users currently think about notifications. Unsurprisingly, they found that the general gist was that “phone notifications were a major source of stress”: users were “hyper vigilant” about incoming notifications because they were afraid to miss something important, yet most notifications were unnecessary. It’s easy to see why that combination causes stress.

Another interesting insight from the research is that users want to receive some notifications from an app, but not all of them. That kind of granularity mostly doesn’t exist in notification settings, and even where it does, users don’t want the hassle of customizing it, so it becomes an all-or-nothing approach. The notifications users do want to receive depend on the person behind the notification, especially their VIPs, plus reminders to get stuff done, with the caveat: only when relevant.

Different notification Channels defined for a fictional airline app. Users can decide, per Channel, whether to receive notifications and at what importance level, or turn them all off.

The study’s results led the UX team to define a new framework for notifications in the yet-unnamed Android O, called Channels. Channels allow developers to group notifications by their own criteria, though each Channel should bundle notifications of similar subject matter, importance to users, and urgency. Users can then select how they want to receive notifications from each Channel. So for the fictional airline in their example, Channels could include notifications for the loyalty program, deals, and specific flight updates.
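For developers, a Channel is just a small object registered with the system once. Here’s a minimal Kotlin sketch against Android O’s NotificationChannel API; the channel ids and names are invented for the fictional airline, not taken from any real app.

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// Register one Channel per notification type the app sends.
fun registerAirlineChannels(context: Context) {
    val manager = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager

    val flightUpdates = NotificationChannel(
        "flight_updates", "Flight updates",
        NotificationManager.IMPORTANCE_HIGH   // urgent: peeks on screen and makes a sound
    )
    val deals = NotificationChannel(
        "deals", "Deals",
        NotificationManager.IMPORTANCE_LOW    // shown silently in the shade
    )
    val loyalty = NotificationChannel(
        "loyalty", "Loyalty program",
        NotificationManager.IMPORTANCE_MIN    // collapsed, lowest priority
    )

    manager.createNotificationChannels(listOf(flightUpdates, deals, loyalty))
}
```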

From the user’s perspective, Android O wanted to meet the most common use case, which is “I don’t want this type of notification from this app.” To make that kind of control accessible, users get notification control directly from the notification itself via a long press. Users can also go into the app’s notification settings, where they can control all of the app’s Channels in one place and change their behavior if they want.
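Apps can also send users straight to that settings screen rather than making them hunt for it. A small sketch using the settings intent Android O exposes; the channel id is whatever the app registered earlier.

```kotlin
import android.content.Context
import android.content.Intent
import android.provider.Settings

// Deep-link into the system settings page for a single notification Channel.
fun openChannelSettings(context: Context, channelId: String) {
    val intent = Intent(Settings.ACTION_CHANNEL_NOTIFICATION_SETTINGS).apply {
        putExtra(Settings.EXTRA_APP_PACKAGE, context.packageName)
        putExtra(Settings.EXTRA_CHANNEL_ID, channelId)
    }
    context.startActivity(intent)
}
```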

Notification settings per channel per app.

Android O also adds more user control by allowing one of four importance levels per Channel (min, low, default, high), where the importance level determines what set of behaviors the notification will have. The behaviors are set per importance level and include appearing on the lockscreen and status bar, making a sound, peeking on screen when it’s on, and waking the screen when it’s off. The only customizable aspect will be vibration. The designer said that they are “intentionally trading flexibility for simplicity.” This means users will need to understand what each level means, but on the other hand they can rely on the fact that all notifications at the same level behave the same way. Every Channel from every app will have its own settings page, consistent across apps, and users will be able to block a Channel, change its importance, and customize some characteristics such as sound and whether to vibrate (which, to me, again introduces complexity, but I understand the need to allow that flexibility).
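The flip side for developers is that the importance they assign is only a default: once a Channel exists, the user’s choice wins. Here’s a sketch of how an app could check what the user actually left enabled, using NotificationManager calls from Android O with a hypothetical channel id.

```kotlin
import android.app.NotificationManager
import android.content.Context

// After creation, a Channel's settings belong to the user. Read them back instead of
// assuming the defaults the app originally requested still apply.
fun isChannelBlocked(context: Context, channelId: String): Boolean {
    val manager = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    val channel = manager.getNotificationChannel(channelId)
    return channel == null || channel.importance == NotificationManager.IMPORTANCE_NONE
}
```

An app could call something like isChannelBlocked(context, "deals") before investing effort in building a notification the user will never see.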

Clearer hierarchy in the notification shade.

This consistency will allow users to gauge the importance of a notification before digging into its content. Another change in Android O tries to make the hierarchy of notifications in the shade clearer, with four distinct buckets. They will be ordered by importance, distinguished by color and height, and grouped by app within each bucket. The top bucket is “major ongoing,” such as music and directions. Below that is “people to people,” which the research deemed the most important kind of notification to users, and below that come “general” and “by the way.”

This new organization of notifications, with Channels and the shade hierarchy, aims to reduce user stress by focusing on what’s important to users, but its success will depend on two things. The first is whether users will take advantage of the new Channel settings and set importance according to their needs, or revert to the “all or nothing” approach and delete the app. One of the findings was that many people don’t adjust their settings even when they know they can. Will the simplicity of a long press on a notification be a discoverable way for users to figure out how easy it is to change notification settings, or will the full page of options deter them?

The second part stems from how app developers use the new notifications and whether they try to set them at an importance level that doesn’t match user expectations. If this “bad” behavior spreads, users just might end up going back to deleting the apps that irritate them without taking the time to adjust notification settings. I’m hoping that developers strive to get this right, to avoid deletion if nothing else, and I’ll be on the lookout to see who gets it right once Android O officially launches.

Update 5/30/17: As I was catching up with my post-weekend reading, I came across this plea from Nir Eyal, quoted at the beginning of this post, the guy who literally wrote the book on creating addictive products. He is speaking out against hooking users (now that really does sound like something from the drug world) and saying that, unsurprisingly, “making things more engaging also makes them more potentially addictive.” He’d like tech companies and app creators to take a stand, for the health of their users and “identify, message, and assist people who want to moderate use.”

Back to the topic of this post: Mr Eyal talks about notifications and tacitly recognizes the role they play in addicting and irritating users: “rather than making it so fiendishly difficult to figure out how to turn off notifications from particularly addictive apps, Apple and Android could proactively ask certain users if they’d like to turn off or limit these triggers.” I’d like to think that with the changes in O, Android has taken a big step toward helping users manage and control notifications. Perhaps, as Mr Eyal suggests, the next step should be proactive assistance. But then, wouldn’t proactive assistance be just another push notification, putting us right back where we started?

 

Straight from I/O – new sharing features in Google Photos that you really need

Last week I had the opportunity to attend Google I/O for three days. It’s the conference where Google announces new products and features while providing new guidelines for developers to support those products. A week after the keynote, the themes I remember are that machine learning is for everything and Assistant is your friend, whether proactively via push reminders or reactively via voice. Google Lens, the new image-driven AI app, seems like one of the coolest of the new machine learning implementations: it uses visual cues in your photos to provide more info about special events and businesses, and to identify things such as flowers. This will be cool to test, but as of now Google says it is “coming soon.”

Anil Sabharwal at the Google I/O keynote, introducing Suggested Sharing from the photo-taker’s side, with selected photos shared with the people in them.

Many, many summaries have been written about everything Google announced at the conference, but for me two new announcements stood out: Google Photos’ new photo sharing feature set and the new notification settings.

In Google Photos, the first new photo sharing tool is called Suggested Sharing, which helps users share photos with the people who are in them. Photos starts by recognizing that a group of photos belongs to a certain event, a “meaningful moment” per Google. It then gathers the best photos from that event and notifies the user that they are ready to be shared, along with a list of suggested people to share them with. The user has the final word and can customize which photos to include and which people to share with, and off it goes, via the Photos app, or by email or text if the recipient doesn’t have the app.

Suggested Sharing – recipients are encouraged to add their photos to the album.

Another nice touch is that recipients are asked to share their own photos of the event, if the app finds some on their phone. Those photos are then added to the shared album, and the entire process seems easy and frictionless, leaving the user very much in control of what is shared with whom.

Sharing an entire library with a close person, but choosing exactly what to share.

The second feature goes beyond suggestion and automates the entire sharing process. It’s called Shared Libraries and allows users to set up automatic photo sharing with certain special contacts only. This doesn’t mean the selected contact will get every photo; that would be too broad. Users can choose to share a photo with the selected contact only if a certain person appears in it. On one hand, it’s built with an extremely specific scenario in mind, the one demoed in the keynote, where a user shares every photo of their child with their spouse. On the other hand, that seems like a common and flexible use case with many possible combinations, and I can see users setting it up with their parents, grandparents, kids, friends, and even coworkers.

The key, just like in the Suggested Sharing feature, is complete and granular control by the sharer. For now, Shared Libraries will allow a user to share with specified contacts and then choose whether to share only photos of a specific person (which, lest we ignore the complexity of this, is an amazing feature). The recipient, the shared-with contact, has the option either to manually check whether photos were shared and then save them to their library, or to choose settings so that photos with, again, specific people will be saved automatically. It might be nice, in the future, to enable more sharing parameters to expand the use case: photos from a certain location could cover a corporate event or conference if a date doesn’t achieve the right granularity, or a range of dates could encompass a family trip. All in all, I love both of these new sharing features because they automate a common sharing process, ensuring that it gets used more often and that I finally get the photos I’m in, not just those I have taken.

The last Photos feature announced at I/O, Photo Books, is perhaps one that will be less appealing to many because, if I’m honest, printed albums, as much as I love them, are a dying format. That said, the way Photos creates an album by selecting the highest quality photos representing significant events with important people is a really, really nice trick and one I’m going to be using often. I’ve spent hours creating photo-books and the challenge was exactly this: selecting the “right” photos.

One last thing about Photos. In a talk about Assistant and Lens, Google introduced the ability to make sense of some of the photos we take in order to remind us to do something later on: business cards, where information from the card is entered into contacts, and concert flyers, where the dates of the performance and of ticket sales are added to the calendar. They didn’t give too many examples, but the use case should include handwritten notes, presentation slides, receipts, attendee badges, flyers, billboards, etc. My point is that the photos on which this information is based really need to be automatically archived or deleted after the useful information is extracted. The chances that they’ll be significant a few months from now are slim, and users don’t really need them in their photo collection.

The second new announcement I’m excited about, though more as a product manager than as a user, is mobile notifications, on which I’ll post later this week.

Finally, Android Go, a “lite” version of the (still unnamed!) Android O made for lower-end phones in connectivity-challenged areas where battery life is precious, seems like a great step toward connecting the next billion. It’s easy to plan cute sharing apps for user stories that are similar to yours, and therefore easy to empathize with; it’s harder to design for users whose phones have much less processing power, are not always connected to high-speed internet, are quite often offline, and where battery life is an issue. Those scenarios are harder to design for when sitting in the heart of well-connected, fully charged Silicon Valley, but that is exactly why Android Go could be a game changer. This is one to follow.