Last week I had the opportunity to attend Google I/O for three days. It’s the conference where Google announces new products and features while providing new guidelines for developers to support those products. A week after the keynote, the themes I remember are that machine learning is for everything and Assistant is your friend, whether proactively via push reminders or reactively via voice. Google Lens, the new image-driven AI app, seems like one of the coolest of the new machine learning implementations: it uses visual clues in your photos to provide more info about special events and businesses, and to identify things such as flowers. This will be cool to test, but as of now Google says it is “coming soon.”
Many, many summaries have been written about everything Google announced at the conference, but for me two new announcements stood out: Google Photos’ new photo-sharing feature set and the new notification settings.
In Google Photos, the first new photo-sharing tool is called Suggested Sharing, which helps users share photos with the people who are in them. Photos starts by recognizing that a group of photos belongs to a certain event, a “meaningful moment” per Google. It then groups the best photos from that event and notifies the user that they are ready to be shared, along with a list of suggested people to share them with. The user has the final word and can customize which photos to include and which people to share them with, and off they go, either via the Photos app, or via email or text if the recipient doesn’t have the app.
Another nice touch is that the recipients are asked to share their own photos of the event, if the app finds some on their phone. Those photos are then added to the shared album. The entire process seems easy and frictionless, leaving the user very much in control of what is shared with whom.
The second feature goes beyond suggestions and automates the entire sharing process. It’s called Shared Libraries and allows users to set up automatic photo sharing with certain, special contacts only. This doesn’t mean that the selected contact will get every photo; no, that would be too broad. Users can set up sharing so that a photo goes to the selected contact only if a certain person appears in it. On one hand, it’s built with an extremely specific scenario in mind, the one demoed in the keynote, where a user shares every photo of their child with their spouse. Yet on the other hand, that seems like a common and flexible use case with many possible combinations, and I can see users setting it up with their parents, grandparents, kids, friends, and even coworkers.
The key, just like with Suggested Sharing, is complete and granular control for the sharer. For now, Shared Libraries will allow a user to share with specified contacts and then choose whether to share only photos of a specific person (which, lest we ignore the complexity of this, is an amazing feature). The recipient, the shared-with contact, has the option to either manually check whether photos were shared and then save them to their library, or to select settings so that photos with, again, specific people will be saved automatically. It might be nice, in the future, to enable more sharing parameters to expand the use cases. For example, a “from a certain location” filter could apply to a corporate event or conference if a date doesn’t achieve the right granularity, or a range of dates could encompass a family trip. All in all, I love both of these new sharing features because they automate a common sharing process, ensuring that it gets used more often and that I finally get the photos I’m in, not just those that I have taken.
The last Photos feature announced at I/O, Photo Books, is perhaps the one that will be less appealing to many because, if I’m honest, printed albums, as much as I love them, are a dying format. That said, the way Photos creates an album by selecting the highest-quality photos representing significant events with important people is a really, really nice trick, and one I’m going to be using often. I’ve spent hours creating photo books, and the challenge was exactly this: selecting the “right” photos.
One last thing about Photos. In a talk about Assistant and Lens, Google introduced the ability to make sense of some of the photos we take and remind us to do something later on: business cards, where information from the card is entered into contacts, and concert flyers, where the dates of the performance and when tickets go on sale are added to the calendar. They didn’t give too many examples, but the use cases should include handwritten notes, presentation slides, receipts, attendee badges, flyers, billboards, etc. My point is that the photos from which this information is extracted really need to be automatically archived or deleted afterward. Chances that they’ll be significant a few months from now are slim, and users don’t really need them in their photo collection.
The second new announcement I am excited about, though more as a product manager than as a user, is mobile notifications, on which I’ll post later this week.
Finally, Android Go, a “lite” version of the (still unnamed!) Android O, made for lower-end phones in connectivity-challenged areas where battery life is valuable, seems like a great step toward connecting the next billion. It’s easy to plan cute sharing apps for user stories that are similar to yours, and therefore easy to empathize with; it’s harder to design for users whose phones have much less processing power, are not always connected to high-speed internet, are quite often offline, and for whom battery life is an issue. Those scenarios are harder to design for when sitting in the heart of well-connected, fully charged Silicon Valley, but that is exactly why Android Go could be a game changer. This is one to follow.