Five product trends that were showcased at Google I/O

I was lucky to attend Google I/O last week, and after the excitement around the announcements had died down, I was left with five intriguing product trends that not only show where Google is headed but also, in some ways, where the rest of the industry is headed. These are by no means all the announcements at I/O, or even all the product trends there; they are just the ones I found interesting given past I/O conferences and industry events over the past year.

Collect even more data, more of it sensitive. Google Assistant will gain capabilities that go beyond generic information and media requests and will become more personalized. With the ability to gain insights and make suggestions based on a schedule, previous interactions such as what item was ordered off a menu, and social interactions, Assistant aims to become a more useful and integral part of people’s lives. While it strives to take care of tasks that are easier for a machine to handle than for a human, it also requires a very intimate knowledge of the user to be effective. The tradeoff is still attractive to many users: give up personal data in return for time-saving and life-optimizing products. The question is just how long that tradeoff will continue to make sense for users and, when it ceases to, whether they will have a way to go back.

Consolidation of user generated content. In the earlier days of UGC sites, users spread their content across different sites depending on the media type and the audience they wanted to reach. Users could share photos on Flickr or Instagram, restaurant reviews on Yelp, product reviews on Amazon, videos on YouTube, live videos on Periscope, thoughts on Twitter, and so on. It was and is challenging to get users to switch from a platform that has benefited them by creating a following, a community, or both, to a new site. A new site had to offer significant exposure benefits (as Medium did initially), much less sharing friction, or some other significant benefit to counter the incumbent’s offering.

Personalized restaurant recommendations based on reviews and preferences.
Jen Fitzpatrick at the Google I/O keynote.

Take Yelp, a site founded 13 years ago and focused on reviews. Mine have mostly covered restaurants and cafes, and I put them on Yelp to help its users, but I gain nothing in return except a few likes. Google Maps has historically been a distant second to Yelp in terms of reviews, but it’s catching up. The ubiquity of Maps, especially on mobile, the high visibility and easy accessibility of ratings and review counts, and the growth in the number of reviews (more new reviews than Facebook and Yelp) have made Maps a preferred resource for ratings. I find that I go to Yelp less and less, and hardly ever on mobile. Yet I was still writing reviews on Yelp because I felt they had impact there. At I/O, Google introduced personalized recommendations based on the types and cuisines of places I had reviewed and liked in the past. It’s a game-changer: why would I post my reviews anywhere else if placing them on Maps helps me find more places that I like? My sense is that this will become more widespread as AI learns from my preferences and starts making smarter recommendations in every category. It will be interesting to watch which other kinds of content sharing shift to Google simply because its ML algorithms can make something beneficial out of them.

Voice interactions are getting smarter. Smart speakers and voice-activated assistants on phones have created a new way to interact with the world’s information and with personal information. Speech comprehension has improved enough to make most voice requests understandable, but not all queries have ready answers quite yet. This is clearly something Google hopes to address, given the high number of Assistant-related development sessions at this I/O. Continued conversation helps the dialog flow more easily, but the challenge, with new tools and new apps accessing new information, will be discovery. If every new tool requires the utterance of a special, unique word to engage it, that could create a barrier to usage, and an increase in the number of services providing apps, from big banks to your local bike shop, will only exacerbate the problem.

Higher visibility of machine learning capabilities. Most of what Google does with AI/ML is under the hood but delights users when the “magic” happens. My personal favorite, Google Photos, with its amazing search and photo editing tools, is slated to get new automatic editing features, including adding color to black-and-white photos and converting only the background of a photo to black and white to put the focus on a central object. The Photos assistant also recognizes receipts, photos rotated in the wrong orientation, and outdated notes. All these ML-driven tools contribute to a much better photo-managing experience and amazing photo editing.

Yet the real wow moment was Google’s new robocaller, Duplex, an assistant that will call businesses (for now) on your behalf to fulfill a simple goal such as booking an appointment. The demo had the audience amazed not just at the flow of conversation but also at the quirks of language the assistant used; the human on the other end was convinced she was talking to another human. There was discussion at the conference about the deceptive aspect of the call and how this technology has a high chance of being misused. This was also called out by the press, notably by Zeynep Tufekci. If the last few years have taught us anything, it’s that we can no longer optimistically design products, showcase their most flattering implementation, and hope for the best. It’s time to consider all possible uses of a technology, the ethics of launching it, and the price humanity could pay.

The wellness dashboard shows personal usage stats for each app.
Sameer Samat at the Google I/O keynote.

Digital wellness. Finally, I’m not convinced this is a trend, but it was encouraging to see nevertheless. The last few years have shown us that being tethered to devices and spending too much time scrolling through timelines isn’t good for people’s well-being. Google’s wellbeing tools start by giving users visibility into how much time they spend in an app, how many times they opened it, and how many notifications they received and acted on. Based on that data alone, users might have enough knowledge to remove time-sucking apps or turn off their notifications. Android P will also allow users to set a time limit for an app, and once that limit is reached the app’s launch icon is greyed out.

For years apps have been using elements of social science to hook users into using their products and coming back again and again. Maybe giving users insight into those manipulations can help them regain control of their time. It will be interesting to see how companies like Google, which benefit from increased engagement and more time spent in an app, will balance the realization that technology isn’t always benefiting users against their bottom line.
