YouTube Kids and the corruption of recommendation algorithms

Recommendation engines have been around for years, at least since Amazon started correlating shared purchases and suggesting products (or was it only books back then?) with “since you bought this, you might like this.” It was a good-enough recommendation algorithm that helped shoppers sift through endless options to find what was relevant for them.

The goals for today’s recommendation engines haven’t changed much from those early years: find the user things they like in order to either sell them more stuff or keep them on the site longer to show them more ads, also known as “engagement.” Yet while today’s recommendation engines have the same goals, they arrive at completely different methods and results. On one hand we have Facebook’s tinkering with its Newsfeed algorithm, where attempts to increase engagement have had negative results such as filter bubbles and the promotion of extreme content. On the other we have Spotify’s amazing discovery playlists, such as the Daily Mix, which almost always delight me with their selection of new-to-me music. In between we have Netflix, which sometimes gets it right, and more selective stores, like Nordstrom, that do a decent job of suggesting products others bought. For most of these the recommendation engine is a black box for users, and its effects are measured religiously.

Yet with all recommendation engines, especially when user-generated content is involved, it’s a game between the platform, which decides what to recommend, and content providers, who try to influence that decision. What got me thinking about just how intensely this game is played is this very detailed post from a few weeks ago about how independent producers are gaming YouTube Kids. “Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale,” said the author, James Bridle. At the time, the post seemed too extreme and I waited for other analysts to weigh in. This week, John Biggs at TechCrunch said conclusively: YouTube isn’t for kids. “YouTube is a cesspool of garbage kids content created by what seems to be a sentient, angry AI bent on teaching our kids that collectible toys are the road to happiness. YouTube isn’t for kids. If you give it to kids they will find themselves watching something that is completely nonsensical or something violent or something sexual. It’s inevitable.” This is as condemning as it comes.

YouTube’s and YouTube Kids’ reach is incredible. Earlier this week, Ofcom published its “Children and Parents: Media Use and Attitudes” report for the UK, which includes these numbers:

Younger children especially watch a lot of YouTube Kids.
Source: Ofcom

  • YouTube is the content provider that the highest proportion of 12-15s say they ‘ever’ watch – 85%.
  • YouTube is the only content provider, of the 14 examples, which is used by a majority of 12-15s to ‘often’ watch content.
  • Use of the YouTube website or app increases with the age of the child, accounting for 48% of 3-4s, 71% of 5-7s, 81% of 8-11s and 90% of 12-15s. Use of YouTube has increased since 2016 by 11 percentage points for children aged 3-4, by 17 percentage points for 5-7s and by 8 percentage points for 8-11s. [Note: YouTube Kids was launched in February 2015.]
  • Half of YouTube users aged 3-4 (48%) and a quarter (25%) aged 5-7 only use the YouTube Kids app rather than the main YouTube website or app.

These numbers are high but not entirely surprising. Parents trusted Google when it said YouTube Kids was child-friendly: “the app makes it safer and easier for children to find videos on topics they want to explore.” Its availability on every device has made it easy to access from practically everywhere. But with that level of trust, could YouTube Kids have done more to monitor content?

Last week, YouTube issued a response that, judging by the comments on it, didn’t do enough. Of the five changes, only two went beyond guidelines: “tougher application of our Community Guidelines and faster enforcement through technology,” and removing ads from “inappropriate content.” What YouTube didn’t do is allow parents to block specific providers or, as some requested, to whitelist channels or providers. The new restrictions simply don’t go far enough in letting parents control what their children watch and block content they don’t want them to see.

What this proves, beyond the eternal axiom that we just can’t have nice things, is that once a service allows user-generated content, it finds it extremely difficult to monitor that content. Beyond that, many creators, not only on YouTube, have figured out how to game the recommendation algorithm and get their content in front of viewers. I wish YouTube had taken a stronger stance here, like blocking all creators aside from a few hand-picked ones until it can figure this out. Maybe that wouldn’t be the most profitable choice, but it might be the best one from a product perspective. Until then, I have to agree with Mr Biggs: YouTube is not for kids.


Intentional nostalgia: a guide

Simple cookies, incredible delight.

A few years ago, a coworker brought in a box of cookies his grandmother had sent him that were exactly like ones my grandmother used to bake. The moment I opened the bag, saw and smelled them, I was transported back into my grandmother’s kitchen, to when I was a child and she baked them weekly. They were just as tasty as I remembered, and I was grateful not just for the cookie but for the happy memories it brought. When he brought me those same cookies again a few months later, they were just as tasty, but the nostalgic wave had subsided significantly. Nostalgia is a beautiful thing, but it’s a button that cannot be pushed too many times.   

Lately it seems that too many products mobilize nostalgia-as-a-feature, often as a blatant way to increase sharing and engagement. Many social and photo apps have been around for over a decade, during which they have gathered many special moments that are often fun to revisit. Yet evoking those nostalgic moments doesn’t always work. Here are a few things I’ve noticed about what works well:

  1. Make sure the moment is a good one. Not every moment in our past is a happy one, and even if we shared it at the time, an intentionally nostalgic reminder can seem forced. Even what users willingly shared years ago may be painful today, such as the death of a loved one or the end of a relationship. I’ve found that the better nostalgic features use some connection to the present to make assumptions about the past (see the sketch after this list). For example, Google Photos deals with this really well with its “Then & Now” collage because it uses current information to bring up the past: if I took a photo of Joe today, I’m probably OK with seeing Joe eight years ago.

    One week on Facebook: memories, a friendversary and a month in review.

  2. Don’t overdo it. Google Photos surfaces nice moments in its “Rediscover this day” collages, but when I receive such reminders every day, it gets old. Even though I love seeing my family members change through the years, that’s a button that can’t be pushed that often. Likewise for Facebook, where this week I got reminders of my Memories, of a post I shared two years ago, my October Memories, and a Friendversary, which really has no meaning for me because we became friends long before that.

    Nostalgic for a salad?

  3. Remind users of significant moments. This requires some knowledge of what makes a moment special, but offhand I’d include important people during special occasions like birthdays, weddings, first days of school, and holidays. I’m in awe of Google Photos’ ability to recognize members of my extended family across time, and I’d like it to use that to generate reminders. I’m a bit confused about why it would choose to send me a photo of a salad from three years ago.
  4. Go beyond reminders and notifications and include flashbacks as part of the user experience, where users are part of a process that unobtrusively offers nostalgic moments they choose to explore. One great example of this approach is how people creating photo books, cards, and calendars enjoy the process because it involves choosing photos from events that gave them joy in the past. Spotify and other music apps do this as well when offering playlists from specific decades, or focused on themes from those decades. Netflix might do this even better by offering movies and TV shows by decade, prompting a weekend full of all your teen flicks.
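As referenced in the first point, here’s a minimal sketch of that “use the present to gate the past” heuristic. The photo records, person tags, and 90-day recency window are all hypothetical; it illustrates the idea, not how Google Photos actually decides.

```python
from datetime import date, timedelta

# Hypothetical photo records: (person_tag, date_taken). In a real product
# the tags would come from face clustering; here they are hand-written.
photos = [
    ("Joe", date(2009, 6, 14)),
    ("Joe", date(2017, 10, 28)),
    ("Anna", date(2011, 3, 2)),
]

def safe_to_resurface(person, photos, today=date(2017, 11, 1), recent_days=90):
    """Only resurface old photos of a person who also appears in recent photos,
    i.e. use the present to make assumptions about the past."""
    appears_recently = any(
        tag == person and (today - taken) <= timedelta(days=recent_days)
        for tag, taken in photos
    )
    old_photos = [(tag, taken) for tag, taken in photos
                  if tag == person and taken < today - timedelta(days=365)]
    return old_photos if appears_recently else []

print(safe_to_resurface("Joe", photos))   # Joe shows up recently -> resurface the 2009 photo
print(safe_to_resurface("Anna", photos))  # no recent Anna photos -> []
```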

Bottom line: I think that yes, products can harness nostalgia and the goodwill it generates, but please, do so tastefully and sparingly. When users feel, as I sometimes do, that it’s all done to manipulate them into social action, it generates impatience and ire more than goodwill. Less is more.

Social media goes to Washington: tech has a problem and needs to face it

We’re almost at one year post-election, and today there are hearings on Capitol Hill on how Russian media and advertisers on Facebook, Google, and Twitter may have influenced the outcome. Facebook especially is in the hot seat, as it has been deemed the most influential. Only this week it was revealed that 59% of Americans saw the Russian ads before the election. That’s an astounding number, especially considering that the ads targeted undecided voters and those most susceptible to being swayed.

This should be a period of reckoning for tech, especially Facebook, about the enormous influence it has on the world today and how it’s handling that power. It shouldn’t just focus on Russian advertising but more on why that advertising was so effective. This comes down to four points:

  1. Divisiveness as a product. Let’s start with Mark Zuckerberg’s opening statement in today’s earnings report. “Our community continues to grow and our business is doing well, but none of that matters if our services are used in ways that don’t bring people closer together.” That has certainly been Facebook’s mission for a while now, but it’s not exactly what it’s doing. It’s enhancing the more extreme societal viewpoints and encouraging arguments. It’s a sad truth that controversy drives engagement. Maybe that metric needs to change.
  2. Dwindling trust in media. This is the bulk of what prompted me to write last year’s post-election post, and recent research by Omidyar Network and Edelman Intelligence shows a steep decline in trust in media. Several factors contribute to this decline. First is the rise of citizen journalism, where everyone can contribute, which makes it difficult for readers to separate the wheat from the chaff, the truth from the partial truth from the completely fake. Second is the corner that traditional media, institutions with newsrooms and research staff and, above all, a reputation to uphold, have been pushed into over the last two decades. First the internet took away their advertising revenue, and then social media (especially Facebook) took away their traffic and constantly drove them to change strategies based on seemingly fickle algorithmic changes in the newsfeed. Only this week Facebook yet again changed its media policies, ostensibly to combat “fake news” but in reality capturing the big fish in its net as well.
  3. Censorship, or, to use a friendlier term, deciding what people see. Facebook does this every single time a user peruses the Newsfeed: it decides what to show them. It does that by showing users what it thinks they will like, because what they like will keep them on the site longer. By showing users what they want to see, it effectively drowns out opposing political, religious, and social views. The fact that those views are presented by friends and family, people the user respects and knows personally, gives them even greater significance. Choosing what to show users in their newsfeed may be the most controversial decision Facebook makes, and the filter bubble around users only strengthens that control.
  4. Effective persuasion. This is the most dangerous because it’s the most subversive: Facebook already has so many data points about every individual user, and the ability to reach that user in the most effective manner possible today. It also knows how to engage that user and keep them coming back to the service, be it on the site or via the app, time and time again. That’s its secret sauce, the reason its profits from advertising dwarf everyone’s except Google’s. Listen to Zeynep Tufekci’s talk from last week to understand just how well Facebook targets and, for lack of a less evil-sounding word, manipulates users: “It’s because it works great as a persuasion architecture. But the structure of that architecture is the same whether you’re selling shoes or whether you’re selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change.”

Finally, for further reading, Ben Thompson wrote an excellent summary of today’s hearings but ended with an interesting conclusion: “I still believe that, on balance, blaming tech companies for the last election is, more than anything, a convenient way to avoid larger questions about what drove the outcome. And, as I noted, the fact is that tech companies remain popular with the broader public.”

I disagree, though I don’t think “blame” is the right term here. Could the proliferation of fake news, the erosion of trust in fact-based media, the rise of highly divisive rhetoric, and the specific targeting of undecided individuals have happened without social media? It’s about how platforms such as Facebook are built and how they are being subverted. I’m more concerned about how Facebook harms our democracy than about how foreign entities are playing the game these platforms created. Facebook is troubling because while it’s built to connect people in a good way, it’s also built to bring out the worst in our collective behavior.

In conclusion, I bring you Tim Cook’s words, when asked about Russian influence: “I don’t believe that the big issue are ads from foreign government. I believe that’s like .1 percent of the issue. The bigger issue is that some of these tools are used to divide people, to manipulate people, to get fake news to people in broad numbers, and so, to influence their thinking, and this, to me, is the No. 1 through 10 issue.”

Update, November 2nd: The Verge published a partial set of the ads that Russian operatives ran on Facebook. They’re ugly, and their purpose is clearly to drive violence, hatred, and fear. With these messages and Facebook’s optimized delivery platform, it’s sadly easy to see just how much divisive harm was done.

Dynamic pricing and fairness – how to plan for acceptance

I read an interesting article last week in the NY Times that discussed the fairness, or lack thereof, of dynamic prices in different industries. The case studies included pricing for show tickets, electricity during periods of high demand, toll roads and congestion fees, prices of essential goods after a natural disaster, and, yes, Uber with its surge pricing. The interesting common thread was that consumers are willing to pay higher prices when, and this is key, they feel those prices are fair.

The article also points out that even though many consumers dislike it, there are reasons dynamic pricing is necessary, mostly to flatten peak periods and divert excess demand to periods of lower demand, whether the demand is for electricity or highway use. On one hand, dynamic pricing motivates consumers to do what the supplier wants them to do; on the other, as with airlines, it extracts more income when supply is limited and demand is very high.

That said, there are ways to build dynamic pricing models that work. As the article puts it: “What the successful examples of variable pricing have in common is that they treat customers’ desire for fairness not as some irrational rejection of economic logic to be scoffed at, but something fundamental, hard-wired into their view of the world. It is a reality that has to be respected and understood, whether you’re setting the price for a highway toll, a kilowatt of power on a hot day, or a generator after a hurricane.”

Which brings me back to Uber. The point of the Times article was not that dynamic pricing shouldn’t be used, but that it should be explained to the user in a way that caters to their sense of fairness. This isn’t always easy to do. For surge pricing, Uber realized that users prefer seeing an estimate of the total price of a ride before confirming it. “It turns out it’s easier to decide whether it’s worth $30 for a car ride and act accordingly than it is to be told that a surge multiplier of 2.5 times is in place and that the normal rate would probably come to about $12.” This is an interesting product shift that seems simple, but took Uber a long time and many complaints to implement.
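To make the arithmetic concrete, here’s a tiny illustrative calculation of the two framings, using the numbers from the quote above ($12 base, 2.5x surge). It’s arithmetic only, not Uber’s actual pricing logic.

```python
# Illustrative arithmetic only, not Uber's pricing code.
# Numbers come from the quote above: a 2.5x surge on a ride
# that would normally cost about $12.
base_fare = 12.00
surge_multiplier = 2.5

total = base_fare * surge_multiplier  # 30.0

# The same price, framed two ways:
print(f"Fares are {surge_multiplier}x the normal rate right now")  # the multiplier framing
print(f"Your trip will cost about ${total:.0f}")                   # the upfront-total framing
```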

Uber’s new long pickup fee, as presented to drivers. Will users accept it?
Source: The Verge

This week, Uber announced fees to compensate drivers for pickups that require a long drive and for time spent waiting for a passenger after arrival. Interestingly, Uber hasn’t yet provided details on how the new fees will be presented to passengers; it only details the driver’s side. I’m curious to see whether the new fees will be presented before booking, as part of the total cost, or tacked on later, something users might not like. The wait-time fee can only be added after a ride, so it will be interesting to see how it’s presented to riders in a way that won’t cause ire.

My takeaway, though, is mostly about how products need to understand the user’s sense of fairness about the price of a service, how best to communicate a price the user might not like, and at what stage. The best practices seem to be fewer surprises for users, a sense of control, and reasons for an unexpected price that establish its worth. Mr Thaler, a Nobel-winning economist, said this: “A good rule of thumb is we shouldn’t impose a set of rules that will create moral outrage, even if that moral outrage seems stupid to economists.”

Context changes everything: maps, cupcakes and how to meet user expectations

I was offline for a few days, so I didn’t hear about the Google Maps cupcake feature until yesterday, when Google pulled it. The feature showed users how many calories they could burn by walking to their destination, and it measured that walk in cupcakes. Users responded that it felt judgemental to show calories burned by walking after they had asked for driving directions, as if Google was trying to shame them into walking instead of driving the short distance. The use of mini-cupcakes as the unit was seen as targeting women specifically and added to the shaming.

I admit, I did not initially understand the negative reaction to this feature. After all, Citymapper has included a walking option, along with estimated calories burned (not based on any personal user information), since I first started using it years ago, and I have never seen any objection to that. Yet the objection was widespread and well reasoned, so I wanted to see what the difference was between this and Citymapper’s calorie option, and whether there was a way to avoid such product mishaps.

Asking for driving directions but getting an unrequested walking route as well.
Google Maps without the calorie count.

Context: The calorie count was presented only when a user mapped a route deemed short enough to walk, and the calories were shown underneath the walking time. However, these walking routes were shown when a user selected Drive as the route option, not Walk. Two things jump out at me here:

  1. Google already has a Walking option in Maps (along with Cycling, Public Transport, and Cab/Uber), so it is out of context to see a walking route when expecting a driving route; it’s not what the user asked for.
  2. Time is the parameter Maps uses for route decisions, and most users prefer the route with the lowest travel time. Calories seem out of place.

In comparison, Citymapper’s entire flow is different. All route options, including walking, cycling, cabs, and different public transit options, are presented in a list, all at once. It’s only when users select a transport method that they see a map. Google Maps, on the other hand, starts with a map and offers transport options at the top. The cupcake equivalency caused additional pushback because users didn’t expect and didn’t want to be offered dieting advice with their route lookups.

Timing: In some ways, this is another aspect of the wrong context – a new feature appearing in a familiar app with a familiar interface, in a common use case, that users didn’t like and couldn’t turn off. In Citymapper, the walking option has been available for every route since the beginning.

Scope: While I realize that Google must have tried this feature with a small percentage of its users, possibly also geographically limited, Google Maps is one of the most popular apps on both mobile platforms, used by many people every day. Even a small percentage of its users may already be too many. This is something that less popular apps might not encounter, but it means the reaction to a misstep is amplified.

Diversity: So many product decisions come back to not having enough representation on the product, design, and engineering teams in charge, resulting in products that may harm or offend underrepresented groups. In this case, Maps played into a stereotype that offended women, who felt targeted by the pink mini-cupcake calorie equivalency.

Give options: I know there’s a saying that settings are where product decisions go to die, but even if that’s the case, it’s worth giving users a clear way to disable a feature, especially when it’s not the core functionality of the app but interferes with it.

Finally, my guess is that it was the combination of confusing context in a frequently used flow and no ability to disable the feature that caused the ire. Maps is used to get from one place to another. Is it wrong to add fitness/health/diet features to a mapping app? Not necessarily, but it might be better to make them opt-in and contextually relevant.
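To illustrate what “opt-in and contextually relevant” could look like, here’s a minimal sketch of the kind of gating check a maps app might apply. The 2 km threshold and the opt-in setting are hypothetical, and none of this is Google Maps’ actual logic.

```python
# Hypothetical gating logic; not Google Maps' implementation.
WALKABLE_KM = 2.0  # assumed "short enough to walk" threshold

def show_calorie_estimate(mode: str, distance_km: float, opted_in: bool) -> bool:
    """Surface calorie info only when the user asked for a walking route,
    or explicitly opted in to fitness suggestions on short trips."""
    if mode == "walking":
        return True
    return opted_in and distance_km <= WALKABLE_KM

print(show_calorie_estimate("driving", 1.2, opted_in=False))  # False: user asked to drive
print(show_calorie_estimate("driving", 1.2, opted_in=True))   # True: user opted in
print(show_calorie_estimate("walking", 1.2, opted_in=False))  # True: calories fit the context
```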

App feature I love: ‘Go Later’ in Waze

Every once in a while I encounter a feature in an app or service that is so smart I have to, well, write a blog post about it. Today I’m loving Waze’s “Go Later” button, an option at the bottom of the destination action sheet that opens a screen I think is extremely well designed, both in terms of feature set and interface.


When is the best time to leave? Waze makes it easier to decide.

At the top is a reminder of what the user is trying to do: drive later to AT&T Park (the destination entered before) from the starting point (the current location). Below that is a dropdown for the day, with the default set to Today.

It’s the graphic below that, and the interaction with it, that I think is brilliant: the bar graph on the right clearly shows how long the journey is expected to take, when it gets worse, and when it improves. The shading of each bar, from yellow to dark red, also indicates the severity of expected traffic, and it’s instantly clear that scrolling down will allow selection of a later arrival time while considering traffic. Each arrival time has an expected drive time and, accordingly, a time to leave by in order to make it there on time. Finally, after selecting an arrival time, it can be saved so that Waze sends a reminder to leave on time. What I liked about this screen is that even though the bar format is not a widely used interface, it was instantly clear what information it was trying to convey and how to navigate that information to make the right choice.
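Under the hood, the computation is essentially: for each candidate arrival time, back out the latest departure that still makes it, given predicted drive times. Here’s a minimal sketch of that idea with made-up numbers; the prediction table is hypothetical and this is not Waze’s code.

```python
from datetime import datetime, timedelta

# Hypothetical predicted drive times (minutes) keyed by departure hour.
# In a real app these would come from a traffic-prediction service.
predicted_minutes = {15: 35, 16: 50, 17: 70, 18: 65, 19: 40}

def leave_by(arrival, predictions):
    """Return the latest departure time (on the hour, in this toy version)
    that still gets there by the requested arrival time."""
    for hour in sorted(predictions, reverse=True):
        departure = arrival.replace(hour=hour, minute=0)
        if departure + timedelta(minutes=predictions[hour]) <= arrival:
            return departure
    raise ValueError("No departure time in the table makes this arrival")

arrival = datetime(2017, 11, 3, 19, 30)      # want to be at the park by 7:30pm
print(leave_by(arrival, predicted_minutes))  # -> 2017-11-03 18:00:00
```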


Where to park – can Waze include costs?

Now, there’s no love without a desire to improve. One thing I’d like is integrated parking information. Waze already knows where I want to go – I picked AT&T Park as my destination. Then I picked its recommendation for the most popular parking lot. However, that parking lot is, as Waze tells me, an 8-minute walk from the park itself. So, it would be great if:

  1. Waze took my desired arrival time and added the walking time from the parking lot to the destination (in this case, AT&T Park itself) when calculating when to leave (a rough sketch of that arithmetic follows this list).
  2. Waze knew whether there’s room at the lot. Some lots know how many free spaces they have; could Waze create an interface for lot owners to update availability?
  3. Waze included lot pricing. Sometimes a parking lot a short distance away offers rates that make the extra walking distance worthwhile. Could Waze help make that tradeoff?
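As mentioned in the first point above, the arithmetic itself is simple. A self-contained sketch, using the 8-minute walk Waze showed me and an otherwise hypothetical drive-time estimate:

```python
from datetime import datetime, timedelta

walk_minutes = 8       # Waze's estimate from the lot to AT&T Park
drive_minutes = 65     # hypothetical predicted drive time for that departure slot
target_arrival = datetime(2017, 11, 3, 19, 30)  # when I want to be at the park

# Aim to reach the parking lot early enough to cover the walk,
# then back out the departure time from the predicted drive.
arrive_at_lot_by = target_arrival - timedelta(minutes=walk_minutes)
depart_by = arrive_at_lot_by - timedelta(minutes=drive_minutes)
print(depart_by)  # 2017-11-03 18:17:00
```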

Finally, I wonder whether the time of the alert changes if the user changes their location. In my example, I set up an alert to get to AT&T Park from my home, but if later in the day I end up closer to the destination, will Waze notify me according to my new location? Also, will the notification time change if actual traffic is worse or better than predicted? Both require constant monitoring of conditions, but the former could be extremely valuable to users. After all, better to be early than late.

All in all, a really well done feature. Respect to the UI designer who got this right.

Google’s new photo books: bold product trade-offs with a lot of potential

I’m passionate about photos. Not just taking the perfect photo in terms of composition and lighting, but photos as mementos of significant moments, as a door into our parents’ and grandparents’ lives, and as keepsakes of the special moments in my life and my family’s. I have an emotional connection to photos, and seeing a significant one takes me back to its story. I have always loved sorting through photos and creating keepsakes from them, be they one-off birthday cards, a collage, or a photo book of memorable events.

I’ve always loved creating photo books but haven’t actually made that many. It’s, well, quite a time-consuming process. For those who have never created one, there are three stages to making a photo book:

  1. Gathering and sourcing: finding all the photos you want to include. This is easier when the scope of the book is a short time period and/or a recent event. A recent vacation, a year-in-review, and a special event are all relatively easy to source, usually with one photographer adding photos to a single folder. The task becomes harder as the time period grows, especially when the source of the photos is a non-connected digital camera or, in even darker times, printed photos. The former requires going through cloud services and backup drives, while the latter requires scanning and cataloguing, as the photos have no metadata. This takes time.
  2. Selection and sequencing: picking the good ones and trying not to be too repetitive. This is easier for analog photos, as not many were taken of a single event, but it becomes more difficult in the digital age, where the quest for the perfect moment results in a lot of very similar photos, only one or two of which are fantastic.
  3. Editing – choosing layouts and adding annotation: picking a book theme/design, placing photos in the book, and adding headers and further information, page by page. With analog and older digital photos this also includes assessing how large a photo can be printed without running into resolution problems and adjusting the page layout accordingly.

I’ve recently had the chance to complete this process with one of the more customizable photo book editors out there, Mixbook, and with Google, which introduced photo books at I/O earlier this year (and gave a free one to every attendee). It’s interesting to compare the processes and results of the two, especially since it looks like Google started from scratch and tried to reimagine the process at every step.

How Mixbook does layouts: lots of options, endless customization, as much text as necessary.

In a recent project, an anniversary book that spanned years, the gathering stage took me about 25 hours… over three weeks! To make the second step easier, I organized photos in folders by year. This worked really well for my Mixbook project, as I uploaded photos and added them to the book grouped by year. Mixbook has a great feature that can hide photos already used in a layout, so it’s easier to focus on the current batch. The third step, however, again took a long time, about 15 hours, because I had to further edit my selection, build layouts, and add text for every single page of the project, while paying attention to photo quality and to each photo’s actual size versus the space allotted to it in the layout.

Google, on the other hand, takes a different approach. First, all photos have to be in Google Photos. Second, users can choose between 20 and 100 photos, the maximum allowed per book. Finally, there are no layout or text options: it’s one photo per page, white background, no text.

Let’s start at the beginning: users have three different ways to start a book.

  1. Pick an existing folder/album with any number of photos and let Google select what it considers the best ones. For example, I did this with a folder of 140 photos and it chose 77. After that selection, users can manually add and remove photos, but only from that same folder. This is good for a recent, focused event such as a vacation or party. It’s not great when a vacation’s photos are split into several folders by day, as I usually organize them: for the book I created on Google I had to merge all the photos into one folder before letting the selection algorithm work its magic. So even though Photos makes the process feel quick, there is prep work to be done to adapt to it. I also had to manually go through all the chosen photos, and in about half the cases I replaced them: with a similar photo of the same scene, with a similar photo that included different people so all participants were represented, or with an important moment that Google’s selection had missed. Liking only half the picks means I don’t really trust the algorithm to choose the best, but it’s still a rather good shortcut.

     Automatic selection: picking 77 photos out of an album of 140. This is more significant when albums are larger.

  2. Manually pick photos by scrolling through all your Google Photos, organized by date. This is more tedious than Mixbook’s traditional upload-as-you-work process, because scrolling through one long, long list of photos doesn’t give an option to pause, save the work, and assess the collection amassed so far. It also makes replacing photos difficult, since only 100 can be selected: the user has to scroll back and find a photo to unselect before continuing. Even with a long scroll, it would be easier to select every candidate at once and then whittle the set down to 100. Of all three options, this was the one I liked least; it has no benefits over Mixbook’s process.
  3. Start with one of Google’s Suggested Books. I had a “best of spring” book suggested for me, which included 37 photos. The problem was that while it included some great photos and some that represented important events this spring, it also included photos of a medical device, of irrelevant people, and of insignificant events. It also chose photos of a place alone over photos of people I care about in that place. When I narrow a season down to 37 photos, every photo counts. Anyway, as in option 1, it’s a nice shortcut, but the selection needs to be monitored and modified.

     Best of Spring 2017 – automatically created by Google.

Yet let’s look at the bigger picture. What’s interesting here is that Google is trying to take my 25-hour gathering stage and reduce it to minutes. The catch is that each shortcut comes with a tradeoff that Google hopes won’t be significant to the user.

In option 1, the tradeoff is that the “one album/folder = one book” paradigm isn’t always inclusive enough, but when it is, the automatic selection is a very powerful tool. Even with the algorithm’s not-quite-there selection, it still saves a significant chunk of time, and the selection algorithm will improve over time as it learns who is important to me and what mix of photo types I like to include (portraits, group shots, landscapes, etc.). Also, for cases where the paradigm doesn’t hold and the user needs to do significant prep work to create that one album, Google could make it easier to merge several albums (not just cut and paste), perhaps by allowing users to select more than one album before the selection algorithm gets to work.
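As a thought experiment, here’s a toy sketch of that multi-album idea: pool photos from several albums, rank them by some quality/importance score, and keep the top picks within the book’s 20–100 photo limit. The albums, filenames, scores, and score-based ranking are all made up; this is not Google Photos’ API or its selection algorithm.

```python
from itertools import chain

# Hypothetical albums: each photo is (filename, score), where score stands in
# for whatever quality/importance signal a real selector would use.
albums = {
    "Road trip - Day 1": [("IMG_001.jpg", 0.91), ("IMG_002.jpg", 0.40)],
    "Road trip - Day 2": [("IMG_101.jpg", 0.77), ("IMG_102.jpg", 0.85)],
    "Road trip - Day 3": [("IMG_201.jpg", 0.66)],
}

BOOK_MAX = 100  # Google's stated upper limit per book (20-100 photos)

def select_for_book(albums, max_photos=BOOK_MAX):
    """Merge all albums into one pool and keep the top-scoring photos."""
    pool = chain.from_iterable(albums.values())
    ranked = sorted(pool, key=lambda photo: photo[1], reverse=True)
    return [name for name, _ in ranked[:max_photos]]

print(select_for_book(albums, max_photos=3))
# -> ['IMG_001.jpg', 'IMG_102.jpg', 'IMG_101.jpg']
```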

Another drawback of option 1 is the assumption that all photos were taken on a phone and backed up to Google Photos automatically. That would, indeed, shorten the process significantly. However, many photographers still use dedicated cameras, especially for special events and vacations. This site claims photos taken by Nikon, Canon, Sony, and Fujifilm cameras make up over 80% of all photos shared online on sites including Flickr, 500px, and Pixabay. Despite the glaring absence of social media platforms from that list, it’s still a significant number. What this means for Google’s process is that more prep work is required to get those photos online. It’s not a dealbreaker.

In option 3, the tradeoff is that for the short time it takes to review Google’s selection, the result can be a really cute keepsake. What I liked in this scenario is that it’s based on a time period that Google identified as significant for me. It could do the same for a day, a weekend, or even an event spanning a few hours. That’s powerful.

After taking shortcuts in gathering and selection, in step 3, editing, Google has gone all out: only one photo per page, white background, no text. The only choice is between the photo’s original shape on white, a square crop on white, or a full-page photo. There is no editing. All that’s left is selecting a title for the book. While this absolutely saves time, it’s a bit too harsh on the creative soul. No way to add descriptions, dates, locations, and, perhaps most important, people?

I, for one, can live with the simpler design and no theme choice. I might even be persuaded that one photo per page is not so bad, even though my photo book philosophy is that more is best. What I can’t live with is no text at all. A photo book is a story. It’s me telling a story to my friends and family, now and in the future, and my story needs text. What I did to overcome this limitation in the road trip photo book I created was add photos of signs from the places we visited. That counted against my overall photo limit, which meant fewer photos of people were included.

What Google did is make photo book creation very easy for a very specific use case. It automated parts of the process, but in doing so made the manual parts more difficult. By taking away design options, it also shortened creation time significantly. But at what price? For someone like me, who loves photos, has a huge collection, and wants to delight friends and family with personal keepsakes, this isn’t the right product. While I love the shortcuts, the prep work is significant, the AI isn’t quite there yet to pick out the absolute best photos, and the lack of any customization goes too far.

That said, Google getting this right is an intriguing prospect. The AI will improve and the photos selected will become more relevant. Instead of not allowing any text, captions for locations, dates, and people could be added automatically. It could do more: add multiple photos per page that have a connection between them, such as the same day, event, or person. It could make creating a book from several albums easier with an overarching selection algorithm. It could really tell a story, which is more than just the photos, and it could do that in less than an hour.

I can’t wait.