I read an interesting post yesterday titled “Apple doesn’t understand photography.” It explains how Apple thinks the typical use case for their photo app is a holdover from the days of analog photography: “we go on a trip, take a bunch of photos then struggle with how to show our friends these photos when we get back from our trip.” The post goes on to say that those kinds of photos are maybe 1% of all photos taken with a phone camera. The rest, says the post, use the phone for:
- Zooming in, especially for text. I admit I’ve done this before, too, especially to take photos of text that isn’t directly visible, such as stickers stuck on the underside of furniture and appliances.
- Memorization. The author’s example: his daughter took a photo of a missing cat poster. Yesterday I took a photo of a trail map in an area without mobile reception and I often take a photo to remember where I parked.
- Mirror. For me this is especially true when picking out glasses: I cannot see without them on, so I cannot tell what the frames look like.
- Receipt tracker. I do this all the time, too, especially for expense tracking and medical bills.
- Product minder. A bottle of wine that was liked, or a book that was recommended.
As I nodded to myself when reading the post, I looked through my Photos app (on Android) and noticed that I’ve also used my phone camera in the last month to:
- Document an important letter as I mailed it.
- Prove how a package and product arrived damaged.
- Record insurance and license data of a driver who caused a minor fender-bender as I was too shaken to write down information.
- Share product information, such as color or design, with the intended owner of a product while at the store, saving them the hassle of going to the store and me the hassle of returning the unwanted item later.
- Reduce backpack weight by photographing a few, select pages of a travel guidebook that are relevant to that day’s trip.
- Preserve an important whiteboard session.
- Save a recipe found in a magazine perused at the doctor’s waiting room.
- Focus on a slide during a presentation that I wanted to spend more time digesting.
And the list goes on. The truth is that many photos on my phone have nothing to do with how Apple, or Google for that matter, see their photo apps. Sure, there are some special events and trips that I’d like to share with friends and family, but the overwhelming majority of photos are not like that. Yet both Apple and Google currently focus solely on the people-and-event type of photos, recognizing people, places and objects, with Apple recently adding facial expressions and Google claiming to recognize events in a certain time frame (e.g., “a wedding you attended last summer”). I’ve tried numerous such searches with Google Photos and have always been awed by its image search capabilities, such as finding a Superman costume, T-shirt and cap as results for “Superman.” It can also recognize people across the years, which I find amazing. It doesn’t do as well when I search for receipts or keywords from a flyer I photographed.
The post also mentioned that there are apps that are more fitting for taking the different kinds of photos than the default camera app, such as Evernote or a receipt app, but “that’s not how life works.” As users, we do what is easiest. And the simplicity of using the default camera app to take photos wins every time. He also suggested that Apple add intelligence to how it handles photos. “As soon as you take a photo the camera could detect what kind of photo it is and label it as ‘Receipts’, ‘Notes’ or ‘Expire after 7 days’. This could be a pop-up that would float there over the photo you just took. With one finger you can confirm or change a label, or just ignore it when it is right.”
While tagging would be a good start, users might not appreciate adding a step to taking a photo. There are two things that need to change to adapt to changes in camera usage:
- Categorization. To acknowledge the different use cases, photo apps could add intelligence to detect these new types of photos and treat them differently than the “traditional” people/event/landscape photos. Examples of these “new” formats could be receipts, flyers, recipes, maps, report cards, bills, letters, envelopes, parking poles, signs, etc. The formats can be identified without text recognition, if users prefer that their images not be read. By recognizing and grouping such photos, users will be able to find the ones they need more efficiently. By letting users decide whether the text should be read and the recognized words made searchable, apps can maintain user privacy while still offering better search and categorization.
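The categorize-first, read-text-only-on-opt-in flow could look something like the following minimal sketch. The category labels and the `ocr` callback are hypothetical stand-ins for a real on-device image classifier and text recognizer, which are not specified in this post.

```python
from dataclasses import dataclass

# Hypothetical "new" formats the classifier might emit, per the list above.
CATEGORIES = {"receipt", "flyer", "recipe", "map", "bill", "letter", "sign"}

@dataclass
class Photo:
    tags: list                 # labels from a (hypothetical) image classifier
    category: str = "traditional"
    searchable_text: str = ""  # stays empty unless the user opts in

def categorize(photo):
    """Pick the first known non-traditional category the classifier saw."""
    for tag in photo.tags:
        if tag in CATEGORIES:
            return tag
    return "traditional"

def index_photo(photo, ocr_opt_in, ocr=lambda p: ""):
    """Categorize without reading text; OCR only with explicit consent."""
    photo.category = categorize(photo)
    if ocr_opt_in:
        photo.searchable_text = ocr(photo)
    return photo
```

The key design point is that categorization and text recognition are separate steps, so grouping works even for users who keep OCR off.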
- Timing. Backup and deletion of photos is also currently based on the assumption that users want to back up every photo because every photo is a cherished memory. That assumption needs to change, and will eventually drive a change in how apps treat photos based on their content and the time since they were taken. Is it safe to auto-delete a parking location after a week? Probably. Is it safe to delete a receipt after a year? Maybe, but that’s best left for the user to decide. Should some photos be diverted into other apps, such as receipt trackers or list apps? Tagged, deleted or archived as a group?
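Category-aware retention could be sketched as a simple per-category expiry policy. The windows below are illustrative defaults only, matching the examples above (a week for parking, a year for receipts, never for traditional photos); a real app would let the user override them.

```python
from datetime import date, timedelta

# Illustrative defaults; None means "never expire" (cherished memories).
DEFAULT_RETENTION = {
    "parking": timedelta(days=7),
    "receipt": timedelta(days=365),
    "traditional": None,
}

def expired(category, taken, today, retention=DEFAULT_RETENTION):
    """Return True if a photo's category-specific window has passed.

    Unknown categories fall back to never expiring, so nothing is
    deleted without an explicit policy.
    """
    window = retention.get(category)
    if window is None:
        return False
    return today - taken > window
```

An app would likely surface expired photos as a "suggest deletion" group rather than deleting silently, keeping the final call with the user.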
Based on the current AI and learning capabilities of photo apps I have no doubt they can serve users better. The first step is realizing that the user story has changed. It’s no longer about saving special moments, or telling and sharing a story. It’s about convenience, everyday things to remember, and sometimes meaningless visual communication. The solutions will follow.