A vacation away from Twitter and Silicon Valley serves as a healthy step back and allows me to take a higher-level look at how tech is perceived by outsiders. The unpleasant truth is that tech is no longer the shining light it once was (once being a scant few months ago), and people outside the industry are starting to see some of its more negative aspects. Some are issues that go beyond a specific technology, platform, or service, and I’d like us, as PMs, to start thinking about and planning for the negative aspects of the technology we create.
These are my personal priorities:
First, don’t harm people. What tech companies have done with the so-called gig economy might be wonderful for those who own the platforms, but it’s turning out to be terrible for many of the people trying to make a living from them. Here in the US, where so many benefits are tied to employment, part-time work with random schedules and unpredictable pay is not working out, and it’s hurting entire swathes of society. From a few weeks ago, and damning: “The gig economy can also be thought of as an elaborate way around an age-old problem: how to control workers without the expense of making them employees — and without breaking the law.” Without naming companies, there should be a point where the potential negative impact on the people who provide these services, for a lower cost than ever before, is considered and addressed.
The second part of this is low-wage employees, like warehouse staff. To meet ever-shorter delivery-time promises for an ever-greater range of goods, warehouse personnel work long hours under worsening conditions, with on-the-job injuries and lower pay, often in locations where there are few other employment opportunities. Is fast, cheap delivery really worth the cost to so many livelihoods?
Second, don’t harm society. Several tech companies are currently harming society and democracy by letting the idea of “open platforms” — a place to discuss ideas — be subverted into a stream of misinformation and lies: ideas that, regardless of their intent, drive hate and eventually violence. It’s disingenuous to keep promoting certain bad actors when it’s so very clear that merely giving them a platform causes immense harm, not only to individuals but to society at large. Can we at least try changing the rules of the feed, where users, in their quest for popularity, have learned that divisiveness is the key and that posting inflammatory content is rewarded with more likes, retweets, comments, and shares? And with the shift toward more AI-driven algorithms on these platforms, let’s examine the biases and assumptions that lead to these results.
Third, don’t harm children by creating apps and platforms that, if they don’t harm them outright, certainly don’t benefit them. I’m thinking specifically of YouTube Kids and the harmful content there, but also of apps like Facebook Messenger, which is targeted at kids too young to need it. From the moment YouTube became aware that “bad actors” (again, this ugly term) were using its platform to reach kids with atrocious content, it should have switched to whitelisted content only, instead of tinkering with its recommendation algorithm. Nor do I think that expecting parents to report “bad content” is the right path. This shouldn’t be a somewhat random process that may or may not happen because a parent stumbles onto a video, but rather proactive action taken by the platform.
Finally, because I don’t want to just gripe, I looked for a set of tools to expand product thinking beyond the common desire to delight the user. I came across a beautiful set of cards, called the Tarot Cards of Tech, that do a great job of asking questions from the feature- and product-ideation stage on through development. These two cards in particular speak to some of what I’ve tried to express here:
- The Radio Star: who or what disappears if your product is successful? Who loses their job? What other products or services are replaced?
- The Big Bad Wolf: what could a bad actor do with your product? What would predatory and exploitative behavior look like with your product? What product features are most vulnerable to manipulation? Who could be targeted with your product?
The bottom line: ask more questions, even the hard ones, and commit to answering them and finding solutions — something we all need to do more of.