Yesterday Uber again caught the public's eye when surge pricing went into effect in Sydney, Australia, during a hostage situation, as people tried to leave the area. Keeping in mind that the Silicon Valley echo chamber amplifies anything pertaining to tech, I thought this would end up being a minor issue. Yet it later made the mainstream news outlets as well, as part of the reporting on the ongoing situation.
Initially Uber Sydney justified the surge pricing. They tweeted:
We are all concerned with events in CBD. Fares have increased to encourage more drivers to come online & pick up passengers in the area.
— Uber Sydney (@Uber_Sydney) December 15, 2014
In theory, Uber is right. Uber’s algorithm, attuned to shifts in supply and demand, correctly responded to increased rider demand to get out of Sydney’s financial district as quickly as possible. The higher prices were intended to bring more drivers to the area, and maybe they did. Yet users reported never having seen a fare multiplier as high as 4 before, and Uber eventually promised to refund all rides out of Sydney during the hostage situation. The response was mostly a cry for Uber to apply some “human decency” when raising prices, and some went as far as to say that drivers might not be as motivated by money as Uber thinks. For some drivers, helping others even at the normal rate might have been a sufficient motivator to pick up rides in Sydney’s financial district.
It’s understandable that an algorithm can’t grasp “human decency.” Artificial intelligence is not quite there yet. However, humans can still intervene when the algorithm reaches an unexpected result. Why would a surge multiplier of four apply to downtown Sydney on a normal Monday in December, when historically Monday rider demand in Sydney hasn’t exceeded driver supply? We talk a lot about analyzing ever more user data to determine expected product behavior. In Uber’s case, that same analysis might have recognized an abnormal situation and alerted a human, along the lines of the check sketched below.
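Here is a minimal sketch of what such a safeguard could look like. It is purely illustrative: the function names, the demand figures, and the thresholds are my own assumptions, not anything Uber has described. The idea is simply to compare current demand against a historical baseline and escalate to a person when both the demand spike and the proposed multiplier are far outside the norm.

```python
# Hypothetical sanity check before auto-applying a surge multiplier.
# All names, numbers, and thresholds are illustrative assumptions.
from statistics import mean, stdev

def needs_human_review(current_demand: float,
                       historical_demand: list[float],
                       surge_multiplier: float,
                       max_expected_surge: float = 2.0,
                       z_threshold: float = 3.0) -> bool:
    """Flag a pricing decision for human review when demand is far outside
    the historical range and the proposed surge multiplier is unusually high."""
    baseline = mean(historical_demand)
    spread = stdev(historical_demand) or 1.0  # avoid division by zero
    z_score = (current_demand - baseline) / spread
    return z_score > z_threshold and surge_multiplier > max_expected_surge

# Example: Monday-afternoon demand in the Sydney CBD has hovered around
# 100 ride requests per interval; it suddenly spikes to 450 and the
# algorithm proposes a 4x multiplier -- escalate instead of auto-applying.
history = [95, 102, 98, 110, 90, 105, 99, 101]
if needs_human_review(450, history, surge_multiplier=4.0):
    print("Abnormal demand spike: hold surge pricing and alert an operator.")
```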
As this incident unfolded last night, I tried to remember other incidents where “the algorithm” had been blamed for a service’s misbehavior. Google’s AdSense is usually a finely tuned machine, able to determine which ads are most likely to be clicked (thus producing income for Google) based on the user’s search terms and location and on the previous performance of the ad and its specific copy. The AdSense algorithm constantly optimizes ad display for maximum benefit to both Google and the advertiser. Almost two years ago, a professor at Harvard realized that on AdSense “a black-identifying name was 25 percent more likely to get an ad suggestive of an arrest record.” She explained this as an unfortunate but automatic extension of the AdSense algorithm, which reflects the racism exhibited over time by its users.
A possible scenario imagined by Salon makes a lot of sense: “An employer is Googling prospective job applicants. Some of those applicants have black-identified names. Due to his or her personal racism, the employer happens to be more likely to click on the ads that suggest ‘Arrested?’ next to the black-identified names. And over time, Google’s AdSense algorithm learns that ‘ads suggestive of an arrest record’ work better when associated with black-identified names.”
As in the Uber case, the algorithm is doing what it is supposed to do. But, as in the Uber case, could a little humanity help prevent the harm? The problem may be harder to identify with AdSense, but it is possible to address it by using historical data to weed out the potential racial bias, as the sketch below illustrates.
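As a rough illustration of that kind of audit, the snippet below compares how often “arrest record” ads were served alongside each group of names in historical logs and flags a large disparity. The log fields, group labels, and the 1.25 threshold (echoing the “25 percent more likely” finding) are all assumptions for the sake of the example, not Google’s actual data model.

```python
# Hypothetical audit of historical ad-serving logs for racial bias.
# Field names, group labels, and the 1.25 threshold are illustrative.
from collections import defaultdict

def arrest_ad_rates(log_entries):
    """log_entries: iterable of dicts with 'name_group' and 'ad_type' keys.
    Returns the share of impressions that were arrest-record ads per group."""
    shown = defaultdict(int)
    arrest = defaultdict(int)
    for entry in log_entries:
        shown[entry["name_group"]] += 1
        if entry["ad_type"] == "arrest_record":
            arrest[entry["name_group"]] += 1
    return {group: arrest[group] / shown[group] for group in shown}

logs = [
    {"name_group": "black_identified", "ad_type": "arrest_record"},
    {"name_group": "black_identified", "ad_type": "arrest_record"},
    {"name_group": "black_identified", "ad_type": "neutral"},
    {"name_group": "white_identified", "ad_type": "neutral"},
    {"name_group": "white_identified", "ad_type": "neutral"},
    {"name_group": "white_identified", "ad_type": "arrest_record"},
    # ... a real audit would cover many more impressions
]
rates = arrest_ad_rates(logs)
if rates["black_identified"] > 1.25 * rates["white_identified"]:
    print("Potential racial bias in ad targeting: review and rebalance.")
```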
The bottom line is that when your algorithm gets your product in trouble too often, it’s time to stop blaming the algorithm and either change it or add a dash of humanity.