Faculty Commentary

Uber’s Algorithms Could Spot Crimes in Progress. But Do We Want Them To?


The news out of Kalamazoo, Mich., this past weekend was grim: Authorities say Jason Brian Dalton, an Uber driver, shot and killed at least six people in different locations in the space of a few hours. Chillingly, Dalton apparently took several fares in between his alleged attacks.

The case raises difficult questions about Uber’s responsibility toward passengers and the public. For example, could the company have prevented Dalton’s crimes? And if so, how should Uber and regulators respond? More stringent background checks are one option, but they can be discriminatory. They also quickly lead to diminishing returns, since they measure past behavior, and it is incredibly difficult, even based on current behavior, to predict who will become a mass shooter.

There is one thing Uber probably could do using its existing technology and the massive amounts of data it already collects about its drivers and passengers: The company could spot crimes by its drivers as they take place.

But while that approach might be more effective than implementing more background checks (and more allegedly misleading “safety fees”), it also raises profound and unsettling questions about the balance between safety, civil liberties and corporate power.

When you open its app to request a ride, Uber estimates how quickly a car could reach you, and may require you to agree to pay a higher fare if there’s unusually high demand. Once you request a car, Uber shows you that car’s progress toward you in real time. During your ride, Uber tracks your progress toward your destination. After your ride ends, Uber quickly sends you an email mapping the ride and noting your fare.

In other words, the company knows where its users are, and where its drivers are, at least when they are logged on, and apparently even when they are not. It ostensibly uses that data to adjust fares based on demand in order to encourage drivers to enter and exit certain zones of cities to pick up riders. I’ve argued elsewhere that it can and should use that data to ensure its drivers don’t discriminate, and to ensure they receive at least a minimum wage.

Even before the Kalamazoo shootings, the firm had begun to use data to ensure safety. For example, Uber has a pilot program in Houston — based on similar programs launched by insurance companies — that uses a smartphone app to monitor how safely drivers drive, sensing rapid turns, speeding and the like. That's a smart approach, since it enables the company to weed out drivers who are statistically more likely to cause accidents.
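
To make that concrete, here is a minimal sketch of the kind of check such a telematics pilot implies: flagging harsh maneuvers in a trip's sensor trace. The thresholds, field names and logic below are illustrative assumptions, not a description of Uber's actual system.

```python
# A hypothetical telematics check: flag risky maneuvers in one trip's
# smartphone sensor trace. Thresholds and field names are illustrative.
from dataclasses import dataclass

@dataclass
class SensorSample:
    timestamp: float            # seconds since trip start
    speed_mps: float            # GPS-derived speed, meters per second
    lateral_accel: float        # m/s^2 from the accelerometer (turns)
    longitudinal_accel: float   # m/s^2 (braking and acceleration)

HARSH_TURN = 4.0        # m/s^2 lateral; an assumed cutoff
HARSH_BRAKE = -3.5      # m/s^2 longitudinal; an assumed cutoff
SPEED_CAP_MPS = 31.0    # ~70 mph, standing in for a posted limit

def flag_events(samples: list[SensorSample]) -> list[str]:
    """Return human-readable flags for risky maneuvers in a trip."""
    events = []
    for s in samples:
        if abs(s.lateral_accel) > HARSH_TURN:
            events.append(f"harsh turn at t={s.timestamp:.0f}s")
        if s.longitudinal_accel < HARSH_BRAKE:
            events.append(f"harsh braking at t={s.timestamp:.0f}s")
        if s.speed_mps > SPEED_CAP_MPS:
            events.append(f"speeding at t={s.timestamp:.0f}s")
    return events
```

A driver whose trips accumulate many such flags would, on this logic, be among those statistically more likely to cause accidents, which is exactly the screening a pilot like Houston's is after.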

But it’s only the beginning. Uber doesn’t just want to displace taxis. It wants to become an all-purpose urban transportation and logistics firm. That would make it one of a handful of emerging monopolies in the tech sector, alongside Google (which dominates search and has a finger in almost all online behavior), Amazon (which dominates online retail, and whose chief executive, Jeff Bezos, owns The Washington Post) and Apple (which dominates music sales). Those companies and others are at the center of Silicon Valley’s push to link together billions of personal devices into networks that enable real-time performance tracking beyond anything in human history. By gathering data on devices’ performance and determining patterns, big data may enable more efficient means of heating buildings, moving traffic through streets, ensuring quality health care, delivering fresh produce — you name it.


Should Uber develop algorithms to take a quick look at a driver, in real time, upon receiving a user complaint? Such a complaint was reportedly lodged before Dalton’s alleged killing spree. An Uber executive has noted that those complaints are often unfair toward drivers, a valid point; but if the technology exists to confirm the passenger’s report, that concern is mitigated. Is it so far-fetched to think that the right algorithms could detect when someone is driving erratically even before anyone complains? Or even link a driver to several crime scenes, by comparing locational data to police scanner reports or to trending activity on Twitter and Facebook?
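
The cross-referencing imagined here could, in principle, be a simple geotemporal join: was this driver near a reported incident around the time it was reported? The sketch below assumes access to a driver's location pings and a feed of geotagged incident reports; the function names, radius and time window are hypothetical.

```python
# A hedged sketch of a geotemporal cross-check: count how often a driver's
# location pings fall near reported incidents around the reported time.
# Field layouts, thresholds and names are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def proximity_matches(pings, incidents, radius_km=0.5, window_s=600):
    """pings: (timestamp, lat, lon); incidents: (timestamp, lat, lon, id).
    Returns ids of incidents the driver was near around the reported time."""
    matches = []
    for t_i, lat_i, lon_i, incident_id in incidents:
        for t_p, lat_p, lon_p in pings:
            if abs(t_p - t_i) <= window_s and \
                    haversine_km(lat_p, lon_p, lat_i, lon_i) <= radius_km:
                matches.append(incident_id)
                break  # one nearby ping per incident is enough
    return matches
```

Any real system would have to tune the radius and window carefully: too loose, and every driver in a dense city "matches" every incident; too tight, and genuine matches slip through.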

Uber could also add a so-called panic button to its app, as it has done in India. The company says this is unnecessary because “in the United States, 911 is the panic button,” and it is true that our emergency response systems are far better than those in many developing countries. But a panic button could communicate an SOS to local police and to the company, which — unlike the police — will instantly know the user’s location. If you’re trapped in a car with a speeding driver, do you really want to describe your situation to police with the driver listening? Or do you want Uber relaying your complaint and your location to the authorities in real time while you sit silently in the car?
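
As a thought experiment, here is what such a silent relay might carry. The payload schema and names below are invented for illustration; they are not a real Uber or public-safety API.

```python
# A hypothetical panic-button payload. The point the column makes is that
# the company, unlike a 911 call-taker, already knows the rider's location
# and trip context, so the rider never has to speak. Schema is invented.
import json
import time

def build_sos_payload(trip_id: str, rider_id: str, driver_id: str,
                      lat: float, lon: float) -> str:
    """Assemble the message a panic button could relay to dispatchers."""
    return json.dumps({
        "type": "rider_sos",
        "trip_id": trip_id,
        "rider_id": rider_id,
        "driver_id": driver_id,                # identifies the vehicle
        "location": {"lat": lat, "lon": lon},  # known without being asked
        "sent_at": time.time(),                # lets dispatchers judge staleness
    })
```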

No strategy is ever perfect, however, and explicitly and openly monitoring drivers or implementing a panic button would carry costs. One relates to the legal status of Uber’s drivers. If the company acknowledges that it can or does exert intimate control over their work in order to protect riders and the general public, it will be more likely to be found to be their employer under the most common legal test for employment. The firm’s business model, so far, has relied on its assertions that all of its drivers are merely independent contractors.

Another set of hard questions revolves around civil liberties. The issue isn’t privacy per se — Uber is already collecting all the data, or will be soon — but rather whether the company should be legally required to alert police about particular drivers. The Philip K. Dick story (and the later film) “Minority Report” comes to mind. So does the impact of such a policy on more mundane criminal matters. Would Uber be expected to report passengers for frequenting known open-air drug markets or illegal red-light districts? Should it report drivers for unpaid parking tickets or other warrants? For failing to pick up passengers with disabilities?

There is, indeed, something deeply frightening about the prospect of a global logistics firm with the capacity to monitor our every movement collaborating intimately with the police. Yet the reality, I fear, is that such data already exists and is already being analyzed, and law enforcement agencies will be demanding it soon. (Just look at the current standoff between the FBI and Apple over the contents of the San Bernardino shooter’s phone.)

The key, then, is to permit Uber and its ilk to gather and use such data, so long as they also advance basic public goods, including safety, drivers’ rights and civil liberties. Ideally, lawmakers would require Uber and other firms to bargain over their data and privacy policies with associations of users, drivers and other stakeholders, and to be transparent about how those policies affect our lives. That may be utopian, but the alternative — private exercise of state-like power — would be downright undemocratic.


This article was originally published in the PostEverything section of The Washington Post.

Questions about this post? Drop us a line at lawcomm@temple.edu.