NEW YORK — On a Harlem street this summer, New Yorkers caught a glimpse of the future. Strutting between a logjam of NYPD vehicles blocking an intersection was one of the NYPD’s newest recruits: a robotic canine called Digidog, emblazoned with the department’s blue and white colors and outfitted with a number of high-tech accessories.

The funds to purchase the cybernetic hound did not go through the standard budgeting process, which requires oversight and a vote from the New York City Council. Instead, police brass received cash directly from the federal government under something called the Equitable Sharing Program, which supplements the budgets of local police departments with money and property forfeited in the course of criminal investigations.

The multibillion-dollar initiative has helped law enforcement agencies pay overtime and arm themselves with equipment and sophisticated weaponry since the Reagan era. But the program is now entering a new phase as it provides access to futuristic high-tech policing tools that have raised fresh questions about the balance between privacy and public safety, along with the biases inherent in supposedly neutral algorithms.

Advances in artificial intelligence, surveillance and robotics are putting the stuff of yesteryear’s science fiction into the hands of an ever-growing list of municipalities, from New York City to Topeka.

Privacy advocates are worried both about the Digidog — the devices can collect data, and the NYPD has not thoroughly disclosed how that information would be used — and about high-tech policing tools in general.

“More departments are using more tools that can collect even more data for less money,” said Albert Fox Cahn, head of the New York City-based watchdog group Surveillance Technology Oversight Project.
“I’m terrified about the idea that we’ll start seeing decades of work to collect massive databases about the public being paired with increasingly invasive AI models to try to determine who is and who isn’t a threat.”

While recognizing that technology can sometimes be a helpful tool to fight crime, privacy advocates nevertheless worry about a lack of guardrails around the ethics of police departments using robots, facial recognition and increasingly broad local surveillance networks.

At the end of a press release announcing the purchase of two Digidogs, for instance, the NYPD sought to assuage a concern grimly indicative of this new era. “Under the NYPD’s protocols, officers will never outfit a robot to carry a weapon and will never use one for surveillance of any kind,” the department wrote.

It turns out that’s an important disclaimer. Companies like Ghost Robotics have already attached sniper rifles to quadruped robots. And in November, the San Francisco Board of Supervisors voted to give law enforcement robots the authority to use lethal force. That proposal — which would have allowed police to place explosives on automatons in limited circumstances — was reversed after public outcry. But the board left the door open to reconsidering the initiative in the future.

Other technology seems to have biases baked into its foundation, with serious implications for communities of color. And vast amounts of biometric data, along with license plate readers that can pinpoint the location of a particular vehicle, are creating the capability for broad surveillance of the citizenry.

“In our country, the police should not be looking over your shoulder, literally or figuratively, unless they have an individualized suspicion that you are involved in wrongdoing,” Jay Stanley of the American Civil Liberties Union said in an interview.
“They can’t just watch everybody all the time in case you commit a crime.”

Alongside the new concerns that come with each technological advancement, the money underwriting some of these products is also under increasing scrutiny.

Read the full story from POLITICO New York here.