
Legislators, Stockholders, Civil Rights Groups, and a CEO Seek Limits on AI Face Recognition Technology

Following the tragic killings of journalists and staff inside the Capital Gazette offices in Annapolis, Maryland, in late June, local police acknowledged that the alleged shooter’s identity was determined using a facial recognition technology widely deployed by Maryland law enforcement personnel.  According to DataWorks Plus, the company contracted to support the Maryland Image Repository System (MIRS) used by Anne Arundel County Police in its investigation, its technology uses face templates derived from facial landmark points extracted from facial image data to digitally compare faces to a large database of known faces.  More recent technologies, relying on artificial intelligence models, have enabled even faster and more accurate image and video analysis, which federal and state law enforcement use for facial recognition.  AI-based models can process images and video captured by personal smartphones, laptops, home or business surveillance cameras, drones, and government surveillance cameras, including body-worn cameras used by law enforcement personnel, making it much easier to remotely identify and track objects and people in near-real time.
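DataWorks Plus has not published its matching algorithm, but the general template-and-compare approach described above can be sketched with the open-source face_recognition library.  In the hypothetical example below, the library locates facial landmarks, aligns the face, and computes a 128-dimensional encoding (the “face template”), then measures the distance between a probe face and a small gallery of known faces; the image file names are placeholders, and none of this reflects the actual MIRS implementation.

```python
# A minimal sketch of template-based face matching using the open-source
# face_recognition library; an illustration only, not DataWorks Plus's
# actual MIRS system. Assumes each image contains one detectable face.
import face_recognition

# Build a "face template": the library detects facial landmarks, aligns the
# face, and computes a 128-dimensional encoding from the image.
probe_image = face_recognition.load_image_file("probe.jpg")  # placeholder path
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Encode a small gallery of known faces (a stand-in for a large database).
gallery_paths = ["known_face_1.jpg", "known_face_2.jpg"]     # placeholder paths
gallery_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in gallery_paths
]

# Lower distance means more similar; compare_faces applies a default
# tolerance of 0.6 to turn distances into match/no-match decisions.
distances = face_recognition.face_distance(gallery_encodings, probe_encoding)
matches = face_recognition.compare_faces(gallery_encodings, probe_encoding)
for path, dist, is_match in zip(gallery_paths, distances, matches):
    print(f"{path}: distance={dist:.3f}, match={is_match}")
```

A production system like MIRS would index millions of templates and search them efficiently, but the core operation, comparing a probe template against stored templates, is the same.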

Recently, facial recognition use cases have led privacy and civil liberties groups to speak out about potential abuses, with a growing vocal backlash aimed at body-worn cameras and facial recognition technology used in law enforcement surveillance.  Much of the concern centers on the lack of transparency in the use of the technology, potential issues of bias, and the effectiveness of the technology itself.  This has spurred legislators in several states to seek to impose oversight, transparency, accountability, and other limitations on the technology’s uses.  Some within the tech industry have even gone so far as to place self-imposed limits on uses of their own software for face data collection and surveillance activities.

Maryland and California are two states whose legislators have targeted law enforcement’s use of facial recognition in surveillance.  In California, state legislators took a recent step toward regulating the technology when SB-1186 was passed by its Senate on May 25, 2018.  In remarks accompanying the bill, legislators concluded that “decisions about whether to use ‘surveillance technology’ for data collection and how to use and store the information collected should not be made by the agencies that would operate the technology, but by the elected bodies that are directly accountable to the residents in their communities who should also have opportunities to review the decision of whether or not to use surveillance technologies.”

If enacted, the California law would require law enforcement, beginning July 1, 2019, to submit a proposed Surveillance Use Policy, made available to the public, to an elected governing body to obtain approval for the use of specific surveillance technologies and the information collected by those technologies.  “Surveillance technology” is defined in the bill to include any electronic device or system with the capacity to monitor and collect audio, visual, locational, thermal, or similar information on any individual or group. This includes drones with cameras or monitoring capabilities, automated license plate recognition systems, closed-circuit cameras/televisions, International Mobile Subscriber Identity (IMSI) trackers, global positioning system (GPS) technology, software designed to monitor social media services or forecast criminal activity or criminality, radio frequency identification (RFID) technology, body-worn cameras, biometric identification hardware or software, and facial recognition hardware or software.

The bill would prohibit a law enforcement agency from selling, sharing, or transferring information gathered by surveillance technology, except to another law enforcement agency. The bill would provide that any person could bring an action for injunctive relief to prevent a violation of the law and, if successful, could recover reasonable attorney’s fees and costs.  The bill would also establish procedures to ensure that the collection, use, maintenance, sharing, and dissemination of information or data collected with surveillance technology is consistent with respect for individual privacy and civil liberties, and that any approved policy be publicly available on the approved agency’s Internet web site.

Given the relatively slow pace of legislative action, at least compared to the speed at which face recognition technology is advancing, some within the tech community have taken matters into their own hands.  Brian Brackeen, for example, CEO of Miami-based facial recognition software company Kairos, recently decided that his company’s AI software will not be made available to any government, “be it America or another nation’s.”  In a TechCrunch opinion published June 24, 2018, Brackeen said, “Whether or not you believe government surveillance is okay, using commercial facial recognition in law enforcement is irresponsible and dangerous” because it “opens the door for gross misconduct by the morally corrupt.”  His position is rooted in his knowledge of how advanced AI models like his own are created: “[Facial recognition] software is only as smart as the information it’s fed; if that’s predominantly images of, for example, African Americans that are ‘suspect,’ it could quickly learn to simply classify the black man as a categorized threat.”
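Brackeen’s point about skewed training data can be made concrete with a toy model.  The sketch below, which uses synthetic data and a scikit-learn logistic regression and reflects nothing about Kairos’s actual software, trains a classifier on labels in which one group was flagged “suspect” more often than the group-neutral evidence justifies; the fitted model learns a large positive weight on group membership itself.

```python
# Toy illustration of learned bias: a classifier trained on skewed labels
# treats group membership as predictive. Synthetic data only; this is not
# Kairos's (or any vendor's) actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)   # 0/1 stand-in for a demographic group
evidence = rng.normal(size=n)        # legitimate, group-neutral signal

# Biased historical labels: members of group 1 were flagged "suspect"
# far more often than the neutral evidence alone would justify.
labels = (evidence + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

features = np.column_stack([evidence, group])
model = LogisticRegression().fit(features, labels)

print("weight on neutral evidence:", round(model.coef_[0][0], 2))
print("weight on group membership:", round(model.coef_[0][1], 2))  # large, positive
```

Because the label-generating process rewards group membership, the fitted model does too; a face recognition pipeline fed “suspect” images drawn disproportionately from one population can drift the same way.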

Kairos is not alone in calling for limits.  A coalition of organizations against facial recognition surveillance published a letter on May 22, 2018, to Amazon’s CEO, Jeff Bezos, in which the signatories demanded that “Amazon stop powering a government surveillance infrastructure that poses a grave threat to customers and communities across the country. Amazon should not be in the business of providing surveillance systems like Rekognition to the government.”  The organizations (civil liberties, academic, religious, and others) alleged that “Amazon Rekognition is primed for abuse in the hands of governments. This product poses a grave threat to communities,” they wrote, “including people of color and immigrants….”

Amazon’s Rekognition system, first announced in late 2016, is a cloud-based platform for performing image and video analysis without requiring the user to have a background in machine learning, a type of AI.  Among its many uses today, Rekognition reportedly allows a user to conduct near real-time automated face recognition, analysis, and face comparisons (assessing the likelihood that faces in different images are the same person), using machine learning models.
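Amazon’s published SDK shows how little machine learning expertise the platform demands of its users.  Below is a minimal sketch of a face comparison call using boto3, the AWS SDK for Python; it assumes AWS credentials are already configured, and the image file names are placeholders.

```python
# Minimal sketch of Amazon Rekognition's CompareFaces operation via boto3.
# Assumes AWS credentials are configured; image file names are placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("source_face.jpg", "rb") as src, open("target_scene.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80,  # report only matches scored at 80% or higher
    )

# Each match carries a similarity score and the matched face's bounding box.
for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"similarity={match['Similarity']:.1f}%  bounding_box={box}")
```

The heavy lifting (face detection, alignment, and the learned similarity model) happens inside AWS; the caller supplies only images and a threshold.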

A few weeks after the coalition letter was published, another group, this one a collection of individual and organizational Amazon shareholders, sent a similar letter to Bezos.  In it, the shareholders alleged that “[w]hile Rekognition may be intended to enhance some law enforcement activities, we are deeply concerned it may ultimately violate civil and human rights.”  Several Microsoft employees took a similar stand against the use of their company’s software by government agencies.

As long as questions surrounding transparency, accountability, and fairness in the use of face recognition technology in law enforcement continue to be raised, tech companies, legislators, and stakeholders will likely continue to react in ways that address immediate concerns.  This may prove effective in the short term, but no one today can say what AI-based facial detection and recognition technologies will look like in the future or to what extent the technology will be used by law enforcement personnel.