Microsoft is calling for increased government oversight of companies developing facial recognition technology, at a time when public concern about the relationship between tech and the public sector is growing.
In a blog post published Friday, Microsoft President Brad Smith touted the potential benefits of facial recognition — like finding missing children and helping visually impaired people. But he also discussed the more insidious possibilities, like government surveillance and invasive marketing.
Smith also explained why he thinks government regulation, rather than self-policing by the tech community, is the right way to manage facial recognition. A few tech companies might be thoughtful about the technology on their own, he wrote, but others won't be, and competitive dynamics between domestic and international companies will make self-policing nearly impossible. That's why government needs to step in, according to Microsoft.
“While we appreciate that some people today are calling for tech companies to make these decisions – and we recognize a clear need for our own exercise of responsibility, as discussed further below – we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic,” Smith said in the blog post. “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”
Here are the questions Smith would like to see potential regulation address:
- Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime?
- Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
- What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
- Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
- Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
- Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
- Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
- Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?
Smith didn’t absolve the tech industry of responsibility. He laid out a four-pronged approach that Microsoft is taking to govern its own work on facial recognition technology. Microsoft is accelerating efforts to rectify the technology’s higher error rates in recognizing women and people of color, and is developing a set of principles to govern its development and deployment of facial recognition. In the meantime, it plans to be “more deliberate” about consulting and contract work related to the technology while participating in public policy discussions.
Facial recognition technology has become a key issue of concern for civil rights groups. The American Civil Liberties Union has made several public entreaties for Amazon to stop selling its Rekognition software to law enforcement agencies. The ACLU claims, “facial recognition technology is biased, misidentifying African Americans and relying on databases built on a history of discrimination in our criminal justice system.”
The relationship between the tech industry, government, and law enforcement has come under increased scrutiny in recent months. Microsoft itself has been the target of a campaign by employees and customers to sever ties with ICE over the agency’s practice of separating migrant families. That debate also raised questions about whether Microsoft could, and would, provide ICE with facial recognition technology.
Smith addressed that concern in his blog post Friday:
We’ve since confirmed that the contract in question isn’t being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we’ve strongly objected. The work under the contract instead is supporting legacy email, calendar, messaging and document management workloads. This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world. Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE.
The ensuing discussion has illuminated broader questions that are rippling across the tech sector.