MICROSOFT’S ETHICAL RECKONING IS HERE
Microsoft has become the latest company dragged into the tech industry’s ethical reckoning over the use of its products by government agencies.
On Sunday, critics noted a blog post from January in which Microsoft touted its work with US Immigration and Customs Enforcement (ICE). The post celebrated a government certification that allowed Microsoft Azure, the company’s cloud-computing platform, to handle sensitive unclassified information for ICE. The sales-driven blog post outlined ways that ICE might use Azure Government, including enabling ICE employees to “utilize deep learning capabilities to accelerate facial recognition and identification,” Tom Keane, a general manager at Microsoft, wrote. “The agency is currently implementing transformative technologies for homeland security and public safety, and we’re proud to support this work with our mission-critical cloud,” the post added.
The post resurfaced amid outrage over ICE’s role in forcibly separating families soon after they arrive in the US, with some children detained in cages. Critics lambasted Microsoft on social media, asking the company to discontinue its work with ICE. Yasha Levine, author of the book Surveillance Valley, says scrutiny of tech companies needs to extend beyond “the splashy Terminator-like stuff” and “look at the more routine and mundane integration of Silicon Valley tech with military and law enforcement. There is so much of it.”
Some of the criticism of Silicon Valley companies working with the government is rooted in specific Trump administration policies. Levine says US Customs and Border Protection uses Google Maps. “Does that make Google complicit in Trump’s immigration policies? I say, yes,” he says. But he notes that government agencies used Google Maps during the Obama administration as well.
Niles Guo, a former product manager at Microsoft, urged the company to do better. “The projects we take on matters[sic], they have real world implications,” Guo wrote on Twitter. “We can’t hide behind code without thinking about the ethical implications of our work.”
Microsoft intern Courtney Brousseau tweeted at Microsoft CEO Satya Nadella. “[A]s a current @Microsoft intern, I’d also like to know why Microsoft is ‘proud to support (the work of ICE).’”
Tech Workers Coalition, a labor group for tech industry employees, urged Microsoft employees to coordinate their opposition. “If you are a worker building these tools or others at Microsoft, decide now that you will not be complicit,” the group tweeted.
Azure is Microsoft’s brand name for its cloud computing services, which can range from hosting a customer’s data to facial recognition. Late Monday, Microsoft said it is “not working with U.S. Immigration and Customs Enforcement or U.S. Customs and Border Protection on any projects related to separating children from their families at the border, and contrary to some speculation, we are not aware of Azure or Azure services being used for this purpose.”
The company also decried policies that lead to separating families. “As a company, Microsoft is dismayed by the forcible separation of children from their families at the border,” the statement said. “We urge the administration to change its policy and Congress to pass legislation ensuring children are no longer separated from their families.”
Before releasing the statement, Microsoft temporarily deleted four paragraphs about its work with ICE from the January blog post. The company initially told WIRED the deletion was “a mistake,” but later described it as an error in judgment.
The backlash against Microsoft underscores the shifting moral boundaries for tech companies, which have worked closely with the defense industry and the military since the advent of Silicon Valley. Tech employers began to pay attention once engineers organized to publicize their objections, beginning with a pledge not to build a Muslim registry soon after President Donald Trump’s election.
Most of the recent debate has centered on uses of artificial intelligence to identify objects in video footage from drones, or to identify people through facial recognition. In recent months, more than 4,000 Google employees signed a petition objecting to the company’s work on Project Maven, which seeks to apply AI to the military. Internal objections have been buttressed by external support from academics, researchers, and shareholders. On Friday, Amazon shareholders, including Arjuna Capital, Zevin Asset Management, and the Social Equity Group, published an open letter asking CEO Jeff Bezos to halt expansion or development of Rekognition, Amazon’s image-recognition software, for use in government surveillance until it can be vetted by Amazon’s board of directors. Following reports that Amazon marketed Rekognition to police departments, consumers and advocacy groups also asked Amazon to take Rekognition off the table for governments.
For some, Project Maven crossed a line by weaponizing artificial intelligence. Others are not as certain. On one side of the debate, supporters of Silicon Valley’s work with the government on border walls or drones argue that better technology can keep Americans safer. Wouldn’t you rather Google handle facial recognition than Lockheed Martin? But when the companies supporting that kind of defense work also collect vast troves of personal information, it raises questions about whether there are protections for consumer privacy. Amazon declined to comment. Google did not respond to a request for comment.
In Google’s case, employee protest, coupled with a letter of support from leading academics working in AI, made an impact. Earlier this month, CEO Sundar Pichai announced that Google would not renew its Project Maven contract when it expires next year, but said Google would continue its work “with governments and the military in many other areas.”