Google's Latest Change To Its AI Policies Signals How Silicon Valley Is Warming Up To The Defense Industry
- Google updated its ethical AI guidelines in a blog post on Tuesday.
- The post omitted a 2018 statement that Google wouldn't use AI for weapons or surveillance.
- The change follows moves by other Silicon Valley companies to partner with the US government on defense tech.
Google updated its ethical guidelines for artificial intelligence in a blog post on Tuesday, removing its earlier pledge not to use the technology to build weapons or surveillance tools.
In 2018, the company outlined AI "applications we will not pursue." These included weapons and "technologies that gather or use information for surveillance violating internationally accepted norms," as well as "technologies that cause or are likely to cause overall harm" and "technologies whose purpose contravenes widely accepted principles of international law and human rights."
A note appended to the top of the 2018 post now points readers to the updated principles; the new post makes no mention of the earlier commitments against using AI for weapons and certain surveillance technologies.
The company first published these AI guidelines in 2018 after thousands of Google employees protested its involvement in Project Maven, an AI project that Google and the US Department of Defense collaborated on. After over 4,000 workers signed a petition demanding that Google stop working on Project Maven and promise never to again "build warfare technology," the company decided not to renew its contract to build AI tools for the Pentagon.
James Manyika, Google's senior vice president for technology and society, and Demis Hassabis, the CEO of Google DeepMind, said in a blog post that democratic nations and companies should work together to develop AI that supports national security:
"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," the executives wrote. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
A spokesperson from Google did not immediately respond to a request for comment.
Although many in Silicon Valley previously steered clear of US military contracts, the move, set against the backdrop of the Trump administration, rising US-China tensions, and the Russia-Ukraine war, is part of a broader shift among tech companies and startups toward offering their proprietary technology, including artificial intelligence tools, for defense purposes.
Defense tech companies and startups have been optimistic that the industry is poised for success during President Donald Trump's second term. Last November, Anduril cofounder Palmer Luckey said of Trump in a Bloomberg TV interview that it is "good to have someone inbound who is deeply aligned with the idea that we need to be spending less on defense while still getting more: that we need to do a better job of procuring the defense tools that protect our country."
Late last year, Palantir and Anduril, which makes autonomous vehicles for military use, held discussions with other defense companies and startups, including SpaceX, Scale AI, and OpenAI, about forming a group to jointly bid on US government defense contracts.