Google, Pichai said, will not pursue the development of AI when it could be used to break international law, cause overall harm or surveil people in violation of "internationally accepted norms of human rights".
The statement comes amid employee discontent over Google's involvement in Project Maven, a controversial Pentagon AI program that seeks, among other things, to use machine learning to quickly detect and categorize objects in images captured by drones.
While Google always said this work was not for use in weapons, the project may have fallen afoul of the new restrictions: Google said it will not continue with Project Maven after its current contract ends. The Maven deal had officially made Google a defense contractor, a company that provides products or services to the US military or US intelligence agencies, and about a dozen Google employees reportedly resigned over the company's involvement in the program. The charter sets "concrete standards" for how Google will design its AI research, implement its software tools and steer clear of certain work, Pichai said in a blog post, and those tools should only be made available for purposes that fall in line with the principles.
However, Google went on to confirm that it will continue to work with government bodies and the military. "In the absence of positive actions, such as publicly supporting a global ban on autonomous weapons, Google will have to offer more public transparency as to the systems they build".
Pichai's insistence that Google will continue to work with the military may be a signal that Google still plans to vie for the Joint Enterprise Defense Infrastructure (JEDI) contract, a 10-year, $10 billion cloud deal with the US military that has drawn the attention of major tech companies, including Amazon. The principles also state that the company will work to avoid "unjust impacts" from its AI algorithms, such as the injection of racial, sexual or political bias into automated decision-making. "So today, we're announcing seven principles to guide our work going forward," Pichai wrote. And even if Google is not the company developing such systems firsthand, critics argue, it helps prop them up by working on defense contracts. "This is the reality faced by any developers of what are usually called dual-use technologies".
The new principles follow months of debate inside Google over AI technology it had developed for the US military to analyze drone footage as part of what was known as Project Maven. Several employees said that they did not think the principles went far enough to hold Google accountable; for instance, Google's AI guidelines include a nod to following "principles of international law" but do not explicitly commit to following international human rights law. Pichai pointed to other areas where Google will keep working with governments and the military: "These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organisations and keep service members and civilians safe", he said.