The Strategist

Pentagon hires Google to use its AI developments

03/12/2018 - 13:22



Google and the US Department of Defense are actively cooperating on artificial intelligence under a joint project codenamed Maven. The project began almost a year ago, but the information has become public only now. According to The Guardian and Gizmodo, the project is raising questions and disagreements both inside the company and beyond it.

Last spring, the US Department of Defense launched Project Maven. Its goal is to use artificial intelligence and machine learning techniques to process the huge volume of visual information coming from the department's unmanned aerial vehicles operating in various parts of the world. The process of collecting the data, analyzing it and acting on it will not be fully automated: the AI will only sift the imagery and flag material that may be of interest, which human analysts will then examine. It is supposed, though by no means confirmed, that the project has already been used in the fight against Islamist groups in Syria. The project appears to be headed by Lieutenant General John Shanahan of the Department of Defense, while Alphabet is represented by Eric Schmidt himself; Marine Corps Colonel Drew Cukor manages the project directly. The US Department of Defense declined to comment on the project and its contractors.

Technology companies such as Google/Alphabet, Microsoft and Amazon have long worked with government agencies. Nevertheless, Google's participation in the Maven project, to which it provided its AI systems (in particular, TensorFlow), has raised many questions and doubts, both inside the company and outside it. Suffice it to say that the company even issued a special statement on the matter, noting that, to its knowledge, the technology is not used in offensive operations.

"This is a joint pilot project with the Ministry of Defense to provide open interfaces for TensorFlow application programming for object recognition... The technology only marks images for human analysis and is used exclusively for non-target purposes," the statement said. Moreover, the company also recognizes the legitimacy of doubts expressed with regard to cooperation with the Pentagon. "The military use of machine learning raises legitimate questions. We are actively discussing this problem both inside the company and with others. We continue to create rules and security measures to use our technologies," the company said.

Eric Schmidt, the former executive chairman of Google's (now Alphabet's) board of directors, said much the same, noting that the whole industry is "worried that... using their technology to kill people is wrong." Nevertheless, Mr. Schmidt and another Google top manager, Milo Medin, are members of the Defense Innovation Board, an advisory body to the US Secretary of Defense; according to the board's website, Mr. Schmidt is its chairman.

According to Greg Allen of the Center for a New American Security, one of the best-informed experts on the interaction between the US Department of Defense and Silicon Valley, the joint project with Google is extremely important for the Pentagon. The department, which spent $7.4 billion on everything related to AI in 2017 (The Wall Street Journal) and keeps collecting huge amounts of information from its UAVs, does not have the slightest idea how to properly buy, deploy and use AI, he believes. The real benefit of the project is therefore much greater than officially declared.

Gizmodo's sources in the company report that many Google employees are outraged by the very fact of cooperation with the Department of Defense, while others note that such cooperation raises serious ethical issues around the development of AI. Nevertheless, the US military and law enforcement agencies already actively use AI, for example, to assess the probability of recidivism among inmates of American prisons. Experts caution, however, that such systems can be biased, and that this bias is much harder to detect in an AI than in a human. One such recidivism-prediction program, for example, has reportedly shown consistent racial bias.

Source: theguardian.com, gizmodo.com