Google Promises Ethical Principles to Guide Development of Military AI

May 31st, 2018

Update: Leaked Emails Show Google Expected Lucrative Military Drone AI Work to Grow Exponentially

Via: The Intercept:

Following the revelation in March that Google had secretly signed an agreement with the Pentagon to provide cutting-edge artificial intelligence technology for drone warfare, the company faced an internal revolt. About a dozen Google employees have resigned in protest, and thousands have signed a petition calling for an end to the contract. The endeavor, code-named Project Maven by the military, is designed to help drone operators recognize images captured on the battlefield.

Google has sought to quash the internal dissent in conversations with employees. Diane Greene, the chief executive of Google’s cloud business unit, speaking at a company town hall meeting following the revelations, claimed that the contract was “only” for $9 million, according to the New York Times, a relatively minor project for such a large company.

Internal company emails obtained by The Intercept tell a different story. The September emails show that Google’s business development arm expected the military drone artificial intelligence revenue to ramp up from an initial $15 million to an eventual $250 million per year.

In fact, one month after news of the contract broke, the Pentagon allocated an additional $100 million to Project Maven.

The internal Google email chain also notes that several big tech players competed to win the Project Maven contract. Other tech firms such as Amazon were in the running, one Google executive involved in negotiations wrote. (Amazon did not respond to a request for comment.) Rather than serving solely as a minor experiment for the military, Google executives on the thread stated that Project Maven was “directly related” to a major cloud computing contract worth billions of dollars that other Silicon Valley firms are competing to win.

The emails further note that Amazon Web Services, the cloud computing arm of Amazon, “has some work loads” related to Project Maven.

Tell me another one.

Via: The Verge:

Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to reports from The New York Times and Defense One. What exactly these guidelines will stipulate isn’t clear, but Google told the Times they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company’s decision to develop AI tools for the Pentagon that analyze drone surveillance footage.

2 Responses to “Google Promises Ethical Principles to Guide Development of Military AI”

  1. Dennis says:

If AI reduces the number of innocents killed by drones and ‘surgical strikes’ that wouldn’t be a bad thing, but who will decide how much weight is given to the ‘spare non-combatants’ algorithm? And how long until it’s being gamed (e.g. human shields)?

  2. dale says:

    Yeah but, who is innocent and who is guilty? Wars are not fought against the guilty anyway. They are fought against the enemy, or the combatants, or whoever’s in the way. Humans terrorize humans. We’re all Indians now.

    Flying Monkeys
    Drone Swarms
    209 x 1,000,000

    https://m.youtube.com/watch?v=Hzlt7IbTp6M
