
Wednesday, 16 May 2018

Should Google’s AI be used by the US military drone program?



Project Maven was established in April 2017 and uses machine learning and artificial intelligence to analyze the massive volumes of video footage captured by US military drones.

The US Department of Defense (DOD) is specifically using Google’s TensorFlow AI framework. The initial intention is to have AI analyze the video, detect objects of interest, and flag them for a human analyst to review.
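To make that workflow concrete, here is a minimal sketch of what “analyze, detect, and flag” can look like in TensorFlow. The TensorFlow Hub model, confidence threshold, and flag_frames_for_review helper are illustrative assumptions for this article, not details of Project Maven’s actual system.

import tensorflow as tf
import tensorflow_hub as hub

# Illustrative stand-in: a publicly available SSD MobileNet detector from
# TensorFlow Hub, not whatever model the DOD actually trains on drone footage.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for an "object of interest"

def flag_frames_for_review(frames):
    """Yield (frame_index, detection_scores) for frames worth a human look."""
    for i, frame in enumerate(frames):
        # The detector expects a uint8 batch of shape [1, height, width, 3].
        batch = tf.expand_dims(tf.cast(frame, tf.uint8), axis=0)
        result = detector(batch)
        scores = result["detection_scores"][0].numpy()
        if (scores > CONFIDENCE_THRESHOLD).any():
            # Flag, don't act: the judgment call stays with a human analyst.
            yield i, scores

The point of the sketch is only that TensorFlow supplies the detection while the decision stays human, which is the division of labor Google and the DOD describe.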

Drew Cukor, chief of the DOD’s Algorithmic Warfare Cross-Functional Team, was quoted in an article on the DOD website when the program first launched: “People and computers will work symbiotically to increase the ability of weapon systems to detect objects. Eventually, we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now. That’s our goal.”

The program has re-entered the spotlight this week with about a dozen Google employees leaving the company over its insistence on developing AI for the military.

It seems the “Don’t be evil” motto stretches to cover building machine learning that aids warfare.

Google justifies its actions by saying that it is not developing weapons for the government; it is helping the military configure TensorFlow to analyze video. At least Google and the DOD are telling the same story.

We can see both sides of the story: a case for working with the DOD and a case against it. Let’s take a look at both.

A Case for Google Working with the DOD on Project Maven

  • Google Cloud CEO Diane Greene responded to concerns internally; according to TNW, she clarified that the technology will not “operate or fly drones” and “will not be used to launch weapons.”
  • To dominate the cloud business and fulfill CEO Sundar Pichai’s dream of becoming an “AI-first company,” Google will find it hard to avoid the business of war. Bloomberg has made a great chart that drives the point home about where the money in cloud computing will be in the coming years. If Google wants to maintain relevance in one of its main fields of focus, it can’t avoid the military.

  • Critics warn that working with the Pentagon will damage relations with consumers and Google’s ability to recruit.

Google only won a small part of Project Maven; the automation of weapons is going to happen with or without the company. So whose ethics do you trust more: a publicly traded company whose motto is “Don’t be evil”, or an unknown contractor who could have no moral compass?

  • Amazon’s cloud business alone has generated $600 million in classified work with the Central Intelligence Agency since 2014. This debate is not just about Google; it’s about the overall involvement of the tech industry in military contracts. If we hold Google to this standard, we should also hold every other tech company accountable.

Why Google Should Not Work with the Military

  • Even if Google does not help the military act on the analysis, the technology, once delivered, could easily be used to do harm.
  • TensorFlow is highly regarded as easy to use; maybe Google should let the military hire its own AI experts and handle things in-house.
  • There is an internal petition at Google, signed by over 4,000 of its 85,000 employees, asking Google to end its involvement with Project Maven. It cites Google’s history of avoiding military work and, of course, “Don’t be evil”.
  • The International Committee for Robot Arms Control wrote an open letter to Google opposing its involvement in Project Maven.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] (ICRAC)

Project Maven is currently focused on human analysis; however, these technologies are poised to become the basis for automated target recognition and autonomous weapons systems. This is definitely not “Don’t be evil”.

The DOD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability. (ICRAC)

  • This is the first step to autonomous weapons.
  • Last year, several executives—including Demis Hassabis and Mustafa Suleyman, who run Alphabet’s DeepMind AI lab, and famed AI researcher Geoffrey Hinton—signed a letter to the United Nations outlining their concerns about lethal autonomous weapons.

What’s clear is that this debate isn’t about whether Google specifically should be involved with the military, but whether it should be involved while we as a society have yet to establish principles and values around AI. In the military, AI can be used for both defensive and offensive purposes; no one would argue against the technology being used to save lives, but it’s a double-edged sword.

Eric Schmidt, former CEO of Google, submitted written testimony to the House Armed Services Committee:

“The industry is going to come to some set of agreements on AI principles — what is appropriate use, what is not — and my guess is that there will be some kind of consensus among key industry players on that,” said Schmidt. “The world’s most prominent AI companies focus on gathering the data on which to train AI and the human capital to support and execute AI operations. If DoD is to become ‘AI‑ready,’ it must continue down the pathway that Project Maven paved and create a foundation for similar projects to flourish… It is imperative the Department focus energy and attention on taking action now to ensure these technologies are developed by the U.S. military in an appropriate, ethical, and responsible framework.”
