
Google Won’t Allow Its AI Software to Be Used in Weapons

A recent blog post from Google CEO Sundar Pichai says the company will not allow its artificial intelligence software to be used in weapons or in unreasonable surveillance efforts under its new standards.

Instead, Google will seek government contracts in areas such as cybersecurity, military recruitment, and search and rescue.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” said Chief Executive Sundar Pichai.

Google Won’t Help the U.S. Military Anymore?

It merits mentioning that thousands of employees protested against the company’s work with the U.S. military, which used Google’s AI software to identify objects in drone video.

More than 4,600 employees petitioned Google to cancel the deal sooner, and at least 13 employees have resigned in recent weeks in protest.

The company also recommended that developers avoid launching AI programs likely to cause significant damage if attacked by hackers, because existing security mechanisms are unreliable.

Are Google’s Principles Good Enough?

Google’s principles say it will not pursue AI applications that are intended to cause physical injury, that tie into surveillance “violating internationally accepted norms of human rights,” or that present a greater “material risk of harm” than countervailing benefits.

However, tech pundits believe that Google should take the issue of developing AI seriously enough to come up with a realistic set of principles to guide future development, one that addresses the ethical concerns head-on.