WASHINGTON — The Department of Defense will follow a new set of ethical principles for the use of artificial intelligence, officials announced Feb. 24.

The rollout of a code of conduct for the use of AI technology follows a 15-month study led by the Defense Innovation Board, a panel of outside advisers led by Eric Schmidt, former executive chairman of Google’s parent company, Alphabet.

The new principles for the use of AI were developed from an existing ethics framework based on the U.S. Constitution, the laws of war, international treaties and longstanding norms and values, DoD Chief Information Officer Dana Deasy told reporters at the Pentagon.

AI technology is increasingly used by DoD, the intelligence community and contractors to analyze data. In the space industry there has been a boom in AI and machine learning software that helps human analysts extract information from satellite imagery. With remote sensing satellites collecting ever more forms of data and signals, experts say machine learning and AI are the only practical way to process that volume of data without hiring thousands of analysts.
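To make that concrete, the sketch below (in Python) shows the kind of automated imagery triage such software performs: score every tile of a large scene and flag only the highest-scoring tiles for human review. It is purely illustrative; the tiling scheme, placeholder classifier and threshold are assumptions, not any specific DoD or commercial system.

```python
# Illustrative sketch only: automated triage of satellite imagery tiles.
# The classifier and threshold are hypothetical placeholders.
import numpy as np

TILE = 256  # pixels per tile edge

def tile_image(image: np.ndarray) -> list:
    """Split a large scene into fixed-size tiles for per-tile scoring."""
    h, w = image.shape[:2]
    return [image[y:y + TILE, x:x + TILE]
            for y in range(0, h - TILE + 1, TILE)
            for x in range(0, w - TILE + 1, TILE)]

def classify(tile: np.ndarray) -> float:
    """Stand-in for a trained model; returns an 'object present' score.
    A real pipeline would run a trained neural network here."""
    return float(tile.mean() / 255.0)  # placeholder heuristic

def triage(image: np.ndarray, threshold: float = 0.8) -> list:
    """Return indices of tiles an analyst should review first."""
    scores = [classify(t) for t in tile_image(image)]
    return [i for i, s in enumerate(scores) if s >= threshold]

if __name__ == "__main__":
    scene = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
    print(f"{len(triage(scene))} of 16 tiles flagged for human review")
```

In a real pipeline the classify step would be a trained model; the point is that software scores every tile so analysts only review the handful that are flagged.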

The ethics guidelines do not recommend curtailing the use of AI but call on DoD personnel and contractors to “exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

Years ago, the Pentagon reached out to companies like Google for help connecting the military to the AI world. But the initiative came under fire when it was revealed that DoD was funding the development of AI algorithms to analyze live video streamed from aerial drones under the effort known as Project Maven. Google employees and others in the tech industry called on the Pentagon to restrict the use of AI technology on the battlefield.

Under the new guidelines, personnel who employ AI-based tools must “possess an appropriate understanding of the technology, development processes, and operational methods.”

Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center, told reporters that DoD will “design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences.” That means DoD should have the ability to “disengage or deactivate deployed systems that demonstrate unintended behavior.”
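Neither Shanahan nor the guidelines specify a mechanism, but one simple pattern consistent with that language is a runtime guard that monitors a deployed model's outputs and disengages it when behavior drifts outside a validated envelope. The sketch below is a hedged illustration only; the class name, bounds and violation threshold are invented for the example, not drawn from any DoD design.

```python
# Illustrative only: a runtime guard that can deactivate a model whose
# outputs drift outside a validated range. All names and thresholds
# here are hypothetical assumptions.
from collections import deque

class GuardedModel:
    def __init__(self, model, low: float, high: float,
                 window: int = 100, max_violations: int = 5):
        self.model = model
        self.low, self.high = low, high        # validated output range
        self.recent = deque(maxlen=window)     # rolling violation log
        self.max_violations = max_violations
        self.active = True

    def predict(self, x):
        if not self.active:
            raise RuntimeError("model deactivated: unintended behavior")
        y = self.model(x)
        self.recent.append(not (self.low <= y <= self.high))
        if sum(self.recent) >= self.max_violations:
            self.active = False                # the 'disengage' path
        return y

if __name__ == "__main__":
    def flaky(x):
        """Stand-in deployed model that doubles its input."""
        return x * 2.0

    guard = GuardedModel(flaky, low=0.0, high=1.0)
    for x in [0.1, 0.4] + [0.9] * 6:           # later inputs misbehave
        try:
            print(guard.predict(x))
        except RuntimeError as err:
            print(err)
            break
```

The design choice worth noting is that the guard sits outside the model itself: detection and deactivation do not depend on the model behaving as intended, which is the property the guidelines call for.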

Shanahan said the intelligence community is likely to embrace similar guidelines and that discussions among agencies and international allies have been underway for months.

He said the reaction of the tech community to Project Maven has been “a little hyped” and that DoD would be doing this “regardless of the angst in the tech industry.”
