
DoD Releases Principles for Ethical AI in Combat

These principles will do little to assuage critics who say AI-augmented weapons could be a "killer robot" storyline ripped from Sarah Connor's nightmares


Believe it or not, the organization with Earth's largest weapons arsenal now has an AI code of conduct. After a 15-month review, the U.S. Department of Defense formally adopted "ethical principles" for AI on Monday.

The principles cover five main areas:

  1. Responsible. Use "appropriate levels of judgment and care."
  2. Equitable. Minimize "unintended bias."
  3. Traceable. Don't let AI systems operate like a black box.
  4. Reliable. No buggy algorithms or hardware.
  5. Governable. All autonomous systems should have an "off" button in case stuff hits the fan.

These principles will do little to assuage critics who say AI-augmented weapons could lead to a "killer robot" storyline ripped from Sarah Connor's nightmares. But for now, the DoD says humans have veto power over the actions of armed robots.

Bottom line: The Pentagon seems most focused on deploying AI into non-combat arenas like surveillance, intelligence, and logistics. But even those efforts are bound to meet resistance from civil society groups and contracted tech employees.
