“AI tools aren’t 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them,” said Rep. Sara Jacobs, D-Calif., a member of the House Armed Services Committee.
“We have a responsibility to enforce strict guardrails on the military’s use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions,” she said.