As the U.S. military expands its use of AI tools to determine airstrike targets in Iran, members of Congress are calling for guardrails and increased oversight of the technology's use in wartime.


OpenAI and Anthropic, both of which have worked with the U.S. military, have said that even their most advanced systems are error prone, and the world’s top AI researchers admit they don’t fully understand how leading AI systems work.

In an interview with NBC last month, Anthropic CEO Dario Amodei said: “I can’t tell you there’s a 100% chance that even the systems we build are perfectly reliable.”

A major OpenAI study published in September found that all major AI chatbots, which rely on systems called large language models, "hallucinate," or periodically fabricate answers.

Sen. Kirsten Gillibrand, D-N.Y., called for clearer rules on how the military can use AI.

“The Trump administration has already proven that it is willing to subvert American law to prosecute an unpopular war,” she told NBC News. “There is little reason to trust that the DOD will be any more responsible with its use of AI without explicit safeguards.”

Mark Beall, head of government affairs at the AI Policy Network, a Washington, D.C., think tank, and the Pentagon's director of AI strategy and policy from 2018 to 2020, said that while AI could streamline the process of deciding where to strike, it was clear that humans still need to thoroughly vet targets.
