OpenAI and Anthropic, both of which have worked with the U.S. military, have said that even their most advanced systems are error-prone, and the world’s top AI researchers admit they don’t fully understand how leading AI systems work.
A major OpenAI study published in September found that all major AI chatbots, which rely on systems called large language models, “hallucinate,” or periodically fabricate answers.
Sen. Kirsten Gillibrand, D-N.Y., called for clearer rules on how the military can use AI.
Mark Beall, head of government affairs at the AI Policy Network, a Washington, D.C., think tank, and the former director of AI strategy and policy at the Pentagon from 2018 to 2020, said that while AI could streamline the process of deciding where to strike, it was clear humans still needed to thoroughly vet targets.