By early 2024, OpenAI’s rules for how the military could use its technology were unambiguous. The company prohibited anyone from using its models for “weapons development” or “military and warfare”. That changed when The Intercept reported that OpenAI had eased those restrictions: the policy now prohibits anyone from using the technology to “harm yourself or others” by developing or using weapons, injuring others or destroying property.
OpenAI said shortly afterward that it would work with the Pentagon on cybersecurity software, but not on weapons. Then, in a post on its blog, the company shared that it is working in the national security space, arguing that, in the right hands, AI could “help protect people, deter adversaries and even prevent future conflicts.”
Now, however, OpenAI has announced that its technology will be deployed directly on the battlefield. The company says it will partner with the defense technology company Anduril, a manufacturer of drones, radar systems and AI-powered missiles, to help US and allied forces defend against drone attacks. OpenAI will help build AI models that “rapidly synthesize time-sensitive data, reduce the burden on human operators and improve situational awareness” to shoot down enemy drones, according to the statement.
Few details have been released, but the program will focus strictly on defending US personnel and facilities from unmanned aerial threats, according to Liz Bourgeois, a spokesperson for OpenAI. “This partnership is consistent with our policies and does not involve leveraging our technology to develop systems designed to harm others,” she said.
Now, OpenAI is leaning into national security work. If working with militaries or defense technology companies can help ensure that democratic countries dominate the AI race, the company has written, then doing so is not inconsistent with OpenAI’s mission of ensuring that the benefits of AI are widely shared. In fact, it argues, such work will help fulfill that mission. That is a big change from its position just a year ago.
To understand how quickly this pivot unfolded, it is worth noting that while the company wavered in its approach to national security, others in the tech sector rushed in. Simply put: if OpenAI didn’t do it, someone else would. “We believe that a democratic vision of AI is essential to unleashing its full potential and ensuring that its benefits are widely shared,” the OpenAI post adds. “We believe that democracies must continue to take the lead in AI development, guided by values such as freedom, fairness and respect for human rights.”
The problem is that once a contract is signed with a country’s defense establishment, rules can prove more flexible than they first appeared, and none of them will ever take precedence over that well-known phrase: “national interest.”