AI & Law of Armed Conflict
Back in Tokyo after an (ecologically problematic) trip to Austin, TX, for a conference on AI, National Security, and the Law of Armed Conflict.
◾ It was an interesting time to be in the US, as the Biden administration had just released its Executive Order on AI. Overall, the EO seemed to be relatively well received. From what I've heard, people believe that this sweeping set of regulations is getting at the right issues: privacy, bias, watermarking, etc. However, many are skeptical about the EO's implementation. "The larger the EO, the more likely it is to fail," one attendee said. People were also unimpressed with the "Advancing American Leadership Abroad" measures, which they saw as relatively weak, though understandably so given the difficulty of reaching international agreements on tech issues.
◾ It was an interesting time to be talking about AI in a military context, too, as the UN recently approved a draft resolution on Lethal Autonomous Weapons Systems (LAWS). My knee-jerk reaction was enthusiasm, but the question deserves more than a knee-jerk reaction. Joanne Kirkham, a French scholar (unconnected to the Austin conference), wrote her dissertation on LAWS. I hope to give it a proper read soon, but for now here's what I take from skimming through it.
International law is not a "sacred law" that may never be amended, but it is rich and flexible enough to adapt to new realities such as LAWS. This is especially true given that LAWS are not fundamentally disruptive and that the current legal framework governing them is not obsolete. So while the substantive rules of international law shouldn't be rewritten, soft law mechanisms such as codes of conduct could prove useful in keeping international law in sync with new technologies such as LAWS.
◾ Finally, there's another use of AI in a military context that I learned about in Austin and found quite interesting: drones deployed shortly after the bombing of an area to create digital twins of what remains. In the future, this kind of real-time evidence gathering could be used to prosecute war crimes. However, these technologies would have to be managed by neutral and responsible third parties, and it's not clear who those third parties would be. The first actor that comes to mind, the United Nations, is paradoxically lagging behind when it comes to assessing the human rights impact of the technologies it uses. At #IGF2023, UN representatives presented their efforts to "walk the talk" on human rights due diligence. What became clear during that presentation is that the UN's efforts, while laudable, are long overdue. The fact that there is currently no human rights impact assessment process for the digital technologies the UN uses is a legitimate source of concern when we know, for example, that the United Nations High Commissioner for Refugees has used iris-scanning technology in Jordanian refugee camps.