The United States, the United Kingdom, and more than a dozen other countries on Nov 27 unveiled what a senior US official described as the first detailed international agreement on keeping artificial intelligence safe from rogue actors, urging companies to develop AI systems that are "secure by design." In a 20-page document released that day, the 18 countries agreed that companies designing and using AI must develop and deploy it in a way that protects customers and the general public from misuse.
The agreement is non-binding and consists mostly of general recommendations such as monitoring AI systems for abuse, protecting data from tampering, and vetting software vendors.
Still, Jen Easterly, director of the United States Cybersecurity and Infrastructure Security Agency, said it was important that so many countries signed on to the idea that AI systems must prioritise safety.
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly said in a Reuters interview, adding that the recommendations represent "an agreement that the most important thing that needs to be done at the design phase is security."
The agreement is the latest in a series of initiatives by governments around the world to shape the development of AI, whose influence is increasingly felt in industry and society at large.