UN resolution on AI encourages measures against malicious use

The United Nations General Assembly unanimously adopted its first global resolution on artificial intelligence Thursday.  

The non-binding resolution was led by the United States, co-sponsored by more than 120 other member states and adopted by consensus. The eight-page resolution text outlines general baseline goals for member states to promote “safe, secure and trustworthy” AI systems.

“Artificial intelligence poses existential, universal challenges. AI-generated content, such as deepfakes, holds the potential to undercut the integrity of political debates […] But AI also holds profound, universal opportunities to accelerate our work to end poverty, save lives, protect our planet, and create a safer, more equitable world,” U.S. Ambassador to the UN Linda Thomas-Greenfield stated when introducing the resolution.

“We expect [the resolution] will open up the dialogue between the United Nations, civil society, academia and research institutions, the public and private sector, and other communities for collaboration; facilitating continuous innovation and building capacity to close digital divides,” Thomas-Greenfield said.

UN AI resolution recognizes cybercrime, deepfake, privacy risks

The resolution makes recommendations covering a range of AI challenges and opportunities, touching on topics including the malicious use of AI by threat actors, the generation of potentially deceptive AI content and the need to secure personal data across the life cycle of AI systems.

The UN encourages member states to strengthen their investment in implementing security and risk management safeguards for AI systems, and to promote measures for identifying, evaluating, preventing and mitigating vulnerabilities in the design and development phase of the system prior to deployment.

These goals align with the international AI security guidelines developed by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) last November, which emphasize the importance of “secure-by-design” and “secure-by-default” principles for AI systems.

The UN resolution also noted the need for effective processes to detect and report security vulnerabilities, risks, misuse and other adverse incidents identified by end users and third parties after AI systems are deployed.

Amid growing concerns about users inputting potentially sensitive information into AI tools such as ChatGPT, the resolution also encourages member states to foster the development and transparent disclosure of “mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate, across the life cycle of artificial intelligence systems.”

While the resolution does not include the word “deepfake,” it recognized the risks of AI-generated content that may be indistinguishable from authentic content, and promoted the development of tools, standards or practices for “reliable content authentication,” specifically noting “watermarking or labelling” as examples. The resolution also called for “increasing media and information literacy” to enable users to determine when digital content has been generated or manipulated by AI.

This section is relevant not only to potential misinformation campaigns, but also to phishing and fraud, as seen in a recent case of a company losing millions of dollars due to an advanced social engineering campaign involving a video conference with multiple employee deepfakes.

The UN’s adoption of its AI resolution comes a week after the European Parliament approved the European Union AI Act, which imposes risk-based requirements on providers of AI systems, including bans on some uses and mandated labelling of AI-generated media such as deepfakes.

All 193 UN member states agreed to the resolution after months of negotiations and “heated conversations” between countries with differing views, senior U.S. officials said in response to questions about whether China and Russia resisted the resolution, according to Reuters. China ultimately became a co-sponsor of the resolution.

Last month, Microsoft reported that nation-state threat actors from China, Russia, North Korea and Iran were using large language models, namely ChatGPT, to optimize their operations. The threat groups, including the Russia-backed Fancy Bear and China-backed Charcoal Typhoon, used the chatbots to perform reconnaissance and vulnerability research, get help with scripting and translation, and generate phishing content.

Microsoft President Brad Smith commented on the UN resolution in a post on X, stating: “We fully support the UN’s adoption of the comprehensive AI resolution. The consensus reached today marks a critical step towards establishing international guardrails for the ethical and sustainable development of AI, ensuring this technology serves the needs of everyone.”
