
In view of the global availability and impact of social media, malware and disinformation, casting the widest possible net for AI regulation is essential. No country or organization should be left behind. Regional and multilateral efforts are now underway in a race to regulate AI. Using IAEA Safeguards as a model for AI regulation will ensure the safety, transparency and beneficial development of AI.

Internationally agreed AI regulations are needed to avoid conflicting or competing standards and to ensure the safety, reliability and transparency of AI built on large language or deep learning models such as ChatGPT and GPT-4. The development of international rules and regulations should involve all relevant countries and organizations.

AI Regulation Efforts To Date

France, Germany and Italy reached an agreement on 18 November 2023 on future AI regulation, with an EU authority to monitor compliance.

The President of the Russian Federation has proposed “…common principles of controlling artificial intelligence similar to the nuclear non-proliferation rules” (TASS, 24 November 2023).

The “AI Safety Summit 2023”, held on 1–2 November 2023 at Bletchley Park, UK, agreed to promote safe, reliable AI and AI regulation, and to control AI risks. At the Summit the EU, US, China and other countries agreed to “collectively manage the risk from artificial intelligence”. The United Nations also endorsed the Bletchley Declaration. The follow-up meeting to the UK intergovernmental AI Safety Summit will take place in France in 2024.

On 26 October 2023 the UN Secretary-General appointed a “High-Level Advisory Body on Artificial Intelligence” consisting of 39 members from both developed and developing countries. This Advisory Body will address the risks, opportunities and international governance of AI.

On 30 October 2023 the US President issued an Executive Order on regulating AI, i.a., to ensure the safety and security of AI applications, protect consumers and promote innovation and competition.

On 7 September 2023, the G7 (US, UK, France, Germany, Italy, Japan, Canada) committed to developing, for advanced AI systems, guiding principles and an international code of conduct for organizations. The International Draft Guiding Principles for Organizations Developing Advanced AI systems emphasize, i.a., the need for safety, security, reliability, transparency, risk management and international standards. The EU is now reviewing these Guiding Principles.

On 14 June 2023 the European Parliament adopted its negotiating position on the EU AI Act, the world’s first comprehensive law regulating AI. Once agreed with the Council and formally adopted by the EU’s member states, the AI Act is expected to become law by 2025.

On 13 July 2023 the Chinese internet regulator published finalized rules on generative AI: the “Interim Measures for the Management of Generative Artificial Intelligence Services”. The regulation incorporates many principles AI critics are debating in the West: intellectual property infringement by AI models; privacy infringement by AI; non-discrimination by AI algorithms; and transparency of AI companies on data and training.

There is thus a clear and growing awareness at the highest political levels of the importance and urgency of AI regulation, and that the experience of controlling nuclear proliferation would be relevant.

International AI Regulation Is Needed Now

The IAEA (International Atomic Energy Agency) was established to promote the peaceful uses of nuclear energy and create an effective, internationally accepted system prohibiting and controlling non-peaceful activities. For these same reasons, an International Artificial Intelligence Agency (IAIA) should be set up for AI regulation and for ensuring the safety, transparency and beneficial development of AI. This has become a matter of urgency, in particular to achieve a unified international system as opposed to individual countries or groups of countries developing their own, differing and possibly conflicting systems for AI regulation. Elon Musk and Warren Buffett have warned that the risks associated with AI are even more serious than those posed by nuclear weapons.

An IAIA (https://www.iglobenews.org/iaea-safeguards-a-model-for-international-ai-regulation/) would be responsible for the international regulation of AI and the independent and objective verification of national AI systems and applications to ensure compliance with national and international obligations. This verification process would provide assurances that AI is being developed and used for safe and peaceful purposes only, while identifying actual or potential dangerous and prohibited AI development and applications.

IAEA Safeguards: A Model for the IAIA

The IAEA’s experience is uniquely relevant. The reasons for and role of IAIA regulation correspond essentially to those for which IAEA’s international safeguards system was created. IAEA Safeguards involve the independent verification of national safeguards systems to provide credible assurances that nuclear material, items and facilities are used for peaceful purposes only.

Similarly, the IAIA would also have the two-fold task of promoting the safe and peaceful uses of AI while verifying and controlling that AI is not being used for illicit or destabilizing purposes. The obligations and benefits for the IAIA participating states would include:

  • Abiding by international AI regulations and accepting international verification of their AI activities
  • Declaring all AI R&D, AI applications and other AI activities of relevant entities, all of which should adhere to the state’s legal and other commitments and obligations for the peaceful and beneficial use of AI
  • Accepting verification of the completeness and correctness of national AI declarations
  • Admitting qualified international inspectors empowered to inspect and verify declarations of AI activities and to uncover undeclared AI activities (similar to what IAEA Safeguards inspectors do regarding nuclear materials and activities)
  • Gaining access to AI technology and technical cooperation to develop their AI systems for peaceful purposes
  • Benefiting from an early warning system in the event that AI is suspected of being used, or is being used, for illegal, anti-social or criminal purposes
  • Identifying and preventing uncontrolled development, applications and competition to capture or control the “AI market”
  • Providing information on AI inspections and other relevant, agreed data to an international AI database
  • Referring serious violations of IAIA regulations to the UN Security Council (UNSC), as is the case for serious IAEA Safeguards violations

IAIA: An Inclusive vs Exclusive Approach to AI Regulation

An IAIA would be created through an international process to promote and ensure the peaceful and productive use of generative AI. An effectively functioning IAIA requires international cooperation involving all states and their organizations/experts with AI activities and interests. Only an international approach can succeed in dealing with the global dimensions, challenges, and rapid development of AI technology. Thus, not only the G7 and EU but also other key countries, including Brazil, China, India, the Republic of Korea and Russia, must be involved from the start.


It is to be hoped that the newly formed UN High-Level Advisory Body on AI will, at the earliest possible time, start an inclusive process for international AI regulation such as an IAIA.

Picture © kjpargeter on Freepik