G7 Countries to Discuss AI Regulatory Policies

The Group of Seven (G7) nations, comprising the United States, Japan, Canada, Germany, France, Italy, and the United Kingdom, together with the European Union, will be discussing the implementation of global AI regulatory policies.

The G7 countries have agreed on the need for “risk-based” AI regulation that is human-centric, grounded in democratic values, and accompanied by appropriate protections. G7 officials will hold the first working-level AI meeting on May 30 to consider issues such as intellectual property protection, disinformation, and how the technology should be governed.

The G7 leaders have also called for developing and adopting international technical standards to keep AI “trustworthy” and “in line with our shared democratic values”. The G7 AI working group will seek input from the Organisation for Economic Co-operation and Development. 

Japan, as the host country, is expected to lead discussion of a human-centric approach to AI, which may involve both regulatory and non-regulatory policy tools. The G7 countries will also take up rules for AI and issues raised by generative AI tools such as ChatGPT.

What are some of the challenges presented by generative AI tools?

Generative AI tools present several challenges that businesses and organizations should consider before adopting them:

  1. Technical complexity: Generative AI models may contain billions or even trillions of parameters, making them a complex undertaking for the typical business. These models are impractically large for most organizations to train, and the necessary compute resources make the technology expensive and ecologically costly (a rough sizing sketch follows this list).
  2. Data security: Generative AI models require large amounts of data to be trained, which can be a security risk if the data is not properly secured.
  3. Intellectual property: Generative AI technology uses neural networks that can be trained on large existing data sets to create new data or objects like text, images, audio, or video based on patterns it recognizes in the data it has been trained on. This presents a slew of challenges for companies that use generative AI, including risks regarding infringement — direct or indirect — of intellectual property.
  4. Biases, errors, and limitations: Generative AI models can make mistakes, make things up, or amplify stereotypes. Even such powerful models require human supervision and double-checking of the generated outputs.
  5. Concentration of power: Because so few organizations can build these models, capability concentrates in the hands of a handful of large players, making it difficult for smaller businesses to compete.
  6. Infrastructure strain: Generative AI relies on massive language models, is processor-intensive, and is rapidly becoming as ubiquitous as web browsers. Existing, centralized data centers aren’t structured to handle this kind of load, which is pushing organizations toward hybrid solutions.
  7. Limited creativity: While generative AI can create new data based on existing patterns, it is limited in terms of creativity and originality.
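To make the scale in item 1 concrete, here is a rough back-of-the-envelope sketch (an illustrative assumption, not a figure from the G7 discussions) of how much memory the weights alone of a multi-billion-parameter model occupy at common numeric precisions; training typically requires several times more for gradients, optimizer state, and activations.

```python
# Illustrative estimate only: memory needed just to hold model weights
# at different numeric precisions. Parameter counts are hypothetical examples.

def weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate memory required to store the model weights, in gigabytes."""
    return num_params * bytes_per_param / 1024**3

if __name__ == "__main__":
    # Example sizes: 7B, 70B, and a hypothetical 1T-parameter model.
    for params in (7e9, 70e9, 1e12):
        for precision, nbytes in (("fp32", 4), ("fp16", 2), ("int8", 1)):
            print(f"{params / 1e9:>6,.0f}B params @ {precision}: "
                  f"~{weight_memory_gb(params, nbytes):,.0f} GB for weights alone")
    # Note: training adds gradients, optimizer state, and activations on top of this.
```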

How do the G7 countries plan to collaborate with international organizations like the OECD and GPAI to advance discussions on AI governance?

G7 countries plan to collaborate with international organizations like the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI) to advance discussions on AI governance in an inclusive way.

The relevant ministers will create the Hiroshima AI Process, a working group within the G7. The group will collaborate with the OECD and GPAI to discuss generative AI in an inclusive way.

These discussions are expected to take place by the end of this year. The GPAI is a multi-stakeholder initiative that aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.

The GPAI brings together engaged minds and expertise from science, industry, civil society, governments, international organizations, and academia to foster international cooperation.

The GPAI provides a mechanism for sharing multidisciplinary research and identifying key issues among AI practitioners, facilitating international collaboration and promoting the adoption of trustworthy AI.

The GPAI Secretariat is hosted at the OECD to facilitate strong synergies between GPAI’s scientific and technical work and the international policy leadership provided by the OECD, strengthening the evidence base for policy aimed at trustworthy AI.
