What Is the EU’s AI Act, and How Would It Affect ChatGPT?

While ChatGPT has become very popular worldwide, it has also raised concerns about how AI might affect people’s privacy in various countries.

To address these concerns, countries are working to establish rules that allow their citizens to use AI tools while safeguarding their privacy. One notable example is the European Union’s Artificial Intelligence Act (AI Act), which aims to regulate the use of AI across EU member states. The act classifies AI tools into different categories based on their level of risk.

The act has been in development for the past two years and has undergone several changes. However, recent reports suggest that EU lawmakers are planning to finalize and implement it soon. It’s worth taking a closer look at what the EU’s AI Act entails and how it will affect ChatGPT.

What Is the EU AI Act?

The EU AI Act is a newly proposed legal framework by European lawmakers that aims to regulate the development and use of AI tools throughout Europe. The main focus of this act is to establish rules and guidelines for AI usage. To achieve this, the act introduces a classification system that categorizes AI tools based on their level of risk.

The classification includes four categories: minimal, limited, high, and unacceptable risk. Any application that falls into the unacceptable-risk category will be banned outright. High-risk applications will face stringent requirements to ensure the privacy and safety of users.

On the other hand, limited-risk applications will face only light transparency obligations, while minimal-risk applications will be largely unregulated. The primary objective of the EU AI Act is to establish clear rules that promote transparency and protect user safety. It also addresses ethical concerns arising from the use of AI tools.
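
As a quick illustration of the tiered structure described above, the sketch below encodes the four risk categories and the obligations as this article summarizes them. It is a simplification for illustration only, not an implementation of the act, and the obligation descriptions are paraphrased from the paragraphs above.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories proposed in the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Obligations paraphrased from this article's summary of the draft act.
OBLIGATIONS = {
    RiskTier.MINIMAL: "largely unregulated",
    RiskTier.LIMITED: "light transparency obligations",
    RiskTier.HIGH: "stringent requirements to protect user privacy and safety",
    RiskTier.UNACCEPTABLE: "banned outright",
}

print(OBLIGATIONS[RiskTier.HIGH])
```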

Specifically, the act acknowledges the risks associated with general-purpose applications like ChatGPT and proposes guidelines to mitigate those risks. By implementing this act, the European Union aims to strike a balance between harnessing the potential of AI and protecting the well-being and rights of its citizens.

Also Read: How to use ChatGPT 4 for free

Who Will Be Affected by the EU AI Act?

The EU AI Act is currently in the development phase, but it is expected to have implications for various entities involved in AI research, development, and usage. These include:

  1. Providers who place AI systems on the market or put them into service in the EU.
  2. Users of AI systems who are located within the EU.
  3. Providers and users in third countries where the output of the AI system is used within the EU.
  4. Importers and distributors of AI systems.
  5. Authorized representatives of AI system providers.
  6. Product manufacturers who place AI systems on the EU market under their own name or trademark.

In simpler terms, the EU AI Act will affect virtually every provider, representative, and user of AI systems operating within the European Union. These measures aim to ensure that AI is used responsibly, with transparency, and in accordance with the established regulations, safeguarding the rights and well-being of individuals within the EU.

Also Read: 100+ Best ChatGPT Prompts for Everything

Why Is Sam Altman, the CEO of OpenAI, Threatening to Remove ChatGPT From the European Union (EU)?

The EU AI Act applies to all AI applications and tools, but it could significantly affect the use of ChatGPT within the European Union. The act raises questions about various aspects of ChatGPT, including its usage, operational policies, transparency, and data collection practices.

Upon learning about the EU AI Act, Sam Altman, the CEO of OpenAI, expressed concern and issued a warning. He stated that if the act’s wording and requirements prove to be too challenging, he might decide to withdraw ChatGPT from the EU. 

However, Altman also assured that OpenAI is committed to making every effort to bring ChatGPT into compliance with the EU AI Act, adding that whether compliance can be achieved will depend on the technical feasibility of modifying ChatGPT accordingly.

Also Read: How to Download and Install ChatGPT for FREE

The EU’s AI Act Could Alter AI Development

The EU’s AI Act has garnered support from many, but it has also disappointed AI developers and companies. These individuals and organizations will be required to follow the regulations outlined in the EU AI Act and ensure that their applications and tools comply with its rules.

On one hand, the act sets guidelines and standards for responsible AI usage, aiming to protect the rights and well-being of EU residents. This is seen as a positive impact, as it promotes transparency, safety, and ethical considerations in AI development and usage.

On the other hand, the act imposes strict requirements and limitations on AI developers and companies. They must navigate these regulations, making sure their apps and tools do not violate the rules set forth in the act. This can be viewed as a negative impact, as it may introduce additional challenges, burdens, and limitations on the development of AI technologies and on innovation.

WHAT IS A ‘GPAIS’?

Lawmakers have proposed a category called GPAIS (General Purpose AI System) to address AI tools that have multiple applications, like generative AI models such as ChatGPT. Currently, there is an ongoing debate among lawmakers regarding whether all forms of GPAIS should be classified as high risk and the implications this designation would have for technology companies intending to integrate AI into their products.

However, the draft of the AI Act does not clearly specify the obligations that AI systems falling under the GPAIS category would be subject to. This lack of clarity raises questions about the regulatory requirements and responsibilities that technology companies would need to meet when deploying AI systems like ChatGPT.

Also Read: How can I use ChatGPT in unsupported countries?

WHAT IF A COMPANY BREAKS THE RULES?

According to the proposals of the AI Act, companies found to be in violation of the regulations could face significant financial penalties: fines of up to 30 million euros or 6% of their global annual turnover, whichever amount is higher.

For instance, consider Microsoft, a major backer of OpenAI, the creator of ChatGPT. If Microsoft were found to be in breach of the AI Act, it could potentially face a fine exceeding $10 billion, since 6% of its worldwide annual turnover is far above the 30-million-euro floor. This illustrates the substantial monetary consequences that companies may face if they are found to be non-compliant with the rules outlined in the AI Act.
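
As a rough sketch of how the “whichever is higher” rule plays out, the snippet below compares the fixed 30-million-euro ceiling with the turnover-based penalty. The euro-to-dollar rate and the roughly $200 billion annual revenue figure for Microsoft are assumptions for illustration, not figures from the act.

```python
# Sketch of the AI Act's proposed penalty rule: the fine is the greater of
# a fixed 30 million euros or 6% of the company's worldwide annual turnover.
# The figures below are illustrative assumptions, not official numbers.

FIXED_CAP_EUR = 30_000_000   # fixed ceiling in the draft act
TURNOVER_SHARE = 0.06        # 6% of worldwide annual turnover
EUR_TO_USD = 1.08            # assumed exchange rate for illustration

def max_fine_usd(annual_turnover_usd: float) -> float:
    """Return the maximum possible fine in USD under the proposed rule."""
    fixed_cap_usd = FIXED_CAP_EUR * EUR_TO_USD
    turnover_based = TURNOVER_SHARE * annual_turnover_usd
    return max(fixed_cap_usd, turnover_based)

# Microsoft's annual revenue was roughly $200 billion in FY2022 (approximate).
print(f"${max_fine_usd(200e9) / 1e9:.1f} billion")  # -> about $12.0 billion
```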

Also Read: What Is the Valuation of ChatGPT?

WHEN WILL THE AI ACT COME INTO FORCE?

Although the industry anticipates that the AI Act will be passed this year, there is currently no specific deadline. The Act is currently being debated by parliamentarians, and once they reach a consensus, a trilogue will take place: representatives from the European Parliament, the Council of the European Union, and the European Commission will come together to finalize the terms of the Act.

After the terms are finalized, there will be a grace period of approximately two years. This grace period allows the parties affected by the regulations to adjust their operations and ensure compliance with the Act’s requirements.

Also Read: Why Did Elon Musk Leave OpenAI?

