According to a report from Time magazine, OpenAI, the company behind the ChatGPT AI chatbot, allegedly lobbied to weaken stricter provisions in the draft of the AI Act passed by European Union lawmakers.
Documents obtained from the European Commission suggest that OpenAI advocated for less stringent provisions in the legislation, aiming to reduce the regulatory burden on the company. The report claims that some of OpenAI’s proposed amendments made it into the final draft law, which will undergo further negotiations before being finalized in January.
In 2022, OpenAI argued that its general-purpose AI systems, such as ChatGPT and Dall-E, should not be classified as “high risk” under the AI Act. This stance aligned with similar arguments from Microsoft and Google, which also sought more lenient rules for AI deployment.
Earlier versions of the AI Act would have classified systems like ChatGPT and Dall-E as high risk if they generated text or imagery that could be mistaken for human-created content. OpenAI responded by suggesting that AI-generated content instead be clearly labeled as such, so users would know its origin.
OpenAI stated that it provided input on the draft legislation at the request of EU policymakers, emphasizing its commitment to engaging with them and supporting the safe deployment of AI tools.
Regarding OpenAI’s operations in Europe, CEO Sam Altman initially raised the possibility of ceasing operations in the region if compliance proved unfeasible, but later clarified that the company had no plans to leave.