AI regulation around the world
Countries and economic blocs around the world are at different stages of regulating artificial intelligence, from a relative "Wild West" in the United States to highly complex rules in the European Union.
Here are some key points about regulation in major jurisdictions, ahead of the Paris AI summit on February 10-11:
- United States -
Returning President Donald Trump last month rescinded Joe Biden's October 2023 executive order on AI oversight.
Largely reliant on voluntary commitments, it required major AI developers like OpenAI to share safety assessments and vital information with the federal government.
Backed by major tech companies, it was aimed at protecting privacy and preventing civil rights violations, and called for safeguards on national security.
Home to top developers, the United States now has no formal AI guidelines -- although some existing privacy protections do still apply.
Under Trump, the United States has "picked up their cowboy hat again, it's a complete Wild West", said Yael Cohen-Hadria, a digital lawyer at consultancy EY.
The administration has effectively said that "we're not doing this law anymore... we're setting all our algorithms running and going for it", she added.
- China -
China's government is still developing a formal law on generative AI.
A set of "Interim Measures" requires that AI respect personal and business interests, not use personal information without consent, signpost AI-generated images and videos, and protect users' physical and mental health.
AI must also "adhere to core socialist values" -- effectively banning AI language models from criticising the ruling Communist Party or undermining China's national security.
DeepSeek, whose frugal yet powerful R1 model shocked the world last month, is a case in point: it deflects questions about President Xi Jinping or the 1989 crushing of pro-democracy demonstrations in Tiananmen Square.
While regulating businesses closely, especially foreign-owned ones, China's government will grant itself "strong exceptions" to its own rules, Cohen-Hadria predicted.
- European Union -
In contrast to both the United States and China, "the ethical philosophy of respecting citizens is at the heart of European regulation", Cohen-Hadria said.
"Everyone has their share of responsibility: the provider, whoever deploys (AI), even the final consumer."
The "AI Act" passed in March 2024 -- some of whose provisions apply from this week -- is the most comprehensive regulation in the world.
The law bans the use of AI for predictive policing based on profiling, as well as systems that use biometric information to infer an individual's race, religion or sexual orientation.
The law takes a risk-based approach: if a system is high-risk, a company has a stricter set of obligations to fulfil.
EU leaders have argued that clear, comprehensive rules will make life easier for businesses.
Cohen-Hadria pointed to strong protections for intellectual property and efforts to allow data to circulate more freely while granting citizens control.
"If I can access a lot of data easily, I can create better things faster," she said.
- India -
Like China, India -- co-host of next week's summit -- has a law on personal data but no specific text governing AI.
Cases of harm originating from generative AI have been tackled with existing legislation on defamation, privacy, copyright infringement and cybercrime.
New Delhi knows the value of its high-tech sector and "if they make a law, it will be because it has some economic return", Cohen-Hadria said.
Occasional media reports and government statements about AI regulation have yet to be followed up with concrete action.
Top AI firms including Perplexity blasted the government in March 2024 when the IT ministry issued an "advisory" saying firms would require government permission before deploying "unreliable" or "under-testing" AI models.
It came days after Google's Gemini in some responses accused Prime Minister Narendra Modi of implementing fascist policies.
Hastily updated rules called only for disclaimers on AI-generated content.
- Britain -
Britain's centre-left Labour government has included AI in its agenda to boost economic growth.
The island nation boasts the world's third-largest AI sector after the United States and China.
Prime Minister Keir Starmer in January unveiled an "AI opportunities action plan" that called for London to chart its own path.
AI should be "tested" before it is regulated, Starmer said.
"Well-designed and implemented regulation... can fuel fast, wide and safe development and adoption of AI," the action plan document read.
By contrast, "ineffective regulation could hold back adoption in crucial sectors", it added.
A consultation is under way to clarify copyright law's application to AI, aiming to protect the creative industry.
- International efforts -
The Global Partnership on Artificial Intelligence (GPAI) brings together more than 40 countries, aiming to encourage responsible use of the technology.
Members will meet on Sunday "in a broader format" to lay out an "action plan for 2025", the French presidency has said.
The Council of Europe in May last year adopted the first-ever binding international treaty governing the use of AI, with the US, Britain and European Union joining the signatories.
Of 193 UN member countries, just seven belong to all seven major AI governance initiatives, while 119, mostly in the Global South, belong to none.