South Korea has introduced the “world’s first” comprehensive regulations on AI, frustrating business interests even as activists argue the new laws don’t go far enough.
The new AI Basic Act took effect last Thursday, the Guardian reports, and aims to strengthen trust and safety while also promoting growth in the sector. Lawmakers hope it will help position the country among the world’s three leading AI powers, alongside the US and China.
It mandates human oversight for “high-impact AI,” including healthcare, transport, finance and nuclear safety. Products and services using generative or high-impact AI must be clearly labelled, and the most powerful models will require safety reports, although none currently meet the high threshold.
Companies will have a grace period of at least one year to allow regulators and industry to adapt before enforcement begins. Regulators are planning a guidance platform and dedicated support centre during this period.
“Additionally, we will continue to review measures to minimise the burden on industry,” a ministry spokesperson said, noting that authorities may extend the grace period if warranted by industry conditions.
However, startups complain that the law’s language is vague and may stifle innovation as companies seek to avoid regulatory risk. Others warn that it puts them at a competitive disadvantage as all Korean companies face regulation while only the largest foreign firms meet the threshold for compliance.
“There’s a bit of resentment,” Lim Jung-wook, co-head of South Korea’s Startup Alliance, said. “Why do we have to be the first to do this?”
Civil society groups argue that the law doesn’t go far enough, pointing to a need for greater protections for those harmed by AI systems. A deepfake porn crisis in 2024 sparked widespread outrage in South Korea, after a journalist uncovered “countless” Telegram chatrooms creating and distributing AI-generated sexual imagery of young women and girls.
Female South Korean celebrities face a similar onslaught, constituting more than half of the individuals depicted in deepfakes worldwide, according to a 2023 report by Security Hero.
Minbyun, a collective of human rights lawyers, along with three other organisations, issued a joint statement on Friday arguing that the law contains almost no provisions to protect citizens. It designates no prohibited AI systems, and its “user” protections apply only to institutions like hospitals and financial companies that deploy AI systems, not to individuals harmed by the technology.


