South Korea has implemented what officials describe as the world’s first fully enforceable AI legislation, aiming to position the country among the leading global AI powers alongside the United States and China.
The AI Basic Act, which took effect last Thursday, sets rules for AI use while focusing primarily on promoting industry growth.
The law comes amid growing international concern over AI-generated media and automated decision-making, as governments struggle to keep up with rapidly advancing technologies.
Key Requirements for Companies Under the Law
Under the act, companies offering AI services must label AI-generated content. Invisible digital watermarks are required for clearly artificial outputs such as cartoons and artwork, while visible labels must be applied to realistic deepfakes.
“High-impact AI” systems used for medical diagnosis, hiring, or loan approvals must undergo risk assessments and document how decisions are made.
Extremely powerful AI models also require safety reports, although no existing model meets the high threshold set by the law.
Companies that violate the rules can face fines of up to 30 million won (£15,000). However, the government has promised at least a one-year grace period before penalties are enforced.
Startup Concerns and Compliance Challenges
Despite the law’s industry focus, many AI startups say they are unprepared: a December survey by the Startup Alliance found that 98% of AI startups were not ready for compliance.
Lim Jung-wook, co-head of the alliance, said: “There’s a bit of resentment. Why do we have to be the first to do this?”
Companies are responsible for determining whether their systems qualify as high-impact AI, a process critics say is lengthy and creates uncertainty.
Smaller Korean firms face the same regulatory obligations as large domestic companies, while only certain foreign firms, such as Google and OpenAI, fall under the law, raising concerns over competitive fairness.
Civil Society Groups Warn of Limited Citizen Protection
Civil society organizations have criticized the law for not sufficiently protecting people affected by AI systems.
South Korea accounted for 53% of global deepfake pornography victims in 2023, according to Security Hero. In 2024, investigations exposed networks on Telegram distributing AI-generated sexual imagery of women and girls.
Four organizations, including the human rights lawyers’ group Minbyun, said the law focuses on protecting “users” such as hospitals, financial institutions, and public agencies, rather than individuals harmed by AI. Exemptions for systems with human involvement in decision-making, they argued, create significant loopholes.
The country’s human rights commission also highlighted that unclear definitions for high-impact AI leave those most at risk of rights violations outside regulatory oversight.
South Korea’s Principles-Based Approach to AI Governance
Unlike the EU’s strict risk-based model, the US and UK’s sector-specific approach, or China’s state-led industrial policies, South Korea has opted for a flexible, principles-based framework.
Melissa Hyesun Yoon, a law professor at Hanyang University, described the strategy as “trust-based promotion and regulation,” an approach that could provide a reference point for global AI governance discussions.
Alice Oh, a computer science professor at KAIST, said that while the law is not perfect, it is designed to evolve without stifling innovation.