South Korea began enforcing its Artificial Intelligence Act today, January 22, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems.
The world-first inclusion of legal safety obligations for frontier AI sets the country apart in the global regulatory landscape.
According to the South Korean Ministry of Science and Information and Communications Technology (ICT), the new law is designed primarily to foster growth in the domestic AI sector while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies.
“We’re approaching this from the most basic level of global consensus,” Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry, said during a study session in Seoul.
Kim added that the landmark law and its implementation have prompted other countries to take notice, though he stressed that “this is not about boasting that we are the first in the world.”
He explained that the Act lays the groundwork for a national-level AI policy framework, establishing a central decision-making body, the Presidential Council on National Artificial Intelligence Strategy, and creating a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments.
“The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion,” the deputy minister said.
To reduce the initial burden on businesses, the South Korean government plans a grace period of at least one year, during which it will not carry out fact-finding investigations or impose administrative sanctions.
Instead, it will focus on consultation and education, establishing a dedicated AI Act support desk to help companies determine whether their systems fall within the law’s scope and how to respond.
Officials likewise noted that the grace period may be extended depending on how international standards and market conditions evolve.
The AI law regulates only three areas: high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI.
High-impact AI refers to fully automated systems deployed in critical sectors such as energy, transportation and finance – areas where decisions made without human intervention could significantly affect people’s rights or safety.
At present, Seoul says no domestic services fall into this category, though fully autonomous vehicles at level 4 or higher could meet the criteria in the future.
What distinguishes South Korea’s approach from the European Union’s is how it defines ‘high-performance AI’. While the EU takes an application-specific, risk-based approach targeting AI used in areas such as health care, recruitment and law enforcement, the South Korean government instead applies technical thresholds.
These thresholds include indicators such as cumulative training computation, meaning only a very limited set of advanced models would be subject to the safety requirements.
As of now, the government believes no existing AI models, either in South Korea or any other country, meet the criteria for regulation under this clause. In comparison, the EU is rolling out its own AI regulations gradually, with some measures accompanied by multiyear transition periods.
Enforcement under the South Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines, capped at 30 million won ($26,210), issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one.
Transparency obligations for generative AI largely align with those in the EU, but South Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin.
For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.
In closing, Deputy Minister Kim emphasized that the legislation is meant not to hinder innovation but to provide a basic regulatory foundation that reflects growing public concerns.
“The goal is not to stop AI development through regulation. It’s to ensure that people can use it with a sense of trust,” he said.

“The legislation didn’t pass because it’s perfect,” Kim said. “It passed because we needed a foundation to keep the discussion going.” He also acknowledged concerns from smaller firms and startups, assuring them that the government plans to stay engaged throughout the law’s implementation.