If you use AI to screen job applications or run customer-service chatbots, your service may already fall under this new law. On January 22, 2026, South Korea's 'Framework Act on the Promotion of Artificial Intelligence Industry and Creation of a Foundation for Trust' (the AI Framework Act) took full effect. According to an analysis by law firm Sejong, South Korea has become the first country to actually put comprehensive AI regulatory legislation into operation, even ahead of the EU.
Fines won't be imposed immediately. Yonhap News reported that the government plans a grace period of at least one year before imposing penalties or launching investigations, and that investigations will be reserved primarily for serious incidents, such as those involving human casualties. The Ministry of Science and ICT also intends to open support channels for businesses. A grace period, however, is not an exemption from preparation: this is exactly the time to establish standards and reorganize internal systems.
Note: This article provides practical guidance based on publicly available research. The criteria for defining 'High-Impact AI' and specific enforcement decrees are still being refined. Please consult with a legal expert for actual legal interpretations.

What Is This Law Trying to Achieve?
While the official name of the Act is lengthy, its purpose is concise. According to law firm Daeryun's summary, the law aims to foster AI technology and build trustworthiness at the same time. Promotional provisions, such as R&D support, standardization, and training-data policy, are at its core. It is not a law designed to 'block AI' but to promote it while demanding greater caution in areas with significant impact.
That doesn't mean there are no regulations. The intensity of application varies depending on what kind of AI is used and for what purpose.
Which AI Systems Face Stricter Standards?
As highlighted by SK C&C's analysis, this Act adopts a 'risk-based approach' that doesn't treat all AI equally. Practically, two categories must be distinguished first.
High-Impact AI refers to systems that can significantly affect a person's life, physical safety, or fundamental rights. Examples provided by law firm Daeryun offer a clear picture: healthcare, energy, recruitment and HR evaluations, financial credit assessment, criminal investigation and judicial support, and transportation.
Generative AI encompasses functions that create new output, such as text, images, or code. The primary obligation here concerns indicating and disclosing that the output originated from AI. However, there are concerns that, if the transparency obligation is interpreted too broadly, it could apply even to trivial uses of AI, so the detailed criteria will need to be watched.
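As a rough illustration of what output disclosure could look like in code, here is a minimal sketch that appends a visible AI-generation notice to generated text. The `GeneratedOutput` class, the notice wording, and the metadata fields are hypothetical; neither the Act nor any guideline prescribes this format.

```python
# Minimal sketch: attaching an AI-generation disclosure to generated output.
# The notice text and metadata fields are hypothetical examples, not wording
# prescribed by the AI Framework Act or any official guideline.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class GeneratedOutput:
    content: str     # the model-generated text
    model_name: str  # which model produced it
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def with_disclosure(self) -> str:
        """Return the content with a human-readable AI-generation notice."""
        notice = f"[Notice: this text was generated by AI ({self.model_name}).]"
        return f"{self.content}\n\n{notice}"


if __name__ == "__main__":
    output = GeneratedOutput(content="Draft reply to the customer...",
                             model_name="example-llm")
    print(output.with_disclosure())
```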
The simplest test is this: does the AI produce outcomes that actually affect people? Functions like filtering job applications, assessing loan eligibility, or assisting with medical diagnoses directly shape an individual's rights or opportunities. An internal meeting-summarization tool, by contrast, sits far from that line. Even under the same label of 'using AI,' the law assigns different weight.
For now, the precise criteria for 'significant impact' and 'High-Impact AI' remain unclear in practice. Rather than a black-and-white classification, what matters is a practical understanding of the risk level of each service.
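To make this heuristic concrete, here is a minimal triage sketch in Python. The domain list mirrors the examples cited above, and the tier labels, `AIFeature` class, and `triage` function are hypothetical illustrations, not the Act's legal classification.

```python
# Minimal triage sketch based on the heuristic above: does the feature
# produce outcomes that affect people? The domain list mirrors the examples
# cited in this article and is NOT the Act's legal definition.
from dataclasses import dataclass

# Example domains named in this article as likely high-impact areas.
LIKELY_HIGH_IMPACT_DOMAINS = {
    "healthcare", "energy", "recruitment", "credit", "justice", "transport",
}


@dataclass
class AIFeature:
    name: str
    domain: str                # e.g. "recruitment", "internal-tools"
    affects_individuals: bool  # does it shape a person's rights/opportunities?


def triage(feature: AIFeature) -> str:
    """Assign a rough review tier; real classification needs legal review."""
    if feature.domain in LIKELY_HIGH_IMPACT_DOMAINS and feature.affects_individuals:
        return "review-as-potentially-high-impact"
    if feature.affects_individuals:
        return "document-and-monitor"
    return "low-priority"


if __name__ == "__main__":
    print(triage(AIFeature("resume screening", "recruitment", True)))
    print(triage(AIFeature("meeting summarizer", "internal-tools", False)))
```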

What Businesses Should Focus on Now
There are immediate actions businesses can take, rather than grand organizational overhauls.
First, identify whether you have functions tied to rights or safety, such as in recruitment, finance, healthcare, or transportation. If so, simply documenting internally what data the AI reviews and what decisions it supports is a viable starting point. If you use generative AI, you also need to define how its outputs will be disclosed.
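One lightweight way to begin that documentation is a per-feature register entry. The sketch below is a minimal example; the `AIFeatureRecord` fields are our assumption about useful starting points, not a format the Act requires.

```python
# Minimal sketch of an internal AI-feature register entry. The fields are
# our own assumption about useful starting points, not a mandated format.
from dataclasses import dataclass


@dataclass
class AIFeatureRecord:
    feature: str                 # e.g. "resume screening"
    data_reviewed: list[str]     # what data the AI looks at
    decisions_aided: list[str]   # what decisions it supports
    generative: bool = False     # does it create new output for users?
    disclosure_method: str = ""  # how AI involvement is disclosed, if generative


REGISTER = [
    AIFeatureRecord(
        feature="resume screening",
        data_reviewed=["CV text", "application form answers"],
        decisions_aided=["shortlisting for interview"],
    ),
    AIFeatureRecord(
        feature="support chatbot",
        data_reviewed=["customer message"],
        decisions_aided=["drafting a reply"],
        generative=True,
        disclosure_method="banner stating replies may be AI-generated",
    ),
]
```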
Particularly crucial are records that can explain why things were done a certain way. While standards remain ambiguous, companies that document their decision-making processes will hold a significant advantage. It also helps to define the point of final human review separately from exception-handling procedures. On a practical level, it is worth designating someone to monitor the Ministry of Science and ICT's support channels and forthcoming guidelines.
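As a hedged sketch of what such records could look like, the snippet below logs each AI-assisted decision together with its rationale and the final human reviewer. The field names and the JSON-lines format are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch of an audit record for an AI-assisted decision,
# capturing the rationale and the final human review point. Field names
# and the JSON-lines format are assumptions, not a prescribed standard.
import json
from datetime import datetime, timezone


def log_decision(feature: str, ai_recommendation: str, rationale: str,
                 human_reviewer: str, final_decision: str,
                 path: str = "ai_decisions.jsonl") -> None:
    """Append one AI-assisted decision record to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "feature": feature,
        "ai_recommendation": ai_recommendation,
        "rationale": rationale,            # why it was handled this way
        "human_reviewer": human_reviewer,  # who gave the final sign-off
        "final_decision": final_decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    log_decision(
        feature="resume screening",
        ai_recommendation="advance to interview",
        rationale="score above threshold; skills match job posting",
        human_reviewer="hiring-manager@example.com",
        final_decision="advanced",
    )
```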
Startups and small to medium-sized enterprises don't need to build a perfect system from scratch. The most realistic starting point is to first categorize functions that are closest to human rights and safety.
What Developers and Planners Need to Change
Functions such as recruitment, evaluation, recommendation, and selection are not judged on technical accuracy alone. Explainability and operational accountability go hand in hand with it.
There are critiques that assessing AI risk by a single numerical criterion, such as cumulative computational power (10²⁶ FLOPS), has its limits. An AI system's risk depends on many factors, including its purpose, the environment it serves, and the quality of its training data. The more pressing questions in the field are: what is the AI for, in what environment does it operate, and whom does it affect?
Changes are also beginning from the perspective of ordinary users. Individuals are increasingly able to find out whether AI was involved in reviewing their job application or assessing their loan. Redress mechanisms for damages are not yet fully concrete, but the direction is clear.
What We Still Don't Know
To be frank, this Act is not yet a finished product. The scope of 'High-Impact AI' and the criteria for determining 'significant impact' are not yet clear in practice. Businesses are confused about what exactly falls under regulation, while civil society simultaneously criticizes the scope for being too narrow.
How individuals can seek redress if damage occurs due to AI is also not yet legally clear. Alignment with international standards, such as the EU AI Act and the US NIST AI RMF, is another area that will need further refinement. For companies operating global services, it's no longer sufficient to consider only Korean law.
These are areas that call for ongoing verification rather than definitive answers.
What You Should Ultimately Do Now
What the AI Framework Act demands is simple: the greater the impact, the more transparent and cautious the operation must be.
Businesses can start with service classification and operational records; developers, with impact assessments during the feature design phase; and users, with the habit of asking where AI is involved.
Just because there's a grace period doesn't mean it's time to relax. Those who act proactively while standards are still fluid will be at a significant advantage compared to those who scramble to comply after the standards are solidified.


