In this article for IAPP, Andrew Eichen examines South Korea’s AI Basic Act, the country’s new AI law, clarifying who must comply, which systems are covered, and how risk-based obligations will be enforced. Please note that the original publication is paywalled, but the full text is displayed below.
South Korea’s AI Basic Act, which took effect 22 Jan. 2026, represents the first major step toward regulating AI in one of Asia’s largest technology markets. Following in the footsteps of the EU AI Act and Colorado AI Act, Korea’s law is the most recent entry in the international turn toward risk-based AI regulation. Organized around three distinct regulatory tracks, the act imposes transparency obligations on generative and high-impact AI, safety requirements on frontier models, and broader governance duties on high-impact systems deployed in sensitive domains.
While the AI Basic Act’s text has been available for more than a year, it offers scant detail on the scope of obligations or on which entities in the AI supply chain each requirement applies to. However, the recently released presidential decree and accompanying guidelines issued by the Ministry of Science and ICT (MSIT) go a long way in clarifying the law’s operational substance. Though interpretive rather than legally binding and likely to evolve as the framework matures, they represent our best current understanding of how the law will apply.
Who Is Covered?
The act applies to the AI business operator, which is defined as any entity engaged in business related to the AI industry. Foreign entities whose AI systems may affect users in Korea are explicitly covered, and those without a local address must appoint a domestic representative if they meet certain thresholds – total annual revenue over KRW1 trillion, AI services revenue exceeding KRW10 billion, or at least one million daily users in Korea.
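As a rough illustration, the threshold test for appointing a domestic representative might be expressed as follows. The function and its input names are hypothetical constructs for this sketch; only the KRW and user figures come from the decree as summarized above.

```python
# Hypothetical sketch of the domestic-representative threshold test
# described above. The function and input names are illustrative;
# the thresholds are those summarized in this article.

def needs_domestic_representative(
    has_korean_address: bool,
    total_annual_revenue_krw: int,
    ai_services_revenue_krw: int,
    daily_users_in_korea: int,
) -> bool:
    """Return True if a foreign operator must appoint a domestic representative."""
    if has_korean_address:
        return False  # the duty applies only to entities without a local address
    return (
        total_annual_revenue_krw > 1_000_000_000_000  # over KRW 1 trillion
        or ai_services_revenue_krw > 10_000_000_000   # over KRW 10 billion
        or daily_users_in_korea >= 1_000_000          # at least 1 million daily users
    )
```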
The framework imposes obligations on two types of defined entities: AI developers and AI-using business operators.
An AI developer is an entity that develops and provides AI. As the decree and guidelines clarify, “develop” encompasses both original creation and modifications that significantly affect the system’s performance. A party outsourcing development may also be considered the developer for regulatory purposes if they exercise sufficient control over the design process.
An AI-using business operator is a party that provides AI products or services to users utilizing AI obtained either directly or indirectly from a developer. This includes entities that use off-the-shelf AI systems to provide services and those that integrate AI models into broader systems sold to users. Internal use for experimentation, testing or reference purposes is excluded. A party may hold both statuses if it develops an AI model and also uses it to provide services or products to end users.
The AI Basic Act broadly defines AI as an electronic implementation of human intellectual abilities, such as learning, reasoning, perception, decision-making and language comprehension.
Three types of AI systems are specifically covered under the act. Generative AI systems generate text, sound, images, video and other outputs by imitating the structure and characteristics of input data. High-performance AI systems are defined as those whose cumulative compute used for training exceeds a certain threshold and whose operation may pose systemic risk. Finally, high-impact AI systems are those that operate in certain enumerated domains and have the potential to significantly affect human life, safety or fundamental rights.
Transparency Obligations
The first set of rules focuses on transparency toward end users and applies to both high-impact and generative AI systems. Obligations fall exclusively on the AI-using operator who provides the product or service to users, but are waived where the AI nature of the product, service or content is already obvious.
AI-using operators must provide individuals with advance notice when products or services are AI-based. This duty appears to apply not only when users directly interact with an AI but also when an operator uses a high-impact system to provide a service. The decree clarifies that notice can take many forms, including disclosures on the product, user manuals, terms of service or at the place of service.
The law also imposes labeling requirements on AI-generated content. Labeling may use either human-perceptive methods like a visual notice or machine-readable methods such as watermarks or metadata. If relying on the latter, however, a one-time human-readable notice is still required. For outputs displayed only within the service, labels may appear either on the output itself or in the user interface. For outputs users can download or share, the output itself must be labeled so its AI-generated nature remains evident when externally distributed.
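For the machine-readable route, one simple possibility is to attach a provenance flag as image metadata. The sketch below does so for a PNG using the Pillow library; the metadata keys are hypothetical, this is not a prescribed compliance method, and a one-time human-readable notice would still be required alongside it.

```python
# A minimal, hypothetical sketch of machine-readable labeling via image
# metadata, using the Pillow library. Real deployments would likely use a
# standardized provenance scheme; the keys below are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_png_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy a PNG, embedding metadata fields flagging it as AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # illustrative key
    metadata.add_text("generator", "example-model-v1")  # hypothetical value
    image.save(dst_path, pnginfo=metadata)
```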
Deepfakes, defined as virtual sounds, images, or videos that are difficult to distinguish from reality, face stricter rules. Unlike other AI-generated content, they must always carry a human-readable notice, and the guidelines prescribe format-specific rules. Videos, for example, must display a logo throughout the entirety of an AI-generated clip. Less intrusive notices are acceptable for artistic or creative works where a more prominent label would hinder the work’s enjoyment.
Frontier AI
The act imposes safety and incident response obligations on developers of high-performance models, systems commonly referred to as frontier AI. An AI model is subject to these requirements if all of the following criteria are met: the cumulative compute used to train the system is equal to or greater than 10²⁶ floating-point operations; the system represents state-of-the-art technology; and the system has the potential to cause widespread and significant impacts on life, safety or fundamental rights.
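For context on the compute criterion, a common rule of thumb estimates the training compute of a dense transformer as roughly six times the parameter count times the number of training tokens. The sketch below applies that heuristic; it is an illustrative assumption, not the act’s measurement methodology.

```python
# Back-of-the-envelope check against the 10**26 FLOP threshold, using the
# common ~6 * parameters * tokens approximation for dense-transformer
# training compute. The heuristic is an assumption for illustration only.

THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens

# Example: a 1-trillion-parameter model trained on ~17 trillion tokens
flops = estimated_training_flops(1e12, 1.7e13)
print(f"{flops:.2e} FLOPs; exceeds threshold: {flops >= THRESHOLD_FLOPS}")
```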
Compliance obligations fall on the developer who trained the AI rather than the AI-using operator. Downstream parties, however, may become responsible if they substantially modify the system. Examples of qualifying modifications include retraining a significant portion (one-third or more) of the parameters, fine-tuning that changes the system’s scope or risk profile, making architectural changes, integrating the system with a high-impact AI, and making deployment-environment changes that expand the system’s use.
First, developers must establish a risk management program covering the entire AI lifecycle and addressing risks to life, safety and fundamental rights. At minimum, a risk management program must lay out processes for risk identification, risk assessment and risk mitigation.
Risk identification involves identifying systemic risks, errors, bias, security vulnerabilities, and potential for misuse based on research, past incidents and red-team testing.
Risk assessments must analyze risks based on factors including severity, materiality, likelihood, frequency and manageability. Assessments must be documented and updated quarterly, be multidisciplinary in nature and be performed by a group independent from an organization’s development, business and marketing teams.
Risk mitigation requires a process for prioritizing risks by severity and likelihood and developing relevant mitigations for each. Entities must establish a structured process for approving mitigation plans with escalating requirements based on risk level. Following mitigation, the risk assessment must be repeated to confirm effectiveness.
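As a simple illustration of severity-and-likelihood prioritization, the sketch below scores each identified risk on a 1–5 scale and sorts the register accordingly. The scale, the multiplicative score and the data structure are all assumptions; the decree does not prescribe a particular scoring model.

```python
# Hypothetical risk-prioritization sketch: score each identified risk by
# severity and likelihood, then sort the register so the highest-priority
# risks are mitigated first. The scoring model is an assumption.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int    # 1 (minor) .. 5 (catastrophic)
    likelihood: int  # 1 (rare)  .. 5 (near certain)

    @property
    def priority(self) -> int:
        return self.severity * self.likelihood

risks = [
    Risk("biased outputs in loan screening", severity=4, likelihood=3),
    Risk("prompt-injection data leak", severity=5, likelihood=2),
    Risk("minor formatting errors", severity=1, likelihood=5),
]

for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"priority={risk.priority:2d}  {risk.description}")
```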
Second, developers must establish an incident response system to monitor and address safety incidents – defined as events with significant impact on life, safety, fundamental rights or public safety. Examples include harmful or illegal outputs, unintended exposure of personal or sensitive information, major system malfunctions or loss of control, and physical or social harm from AI outputs. The framework requires developers to:
- Define criteria for what constitutes a safety incident and implement processes to continuously monitor for actual incidents and early warning indicators.
- Designate a dedicated team to manage incident response. The team must be multidisciplinary, independent from internal business interests and able to draw on external expertise when needed.
- Develop clear procedures for each stage of the response, from recognition through post-incident review. All personnel involved in the system’s lifecycle must undergo appropriate training on these procedures.
- Report to MSIT within 24 hours of becoming aware that a covered safety incident occurred. An initial response report is due within seven days, followed by a resolution report within 15 days. A sketch of how these deadlines stack is shown below.
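As an illustration of the reporting timeline, the following sketch computes each deadline from the moment of awareness. The helper name is hypothetical, and treating the seven- and 15-day windows as running from awareness (rather than from the initial notification) is an assumption; the decree should be consulted for the exact trigger.

```python
# Hypothetical helper computing the reporting deadlines described above,
# measured from when the developer becomes aware of a covered incident.
# Whether the 7- and 15-day windows run from awareness or from the initial
# notification is an assumption here; check the decree for the exact rule.
from datetime import datetime, timedelta

def reporting_deadlines(aware_at: datetime) -> dict[str, datetime]:
    return {
        "notify_msit": aware_at + timedelta(hours=24),
        "initial_response_report": aware_at + timedelta(days=7),
        "resolution_report": aware_at + timedelta(days=15),
    }

for name, due in reporting_deadlines(datetime(2026, 3, 1, 9, 0)).items():
    print(f"{name}: {due:%Y-%m-%d %H:%M}")
```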
Developers must submit documentation of both the risk management program and the incident response system to MSIT within three months of determining that an AI meets these criteria. Additional submissions are required within one month if a modification increases risk or new risks are discovered.
High-Impact AI
High-impact AI systems are those deployed in specific high-risk domains with the potential to significantly affect life, safety or fundamental rights. The requirements for these systems represent the centerpiece of the AI Basic Act, yet the statute provides scant details on scope and applicability. Fortunately, the guidelines fill in many of the gaps.
Determining High-Impact Status
While the statutory language suggests that all AI used in the enumerated domains is considered high impact, the guidelines introduce a two-step test for determining a system’s status. Operators must perform and document this assessment before deployment. Those uncertain may request confirmation from MSIT, though any determination is only administrative guidance and not binding in court.
The first step asks whether the AI is used in a designated domain. The act lists a number of high-impact use cases, including energy supply, healthcare services, transportation and consequential decisions affecting individuals, such as employment and loans. The full list of covered domains can be found in the act’s definitions.
If the AI operates in a covered domain, operators proceed to step two, which requires assessing whether the AI has the potential to significantly impact life, safety or fundamental rights. Rather than a unified framework, the guidelines offer sector-specific criteria and illustrative examples to help operators conduct their own context-dependent assessments. The importance of meaningful human oversight is a recurring theme across sectors. Where a person with proper authority substantively reviews the AI’s output and makes the final decision, the system is generally not considered high-impact; where review is largely a rubber stamp or decisions rely substantively on the AI’s output, the system is more likely to cross into high-impact territory.
That said, human oversight is just one factor among several and not a factor at all for some sectors. For medical devices, high-impact status is tied to the country’s Digital Medical Products Act classification system. For consequential decisions, other factors include whether life or safety is at issue, whether the decision effectuates a major change in rights and whether the resulting harm is recoverable.
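Put together, the two-step test might be sketched as follows. The domain list is illustrative and incomplete, and the boolean inputs grossly simplify what the guidelines frame as documented, sector-specific, context-dependent assessments.

```python
# Hypothetical sketch of the two-step high-impact test described above.
# The domain list is illustrative and incomplete; the actual guidelines
# require documented, sector-specific, context-dependent assessments.

COVERED_DOMAINS = {"energy supply", "healthcare", "transportation",
                   "employment decisions", "loan decisions"}

def is_high_impact(
    domain: str,
    affects_life_safety_or_rights: bool,
    meaningful_human_review: bool,
) -> bool:
    # Step 1: is the AI used in a designated domain?
    if domain not in COVERED_DOMAINS:
        return False
    # Step 2: potential for significant impact. Meaningful human review
    # generally weighs against high-impact status, though it is only one
    # factor and is irrelevant in some sectors (e.g., medical devices).
    if meaningful_human_review:
        return False
    return affects_life_safety_or_rights
```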
Required Measures and Allocation
The requirements for high-impact AI systems center on five core obligations that apply to both developers and AI-using operators: establishing a risk management plan, offering explanations, implementing user protection measures, ensuring human supervision and maintaining documentation. What makes this portion of the AI Basic Act particularly difficult to navigate, however, is the nuanced allocation of responsibilities. Unlike the EU AI Act, which places the bulk of the burden on the provider, Korea’s framework splits duties between the developer and AI-using operator based on the specific stage of the AI lifecycle each entity controls.
Two nuances further govern this allocation. First, the boundary between developers and AI-using operators is not static. If an AI-using operator makes a significant fundamental change to AI received from a developer, that operator becomes a developer themselves and must comply with both sets of obligations. Whether a change qualifies as significant depends on the extent to which the system still resembles the original, with relevant factors including changes in purpose or use and their foreseeability to the original developer. The guidance notes, for example, that integrating a general-purpose foundation model into a high-impact system would reclassify the integrator as a developer.
Second, the decree provides a way to reduce duplication of compliance work. If a developer has already implemented the requisite measures for risk management, explanation, or user protection and the AI-using operator has not materially modified the system, the operator may adopt the developer’s protocols and be deemed compliant with those requirements.
The guidance does not clarify what it means to adopt the developer’s standards, but it appears not to be a wholesale exemption. Rather, the operator must actively incorporate the developer’s standards into their own compliance framework, operationalizing and supplementing them to address risks unique to the deployment context. This deference does not extend to human oversight or documentation requirements, which remain the operator’s independent responsibility.
Risk Management Plan
Both developers and AI-using operators must establish and implement a plan to identify, analyze and treat risks to life, safety and fundamental rights.
| Component | Developer | AI-Using Operator |
|---|---|---|
| Risk identification | Identify risks arising during design, data collection, model training and testing. | Identify deployment-specific risks including vulnerabilities of user population, integration issues and patterns associated with misuse. Continuously update risks using operational data such as incidents and complaints. |
| Risk analysis and evaluation | Analyze severity and probability of AI-specific risks. Identify affected stakeholder groups and analyze benefits and risks for each. | Analyze risks from malfunctions, misuses and unintended uses in the specific deployment context. Identify affected stakeholder groups and analyze benefits and risks for each. |
| Risk treatment | Implement and document elimination, mitigation or monitoring measures. Provide records to AI-using operators. | Develop treatment plan for risks that may arise during deployment. |
| Organization | Designate oversight personnel with relevant expertise – technical, legal and ethical – that are functionally independent from development teams. | Same requirement as a developer. |
Explanation Plan
To the extent technically feasible, operators must develop a plan to explain AI-generated outputs, describe the key criteria considered by the system and provide an overview of the training data used.
| Component | Developer | AI-Using Operator |
|---|---|---|
| Transparency and explainability infrastructure | Document operating principles, functions, limitations, algorithm architecture and model specifications. Implement technical measures to enable explainability. | N/A |
| Training data management | Define and document data specifications including format, size, quantity, collection criteria and preprocessing procedures. | N/A |
| Explanation plan development | N/A | Develop procedures to provide user-facing explanations that specify who receives explanations, when, for which functions, what content is provided and how. |
| Explanation plan publication | N/A | Publish key components of the explanation plan on the website, including the responsible department, explanation procedures and how users can request explanations. |
User Protection Measures
Parties must establish measures to protect users from AI-related harms, including privacy protection, safe design, testing and ongoing monitoring.
| Component | Developer | AI-Using Operator |
|---|---|---|
| Data collection and management | Document the purpose and methods of data processing. Ensure data collection is lawful and consent has been obtained. Implement security measures. | Review developer’s data practices. Establish parallel data collection processes if collecting additional data. Implement security measures. |
| Safe design and development | Design system for safety and robustness. Build error detection and emergency stop capabilities. Build defenses against adversarial attacks. | N/A |
| Testing and evaluation | Test systems using diverse risk-based scenarios considering technical characteristics and potential deployment environments. | Test system in actual deployment environment to verify performance in specific context. |
| Monitoring and response | Create technical infrastructure to allow for system monitoring. Ensure outputs are stored and analyzable. | Set operational safety constraints to prevent use beyond intended functions. Develop incident response procedures. |
| Feedback collection | Provide technical support to AI-using operators. Analyze collected feedback and integrate into system and model updates. | Establish channels for user complaints and error reporting and integrate feedback into operations. Send feedback to developer. |
| User rights protections | N/A | Notify users what data is being processed. Provide users with the ability to suspend use, object to decisions and request explanations. Establish procedures for redress and compensation. |
Human Management and Supervision
Developers must build in human supervision capabilities, and AI-using operators must ensure systems are operated with meaningful oversight.
| Component | Developer | AI-Using Operator |
|---|---|---|
| Intervention criteria | Design criteria for when humans should intervene. | Implement developer’s criteria. Maintain ability to pause, restart or disable system. |
| Intervention methods | Design and build error detection, emergency stop capabilities and diagnostic tools | Ensure personnel have access to diagnostic tools and override capabilities. |
| Periodic inspection | Establish inspection plans, designate responsible personnel and define inspection cycles. | Same requirement as a developer. |
| Training | Train AI-using operators on system operation, common error types and response measures. | Train end users on AI capabilities and limitations. |
Documentation and Publication
Both developers and AI-using operators must prepare a safety and trust document detailing all measures taken to comply with the aforementioned requirements for high-impact systems. The documentation must be provided to MSIT upon request and retained for five years.
Covered entities must also publish key aspects of their compliance plan on their websites, covering the risk management policy and organizational structure, the explanation plan and user protection measures. The name and contact information of the person responsible for the AI system must also be included.
Impact Assessment
In addition to the five compliance measures described above, businesses must make efforts to assess the impact of their high-impact AI on fundamental rights.
The assessment requires businesses to identify affected individuals and groups, map which constitutional rights may be implicated, evaluate the scope and severity of potential impacts, and document mitigation measures. The framework is designed with interoperability in mind: entities that have completed comparable assessments, e.g., the EU AI Act’s Fundamental Rights Impact Assessment or ISO/IEC 42001, may use existing documentation to satisfy the requirement. MSIT is currently developing recognition procedures for specific frameworks.
Enforcement
MSIT is tasked with enforcing the law and may request data, conduct inspections and issue corrective orders. While administrative fines can reach up to KRW30 million, MSIT has stated it will provide a one-year grace period before imposing fines.
Takeaways and Next Steps
Despite the one-year grace period before fines take effect, businesses should not treat the coming year as a compliance holiday. MSIT retains authority to issue corrective orders during this window. Given the framework’s complexity, organizations will need time to build out the required processes.
While many requirements under the AI Basic Act have parallels in other frameworks like the EU AI Act, covered entities should not assume their existing compliance programs will suffice. The framework takes a distinct approach, particularly for AI-using operators, who face duties that go beyond deployer obligations under comparable laws.
Organizations providing or deploying AI in Korea should consider the following steps.
Determine Which Regulatory Tracks Apply
Identify whether your AI activities involve generative AI, frontier models or one of the enumerated high-impact domains. Each track carries different obligations and applies to different parties in the supply chain.
Map Your Supply Chain Role
Determine whether you qualify as a developer, an AI-using operator or both. Note that these categories do not align neatly with the provider and deployer roles under the EU AI Act.
Assess High-Impact Status
For each AI system you develop or deploy, document your analysis of whether it operates in a designated domain and has the potential to significantly affect life, safety or fundamental rights. Do not assume systems classified as high-risk under the EU or Colorado AI Acts will receive the same treatment. The guidance sets out sector-by-sector analyses with distinct criteria, and some systems deemed high-risk elsewhere may not qualify here, and vice versa.
Request Upstream Documentation
AI-using operators providing high-impact systems should request developers’ compliance documentation sooner rather than later, allowing them to leverage the developer’s existing work and focus on deployment-specific measures rather than building a compliance program from scratch.
Reprinted with permission from IAPP. © 2026 IAPP. Further duplication without permission is prohibited. All rights reserved.