This article is a contribution by Attorney Jaesik Moon of Choi & Lee Law Firm. If you would like to share quality content for startups in the form of a contribution, please contact the Venture Square editorial team at editor@venturesquare.net.

When ChatGPT first appeared, many people simply played around with it to pass the time and dismissed AI technology as a passing fad. ChatGPT was released in November 2022, and now, less than three years later, AI seems to be everywhere. It is already being used to create documents, photos, audio, and video, and even lawyers now use it to draft legal documents. While AI has certainly made work and daily life more efficient, as with any new technology, concerns about its potential negative consequences run high. There is the risk of AI being used for criminal purposes such as voice phishing, and there have been reports of people blindly trusting AI to draft legal documents, only to submit filings citing incorrect precedents, potentially resulting in damages.
To address the risks posed by the rapid development and spread of AI, major jurisdictions such as the US, Japan, and the EU are establishing AI-related laws and systems. Recognizing the need for such AI norms, South Korea enacted the Framework Act on AI (officially, the "Framework Act on the Development of Artificial Intelligence and Creation of a Trust Foundation") in December 2024, slated for enforcement in January 2026. The Ministry of Science and ICT recently prepared and announced the direction and draft of subordinate statutes under the Framework Act on AI and is currently collecting public opinion on them. Given the many startups building businesses on AI technology, including among our firm's clients, we believe familiarity with the Framework Act on AI is essential. In this column, we therefore present the core provisions of the Framework Act as the basic framework; in subsequent installments, we will introduce the direction and key contents of the recently announced subordinate statutes.
1. Purpose and legislative direction of the Framework Act on AI
Article 1 of the Framework Act on AI stipulates: “This Act aims to protect the rights and interests of the people and contribute to improving the quality of life of the people and strengthening national competitiveness by stipulating the basic matters necessary for the sound development of artificial intelligence and the establishment of a foundation for trust.” As its official name suggests, the Act focuses more on fostering the development of the AI industry and establishing a foundation for safety and trust than on regulating it.
Accordingly, a significant portion of the Framework Act on AI concerns AI policy governance and support for industry development, aimed at fostering AI technology and related industries. The remainder concerns the obligations and responsibilities of business operators in establishing a safe and trustworthy foundation for AI. This latter part is likely to be the most important for AI startups.
2. Key provisions of the Framework Act on AI
First, let us consider the national AI policy governance established for the development of AI technology and related industries. The Framework Act on AI establishes a National Artificial Intelligence Committee under the President and authorizes the creation of subcommittees and special committees to carry out the Committee's work by specialized field or specific issue (Articles 7 to 10). It also provides for the designation and operation of an AI policy center and an AI Safety Research Institute (Articles 11 and 12), establishing a foundation for work and support related to AI policy formulation, research and development, and other areas.
Furthermore, the Framework Act on AI establishes a legal basis for a range of government support measures: support for the development and safe use of AI technology (Article 13), promotion of projects for the standardization of AI technology (Article 14), support for the introduction and use of AI technology (Article 16), support for small and medium-sized enterprises and startups (Articles 17 and 18), and promotion of policies related to AI data centers (Article 25). These provisions enable the government to support the development, introduction, and utilization of AI technology in order to advance the AI industry.
Finally, as regulations for establishing a safe and trustworthy foundation for AI, the Act sets out AI ethics principles (Article 27), provides for private, autonomous AI ethics committees (Article 28), and supports various policies, inspections, and certifications to ensure safety and reliability.
Next, let us look at the obligations and responsibilities of business operators in creating a safe and trustworthy foundation for AI. The Framework Act on AI imposes on "high-impact AI" and "generative AI" business operators the obligation to ensure transparency regarding their use of AI (Article 31) and the obligation to ensure safety (Article 32). In particular, high-impact AI business operators must conduct a prior review of whether the AI they provide qualifies as high-impact AI (Article 33), and must establish and operate risk management plans and user protection measures and manage and supervise the high-impact AI (Article 34). Furthermore, for AI business operators without a domestic address or place of business that meet certain criteria, the Act stipulates the obligation to designate a domestic agent (Article 36).
Finally, as enforcement measures for violations of these obligations, the Minister of Science and ICT is granted the authority to investigate the facts and to issue suspension or corrective orders (Article 40), and an AI business operator that fails to comply may be subject to an administrative fine of up to 30 million won (Article 43).
3. Specific obligations of AI business operators
Among the provisions above, the parts of the Framework Act on AI that AI startups and other AI business operators should pay particular attention to are the obligations of AI business operators under Articles 31 to 34. As seen above, these fall into three broad categories: the obligation to ensure AI transparency (Article 31), the obligation to ensure AI safety (Article 32), and the obligations of high-impact AI business operators (Articles 33 and 34).
First, the obligation to ensure AI transparency (Article 31) requires that if a product or service provided to a user is operated by AI, or if an output is generated by AI or is difficult to distinguish from reality, that fact must be notified or displayed.
Specifically, when an AI business operator intends to provide a product or service using high-impact or generative AI, it must notify the user in advance that the product or service operates based on AI (Article 31(1)). When providing generative AI, or a product or service using it, the operator must indicate that the output was generated by AI (Article 31(2)). And when providing virtual sounds, images, or videos that are difficult to distinguish from reality using an AI system, the operator must notify or display, in a manner that allows users to clearly recognize it, the fact that the output was generated by an AI system (Article 31(3)).
Next, the obligation to ensure AI safety (Article 32) applies to AI systems whose cumulative computation used for training exceeds a certain threshold, given concerns about risks such as functional errors, data bias, and misuse. Operators of such systems must identify, assess, and mitigate risks throughout the AI lifecycle and establish a risk management system to ensure safety, including monitoring of AI-related safety incidents.
Finally, because high-impact AI is defined as AI that may significantly affect the lives, physical safety, and fundamental rights of citizens, AI business operators are required to conduct a prior review to determine whether the AI they provide qualifies as high-impact AI (Article 33). If, following this prior review, the AI qualifies as high-impact AI, the following measures must be taken to ensure its safety and reliability (Article 34): 1) establishment and operation of a risk management plan; 2) establishment and implementation of a plan for explaining, to the extent technically feasible, the final results produced by the AI, the main criteria used in deriving them, and an overview of the training data used in developing and operating the AI; 3) establishment and operation of a user protection plan; 4) human management and supervision of the high-impact AI; 5) preparation and retention of documents verifying the measures taken to ensure safety and reliability; and 6) other measures deliberated and decided by the Committee to ensure the safety and reliability of high-impact AI.
We have examined the outline and key provisions of the Framework Act on AI. However, the Act itself leaves several matters unclear: the precise scope of high-impact AI, the specific methods and means of fulfilling the transparency and safety obligations, and the cumulative training-computation threshold that triggers the safety obligation. As explained above, these will be specified in the subordinate statutes of the Framework Act on AI. The government recently announced the direction and draft of those statutes, and we will cover the details in subsequent articles.