
Google Korea's Digital Responsibility Committee announced that it held the 'Responsible AI Forum' twice in the first half of this year and discussed key issues related to the Framework Act on the Promotion of Artificial Intelligence and Creation of a Trust Foundation (hereinafter referred to as the Framework Act on AI), which was enacted in December of last year.
The Responsible AI Forum is one of the forums under the Digital Responsibility Committee, which Google Korea launched early last year to foster a responsible digital ecosystem. This year, 14 experts from fields including law, policy, IT and technology, and startups are participating as members. Meeting four times a year, the forum will examine domestic and international legislative trends and the social and ethical issues surrounding AI development, and seek more responsible ways to develop and use AI.
The first forum, held in March, addressed the theme 'Definition and Prospects of High-Impact AI,' one of the key issues of the Framework Act on AI. The Act designates AI systems that can significantly affect human life, safety, and fundamental rights in 11 fields as 'high-impact AI,' but some point out that the specific categories of high-impact AI set out in the law are ambiguous, which may lead to differing interpretations when the law is applied.
Accordingly, the Responsible AI Forum invited Professor Lee Seong-yeop of Korea University's Graduate School of Technology Management, an expert in AI regulation law, as a presenter to review the definition and standards of high-impact AI and the current state of regulation, and to discuss approaches that promote AI industry innovation while ensuring safety. The experts who participated agreed that the law should be revised to fully reflect the perspectives and realities of the AI industry ecosystem, reorganizing ambiguous or overlapping provisions and supplementing follow-up responsibilities and procedures.
The second forum, held on May 21, dealt with 'Securing AI Safety and Transparency and AI Impact Assessment,' obligations mandated under the Framework Act on AI. The Act requires AI safety and transparency measures and AI impact assessments in order to preemptively address the risks that rapidly developing AI technology may pose and to protect citizens' fundamental rights. Two limitations have been pointed out, however: ▲regulations that uniformly impose safety and transparency measures across widely varying tiers of the AI industry may amount to excessive regulation, and ▲the standards defining the subject, scope, and method of AI impact assessments are ambiguous.
On this point, Professor Lee Sang-yong of Konkuk University Law School presented on 'Two Paradigms of AI Regulation: Context-Based Regulation and Capability-Based Regulation.' He divided AI risks into 'contextual risks,' centered on specific usage situations, and 'capability risks,' centered on a system's potential capabilities, and emphasized the need for flexible, autonomous regulation and for national strategies and directions that can respond to changes in the AI industry. Professor Kwon Eun-jung of Gachon University Law School, in her presentation 'AI Impact Assessment: Significance, Current Status, and Tasks,' introduced the significance of the impact assessment system and its status at home and abroad, and noted the need for AI regulatory legislation that accounts for diversifying risk types, as well as for an integrated AI assessment platform built through cooperation between government and the private sector.
The experts attending that day agreed that a more effective AI impact assessment methodology should be developed from both technical and socio-ethical perspectives, and comprehensively discussed the subject and scope of AI impact assessments, the responsibilities of assessing entities, and how the results should be reflected in policy. In particular, they stressed that AI impact assessment must carefully weigh the responsibilities of diverse actors including startups, the pace of AI technology development, and consistency with international norms, and agreed that a more flexible, step-by-step approach is needed to secure accountability for AI use and the legitimacy of regulation.
"In an era when so many companies, organizations, and individuals are using AI programs, the safe and efficient use of AI is more important than ever," said Choi Jae-sik, chairman of the Responsible AI Forum, professor at KAIST's Kim Jae-chul AI Graduate School, and director of the XAI Research Center. "How well we respond to the limitations and vulnerabilities of existing AI services will determine the future responsibility and safety of AI, and ultimately the direction of AI leadership."
Meanwhile, the Responsible AI Forum will continue in-depth discussions of AI regulation centered on the Framework Act on AI in the first half of the year, and in the second half will examine the impact and prospects of AI technologies poised to bring innovative change across industries, such as AI agents and AI robotics.