– Protecting AI safety with 30,000 vulnerability patterns
– Recognized by global AI companies with a 94.4% penetration rate
-Attention as a beneficiary company in the AI security market with the passage of the AI Basic Law
– From Establishment to Global AI Company Diagnosis in Just One Year… Attention on Solutions Combining ‘Security + Ethics’
Last November, AI chatbot Gemini caused controversy when it gave the inappropriate answer, “Please die.” Recently, the “multiple jailbreak” method, which repeatedly asks indirect questions to elicit answers related to violence or crime, has also become rampant. In addition, the phenomenon of “hallucination” in which AI creates false information is occurring frequently, and voice phishing and email fraud using AI are also on the rise.
There is a startup that has stepped up to solve these AI safety issues. AIM Intelligence is an AI security startup that possesses technology that blocks malicious attempts by AI users at the source. AIM Intelligence effectively blocks phishing emails, false information, and deepfake image creation, as well as hacking and cyberattack attempts.
“As AI technology advances, the importance of security also increases. Our goal is to help AI be used ethically and safely.”

This is how Sang-yoon Yoo, CEO of AIM Intelligence, explained the company’s vision, which we met at the SKT AI Lab for Startups in COEX, Gangnam-gu, Seoul. The “aim” in the company’s name, AIM Intelligence, has a dual meaning of AI and “aim.” The red dot on the company logo symbolizes an accurate crosshair, and it contains the will to precisely find and eliminate vulnerabilities in AI security.
CEO Yoo, who holds a master's degree in electrical and information engineering from Seoul National University, founded AimIntelligence in early 2024 with a junior from the Virtual Machine Optimization Lab while studying AI ethics. In a short period of time since founding the company, it has achieved notable results such as winning the 'Meta Rama Impact Innovation Award', winning the 'AI Red Team Challenge' of the Ministry of Science and ICT, and being selected for SK Telecom's 'AI Startup Accelerator 2nd Batch'. Based on these achievements, AimIntelligence attracted seed investment from Mashup Ventures. The 'AI Startup Accelerator 2nd Batch' program is a program created by SK Telecom (CEO Yoo Young-sang) and Hana Bank (CEO Lee Seung-yeol) to foster AI startups. Selected companies will receive support such as free office space, business mentoring, patents, investment, and public relations.
Although it has not even been a year since its establishment, Aim Intelligence has already been recognized for its value, with major domestic telecommunications companies diagnosing their AI services with Aimred and participating in the Claude model diagnosis project of global AI company Anthropic.
■ Development of innovative ‘attack’ and ‘defense’ solutions
AIM Intelligence's flagship products are 'AIM Red' and 'AIM Guard'. AIM Red is a diagnostic tool that automatically finds vulnerabilities in AI systems. Previously, security experts used the 'human red team' method to find vulnerabilities, but AIM Red has automated this process to increase efficiency.
Aimred tests AI systems in a variety of ways from a hacker’s perspective, for example by exploiting the fact that while direct requests for hacking code are rejected by AI, they can be vulnerable to roundabout requests based on specific scenarios.
“We have systematized known vulnerability patterns. We have developed patterns by assigning specific roles and tasks to AI to attempt attacks, and combining them with specific topics such as cyberattack code or disinformation production. We are continuously discovering new patterns through the community and competitions.”
AimIntelligence automates the creation of redtiming data by creating various attack patterns and topics and then augmenting them with synthetic data generation. AimIntelligence currently has over 30,000 vulnerability patterns. In particular, it is performing attacks in the 'multi-turn' method, which involves multiple conversations, rather than the 'single-turn' method, which involves only one question and answer. The key to the multi-turn method is that the AI remembers the context of the previous conversation and generates an appropriate response based on it. Recent AI models are blocking simple attacks, but they are revealing vulnerabilities in complex conversation processes.
AimGuard is a solution that defends against these vulnerabilities. If AimRed is an 'attack' tool that finds vulnerabilities, AimGuard can be said to be a 'defense' tool that blocks these vulnerabilities.
The core of AimGuard is its dual defense system that operates at both the input and output stages. At the input stage, it blocks malicious attempts by the user in advance, and at the output stage, it checks whether the AI's answers are appropriate to prevent abusive language, personal information leakage, and copyright infringement.
Both products show excellent performance. Aimred recorded a penetration rate of 94.4%, which is higher than Microsoft's PyRIT (33.3%). This means that it can find about three times more vulnerabilities in the same amount of time. Aimguard achieved a protection rate of 99%, surpassing Meta's Llama Guard, which achieved a protection rate of 90%. In particular, its strength is that it can diagnose vulnerabilities that reflect the unique characteristics of the Korean language and Korean culture. For example, it can effectively perform penetration and protection for topics that are particularly sensitive in Korean society, such as gender conflicts or military-related issues.
“Traditional cybersecurity analyzes software code, but AI is like a ‘black box’ and needs to find problems with various input values. AI security is also closely related to ethical issues,” said CEO Yoo, explaining the special nature of AI security.

■ The importance of AI security increases with the AI Basic Law
The importance of AI security is expected to increase further with the passage of the Framework Act on AI (Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base, etc.) by the National Assembly on December 26, 2024. This law is the second comprehensive AI regulation law enacted in the world after the EU, and mandates the transparency and safety of AI. This bill consists of contents to establish a national governance system for AI, systematically foster the AI industry, and prevent problems that may arise due to the technical limitations and misuse of AI in advance. The law also includes the obligation to ensure transparency, the obligation to ensure safety, the responsibility of business operators, and the basis for supporting private sector autonomous AI safety, reliability verification, and AI impact assessment.
Representative Yoo diagnosed that Korea is somewhat behind in the AI security field, explaining, “The US has an established AI security industry and related laws and systems in place. On the other hand, Korea is just beginning to take interest. With the recent passage of the AI Basic Act, the AI security industry is expected to develop.”
■ Dreaming of a safe AI era
Aim Intelligence plans to expand its current consulting-based services to subscription-based SaaS soon. It is also preparing customized security solutions for specialized fields such as finance and healthcare, and is also pursuing entry into the US market.
In the long term, we plan to expand our scope beyond generative AI to include physical AI security such as robots and self-driving cars. CEO Yoo emphasized, “When physical AI such as robots and self-driving cars emerge, safety will become even more important,” and “Our goal is to become a company that will be indispensable at that time.” To this end, we are actively recruiting talents from various fields such as AI safety, security researchers, and regulatory experts.
You must be logged in to post a comment.