Trillion Labs Unveils Korea's First 70B LLM and Releases Full Model Checkpoints

- Tri-70B (70 billion parameters), the largest model developed from scratch in Korea, is released as a base model with minimal post-training.

- The company simultaneously declared "Open Source Month" and released its 0.5B, 1.8B, 7B, and 70B models under Apache 2.0, the most permissive open license, making the lineup usable commercially by both research and industry.

- All intermediate checkpoints are fully disclosed, raising the transparency of the training process to a global research standard and enabling reproducibility studies and analysis of training dynamics.

Trillion Labs (CEO Jaemin Shin), an AI startup working toward superintelligence, has unveiled Tri-70B, Korea's first large language model (LLM) with 70 billion (70B) parameters. The model is significant as the largest language model developed from scratch in Korea to date. Because it is released as a base model with minimal post-training, researchers and companies can easily customize it, which is expected to benefit both academia and industry.

Additionally, with the release of the 70B model, Trillion Labs declared "Open Source Month" and released its entire lineup of 0.5B, 1.8B, 7B, and 70B models under the Apache 2.0 license. This makes the lineup available in the most open form possible, for research and commercial use alike, going beyond simple publication of results to providing assets that research and industry can build on.

Notably, during this Open Source Month, Trillion Labs will fully disclose not only the final model but also intermediate checkpoints generated during training. This will allow academia and industry to closely study the training process of large-scale models and conduct efficient retraining and applied research.

This is a rare endeavor even on a global scale. Previously, only a few organizations, such as the Allen Institute for AI (AI2) and Hugging Face, had undertaken such disclosures; Trillion Labs is the third organization worldwide, and the first in Korea, to do so. The release is seen as establishing a new research standard, ensuring transparency and reproducibility across the entire training process.

This release also includes models specialized for multilingual translation and real-time search. The search model in particular can be integrated with search engines such as DuckDuckGo to reflect the latest information in real time, demonstrating how large language models can continuously incorporate and utilize new knowledge.
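The press release does not detail how the search integration works, but the pattern it describes is search-augmented prompting: retrieved snippets are injected into the model's prompt so answers can reflect current information. The sketch below illustrates that pattern only; `web_search` is a hypothetical stand-in for a real DuckDuckGo client, and the generation step is omitted.

```python
# Illustrative sketch of search-augmented prompting (not Trillion Labs' code).
# A real pipeline would replace web_search() with an actual search API client
# and feed the resulting prompt to a Tri-series model.

def web_search(query: str) -> list[dict]:
    # Hypothetical placeholder: returns search results as title/snippet pairs.
    return [
        {"title": "Example result", "snippet": "Up-to-date fact about " + query},
    ]

def build_prompt(question: str, results: list[dict]) -> str:
    # Format retrieved snippets as numbered context lines above the question.
    context = "\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']}" for i, r in enumerate(results)
    )
    return (
        "Answer using the search results below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

question = "latest LLM releases"
prompt = build_prompt(question, web_search(question))
print(prompt)
```

Because fresh results are fetched at query time, the model can reference information that postdates its training data without being retrained.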

Jaemin Shin, CEO of Trillion Labs, emphasized, “We are not simply creating a language model; by disclosing the training process and core techniques, we are ensuring research transparency and laying the foundation for the AI research ecosystem, not only in Korea but globally.” He added, “This disclosure is a meaningful challenge being attempted for the first time in Korea, and it will be an important starting point in showing that domestic AI companies can secure global competitiveness through technological excellence and an open research culture.”

Meanwhile, Trillion Labs, founded in August 2024, is the only startup in Korea to have independently designed and built a Korean-focused LLM from scratch. Led by CEO Jaemin Shin, a pioneer in generative AI, the team comprises top-tier AI engineers and researchers from Korea and abroad, including alumni of KAIST, Oxford, Berkeley, Amazon, and Naver. The company secured $5.8 million (approximately KRW 9 billion) in pre-seed funding in September 2024, open-sourced its preview model Trillion-7B (Trillion-7B-preview) in March 2025, and released Tri-21B in July 2025.