
Hon Hai Research Institute Launches Traditional Chinese LLM With Reasoning Capabilities

Rhea-AI Impact: No impact
Rhea-AI Sentiment: Very Positive
Hon Hai Research Institute has launched FoxBrain, the first Traditional Chinese Large Language Model (LLM), developed in just four weeks using 120 NVIDIA H100 GPUs. Based on Meta Llama 3.1 architecture with 70B parameters, FoxBrain demonstrates superior performance in mathematics and logical reasoning compared to existing models.

The model, backed by Foxconn (TWSE:2317), will be open-sourced and was initially designed for internal applications including data analysis, decision support, and code generation. FoxBrain utilized efficient training methods and resource optimization, processing 98B tokens of high-quality pre-training data across 24 topic categories.

The model's development showcases Taiwan's AI capabilities, achieving results comparable to world-class AI models despite limited computational resources. FoxBrain will be integrated into Foxconn's three major platforms: Smart Manufacturing, Smart EV, and Smart City, with its results to be presented at NVIDIA GTC 2025 on March 20.


Positive
  • Development of proprietary AI technology demonstrating competitive capabilities
  • Efficient resource utilization: four-week, lower-cost training run versus typical industry timelines
  • Superior performance metrics in mathematics and reasoning compared to existing models
  • Strategic application across three major business platforms
Negative
  • Still trails DeepSeek's distillation model in mathematical reasoning performance

First version by AI Research Center performs well in mathematics and reasoning

TAIPEI, March 10, 2025 /PRNewswire/ -- Hon Hai Research Institute announced today the launch of the first Traditional Chinese Large Language Model (LLM), setting another milestone in the development of Taiwan's AI technology with a more efficient and lower-cost model training method completed in just four weeks.

The institute, which is backed by Hon Hai Technology Group ("Foxconn") (TWSE:2317), the world's largest electronics manufacturer and leading technological solutions provider, said the LLM, code-named FoxBrain, will be open sourced and shared publicly in the future. It was originally designed for applications used in the Group's internal systems, covering functions such as data analysis, decision support, document collaboration, mathematics, reasoning and problem solving, and code generation.

FoxBrain not only demonstrates powerful comprehension and reasoning capabilities but is also optimized for Taiwanese users' language style, showing excellent performance in mathematical and logical reasoning tests.

"In recent months, the deepening of reasoning capabilities and the efficient use of GPUs have gradually become the mainstream development in the field of AI. Our FoxBrain model adopted a very efficient training strategy, focusing on optimizing the training process rather than blindly accumulating computing power," said Dr. Yung-Hui Li, Director of the Artificial Intelligence Research Center at Hon Hai Research Institute. "Through carefully designed training methods and resource optimization, we have successfully built a local AI model with powerful reasoning capabilities."

The FoxBrain training process was powered by 120 NVIDIA H100 GPUs, scaled with InfiniBand networking, and finished in about four weeks. Compared with inference models recently launched in the market, this more efficient and lower-cost training method sets a new milestone for the development of Taiwan's AI technology.

FoxBrain is based on the Meta Llama 3.1 architecture with 70B parameters. In most categories of the TMMLU+ test dataset, it outperforms Llama-3-Taiwan-70B of the same scale, particularly excelling in mathematics and logical reasoning (for FoxBrain's TMMLU+ benchmark, refer to Fig. 1). The following are the technical specifications and training strategies for FoxBrain:

  • Established data augmentation methods and quality assessment for 24 topic categories through proprietary technology, generating 98B tokens of high-quality pre-training data for Traditional Chinese
  • Context window length: 128K tokens
  • Utilized 120 NVIDIA H100 GPUs for training, with a total computational cost of 2,688 GPU days (see the arithmetic check after this list)
  • Employed multi-node parallel training architecture to ensure high performance and stability
  • Used a unique Adaptive Reasoning Reflection technique to train the model in autonomous reasoning
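
As a sanity check, the quoted compute figures are internally consistent: 2,688 GPU days spread across 120 GPUs works out to roughly 22 days of wall-clock time, in line with the stated four-week schedule. The sketch below uses only numbers from this announcement; the implied per-GPU throughput is an order-of-magnitude estimate, not a measured figure.

```python
# Sanity check of the training-compute figures quoted in this announcement.
# All inputs come from the press release; nothing here is measured.

num_gpus = 120        # NVIDIA H100 GPUs
gpu_days = 2688       # total computational cost
tokens = 98e9         # high-quality pre-training tokens (98B)

wall_clock_days = gpu_days / num_gpus
print(f"Wall-clock time: {wall_clock_days:.1f} days "
      f"(~{wall_clock_days / 7:.1f} weeks)")        # 22.4 days, ~3.2 weeks

# Implied average per-GPU throughput over the whole run
# (order-of-magnitude only; real throughput varies by training stage).
gpu_seconds = gpu_days * 24 * 3600
print(f"Implied throughput: ~{tokens / gpu_seconds:.0f} tokens per GPU-second")
```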

In test results, FoxBrain showed comprehensive improvements in mathematics compared to the base Meta Llama 3.1 model. It achieved significant progress in mathematical tests compared to Taiwan Llama, currently the best Traditional Chinese large model, and surpassed Meta's current models of the same class in mathematical reasoning ability. While there is still a slight gap with DeepSeek's distillation model, its performance is already very close to world-leading standards.

FoxBrain's development, covering data collection, cleaning and augmentation, Continual Pre-Training, Supervised Finetuning, RLAIF, and Adaptive Reasoning Reflection, was accomplished step by step through independent research, ultimately achieving performance approaching world-class AI models despite limited computational resources. This large language model research demonstrates that Taiwan's technology talent can compete with international counterparts in the AI model field.
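
Read as a sequence, those stages form a conventional post-training pipeline on top of a pretrained base. The skeleton below only illustrates that ordering; every function is a hypothetical placeholder, not Hon Hai's implementation.

```python
# Illustrative skeleton of the staged pipeline described above. Every
# function is a hypothetical placeholder; none of this is Hon Hai's code.

def collect_and_clean(raw_docs: list[str]) -> list[str]:
    """Data collection and cleaning."""
    return [d.strip() for d in raw_docs if d.strip()]

def augment(corpus: list[str]) -> list[str]:
    """Proprietary augmentation across 24 topic categories (~98B tokens)."""
    return corpus  # placeholder for the real augmentation step

def continual_pretrain(base: str, corpus: list[str]) -> str:
    """Continual Pre-Training on top of the Llama 3.1 70B base."""
    return base + "+cpt"

def supervised_finetune(model: str) -> str:
    """Supervised Finetuning on instruction-style data."""
    return model + "+sft"

def rlaif(model: str) -> str:
    """Reinforcement Learning from AI Feedback."""
    return model + "+rlaif"

def adaptive_reasoning_reflection(model: str) -> str:
    """Adaptive Reasoning Reflection to train autonomous reasoning."""
    return model + "+arr"

corpus = augment(collect_and_clean(["raw web text ...", "internal docs ..."]))
model = continual_pretrain("llama-3.1-70b", corpus)
for stage in (supervised_finetune, rlaif, adaptive_reasoning_reflection):
    model = stage(model)
print(model)  # llama-3.1-70b+cpt+sft+rlaif+arr
```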

Although FoxBrain was originally designed for internal group applications, in the future, the Group will continue to collaborate with technology partners to expand FoxBrain's applications, share its open-source information, and promote AI in manufacturing, supply chain management, and intelligent decision-making.
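
If and when the weights are published, loading them should follow the standard Hugging Face pattern for Llama-family checkpoints. A minimal sketch, assuming a hypothetical repository id ("hon-hai/FoxBrain-70B" is a placeholder; no release name or date has been announced):

```python
# Minimal sketch of loading a future open-sourced FoxBrain checkpoint.
# "hon-hai/FoxBrain-70B" is a hypothetical placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hon-hai/FoxBrain-70B"  # placeholder, not an announced name
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # 70B parameters: bf16 plus multi-GPU sharding
    device_map="auto",
)

prompt = "請用繁體中文解釋畢氏定理。"  # Traditional Chinese prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```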

During model training, NVIDIA provided support through the Taipei-1 Supercomputer and technical consultation, enabling Hon Hai Research Institute to successfully complete the model pre-training with NVIDIA NeMo. FoxBrain will also become an important engine to drive the upgrade of Foxconn's three major platforms: Smart Manufacturing, Smart EV, and Smart City.

The results of FoxBrain are scheduled to be shared for the first time in a session talk at NVIDIA GTC 2025 on March 20.

About Hon Hai Research Institute

The institute has five research centers. Each center has an average of 40 high technology R&D professionals, all of whom are focused on the research and development of new technologies, the strengthening of Foxconn's technology and product innovation pipeline, efforts to support the Group's transformation from "brawn" to "brains", and the enhancement of the competitiveness of Foxconn's "3+3" strategy.


SOURCE Hon Hai Research Institute

FAQ

What are the key technical specifications of FoxBrain, Hon Hai's new LLM (HNHPF)?

FoxBrain uses Meta Llama 3.1 architecture with 70B parameters, 128K token context window, trained on 120 NVIDIA H100 GPUs over 2,688 GPU days with 98B tokens of pre-training data.

How does FoxBrain's performance compare to other Traditional Chinese language models?

FoxBrain outperforms Llama-3-Taiwan-70B in most TMMLU+ categories, particularly in mathematics and logical reasoning, approaching world-leading standards.

When will Hon Hai (HNHPF) make FoxBrain available as open-source?

While Hon Hai has announced FoxBrain will be open-sourced in the future, no specific release date has been provided.

What business applications will FoxBrain serve for Hon Hai (HNHPF)?

FoxBrain will power Foxconn's Smart Manufacturing, Smart EV, and Smart City platforms, supporting data analysis, decision support, document collaboration, and code generation.
Hon Hai Precision Inds Ltd (OTC:HNHPF)
Electronic Components | Technology
New Taipei City, Taiwan