
The Breakthrough: 20 Global Tech Giants Announce Integration with DeepSeek

Issuing time: 2025-02-07 09:55

In the dynamic landscape of artificial intelligence, DeepSeek, a Chinese AI enterprise, has recently catapulted into the spotlight. Its large models, characterized by remarkable technological prowess and expansive application prospects, have triggered a new wave of transformation in the AI market as numerous renowned domestic and international cloud platforms and tech companies unveil their adoption of DeepSeek.

International Titans' Proactive Moves in Shaping Global AI Development

AMD was among the first to make a significant move on January 25th. The company announced the integration of the DeepSeek-V3 model into its Instinct MI300X GPU. In a post on the X platform, Dr. Lisa Su, AMD's Chair and CEO, lauded DeepSeek, highlighting "the speed and pace of innovation in the AI world" and emphasizing that "model and algorithm innovation are conducive to the popularization of AI."

Microsoft wasted no time in following suit. On January 30th, it announced that the DeepSeek-R1 model was available through Azure AI Foundry and GitHub. Microsoft plans to bring the model to its Copilot+ PC AI computers and has already rolled out an NPU-optimized version, further expanding its AI application ecosystem.

Nvidia followed on January 31st, announcing software services built around DeepSeek-R1. According to Nvidia's official website, the DeepSeek-R1 model is now offered as a preview NVIDIA NIM microservice, giving developers an API to test and experience.
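NIM microservices expose an OpenAI-compatible chat-completions interface, so calling the DeepSeek-R1 preview amounts to sending a standard JSON request. The sketch below only builds the headers and payload without sending them; the endpoint URL and the `deepseek-ai/deepseek-r1` model identifier are illustrative assumptions, not details confirmed by this article.

```python
import json

# Assumed NIM endpoint for the hosted preview; check NVIDIA's docs for the real URL.
NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_r1_request(prompt: str, api_key: str):
    """Return (headers, payload) for an OpenAI-style DeepSeek-R1 call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "deepseek-ai/deepseek-r1",  # assumed model id on the NIM catalog
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
        "max_tokens": 1024,
    }
    return headers, payload

headers, payload = build_r1_request("Explain model distillation briefly.", "NVAPI-KEY")
print(json.dumps(payload, indent=2))
```

From here, any HTTP client can POST the payload to the endpoint; the OpenAI-compatible shape is what lets existing tooling switch to the NIM preview by changing only the base URL and model name.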

Intel also joined the fray on January 31st, revealing that DeepSeek can run offline on AI PCs equipped with Core processors. On the Core Ultra 200H (Arrow Lake-H) platform, the DeepSeek-R1-1.5B model can handle tasks such as translation, drafting meeting minutes, and document writing locally and offline.

AWS (Amazon Web Services) has been active as well. On January 31st, it announced that users can deploy the DeepSeek-R1 model in Amazon Bedrock and Amazon SageMaker AI. In addition, through Amazon EC2 and Amazon SageMaker AI, users can deploy the DeepSeek-R1-Distill models on Amazon Trainium and Amazon Inferentia chips.

Domestic Enterprises' Swift Response and Advancements

Huawei Cloud took action on February 1st, reporting that SiliconFlow and the Huawei Cloud team had jointly launched the DeepSeek R1/V3 inference service on Huawei Cloud's Ascend cloud service. Leveraging a self-developed inference acceleration engine, the service enables the deployed DeepSeek models to achieve results comparable to those deployed on high-end global GPUs while maintaining stable, production-grade service capabilities.

Tencent Cloud made an announcement on February 2nd, stating that it supports one-click deployment of the DeepSeek-R1 model on its High-Performance Application Service (HAI). Developers can complete the startup and configuration of the model in just 3 minutes and seamlessly integrate it with other Tencent Cloud services, significantly enhancing the efficiency of building a complete AI application based on DeepSeek R1.

China Telecom's Tianyi Cloud joined the list on February 5th, proclaiming itself one of the earliest domestic cloud service providers to support the DeepSeek-R1 model. Tianyi Cloud has fully integrated the DeepSeek-R1 model into its intelligent computing product system, covering products and services such as the Xirang Research Assistant, Tianyi AI Cloud PC, the "Xirang" Intelligent Computing Platform, GPU Cloud Host/Bare Metal, and more.

Alibaba Cloud announced on February 3rd that its PAI Model Gallery supports one-click cloud deployment of the DeepSeek-V3 and DeepSeek-R1 models. On this platform, users can go from training to deployment to inference with zero code.

Baidu Smart Cloud officially listed the DeepSeek-R1 and DeepSeek-V3 models on its Qianfan platform on February 3rd and introduced ultra-low price plans and limited-time free services.

On February 4th, Volcano Engine announced its full support for the DeepSeek series of large models, including different sizes such as V3 and R1. Enterprise users can deploy these models on the Volcano Engine Machine Learning Platform (veMLP) or directly call them through the Volcano Ark platform.

Muxi joined hands with Gitee AI, the Chinese open-source large-model platform, to release a full set of DeepSeek-R1 Qwen-distilled models. On February 2nd, four smaller-scale models (1.5B, 7B, 14B, and 32B) were launched for the first time and deployed on Muxi's XiYun GPU. The combination of the DeepSeek-R1 models, Muxi's XiYun GPU, and the Gitee AI platform represents a complete domestic R&D and manufacturing chain, from chip to platform and from computing power to model, which the company calls "the power of 100% domestic AI."

Tianshu Zhixin announced its cooperation with Gitee AI on February 4th. In just one day, it completed adaptation to the DeepSeek-R1 model and officially launched multiple large-model services, including DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Qwen-14B. The company stated that adapting domestic GPUs to DeepSeek deeply integrates deep-learning frameworks with domestic independent hardware, promoting independent, controllable development of the domestic AI industry chain while reducing dependence on foreign hardware platforms and minimizing technical risks and costs.

Moore Threads announced on February 4th that it has completed deployment of the inference service for the DeepSeek distilled models and will soon open its self-designed KUAE (Kua'e) GPU intelligent computing cluster, which will support distributed deployment of the DeepSeek V3 and R1 models and the new generation of distilled models. Moore Threads believes that DeepSeek's open-source models and its hardware form a closed loop, validating the ability of domestic full-function GPUs to support complex AI tasks and providing a viable path toward popularizing AGI technology.

Hygon Information announced on February 4th that its technical team has completed the adaptation and launch of the DeepSeek V3 and R1 models on its Hygon DCU. The Hygon DCU is a high-performance GPGPU-architecture AI accelerator card that has been widely applied across fields. Users can access and download the relevant models through the "Light Source" section of the "Photosynthesis Developer Community" and quickly deploy and use them on the DCU platform.

Wuwenxinqiong announced support for the DeepSeek-R1-Distill 32B model on its Infini-AI heterogeneous cloud as early as January 28th. The Infini-AI platform has launched DeepSeek-R1-Distill and related products, providing DeepSeek-based model services to customers.

PPIO announced its support for the DeepSeek model on its cloud service platform on February 2nd. PPIO's computing power cloud supports the DeepSeek-V3, DeepSeek-R1, and the distilled model DeepSeek-R1-Distill-Llama-70B.

On February 2nd, 360 Digital Security announced that its security large model has officially integrated DeepSeek. Leveraging its advantages in security big data and techniques such as reinforcement learning, 360 will introduce a "DeepSeek version" of its security large model.

On February 2nd, ZStack, a company focused on cloud infrastructure, announced that its AI Infra platform ZStack Zhita fully supports private enterprise deployment of three models: DeepSeek V3, R1, and Janus-Pro. The platform can be adapted to a variety of domestic and international CPUs/GPUs, meeting enterprises' requirements across different AI scenarios.

The Rise of DeepSeek: Growing AI Infrastructure Demand and the Industry's March Towards Cost-Effectiveness

According to TrendForce, the global AI server market has been growing rapidly since 2023. By 2025, AI servers are expected to exceed 15% of overall server shipments, and by 2028 the share is likely to approach 20%. In recent years, major CSPs have been expanding actively to meet AI training demand; starting in 2025, their focus will shift to edge AI inference. In addition to adopting new-generation GPU platforms such as NVIDIA Blackwell, companies like AWS are also intensifying development of in-house ASICs to improve cost-effectiveness and meet the specific needs of AI applications. Chinese CSPs and AI players such as DeepSeek are emphasizing more efficient AI chips and algorithms to promote diversified AI demand and applications.

The AI industry has traditionally advanced by scaling up models, adding data, and improving hardware efficiency, but cost and efficiency have emerged as significant challenges. DeepSeek employs model distillation to compress large models, boosting inference speed and reducing hardware requirements. At the same time, it extracts maximum performance from the cut-down NVIDIA Hopper chips available to it, maximizing the utilization of computing resources. Its cost advantages stem from pragmatic hardware selection, new distillation techniques, and an open-source, low-cost API strategy. This not only optimizes the balance between technology and commercial application but also illustrates the AI industry's trend toward greater efficiency.
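The model distillation mentioned above trains a small "student" model to match a large "teacher" model's softened output distribution rather than only the hard labels. The toy sketch below illustrates the core objective, a KL divergence between temperature-softened distributions; the logit values are made up for illustration and are not DeepSeek's actual training data or recipe.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher temperature flattens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions: 0 when the student matches."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]  # toy teacher logits over 3 classes
student = [1.5, 1.2, 0.3]  # toy student logits, not yet matching
loss = distillation_loss(teacher, student)
print(f"distillation loss: {loss:.4f}")
```

Minimizing this loss pushes the compact student toward the teacher's behavior, which is how a 1.5B-70B distilled model can approximate a much larger one at a fraction of the inference cost.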

The successful launch of DeepSeek's large models has provided enterprises with a cost-effective and high-performance technical solution and has also spurred healthy competition and development in the artificial intelligence industry. As more application scenarios are developed and implemented, DeepSeek is expected to play an even more significant role in a wider range of fields. Its future development is undoubtedly worthy of continuous attention.
