AI chips are the core source of computing power in AI servers, and as demand for AI compute grows rapidly, demand for AI chips will grow with it. According to survey data, China's AI chip market is expected to reach 85 billion yuan in 2022, 103.9 billion yuan in 2023, and 178 billion yuan in 2025.
Overview of AI Chip Industry
AI chips, also known as AI accelerators or compute cards, are designed specifically for artificial intelligence workloads. Their architectures and instruction sets are optimized for AI algorithms and applications, allowing them to efficiently support intelligent processing tasks such as vision, speech, natural language processing, and traditional machine learning.
From the perspective of technical architecture, AI chips fall into three main categories: GPUs (graphics processing units), FPGAs (field-programmable gate arrays), and ASICs (application-specific integrated circuits).
Among them, the GPU is a relatively mature general-purpose AI chip, while FPGAs and ASICs are semi-customized and fully customized chips, respectively, tailored to the characteristics of artificial intelligence workloads.
GPU
GPU is short for graphics processing unit, a hardware device specialized for high-performance computing workloads such as graphics, video, and games.
Compared with traditional central processing units (CPUs), GPUs have far more computing cores and higher memory bandwidth, which significantly improves computing efficiency and graphics rendering speed.
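To make the point about cores and bandwidth concrete, a simple roofline-style estimate caps attainable throughput at the lesser of peak compute and memory bandwidth times arithmetic intensity. The Python sketch below is illustrative only; the A100-class figures in it (roughly 312 FP16 Tensor TFLOPS and about 1.6 TB/s of HBM bandwidth) are assumed round numbers, not vendor-verified specifications.

```python
# Minimal roofline-style estimate: attainable throughput is limited either by
# peak compute or by memory bandwidth times arithmetic intensity (FLOPs per byte).
# The figures below are assumed/approximate, for illustration only.

def attainable_tflops(peak_tflops: float, bandwidth_tbs: float, flops_per_byte: float) -> float:
    """Return the roofline-bounded throughput in TFLOPS."""
    memory_bound = bandwidth_tbs * flops_per_byte  # TB/s * FLOP/byte = TFLOPS
    return min(peak_tflops, memory_bound)

# Assumed A100-class figures: ~312 FP16 Tensor TFLOPS peak, ~1.6 TB/s HBM bandwidth.
PEAK_TFLOPS = 312.0
BANDWIDTH_TBS = 1.6

for intensity in (10, 50, 200):  # FLOPs performed per byte moved from memory
    print(f"intensity {intensity:4d} FLOP/B -> "
          f"{attainable_tflops(PEAK_TFLOPS, BANDWIDTH_TBS, intensity):6.1f} TFLOPS")
```

At low arithmetic intensity the estimate is bandwidth-bound, which is why wide memory systems matter as much as core count for AI workloads.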
At present, with the release of models such as the NVIDIA A100 and H100, GPUs hold a clear computing-power advantage over other hardware, and their role has gradually shifted from graphics processing toward general computation.
Thanks to its strong computing power and deep-learning performance, the GPU has become the hardware best suited to supporting AI training and is currently the preferred accelerator chip in servers.
By purpose and performance, GPUs can be divided into two categories: professional cards and consumer-grade cards. Professional cards are typically used for high-performance computing and large-scale data processing in fields such as engineering, science, and medicine, with NVIDIA and AMD as the main vendors; consumer-grade cards mainly serve ordinary households and gamers, with NVIDIA, AMD, and Intel among the main vendors.
The relationship between the number of GPUs and total computing power (using the NVIDIA A100 as the reference GPU):
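As a back-of-the-envelope illustration of that relationship, total computing power scales roughly linearly with GPU count before interconnect and utilization losses. In the sketch below, the per-GPU figure (about 312 dense FP16 Tensor TFLOPS for an A100) and the utilization factor are assumptions for illustration, not measured values.

```python
# Rough scaling of total computing power with A100 count.
# The per-GPU peak (~312 dense FP16 Tensor TFLOPS) and the utilization factor
# are assumptions; real clusters lose throughput to communication and
# scheduling overhead.

A100_FP16_TFLOPS = 312.0   # assumed per-GPU peak
UTILIZATION = 0.5          # placeholder effective utilization

def cluster_tflops(num_gpus: int) -> float:
    """Estimated effective FP16 throughput of a cluster, in TFLOPS."""
    return num_gpus * A100_FP16_TFLOPS * UTILIZATION

for n in (8, 64, 512, 1024):
    print(f"{n:5d} x A100 -> ~{cluster_tflops(n) / 1000:.1f} PFLOPS effective FP16")
```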
On August 31, 2022, the United States placed GPU products from NVIDIA and AMD under export restrictions. NVIDIA's restricted products include the A100 and H100, while AMD's restricted GPUs include the MI100 and MI200 series. Haiguang's DCU is a type of GPGPU, and in typical application scenarios the specifications of Haiguang's Deep Computing No. 1 have reached the level of comparable high-end international products. Against the backdrop of tightening overseas controls, domestic GPU manufacturers represented by Haiguang are entering a golden period of development.
AI chips are dominated by GPUs, which account for about 90% of the total market:
FPGA
Compared with processors such as ASICs and GPUs, FPGAs have an unmatched advantage in flexibility.
An FPGA can avoid reading from DRAM entirely, completing the whole algorithm on chip. For example, Shenjian Technology (DeePhi Tech) implemented its ESE model on an FPGA and ran it on different processors (CPU/GPU/FPGA), finding that the FPGA had the shortest run time and the lowest energy consumption.
In terms of energy consumption, CPU Dense draws 11 W, CPU Sparse 38 W, GPU Dense 202 W (the highest of the group), and GPU Sparse 136 W, compared with only 41 W for the FPGA. In terms of latency, the FPGA takes 82.7 μs, far less than the CPU's 6017.3 μs and only about one third of the GPU's run time.
Source: ESE
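One way to read those figures is as energy per processing pass, i.e. power multiplied by latency. The sketch below takes the power and latency numbers quoted above at face value; the GPU latency is not stated directly, so it is estimated from the remark that the FPGA takes about one third of the GPU's time, and the pairing of sparse-mode power with the quoted latencies is likewise an assumption for illustration.

```python
# Energy per processing pass = power (W) x latency (s), using the figures
# quoted in the text above, taken as-is without independent verification.

MICRO_TO_SEC = 1e-6  # microseconds to seconds

# (power_watts, latency_microseconds)
# GPU latency is estimated as ~3x the FPGA's, per the "one third" remark above.
platforms = {
    "CPU (sparse)": (38.0, 6017.3),
    "GPU (sparse)": (136.0, 82.7 * 3),   # estimated latency
    "FPGA":         (41.0, 82.7),
}

for name, (watts, latency_us) in platforms.items():
    energy_mj = watts * latency_us * MICRO_TO_SEC * 1e3  # millijoules per pass
    print(f"{name:14s}: {energy_mj:8.2f} mJ per pass")
```

Even though the FPGA's wattage is close to the sparse CPU's, its far lower latency makes its energy per pass the smallest of the three by a wide margin.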
FPGAs combine software programmability and flexibility with hardware parallelism and low latency, giving them clear advantages in time to market and cost.
As a result, in fields with frequent iteration and upgrades, such as artificial intelligence and 5G communications, the FPGA has become a relatively ideal solution.
According to industry research database data, the FPGA industry chain is relatively complex, and FPGA manufacturers hold strong bargaining power over both upstream and downstream enterprises. Upstream, the industry includes low-level algorithm design firms, wafer foundries, and suppliers of specialized materials and equipment; midstream, FPGA chip makers and packaging houses; and downstream, machine-vision vendors, automakers, communication service providers, and cloud data centers. Well-known domestic and foreign enterprises such as IBM, TSMC, Intel, Huawei, and Tencent participate across the chain. The leading midstream FPGA companies hold a monopolistic position in the global market and therefore have strong bargaining power over upstream software and hardware suppliers as well as downstream customers.
From the perspective of market structure, the global FPGA market is dominated by Xilinx and Altera, with market shares of roughly 52% and 35% respectively, followed by Lattice and Microsemi at about 5% each. Among Chinese FPGA manufacturers, Ziguang Guowei, Fudan Microelectronics, and Anlu Technology together accounted for over 15% of the market in 2021. Benefiting from the accelerating push for localization, Chinese FPGA manufacturers have enormous growth potential.
ASIC
If ARM's counterattack against the x86 architecture in the chip industry was driven by the rise of mobile devices, then in the current AI wave, AI chips, and in particular ASICs dedicated to deep learning, offer domestic players a chance at leapfrog development and a good opportunity to overtake on the curve.
In terms of competitive landscape, the traditional CPU field has Intel and Qualcomm, the GPU field has NVIDIA, and the FPGA field has Xilinx and Altera; only the ASIC field, the one most closely customized to AI computing, has yet to produce an absolutely dominant leader.
In terms of application scenarios, smartphones, wearable devices, and front-end security devices may all become the leading destinations for ASIC chip deployment.
Thanks to their low power consumption and high efficiency, AI chips, and ASICs in particular, are well suited to power- and space-constrained intelligent terminals such as smartphones, smart security cameras, smart home devices, and drones. These fields may become the first areas of expansion for ASIC chips.
The mainstream ASICs on the market today include TPU, NPU, VPU, and BPU chips, designed and produced by Google, Cambricon, Intel, and Horizon Robotics respectively.
Because ASIC development cycles are long, only large companies have the funding and capability to pursue them. At the same time, an ASIC is a fully customized chip that runs at peak efficiency in specific scenarios, so when the downstream market for a given scenario is large enough, mass-produced ASIC chips can generate substantial profits.
ASIC market pattern and representative manufacturers:
Source: Compilation of public information
AI Chip Market Landscape
According to Liftr Insights, in the North American data center AI chip market, which sits at the forefront of AI technology development, NVIDIA holds a market share of more than 80% and continues to lead in both training and inference. In the data center AI acceleration market, NVIDIA's share reached 82% in 2022, with AWS and Xilinx at 8% and 4% respectively, and AMD, Intel, and Google accounting for around 2%.
Domestic AI chips are represented by Cambricon's Siyuan series and Huawei's Ascend (Shengteng) series. Some Cambricon and Huawei Ascend AI chip products have reached a high level of performance, which is expected to accelerate domestic substitution and usher in a period of rapid development.
Local AI chip manufacturers in China:
Source: Guangfa Securities
Under the AI wave, AI-driven industries such as cloud computing, intelligent vehicles, and intelligent robots are developing rapidly, and market demand for AI chips keeps rising; the AI chip market will continue to grow.