

Edge Computing And AI


Many new services that use edge computing have low-latency requirements, so many new systems have adopted the latest industry interface standards, including PCIe 5.0, LPDDR5, DDR5, HBM2e, USB 3.2, CXL, PCIe-based NVMe, and other new-generation standard technologies. Compared with previous-generation products, these technologies all reduce latency by increasing bandwidth.
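As a rough illustration of how higher link bandwidth cuts data-movement latency, the short C sketch below compares the transfer time of a payload over nominal PCIe 4.0 x16 (~32 GB/s) and PCIe 5.0 x16 (~64 GB/s) links; the 64 MB payload size and the rounded bandwidth figures are illustrative assumptions, not figures from the article.

/* Back-of-the-envelope sketch: link bandwidth vs. data-movement time.
 * Bandwidth values are nominal per-direction figures; the payload size
 * is an arbitrary example. */
#include <stdio.h>

int main(void)
{
    const double payload_bytes = 64.0 * 1024 * 1024;  /* 64 MB tensor (example) */
    const double pcie4_bytes_per_s = 32e9;            /* PCIe 4.0 x16, ~32 GB/s */
    const double pcie5_bytes_per_s = 64e9;            /* PCIe 5.0 x16, ~64 GB/s */

    /* Transfer time = payload / bandwidth; doubling bandwidth halves it */
    printf("PCIe 4.0 x16: %.2f ms\n", payload_bytes / pcie4_bytes_per_s * 1e3);
    printf("PCIe 5.0 x16: %.2f ms\n", payload_bytes / pcie5_bytes_per_s * 1e3);
    return 0;
}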

 

These edge computing systems have also added AI acceleration functions. For example, some server chips extend the x86 instruction set with AVX-512 Vector Neural Network Instructions (AVX-512 VNNI) and other new instructions to provide AI acceleration.
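To make the VNNI idea concrete, here is a minimal C sketch of an INT8 dot product built on the AVX-512 VNNI intrinsic _mm512_dpbusd_epi32 (the vpdpbusd instruction); the function name and buffer layout are illustrative, and the sketch assumes the input length is a multiple of 64.

/* Minimal sketch of INT8 dot-product accumulation with AVX-512 VNNI.
 * Build with GCC/Clang: -mavx512f -mavx512vnni */
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Accumulate dot products of 64 unsigned-by-signed INT8 pairs per step.
 * n is assumed to be a multiple of 64 in this simplified example. */
int32_t dot_int8(const uint8_t *a, const int8_t *b, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i));
        __m512i vb = _mm512_loadu_si512((const void *)(b + i));
        /* vpdpbusd: multiply u8*s8 pairs, sum groups of 4, add into i32 lanes */
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    /* Horizontal sum of the 16 32-bit partial accumulators */
    return _mm512_reduce_add_epi32(acc);
}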

 

In addition, custom AI accelerators have been added to most new systems. These chips usually require the highest-bandwidth host-to-accelerator connections available. For example, in switched configurations with multiple AI accelerators, many systems use the PCIe 5.0 interface because bandwidth requirements directly affect latency.

 

In local gateways and aggregation server systems, a single AI accelerator usually cannot provide sufficient performance, so very high-bandwidth chip-to-chip SerDes PHYs are needed to scale out these accelerators. Newly released PHYs support 56G and 112G connections.
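For a back-of-the-envelope view of what such PHYs offer for scale-out, the sketch below aggregates 56G and 112G lanes into usable chip-to-chip bandwidth; the 8-lane count and the 0.8 efficiency factor for encoding and protocol overhead are illustrative assumptions, not figures from the article.

/* Aggregate chip-to-chip bandwidth from N SerDes lanes.
 * 56G / 112G are per-lane line rates; lane count and the 0.8
 * efficiency factor are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double line_rates_gbps[] = { 56.0, 112.0 };
    const int lanes = 8;               /* example lane count */
    const double efficiency = 0.8;     /* assumed usable fraction after overhead */

    for (int i = 0; i < 2; i++) {
        double gbytes_per_s = line_rates_gbps[i] * lanes * efficiency / 8.0;
        printf("%3.0fG x %d lanes -> ~%.0f GB/s usable\n",
               line_rates_gbps[i], lanes, gbytes_per_s);
    }
    return 0;
}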

 

AI algorithms are pushing the limits of memory capacity and bandwidth. For example, the latest BERT and GPT-2 models require 345M and 1.5B parameters, respectively. Meeting these requirements calls for high-capacity memory, especially since many complex applications must also run in the edge cloud; to achieve this, designers are adopting DDR5 in new chipsets. Beyond the capacity challenge, the coefficients of the AI algorithm must be accessed to perform large numbers of multiply-accumulate calculations in parallel and in a non-linear order. HBM2e has therefore also become a rapidly adopted technology, and some chips integrate several HBM2e instances on a single die.
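For a sense of scale, the following sketch estimates the memory needed just to hold those model weights at common numeric precisions; the parameter counts come from the figures above, while the FP32/FP16/INT8 byte sizes are standard values assumed for illustration.

/* Rough capacity sketch: bytes needed to store model weights alone.
 * Parameter counts per the article; bytes-per-parameter are standard
 * FP32/FP16/INT8 sizes, used here as assumptions. */
#include <stdio.h>

int main(void)
{
    const double params[] = { 345e6, 1.5e9 };            /* BERT, GPT-2 */
    const char  *names[]  = { "BERT", "GPT-2" };
    const double bytes_per_param[] = { 4.0, 2.0, 1.0 };  /* FP32, FP16, INT8 */
    const char  *prec[] = { "FP32", "FP16", "INT8" };

    for (int m = 0; m < 2; m++)
        for (int p = 0; p < 3; p++)
            printf("%-6s %-5s %6.2f GB\n", names[m], prec[p],
                   params[m] * bytes_per_param[p] / 1e9);
    return 0;
}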

 

In the future, edge computing needs will focus on reducing latency and power consumption while ensuring there is enough processing power for specific tasks. The new generation of server SoC solutions will not only offer lower latency and lower power consumption, but will also incorporate AI functions in the form of AI accelerators.

 

It is clear, however, that the demands of AI and edge computing are also changing rapidly; many of the solutions we see today have advanced several times over the past two years and will continue to improve.

