
New AI Chips Signal Changes in Data Center Design, Cooling



The Cerebras Wafer Scale Engine (WSE) is optimized for AI workloads and is the largest chip ever created. (Image: Cerebras Systems)


The rise of artificial intelligence is transforming the world of business.

Powerful new artificial intelligence (AI) hardware could change the design of data centers and how they are cooled. A number of startups debuted this week at the Hot Chips conference at Stanford University, unveiling custom AI silicon alongside new offerings from established chipmakers.

The most startling new design came from Cerebras Systems, which emerged from stealth mode with a chip that completely rethinks the form factor of data center computing. The Cerebras Wafer Scale Engine (WSE) is the largest chip ever made, nearly 9 inches wide. At 46,225 square millimeters, the WSE is 56 times larger than the largest GPU.
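A quick sanity check on that size claim, sketched in Python. The die areas are the figures reported in coverage of the announcement (the 815 mm² comparison die is the largest contemporary GPU); treat them as reported numbers, not measurements:

```python
# Rough check of the "56x larger" claim using reported die areas.
WSE_AREA_MM2 = 46_225       # Cerebras Wafer Scale Engine, ~8.5 inches per side
LARGEST_GPU_AREA_MM2 = 815  # largest contemporary GPU die, as reported

ratio = WSE_AREA_MM2 / LARGEST_GPU_AREA_MM2
print(f"WSE is {ratio:.1f}x the area of the largest GPU")  # ~56.7x
```

The ratio comes out just under 57, consistent with the "56 times larger" figure in the announcement.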

Why go bigger? Cerebras says size is "extremely important," and that its larger chips will process information more quickly, reducing the time it takes AI researchers to train algorithms for new tasks.

The Cerebras design offers a radical new take on the future of AI hardware. Its first products are not yet on the market, and analysts are keen to see performance testing that verifies Cerebras' claims about the chip's capabilities.

Cooling 15 Kilowatts Per Chip

If it succeeds, Cerebras will push the existing limits of high-density computing, a trend that is already creating both opportunities and challenges for data center operators. A single WSE contains 400,000 cores and 1.2 trillion transistors, and uses 15 kilowatts of power.

That bears repeating: one WSE uses 15 kilowatts. By comparison, a recent AFCOM survey found that users average 7.3 kilowatts per rack, and a rack can hold up to 40 servers. Hyperscale providers average about 10 to 12 kilowatts per rack.
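The comparison is striking enough to be worth working out. A minimal sketch using the figures above (the hyperscale number is taken as the midpoint of the reported 10 to 12 kW range):

```python
# Compare one WSE's power draw to typical rack densities cited in the article.
wse_kw = 15.0                        # one Wafer Scale Engine
avg_rack_kw = 7.3                    # AFCOM survey: average draw of an entire rack
hyperscale_rack_kw = (10 + 12) / 2   # midpoint of the 10-12 kW hyperscale range

print(f"One WSE draws {wse_kw / avg_rack_kw:.1f}x an average full rack")  # ~2.1x
print(f"One WSE draws {wse_kw / hyperscale_rack_kw:.1f}x a hyperscale rack")  # ~1.4x
```

In other words, a single chip draws roughly twice the power of an entire average rack of servers, which is why the cooling discussion below matters.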

The heat thrown off by the Cerebras chip will require a different approach to cooling, as well as to the server chassis. The WSE will be packaged as a server appliance that will include a liquid cooling system, reportedly featuring a cold plate fed by a series of pipes, with the chip positioned vertically in the chassis to better cool the entire surface of the huge chip.

A look at the manufacturing process for the Cerebras Wafer Scale Engine (WSE), which is manufactured at TSMC. (Image: Cerebras)

Most servers are designed for air cooling, and thus most data centers are as well. A broad transition to liquid cooling would require data center operators to bring water to the rack, often through a system of piping beneath a raised floor.
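Essentially all of a chip's electrical draw ends up as heat that the cooling plant must remove. A minimal sketch of the standard conversion from electrical load to heat load (1 kW ≈ 3,412 BTU/hr), applied to the WSE's draw:

```python
# Convert electrical load to heat load; virtually all power becomes heat.
BTU_PER_HR_PER_KW = 3412  # standard conversion: 1 kW = ~3,412 BTU/hr

def heat_load_btu_hr(kw: float) -> float:
    """Heat, in BTU/hr, that a cooling system must remove for a given load."""
    return kw * BTU_PER_HR_PER_KW

print(f"{heat_load_btu_hr(15.0):,.0f} BTU/hr")  # one 15 kW WSE: ~51,180 BTU/hr
```

That is a heat load comparable to a small residential furnace, concentrated on a single chip's surface, which helps explain the cold plate and vertical mounting described above.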

Google's decision to shift to liquid cooling for its latest artificial intelligence hardware raised expectations that others may follow. Alibaba and other Chinese hyperscale companies have also adopted liquid cooling.


"Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state of the art by solving decades-old technical challenges that limited chip size – such as cross-reticle connectivity, yield, power delivery and packaging," said Andrew Feldman, founder and CEO of Cerebras Systems. "Every architectural decision was made to optimize performance for AI work. The result is that, depending on workload, the Cerebras WSE delivers hundreds or thousands of times the performance of existing solutions at a small fraction of the power draw and space."

Data center observers know Feldman as the founder and CEO of SeaMicro, an innovative server startup that packed more than 750 low-power Intel Atom chips into a single server chassis.

Much of SeaMicro's secret sauce was in the network fabric that tied those cores together. So it's no surprise that Cerebras has an interprocessor fabric, called Swarm, that combines massive bandwidth with low latency. The company's investors include two networking pioneers, Andy Bechtolsheim and Nick McKeown.

For a deeper dive into Cerebras and its technology, see additional coverage in Fortune, TechCrunch, The New York Times and Wired.

New Form Factors Bring More Density, Cooling Challenges

We've been tracking advances in rack density and liquid cooling for years at Data Center Frontier as part of our focus on emerging technologies and how they can transform the data center. New hardware for AI workloads concentrates more computing power in each piece of equipment, increasing power density – the amount of electricity used by servers and storage in a rack or cabinet – and the heat that comes with it.

Cerebras is one of a group of startups building AI chips and hardware. The arrival of startup silicon in the AI computing market follows several years of intense competition between market leader Intel Corp. and rivals including NVIDIA, AMD, and several players using ARM technology. Intel continues to hold a dominant position in enterprise computing, but the development of powerful new workload-optimized hardware has been a major trend in the HPC sector.

This will not be the first time the data center market has had to absorb new form factors and higher densities. The introduction of blade servers packed dozens of server boards into each chassis, producing higher heat loads that many data center managers struggled to manage. The rise of the Open Compute Project also introduced new standards, including a 21-inch rack slightly wider than the traditional 19-inch rack.

There is also the question of whether the rise of powerful AI hardware, which compresses more processing power into less space, will prompt facilities to be designed or retrofitted for liquid cooling, or whether high-density deployments will be dispersed across existing facilities to distribute their impact on power and cooling infrastructure.

For further reading, here are some articles summarizing the key issues in the development of high-density hardware and how the data center industry is adapting:

