
Microsoft is sending a new kind of AI processor to the cloud

Microsoft became dominant in the '80s and '90s thanks to the success of its Windows operating system running on Intel processors, a cozy pairing dubbed Wintel.

Now Microsoft hopes that another hardware combination will help it recapture that success – and catch up with rivals Amazon and Google in the race to deliver cutting-edge artificial intelligence through the cloud.

Microsoft hopes to expand the popularity of its Azure cloud platform with a new kind of computer chip designed for the age of AI. As of today, Microsoft is giving Azure customers access to chips made by British startup Graphcore.

Graphcore, founded in Bristol, UK, in 2016, has attracted considerable attention among AI researchers – and several hundred million dollars in investment – on the promise that its chips will accelerate the computations needed for AI work. Until now, it had not made the chips publicly available or shown the results of trials involving early testers.

Microsoft, which invested its own money in Graphcore last December as part of a $200 million funding round, is looking for hardware that will make its cloud services more attractive to the growing number of customers building AI applications.

Unlike most chips used for AI, Graphcore's processors were designed from scratch to support the computations that help machines recognize faces, understand speech, parse language, drive cars, and train robots. Graphcore expects to appeal to companies running business-critical AI operations, such as self-driving car startups, trading firms, and companies that process large quantities of video and audio. Those working on next-generation AI algorithms may also be eager to explore the platform's advantages.

Microsoft and Graphcore today published benchmarks suggesting that the chip matches or exceeds the performance of the top AI chips from Nvidia and Google when running algorithms written for those rival platforms. Code written specifically for Graphcore's hardware can be even more efficient.

The companies claim that certain image-processing tasks, for example, run many times faster on Graphcore's chips than on its competitors' using existing code. They also say they were able to train a popular AI language model called BERT at speeds matching any other existing hardware.

BERT has become extremely important for AI applications involving language. Google recently stated that it was using BERT to power its core search business. Microsoft says it now uses Graphcore's chips for internal AI research projects involving natural language processing.

Karl Freund, who tracks the AI chip market at Moor Insights, says the results show the chip is cutting-edge yet still flexible. A highly specialized chip might outperform Nvidia's or Google's, but it would not be programmable enough for engineers to develop new applications. "They did a good job of making it programmable," he says. "Good performance in both training and inference is something they've always said they would do, but it's really difficult."

Freund adds that the Microsoft deal is crucial to Graphcore's business because it gives customers an on-ramp for trying the new hardware. The chip may be better than existing hardware for some applications, but it takes a lot of effort to rework AI code for a new platform. With a few exceptions, Freund says, the chip's performance is not compelling enough to lure companies and researchers away from the hardware and software they are already comfortable using.

Graphcore has created a software framework called Poplar that allows existing AI programs to be ported to its hardware. Many existing algorithms, however, may still be better suited to software that runs on rival hardware. Google's TensorFlow framework has effectively become the standard for AI programs in recent years, and it was written specifically for Nvidia and Google chips. Nvidia is also expected to release a new AI chip next year, which is likely to perform better.


Nigel Toon, co-founder and CEO of Graphcore, says the two companies began working together a year after his company's launch, through Microsoft Research Cambridge in the UK. His company's chips are particularly well suited to tasks involving very large AI models or time-series data, he says. He estimates that one customer in finance saw a 26-fold performance increase in an algorithm used to analyze market data thanks to Graphcore's hardware.

A handful of other, smaller companies also announced today that they are working with Graphcore's chips through Azure. These include Citadel, which will use the chips to analyze financial data, and Qwant, a European search engine that wants the hardware to run an image-recognition algorithm known as ResNeXt.

The AI boom has already shaken up the computer chip market in recent years. The best algorithms perform parallel mathematical calculations, which can be executed more efficiently on graphics chips (GPUs), which have hundreds of simple processing cores, than on conventional processors (CPUs), which have a few complex ones.
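To make the point concrete, here is a minimal sketch (in Python with NumPy, not tied to any vendor's hardware) of the kind of math described above: a neural-network layer is a large matrix multiply plus an elementwise nonlinearity, and every output element can be computed independently, which is exactly what many-core chips exploit.

```python
import numpy as np

# A dense layer's forward pass: thousands of independent multiply-adds,
# followed by an elementwise ReLU. Each output element depends only on
# one row of x and one column of w, so hardware with many simple cores
# (GPUs, TPUs, IPUs) can compute them all in parallel.

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 128))    # batch of 64 inputs, 128 features
w = rng.standard_normal((128, 256))   # weights for a 256-unit layer
b = np.zeros(256)                     # biases

def forward(x, w, b):
    """One dense layer: matrix multiply, bias add, ReLU."""
    return np.maximum(x @ w + b, 0.0)

out = forward(x, w, b)
```

On a CPU this work is serialized across a few complex cores; on a many-core accelerator, each of the 64 × 256 outputs can be assigned to its own lane of execution.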

GPU maker Nvidia has ridden the AI wave to riches, and Google announced in 2017 that it would develop its own chip, the Tensor Processing Unit, which is architecturally similar to a GPU but optimized for TensorFlow.

Graphcore's chips, which it calls Intelligence Processing Units (IPUs), have many more cores than GPUs or TPUs. They also hold memory on the chip itself, which eliminates the bottleneck of moving data onto a chip for processing and off again.
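The cost of that data movement can be illustrated with a toy analogy (pure Python/NumPy, not actual IPU code): one version copies the data before every operation, mimicking per-step traffic between host memory and a chip, while the other operates on data that stays put. Both compute the same answer; only the movement differs.

```python
import numpy as np

# Toy analogy only: the copy stands in for host<->chip data transfer.
data = np.random.default_rng(1).standard_normal(1_000_000)
steps = 50

def off_chip(data, steps):
    """Pay a simulated transfer cost (a copy) on every step."""
    total = 0.0
    for _ in range(steps):
        local = data.copy()          # data shuttled onto the "chip"
        total += float(local.sum())  # then computed on
    return total

def on_chip(data, steps):
    """Data lives next to the compute; no per-step transfer."""
    total = 0.0
    for _ in range(steps):
        total += float(data.sum())
    return total
```

The results are identical, but the first version's runtime grows with the amount of data moved per step, which is the overhead on-chip memory is designed to remove.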

Facebook is also working on its own AI chips. Previously, Microsoft had been promoting reconfigurable chips made by Intel and customized by its engineers for AI applications. A year ago, Amazon revealed that it, too, was developing a chip: a more general-purpose processor optimized for Amazon's cloud services.

More recently, the AI boom has spawned a flurry of hardware startups developing more specialized chips. Some are optimized for specific applications, such as autonomous driving or surveillance cameras. Graphcore and several others offer far more flexible chips, which are crucial for AI development but also much more challenging to produce. The company's latest investment round valued it at $1.7 billion.

Graphcore's chips may find early traction with top AI experts who are able to write the code needed to reap their benefits. Several prominent AI researchers have invested in Graphcore, including Demis Hassabis, co-founder of DeepMind; Zoubin Ghahramani, a professor at the University of Cambridge and head of Uber's AI lab; and Pieter Abbeel, a professor at UC Berkeley who specializes in AI and robotics. In an interview with WIRED last December, AI pioneer Geoffrey Hinton discussed the potential of Graphcore's chips to advance fundamental research.

Before long, companies may be tempted to try the latest thing. As Graphcore CEO Toon puts it, "Everyone is trying to innovate, trying to find an advantage."

This story originally appeared on wired.com.

Listing image by Graphcore
