
“We’re giddy” – an interview with Apple about its Mac silicon revolution



A graphic representing the Apple M1 chip, presented by Apple at an event earlier this month.

Some time ago, a group of engineers gathered in a building on an Apple campus. Isolated from the rest of the company, they took the guts of old MacBook Air laptops and connected them to their own prototype boards, building the first machines to run macOS on Apple’s own custom, ARM-based silicon.

To hear Apple’s Craig Federighi tell the story, it sounds a bit like a callback to Steve Wozniak in a Silicon Valley garage all those years ago. And this week, Apple finally took the big step those engineers were preparing for: the company launched the first Macs running Apple Silicon, beginning the Mac product line’s transition away from Intel processors, which have been the industry standard for desktop and laptop computers for decades.

In a conversation shortly after the M1 announcement with Apple’s SVP of software engineering Craig Federighi, SVP of worldwide marketing Greg Joswiak, and SVP of hardware technologies Johny Srouji, we learned that – not surprisingly – Apple had been planning this change for many, many years.

Ars talked at length with these executives about the architecture of the first Apple Silicon chip for the Mac (the Apple M1). And although we had to ask a few questions about recent software-support issues, we went in with one big question on our minds: what are the reasons behind Apple’s radical change?

Why? And why now?

We started with that big question: “Why? And why now?” We got a very Apple response from Federighi:

The Mac is the soul of Apple. I mean, the Mac is what brought many of us into computing. And the Mac is what brought many of us to Apple. And the Mac remains the tool we all use to do our jobs, to do everything we do here at Apple. And so being able to … apply everything we’ve learned to the systems that underlie the way we live our lives is obviously a long-term ambition and a kind of dream come true.

“We want to create the best products we can,” added Srouji. “We really needed our own silicon to deliver truly the best Macs we could deliver.”

Apple started using Intel x86 processors in 2006, after it became clear that PowerPC (the previous architecture for Mac processors) was reaching the end of the road. For the first few years, those Intel chips were a huge boon for the Mac: they allowed interoperability with Windows and other platforms, making the Mac a much more flexible computer. They let Apple focus more on increasingly popular laptops in addition to desktops. They also made the Mac more popular in general, alongside the runaway success of the iPod and, soon after, the iPhone.

And for a long time, Intel’s performance was first class. But in recent years, Intel’s processor roadmap has been less reliable, both in terms of performance and consistency. Mac users have noticed. But all three men we spoke to insisted that this was not the driving force behind the change.

“This is about what we could do, right?” said Joswiak. “Not about what someone else can or can’t do.”

“Every company has an agenda,” he continued. “The software company wishes the hardware company would do this; the hardware company wishes the OS company would do that. But they have competing agendas. And that’s not the case here. We had one agenda.”

When the decision was finally made, the circle of people who knew about it was initially quite small. “But the people who knew were walking around smiling from the moment we said we were heading down this path,” Federighi recalled.

Srouji described Apple as being in a special position to succeed: “As you know, we don’t design chips as merchants, as vendors, or as generic solutions – which allows really tight integration with the software and the system, and the product is exactly what we need.”

Our virtual session included Greg “Joz” Joswiak (senior vice president, worldwide marketing), Craig Federighi (senior vice president, software engineering), and Johny Srouji (senior vice president, hardware technologies).

Aurich Lawson / Apple

Designing the M1

What Apple needed was a chip that took the lessons learned from years of refining mobile systems-on-a-chip for the iPhone, iPad, and other products, then added all sorts of extra features to meet the advanced needs of a laptop or desktop computer.

“During the pre-silicon phase, when we were still designing the architecture and defining the features,” Srouji recalled, “Craig and I would sit in the same room and say, ‘OK, here’s what we want to design. Here are the things that matter.’”

When Apple first announced plans to ship the first Apple Silicon Mac this year, onlookers speculated that the iPad Pro’s A12X or A12Z chips were the template, and that the new Mac chip would be something like an A14X – a beefed-up version of the chips that shipped in the iPhone 12 this year.

Not exactly so, said Federighi:

The M1 is essentially a superset, if you want to think of it relative to the A14. Because when we set out to build a chip for the Mac, there were many differences from what we would otherwise have had in a corresponding, say, A14X or something like that.

We did a great deal of analysis of Mac application workloads: the types of graphics/GPU capabilities needed to run a typical Mac workload, the types of texture formats needed, support for different kinds of GPU compute, and the things that were available on the Mac … even the number of cores, the ability to drive Mac-sized displays, virtualization support, and Thunderbolt.

There are many, many capabilities we designed into the M1 that are requirements for the Mac, but they are all superset features relative to what an app compiled for the iPhone expects.

Srouji expanded on this:

The foundation of many of the IPs we have built – which became the foundation for the M1 to build on – began more than a decade ago. As you may know, we started with our own CPU, then graphics, the ISP, and the Neural Engine.

So we built these great technologies over a decade, and then several years ago we said, “Now it’s time to use what we call a scalable architecture.” Because we had the foundation of these great IPs, and the architecture was scalable with UMA.

Then we said, “Now it’s time to build a custom chip for the Mac,” which is the M1. It’s not an iPhone chip on steroids. It’s a completely different custom chip, but it uses the foundation of many of these great IPs.

Unified memory architecture

UMA stands for “unified memory architecture.” When potential customers look at the M1’s benchmarks and wonder how a relatively low-power mobile chip is capable of that kind of performance, Apple cites UMA as a key ingredient in that success.

Federighi argued that “modern computational or graphics rendering pipelines” have evolved to become a “hybrid” of GPU compute, GPU rendering, image signal processing, and more.

UMA essentially means that all components – central processing unit (CPU), graphics processor (GPU), neural processor (NPU), image signal processor (ISP), etc. – share a pool of very fast memory, positioned very close to all of them. This is in contrast to the general desktop paradigm of, say, allocating one memory pool to the CPU and another to the GPU on the other side of the board.
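To make that contrast concrete, here is a minimal Swift/Metal sketch – our illustration, not code from Apple – of what sharing one pool looks like in practice, assuming a Metal-capable Mac. The double_values kernel and the surrounding scaffolding are invented for the example; the relevant detail is .storageModeShared, which asks Metal for a buffer the CPU and GPU address directly, so no staging copy is ever made:

    import Metal

    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device available")
    }

    // One allocation, visible to CPU and GPU alike. On Apple silicon this is
    // literally the same physical memory pool.
    var input: [Float] = [1, 2, 3, 4]
    let buffer = device.makeBuffer(bytes: &input,
                                   length: input.count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!

    // A trivial, hypothetical compute kernel that doubles each element in place.
    let source = """
    #include <metal_stdlib>
    using namespace metal;
    kernel void double_values(device float *data [[buffer(0)]],
                              uint id [[thread_position_in_grid]]) {
        data[id] *= 2.0;
    }
    """
    let library = try! device.makeLibrary(source: source, options: nil)
    let pipeline = try! device.makeComputePipelineState(
        function: library.makeFunction(name: "double_values")!)

    let queue = device.makeCommandQueue()!
    let commands = queue.makeCommandBuffer()!
    let encoder = commands.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(buffer, offset: 0, index: 0)
    encoder.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: input.count, height: 1, depth: 1))
    encoder.endEncoding()
    commands.commit()
    commands.waitUntilCompleted()

    // The CPU reads the GPU's results from the very same memory: no blit,
    // no format conversion, no second copy of the data.
    let results = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
    print((0..<input.count).map { results[$0] })  // [2.0, 4.0, 6.0, 8.0]

Notably, Metal also offers a managed storage mode for Macs with discrete GPUs, precisely because CPU and GPU memory are separate pools there; the M1’s UMA makes the shared, zero-copy mode the natural default.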

A slide Apple used to present the M1’s unified memory architecture at this year’s event.

Samuel Axon

When users run demanding, versatile applications, traditional pipelines can end up wasting a lot of time and efficiency moving or copying data around to be accessible by all of these different processors. Federighi suggested that Apple’s success with the M1 was due in part to the rejection of this inefficient paradigm at both the hardware and software levels:

Not only did we get a big advantage out of the raw performance of our GPU, but just as important was the fact that, with the unified memory architecture, we weren’t constantly moving data back and forth and changing formats, which slows things down. And we got a huge increase in performance.

So I think the workloads of the past – where it’s like, come up with the triangles you want to draw, ship them off to the discrete GPU, let it do its thing, and never look back – that’s not what a modern computer rendering pipeline looks like today. These things move back and forth between many different execution units to accomplish these effects.
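Rough arithmetic shows why those hand-offs matter. The sketch below is our own back-of-the-envelope illustration (the stage count and frame size are invented, and real pipelines are far subtler): it simply counts the bus traffic a copy-per-hand-off model generates for a single 4K frame versus a model in which every unit works in the same pool:

    // Hypothetical numbers, for illustration only.
    let frameBytes = 3_840 * 2_160 * 4   // one 4K RGBA8 frame, in bytes

    // Copy-based model: each of three execution units (say GPU compute,
    // GPU render, ISP) receives a copy of the frame and copies its result back.
    let handoffs = 3
    let copyTraffic = handoffs * frameBytes * 2

    // Unified model: the frame is written once; every unit then reads and
    // updates it in place.
    let unifiedTraffic = frameBytes

    print("copy-based traffic: \(copyTraffic / 1_000_000) MB per frame")    // ~199 MB
    print("unified traffic:    \(unifiedTraffic / 1_000_000) MB per frame") // ~33 MB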

This is not the only optimization. For several years, Apple’s graphics APIs have used “tile-based deferred rendering,” which the M1’s GPU is designed to take full advantage of. Federighi explained:

Where old-school GPUs would operate on the whole frame at once, we operate on tiles that we can move into extremely fast on-chip memory, and then perform a huge sequence of operations with all the different execution units on that tile. It’s incredibly bandwidth-efficient in a way that these discrete GPUs are not. And then you just combine that with the huge width of our pipeline to RAM and the other efficiencies of the chip, and it’s a better architecture.
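To see why tiling is so bandwidth-friendly, consider a toy, CPU-only version of the idea – our analogy, not Apple’s GPU implementation. An immediate-mode renderer would stream the whole framebuffer through RAM once per pass; the tiler below pulls one small tile into fast local storage (a small array standing in for on-chip tile memory), runs every pass on it there, and writes it back to RAM exactly once:

    // Toy stand-ins for rendering passes; each maps a pixel value to a new one.
    let tileSize = 32
    let passes: [(Float) -> Float] = [
        { $0 + 0.5 },    // "geometry" pass (stand-in)
        { $0 * 0.8 },    // "lighting" pass (stand-in)
        { min($0, 1.0) } // "tone-mapping" pass (stand-in)
    ]

    let width = 256, height = 256
    var framebuffer = [Float](repeating: 0, count: width * height)  // lives in "RAM"

    for tileY in stride(from: 0, to: height, by: tileSize) {
        for tileX in stride(from: 0, to: width, by: tileSize) {
            // 1. Load one tile into fast local memory.
            var tile = [Float](repeating: 0, count: tileSize * tileSize)
            for y in 0..<tileSize {
                for x in 0..<tileSize {
                    tile[y * tileSize + x] = framebuffer[(tileY + y) * width + (tileX + x)]
                }
            }
            // 2. Run the entire sequence of passes while the tile stays local;
            //    intermediate results never touch main memory.
            for pass in passes {
                for i in tile.indices { tile[i] = pass(tile[i]) }
            }
            // 3. Write the finished tile back to main memory exactly once.
            for y in 0..<tileSize {
                for x in 0..<tileSize {
                    framebuffer[(tileY + y) * width + (tileX + x)] = tile[y * tileSize + x]
                }
            }
        }
    }
    print("Each pixel crossed the RAM boundary twice in total, not twice per pass.")

With three passes and 32×32 tiles, the full frame crosses the RAM boundary once on the way in and once on the way out, rather than once per pass – the bandwidth property Federighi describes.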

