Windows on ARM

No, they use the vCPU terminology because hyper-threading/simultaneous multithreading (SMT) exists on chip designs and platforms other than Apple's that they cater to in their other products, especially RAS. So far, Apple has not used SMT in its CPU designs, so since thread count equals core count on Apple chips, vCPU count equals core count here. But should Apple implement SMT in the future, know that vCPU count equals thread count.

1 Like

I forgot that they support other chipsets than Apple Silicon in that SKU. :doh:

Well, technically you can run VMs with more vCPUs than the host has threads, but that is overprovisioning and strongly discouraged, since the hypervisor must time-slice the guests' vCPUs against each other and the VMs end up stalling while waiting for one another's threads to complete.
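To put that in concrete terms, here is a minimal sketch of the clamping logic a sensible VM setup script might apply. The helper name `safe_vcpu_count` is hypothetical, and it assumes Python's `os.cpu_count()` reports the host's logical CPUs (thread count on SMT hosts, core count on Apple silicon):

```python
import os

def safe_vcpu_count(requested: int) -> int:
    """Cap a VM's vCPU request at the host's logical CPU count.

    On Apple silicon (no SMT) logical CPUs == physical cores, so this
    is also the core count; on SMT-capable hosts it is the thread count.
    """
    host_threads = os.cpu_count() or 1
    # Overprovisioning (requested > host_threads) forces the hypervisor
    # to time-slice guest vCPUs against each other, so clamp instead.
    return min(max(requested, 1), host_threads)
```

Hypervisor front ends generally let you ask for any number, so a guard like this is on the caller to enforce.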

1 Like

Ok, this is already looking grim then. One has to surmise that QC's repeated delays in sampling the chips to customers are because they are struggling to hit those lofty targets?


At Qualcomm’s investor conference in November 2021, Dr. James Thompson, chief technology officer at Qualcomm, described the current Nuvia roadmap at that time. “They’re pretty far along at this point,” Thompson said, talking about the first Snapdragon processors featuring Nuvia technology. “We’ll be sampling a product nine months from now, or something like that.”

If Thompson’s timeframe was accurate, that would have put the sampling period at around August 2022, with a shipping product scheduled for sometime in 2023.

They’ve now delayed it into 2024, but (if we take the hopeful view :wink: ) could it be that the delays actually come from the desktop side of development?

(Windows Central)

A new report from Kuba Wojciechowski, aka @Za-Raczke on Twitter, claims that one of Qualcomm’s next-gen Nuvia “Phoenix” designs for 2024 is targeting desktop use. Codenamed “Hamoa,” the chip is claimed to feature 12 “in-house” cores made up of eight performance cores and four efficiency ones…

Qualcomm’s working on a 2024 desktop chip codename “Hamoa” with up to 12 (8P+4E) in-house cores (based on the Nuvia Phoenix design), similar mem/cache config as M1, explicit support for dGPUs and performance that is “extremely promising”, according to my sources.November 6, 2022

dGPU support sounds incredible, but I just hope they aren’t letting all these architectural features get in the way of simply delivering solid M1-class performance on mobile.

Seems like there’s lots of promising going on - I’m with @Dellaster - show me the money…

Oh ye of little faith, @dstrauss and @Dellaster. Here is a hint. Ian Cutress at AnandTech already gave us the answer key to this engineering mystery three years ago that most glossed over. In 2020, it was revealed that one of NUVIA’s design objectives was to hit a Geekbench 5 single-threaded score of 2200 at a mere 3W. Get the picture yet? :go_sonic:

I wouldn’t get too worried about how long ago early samples were seeded. When the first-generation “cornerstone” releases of a microarchitecture family are being laid, samples are out in the wild as early as 1-2 years before the product is released to the public. A similar pattern was seen with AMD’s first-generation Ryzen, or Zen 1.

1 Like

@Hifihedgehog I’m glad to see you are obviously on board with ARM and/or Qualcomm. I have to ask, though: what happened to your first love, AMD? :slight_smile: :smile:

I have doubts because they have promised things in the past that didn’t measure up in reality. And back then there were tech journalists on board, too. So, again, I am in the “show me” camp. Put it in a real device that’s for sale, with independent individuals doing real-world tests.

But that aside, what does it even matter to a consumer if the predictions are true? A consumer like me can’t do anything with those predictions, even if they’re absolutely accurate, until a device is actually for sale. That is, unless I want to buy stock? Is this a way for them to pump up the stock price and reassure investors?

Even as an Apple Silicon owner it doesn’t affect me in any way. It can be the most amazing speediest chipset ever imagined but it won’t make my Mac mini run any slower, right? And I’m happy with it.

So I don’t care. I’ll be interested when it comes out but until then I don’t care.

Nothing. :slight_smile: I love a company as long as they are on track to deliver; in other words, I love good products. AMD is still doing an admirable job in server and desktop, and you can clearly see from my signature that I hold them in high regard, based on my current desktop configuration. Intel is getting close in performance per watt with 12th and 13th Gen, and to their credit, they have also been chasing, and achieving, double-digit IPC gains (finally!). However, their power draw is still a bit of a throttling mess unless I were to step up to a 360mm AIO cooler, which is absolutely ridiculous and not on my flight plan.

I like ARM, as I have stated here on occasion before; in fact, one of my college professors was hired as a senior principal engineer at ARM to do microarchitecture performance evaluation. He was particularly interested in seeing ARM eventually scale up into the mainstream desktop and server space. That, I imagine, is just several years away from coming to pass. What I cannot stand is the Tim Cook-era late-2010s and now 2020s Apple that is devoid of drive and passion. “Iterate, iterate, iterate, gosh darn it, iterate, but do not do anything to upset the winning formula that Steve Jobs handed us.”

If you were selling lemonade at a lemonade stand, that accountant-and-supply-chain battle plan might work, but their product is computers, not fresh-squeezed fruit juice. My beef is that Apple is detached from the rest of the ARM world, all but content to gorge on the fruits of early teams’ labors, and isn’t interested in chasing the double-digit IPC breakthroughs of the Jim Keller days. So we get these ho-hum annual <10% IPC gains from Apple while the rest of the ARM and x86 world is delivering double-digit annual IPC gains. While Apple coasts (because, let’s face it, A16 and M2 are their Kaby Lake/Skylake Part 2 to A15 and M1), the rest of the world’s CPU architects are running double, triple, or even quadruple time and will soon pass them.

Yesterday, @dstrauss reminded me just how little the public realizes how much, and how fast, the gap is narrowing between Apple and the rest of the processor world. Apple’s lead was as much as 3 or even 4 generations just a few years ago, but as of last year it is down to 1-2 generations, with ARM and others poised to close it entirely on the strength of their double-digit gains. You can only sit on your hands for so long. You can only live lavishly off of previous sweeping victories for so long before the performance mountain you rest on comes tumbling down.

I am absolutely with you on this point. We have to live in the present and use the tools that work best today for what we want and need to do today. Understand that I am not speaking about now, or about what you are buying and using today. Is it enough? Does it do what you want well enough? If yes, then my description casts no shadow on what you have, nor is it advice to abandon what works best for your needs, preferences, and outlook.

I am describing what can be accomplished when more and better is on someone’s radar, for someone who feels sufficiently compelled, once it ships, to move on from what they already have. I am speaking of planning for tomorrow, for what is around the corner, for what lies ahead. Speaking of today: if an Apple MacPad came out today that allowed sideloading, would I buy it? Absolutely, in a heartbeat. It would be the best there is today, because it would sufficiently cover both the performance and productivity sides of the computing coin for me.

Will that happen? Not a chance. So I am content to stay in the Windows corner of computing. I held off on the SQ3 Surface Pro 9 because, while good, everything points to the WOA “wow” moment happening in a NUVIA-based SQ4 Surface Pro 10. Speaking for myself, I am content with my current daily driver (a Surface Pro 8), because I have every indication that the leap from the SQ3 Surface Pro 9 to the SQ4 Surface Pro 10 will be significantly, even earth-shatteringly, larger than the Pro X to Pro 9 transition.

Looks like you might be right. I just found this article claiming a 10" prototype with Hamoa:

(GSM Arena)

Qualcomm is apparently testing its Snapdragon 8cx Gen 4 chipset, aka SC8380, inside a development device with a 10” display…

The Snapdragon 8cx Gen 4 (code name Hamoa) will have 12 CPU cores, 8 performance (up to 3.4GHz) and 4 efficiency cores (up to 2.5GHz). There will be a built-in Adreno 740 GPU, which is fairly capable, though the larger form factors like laptops can also be equipped with external GPUs (connected over 8x PCIe 4.0), as long as they have the cooling for it.

Though the leaks about Hamoa really seem all over the place. Is it a desktop chip or a laptop chip? Or is it an architecture that covers both? But then I thought “Phoenix” was the name of the Nuvia-based architecture. (And why come up with “Phoenix” if you are going to keep using the Snapdragon 8cx moniker?)

At any rate, I gotta say sonic’s enthusiasm is really rubbing off on me. All the naysayers need to put in more effort, like sonic. :stuck_out_tongue:

1 Like

So much to unpack with this thread.

Somewhat of a side note regarding ARM: an odd positive thing I’ve noticed.

So at my day job I have access to a Surface Hub 2S. Putting Photoshop/Clip Studio on that is still a dream I intend to make happen one day (migrating to full Windows 10/11 on a Hub is more complicated than a simple image), but a workaround I tried was connecting the Hub as a second monitor of sorts to another device. This can be done over wireless or by a simple USB-C connection.

So a few months back, I connected it to my Pro 8… and it was abysmal. The latency and pen lag made it effectively unusable. I could certainly demo a PowerPoint, but any kind of drawing was so lagged it was dreadful. This was the case both wired and wireless, with the wired tests using multiple USB-C and Thunderbolt cables.

I connected it to a few other devices, a Pro X (SQ2) and a Laptop 4… just as bad, if not worse.

So, in trying to demonstrate this horrible latency to a co-worker from a different department, I didn’t have the Pro 8/X/Laptop 4 on hand, but I did have a Pro 9 5G. Since the SQ2 Pro X performed terribly, I was all but certain this was going to be nothing but a horror show.

Imagine my shock and utter surprise when the USB-C-connected Pro 9 5G had significantly less latency/lag when connected to the Hub. There was still a bit of lag, but compared to the Pro 8 it was like night and day. I even repeated the test with the same cables the next day; no matter what, the Pro 8 was a disaster and the Pro 9 5G was actually somewhat usable.

Since the Pro X and Laptop 4 performed just as badly as the Pro 8, it can’t be the Pro 8’s Thunderbolt messing things up.

I don’t know if it has something to do with the SQ3, or the Neural Engine… but they certainly did something right. Granted, it’s a very specific use case.


Well, I’m sure that Qualcomm or Microsoft would never overstate or cherry-pick benchmarks to make their shiny new thing look its best… :laughing:

And you yourself brought up the reminder of what Qualcomm and MS were touting prior to the release of the SQ1…

But again, I REALLY want/hope the hype to be true. The market needs more options, and I think Intel needs a genuine threat again to get them to truly innovate, similar to how AMD’s pressure spurred Intel to bring forth the Core i series.

PS: Note that I didn’t even bring Intel’s Lakefield into this… :smile:

1 Like

I don’t think most people cared or even noticed tbh, other than perhaps battery life. Especially with the status symbol of Apple products being so strong.

1 Like

Here is a handy-dandy decoder ring. Phoenix is the codename of the CPU core microarchitecture IP. Hamoa is the codename for the chip. Given its power and performance characteristics, it will target Windows tablets and laptops but can also be used in thin-and-light desktops akin to the Windows Dev Kit 2023.

I also shared this tidbit:

Something the leaker did not discuss in depth bears mentioning here. The integrated GPU is the Adreno 740, the same as in the Snapdragon 8 Gen 2 used in last year’s Qualcomm-based flagship smartphones. The Adreno 740 performs on par with AMD’s RDNA2-based Radeon 680M integrated graphics. That means a doubling in graphics performance over the SQ3/Snapdragon 8cx Gen 3’s Adreno 690. Can you feel the excitement yet?

A 2024 WOA “wow” moment is coming, folks!

1 Like

I just hope it isn’t a “wow” of disappointment… :slight_smile: Which was definitely the reaction to the Pro X.

Well, if we need a tangible, we can easily bank on the graphics being vastly improved. The performance difference between the Adreno 740 (in the NUVIA-based SQ4/Snapdragon 8cx Gen 4) and the Adreno 690 (SQ3/Snapdragon 8cx Gen 3) is a third-party-verified known quantity, thanks to the Snapdragon 8 Gen 2 (also using the Adreno 740) being a widely used and well-understood chip now. In desktop GPU terms, that puts the first-gen NUVIA chip squarely between a GTX 1050 Ti and a GTX 1060 in relative graphics performance, very possibly leaning closer to the GTX 1060 in this application, since the Adreno 740 in a tablet or laptop chip will not be as power- and thermally limited as it is in a smartphone.

To be clear, I’m on board with what you are talking about, and, at least as importantly, MS has consistently been tweaking and improving Windows 11 WOA performance with nearly every Patch Tuesday, even if they don’t always point it out.

I do wonder about the third-party app market, though, which IMHO will make or break whether WOA grows beyond what is right now a small niche in the overall market. Even a big company like Adobe, after talking a big game and releasing a partial group of semi-native, semi-optimized versions, hasn’t seen fit to do much since, other than a handful of security patches.

Of course, Adobe seems to be most interested in their own subscription web apps right now and catering to their long term Mac customers.

Perhaps @Marty is right and I’m more cynical than I realize…

BTW: If I were MS/Qualcomm, I would be doing almost anything and everything to get Procreate native on WOA. Not because it’s the best app of its type per se, but because it has huge mindshare. I’ve mentioned in other threads how many iPad/iPad Pro sales I’ve seen driven by Procreate.

1 Like

That makes sense, but then why do some leaks (like Windows Central’s) say Hamoa targets the desktop, while others say mobile/tablet? Is it a scalable power limit that lets it go from thin-and-light all the way to full-on workstations (e.g. with a dGPU attached)?

If that’s the case, they will have surpassed Apple M1, at least architecturally.

Actually, I think you’re secretly an optimist. Considering you got personally burned by the SPX, the fact that you’re still ‘skeptically hopeful’ says a lot. :wink:

Why not join the hype train one more time? It’s taking off at sonic speed. :go_sonic: