Benchmarks; Comparisons; BS

RANT WARNING…

Just stumbled across Gordon Ung’s PCWorld comparison of the latest 12th Gen Intel i9 and the M1 Max. Some of it is interesting (as you all know, I did 10 days with the plain M1 Pro MacBook Pro 14 and found it NOT to be my cup of tea), but these “benchmark battles” fry me to no end, beginning with Ung’s opening caveat:

" Since we don’t have access to an Apple MacBook Pro 16 ourselves, we tapped results published by other reviews, such as Anandtech, and also the user-generated results from the benchmark companies.

For what it’s worth, the MSI GE76 Raider that we used to test the Core i9 weighs more than 6 pounds (not counting its large 280-watt power brick) and is primarily aimed at gamers who love enthusiast-class features like the GeForce RTX 3080 Ti laptop GPU and 17.3-inch 360Hz 1080p panel. The MacBook Pro 16 weighs about 4.8 pounds and features a pretty tiny 140-watt power brick along with a 16.2-inch panel with a resolution of 3456×2234 and 120Hz refresh rate. Apple’s laptop is primarily designed for content creation and in no universe would a rational person even try to compare an orange to a hammer, as our sister site Macworld delved into in their own M1 Max vs. Intel Alder Lake analysis. Zealots aren’t rational though." (emphasis added)

ZEALOTS? Who’s comparing a 280W behemoth to the 140W MacBook Pro? And he doesn’t even have both systems to do the testing - he’s relying on “reported results” for the M1 Max. But he’s probably right that his sister publication, Macworld, cherry-picked data and arguments with equal alacrity.

As I said, the MBP 14 ain’t my cup of tea, but somebody really needs to talk to the tech press about straw man arguments - we learned about that in 8th grade debate… I hope users out there will do their own real world testing - benchmarking is becoming as unreliable as political prognostication based on your own beliefs…

4 Likes

People need to figure out their needs and stop buying laptops and desktops that far exceed those needs. It’s such a waste of resources and money. It doesn’t matter who wins the benchmark pissing contest. Only a small percentage of users need the full performance of either the i9 Alder Lake or M1 Max.

More important are the input and output factors—display, keyboard, mouse/trackpad and pen. The quality of those and how well they suit the particular user can be a constant pleasure or annoyance, depending.

Form factor is more important, too: size, weight, battery life, and whether it’s a convertible, detachable, or conventional laptop.

Even aesthetics can be more important than raw performance. Depends on the user.

5 Likes

Spot on, Ted. The MBP 14 that I tried (10-core/16-core SKU) was an absolute beast, but it made no difference in any of the software I use. Now granted, I am not a “creative” nor a “gamer,” so YMMV, but without a doubt it was severe overkill, AND I still had to carry two devices to client meetings for note taking and reviewing documents. I am very happy with the SP8, and that also speaks to your aesthetics point. Yes, I will still have two devices, but the iPad Mini is for my Apple needs (News+, FaceTime, iMessage), not my productivity device.

1 Like

There’s a lot of acrimony in the social world of tech. Part of it is because tech has become 21st-century jewellery, only now technical specs are considered attributes as important as aesthetics, quality, and provenance.

For too many these days, someone else liking something they don’t like is at best deeply uncool and a point for mockery. Then add in the incessant need for clicks, and you have an unsocial mess.

And it extends to all parts of modern life.

You could write papers, or even make a career, out of studying it (and some do). I think it all boils down to the World Wide Web and how connected we are now, all the time. A democratisation of communication, if you will (though I’d almost say anarchy at times).

On the whole it’s positive, but socially we haven’t really evolved fast enough. Perhaps younger generations will adapt to this better, having been immersed in it for as long as they can remember. As for me, I grew up alongside the WWW becoming mainstream, but that was before the kind of social media we have now.

That all comes back to why ‘reviews’ like this exist. Of course, part of it is that there always have been and always will be lazy people; that never changes.

That’s my two pence at least. Thank you for reading my rant.

Edit: Come to think of it, it’s just narcissism. And the WWW just helps and encourages us to indulge in it more than ever before in history.

PS Taking bets on how long it takes before some tech ‘journalist’ gets a ‘scoop’ from here (probably from one of @Desertlap’s posts).

4 Likes

Mortal Sin Time - quoting one’s own post:

But there is method to my madness. I have glazed over from reading reviews of the monster-beast Apple M1 Ultra, but a set of Geekbench 5 results (everyone knows how I revile benchmarks) answered all my questions at once:

M1 Ultra - 1793 single-core and 24055 multi-core

Now, let’s look at some of the rest (taking the best from page 1, most recent tests, for each):

M1 Max - 1773/12692
M1 Pro - 1716/10421
M1 - 1707/7198
i7 1185 - 1509/5812 (MS SP8)
i7 1165 - 1450/4125 (HP Spectre x360-14)

My work (Office 365, Adobe Acrobat Pro DC, Chrome, DrawBoard PDF, Outlook, OneNote, etc.) apparently leans heavily on single-core performance. Look how surprisingly close all of the M1s are on single core, and how much LESS of a spread there is between Apple and Intel on single-core performance vs. multi-core performance. Now all you engineers and tech jockeys are nodding your heads, but for this non-technically-educated noob this really drove home why I WASN’T giddy over the M1 Pro performance I tried out: because I’m likely not using it.
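For anyone who wants to eyeball the same thing numerically, here is a minimal sketch (plain Python, using only the Geekbench 5 numbers quoted above) that normalizes each chip against the slowest one in the list; the spread on single core is far narrower than on multi-core:

```python
# Geekbench 5 scores quoted above as (single-core, multi-core)
scores = {
    "M1 Ultra": (1793, 24055),
    "M1 Max":   (1773, 12692),
    "M1 Pro":   (1716, 10421),
    "M1":       (1707, 7198),
    "i7 1185":  (1509, 5812),   # MS SP8
    "i7 1165":  (1450, 4125),   # HP Spectre x360-14
}

base_sc, base_mc = scores["i7 1165"]  # slowest chip in the list as the baseline
for name, (sc, mc) in scores.items():
    print(f"{name:9s}  single-core {sc / base_sc:.2f}x   multi-core {mc / base_mc:.2f}x")
```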

3 Likes

But it’s doing so at less than half the watts. That is what has always impressed me about the M1. Very few people, including me, need the multiprocessor muscle.

Most graphically intensive games are bottlenecked on the GPU, not the CPU. I typically saw the CPU at around 50% while the GPU was pegged at 99% when I had the ThinkPad X1 Extreme G2 with a 9th-gen i5 H-series chip and an Nvidia GTX 1650. I knew it would be that way; that’s why I chose the i5 instead of paying a couple hundred more for the i7, and I even turned off Turbo in the BIOS because it was clearly unnecessary.

2 Likes

Trust me, I didn’t mean to ignore that fact, which is outstanding. What I am just getting at is that for the average office drone like me, I’ll never experience a “rush” from all that immense low wattage power…

1 Like

I get you, Dale. Just wanted to insert that bit for those who might not be aware. :vibing_cat:

1 Like

Ted - you are so right - speaking of processing vs watts, look at this review:

ArsTechnica - Intel’s Core i7-12700 tested: Top speeds or power efficiency—pick one

Core i7-12700 - GB5 2024/17628 at 190 watts

M1 Ultra - GB5 1793/24055 (all I could find on power was “way under 100 watts”)

I would say Apple says “PICK TWO!” At least Intel got somewhat of a lead in single-core performance.
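To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python using only the figures above; the M1 Ultra wattage is an assumed upper bound (“way under 100 watts”), so treat the output as an order-of-magnitude illustration, not a measurement:

```python
# GB5 multi-core score per watt, from the numbers quoted above.
chips = {
    "Core i7-12700": {"gb5_multi": 17628, "watts": 190},  # Ars-reported power draw
    "M1 Ultra":      {"gb5_multi": 24055, "watts": 100},  # assumed upper bound; actual is "way under"
}

for name, c in chips.items():
    print(f"{name}: {c['gb5_multi'] / c['watts']:.0f} GB5 multi-core points per watt")
```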

1 Like

Apple has done a lot in a short amount of time, but there are some things that the news outlets are ignoring. First, what needs to be emphasized is price-to-performance and performance-to-transistor-count. Apple is getting this efficiency and performance because they are just haphazardly throwing a trap-ton of transistors at the problem and cramming them into a prohibitively costly manufacturing process. That lets them run low and wide. Let’s be clear here: an M1 Ultra has over double the combined transistor count of an RTX 3090 (29 billion) and Core i9-12900K (~10-15 billion): 114 billion total transistors from its two 57-billion-transistor dies. Let that sink in for a minute. How is this an efficient use of logic gates to achieve compute? Answer: it isn’t! Its performance barely surpasses the combination of those two, which is, in fact, underwhelming when you consider Apple has to use double the transistor budget to pull off this parlor trick.

And while we are discussing factors and multiples, that leads me to the second issue: Geekbench 5. Geekbench 5 doesn’t scale well above 10 cores (it has a history of misleading results in other areas too), so you are not going to properly see the performance scaling like you would in Cinebench, which is far more indicative of performance at higher core counts. In the PC market, even a 64-core AMD Ryzen Threadripper 3990X, with a comparatively modest 30.4 billion transistors, was able to hit not only double but triple this performance a couple of years ago - yes, at higher power consumption, but with a far more efficient performance-to-transistor-count ratio.
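As a rough illustration of the performance-to-transistor-count metric being argued here, this is a minimal sketch in Python using only figures already quoted in this thread (two 57-billion-transistor M1 Max dies making the 114-billion-transistor M1 Ultra, plus the Geekbench 5 multi-core scores posted earlier). Keep in mind that an M1 die’s transistor budget also covers the GPU, Neural Engine, and memory controllers, so this is a very crude yardstick rather than a CPU-only comparison:

```python
# GB5 multi-core points per billion transistors, from numbers quoted in this thread.
chips = {
    "M1 Max":   {"gb5_multi": 12692, "transistors_b": 57},
    "M1 Ultra": {"gb5_multi": 24055, "transistors_b": 114},  # two M1 Max dies
}

for name, c in chips.items():
    ratio = c["gb5_multi"] / c["transistors_b"]
    print(f"{name}: {ratio:.0f} GB5 multi-core points per billion transistors")

# The Ultra's multi-core score is about 1.9x the Max's for exactly 2x the
# transistors - nearly linear scaling, but no gain in per-transistor efficiency.
print(f"Ultra/Max scaling: {24055 / 12692:.2f}x")
```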

Apple is doing what they are doing because they are clumsily throwing transistors right and left at the problem rather than working toward the most transistor-efficient design (that design approach represents many more years of engineering work), and then paying top dollar for the best process so they can forget about the overabundance of transistors they are using and run low and wide. To top it off, you get yield issues when you have to etch dies large enough to house enormous transistor counts like that, which is why even NVIDIA, which already produces gigantic dies in its BFGPUs (big freaking GPUs), fears to tread where Apple has rushed in. You are going to have an alarming number of die rejects (that is, very low yields), which comes with an additional massive uptick in pricing on top of paying for the world’s best manufacturing process.

I feel like Apple is forgetting one of the cardinal rules drilled into computer engineering students’ heads, and that is logic-gate efficiency: use as few transistor gates as possible to solve the compute problem as quickly and efficiently as possible. (This is why I actually admire Steve Wozniak more than Steve Jobs as far as technical competence is concerned. Read this story and you will begin to appreciate what I am talking about: Breakout (video game) - Wikipedia) In many respects, you might argue that Apple is moving into the same rut Intel was in five years ago, only instead of an issue with power consumption, it is a problem with transistor count and, by extension, die size. If Apple ever loses their manufacturing process advantage (and with how precarious the world situation is right now, that is a growing possibility given their total reliance on TSMC), they will not be able to be a leader anymore. One slip-up, and they will fall behind given this frankly lazy approach to processor architecture. Wake me up when there is an M1 Ultra that uses even 20% fewer transistors than this to produce the same performance and power outcomes, and then I will begin to be impressed. End of soapbox.

3 Likes

I hadn’t thought about the performance-to-transistor comparison. As an end user, how do I benefit from efficiency in that dimension? Does it make for a less expensive chip? Apologies if this is a dumb question - I don’t think I even know enough to be dangerous when it comes to chip stuff. I’ve only ever used Atom and Pentium and laptop Core i3/5/7, and I’m not a gamer. I know Xeon and Threadripper are high end. So is the M1 Ultra a lot more expensive than the Threadripper because of all the extra transistors? You know the old joke about “pick two” of whatever the three benefits are. Maybe in chips it would be price, performance, and temperature/power draw. So the Ultra gives a ton of performance at lower draw, but at the cost of more transistors and more money? Or maybe I’m missing the idea altogether. I must admit that benchmark numbers make my eyes glaze over a little bit, lol.

So for the end user it currently has nil impact, but to the manufacturer it represents billions in cost savings, and perhaps even more if a shortage were to occur. Here’s a short explanation why, and this is not even including the tens to hundreds of billions in R&D that precede production: “The price of a 10nm wafer costs almost $6000 and a 7nm wafer costs $9346. A price per wafer that nearly doubles at 5nm, each costing nearly $17000.” (The price of a 5nm wafer from TSMC is a whopping $ 16.988 - HardwarEsfera). Something fun to play with that Ian Cutress has shared with folks in the “silicon gang” is this nifty tool: Die Per Wafer Calculator - CALY Technologies (caly-technologies.com).

Per this next article, “[w]e can see that the M1 chip is worth $40 per die at a yield rate of 80%, M1 Pro worth $96 per die at a yield rate of 70% and M1 Max at $200 per die [two M1 Maxes make a single M1 Ultra] at a yield rate of 60%” (Is Apple Fleecing you?. Cost break-down of the MacBook Air and… | by AD Reviews | Mac O’Clock | Medium). So, as two M1 Maxes make a single M1 Ultra, $400 is just the base die cost, and that doesn’t even include packaging the dies (since these are chiplets in a multi-chip module [MCM], the packaging process is going to be significantly more pricey than for the other members of the M1 family) or the preceding R&D, both of which significantly increase costs as well.

Most processor makers like AMD and Intel aim for >90% yields given just how costly it is to manufacture on today’s processes. So to say Apple is wasteful is an understatement. Consider how silicon production could get very tight and costly very quickly in our current world climate, and that is what makes Apple’s approach so risky. That is especially true with today’s news of a neon shortage (neon being essential to chip production), with Ukraine now unable to deliver that essential production element.
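For anyone curious what the linked calculator is doing under the hood, here is a minimal sketch of the same arithmetic in Python. The die area (~430 mm² for an M1 Max-class die) is my assumption for illustration, not an Apple-published figure; the wafer price and 60% yield are the numbers quoted above:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic die-per-wafer approximation: usable wafer area divided by die
    area, minus a correction term for partial dies lost around the wafer edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

wafer_cost = 17_000   # ~5nm wafer price quoted above (USD)
die_area   = 430      # assumed M1 Max-class die area in mm^2 (illustrative only)
yield_rate = 0.60     # 60% yield figure quoted above

gross_dies = dies_per_wafer(die_area)
good_dies  = gross_dies * yield_rate
print(f"{gross_dies} candidate dies per wafer, ~{good_dies:.0f} good dies at {yield_rate:.0%} yield")
print(f"~${wafer_cost / good_dies:.0f} per good die")
```

That lands in the same ballpark as the ~$200-per-die figure from the Medium article, which is the point: a big die plus a mediocre yield makes each good die expensive before packaging and R&D are even counted.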

@Hifihedgehog - I will not even pretend to understand everything you are talking about, but from a user perspective it comes across as “So what difference does it make?” Even if design elegance and efficiency are held in high disregard by Apple, what does it matter so long as the performance and power utilization surpass the rest of the industry? By way of analogy, a Tesla has a far better electric range (300 miles) than my plug-in Honda Clarity (50 miles) - but only because it has a big @$$ battery. That’s what it feels like when you say they are just throwing transistors at the problem.

1 Like

Think of it this way: transistors are the building blocks for solving math problems. The best designs use fewer transistors and less power. Let’s say manufacturer A uses two transistors to solve one problem, while manufacturer B uses four transistors to solve the same problem. Manufacturer B is able to hide their problem by using smaller transistors from a newer manufacturing process, and thereby use less power than manufacturer A. Think of it like switching from incandescents to LEDs. Applied here, manufacturer B is Apple. Apple hides the problem of using more transistors to solve problems by paying for the latest process, which makes smaller transistors that individually draw less power. This is a gross simplification, but erase the process advantage and you begin to erase their advantages.
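Here is a toy model of that analogy in Python; all the numbers are hypothetical and chosen only to illustrate the argument, not taken from any real chip:

```python
# Toy model: total power ~ transistor count x power per transistor (hypothetical units).
transistors = {"A": 2, "B": 4}                # B uses twice as many transistors
power_per_transistor = {"A": 1.0, "B": 0.5}   # B's newer node halves per-transistor power

for maker in ("A", "B"):
    print(f"Manufacturer {maker}: relative power = "
          f"{transistors[maker] * power_per_transistor[maker]:.1f}")

# Both come out at 2.0: B's process advantage exactly masks its transistor
# inefficiency. Set B's per-transistor power back to 1.0 (no node advantage)
# and B draws twice the power for the same work.
```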

The other problem is the per-die cost that Apple hides from the end user. If the chip-manufacturing situation should take a turn for the worse given the scarcity of certain raw materials like neon, Apple very realistically would no longer be able to shoulder the burden, and it would, in fact, be cheaper for them to buy from a company that uses more compact, transistor-efficient designs. For example, a single die that might cost Apple $200 might cost only $20 for AMD or Intel. Should scarcity hit due to a lack of a critical raw material and we see a 3x price increase, that would be $600 for Apple but a mere $60 for AMD or Intel, given how many fewer transistors they use. Suddenly, Apple’s monolithic, transistor-sucking design philosophy falls apart when its waves crash against the rocky cliffs of hard economics.

Won’t that just lead to them losing some of their outrageous profits rather than turning to other foundry solutions?

I guess CNET is wrong in its theory that the “M1 Ultra Shows the Future of Computer Chips.”

Correct. Right now, the silicon itself is not as much of what you pay for as the R&D and other associated costs are. Should that change, though, companies will be forced to be more cognizant of die cost, since it will come to dominate total cost and profitability.

Thinking about this some more, isn’t throwing more transistors at it just the continued evolution of the move to dual core and then quad core chips? It has to be fifteen years since Intel first hit a wall with single core and decided to go to dual core. I don’t think anyone thinks of multi-core processing as some sort of cheesy hack.

As far as Apple being at risk, I’m skeptical. First, they’re going to sell relatively few Ultra or even Max or Pro so the vast majority of their sales will be with base M1 or M_ computers, which they sell at high cost. Second, and relatedly, unlike Intel and AMD, Apple does not sell chips and is not playing the cost-per-chip game; they have entirely different economics and can use massive profits from other businesses to pay down chip costs. Third, as a key customer Apple is better positioned than other manufacturers to secure scarce materials. They lock up entire supply chains and get first access.

Would it be cooler if they did this by coming up with massively faster cores that were also cooler? I guess, but I still have to tip my cap.

2 Likes

I hadn’t even thought in these terms, and you are right, Apple has a lot of “insulation.” First and foremost is the economic angle of not having to buy chips from third parties. Cost of goods sold benefits a lot here. Add to that $200 BILLION in the bank, and you can weather a lot of storms. Best of all, they already make most of the profits in the industry, so margins are way in their favor.

Looks like Intel really should start competing against TSMC as Apple’s foundry… oops, those cherry-picked comparisons might come back to haunt them there…

1 Like

So I have LOTS of issues with some of the asserted “facts” in this thread and perhaps I’ll post my rebuttal at some point.

However, so far, IMHO, the cross-optimization that Apple can do, and does, between chip and OS is at least as important as, if not more important than, arguing about transistor counts (which are virtually meaningless when you compare RISC to CISC).

Apple absolutely has a unique advantage with their essentially closed system of OS and hardware. And as has been shown time and time again, unfortunately open systems suffer from developers aiming the vast majority of the time at the lowest common denominator.

We’ve seen custom, purpose-built Linux builds for x86 architectures that absolutely clobber Windows at specific tasks. And, for that matter, we’ve heard similar things about the custom OS that MS is using in some of their data centers with their own custom ARM designs.

The point I’m making is that anyone who uses core counts or transistor counts to “prove” their chosen platform’s “superiority” is at a minimum being disingenuous. It’s the entire system applied to real-world tasks that matters.

1 Like