FFXI Benchmark On M1 Mac

By Asura.Icilies 2020-11-29 17:32:38

Shiva.Thorny said: »
I didn't give 'no credit' to Apple, I said that your claim is not in line with reality. It is a significant improvement for mobile devices, but it is not competitive with desktop CPUs. I have said nothing bad about Apple, nor the chip. I said it won't revolutionize your laptop, which is true; it is an improvement, but not a colossal one.

You dropped another anecdote, mentioned some other irrelevant things, and changed the topic. Have you been indoctrinated into a cult? Is Steve Jobs holding you hostage? A mobile CPU doesn't have to beat a top-tier desktop CPU to be good, but claiming it does just makes you sound ridiculous. Focus on reality, and it is still a pretty cool advancement.

So you think benchmarks are the be-all and end-all? Just so I'm understanding correctly.

Real-world performance means nothing? How the chip actually performs doesn't matter?
By Shiva.Thorny 2020-11-29 17:34:06

Real world performance matters more than benchmarks, but anecdotal observations from biased reviewers and fanboys are not reliable indicators of real world performance.

If you want to measure real world performance and claim that it performs better in reality than it does in benchmarks, a test with controlled parameters needs to be designed that can substantiate that claim.

This isn't rocket science. They teach the scientific method in 6th grade around here.
By Asura.Icilies 2020-11-29 17:36:56

Shiva.Thorny said: »
Real world performance matters more than benchmarks, but anecdotal observations from biased reviewers and fanboys are not reliable indicators of real world performance.

If you want to measure real world performance and claim that it performs better in reality than it does in benchmarks, a test with controlled parameters needs to be designed that can substantiate that claim.

This isn't rocket science. They teach the scientific method in 6th grade around here.

As a matter of educating myself, please provide what you would call an acceptable comparison of real-world performance between the M1 and a desktop processor. Don't give me any specs.
By Asura.Arico 2020-11-29 17:37:48

Jetackuu said: »
Give credit to them for what? Taking an existing technology, slapping their label on it and calling it innovation?

Yeah, no.

To be fair, they hold an architectural license, not a core license, so Apple is doing a fair bit more than slapping their name on it.
By Asura.Icilies 2020-11-29 17:39:49

Asura.Arico said: »
Jetackuu said: »
Give credit to them for what? Taking an existing technology, slapping their label on it and calling it innovation?

Yeah, no.

To be fair, they hold an architectural license, not a core license, so Apple is doing a fair bit more than slapping their name on it.

Yes, and within three years of obtaining that license they surpassed what ARM itself could do.
By Shiva.Thorny 2020-11-29 17:43:34

A real world performance comparison involves:

-Deciding which application you're using to compare.
-Selecting enough different sample data sets to avoid any inherent bias.
-Creating firm parameters for what measurement you will use to determine which chip is better.
-Eliminating as many outside variables as possible.

If you take the example from last page, processing raw 4k 60fps footage, you would use the same software under the same conditions, with a half dozen or more sample clips. Process each clip 5-10 times using the M1, then 5-10 times using your comparison CPU, with as few other variables as possible changed. Record all of this, and then if the results seem suspicious other content creators will run comparable tests. Eventually, enough tests come out that a clear conclusion can be drawn.

Seems like a lot to handle? That's why people use benchmarks. A well designed benchmark is doing the same sort of tasks to begin with, it's just taking the difficulty of controlling for everything out of the picture and making crowd data collection much faster and easier.

Cinebench imitates Cinema 4D's rendering workload, so it's already in the same arena as what you're talking about.
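To make that protocol concrete, here's a minimal sketch of such a timing harness in Python; `process_clip` and the clip filenames are hypothetical stand-ins for whatever encoder or workload is actually being measured.

```python
import statistics
import time

def process_clip(clip_path):
    # Hypothetical stand-in for the real workload, e.g. invoking an
    # encoder on one raw 4K60 sample clip. Replace with the actual task.
    with open(clip_path, "rb") as f:
        while f.read(1 << 20):  # read in 1 MiB chunks to simulate work
            pass

def benchmark(clip_paths, runs_per_clip=10):
    """Time each sample clip several times and report mean/stdev,
    mirroring the 'half dozen clips, 5-10 runs each' protocol above."""
    results = {}
    for clip in clip_paths:
        timings = []
        for _ in range(runs_per_clip):
            start = time.perf_counter()
            process_clip(clip)
            timings.append(time.perf_counter() - start)
        results[clip] = (statistics.mean(timings), statistics.stdev(timings))
    return results

if __name__ == "__main__":
    # Run the same script unchanged on the M1 and on the comparison CPU,
    # then compare the reported means (the stdev flags noisy runs).
    for clip, (mean, stdev) in benchmark(["clip1.raw", "clip2.raw"]).items():
        print(f"{clip}: {mean:.2f}s +/- {stdev:.2f}s")
```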
By Asura.Icilies 2020-11-29 17:50:32

Shiva.Thorny said: »
[...]

So you have zero examples. Got it.

Your argument against the hundreds of visual examples of the M1 beating Intel in A/B tests is that they didn't run the tests a dozen times. Am I understanding you correctly?
By Shiva.Thorny 2020-11-29 17:52:28

I'm arguing against visual examples that did not adhere to a scientific standard. If I flip a coin 10 times and get tails on 8 of them, is it fair for me to say that my coin is 80% likely to land tails up?

There is a reason the scientific method exists, and why we use large samples and well-defined measuring criteria when drawing conclusions. If you are unable to comprehend that, I am sorry that your school system failed you, but there's not much else I can add.

Real-world comparisons of the M1 against other processors will exist once it has been out longer.
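To put numbers on the coin example, a quick standard-library sketch: a fair coin lands tails 8 or more times out of 10 about 5.5% of the time, so a handful of runs can easily "show" a lopsided result.

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A fair coin gives 8+ tails in 10 flips about 5.5% of the time, so 10
# flips can't reliably distinguish a fair coin from an '80% tails' coin.
print(f"P(>=8 tails in 10 flips of a fair coin) = {prob_at_least(8, 10):.3f}")
```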
By Jetackuu 2020-11-29 17:53:17

ooh, another fallacy, come on, I almost have bingo
By Asura.Saevel 2020-11-29 17:57:33

Asura.Arico said: »
Asura.Icilies said: »

Wrong.

YouTube. Hundreds of videos at this point

Hey guys, welcome to the ArmIsTheFuture YouTube channel! We're here today to compare the iPad Air on the A14 Bionic with an Intel 10900K on a super-niche 20-year-old benchmark. Who will win? WHAT?! THE A14 BEAT INTEL?!

Yeah, the fanbois are going nuts with cherry-picked synthetics. Synthetic benchmarks need to be taken with a truckload of salt because context is extremely important with those: what exactly are we testing, and what would be the real-world impact? Of course, real application testing has been limited due to Apple's walled-garden approach, which is why I'm waiting for GCC and Homebrew to be updated.

What I suspect so far is that there is nothing special about the M1 uarch; it's just a typical tablet ARM SoC. Instead, the performance benefits come from TSMC's 5nm process, for which Apple was the first customer. This is really TSMC's fabrication process vs Intel's.

https://wccftech.com/amd-zen-4-5-nm-launching-2021/

Quote:
What is really amazing to hear in the report is that TSMC's 5nm yield has already crossed 7nm - which is quite the feat. This would mean that TSMC's 5nm will become viable sooner than expected and the transition from 7nm to 5nm can begin in earnest as well. The three customers that will be able to grab the first wave of production capacity are Apple, HiSilicon and AMD

In the next few months AMD is going to release its next-generation Ryzen on 5nm while Intel is still working on getting its 7nm out the door. All those Intel CPUs are using older 10-14nm processes, meaning the M1 is quite literally 200-280% more efficient. Heck, AMD going from 7nm to 5nm is a 40% increase in efficiency alone.

But fanbois gonna be fanbois.
By Leviathan.Celebrindal 2020-11-29 18:00:54

Let's be honest: Apple makes more money off tech-illiterate buyers than off those who know what they're buying, and has ever since the 80s when compared against the PC market. So cherry-picked benchmarks are right up those fanbois' alley.

As has been said by those with far better tech credentials than me, let's see what real testing based on the scientific method reveals when comparing this chip to comparable hardware rather than legacy ***.
By Idiot Boy 2020-11-29 18:05:06

The interesting part isn't even the performance aspect, it's the "unified architecture across all their devices" aspect.
By Asura.Saevel 2020-11-29 18:08:32

Asura.Arico said: »
Jetackuu said: »
Give credit to them for what? Taking an existing technology, slapping their label on it and calling it innovation?

Yeah, no.

To be fair, they hold an architectural license, not a core license, so Apple is doing a fair bit more than slapping their name on it.

Well, Apple was one of the founders of ARM; they don't need a "core" license, as they have full access to all the designs.

https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/a-brief-history-of-arm-part-1

There is nothing special about this design. It's just a big.LITTLE design that's been tweaked for more L1 cache and more integer units per core, but otherwise it's still an ARMv8 (actually ARMv8.4-A). That's one of the downsides of being on a RISC uarch: one design of a version level is more or less the same as every other design of that version level. It has to be, otherwise it's not binary compatible. Design is cheaper, as the components are standardized. Neither Intel nor AMD actually make x86 CPUs anymore; instead, they make RISC CPUs that have a large external instruction decoder bolted on that makes them appear to be x86 to the computer. That front-end layer lets them redesign the back end without worrying about binary compatibility. The downside is that it's much more expensive to design. In other words, Apple spent much less designing its M1 CPU than either Intel or AMD did on theirs.
By Asura.Saevel 2020-11-29 18:14:59

Idiot Boy said: »
The interesting part isn't even the performance aspect, it's the "unified architecture across all their devices" aspect.

That's just the usual walled-garden approach. Apple people are going to use Apple software to speak in Apple language to other Apple people.

Take a tablet, attach a USB dongle to it, plug a keyboard and mouse into that dongle, then plug a mini-HDMI cable into a monitor. Congrats, "desktop PC".

I prefer anything with Android on it because it's Linux and I can use ADB to do things to it. As long as I can override any carrier lockouts, I can load whatever drivers or software I want and alter any system settings I desire. I can even load a new, cleaner OS onto it.

https://www.xda-developers.com/what-is-adb/
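For the unfamiliar, driving ADB from a script looks roughly like this; a minimal sketch assuming `adb` is installed and on your PATH with USB debugging enabled, and the commented APK filename is made up.

```python
import subprocess

def adb(*args):
    """Run an adb command and return its stdout."""
    result = subprocess.run(["adb", *args], capture_output=True,
                            text=True, check=True)
    return result.stdout

print(adb("devices"))  # list attached devices
print(adb("shell", "getprop", "ro.build.version.release"))  # Android version
# adb("install", "some_app.apk")  # sideload an APK (hypothetical filename)
```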
By Asura.Icilies 2020-11-29 18:22:21

Shiva.Thorny said: »
[...]

So condescending.

At least I understand your perspective. If you run across a test that satisfies your criteria, please post it.
By Asura.Saevel 2020-11-29 18:33:34

Normally the tech sites run a set of real benchmarks on new products, but because it's Apple, they're not really allowed to; only the pre-approved ones available inside the Garden are allowed.

An example from Tom's Hardware:

https://www.tomshardware.com/reviews/amd-ryzen-9-5950x-5900x-zen-3-review/8

Geomean, Blender, Handbrake, POV-Ray, and so forth. Many of those are open source, so they can be built for specific situations and the source is easy to understand.

Puget Systems did a comparison; they specialize in media workstations for professional content creators.

https://www.pugetsystems.com/labs/articles/Apple-M1-MacBook-vs-PC-Desktop-Workstation-for-Adobe-Creative-Cloud-1975/

Now remember, those are pretty high-end desktop PCs, but it shows that the Apple M1 isn't anywhere near as fast as people are pumping it up to be. Lots of cherry-picked synthetics that don't take the whole platform into account.
By Asura.Icilies 2020-11-29 18:40:55

Asura.Saevel said: »
[...]

I'll reserve any more judgment until all the programs you just linked are running native vs. Rosetta.
By Asura.Arico 2020-11-29 19:17:58

Asura.Icilies said: »
I'll reserve any more judgment until all the programs you just linked are running native vs. Rosetta.

In one to two years Adobe will release native versions and they will do better. How is that relevant? This is a real-world example; some editors use Premiere.

If some account manager told me, "Hey, you should upgrade your 3950X-based computer to a Mac Mini because you're going to get the same or better real-world performance," and I bought it, got the performance numbers shown in his link, and was then told, "Oh yeah, Adobe should be releasing native Premiere in 1-2 years," I would be livid.


YouTube Video Placeholder


Here's a video of the M1 MacBook Pro barely losing to an Intel-based 16" MacBook Pro in Final Cut Pro, an Apple-built software suite specifically designed to run well on the M1. And that's a laptop-grade i9, not a desktop-grade one.

Again, this is a *** insane accomplishment, but it's not more powerful than a high-end desktop CPU. I don't understand why the hill you want to die on is that it's more powerful in real-world examples than a high-end desktop CPU, when everyone here is already saying it's a great chip, just not as good as you're hyping it up to be.
By Asura.Saevel 2020-11-29 19:28:59

Asura.Arico said: »
Again, this is a *** insane accomplishment, but it's not more powerful than a high-end desktop CPU. I don't understand why the hill you want to die on is that it's more powerful in real-world examples than a high-end desktop CPU, when everyone here is already saying it's a great chip, just not as good as you're hyping it up to be.

Because Apple beat the hype drum that it was better than 97% of the world's computers. In small footnotes they made a bunch of exceptions that nobody bothered to read. Basically, Apple wants people to buy its new M1-based platform and needs to give them a reason to do it.
By Asura.Icilies 2020-11-29 19:34:26

Asura.Arico said: »
[...]

Admittedly, I associated "mobile" with phones, not portable laptops.

Also, the chip IS outperforming high-end CPUs in optimized applications. All reviews are indicating the same results.

Clearly, non-portable units are going to be superior for now, given the footprint of the unit. However, the example of Canon R5 10-bit 4K 60 FPS footage shows that this may not always be true.

And his comment about this being anything less than a milestone in processing is disingenuous.
By Asura.Arico 2020-11-29 19:35:27

Asura.Icilies said: »
Admittedly, I associated "mobile" with phones, not portable laptops.

Everyone was talking about mobile as in Intel's and AMD's mobile lines of CPUs (laptops/ultrabooks).

Asura.Icilies said: »
Clearly, non-portable units are going to be superior for now, given the footprint of the unit. However, the example of Canon R5 10-bit 4K 60 FPS footage shows that this may not always be true.

That's literally been everyone's argument this whole time.
By Leviathan.Celebrindal 2020-11-29 19:38:19

Asura.Icilies said: »
The gilsellers comment about this being anything less than a milestone in processing is disingenuous.

Just thought I'd get the OG version for posterity's sake.

Don't mix arguments; it diminishes those of us reading with less info than any of y'all hoping to learn something.
By Jetackuu 2020-11-29 19:40:12

lol, another fallacy for the bingo board
By Asura.Icilies 2020-11-29 19:40:40

Leviathan.Celebrindal said: »
Asura.Icilies said: »
The gilsellers comment about this being anything less than a milestone in processing is disingenuous.

Just thought I'd get the OG version for posterity's sake.

Don't mix arguments- it diminishes those of us reading with less info than any of y'all hoping to learn something.


I did this to see if they were spamming refresh. I was hoping they would quote it, tbh.
By Shiva.Thorny 2020-11-29 19:41:45

Asura.Icilies said: »
I did this to see if they were spamming refresh. I was hoping they would quote it, tbh.

"I'm not an idiot, I'm just trolling!"

I mentioned the 4600U on the first page; if you can't even tell Ryzen CPU model numbers from mobile phone processors, then I'm guessing you're not much of an enthusiast. Let's be real here: your unhealthy obsession with Apple and unwillingness to apply any standards to your claim are just silly at this point. There will be more data in the future, but the current data does not in any way support the idea that the M1 can compete with top-end desktop CPUs.
By Asura.Icilies 2020-11-29 19:42:24

Shiva.Thorny said: »
Asura.Icilies said: »
I did this to see if they were spamming refresh. I was hoping they would quote it, tbh.

"I'm not an idiot, I'm just trolling!"

Lol. I did troll a bit.

Apologies. Kinda.
By Asura.Saevel 2020-11-29 20:43:19

Asura.Icilies said: »
Also, the chip IS outperforming high-end CPUs in optimized applications.

Considering the applications we care about haven't been recompiled / ported to ARMv8, this statement is false.

About the only place I can see this thing "outperforming" high-TDP processors is single-threaded integer tasks that have poor prefetching and are therefore memory-bandwidth limited. That is an incredibly small subset of workloads, which are almost always better done on the massive vector processors we call GPUs. This is largely a result of clock rates: the M1 is around 3GHz, and virtually every processor it's been "benchmarked" against has also been in the 2.5-3GHz range. This space is dominated by two types of chips, low-power mobile and high-core desktop, though the desktop chips also have a "boost" ability which can be flaky.

To understand why a 10W TDP chip has absolutely zero chance at competing with a 65 to 85W TDP chip, you first need to understand what those numbers mean. Thermal Design Power is the designed thermal envelope of a processor, or really anything computing related. It means the cooling system needs to be capable of removing that much waste heat continuously, otherwise the chip overheats and ceases to function. That heat came from the electricity the CPU was using. Thus the capabilities of any processor are strictly limited by the amount of heat that can be removed.

Now, before anyone says "but but muh super Apple Clark Tech", the answer is no. This is a pretty high-level description with some details glossed over, because it would be a seminar otherwise.

Physics is a harsh mistress, and the amount of thermal energy released is based entirely on thermodynamics. Heat generated by a circuit is H = (I^2) * R * t, with I being the current, R being the resistance, and t being the time. For processors, R is determined by the process node, with smaller values being better than larger values: a 5 nanometer transistor has less ohmic resistance than a 7, 10, or 12 nanometer transistor. That is why the process node is so important. Current and clock rate are tightly linked in processors, because to reach higher clock rates you end up needing to push more voltage, which drives more current; lowering one lowers the other, and lowering the clock rate directly lowers performance. Performance is instructions per clock times clock rate, so we can see why we don't want to lower our clock rates. By doubling our transistors we can double our performance per clock (not exactly, but that's a bigger discussion), but that also doubles our power consumption. Doubling the clock rate means increasing voltage, and the power consumption almost quadruples. Yeah, running anything at 5GHz consumes a stupid amount of power at current node sizes, much less trying to run it at 6 or 7GHz.

So there, a basic understanding of why that TDP number is so important. A chip with a TDP of 65 to 85W simply has more headroom to pack in more transistors, raise the clock rate, or both, compared to a chip with a 10W TDP. Going to a smaller node lowers the resistance and allows for more transistors, higher clock rates, or both. At 5nm, that 10W Apple chip would have approximately the same processing power as a 10-14nm CPU in the 35-55W category. All we've seen here is Apple being the first customer for TSMC's 5nm process and thus getting the spotlight, until someone else releases a 65-100W TDP CPU on the same node.

Note: I simplified the above because it's way too complicated to start discussing the various tweaks that processor designers implement to squeeze into a thermal envelope. And while those tweaks can change things up a bit, it's normally in the 5-15% range of improvement, not the 25-30%+ from a node shrink.
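To make the arithmetic concrete, here's a toy calculation of that H = (I^2) * R * t relation with made-up numbers (illustrative only, not real chip figures): a node shrink trims the heat by lowering resistance, while doubling the current to chase clocks roughly quadruples it.

```python
def joule_heat(current_a, resistance_ohm, seconds):
    """Joule heating: H = I^2 * R * t (I is current, not voltage)."""
    return current_a**2 * resistance_ohm * seconds

# Made-up illustrative numbers, not real chip data.
base = joule_heat(current_a=10.0, resistance_ohm=0.10, seconds=1.0)  # 10 J
shrunk = joule_heat(10.0, 0.07, 1.0)   # smaller node -> lower resistance
clocked = joule_heat(20.0, 0.10, 1.0)  # doubled current -> ~4x the heat

print(f"baseline: {base:.1f} J, node shrink: {shrunk:.1f} J, "
      f"doubled current: {clocked:.1f} J")
```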
By Asura.Icilies 2020-11-29 20:55:15

Asura.Saevel said: »
[...]

lol. All this and the M1 still dominating

You ok?
By Asura.Icilies 2020-11-29 21:00:13

Asura.Icilies said: »
[...]
You are really trying to say 5nm is the only reason a fanless MacBook Air is destroying the market right now?
By Asura.Arico 2020-11-29 21:15:36

Asura.Icilies said: »
You are really trying to say 5nm is the only reason a fanless MacBook Air is destroying the market right now?

I really want to be on your side because I like my Mac Mini, but come on. Are you trying to say it being on 5nm didn't make it possible?