Thread 'iGPU vs. CPU Efficiency: Why I stopped using my CPU for BOINC'

Message boards : Projects : iGPU vs. CPU Efficiency: Why I stopped using my CPU for BOINC

kasdashdfjsah

Joined: 29 Jan 24
Posts: 96
Message 118020 - Posted: 9 Jan 2026, 21:15:29 UTC

I’ve been testing my base M4 Mac Mini on BOINC projects like PrimeGrid, and the efficiency gap between the iGPU and the CPU is huge.

Running on the iGPU, I’m finishing GFN-21 tasks in about 50 hours.

Getting the same result on the CPU cores would take nearly 9x longer (around 450 hours) while drawing roughly the same power, about 25-30 W.
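
To put that in energy terms, here is a quick back-of-the-envelope in Python using the numbers above (27.5 W is just my assumed midpoint of the 25-30 W range):

# Rough energy-per-task estimate from the figures above.
# Assumes the same ~27.5 W average draw (midpoint of 25-30 W) in both cases.
POWER_W = 27.5

igpu_hours = 50    # GFN-21 task on the M4 iGPU
cpu_hours = 450    # the same task on the CPU cores (estimated)

igpu_kwh = POWER_W * igpu_hours / 1000   # ~1.4 kWh per task
cpu_kwh = POWER_W * cpu_hours / 1000     # ~12.4 kWh per task

print(f"iGPU: {igpu_kwh:.1f} kWh per task")
print(f"CPU:  {cpu_kwh:.1f} kWh per task ({cpu_kwh / igpu_kwh:.0f}x the energy)")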

The biggest difference is the noise: on the iGPU, the fan stays at a near whisper, while running on the CPU pushes temperatures up and makes the machine much louder.

Even though my Mac is a "base" model, this seems to be the rule now for all modern iGPU/CPU combos.

The integrated graphics are just significantly better at handling the parallel math these projects require.

If you want to contribute the most while keeping your power bill and fan noise down, stick to the iGPU.

In short, if your project has a GPU app, use it and leave the CPU cores for something else.

You get way more credits for the same electricity and a much quieter machine.
ID: 118020
Grant (SSSF)

Joined: 7 Dec 24
Posts: 243
Message 118022 - Posted: 9 Jan 2026, 22:06:47 UTC - in response to Message 118020.  

In reply to kasdashdfjsah's message of 9 Jan 2026:
Even though my Mac is a "base" model, this seems to be the rule now for all modern iGPU/CPU combos.
It isn't for Intel or AMD iGPUs in most cases.
Current mid-range AMD/Intel CPUs can pump out much more work per hour per watt than their iGPUs can, and their high-end CPUs even more so.

A low-end Strix Halo or one of the yet-to-be-released Intel CPUs with Xe3 iGPUs may provide significantly improved iGPU performance, but for all other current and older AMD/Intel CPUs, the CPU provides far more output per hour per watt than the iGPU.

They are good for driving a monitor, but no good for compute work.
Grant
Darwin NT.
ID: 118022
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 3256
United Kingdom
Message 118027 - Posted: 10 Jan 2026, 9:10:06 UTC

My iGPU doesn't get used since the card does the heavy lifting, so it is a shame my main projects won't use either of my GPUs. In the past, I was told that the type of computation done for CPDN is less suited to GPUs, but we were also told that about multi-core processing, and now tasks using 4 cores are one of the main task types. This makes me wonder if the real issue is the need for someone with the relevant coding skills.
ID: 118027
kasdashdfjsah

Joined: 29 Jan 24
Posts: 96
Message 118034 - Posted: 10 Jan 2026, 18:09:30 UTC - in response to Message 118022.  

Do you have a source for these claims you can share? :)
ID: 118034
Grant (SSSF)

Joined: 7 Dec 24
Posts: 243
Message 118038 - Posted: 10 Jan 2026, 22:13:46 UTC - in response to Message 118034.  

In reply to kasdashdfjsah's message of 10 Jan 2026:
Do you have a source for these claims you can share? :)
The forums of any BOINC project where people have compared the output of their systems when using/not using their AMD/Intel iGPUs.
And while they are few and far between, there have been a few hardware reviews where the iGPU's compute capability was tested as well as its gaming capability.


And a few months ago on Numberfields I tried using my Ryzen laptop's iGPU; its output was roughly on par with a single core/thread of the CPU.
The extra heat from the iGPU running flat out reduced the thermal headroom of the whole package and cut into the CPU's maximum sustained clock speed. So running all CPU cores/threads and not using the iGPU, while drawing much more power, not only produced more work per hour but also used less energy per task processed.
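
As a rough illustration of why more watts can still mean less energy per task (the figures below are made up for the example, not measurements from that laptop):

# Made-up figures, only to show the trade-off described above.
def wh_per_task(power_w, tasks_per_hour):
    # Energy used per completed task, in watt-hours.
    return power_w / tasks_per_hour

igpu_only = wh_per_task(power_w=20, tasks_per_hour=1)        # 20.0 Wh per task
all_cpu_threads = wh_per_task(power_w=45, tasks_per_hour=8)  # ~5.6 Wh per task

print(f"iGPU only:       {igpu_only:.1f} Wh per task")
print(f"All CPU threads: {all_cpu_threads:.1f} Wh per task")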


As I said, the Strix Halo and the yet-to-be-released Xe3 iGPUs may be worth running. The same goes for the high-end Apple iGPUs, and even then everything will depend very much on the application programming.
But even an extremely efficient application running on the iGPU won't be enough to offset the loss of CPU output, let alone a low-efficiency one, particularly with many of the high and very-high core/thread count CPUs. The only way to be sure is to try all the possible combinations of cores/threads/iGPU usage and see what results.
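
If you do run those combinations, a few lines of Python make the comparison easy (every figure below is a placeholder; swap in your own measured runs):

# Compare measured configurations by throughput and by energy efficiency.
runs = [
    # (configuration,          tasks done, hours elapsed, average watts)
    ("16 threads, iGPU off",   24,         12.0,          95),
    ("14 threads, iGPU on",    22,         12.0,          110),
    ("iGPU only",               3,         12.0,          35),
]

for name, tasks, hours, watts in runs:
    tasks_per_hour = tasks / hours
    tasks_per_kwh = tasks / (watts * hours / 1000)
    print(f"{name:22s} {tasks_per_hour:5.2f} tasks/h  {tasks_per_kwh:6.2f} tasks/kWh")
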
But for now, with current AMD and Intel hardware, the iGPU is good for driving your display, not for compute work.
Grant
Darwin NT.
ID: 118038
ahorek

Joined: 18 Jan 26
Posts: 2
Czech Republic
Message 118122 - Posted: 18 Jan 2026, 22:09:13 UTC - in response to Message 118038.  

Most laptop and desktop iGPUs are far weaker; they mainly handle display output and share resources with the CPU, which can reduce CPU performance when both are under load. They are not intended to serve as the primary GPU for gaming or compute workloads in the way Apple’s GPU is. Strix Halo is another example of a higher-performance integrated GPU.

For comparison, the M4 Mac mini should offer performance comparable to a six-year-old GeForce GTX 1650 SUPER, while being 3 times more power efficient.

Also, not all projects benefit from GPU acceleration the way PrimeGrid does. It depends on the application and your specific CPU/GPU combo, so measure the results and stick with what works best on your system.
ID: 118122
