Thread 'Re-opening the efficiency debate: The 2026 hardware reality'



kasdashdfjsah

Joined: 29 Jan 24
Posts: 106
Message 119014 - Posted: 1 May 2026, 17:30:22 UTC

I started a thread on this last year, but the release of newer NPU and iGPU architectures has made the efficiency gap even more extreme.
We can no longer ignore that 15-year-old CPUs are dragging down our average discovery times while wasting massive amounts of power.
It’s time to move beyond "inclusive" legacy support and toward a throughput-first model.

Some have previously argued that high-core-count CPUs were still the efficiency kings, but Panther Lake and Snapdragon X have proven otherwise.
A modern iGPU can now finish the same work as a legacy workstation at a fraction of the wattage.
Continuing to allocate tasks to non-AVX or low-efficiency hardware is effectively a carbon tax on the entire project.

I’m proposing we implement mandatory "Efficiency Tiers" or shortened deadlines for high-level subprojects to prioritize modern silicon.
BOINC needs to evolve into a platform that rewards scientific speed and environmental responsibility.
If we don't set these standards now, we’re just maintaining a high-energy museum instead of a research project.
ID: 119014
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 3288
United Kingdom
Message 119024 - Posted: 2 May 2026, 4:55:09 UTC - in response to Message 119014.  

I can't see this happening for all projects. CPDN, for example, uses code written for supercomputers: either the older type of models from the Met Office here in the UK (which are used less and less these days) or newer code from ECMWF (the European Centre for Medium-Range Weather Forecasts). Porting either to GPUs would require a somewhat cash-strapped project to recruit a programmer with the knowledge and skills to do it. I suspect this is a factor for many of the other projects that do not offer GPU work.
ID: 119024
Grant (SSSF)

Joined: 7 Dec 24
Posts: 260
Message 119025 - Posted: 2 May 2026, 5:55:40 UTC - in response to Message 119014.  

In reply to kasdashdfjsah's message of 1 May 2026:
A modern iGPU can now finish the same work as a legacy workstation at a fraction of the wattage.
And a modern workstation still leaves the modern iGPU for dead for both throughput and efficiency.



Continuing to allocate tasks to non-AVX or low-efficiency hardware is effectively a carbon tax on the entire project.
Not all types of work can take advantage of AVX.
And while a phone can process Tasks using extremely low levels of energy, the actual energy used by enough phones to match even an older computer's output still makes the older computer more efficient.
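A back-of-envelope comparison shows the shape of the problem. Every number below is made up purely for illustration- the point is the metric (watt-hours per Task), not the verdict, which flips entirely depending on the actual measurements:

# Illustrative only: none of these figures are measurements from real hardware.
phone_watts, phone_tasks_per_day = 5, 1        # assumed low-power phone
old_pc_watts, old_pc_tasks_per_day = 150, 40   # assumed older workstation

# Energy cost of one Task in watt-hours: (power draw * 24 h) / Tasks per day
print(phone_watts * 24 / phone_tasks_per_day)    # 120.0 Wh per Task
print(old_pc_watts * 24 / old_pc_tasks_per_day)  # 90.0 Wh per Task

With these assumed figures the older computer wins on efficiency; with different figures the phone would. You have to measure, not assume.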



I’m proposing we implement mandatory "Efficiency Tiers" or shortened deadlines for high-level subprojects to prioritize modern silicon.
Credit New, by its design, actually punishes GPUs, because many of their applications don't come as close to their theoretical capabilities as many CPU applications do.

Projects need to make use of what is available to them. A well optimised application is the best way to make use of those available resources, be they CPU, GPU, ASIC etc, be they recent, older or ancient hardware.
End of story.



What you believe and the actual reality of compute efficiency are still two very different things.
Grant
Darwin NT.
ID: 119025
cadbane

Joined: 17 Oct 24
Posts: 12
Denmark
Message 119027 - Posted: 2 May 2026, 6:32:42 UTC

The carbon footprint from BOINC is very small compared to the energy wasted by AI datacenters.
There are only a few thousand BOINC users, and far from all of them have more than two computers running. It may be true that some run on old hardware, but the energy those machines use is probably negligible on a global scale.
ID: 119027
floyd
Help desk expert

Joined: 23 Apr 12
Posts: 80
Message 119029 - Posted: 2 May 2026, 9:39:56 UTC - in response to Message 119014.  

There's no need to bully people out, I'm sure many will already ask themselves if it's worth it. Let them decide responsibly, if not for the world then for their own wallet. And while they're at it, they should think twice about credit generator projects that make up problems to solve just for fun. No matter how efficiently they do what they do, those waste more energy than some old computers doing real work.
ID: 119029
kasdashdfjsah

Joined: 29 Jan 24
Posts: 106
Message 119031 - Posted: 2 May 2026, 11:24:05 UTC

Comparing BOINC’s energy footprint to AI datacenters is a race to the bottom that ignores our collective responsibility to be as efficient as possible.
While we are a smaller community, that shouldn't be an excuse to tolerate "vampire power" from 15-year-old hardware when 2026 silicon offers such a massive leap in primes-per-joule.
We should aim to be the gold standard for distributed computing efficiency, not just "less bad" than a commercial datacenter.

The argument that enough phones or modern SoCs can't match an older workstation's efficiency is simply no longer supported by the data from chips like Panther Lake or Snapdragon X.
Modern unified memory architectures allow iGPUs and NPUs to process high-throughput tasks with a performance-per-watt ratio that legacy x86 workstations physically cannot reach due to node leakage.
Even if some projects like CPDN are cash-strapped and rely on older code, we should still be incentivizing a transition toward modern pathways rather than settling for legacy constraints indefinitely.

This isn't about "bullying" volunteers, but about evolving BOINC from a hardware museum into a high-performance scientific tool that respects the environmental costs of 2026.
If we continue to defend "Ancient Hardware" support as a primary goal, we risk losing the interest of a new generation of contributors who prioritize sustainability.
"Efficiency Tiers" would allow projects to maximize their scientific output while giving users a clear metric for whether their hardware is actually helping or just wasting heat.
ID: 119031
floyd
Help desk expert

Joined: 23 Apr 12
Posts: 80
Message 119033 - Posted: 2 May 2026, 14:44:16 UTC - in response to Message 119031.  

In reply to kasdashdfjsah's message of 2 May 2026:
evolving BOINC from a hardware museum into a high-performance scientific tool that respects the environmental costs of 2026
It's not the tool that needs to respect the environmental costs, it's the users.

If we continue to defend "Ancient Hardware" support as a primary goal
"Ancient Hardware" support should not be a primary goal (and I think it isn't), and neither should efficiency. Responsible use is required, and you can't enforce that. I don't think we could even agree on a definition.

we risk losing the interest of a new generation of contributors
BOINC has for years been losing my interest. Do you really see a new generation of contributors coming? First we'd need a new generation of projects.

allow projects to maximize their scientific output while giving users a clear metric for whether their hardware is actually helping or just wasting heat.
As long as there is enough work, there's only one way to maximise output: accept every bit of help you can get. Every computer that finishes a task in time helps maximise output. I don't see how there can be different opinions on that.

But is the result worth the effort? That's an entirely different problem, and there can be many different opinions. The project maintainers have their opinion, and they can already act on it by sending work or not. I have my opinion, and I can decide to run particular tasks on particular devices or not. You have your opinion, which obviously is very different from mine. Everybody else can have their own.

But who is to define mandatory restrictions? Who is to enforce them? By what authority? There is no such authority and we don't need one. If we can't establish, or keep, a state everybody can live with, the whole system is doomed.
ID: 119033
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 3288
United Kingdom
Message 119035 - Posted: 2 May 2026, 16:33:37 UTC - in response to Message 119031.  

It isn't just about lack of money. The current CPDN tasks are dealing with greater amounts of data than many phones can cope with: even on a fast processor a task takes 50+ hours across four cores and peaks at over 26 GB of RAM usage.
ID: 119035
Grant (SSSF)

Joined: 7 Dec 24
Posts: 260
Message 119038 - Posted: 2 May 2026, 19:51:28 UTC - in response to Message 119031.  

In reply to kasdashdfjsah's message of 2 May 2026:
The argument that enough phones or modern SoCs can't match an older workstation's efficiency is simply no longer supported by the data from chips like Panther Lake or Snapdragon X.
And there you are, continually making statements that aren't supported by the actual data.



If we continue to defend "Ancient Hardware" support as a primary goal,
No-one has ever done that, because it's not a primary goal.
The primary goal is being able to do distributed computing. The more hardware that can be utilised, then the more work that can be done.

What is supported is the available hardware.
Spending the time and effort to develop an application that can be used by a few hundred systems instead of hundreds of thousands is a poor use of the resources required to develop it, for a boost in returned work that would be almost immeasurable. Whether that hardware is 30 years old or released in the last 6 months makes no difference.
If a project has the resources to do so- then good for them. Most don't.

If they are going to develop a new application, then it will most likely be for newer hardware, as that will become more common over time and brings with it a much greater ability to process work.


Efficiency, which you carry on about so much, is about more than just the hardware used to process the work. There are also the time and resources required to develop and support the applications, not to mention the backends that provide the work and then receive and process the returned results.

You are incapable of seeing the forest for all of the trees.



we risk losing the interest of a new generation of contributors who prioritize sustainability.
Utter rubbish.



"Efficiency Tiers" would allow projects to maximize their scientific output
Even more rubbish. Making things more complicated, then spending the time and effort required to support those complications and to deal with the issues of people trying to use the more complicated system, is even less efficient.



while giving users a clear metric for whether their hardware is actually helping or just wasting heat.
That already happens- people are given Credit for the work they do. The more work they do, the more Credit they get.

Those that are actually interested in efficiency can do what those interested in performance have always done- check out the forums of the projects they are interested in for how to maximise the output of their systems. Applications are developed to support the widest range of hardware- that's how the project gets the most work done. A few lines in a configuration file and a cruncher can double, triple or quadruple the amount of work their system does, if they have the right hardware (see the sketch after this paragraph).
If they have the resources available, then the project can develop more optimised and specific applications to support the hardware that is contributing to their project.
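For example, those few lines typically go in an app_config.xml in the project's directory. A minimal sketch of running two Tasks per GPU (the app name is a placeholder- check your project's forum for the actual name and for values that suit your hardware):

<app_config>
  <app>
    <name>example_app</name>        <!-- placeholder; use the project's real app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>    <!-- two tasks share one GPU -->
      <cpu_usage>1.0</cpu_usage>    <!-- one CPU core reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>

Then tell the client to re-read config files (or restart it) for the change to take effect.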

For those projects that support anonymous applications, those that are able can develop hardware specific optimised applications that will return Valid work in a much shorter time frame- and Seti was the perfect example of that.

The basic applications supported a massive variety of hardware. GPU support was added once GPUs actually became a thing that more than just developers had.
And volunteers helped to optimise the existing applications with their wide hardware support, along with more specific applications for specific hardware.

From memory, the optimised CPU application was as much as 30% faster than the stock one, and the optimised GPU application almost double.
The special optimised GPU application, which was Linux only, was orders of magnitude faster- a Task that took 4 hours on the latest and greatest CPU could be done in 30 seconds or less on the latest and greatest GPU.

If someone notices that everyone else is outproducing their ancient system, they can either upgrade it, retire it, or continue along as they are- it's their choice.
If the project notices a huge drop in the work coming in from particular hardware, they can choose to drop support for it. Or, if it's not costing them anything to continue to support it then they can do so. That is their choice.


And all of that said- a project that some (or most) consider to be a waste of time and effort and resources, many (some) others may consider to be worth it.
That is also their choice.



The simple fact of the matter is you have a bee in your bonnet about efficiency.
But it is such a narrow view that it only focuses on the systems doing the work, not everything else that's needed in order to make it possible for those systems to contribute.

What it all comes down to is choice- if a user likes a project (for whatever reason), they can choose to do it. If they don't like the project (for whatever reason), they can choose not to.
The choice is entirely theirs (and this is why I don't like Science United- if I like a project, I'll support it. If not, I won't. While there may be particular fields I'm more interested in than others, that doesn't mean I'd rather do any project from a field that interests me than a particular project in a field that interests me less).
It is my hardware. It is my power bill. It is my choice.



As I mentioned before, you can't see the forest for the trees. You don't let the facts get in the way of your beliefs.
I think it's time to put you on my ignore list.
Grant
Darwin NT.
ID: 119038
Ian&Steve C.

Joined: 24 Dec 19
Posts: 247
United States
Message 119042 - Posted: 3 May 2026, 2:08:53 UTC

another "efficiency" post full of AI slop and marketing fluff. just stop bro.
ID: 119042
kasdashdfjsah

Joined: 29 Jan 24
Posts: 106
Message 119044 - Posted: 3 May 2026, 5:38:35 UTC

It’s disappointing to see technical arguments dismissed as "AI slop" or "rubbish" just because they challenge the traditional way BOINC has operated for twenty years.
The "forest" I see is a platform that is slowly losing its relevance because it refuses to adapt its standards to a world where energy efficiency is no longer a hobbyist choice, but a global requirement.
If we keep defining "maximizing output" as simply accepting every possible machine regardless of its footprint, we are choosing to remain a niche legacy platform rather than a modern scientific powerhouse.

To Dave’s point about RAM: 26GB usage is exactly why we should be prioritizing modern SoCs and unified memory architectures that handle large datasets with significantly higher efficiency than old server racks.
The idea that "it doesn't cost the project anything" to support old hardware ignores the opportunity cost and the reputational hit BOINC takes when it’s seen as a major power sink for minor scientific gains.
We need a new generation of projects, but we also need a platform that doesn't look like it belongs in 2006 to attract the developers and contributors of today.

Enforcement doesn't require a "central authority" or a "communist mandate," but it does require projects to take a stand by setting shorter deadlines or specific hardware requirements for certain subprojects.
Individual responsibility is great, but the platform provides the incentives—if the system treats an inefficient heater the same as a 3nm processor, the system is fundamentally broken.
We have the data and the hardware in 2026 to do better, and "it's my choice" shouldn't be a shield for avoidable environmental waste.
ID: 119044
floyd
Help desk expert

Joined: 23 Apr 12
Posts: 80
Message 119045 - Posted: 3 May 2026, 10:32:27 UTC - in response to Message 119044.  

Okay, it seems we do not only have different opinions but also a communications problem. Every time I respond to something I think you said you act like you didn't say it. Apparently you fail to make clear what your plan is, if you have any, and I fail to read your mind. For me this talk - I'll not call it a discussion - is over.
ID: 119045
Ian&Steve C.

Joined: 24 Dec 19
Posts: 247
United States
Message 119047 - Posted: 3 May 2026, 12:51:53 UTC - in response to Message 119044.  
Last modified: 3 May 2026, 12:52:43 UTC

it's disappointing to see you just take our comments, feed them into AI with a prompt to rebut with more AI slop, and copy-paste the answer.

type in your own words and we can maybe get somewhere. if you are using AI to translate, then stop and just use normal translation software (google translate) that doesn't inject such hyperbole and bias.

this is why you get criticized in every post on the internet that you make about this (various BOINC project forums, FAH forums, Reddit, etc, i've seen them all). you do not respond directly to questions asked and just keep spouting nonsense that isn't true. I understand that buyer's remorse can be a big pill to swallow, but you need to accept reality. it's fine to try to get some more support, but you need to approach it through the realistic lens of "my niche device could be useful", and absolutely not "this is a world-changing breakthrough and everyone should switch". be for real. you're talking about a low-power laptop, not a datacenter.
ID: 119047
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 3288
United Kingdom
Message 119050 - Posted: 3 May 2026, 17:57:28 UTC - in response to Message 119047.  

it's disappointing to see you just take our comments and feed them into AI with a prompt to rebut with more AI slop and you just copy paste the answer.


I haven't done any research to check the validity of your presumption; I assume you tried what you allege he did and got a similar result. If these ramblings (I use that word because they don't come across to me as at all coherent) are the product of AI, then it seems counterintuitive, given that AI is such a heavy user of computing resources and hence carbon etc.
ID: 119050
Ian&Steve C.

Joined: 24 Dec 19
Posts: 247
United States
Message 119051 - Posted: 3 May 2026, 18:17:47 UTC - in response to Message 119050.  
Last modified: 3 May 2026, 18:26:42 UTC

he has admitted in other avenues/forums that he uses AI. and the speech patterns are pretty clearly AI a lot of the time, especially in previous instances where he makes a long post with chapter headings and tons of em dashes lol. not to mention the constant use of sentence structures like "it's not about [X], it's about [Y]!"

he'll come back with "dont worry that I'm using AI, this is the future!" or some other nonsense.
ID: 119051
Jord
Volunteer moderator
Help desk expert

Joined: 29 Aug 05
Posts: 15886
Netherlands
Message 119053 - Posted: 3 May 2026, 19:36:35 UTC - in response to Message 119044.  

In reply to kasdashdfjsah's message of 3 May 2026:
We have the data and the hardware in 2026 to do better, and "it's my choice" shouldn't be a shield for avoidable environmental waste.

BOINC is set up specifically to only run when the computer is idle. Running it continuously is your own choice.
Forcing everyone to use only newer hardware is a good way to alienate everyone, as not everyone (no one, really) has the funds to buy a new system, especially not now with the RAMapocalypse going on, making RAM, SSDs and even HDDs extremely expensive.

https://github.com/BOINC/boinc/wiki/Heat_and_energy_considerations says what it will cost to run BOINC and gives advice when it's not a good idea to run it, also with the environment in mind.
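As a rough sketch of the arithmetic involved (the figures below are placeholders- substitute your own power measurement, crunching hours and tariff):

# Illustrative running-cost estimate; all inputs are assumptions.
extra_watts = 120       # assumed extra power draw while crunching
hours_per_day = 8       # assumed daily crunching time
price_per_kwh = 0.30    # assumed electricity tariff

monthly_kwh = extra_watts / 1000 * hours_per_day * 30
print(round(monthly_kwh * price_per_kwh, 2))  # 8.64 per month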

Now, I've had multiple complaints about this thread. I'll leave it open for now, but only if all parties show they want to have a discussion, and not if the OP only wants to press his standpoint forward and ignore what others say. Because, as said, that's not a discussion.
ID: 119053
Grumpy Swede

Joined: 30 Mar 20
Posts: 727
Sweden
Message 119063 - Posted: 4 May 2026, 3:02:03 UTC

It's pointless to argue with this "kasdashdfjsah" person, since he is not interested in anything other than trolling you with arguments that AI is constructing from your replies. It's all just another example of AI hallucinations.

In other words, you are not having a conversation with a real person.

This is just the beginning of how some people will use AI when they can't come up with their own arguments. AI, and AI hallucinations, will destroy not only forums like this, but the entire society. I'm glad that I'm old enough to not be alive when AI has destroyed the fabric of society and normal person-to-person interactions.
ID: 119063
kasdashdfjsah

Joined: 29 Jan 24
Posts: 106
Message 119074 - Posted: 5 May 2026, 5:29:21 UTC

I'm hearing the criticism about my writing style and the use of AI tools to structure these posts. If the phrasing has been a distraction from the actual point, that’s on me, but dismissing the data because you don't like the "hyperbole" doesn't change the hardware reality of 2026.

The RAMapocalypse that Jord mentioned is exactly why we should be looking at unified memory architectures. When 64GB of traditional DDR5 costs a fortune, SoCs that integrate memory and compute at 3nm are the only logical way forward for a project that wants to stay sustainable. It’s not "buyers remorse" about a niche laptop; it’s about looking at the watts-per-task and realizing that our current model is heavily weighted toward legacy systems that leak more heat than they produce results.

Floyd, I'm sorry if my plan seems unclear. My goal isn't to alienate people, but to push for an efficiency floor—shorter deadlines or specific "Green Tiers" for subprojects that actually benefit from modern instruction sets. If we just accept everything "because it's idle," we are settling for a lower scientific output than the hardware in 2026 is capable of delivering.
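To make that concrete, here is a purely hypothetical sketch of the kind of metric a "Green Tier" could be built on. The names and thresholds below are invented for illustration; nothing like this exists in BOINC today:

# Hypothetical "efficiency floor" metric: validated tasks per kWh.
# All tier names and thresholds are invented for illustration only.
def efficiency_tier(tasks_completed: int, kwh_consumed: float) -> str:
    if kwh_consumed <= 0:
        raise ValueError("kwh_consumed must be positive")
    tasks_per_kwh = tasks_completed / kwh_consumed
    if tasks_per_kwh >= 20:
        return "Green Tier (modern SoC / iGPU class)"
    if tasks_per_kwh >= 5:
        return "Standard Tier (mainstream desktop class)"
    return "Legacy Tier (shorter deadlines)"

print(efficiency_tier(120, 4.0))  # Green Tier (modern SoC / iGPU class)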

I’m here for the discussion, but we have to talk about the hardware and the energy, not just my "speech patterns." If the science is the priority, why are we so afraid to set standards that reflect the current tech landscape?
ID: 119074
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1447
United Kingdom
Message 119075 - Posted: 5 May 2026, 6:13:01 UTC - in response to Message 119074.  

I'm so sorry, but you are totally missing one of the major objectives of BOINC: that a research organisation is able to get more people involved in their research by donating SPARE time on their computers to do said research.
Your suggestion, while it may be laudable, is totally against that mode of operation on at least two grounds: first, it deters users from running BOINC on their own computers, because only the "latest and greatest" hardware would be acceptable; and second, the amount of capital required to replace "old-fashioned" hardware on a very frequent basis.
Thus I would posit that YOU are actually ANTI-BOINC, as your precepts of moving to a very centralised, very high-capital-budget system go against the founding principles of BOINC.
ID: 119075
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 3288
United Kingdom
Message 119076 - Posted: 5 May 2026, 6:22:24 UTC

When 64GB of traditional DDR5 costs a fortune
And what is an SoC with 64GB RAM in the chip going to cost? As Rob implies, some of us are going to rapidly run out of limbs!
ID: 119076


Copyright © 2026 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.