How much embodied carbon is there likely to be in a 100MW hyperscale datacenter? How would you optimise for embodied carbon at this level? #95
mrchrisadams asked this question in Ask Anything
Hi folks,
I don't have much experience with building-level life cycle analysis, but one part of the GSF Software Carbon Intensity spec refers to the notion of embodied carbon, so I figured I might ask here.
Optimising embodied carbon through provider choice, as a complement to the way we optimise use-phase carbon via region choice
When companies invest millions, billions, or even tens of billions in digital infrastructure, there's obviously a significant part of the footprint associated with the creation of these facilities - not just the chips. And as far as I am aware, metrics like PUE only look at the non-computing use-phase energy consumption.
If you have seen examples comparing the carbon intensity of work done in a brand new facility versus an existing facility, I'd really appreciate comments or pointers. It might not be something you control directly, but then again, neither is the carbon intensity of the electricity you consume - that's something you influence through decisions about when and where you deploy your code.
I've expanded on this idea with the memory aid "Change the time, change the speed, change the place" in the blog post below:
https://rtl.chrisadams.me.uk/2023/07/options-to-make-software-greener-without-changing-the-code-how-to-remember-them/
Anyway - back to the original point. I can see a scenario where I might be able to optimise for embodied carbon just as I can optimise for use-phase carbon, if this information were exposed to me.
I mean, carbon-aware computing is somewhat based on the idea that if the primitives we use (i.e. compute hours, gigabytes of storage) are fungible enough and consistent across regions, then you're able to abstract them away somewhat, which lets you think about achieving other goals related to a given task, particularly around climate impacts.
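To make the fungibility idea concrete, here's a minimal sketch of the carbon-aware pattern I mean: if a compute hour is interchangeable across regions, we can pick where to run it by grid carbon intensity. The region names and intensity figures below are invented for illustration, not real data from any provider.

```python
# Hypothetical grid carbon intensity per region, in gCO2e/kWh.
# In practice these would come from a live data source, not constants.
REGION_INTENSITY_G_PER_KWH = {
    "region-hydro": 30,
    "region-mixed": 250,
    "region-coal": 700,
}

def greenest_region(intensities):
    """Pick the region whose grid has the lowest carbon intensity."""
    return min(intensities, key=intensities.get)

print(greenest_region(REGION_INTENSITY_G_PER_KWH))
```

The point is that once the workload is abstracted from its location, "where to run" becomes a one-line optimisation over whatever signal you care about.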
The thing is, I think this logic can be applied at the provider level too, as the APIs for accessing them are often fairly consistent, or at least can be abstracted away somewhat - we see this in how we consume compute instances, S3 buckets and objects, and other resources as defined in tools like Terraform, Ansible, Pulumi etc.
There's obviously a limit to this fungibility, and these are not like-for-like comparisons, but I'd be interested - has anyone come across any research in this area?
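A sketch of what I imagine this could look like if providers exposed an amortised embodied-carbon figure per compute hour: we could fold it into the same comparison we already make for use-phase carbon. Every number below is invented purely to illustrate the shape of the calculation - no provider publishes data in this form today, as far as I know.

```python
def total_gco2e_per_hour(embodied_g_per_hour, power_kw, grid_g_per_kwh):
    """Amortised embodied carbon plus use-phase carbon for one compute hour."""
    return embodied_g_per_hour + power_kw * grid_g_per_kwh

# A hypothetical brand-new facility on a clean grid: higher embodied
# carbon per hour (less of the build-out has been amortised), low
# operational intensity...
new_facility = total_gco2e_per_hour(
    embodied_g_per_hour=80, power_kw=0.3, grid_g_per_kwh=50)

# ...versus an older facility on a dirtier grid: embodied carbon mostly
# amortised away, but higher operational intensity.
old_facility = total_gco2e_per_hour(
    embodied_g_per_hour=20, power_kw=0.3, grid_g_per_kwh=200)

print(new_facility, old_facility)
```

With these made-up figures the older facility actually comes out ahead, which is exactly the kind of non-obvious trade-off that exposing embodied carbon data would let us reason about.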