Can hardware be used 100% efficiently in the public cloud? #106
markus-gsf-seidl started this conversation in Community Working Group
I'm referring to this sentence:
"This is one of the main advantages of the public cloud; you know that when you do need to scale up, the space will be there to take up the slack. With multiple organizations making use of the public cloud, spare capacity can always be made available to whoever needs it, so that no servers are sitting idle."
From: https://github.com/Green-Software-Foundation/learn/blob/a9bcbe07d478a7b0ed8429f4f741e7d2575a1109/docs/hardware-efficiency.mdx#L66
It was pointed out in a working group at my current company that this lets you remove embodied emissions from your calculation at will, because the public cloud would then always have 100% hardware efficiency.
Do we have any proof of this? Logically, I don't think it can be true; utilization must be << 100%.
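To make the concern concrete, here is a rough sketch with invented numbers (the constants and function names are mine, not from the learn docs), loosely following the SCI-style idea of amortizing a server's embodied carbon over its utilized time and resource share:

```python
# Rough sketch with invented numbers (not from the learn docs): amortize a
# server's embodied carbon over the hours the fleet is actually utilized.

TOTAL_EMBODIED_KG = 1000.0          # assumed embodied carbon of one server
LIFESPAN_HOURS = 4 * 365 * 24       # assumed 4-year hardware lifespan

def embodied_share(reserved_hours: float, fleet_utilization: float) -> float:
    """kgCO2e attributed to a workload reserving a whole server for
    `reserved_hours`, amortized over utilized hours only."""
    utilized_hours = LIFESPAN_HOURS * fleet_utilization
    return TOTAL_EMBODIED_KG * (reserved_hours / utilized_hours)

for util in (1.0, 0.7, 0.5):
    print(f"assumed utilization {util:.0%}: "
          f"{embodied_share(24, util):.2f} kg for a 24h reservation")
```

Under a 100% assumption, every reservation carries the smallest possible share and all embodied carbon lands on some workload's books. Any real idle capacity means either everyone's share goes up, or the idle fraction's embodied carbon silently drops out of all calculations, which is exactly the "remove at will" problem above.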
If the hardware is always in use by customer applications (leaving right-sizing aside for a moment), a customer would be unable to scale up an application without someone else scaling down at the same time (or within a timeframe acceptable to both). Does this really happen?
I would argue that this would hurt the customer experience: customers would constantly run into a "full cloud where nothing starts for the next 15m" (15m being the AWS spot instance timeout you have until the instance is taken away from you). Since I have never had that experience with any of the clouds, I think there is always some spare capacity roaming free and unused.
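To illustrate the scale-up argument, here is a toy model (entirely my own assumptions, with made-up numbers, not measured cloud data): a region with a fixed pool of servers where, each tick, one tenant either requests or releases a server with equal probability.

```python
import random

# Toy model (my own assumptions): a region holds a fixed pool of servers.
# Each tick, either one tenant tries to scale up or one scales down, with
# equal probability. With no headroom, a scale-up only succeeds if a
# scale-down freed a server first.

random.seed(42)
CAPACITY = 1_000

def failed_scale_ups(initial_used: int, ticks: int = 2_000) -> int:
    used, failures = initial_used, 0
    for _ in range(ticks):
        if random.random() < 0.5:      # a tenant requests one more server
            if used < CAPACITY:
                used += 1
            else:
                failures += 1          # "full cloud, nothing starts"
        elif used > 0:                 # a tenant releases one server
            used -= 1
    return failures

print("no headroom: ", failed_scale_ups(CAPACITY), "failed scale-ups")
print("10% headroom:", failed_scale_ups(int(CAPACITY * 0.9)), "failed scale-ups")
```

With no headroom, scale-ups fail whenever the pool is at capacity and nobody has scaled down yet; with even modest headroom, the same demand pattern essentially never hits the wall. My assumption is that providers keep such headroom precisely so customers never see a "full cloud", which is why fleet-wide utilization has to sit below 100%.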