No benchmarks. No FLOPs. No comparison to commodity hardware. I hate these cloud server announcements: "9 is faster than 8, which is faster than 7, which is faster than 6, ..., which is faster than 1, which has unknown performance".
As soon as they're publicly usable people benchmark them carefully. All currently available models have clear metrics.
General purpose not AI specific? I can't believe it.
AWS has plenty of AI specific offerings for EC2. The P, G and Trn families hit a wide range of AI use cases. Why wouldn't they also offer a general purpose one for typical compute?
Plus with the AI boom, making sure that general purpose compute jobs aren't competing for valuable GPUs is very worthwhile...
Where do you see GPUs in this release? This is a CPU-based instance.
You missed the touch of sarcasm. It's a joke, recent AWS announcements have been heavily AI-focused.
I don't really see how this is a productive comment for the article. Most of big tech focuses on AI, and those announcements typically get traction in the news. AWS specifically has plenty of non-AI announcements: https://aws.amazon.com/new/
Parent comment made a low quality joke that lacked substance.
discussed a couple days ago: https://news.ycombinator.com/item?id=46191993
AWS introduces Graviton5–the company's most powerful and efficient CPU (14 comments)
Pricing when? :(
https://aws.amazon.com/ec2/pricing/on-demand/
https://instances.vantage.sh/ recently added alerts for any pricing changes on EC2, including newly launched instances. The site rebuilds every 4 hours, so it usually breaks the pricing news first. I have it on for myself and it's super helpful just to see when AWS changes things.
[Disclaimer, I'm CEO of Vantage - the company that maintains the site]
Are the built in AWS cost monitoring tools so bad that multiple businesses( including yours) exist just to monitor cost externally ?
Or is your value proposition for companies that use a bunch of different cloud providers ?
[Not a sales pitch - just answering the questions]
This AWS EC2 site is just an open-source project and site we maintain for the benefit of the community. So it's not directly our business but it promotes our brand and is just a helpful site that I think should exist. It's very popular and has been around for about 15 years now.
Our main business, hosted on the main domain at https://www.vantage.sh/, is cloud cost management across 25 different providers (AWS, Azure, Datadog, OpenAI, Anthropic, etc.). The use cases there are about carving up one bill and showing specific costs back to engineers so they can be held accountable, take action, etc. Cloud costs and their impact on companies' margins are a big enough problem for vendors like us to exist, and we're one player in a larger market.
If only dedicated game servers could run on aarch64...
I've been experimenting with FEX on Ampere A1 with x86 game servers, but the performance is not that impressive.
Doesn't help that Unity requires forking over a pile of cash just to build for Linux ARM ("Embedded Linux") and everything else is free.
Is there a list of Geekbench performance metrics for the various Graviton CPUs?
I need a reference point so I can compare it to Intel/AMD and Apple's ARM cpus.
Otherwise it is buzzwords and superlatives. I need numbers so I can understand.
https://instances.vantage.sh/ shows coremark scores for each EC2 instance type.
It always strikes me that the best source of information about a cloud provider is not the provider itself but a third-party website. That does not reflect well on the cloud provider.
Funny story: When I was at AWS, I found that the easiest way automate instance data collection was by using the Vantage website code (it's on GitHub).
The cobbler's children have no shoes.
Founder of Vantage here and former AWS employee.
We actually recently made the decision to staff someone full time on the site just to maintain it for the community. Even the JSON file for the site gets hit hundreds of thousands of times per day... it feels like it's become the de facto source of truth in the community for reliable AWS pricing information, and I believe it's powering a pretty remarkable number of downstream applications given how much usage it's getting.
We acquired the site almost 5 years ago and want to continue to improve it for the community. If you have any cloud cost management needs, we're also able to help for our main business here: https://www.vantage.sh/
Awesome to see all the comments on it here!
They have to adhere to their marketing words and numbers like "efficiency increase of 99999% in performance per dollar per token per watt per U-235 atom used".
Also, use the ffmpeg fps column to check single threaded score.
While the 5 variant isn't yet available outside of the preview, you can of course spin up a 4 and run Geekbench yourself. Plenty of people have, and you can find the results in the Geekbench database. And of course most people spin up their specific workload to see how it compares.
Core per core it pales compared to Apple's superlative processors, and falls behind AMD as well.
But...that doesn't matter. You buy cloud resources generally for $/perf, and the Gravitons are far and away ahead on that metric.
Not true at all. Single thread CPU scores for Graviton2 are about half that of Intel, while only being about 20% cheaper at best.
Do you realize we're talking about Graviton5 now?
Groan. Yes, absolutely true.
While I know this thread will turn into some noisy whack-a-mole bit of nonsense, an easy comparison is the c8g.2xlarge vs the c8i.2xlarge: the former is Graviton 4, the latter Granite Rapids. Otherwise both are compute-optimized, 8 vCPU machines with 16GB of memory and 15Gbps networking.
Performance is very similar. Indeed, since you herald the ffmpeg result elsewhere the Graviton machine beats the Intel device by 16%.
And the Graviton is 17% cheaper.
Like, this is a ridiculous canard to even go down. Over half of AWS' new machines are Graviton based, but per your rhetoric they're actually uncompetitive. So I guess no one is using them? Wow, silly Amazon.
The latter is a 4-core machine with 8 HyperThreads. This doesn't actually matter to your price-performance metric but is worth mentioning because it's the reason why the Intel part performs so comparatively poorly. They're fast chips, they're just wildly uneconomical. If you wanted to compare equal core counts (c8i.4xlarge vs. c8g.2xlarge), then the Intel instance type wins on performance but the Graviton is 58% cheaper.
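The perf-per-dollar arithmetic behind these claims can be sketched from the relative numbers quoted above (the 16%, 17%, and 58% figures are from the thread; treat the results as illustrative, since real prices and benchmark results vary by region and workload):

```python
# Rough perf-per-dollar arithmetic using the relative numbers quoted in the
# thread (illustrative only; real prices and benchmarks vary by region and
# workload).

def perf_per_dollar_ratio(perf_ratio: float, price_ratio: float) -> float:
    """How much more perf-per-dollar A gives over B, where perf_ratio is
    perf(A)/perf(B) and price_ratio is price(A)/price(B)."""
    return perf_ratio / price_ratio

# c8g.2xlarge vs c8i.2xlarge: ~16% faster on the ffmpeg test, ~17% cheaper.
same_size = perf_per_dollar_ratio(perf_ratio=1.16, price_ratio=1 - 0.17)
print(f"c8g vs c8i (same size): {same_size:.2f}x perf/$")  # ~1.40x

# c8i.4xlarge vs c8g.2xlarge: Intel wins on raw performance, but the
# Graviton is 58% cheaper, so the Intel box must be ~2.4x faster overall
# just to break even on perf-per-dollar.
break_even = 1 / (1 - 0.58)
print(f"Intel speedup needed to match perf/$: {break_even:.2f}x")  # ~2.38x
```

So even granting Intel the raw-performance win at equal core counts, the Graviton keeps a large perf-per-dollar lead under these numbers.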
In Amazon's Graviton 5 PR they note that over half of all new compute capacity added to AWS over the past three years has been Graviton-based. That's an amazing stat.
It really is incredible how ARM basically commoditized processors (in a good way).
Inversely, I think it's siloed things in somewhat unhealthy ways. We now have a number of vendors that sell/rent you machines that are not generally purchasable. I don't think we've seen too many negative consequences yet, but if things continue in this direction then choosing a cloud provider for a high performance application (eg, something you'll want to compile to machine code and is therefore architecture specific in some way as opposed to a python flask app or something), one may have to make decisions that lock one into a particular cloud vendor. Or at least, it will further increase the cost of changing vendors if you have to significantly tweak your application for some oddities between diff arm implementations at different hosting providers, etc.
I would much rather see some kind of mandatory open market sale of all cpu lines so that in theory you can run graviton procs in rackspace, apple m5 servers in azure, etc.
Graviton CPUs are just Neoverse cores (V3 in this case). While it's true that you can't just buy a box with the same cores, the cores are basically the same as what you'll get on a Google or Azure cloud instance (eventually... neither of the latter two has made anything with Neoverse V3 available yet).
Strangely the first-generation Graviton chips have actually shown up in MikroTik hardware that you can just buy. Amazon must be selling off their stock to third parties once it's phased out of use at AWS, but I doubt they'll ever sell the stuff they're still using.
> over half of all new compute capacity added to AWS over the past three years has been Graviton-based. That's an amazing stat.
Yes and maybe no. They do "cheat" in that internal / managed services often use Graviton where possible. It works out cheaper without the Intel / AMD "tax".
Didn't M8g just come out? Am I crazy?
Not crazy. They just have a pretty rapid release cadence for Graviton. New chips ~ every two years.
So these are aarch64, right?
More specifically, the CPU cores in AWS Graviton5 are Neoverse V3 cores, which implement the Armv9.2-A ISA specification.
Neoverse V3 is the server version of the Cortex-X4 core which has been used in a large number of smartphones.
The Neoverse V3 and Cortex-X4 cores are very similar in size and performance with the Intel E-cores Skymont and Darkmont (the E-cores of Arrow Lake and of the future Panther Lake).
Intel will launch a server CPU with Darkmont cores next year (Clearwater Forest), which will have cores similar to those of this AWS Graviton5. For now Intel only has the Sierra Forest server CPUs with E-cores (belonging to the Xeon 6 series), which use much weaker CPU cores than those of the new Graviton5 (i.e. cores equivalent to the Crestmont E-cores of the old Meteor Lake).
AMD Zen 5 CPUs are significantly better for computationally-intensive workloads, but for general-purpose applications without great computational demands the cores of Graviton5, and also Intel Skymont/Darkmont, have greater performance per die area and per watt, and therefore lower cost.
>The Neoverse V3 and Cortex-X4 cores are very similar in size and performance with the Intel E-cores Skymont and Darkmont (the E-cores of Arrow Lake and of the future Panther Lake).
That is not entirely accurate. X4 is a big-core design. All of its predecessors and successors have had >1mm² die area. X4 is already on the smaller scale; it was the last Arm design before they went all in chasing Apple's A-series IPC. IIRC it was about 1.5mm², depending on L2 cache. Intel's E-cores have always been below 1mm², and again IIRC that die size has always been Intel's design guideline and limit for E-core design.
The more recent X5 / X925 and X6 / X930 / C1 Ultra (I can no longer remember those names) are double the size of X4, with X930 / C1 Ultra very close to A19 Pro performance, within ~5%.
I assume they stick with X4 simply because it offers the best performance per die area, but it is still a 2-3 year old design. On the other hand, I am eagerly waiting for Zen 6c with 256 cores. I can't wait to see the Oxide team using Zen 6c; forget about the cloud. 90%+ of companies could fit their IT resources in a few racks.
Nope. Cortex-X4 is not a big core design, though you are right that at the time of its launch in 2023 the Arm company was not offering bigger cores yet.
The cores now designed by the Arm company for non-embedded applications are distributed into 4 sizes. The smaller 2 sizes correspond to what were the original "big" and "little" sizes, while what was originally the big size has been continued into what are now medium-to-small cores; the last such core before the rebranding was Cortex-A725.
Cortex-X4 is of the second size, medium-to-large. Cortex-X925 was the last big core design before Arm changed the branding this year, so several recent smartphones use Cortex-X925 as the big core, Cortex-X4 as the medium-sized core and Cortex-A725 as the small cores, omitting the smallest Cortex-A520 cores.
Cortex-X4 and Intel Skymont have exactly the same size, 1.7 square millimeters with 1 MB of L2 cache memory (in Dimensity 9400 and Lunar Lake). This is about a third of the area of a big core like an Intel P-core and less than half the area of a Zen 5 compact core (though AMD uses an older, less dense CMOS process; had AMD also used a "3 nm" process the area ratio would not have been so great, and Zen 5 has double the throughput for array operations).
Moreover, Neoverse V3/Cortex-X4 and Intel Skymont/Darkmont have approximately the same number of execution units of each kind in their backends. Only their frontends are very different, which is caused by the different ISAs that must be decoded, Aarch64 vs. x86-64.
The last Arm big core before the rebranding, Cortex-X925, was totally unsuitable as a server core, as it had very poor performance per area: double the area of Cortex-X4, but performance greater by only a few tens of percent at most. Therefore the performance per socket of a server CPU implemented with Cortex-X925 would have been much lower than that of a Graviton5, due to the much lower number of cores per socket that could have been achieved.
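The perf-per-area argument can be made concrete with a quick sketch. The 2x area figure is from the comment above; the +30% performance is an assumed illustrative value standing in for "a few tens of percent":

```python
# Sketch of the perf-per-area argument. The 2x area is from the comment;
# the +30% performance figure is an assumed illustrative value.

x4_area, x4_perf = 1.0, 1.0        # normalize to Cortex-X4
x925_area, x925_perf = 2.0, 1.3    # double the area, ~30% faster (assumed)

print(x4_perf / x4_area)           # 1.0 perf per unit area
print(x925_perf / x925_area)       # 0.65 perf per unit area

# With a fixed die-area budget, an X925-based chip fits half the cores,
# so per-socket throughput drops even though each core is faster:
budget = 64.0                      # arbitrary area budget
print(budget / x4_area * x4_perf)      # 64.0 total throughput
print(budget / x925_area * x925_perf)  # ~41.6 total throughput
```

Under these assumptions the bigger core loses about a third of the per-socket throughput, which is the point being made about server suitability.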
Cortex-X4 was launched in 2023 and was the big core of the 2024 flagship smartphones; it has since become the medium core of the 2025 flagship smartphones. Its server variant, Neoverse V3, was launched in 2024 and has been deployed in products only this year, first by NVIDIA (in Orin) and now by AWS.
It is not at all an obsolete core. As I have said, only next year will Intel have a server CPU with E-cores as good as Cortex-X4. We do not yet know any real numbers for the newly announced Arm cores that replace Cortex-A520, Cortex-A725, Cortex-X4 and Cortex-X925, so we do not know whether they are really significantly better. The numbers Arm uses in presentations cannot be verified independently, and when performance is measured much later in actual products it usually does not match the optimistic predictions.
The new generation of cores might be measurably better only for computational applications, because they now include matrix execution units, but their first implementation may not be optimal yet, as happened in the past with the first implementation of SVE, when the new cores had worse energy efficiency than the previous generation (corrected by improved implementations later).
Well, there are also no licensing costs to AMD/Intel. So even at slightly worse performance per chip, it'll end up cheaper still. AWS doesn't need to make money on their chips, as they already have the EC2 margin.
Do you have any insight on when these will be generally available?
Amazon says "Sign up for the preview today".
I have no connection with them, so I have no idea when these instances will be generally available.
Privileged big customers appear to be already testing them.
Yes, Graviton chips are aarch64.
Good question! I read two different Amazon press releases on this but still had to come here for the answer. It seems strange they don't want to advertise the ISA of a compute product - does marketing think it might scare people away?
It seems they don't document the ISA for any instance types. This could be deliberate (and unrelated to marketing) in case they decide to pull features from the instance types in a microcode update. Without any ISA specifics, previous customer commitments towards instance types would still apply.
They list what specific cpus you get for each instance type, see eg https://docs.aws.amazon.com/ec2/latest/instancetypes/gp.html
At this point I think they just assume that everyone who cares already knows that Graviton = ARM.
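For code that needs to care about this at runtime (e.g. picking the right prebuilt binary), a minimal portable check is to look at the machine string, which on Graviton instances reports an ARM64 value:

```python
import platform

def is_arm64() -> bool:
    """True on aarch64/arm64 machines (e.g. Graviton instances), False on
    x86-64. platform.machine() returns the kernel's architecture string:
    "aarch64" on Linux ARM, "arm64" on macOS, "x86_64"/"AMD64" on x86."""
    return platform.machine().lower() in {"aarch64", "arm64"}

# Prints the raw arch string and whether it counts as ARM64.
print(platform.machine(), "->", "ARM64" if is_arm64() else "not ARM64")
```

This only distinguishes the architecture, not the specific Graviton generation; for that you'd have to consult the instance type via instance metadata.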
A while back I was researching cloud instances for performance, and I noticed that AWS didn't have the latest generations of AMD/Intel, which are far superior to Graviton 4.
It seems obvious to me that AWS is using their market dominance to shift workloads to Graviton.
This sort of makes sense. If there is no competitive advantage in buying the latest AMD or Intel CPUs, why buy them when you can just deploy a generic (ARM licensed) CPU at cheaper prices.
The competitive advantage right now is in NVIDIA chips and I guess AWS needs all their free cash to buy those instead of non-competitive advantage CPUs.
I think Graviton would still be much more energy efficient though? (I'm not sure)
I believe the main motivator for AWS is efficiency, not performance. $ of income per watt spent is much better for them on Graviton.
At what point was that true? For example right now ec2 has granite rapids cpus available which are very much the latest and greatest from intel.
>Which are far superior to Graviton 4.
Not if you are looking at price/performance. AWS could be taking a loss to elevate the product though, no way to know for sure.
If they were taking a loss, they wouldn't run a crapton of internal workloads on Graviton.
I imagine it can take time to actually validate and build out that new infrastructure at scale after AMD/Intel announces these products to the market. It wouldn't surprise me if hyperscalers like AWS, Google, Microsoft, et. al. get a little bit of early previews of this hardware, but it still takes time to negotiate sales, buy the chips, and then actually receive the new chips and make actually useful systems.
Meanwhile, when AWS announces a new chip its probably something they have already been building out in their datacenters.
>Best price performance
Don't they still offer free nano EC2s? This is not a better price than $0.