Lovecraft

Dramacrat
well, their roots were originally in budget competition
sucks though
This is like the fourth time they've ceded the high end. Mid-range is where the majority of sales are, and what they need now is market share, not peak performance. Taping out a large monolithic chip is incredibly expensive, and the larger the chip, the higher the percentage of defective dies per wafer. If they can have an RX 480 moment again and not have their stockpile eaten by some stupid fad like cryptocurrency mining, then doubling their current market share in a year is achievable.
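The die-size point is the classic yield argument: for a fixed defect density, the chance a die comes out clean falls off exponentially with its area, which is why big monolithic chips are so expensive to tape out. A minimal sketch using the simple Poisson yield model (the defect density and die areas below are illustrative assumptions, not real foundry numbers):

```python
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson yield model: probability a die of the given area has zero defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

# Illustrative defect density, not a real foundry figure
d0 = 0.001  # defects per mm^2

for area in (200, 400, 600):  # mid-range die vs. large monolithic die
    print(f"{area} mm^2 die: {die_yield(area, d0):.1%} yield")
```

Tripling the die area here drops yield from roughly 82% to 55%, and that's before the larger die also means fewer candidate dies fit on each wafer in the first place.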
 

Likeicare

Dramacrat
Janny
Mid-range is where the majority of sales are, and what they need now is market share
never going to happen




more people are running 4090s and 4080s than 6600s


AMD's paid marketing towards influencers and tech outlets doesn't work out in real-world results; same thing on the CPU side
 

SuperChungus

That'll hold 'em awight
and i agree with LIC
Blender performs significantly better with CUDA than with OpenCL or Vulkan
this was my use case that had me select a 3090 FTW3 a while ago

Nvidia managed to get everybody into CUDA, and now the alternatives are lacking.
Point blank, you are not going to find a scientific program that works better with the alternatives than with CUDA.
And scientific computing contracts are where the money lives. This is why Intel Xeon still lives.
 

Lovecraft

Dramacrat
never going to happen



more people are running 4090s and 4080s than 6600s


AMD's paid marketing towards influencers and tech outlets doesn't work out in real-world results; same thing on the CPU side
and i agree with LIC
Blender performs significantly better with CUDA than with OpenCL or Vulkan
this was my use case that had me select a 3090 FTW3 a while ago

Nvidia managed to get everybody into CUDA, and now the alternatives are lacking.
Point blank, you are not going to find a scientific program that works better with the alternatives than with CUDA.
And scientific computing contracts are where the money lives. This is why Intel Xeon still lives.
The same could be said about their CPU market share, which has more than tripled in the HPC segment since 2016 and more than doubled in the general server segment.
Still lagging in OEM desktop due to vendor inertia, still making gains in server.
Current Intel problems mean that they can make further inroads in the OEM segment.

With the current and future inflated Nvidia pricing there is an impetus in many markets to find viable alternatives, AMD's problem being exactly that CUDA is so entrenched in many segments.
If they can provide comparable performance with ROCm or an entirely new framework and persuade enough of the big vendors to support it, then doubling their current measly market share is quite doable.
On the gaming side it is the same deal: they currently have a market share of ~12%, and if they can provide RTX 4070/4070 Ti-class feature sets at an attractive price point, there's a good chance they can catch a good proportion of sales in the entry and mid-range segments.
The very recent update to FSR 4 with better framegen and upscaling, the PS6 design win, and a probable Xbox design win also mean that frameworks, game engines, and services intended to run on these systems will have to be optimized for their architecture.

AMD has a history of shooting themselves in the foot, but I am cautiously optimistic that they may become more competitive and thus present a viable alternative on the GPU side. We'll see in six months or so.
 

SuperChungus

That'll hold 'em awight
The same could be said about their CPU market share, which has more than tripled in the HPC segment since 2016 and more than doubled in the general server segment.
Still lagging in OEM desktop due to vendor inertia, still making gains in server.
Current Intel problems mean that they can make further inroads in the OEM segment.

With the current and future inflated Nvidia pricing there is an impetus in many markets to find viable alternatives, AMD's problem being exactly that CUDA is so entrenched in many segments.
If they can provide comparable performance with ROCm or an entirely new framework and persuade enough of the big vendors to support it, then doubling their current measly market share is quite doable.
On the gaming side it is the same deal: they currently have a market share of ~12%, and if they can provide RTX 4070/4070 Ti-class feature sets at an attractive price point, there's a good chance they can catch a good proportion of sales in the entry and mid-range segments.
The very recent update to FSR 4 with better framegen and upscaling, the PS6 design win, and a probable Xbox design win also mean that frameworks, game engines, and services intended to run on these systems will have to be optimized for their architecture.

AMD has a history of shooting themselves in the foot, but I am cautiously optimistic that they may become more competitive and thus present a viable alternative on the GPU side. We'll see in six months or so.
I like your sentiment, but unfortunately it's not going to happen.
Point blank, Nvidia dominates the high-end market. Period. Very few contracts have been signed with AMD for Radeons compared to Nvidia, because everything GPGPU is so heavily built on CUDA. The millions of "regular joes" who buy such high-end cards are so minuscule in both quantity and profit that Nvidia primarily makes its money, again, on large purchase and support contracts with organizations.

Consoles don't matter; that segment is typically considered "budget", unless you're Sony trying to make up for Concord losses.

Current Intel problems mean that they can make further inroads in the OEM segment.

With the current and future inflated Nvidia pricing there is an impetus in many markets to find viable alternatives, AMD's problem being exactly that CUDA is so entrenched in many segments.
If they can provide comparable performance with ROCm or an entirely new framework and persuade enough of the big vendors to support it, then doubling their current measly market share is quite doable.

Point blank: nobody can compete with Nvidia+CUDA in the GPGPU market, which is where these high-end cards are positioned.
Intel? "They can't even make CPUs right anymore. Why would I trust them with my data analysis?"
AMD? "My program doesn't support OpenCL or Vulkan. And when it does, it underperforms compared to CUDA on comparable hardware." For comparison, my GPT4All machine with a 2070 loses 6 tokens per second when using Vulkan compared to CUDA. That's big. My Eurocom Sky R2 with a 3060 loses 3 tokens per second. I unfortunately don't have comparable Radeons to make a better comparison.
Qualcomm? They have made it clear they are mobile-oriented.
Samsung? No.
TSMC? They're a fab. They do whatever someone tells them.

This is a good decision by AMD. Period. They can now focus on the market that trusts them the most, which is the consumer market.
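The tokens-per-second comparison above is easy to reproduce yourself: time a generation run and divide the token count by the elapsed time, once per backend. A minimal sketch, assuming you wrap whichever backend you're testing in a `generate(prompt)` callable that returns the generated tokens (the `fake_backend` below is a placeholder, not a real runtime):

```python
import time

def tokens_per_second(generate, prompt: str, runs: int = 3) -> float:
    """Average generation throughput of a backend over several runs.

    `generate` is any callable that takes a prompt and returns the list
    of generated tokens; swap in your CUDA or Vulkan backend here.
    """
    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)

# Placeholder backend so the sketch runs stand-alone
def fake_backend(prompt: str) -> list[str]:
    return prompt.split()

rate = tokens_per_second(fake_backend, "the quick brown fox " * 50)
print(f"{rate:.0f} tokens/s")
```

Averaging over a few runs matters, since the first generation often pays one-time model-load and warm-up costs that would skew a single measurement.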
 

Lovecraft

Dramacrat
I like your sentiment, but unfortunately it's not going to happen.
Point blank, Nvidia dominates the high-end market. Period. Very few contracts have been signed with AMD for Radeons compared to Nvidia, because everything GPGPU is so heavily built on CUDA. The millions of "regular joes" who buy such high-end cards are so minuscule in both quantity and profit that Nvidia primarily makes its money, again, on large purchase and support contracts with organizations.

Consoles don't matter; that segment is typically considered "budget", unless you're Sony trying to make up for Concord losses.



Point blank: nobody can compete with Nvidia+CUDA in the GPGPU market, which is where these high-end cards are positioned.
Intel? "They can't even make CPUs right anymore. Why would I trust them with my data analysis?"
AMD? "My program doesn't support OpenCL or Vulkan. And when it does, it underperforms compared to CUDA on comparable hardware." For comparison, my GPT4All machine with a 2070 loses 6 tokens per second when using Vulkan compared to CUDA. That's big. My Eurocom Sky R2 with a 3060 loses 3 tokens per second. I unfortunately don't have comparable Radeons to make a better comparison.
Qualcomm? They have made it clear they are mobile-oriented.
Samsung? No.
TSMC? They're a fab. They do whatever someone tells them.

This is a good decision by AMD. Period. They can now focus on the market that trusts them the most, which is the consumer market.
To address the CUDA point: yes, many vendors are waking up to the lock-in.
AMD has had a few big wins recently, with MS, IBM, HPE, and Oracle all signing up to buy loads of MI300-series accelerators, and they are tuning their workloads to fit. Not that they aren't hedging their bets by also investing in Nvidia, but letting an almost de facto monopoly develop and persist is in nobody's interest.
Again, AMD is quite well represented on the Top500 list of supercomputers, one reason being that large scientific and government institutions don't like vendor lock-in and have the resources to program for the hardware in question instead of running off to PyTorch and CUDA frameworks.

You are making the mistaken assumption that because workloads and skillsets are skewed towards CUDA now, the status quo will stay that way. When customers have to queue up to buy a minimum number of GPUs just to be considered by Nvidia (or any vendor, for that matter) at vastly inflated prices, that in turn is a huge motivation to invest in alternative frameworks.
Just look at VMware and Oracle Java: usage terms and license pricing changed, and now a majority of the customer base is looking very hard at alternatives.

Summing some of your other points:
Qualcomm has been very open about their desire to make server chips, which would probably be on the market right now if they weren't ensconced in license litigation with ARM.
Console architecture wins are beneficial for the winner because a substantial number of game engines will be optimized for the winner's architecture and its quirks.
Why bring up Samsung? Their custom ARM architectural work is currently exclusively mobile, and their leading edge CPU/GPU foundry efforts are suffering from suboptimal yields.

At the moment I am sitting at a rig with an Nvidia card exactly because I have to deal with CUDA rubbish, and I'll say this for it: overpriced, bandwidth-starved garbage with superior software support.
When the software in question supports ROCm or its replacement (on the roadmap for 2025), I'm chucking this Nvidia dud for something with twice the memory and memory bandwidth in the same price bracket.
 