I like your sentiment, but unfortunately it's not going to happen.
Point blank, in the high-end market Nvidia dominates. Period. Very few contracts were signed with AMD for Radeons compared to Nvidia, because everything GPGPU is so heavily built on CUDA. The millions of "regular joes" who buy such high-end cards are so minuscule in both quantity and profit that Nvidia makes its money primarily, again, from large purchase and support contracts with organizations.
Consoles don't matter; that segment is typically considered "budget" unless you're Sony trying to make up for Concord's losses.
Point blank: nobody can compete with Nvidia+CUDA in the GPGPU market, which is where these high-end cards have positioned themselves.
Intel? "They can't even make CPUs right anymore. Why would I trust them with my data analysis?"
AMD? "My program doesn't support OpenCL or Vulkan. And when it does, it underperforms compared to CUDA on comparable hardware." For comparison, my GPT4All machine with a 2070 loses 6 tokens per second when using Vulkan instead of CUDA. That's big. On my Eurocom Sky R2 with a 3060, it's a 3 token-per-second loss. I unfortunately don't have comparable Radeons to make a better comparison.
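For what it's worth, those tokens-per-second numbers are easy to reproduce with a small timing harness. This is just a sketch: `generate` is a hypothetical stand-in for whatever backend call you're benchmarking (e.g. wrapping GPT4All's `model.generate()` once with the CUDA backend and once with Vulkan), not part of any real API.

```python
import time

def tokens_per_second(generate, prompt, warmup=1, runs=3):
    """Time a token-generating callable and return tokens/sec.

    `generate(prompt)` is assumed to return a list of tokens.
    Warmup runs are excluded so model load / cache effects
    don't pollute the measurement.
    """
    for _ in range(warmup):
        generate(prompt)
    total_tokens = 0
    start = time.perf_counter()
    for _ in range(runs):
        total_tokens += len(generate(prompt))
    elapsed = time.perf_counter() - start
    return total_tokens / elapsed

# Stand-in generator for illustration; swap in the real
# backend call for each of CUDA and Vulkan and compare rates.
fake_generate = lambda prompt: ["tok"] * 32
rate = tokens_per_second(fake_generate, "hello")
```

Run it twice, once per backend, and the difference between the two returned rates is the per-second loss quoted above.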
Qualcomm? They have made it clear they are mobile-oriented.
Samsung? No.
TSMC? They're a fab. They do whatever someone tells them.
This is a good decision by AMD. Period. They can now focus on the market that trusts them the most, which is the consumer market.
To address the CUDA point: yes, many vendors are waking up to the lock-in.
AMD has had a few big wins recently: MS, IBM, HPE, and Oracle have all signed up to buy loads of MI300-series accelerators, and they are tuning their workloads to fit. Not that they aren't hedging their bets by also investing in Nvidia, but letting an almost de facto monopoly develop and persist is in nobody's interest.
Again, they are quite well represented on the Top 500 list of supercomputers, one reason being that large scientific and government institutions don't like vendor lock-in and have the resources to program for the hardware in question instead of running off to PyTorch and CUDA frameworks.
You are making the mistaken assumption that because workloads and skill sets are shifted towards CUDA now, the status quo will stay that way. When vendors have to queue up to buy a minimum number of GPUs at vastly inflated prices just to be considered by Nvidia (or any vendor, for that matter), that in turn is a huge motivation to invest in alternative frameworks.
Just look at VMware and Oracle Java. Usage terms and license pricing changed, and now a majority of the customer base is looking very hard at alternatives.
Summing up some of your other points:
Qualcomm has been very open about their desire to make server chips, which would probably be on the market right now if they weren't ensconced in license litigation with ARM.
Console architecture wins are beneficial for the winner, as a substantial number of game engines will be optimized for the winner's architecture and its architectural quirks.
Why bring up Samsung? Their custom ARM architectural work is currently exclusively mobile, and their leading-edge CPU/GPU foundry efforts are suffering from suboptimal yields.
At the moment I am sitting at a rig with an Nvidia card exactly because I have to deal with CUDA rubbish, and I'll say this for it: overpriced, bandwidth-starved garbage with superior software support.
When the software in question supports ROCm or its replacement (on the roadmap for 2025), I'm chucking this Nvidia dud for something with twice the memory and memory bandwidth in the same price bracket.