Nvidia Stock Rises. AMD's New AI Chip Is Not Competitive

34 points, posted 18 hours ago
by nabla9

48 Comments

markhahn

16 hours ago

the stock market is just a casino, and this article is just an attempt to pump NVDA.

especially if you line up availability dates, AMD is competitive. not to mention price!

if there's any news here, it's that the recent announcements are just small tweaks of the MI300. then again, nvidia has announced nothing revolutionary either. does the market (people doing AI, not biz/stock morons) actually want something revolutionary?

vladimirralev

15 hours ago

It used to be a casino. Now it's a matter of monetary policy transmission and a national security issue. They fully and shamelessly embraced the wealth effect as the driving force of the economy. Market always has to go up or else bad things are going to happen.

jl6

15 hours ago

Yes, the market wants a revolution in the amount of VRAM you can fit on a GPU.

doctorpangloss

15 hours ago

Maybe DRAM manufacturers should be making GPUs.

to11mtm

15 hours ago

TBH it would be interesting to think about some variant of 3D XPoint on an inference-oriented GPU device.

nabla9

18 hours ago

Nvidia’s Blackwell GPUs are sold out for the next 12 months. This likely means their profit margins will jump again when sales from those chips come in.

Bernstein Research:

MI325X: "Training performance seems 1 year behind Blackwell (on par with H200) while inferencing is only slightly better,"

MI350X: "Even the company's MI350X tease shows raw performance that, while on par with Blackwell on paper, arrives a year later, just in time to compete against Nvidia's next offerings. Hence we do not see even AMD's accelerated road map closing the competitive gap."

https://www.businessinsider.com/amd-latest-gpu-still-lags-be...

linotype

17 hours ago

Blackwell being sold out for 12 months sounds like market opportunity for AMD. A chip is better than none.

nabla9

15 hours ago

Nvidia will compete on pricing, using the H100 and H200 against AMD's latest. Basically AMD will get sales, but its profit margins are nowhere near Nvidia's.

Operating income to sales (ttm):

  AMD:   4%
  NVDA: 62%

Nvidia and AMD are both competing as customers for TSMC fab supply, which they need to order 1-2 years in advance. Apple and Nvidia are served first because they are the best-paying customers.

PS. When Intel was the big dog, it almost killed AMD every time AMD made an x86 chip that was better than Intel's. All Intel had to do was sacrifice a little profit margin, removing AMD's profit. This time demand is so high that that won't happen, so AMD can enjoy the piece of the pie it has.
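The margin-squeeze dynamic described above can be made concrete with a toy calculation. The numbers below are purely illustrative (not actual Intel/AMD financials): a high-margin incumbent can cut prices below a low-margin rival's break-even point and still stay profitable.

```python
def margin_after_price_cut(price, unit_cost, cut):
    """Gross margin fraction after cutting price by `cut` (0..1)."""
    new_price = price * (1 - cut)
    return (new_price - unit_cost) / new_price

# Incumbent: $100 chip that costs $40 to make (60% margin).
# Challenger: $90 chip that costs $80 to make (~11% margin).
incumbent = margin_after_price_cut(100, 40, 0.20)   # sells at $80
challenger = margin_after_price_cut(90, 80, 0.15)   # forced down to $76.50

print(f"incumbent margin after 20% cut:  {incumbent:.0%}")   # still 50%
print(f"challenger margin after 15% cut: {challenger:.1%}")  # negative
```

The incumbent keeps a 50% margin after the cut, while the low-margin challenger is pushed below cost, which is the point being made about why demand constraints matter this time.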

kolbe

14 hours ago

How did that strategy work out for Intel?

gdiamos

16 hours ago

H100s are available at increasingly affordable prices

electronbeam

15 hours ago

Depends on what yield AMD has; they may be able to undercut that if they're aiming for market share rather than revenue.

The marginal cost of each chip is dollars. The five-digit prices for H100s are just margins to be undercut.

robotnikman

16 hours ago

Now if only they could be affordable for the average consumer... A man can dream...

blihp

15 hours ago

When we get to the end of the hype cycle, they will be. The only question is whether people will be interested in footing the power bill for any of the ocean of obsolete data center GPUs that companies will be dumping.

dboreham

17 hours ago

Does it need to be performance competitive if the price is right?

user

16 hours ago

[deleted]

nabla9

15 hours ago

Operating income to sales:

  AMD:  4%
  NVDA: 62%

Who do you think has the pricing power?

manquer

14 hours ago

Hardly a 1:1 comparison. AMD is not only a GPU maker; GPUs aren't even its largest revenue contributor, and the margins on x86 CPUs and the various custom processors it makes (like for the PlayStation) are wafer thin.

transcriptase

16 hours ago

Gestures broadly at Intel Arc

moffkalast

16 hours ago

The A770 launched costing 100 USD more than the RTX 4060, and it pulls twice the wattage while underperforming it in every way.

to11mtm

15 hours ago

Intel continues to toss a stick in their own front wheel and blame whatever.

If they made an A775 or whatever with 32GB and sold it for 500, hell, even 600 bucks, a lot of people would buy it, myself likely included. Lots of people would be happy with a 'slow but fits big models and still faster than falling back to the CPU' card.

throwaway48476

14 hours ago

They used the same stick on the homelab users who wanted desktop SR-IOV on the A770, which was fused off. Intel is a very uncompetitive company.

moffkalast

14 hours ago

Yeah, I mean getting 24GB on one card is extremely expensive, and it's not the raw GDDR cost; it's just artificially inflated. Intel could easily do that, and even if prompt processing is supposedly really lackluster on the Arcs right now, people would move literal heaven and earth to get it optimized.

to11mtm

14 hours ago

> Yeah I mean getting 24GB on one card is extremely expensive and it's not the raw GDDR costs, it's just artificially inflated

I've gotten on this thought train enough times I started doing some digging...

They might need a -little- extra work. The Ada 5000 had 32GB with a 256-bit bus, but it's a bit of an outlier... I say it that way because, as I did my digging, I found that most boards use 8x16Gb (gigabit) modules, resulting in a 256-bit width and 16GB of memory. A 4090 gets to 24GB by going to 384-bit.

Obviously, upping the width would potentially mean a redesign, but we can again point to the Ada 5000 as a case where 32GB was done on a 256-bit bus. There might be some extra work somewhere, but it's doable.

Even my quoted price likely gives Intel and/or board partners some margin, unless I'm missing something about DRAM costs and the ability to get the required densities. As it stands, a 16GB A770 is in the 250-300 USD range. A 32GB version for $600 should actually give them good margin compared to the 16GB A770s.

petermcneeley

16 hours ago

Can someone summarize what they mean by not competitive? Yes, a new chip from AMD will not compete with CUDA (a software ecosystem).

wmf

15 hours ago

AMD's MI325 is slower (maybe 2x slower) than Nvidia's B100. Sure, it's cheaper and maybe consumes less power, but you need more racks, more networking, and more labor to get the same performance.
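The trade-off in that claim is easy to quantify with hypothetical numbers (the prices and throughputs below are made up for illustration, not real MI325/B100 figures): a cheaper but slower chip needs more units, and the per-chip overhead for power, racks, and networking scales with the unit count.

```python
# Hypothetical cluster cost: chips needed for a target throughput,
# each carrying a fixed per-chip overhead (rack space, networking, ops).
def cluster_cost(target_tput, chip_tput, chip_price, overhead_per_chip):
    chips = -(-target_tput // chip_tput)  # ceiling division
    return chips * (chip_price + overhead_per_chip)

TARGET = 1000  # arbitrary throughput units
fast = cluster_cost(TARGET, chip_tput=10, chip_price=30, overhead_per_chip=10)
slow = cluster_cost(TARGET, chip_tput=5,  chip_price=20, overhead_per_chip=10)

print(f"fast-chip cluster: {fast}")  # 100 chips x 40 = 4000
print(f"slow-chip cluster: {slow}")  # 200 chips x 30 = 6000
```

Even with a one-third lower chip price, the slower chip's cluster comes out 50% more expensive here, which is the point about racks, networking, and labor.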

2c2c2c

16 hours ago

Is it naive to look at the market and just assume there's $500B of market cap screaming for AMD to throw everything at a competent CUDA competitor and eventually see commoditization here? Is this not possible (and why hasn't it happened)?

zmmmmm

16 hours ago

My take in a nutshell: when raw performance and micro-optimisation are the core value proposition, portability and alternative equivalent technologies stop being viable competitive levers. There is just too much sunk into micro-optimisation around Nvidia's architecture at every layer up and down the stack.

The only thing that will save us, I think, is when competition authorities finally wake up to this and force Nvidia to share its tech at some level. The equivalent of the cross-licensing deals between Intel and AMD that kept the x86 architecture from being a monopoly (sort of).

jjmarr

16 hours ago

That takes time, which is why AMD is making acquisitions and hiring like crazy.

p1esk

15 hours ago

AMD had 12 years to become competitive. The deep learning revolution started in 2012.

trynumber9

15 hours ago

AMD was nearly bankrupt for the first half of that. In my opinion it was a herculean feat that they survived at all.

ErneX

15 hours ago

Agreed. They weren't even at a 2 billion market cap back then; now it's almost 272 billion.

The first Ryzens launched just 7 years ago.

p1esk

15 hours ago

Whose fault is that?

kuschku

10 hours ago

According to the US and EU's highest courts: Intel. Not entirely sure what you're trying to argue.

to11mtm

14 hours ago

Whoever thought sticking with Bulldozer was a good idea while the GloFo thing was happening. The move toward more 'normal' process tech, versus the tighter coupling when they owned the fabs, led to probably at least a couple of missteps. And then there was all the other weirdness with Bulldozer...

Jaguar saved their butts via the XB1/PS4 to a large extent (and my Puma laptop was way nicer than the Atom laptops of its day), but Bulldozer was a huge stumble for the company.

I -will- say, around 2014-2015 I tossed together a 'low-end' family 15h build (probably a Steamroller) and it was a competent machine, albeit relegated to retro-ish Steam games and DVR duty. The Radeon core's 3D performance at least did a good job of balancing real-world performance against a Core i3.

user

14 hours ago

[deleted]

eptcyka

15 hours ago

12 years ago, Nvidia cared more about gamers than GPGPU, and 8-bit floats were definitely not something anyone optimized for.

manquer

14 hours ago

And 6 years ago they cared about crypto miners (whether they wanted to admit it publicly or not).

Nvidia really has thick plot armor to be able to ride two massive hype waves.

user

14 hours ago

[deleted]

dkasper

16 hours ago

Sure it’s possible, but it’s also incredibly difficult.

user

15 hours ago

[deleted]

steeve

16 hours ago

I mean, we (zml) clocked the MI300X ($20k) at +30% over the H100 ($30k).

So…

wmf

15 hours ago

That was then. Now it's about MI325 vs. B100.

peterhhchan

16 hours ago

What about power consumption? Edit: My understanding from about a year ago is that AMD's and Nvidia's chips were priced similarly in terms of performance per watt.