Intel Core Ultra Series 3 Debut as First Built on Intel 18A

112 points, posted 2 days ago
by osnium123

173 Comments

DrammBA

2 days ago

> Today at CES, Intel unveiled Intel Core Ultra Series 3 processors, the first AI PC platform built on Intel 18A process technology that was designed and manufactured in the United States. Powering over 200 designs from leading, global partners, Series 3 will be the most broadly adopted and globally available AI PC platform Intel has ever delivered.

What in the world is this disaster of an opening paragraph? From the weird "AI PC platform" (not sure what that is) to the "will be the most broadly adopted and globally available AI PC platform" (is that a promise? a prediction? a threat?).

And you just gotta love the processor names "Intel Core Ultra Series 3 Mobile X9/X7"

jmward01

2 days ago

I think I have given up on chip naming. I honestly can't tell anymore, there are so many modifiers on the names these days. I assume 9 is better than 7, right? Right?

chrismorgan

a day ago

> I assume 9 is better than 7 right? Right?

Oh, the number of times I’ve heard someone assume their five- or ten-year-old machine must be powerful because it’s an i7… no, the i3-14100 (released two years ago) is uniformly significantly superior to the i7-9700 (released five years before that), and only falls behind the i9-9900 in multithreaded performance.

Within the same product family and generation, I expect 9 is better than 7, but honestly it wouldn’t surprise me to find counterexamples.
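For the classic iN-XXXX names at least, the generation is recoverable mechanically: the SKU is the last three digits and everything before it is the generation. A quick sketch (parse_core_model is my own throwaway helper, and note the newer "Core Ultra" names don't follow this scheme):

```python
import re

def parse_core_model(name: str):
    """Split a classic Intel Core model string (e.g. 'i7-9700')
    into its tier (3/5/7/9), generation, and suffix."""
    m = re.fullmatch(r"i([3579])-(\d{4,5})([A-Z]*)", name)
    if m is None:
        raise ValueError(f"unrecognized model: {name}")
    tier, digits, suffix = m.groups()
    # The SKU is always the last three digits; whatever precedes it
    # is the generation (one digit up to gen 9, two from gen 10 on).
    generation = int(digits[:-3])
    return {"tier": int(tier), "generation": generation, "suffix": suffix}

# The i3-14100 is five generations newer than the i7-9700,
# which is why tier alone is a poor guide to performance.
print(parse_core_model("i3-14100"))  # generation 14, tier 3
print(parse_core_model("i7-9700"))   # generation 9, tier 7
```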

gambiting

a day ago

>>Within the same product family and generation, I expect 9 is better than 7

Ah the good old Dell laptop engineering, where the i9 is better on paper, but in reality it throttles within 5 seconds of starting any significant load and the cpu nerfs itself below even i5 performance. Classic Dell move.

stefanfisk

a day ago

Apple had the same problem before they launched the M1. Unless your workloads are extremely bursty the i9 MacBook is almost guaranteed to be slower than the base i7.

zozbot234

a day ago

The latest iPhone base model performs better than the iPhone Air despite the latter having a Pro chip, because that Pro is so badly throttled due to the device form factor.

ZiiS

a day ago

Even their ultra-efficient silicon didn't fully solve this; a 16" M4 Pro often outperforms a 14" M4 Max that's stuck throttling.

MBCook

a day ago

I can’t comment on that.

But at least you always know an A7 is better than an A6 or an A4. The M4 is better than the M3 and M1.

The suffixes make it more complicated, but at least within a suffix group the rule still holds.

ZiiS

5 hours ago

But if you buy a Mac Studio today, you have to choose between an M4 Max and a much faster M3 Ultra.

zuhsetaqi

a day ago

First time I’m hearing this. Do you have any sources on this?

flyinglizard

a day ago

Are they throttling with the fan off? Because I don't recall ever hearing the fan on my M3 Max 14" (granted, no heavy deliberate computation beyond regular dev work).

ZiiS

a day ago

No, this shows up when you really fully load them and the fans can't keep up. Most people never do, but then why buy the Max?

stefanfisk

a day ago

AFAIK it’s only something that happens under sustained heavy load. The 14” Max should still outperform the Pro for shorter tasks but I’d reckon few people buy the most expensive machine for such use cases.

Personally I think that Apple should not even be selling the 14” Max when it has this defect.

christkv

a day ago

I still have the i9 MacBook Pro and it's a dog for sure, it throttles massively.

chrismorgan

a day ago

Within the same family and generation, I don’t think this should happen any more. But especially in the past, some laptops were configurable with processors of different generations or families (M, Q, QM, U, so many possibilities) so that the i7 option might have worse real-world performance than the i5 (due to more slower cores).

tracker1

a day ago

It's been a cooling problem on a lot of i9 laptops... the CPU will hit thermal peaks, then throttle down, which has an incredibly janky feel as a user... then it spins back up, and down... the performance curve is just wacky in general.

Today it's almost worse, as the thermal limits will be set entirely differently between laptop vendors on the same chips, so you can't even have apples-to-apples performance expectations across vendors.

tracker1

a day ago

Same for the later generation Intel Macbook Pros... The i9 was so bad, and the throttling made it practically unusable for me. If it weren't a work issued laptop, I'd have either returned it, or at least under-volted and under-clocked it so it didn't hiccup every time I did anything at all.

dehrmann

a day ago

I had an X1 Carbon like this, only it'd crash for no apparent reason. The internet consensus, which Lenovo wouldn't own up to, was that the i7 CPUs were overpowered for the cooling, so your best bet was either undervolting them or getting an i5.

mrandish

a day ago

Yeah, putting an i9 in any laptop that's not an XL gaming rig with big fans is very nearly always a waste of money (there might exist a few rare exceptions for some oddball workloads). Manufacturers selling i9s in thin & light laptops at an ultra price premium may fall just short of the legal definition of fraud but it's as unconscionable as snake-oil audiophile companies selling $500 USB cables.

wtallis

a day ago

That's still assigning too much significance to the "i9" naming. Sometimes, the only difference between the i9 part and the top i7 part was something like 200MHz of single-core boost frequency, with the core counts and cache sizes and maximum power limit all being equal. So quite often, the i7 has stood to gain just as much from a higher-power form factor as the i9.

gambiting

a day ago

Tbf, two jobs ago I had a Dell enterprise workstation laptop, an absolute behemoth of a thing, like 3.5kg. It was the thicker variant of the two available, with extra cooling, specifically sold to companies like ours needing that extra firepower, and it had a 20-core i9, 128GB of DDR5 CAMM RAM, and a 3080Ti. I think the market price of that thing was around £14k, it was insane. And it had exactly the kind of behaviour I described: I would start compiling something in Visual Studio, briefly see all cores jump to 4GHz, and then they'd immediately throttle down to 1.2GHz, to the point where the entire laptop was unresponsive while the compilation was ongoing. It was a joke of a machine. I'd call that more of a fraud than what you described, because companies like ours were literally buying hundreds of these from Dell and they were unsuitable for their advertised use.

(To add insult to injury, that 3080Ti was pointless: the second you started playing any game the entire system would throttle so hard you got extreme stuttering, like driving a Lamborghini with a 5-second fuel reserve. And given that I worked at a games studio, that was kind of an essential feature.)

avadodin

a day ago

A machine learning model can place a CPU on the versioning manifold but I'm not confident that it could translate it to human speech in a way that was significantly more useful than what we have now.

At best, 14700KF-Intel+AMD might yield relevant results.

cherioo

a day ago

"AI PC" has been in the buzz for more than 2 years now (despite being a near-useless concept), and Intel has something like 75% market share in laptops. Both of those are well within the norm for an Intel marketing piece.

It’s not really meant for consumers. Who would even visit newsroom.intel.com?

lostlogin

a day ago

Apparently it’s been a thing for a while:

What is an AI PC? ('Look, Ma! No Cloud!')

An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. https://newsroom.intel.com/artificial-intelligence/what-is-a...

sidewndr46

a day ago

It'd be interesting to see some market survey data showing the number of AI laptops sold & the number of users that actively use the acceleration capabilities for any task, even once.

sixothree

a day ago

I'm not sure I've ever heard of a single task that comes built into the system and uses the NPU.

fassssst

a day ago

Remove background from an image. Summarize some text. OCR to select text or click links in a screenshot. Relighting and centering you in your webcam. Semantic search for images and files.

A lot of that is in the first party Mac and Windows apps.

lostlogin

a day ago

Selecting text in a photo is a game changer. I love it.

MBCook

a day ago

Wasn’t built in OCR an amazing feature?

We probably could have done it years earlier. But when it showed up… wow.

olyjohn

19 hours ago

CES stands for Consumer Electronics Show last I checked.

Laptop names are even worse:

> Are ZBooks good or do I want an OmniBook or ProBook? Within ZBook, is Ultra or Fury better? Do I want a G1a or a G1i? Oh you sell ZBook Firefly G11, I liked that TV show, is that one good?

https://geohot.github.io/blog/jekyll/update/2025/11/29/bikes...

jhickok

a day ago

TIL Geohot wants pretty much the exact same thing in a laptop: basically a MacBook Pro running Linux.

lostlogin

a day ago

And at the root of all that shit lies Apple and the ‘book’ suffix.

kergonath

a day ago

Apple is very consistent. You have the MacBook Air (lighter, more portable variant) and the MacBook Pro (more expensive and powerful variant). They don’t mess around with model numbers.

yencabulator

a day ago

Apple is so "consistent" that the way to know which kind of Air or Pro it is is to find the tiny print on the bottom, a jumble of letters like "MGNE3", and google it.

And depending on what you're trying to use it for, you need to map it to a string like "MacBookAir10,1" or "A2337" or "Macbook Air Late 2022".

Oh also the Macbook Air (2020) is a different processor architecture than Macbook Air (2020).

kergonath

a day ago

The canonical way if you need a version number is the "about this Mac" dialog (here it says Mac Studio 2022).

If you need to be technical, System Information says Mac13,1 and these identifiers have been extremely consistent for about 30 years.

Your product number encodes much more information than that, and about the only time when it is actually required is to see whether it is eligible for a recall.

> Oh also the Macbook Air (2020) is a different processor architecture than Macbook Air (2020).

Right, except that one is the MacBook Air (Retina, 2020), MacBookAir9,1, and the other is the MacBook Air (M1, 2020), MacBookAir10,1. It happens occasionally, but the fact that you had to go back 5 years, to a period in which the lineup underwent a double transition, speaks volumes.

lostlogin

a day ago

> Apple is very consistent. You have the MacBook Air (lighter, more portable variant) and the MacBook Pro (more expensive and powerful variant).

What about the iBook? That wasn’t tidy. Ebooks or laptops?

Or the iPhone 9? That didn’t exist.

Or macOS? Versioning got a bit weird after 10.9, due to the X thing.

They do mess around with model numbers and have just done it again with the change to year numbers. I don’t particularly care but they aren’t all clean and pure.

https://daringfireball.net/linked/2025/05/28/gurman-version-...

kergonath

a day ago

> What about the iBook? That wasn’t tidy. Ebooks or laptops?

Back then, there were iBooks (entry-level) and PowerBooks (professional, high performance and expensive). There had been PowerBooks since way back in 1991, well before any ebook reader. I am not sure what your gripe is.

> Or the iPhone 9? That didn’t exist.

There’s a hole in the series. In what way is it a problem, and how on earth is it similar to the situation described in the parent?

> Or MacOS? Versioning got a bit weird after 10.9, due the X thing.

It never got weird. After 10.9.5 came 10.10.0. Version numbers are not decimals.
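The "not decimals" point is easy to demonstrate: compare versions as tuples of integers and the ordering comes out right. A quick sketch:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Treat a dotted version string as a tuple of integers,
    so 10.10 correctly sorts after 10.9."""
    return tuple(int(part) for part in v.split("."))

# As a float, "10.10" wrongly collapses to 10.1 and sorts before 10.9.
assert float("10.10") < float("10.9")
# As version tuples, the ordering is the intended one: 10.9.5 < 10.10.0.
assert parse_version("10.9.5") < parse_version("10.10.0")
```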

Seriously, do you have a point apart from "Apple bad"?

lostlogin

a day ago

You were saying that Apple is very consistent. I’m pointing out they aren’t particularly.

> It never got weird. After 10.9.5 came 10.10.0. Version numbers are not decimals.

They turned one of the numbers into a letter then started numbering again.

There was Mac OS 9, then Mac OS X. That got incremented up past 10.

You say they don’t mess around with model numbers. Yes they do, with software and hardware.

I like using them both.

kergonath

a day ago

> They turned one of the numbers into a letter then started numbering again.

They did not. It was Mac OS X 10.0 through macOS 10.15. It never was X.1 or anything like that.

MBCook

a day ago

Right. Mac OS X was the marketing name. But the X was pronounced "ten", just a stylization with Roman numerals.

The version number the OS reported always said 10.whatever. Exactly as you said.

kergonath

a day ago

Yes, and you did sound silly saying it out loud the official way ("OS ten ten ten" was a famous one, for Yosemite).

lostlogin

12 hours ago

I stand corrected. I thought the X(10) was part of the version number, not a prefix that got added.

MBCook

an hour ago

I’m not sure I ever heard people call Mac OS X 10.10 “ten ten ten”. I think I remember them calling it “ten ten” verbally.

So you’d say “MacOS ten ten”.

At least that’s what I’m used to, it is entirely possible that’s what other people said and you would write it that way. No one wrote “MacOS X.10” or “MacOS X .10” but they would write “MacOS X 10.10”.

So yeah, it’s all a bit of a mess. There’s a reason people often use the name of the release, like Snow Leopard or Tahoe, instead of the numbers.

stefanfisk

a day ago

It was a response to you specifically calling out the book suffix.

And what was unclear about iBook vs PowerBook?

lostlogin

a day ago

The iBook store.

Sorry, I thought you were saying that they don’t use model numbers at all.

I think you were actually saying that they don’t use them just for laptops.

wtallis

a day ago

"iBook" referred to a laptop from 1999 to 2006. "iBooks" referred to the eBook reader app and store from 2010 to 2019. I'll grant that there is some possibility for confusion, but only if the context of the conversation spans multiple decades but doesn't make it clear whether you're talking about hardware or software.

bebna

a day ago

I got a MacBook. No, not an air or pro, just MacBook.

kergonath

a day ago

Back when there were MacBooks, it was MacBook (standard model), MacBook Air (lighter variant), and MacBook Pro (more expensive, high-performance variant). Sure, 3 is more complicated than 2, but come on.

If you really want to complain, you can go back to the first unibody MacBook, which did not fit that pattern, or the interim period when high-DPI displays were being rolled out progressively, but let’s be serious. The fact is that even at the worst of times their range could be described in 2 sentences. Now, try to do that for any other computer brand. To my knowledge, the only other one with an understandable lineup was Microsoft, before they lost interest.

lostlogin

a day ago

> The fact is that even at the worst of times their range could be described in 2 sentences.

It’s a good time to buy one. They are all good.

It would be interesting to know how many SKUs are hidden behind the simple purchase interface on their site. With the various storage and colour options, it must be over 30.

kergonath

a day ago

Loads, I assume. But those are things like "MacBook Pro M1 Max with a 1TB SSD and a matte screen coating" versus "MacBook Pro M1 with a 256GB SSD and a standard screen". The granularity of say Dell’s product numbers is not enough for that either, and you still need a long product number when searching their knowledge base.

dangus

2 days ago

Intel marketing isn’t the best but I am struggling to understand what issue you’re taking with this.

It’s an AI PC platform. It can do AI. It has an NPU and integrated GPU. That’s pretty straightforward. Competitors include Apple silicon and AMD Ryzen AI.

They’re predicting it’ll sell well, and they have a huge distribution network with a large number of partner products launching. Basically they’re saying every laptop and similar device manufacturer out there is going to stuff these chips in their systems. I think they just have some well-placed confidence in the laptop segment, because it’s supposed to combine the strong efficiency of the 200 series with the kind of strong performance that can keep up with or exceed competition from AMD’s current laptop product lineup.

Their naming sucks but nobody’s really a saint on that.

webdevver

a day ago

i cant believe we're still putting NPUs into new designs.

Silicon taken up that could've been used for a few more compute units on the GPU, which is often faster at inference anyway and way more useful/flexible/programmable/documented.

zmb_

a day ago

You can thank Microsoft for that. Intel architects in fact did not want to waste area on an NPU. That caused Microsoft to launch their AI-whatever-branded PCs with Qualcomm, who were happy to throw in whatever Microsoft wanted in order to be the launch partner. After that, Intel had to follow suit to make Microsoft happy.

dangus

a day ago

That doesn’t explain why Apple “wastes” die area on their NPU.

The thing is, when you get an Apple product and you take a picture, those devices are performing ML tasks while sipping battery life.

Microsoft maybe shouldn’t be chasing Apple, especially since they don’t actually have any market share in tablets or phones, but I see what they’re getting at: they are probably tired of their OS living on devices that get half the battery life of their main competition.

And here’s the thing, Qualcomm’s solution blows Intel out of the water. The only reason not to use it is because Microsoft can’t provide the level of architecture transition that Apple does. Apple can get 100% of their users to switch architecture in about 7 years whenever they want.

cromka

a day ago

Guess they're following Apple here whose NPUs get all the support possible, as far as I can tell.

dangus

a day ago

Bingo. Maybe Microsoft shouldn’t even be chasing them, but I think they have a point in trying to stay competitive. They can’t just have their OS getting half the battery life of their main competitor.

When you use an Apple device, it’s performing ML tasks while barely using any battery life. That’s the whole point of the NPU. It’s not there to outperform the GPU.

astrange

a day ago

NPUs aren't designed to be "faster", they are designed to have better perf/power ratios.

Every modern chip needs some percentage dedicated to dark silicon. There is no cheating the thermal reality. You could add more compute units in the GPU, but you then have to make up for it somewhere else. It’s a balancing act.

The Core Ultra lineup is supposed to be low-power, low-heat, right? If you want more compute power, pick something from a different product series.

wtallis

a day ago

> Every modern chip needs some percentage dedicated to dark silicon. There is no cheating the thermal reality. You could add more compute units in the GPU, but you then have to make up for it somewhere else. It’s a balancing act.

I think that "dark silicon" mentality is mostly lingering trauma from when the industry first hit a wall with the end of Dennard scaling. These days, it's quite clear that you can have a chip that's more or less fully utilized, certainly with no "dark" blocks that are as large as a NPU. You just need to have the ability to run the chip at lower clock speeds to stay within power and thermal constraints—something that was not well-developed in 2005's processors. For the kind of parallel compute that GPUs and NPUs tackle, adding more cores but running them at lower clock speeds and lower voltages usually does result in better efficiency in practice.

The real answer to the GPU vs NPU question isn't that the GPU couldn't grow, but that the NPU has a drastically different architecture making very different power vs performance tradeoffs that theoretically give it a niche of use cases where the NPU is a better choice than the GPU for some inference tasks.
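The cores-vs-clocks tradeoff can be sketched with a toy model. Assuming dynamic power per core scales roughly as f·V² and voltage scales roughly linearly with frequency over the operating range (both simplifications), per-core power goes as f³ while throughput on a parallel workload goes as cores × f:

```python
def toy_power(cores: int, freq: float) -> float:
    # Dynamic power per core ~ C * f * V^2; assuming V scales ~linearly
    # with f over the operating range, per-core power goes as f**3.
    return cores * freq**3

def throughput(cores: int, freq: float) -> float:
    # Embarrassingly parallel workload: throughput ~ cores * f.
    return cores * freq

# One fast core vs. two cores at half the clock:
narrow = (1, 1.0)
wide = (2, 0.5)
assert throughput(*wide) == throughput(*narrow)       # same work rate
assert toy_power(*wide) == 0.25 * toy_power(*narrow)  # at a quarter the power
```

It's only a sketch (real voltage/frequency curves aren't linear, and serial work doesn't parallelize), but it shows why wider-and-slower tends to win on efficiency.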


hnuser123456

a day ago

It means they did cost cutting on Lunar Lake and are excited to sell a lot of them at similar or higher prices.

etempleton

a day ago

Cost cutting? 18A probably has more invested in it than every other process Intel has ever produced combined.

ajross

a day ago

> cost cutting on Lunar Lake

It's... the launch vehicle for a new process. Literally the opposite of "cost cutting", they went through the trouble of tooling up a whole fab over multiple years to do this.

Will 18A beat TSMC and save the company? We don't know. But they put down a huge bet that it would, and this is the hand that got dealt. It's important, not something to be dismissed.

hnuser123456

a day ago

Lunar Lake integrated DRAM on the package, which was faster and more power efficient; this reverts that. They also moved part of the chip from being sourced from TSMC to being made in-house. And if their foundry is competitive, they should be shaking down other foundry customers the way TSMC is.

If they have actually mostly caught up to TSMC, props, but also, I wish they hadn't given up on EUV for so long. Instead they decided to ship chips overclocked so high they burn out in months.

ac29

a day ago

> Lunar Lake integrated DRAM on the package, which was faster and more power efficient, this reverts that.

On-package memory is slightly more power efficient but it isn't any faster; it still uses industry-standard LPDDR. And Panther Lake supports faster LPDDR than Lunar Lake, so it's definitely not a regression.

ajross

a day ago

I don't see how any of that substantiates "Panther Lake and 18A are just cost cutting efforts vs. Lunar Lake". It mostly just sounds like another boring platform flame.

ajross

a day ago

Again, you're talking about the design of Panther Lake, the CPU IC. No one cares, it's a CPU. The news here is the launch of the Intel 18A semiconductor process and the discussion as to if and how it narrows or closes the gap with TSMC.

Trying to play this news off as "only cost cutting" is, to be blunt, insane. That's not what's happening at all.

Tostino

a day ago

I'm not GP, but I think it really does matter whether Intel is able to sell this process to other companies. If they're only producing their own chips on it, that's quite a valid criticism.

ajross

21 hours ago

And for the fourth time, it may be a valid "criticism" in the sense of "Does Intel Suck or Rule?". It does not validate the idea that this product release, which introduces the most competitive process from this company in over a decade, is merely a "cost reduction" change.

hnuser123456

5 hours ago

It's only as exciting as a cost reduction because they're playing catch-up by trying to not need to outsource their highest performance silicon. Let me know when Intel gets perf/watt to be high enough to be of interest to Apple, gamers, or anyone who isn't just buying a basic PC because their old one died, or an Intel server because that's what they've always had.

Every single performance figure in TFA is compared to their own older generations, not to competitors.

alecco

a day ago

I really, really want Intel to do well. I like their open oneAPI for unified CPU-GPU programming. It would be nice to have some competition/alternative against NVIDIA and TSMC.

But I won't be investing time and money on Intel again while the same anti-engineering beancounter board is still there. For example, they never owned up to the recent serious Raptor Lake hardware issues, and they never showed clients how this will never happen again.

https://en.wikipedia.org/wiki/Raptor_Lake#Instability_and_de... "Intel has decided not to halt sales or recall any units"

skystarman

a day ago

Great point. This board nearly destroyed one of the world's great tech companies, and they are STILL in charge after not being held accountable or admitting their mistakes over the past decade-plus.

The only reason INTC isn't in a death spiral is because the US Govt. won't let that happen

etempleton

a day ago

They did reshuffle their board a bit after firing Pat to bring in some people with industry and domain expertise and not just academics / outside industry folks.

w-m

a day ago

“With Series 3, we are laser focused on improving power efficiency, adding more CPU performance, a bigger GPU in a class of its own, more AI compute and app compatibility you can count on with x86.” – Jim Johnson, Senior Vice President and General Manager, Client Computing Group, Intel

A laser focus on five things is either business nonsense or optics nonsense. Who was this written for?

pritambarhate

a day ago

It's all the things Apple's processors are excellent at, and AMD is not far behind Apple. So unless Intel delivers on all those things, they can't hope to regain the market share they have lost.

Can't we just focus on everything?

DannyBee

a day ago

I think you mean laser focus on everything. Maybe they have a prism.

simulator5g

a day ago

I’m sure they have something like a prism. Perhaps, a PRISM.

sidewndr46

a day ago

Somewhat ironically, if they were laser focused using infrared lasers, wouldn't that imply the company was not very specific at all? Infrared is something like 700 nm, which would be huge in terms of transistors.

davidmurdoch

a day ago

State of the art lithography currently uses extreme ultraviolet, which is 13.5nm. So maybe they are EUV laser-focused, just with many mirrors pointing it in 5 different directions?
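For what it's worth, the arithmetic behind the joke:

```python
# Wavelengths from the two comments above.
ir_nm, euv_nm = 700.0, 13.5
ratio = ir_nm / euv_nm
print(f"An infrared 'laser focus' is ~{ratio:.0f}x coarser than EUV litho light")
```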

HDThoreaun

a day ago

Well, this is the Consumer Electronics Show, so I would say consumers who are looking at buying laptops.

dudeinjapan

a day ago

Meanwhile they are NOT laser-focusing on doing more of Lunar Lake, with its on-package memory and glorious battery life.

Intel called it a “one-off mistake”, it’s the best mistake Intel ever made.

bryanlarsen

a day ago

Intel is claiming that Panther Lake has 30% better battery life than Lunar Lake.

dudeinjapan

a day ago

Perhaps in a vacuum…

On package memory is claimed to be a 40% reduction in power consumption. To beat actual LL by 30%, it means the PL chip must actually be ~58% more efficient in an apples-to-apples non-SoC configuration.

Possible if they doped PL’s silicon with magic pixie dust.

wtallis

a day ago

> On package memory is claimed to be a 40% reduction in power consumption.

40% reduction in what power consumption? I don't think memory is usually responsible for even 40% of the total SoC + memory power, and bringing memory on-package doesn't make it consume negative power.

phonon

a day ago

Lunar Lake had a 40% reduction in PHY power use by putting memory directly onto the processor package (MoP), roughly going from 3-4 watts to 2 watts.
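Plugging in those figures (the platform power below is my own placeholder, not a quoted number):

```python
# Rough PHY figures quoted above; platform power is a made-up example.
phy_before, phy_after = 3.3, 2.0   # watts
phy_saving = 1 - phy_after / phy_before
print(f"PHY power reduction: {phy_saving:.0%}")   # ~40%

# The battery-life impact depends on what share of total platform power that is.
platform_w = 8.0                   # hypothetical light-load total, watts
life_gain = platform_w / (platform_w - (phy_before - phy_after)) - 1
print(f"Battery life gain at {platform_w} W total: {life_gain:.0%}")
```

Which is why a ~40% PHY saving doesn't translate into anything near a 40% battery-life gain.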

ac29

a day ago

Do you have more information on that? I have a Meteor Lake laptop (pre-Lunar Lake) and the entire machine averages ~4W most of the time, including screen, wifi, storage and everything else. So I don't see how the CPU memory controller can use 3-4W unless it's for irrelevantly brief periods of time.

sbinnee

a day ago

I will wait for actual reviews from users. But I've lost faith in Intel chips.

I was at CES 2024 and saw the Snapdragon X Elite chip running a local LLM (Llama, I believe). How did it turn out? Users cannot use that laptop except for running an LLM. They had no plans for a translation layer like Apple's Rosetta. Intel would be different for sure in that regard, but I just don't think it will fly against Ryzen AI chips or Apple silicon.

ZuLuuuuuu

a day ago

Isn't it a bit of an exaggeration to say that users cannot use Snapdragon laptops except for running LLMs? Qualcomm and Microsoft already have a translation layer named Prism (not as good as Rosetta, but pretty good nevertheless): https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x8...

I agree with losing faith in Intel chips though.

webdevver

a day ago

>Isn't it a bit exaggerating to say that users cannot use Snapdragon laptops except for running LLMs?

I think maybe what OP meant was that the memory occupied by the model meant you couldn't do anything alongside inferencing, e.g. have a compile job or whatever running (unless you unload the model once you're done asking it questions).

To be honest, we could really do with RAM abundance. Imagine if 128GB of RAM became like 8GB of RAM is today: that would normalize local LLM inferencing (or at least make a decent attempt).

Of course you'd need the bandwidth too...
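Rough sizing backs that up; weights dominate the resident footprint. (The helper name and the 20% overhead factor are my own assumptions.)

```python
def model_memory_gib(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough resident-memory estimate for local LLM inference:
    weights plus ~20% for KV cache and runtime overhead (an assumption)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A 70B model at 4-bit quantization needs on the order of 40 GiB:
# comfortable on a 128 GiB machine, hopeless alongside other work on 16 GiB.
print(f"70B @ 4-bit: ~{model_memory_gib(70, 4):.0f} GiB")
print(f"8B  @ 4-bit: ~{model_memory_gib(8, 4):.1f} GiB")
```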

blell

a day ago

Prism is not as good as Rosetta 2? At least Prism supports AVX.

Numerlor

a day ago

Lost faith from what? On x86 mobile, Lunar Lake chips are the clear best for battery life at the moment, and mobile Arrow Lake is competitive with AMD's offerings. The only thing they're missing is a Strix Halo equivalent, but AMD messed that one up and there are like 2 laptops with it.

The new Intel node seems to be somewhat weaker than TSMC's going by the frequency numbers of the CPUs, but what'll matter most in a laptop is real battery life anyway.

aurareturn

a day ago

Lunar Lake throttles a lot. It can lose 50% of its performance on battery. It's not the same as Apple silicon, where the performance is exactly the same plugged in or not.

Lunar Lake is also very slow in ST and MT compared to Apple.

Qualcomm's X Elite 2 SoCs have a much better chance of duplicating the Macbook experience.

Numerlor

a day ago

Nobody is duplicating the macbook experience because Apple is integrating both hardware and os, while others are fighting Windows, and OEMs being horrible at firmware.

LNL should only power throttle when you go to power-saver modes; battery life will suffer when you let it boost high on all cores, but you're not getting great battery life doing heavy all-core loads either way. Overall MT should be better on Panther Lake with the unified architecture, as AFAIK LNL's main problem was being too expensive, so higher-end high-core-count SKUs were served by mobile Arrow Lake. And we're also getting what seems to be a very good iGPU, while AMD's iGPUs outside of Strix Halo are barely worth talking about.

ST is about the same as AMD. Apple being ahead is nothing out of the ordinary since their ARM switch: there's the node advantage, what I mentioned about the OS, and just a better architecture, as they plainly have the best people working on it at the moment.

aurareturn

a day ago

LNL throttles heavily even on the default profile, not just power saver modes.[0]

Meanwhile, Qualcomm's X Elite 1 did not throttle.

Lunar Lake uses TSMC N3 for the compute tile. There is no node advantage. Yet the M4 is 42% faster in ST and the M5 is 50% faster, based on Geekbench 6 ST.

[0]https://www.pcworld.com/article/2463714/tested-intels-lunar-...

Numerlor

7 hours ago

> LNL throttles heavily even on the default profile, not just power saver modes.

This does also show it not changing in other benchmarks, but I don't have an LNL laptop to test things myself, just going off of what people I know have tested. It's still also on balanced, so the best-performance power plan would, I assume, push it to use its cores normally; on Windows laptops I've owned this could be done with a hotkey.

> Lunar Lake uses TSMC N3 for compute tile. There is no node advantage.

LNL is N3B, Apple is on N3E, which is a slight improvement in efficiency.

> Yet, M4 is 42% faster in ST and M5 is 50% faster based on Geekbench 6 ST.

Like I said, they simply have a better architecture at the moment, one which is also more focused on client workloads like GB benchmarks because their use cases are narrower. If you compare something like optimized SIMD, Intel/AMD will come out on top in perf/watt.

And I'm not sure why being behind the market leader would make one lose faith in Intel; their most recent client fuckup was Raptor Lake instability, and I'd say that was handled decently. For now there's nothing else to indicate Windows-on-ARM will get to Apple-level battery performance without all of the vertical integration.

ETA: looking at things, the throttling behaviour seems to be very much OEM-dependent, though the tradeoffs will always remain the same.

WithinReason

a day ago

Their comparisons claim to be performing better than both

glzone1

2 days ago

If they are going to be the most broadly adopted AI platform where does that leave nvidia?

What is the AI PC platform? The experience on windows with windows 11 for just the basic UI of the start menu leaves a lot to be desired, is copilot adoption on windows that popular and does it take advantage of this AI PC platform?

Ryzen AI 400 mobile CPU chips are also releasing soon (though RocM is still blah I think)

Nvidia is still playing in the AI space despite all the noise from others about their AI offerings, and despite Intel hype, Nvidia's margins at least recently have been incredible (i.e., people are still using them), so their platform hasn't yet been killed by Intel's "most broadly adopted" AI platform offering.

Traster

a day ago

Firstly,

>Series 3 will be the most broadly adopted and globally available AI PC platform Intel has ever delivered.

The true competitor is Ryzen AI, Nvidia doesn't produce these integrated CPU/GPU/AI products in the PC segment at all.

zamadatix

a day ago

How broad your PC AI hardware adoption is matters little when the overwhelming majority of users use cloud hosted AI.

zapnuk

a day ago

I assume it's still x86-64?

What actually makes it an AI platform? Some tight integration of an intel ARC GPU, similar to the Apple M series processors?

They claim 2-5x performance for some AI workloads. But aren't they still limited by memory? The same limitation as always in consumer hardware?

I don't think it matters much whether you're limited by an Nvidia GPU with ~16GB max or some new Intel processor with similar memory.

Nice to have more options though. Kinda wish the Intel Arc GPU would be developed into an alternative for self-hosted LLMs. 70B models can be quite good but are still difficult/slow to use self-hosted.

vbezhenar

a day ago

These processors have NPU (Neural Processing Unit) which is supposed to accelerate some small local neural networks. Nvidia RTX GPUs have much more powerful NPUs, so it's more about laptops without discrete GPU.

distances

a day ago

And as far as I can see, it's a total waste of silicon. Anything running on it will be so underpowered that it doesn't matter anyway. It'd be better to dedicate the transistors to the GPU.

The latest Ryzen mobile CPU line didn't improve performance compared to its predecessor (the integrated GPU is actually worse), and I think the NPU is to blame.

wtallis

a day ago

If you ask NVIDIA, inference should always run on the GPU. If you ask anybody else designing chips for consumer devices, they say there's a benefit to having a low-power NPU that's separate from the GPU.

dragonwriter

a day ago

Okay, yeah, and those manufacturers’ opinions are both obvious reflections of market position independent of the merits, what do people who actually run inference say?

(Also, the NPUs usually aren't any more separate from the GPU than tensor cores are separate from an Nvidia GPU, they are integrated with the CPU and iGPU.)

zozbot234

a day ago

If you're running an LLM there's a benefit in shifting prompt pre-processing to the NPU. More generally, anything that's memory-throughput limited should stay on the GPU, while the NPU can aid compute-limited tasks to at least some extent.

The general problem with NPUs for memory-limited tasks is either that the throughput available to them is too low to begin with, or that they're usually constrained to formats that will require wasteful padding/dequantizing when read (at least for newer models) whereas a GPU just does that in local registers.
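The compute-vs-memory split above can be made concrete with a rough arithmetic-intensity estimate; here's a sketch where the model size, quantization, and FLOP-counting rule are all illustrative assumptions, not figures from this thread:

```python
# Rough roofline-style estimate of why LLM prefill is compute-bound
# (NPU-friendly) while decode is memory-bandwidth-bound.
# All numbers are illustrative assumptions.

def arithmetic_intensity(params_b: float, tokens: int) -> float:
    """FLOPs per byte of weights read for one forward pass over `tokens`.

    Approximation: ~2 FLOPs per parameter per token, weights stored as
    1 byte each (int8), and the weights are streamed once per pass.
    """
    flops = 2 * params_b * 1e9 * tokens
    bytes_read = params_b * 1e9  # one pass over the weights
    return flops / bytes_read

# Prefill processes the whole prompt in one pass; decode does 1 token/pass.
print(arithmetic_intensity(params_b=7, tokens=512))  # prefill: 1024.0 FLOPs/byte
print(arithmetic_intensity(params_b=7, tokens=1))    # decode: 2.0 FLOPs/byte
```

At ~2 FLOPs/byte, decode speed is set almost entirely by memory throughput, which is why the NPU helps most on the prefill side.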

Spellman

a day ago

Depends on how big the NPU is and how much power/memory the inference model needs.

gambiting

a day ago

But like.....what for example. As a normal windows PC user, what kind of software can I run that will benefit from that NPU at all?

KeplerBoy

a day ago

We don't ask that question. In reality everything is done in the cloud. Maybe they package some camera app that applies snapchat-like filters with NPUs, but that's about the extent of it.

Jokes aside: they really seem to do some things like live captions and translations. Pretty sure you could also do these things on the iGPU or CPU at a higher power draw.

https://blogs.windows.com/windows-insider/2024/12/18/releasi...

justin66

a day ago

They're going to find a way to accelerate the Windows start menu with it.

mminer237

a day ago

Oh boy, instead of building an efficient index or optimizing the start menu or its built-in web browser, they're adding more power usage so the computer can randomly guess what I want returned, since they still can't figure out how to return search results for what you actually typed.

pjmlp

a day ago

It is another way Microsoft has tried to cater to OEMs as a means of bringing PC sales back to the glory days of exponential growth, especially under the Copilot+ PC branding, nowadays still siloed into Windows on ARM.

In fairness NPUs can use less hardware resources than a general purpose discrete GPU, thus better for laptop workloads, however we all know that if a discrete GPU is available, there is not a technical reason for not using it, assuming enough local memory is available.

Ah, and NPUs are yet another thing that GNU/Linux folks would have to reverse engineer as well, as on Windows/Android/Apple OSes they are exposed via OS APIs, and there is yet no industry standard for them.

kakacik

a day ago

1) tick AI checkbox

2) ???

3) profit

KeplerBoy

a day ago

Are we calling tensor cores NPUs now?

Marsymars

a day ago

How did we end up with Tensor Cores and a Tensor SoC from two different companies?

mrguyorama

20 hours ago

The same way we ended up with both Groq and Grok branded LLMs

Maybe these people aren't that creative....

phkahler

a day ago

Clock speed? Hyperthreading? AVX512? APX?

aseipp

a day ago

No AVX512; client SKUs are just going to go straight to APX/AVX10, and those are confirmed for Nova Lake, which is 2H 2026 (it will probably be "Core Ultra Series 4" or whatever, I guess).

cubefox

a day ago

Any speculation on what the equivalent TSMC node is for Intel 18A?

ytch

a day ago

https://x.com/Kurnalsalts/status/1962173515815424003

Logic density (may be inaccurate, and it's not the only metric for performance): Rapidus 2nm ≈ TSMC N2 > TSMC N3B > TSMC N3E/P > Intel 18A ≈ Samsung 3GAP

But 18A/20A already have PowerVia, while TSMC will implement backside power delivery in A16 (the next generation after N2).

aurareturn

a day ago

So 18A is roughly TSMC N4P. N4P is part of the N5 family.

etempleton

a day ago

18A supposedly has some advantages in power efficiency and some other areas compared to TSMC's approach. Ultimately, TSMC doesn't have a 2nm product yet, so it is a pretty big deal that Intel is competitive again with TSMC's latest. Samsung is incredibly far behind at this point.

aurareturn

9 hours ago

TSMC commenced N2 mass production last month.

tuananh

a day ago

tsmc n2 i think (2nm)

tromp

a day ago

Closer to N3 or N5, I would think. Intel's node numbering is far more aspirational than TSMC's.

mhh__

a day ago

If it's still being updated, the Wikipedia article about semiconductor fabrication has a table with some reasonably comparable numbers (when known) for Intel X and TSMC Y.

If that is the case, it's interesting how Intel managed to catch up so quickly.

chasil

a day ago

After their 7/10nm delay, they are bringing 2nm into production.

They skipped 5nm and 3nm, and that is indeed an accomplishment.

I hope the yields are high.

aurareturn

a day ago

They didn't skip it.

They have Intel 7, Intel 4, and Intel 3 nodes. Anyway, Intel's node names do not correspond to the same numbers at TSMC; they're usually 1 or 1.5 generations behind the same-numbered TSMC node.

So Intel 3 would be something like TSMC N6.

Dylan16807

a day ago

Wow, I didn't realize Intel was slacking on top of their node rename. Sure their "10nm" was ahead at the time, but if they'd left the numbers alone they'd be a much closer match to everyone else today instead of even further off.

sandGorgon

a day ago

this doesnt have integrated ram like lunar lake right ?

klardotsh

a day ago

Nearly all modern SoCs have built-in RAM now. Apple Silicon does it, AMD Strix Halo and beyond do it, Intel Lunar Lake does it, most ARM SoCs from vendors other than Apple do it…

Now, unified memory shared freely between CPU and GPU would be cool, like Apple and AMD SH have, if that’s what you meant.

wtallis

a day ago

AMD Strix Halo does not have on-package RAM. What makes it stand out from other x86 SoCs is that it has more memory channels, for a total of a 256-bit wide bus compared to 128-bit wide for all other recent consumer x86 processors.

Qualcomm's laptop chips thus far have also not had on-package RAM. They have announced that the top model from their upcoming Snapdragon X2 family will have a 192-bit wide memory bus, but the rest will still have a 128-bit memory bus.

Intel Lunar Lake did have on-package RAM, running at 8533 MT/s. This new Panther Lake family from Intel will run at 9600 MT/s for some of the configurations, with off-package RAM. All still with a 128-bit memory bus.
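For context, the peak bandwidth implied by a given bus width and transfer rate is straightforward to compute. A quick sketch (the Strix Halo transfer rate of 8000 MT/s is my assumption, not a figure from this comment):

```python
def peak_bandwidth_gbps(bus_bits: int, mtps: int) -> float:
    """Theoretical peak DRAM bandwidth in GB/s.

    bus_bits: total memory bus width in bits
    mtps: transfer rate in mega-transfers per second (MT/s)
    """
    return bus_bits / 8 * mtps / 1000  # bytes per transfer * giga-transfers/s

print(peak_bandwidth_gbps(128, 8533))  # Lunar Lake: ~136.5 GB/s
print(peak_bandwidth_gbps(128, 9600))  # Panther Lake: ~153.6 GB/s
print(peak_bandwidth_gbps(256, 8000))  # Strix Halo (assumed 8000 MT/s): 256 GB/s
```

This is why the 256-bit bus matters so much for iGPU and local-LLM workloads: it roughly doubles the ceiling regardless of how fast the individual chips run.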

notenlish

a day ago

Aren't Strix Halo and Apple's M series a bit different? IIRC you need to choose how much RAM will be allocated to the iGPU, whereas on the Mac it is all handled dynamically.

Tsiklon

a day ago

On the Mac it's all dynamically handled.

With Strix Halo there are two ways of going about it: either set how much memory you want allocated to the GPU in the BIOS (less desirable), or set the GPU memory allocation to 512MB in the BIOS and let the driver do it all dynamically, much like on a Mac.

adgjlsfhk1

a day ago

Strix also does it dynamically, just with a limit (which is generally set to ~75% of your total RAM).

usagisushi

a day ago

No, it's more like a power-efficient Arrow Lake, with fewer P-cores and more LE-cores. (e.g. P+E+LE: AL 6+8+2 vs. PL 4+8+4)

edit: fix typo

kleinmatic

a day ago

I wonder how much of the funding that led to this came from the Biden-era CHIPS and Science Act? I can't find a straight answer amid the AI slop and marketing hype about both of them.

Update: Looks like the Trump admin converted billions in unpaid CHIPS Act grants into an equity stake in Intel last year https://techhq.com/news/intel-turnaround-strategy-panther-la...

Xe3 GPU could be super super super great. Xe2 is already very strong, and this could really be an incredible breakout moment.

The CPUs are probably also fine!

Intel is so far ahead on consumer multi-chip. AMD has done amazing work with its IOD+CCD (I/O die / core complex die) chiplet split (basically putting a northbridge on the package), but is still trying to figure out how, in 2027's Medusa Point, to make a decent mainline multi-chip APU; it can't keep pushing monolithic APU dies like it has (excellent as they've been). Intel has already been breaking the work up with its sweet EMIB, and hopefully is reaping the reward here. Stashing some tiny, very-low-power cores on the "northbridge" die is a genius move that saves incredible power for light use: a big+little+tiny design that lets the whole CCD shut down while work happens. Some very nice high-core-count configs too. Panther Lake could be super exciting.

18A with backside power delivery / "PowerVia" could really be a great leap for Intel! Nice big solid power delivery wins, that could potentially really help. My fingers are so very crossed. Really hope the excitement for this future arriving pans out, at least somewhat!

Their end-of-year Nova Lake with b(ig)LLC and an even bigger, newer NPU6 (any new features beyond TOPS?) is also exciting. I hope that also includes the incredible Thunderbolt/USB4 connectivity Intel has typically included on mobile chips, but I'm not holding my breath. Every single mobile part is capable of 4x Thunderbolt 5. That is sick. I really hope AMD realizes the ball is in its court on interconnects at some point!! 20-lane PCIe configs are also very nice to have for mobile.

Lunar Lake was quite good for what it was: an amazingly well-integrated chip with great characteristics. As a 2+4 big/little part it wasn't enough for developers, but it was a great consumer chip. I think Intel's really going to have a great total system design with Panther Lake. Yes!

https://www.tomshardware.com/pc-components/cpus/intel-double...

JohnBooty

a day ago

Yes. It's one of those things where even if you will never buy an Intel product, everybody in the world should be rooting for Intel to produce a real winner here.

Healthy Intel/GF/TSMC competition at the head of the pack is great for the tech industry, and the global economy at large.

Perhaps even more importantly, with armed conflict looming over Taiwan and TSMC... well, enough said.

wmf

a day ago

For a laptop chip the optimal design is a single die. Apple, Qualcomm, and AMD agree on this. Chiplets are a last resort when you can't afford a single die due to yield or mask costs.

It feels like a "true until it's not" problem.

Yes, you do need to spend more energy sending data between chiplets. Intel has been relentlessly optimizing that and is probably furthest ahead of the game there, with EMIB and Foveros. AMD just got to a baseline sea-of-wires, where they aren't using power-hungry PHYs to send data, and that is only shipping on Strix Halo at the moment, slated to be a big change for Zen 6. But Intel's been doing all that and more, IMO. https://chipsandcheese.com/p/amds-strix-halo-under-the-hood https://www.techpowerup.com/341445/amd-d2d-interconnect-in-z...

That also has some bandwidth constraints on your system too.

There's also the labor cost of doing package assembly! Very non-trivial, very scary, very intimidating work. Knowing that TSMC's Arizona chips have to be shipped back to Taiwan, assembled/packaged there, then potentially shipped wherever is anecdata, but a very real example. This just makes me respect Intel all the more for having such interesting chips, such as Lakefield ~6 years ago, and for their ongoing pursuit of this challenge.

So yeah, there are many optimal aspects to a single die. You're making a problem really hard by trying to split it up across chips.

It's not even clear why we want multi-chip. As a consumer, if you had your choice, yes, you are right: we do want one big huge slab of a chip. There aren't many structural advantages for us in getting anything other than what we want, on one big chip.

And yet. Your cost savings can potentially be fantastically huge. Yield increases as die area shrinks, roughly exponentially under classic defect-density models. Being able to push more advanced nodes that don't yet have the best yields, without it being an epic fail, allows for ongoing innovation and risk acceptance.
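The yield-vs-area tradeoff can be sketched with the classic Poisson defect model; the defect density and die sizes below are made-up illustrative numbers, not anything from Intel or AMD:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of defect-free dies under the Poisson model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

# At the same defect density, splitting one 400 mm^2 monolithic die into
# four 100 mm^2 chiplets raises the per-die yield substantially.
d0 = 0.2  # defects per cm^2 (assumed)
print(round(poisson_yield(400, d0), 3))  # monolithic: 0.449
print(round(poisson_yield(100, d0), 3))  # chiplet: 0.819
```

Real fabs use fancier models (Murphy, negative binomial), but the direction is the same: the yield penalty grows exponentially with die area, which is the whole economic case for chiplets on immature nodes.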

There are modularity dividends. You can also tune appropriately: just as AMD keeps reusing the IOD across generations, Intel can innovate one piece at a time. This again is extremely liberating from a development perspective: you don't have to get everything totally right, and you can absorb faults not in the wafer but at the design level. Maybe the new GPU isn't going to ship in six months after all, so you keep using the old one, but you can still get the rest of the upgrades out.

There are maybe some power wins. I don't really know how much difference it makes, but Intel just shutting down their CCD and using the tiny cores on the IOD (to use AMD's terms) is relishably good. It's easy for me to imagine a big NPU or a big GPU doing likewise. I'm expecting similar from AMD with Medusa Point, their 2027 big APU (but still below Medusa Halo, which I cannot frelling wait to see).

I think Intel's been super super smart, with incredible vision about where chipmaking is headed, and has been well ahead of the curve. Alas, their P-core has been around in one form or another for a long time and is a bit of a hog, and shipping new nodes has been a disaster. But I think they're set up well, and, as frustrating and difficult as leaving the convenience of a big-chip APU is, it feels like that time is here, and Intel's top of the class at multi-chip in a way few others are. We're seeing AMD have to do the same (Medusa Point).

Optimal is a suboptimal statement. Only the Sith deal in absolutes, Anakin.

x86? max 96GB RAM? is this a joke?

wtallis

a day ago

It's 96 GB max when using LPDDR5, or 128 GB when using DDR5. These are consumer chips with the same 128-bit memory bus width that x86 consumer chips have been using for many years, and this is a laptop-specific product line so they're not trying to squeeze in as many ranks of memory as possible.

etempleton

a day ago

This is a laptop-specific product. The next desktop variant will come later in 2026 or 2027, and I imagine that will support more RAM.

fancyfredbot

a day ago

Two things stand out to me:

1) Battery life claims are specific and very impressive, possibly best in class.

2) Performance claims are vague and uninspiring.

Either this is an awful press release or this generation isn't taking back the performance crown.

daneel_w

a day ago

Is Intel's 18A (~2nm) their own hardware or did they acquire ASML equipment for this plant?

Intel never made EUV machines, never claimed to make them, never aspired to make them, and has run multiple marketing campaigns bragging about the ASML EUV machines it purchased.

wtallis

a day ago

And even prior to EUV, Intel didn't make their own lithography tools.