Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake

41 points, posted 10 hours ago
by gavi

64 Comments

ceronman

10 hours ago

Take it with a grain of salt. Other reviewers such as Hardware Canucks [1] have mentioned that they have not been able to get such long runtimes. Their numbers are closer to 15 hours.

[1] https://www.youtube.com/watch?v=CxAMD6i5dVc

causal

9 hours ago

The type of test definitely matters but 15 hours ain't too shabby either

belval

9 hours ago

Yeah, especially since my ~12-hour XPS does maybe 7-8 hours in typical usage. Going from 24 to 15 hours seems roughly par for the course.

user

10 hours ago

[deleted]

barbegal

10 hours ago

Battery life is based on playing a 720p video file so most of the expended power will be in the video decoder and screen not the actual CPU. It's also dependent on the battery size in the laptop being tested so pretty much impossible to compare on any like for like basis.
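
One way to get closer to a like-for-like view is to normalize each measured runtime by its pack size, which gives an average system draw in watts. A minimal sketch; the capacity and runtime figures in it are illustrative, not taken from the article:

    // Average system draw (W) = battery capacity (Wh) / measured runtime (h).
    function averageDrawWatts(capacityWh: number, runtimeHours: number): number {
      return capacityWh / runtimeHours;
    }

    // Illustrative values: a 53 Wh pack lasting 15.7 h vs a 72 Wh pack lasting 23.8 h.
    console.log(averageDrawWatts(53, 15.7).toFixed(2) + " W"); // "3.38 W"
    console.log(averageDrawWatts(72, 23.8).toFixed(2) + " W"); // "3.03 W"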

ac29

9 hours ago

Yep, also putting all of the laptops at "50%" brightness isn't an equal comparison. One panel's 50% might be the same brightness as another's 100%.

user

9 hours ago

[deleted]

xena

9 hours ago

Where is the video decoder unit located though?

cogman10

9 hours ago

On the CPU, to be sure. However, low-power video decoders are really nothing new for Intel or AMD.

Also, the codec being decoded matters a lot. H.264 is trivial to decode at this point, while something like AV1 will require a bit more power to decode.
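
If you want to check whether a given codec will even hit the fixed-function decoder on your machine, WebCodecs exposes a capability query from the browser. A rough sketch, assuming a browser that implements WebCodecs (the codec strings are just common example profiles):

    // Ask the browser whether hardware-preferred decode is reported as supported.
    // "avc1.42E01E" = H.264 baseline, "av01.0.04M.08" = AV1 main profile (examples).
    async function hasHardwareDecode(codec: string): Promise<boolean> {
      const { supported } = await VideoDecoder.isConfigSupported({
        codec,
        hardwareAcceleration: "prefer-hardware",
      });
      return supported === true;
    }

    hasHardwareDecode("avc1.42E01E").then(ok => console.log("H.264 hw decode:", ok));
    hasHardwareDecode("av01.0.04M.08").then(ok => console.log("AV1 hw decode:", ok));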

VincentEvans

9 hours ago

Spot on. Why not use a more or less standard CPU benchmark? The publication is either being strangely incompetent at this, or deliberately misleading.

BugsJustFindMe

9 hours ago

Success despite incompetence isn't that strange.

SushiHippie

9 hours ago

I'd love to know how long the Acer Swift Go 14 with the AMD Ryzen would have lasted with the same battery size.

The Acer Swift Go 14 has a 53 Wh battery and, for example, the Asus Zenbook S 14 has a 72 Wh battery. Would this mean the Ryzen laptop could have lasted ~1.35 times as long as it did, i.e. roughly 21:15 instead of 15:40?
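
A quick sanity check of that scaling, assuming runtime scales linearly with capacity (it won't exactly, since a bigger pack adds weight and the workloads differ):

    // Scale the measured runtime by the battery-capacity ratio (linear assumption).
    function scaledRuntimeHours(measuredHours: number, fromWh: number, toWh: number): number {
      return measuredHours * (toWh / fromWh);
    }

    const measured = 15 + 40 / 60; // 15:40 measured on the 53 Wh Swift Go 14
    console.log(scaledRuntimeHours(measured, 53, 72).toFixed(2)); // "21.28", i.e. roughly 21:17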

jmakov

10 hours ago

But will the CPU last more than 2 months?

OptionOfT

10 hours ago

Honestly, I'll believe it if I have one in my hands.

So far I have a Surface Laptop 7 with the Snapdragon which has blown me away in terms of battery life. I'm talking 10+ hours watching video on full brightness.

And I'll happily sacrifice 50% of that for a better screen and a more powerful GPU.

The worst part of Windows 11 on ARM today is that `brew` is not supported under WSL.

JodieBenitez

10 hours ago

Ah... can't wait to see the next generation of "macbook killers".

user

9 hours ago

[deleted]

Sakos

6 hours ago

The most interesting part to me is the Cyberpunk 2077 performance compared to Snapdragon and AMD. It's surprisingly good and considering Strix Point pricing, Intel might be the next sure bet for the Steam Deck 2. If not with this, then maybe with Panther Lake (Intel 18A).

user

10 hours ago

[deleted]

PaulHoule

10 hours ago

... and Microsoft (probably Linux too) will throw it all away with some tiny coding mistake.

talldayo

10 hours ago

Probably not, Intel does a pretty decent job regulating power consumption as long as you don't modify the power profiles or fuck with ACPI. Distributions or third-party nagware might ruin the battery, but that goes for any laptop that's required to install Teams or McAfee.

Running powertop on my i7-6600U reports an idle draw of 5.5 W with my browser open playing music. I think that's pretty damn good for an old-ish laptop with mitigations enabled on Linux.
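
For a rough sense of what that draw means for runtime, capacity divided by draw gives hours; the 55 Wh pack below is a made-up figure for illustration, not my machine's actual battery:

    // Hours of light use ≈ battery capacity (Wh) / average draw (W).
    const idleDrawWatts = 5.5; // powertop reading quoted above
    const batteryWh = 55;      // hypothetical capacity, for illustration only
    console.log((batteryWh / idleDrawWatts).toFixed(1) + " hours"); // "10.0 hours"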

PaulHoule

10 hours ago

My understanding is that ACPI handling code is more complex than any OS pre-1990...

talldayo

7 hours ago

They don't call it "advanced" for nothin'.

FWIW the alternative these days is probably running a 32-bit RTOS on your battery hardware which isn't much simpler.

mschuster91

9 hours ago

> Probably not, Intel does a pretty decent job regulating power consumption as long as you don't modify the power profiles or fuck with ACPI.

That assumes the motherboard vendor implemented ACPI properly, and most don't; you're often lucky to have something that can keep Windows alive enough to pass certifications. Linux, xBSD or macOS compatibility? Forget it, unless you're talking servers with support contracts that make it actually worth the effort for manufacturers.

StillBored

8 hours ago

Sure, there are ACPI bugs, and they hit Windows too. But the claim that there are "works on Windows but not Linux" bugs everywhere is bogus. If it works in Windows, it should work in Linux too; that is the point of a platform power interface like ACPI.

When one actually looks at why battery life is worse on Linux, or why the machine doesn't resume properly, overwhelmingly what one finds are Linux bugs: the distro won't ship accelerated video codecs, hibernate is broken because the kernel refuses to support it with Secure Boot on, the Wi-Fi or GPU drivers have bugs in their suspend/resume paths, or even simpler things like there being no clear project/owner responsible for detecting seat activity and making power-related decisions. Sure, the freedesktop/dbus/systemd interfaces are there, but often a WM or distro will replace one of those components and create another set of bugs. Never mind that desktop Linux still doesn't have the concept of a foreground/background application split, so even if it wants to apply scheduling hints indicating that a minimized chat application shouldn't be consuming 100% of a CPU in the background, no such standardized component exists unless one is running Android. systemd rightly gets a lot of shit, but it has standardized parts of this functionality, for example how an application requests a screen inhibit. Never mind that it takes 2-3 years from the point a machine is released before all the driver tweaks etc. land upstream and trickle down to the average user's machine.

Bottom line: Linux on laptops only really works because of ACPI. If you were wondering what the laptop space looks like without it, I might suggest grabbing a couple of random ARM laptops/Chromebooks and giving them a spin.

PaulHoule

9 hours ago

I don't trust anything in power management to work right on Windows (or Linux). It is way better than it was in 2000, when I was working with a business guy who had a Windows laptop that needed several hard reboots a day.

Right now I am listening to music on this computer while I work on another computer. When the screen turns off on this computer, the headphones get switched to headset (telecommunications) mode and the music quality goes down. This is a desktop computer where saving power is not a big issue, and I certainly don't want worse audio just because the screen powered off.

It's been a long-term issue that USB devices will not work properly if USB power management is enabled, and that's scary when some of those devices are mass-storage devices that could corrupt data.

As a single vendor, I trust Apple better to get things like this right, but I don't particularly like MacOS.

tester756

10 hours ago

Where are all those people who for years (or since M1) were claiming that x86 is dead because the ARM ISA (magically) offers significantly better energy efficiency than the x86 ISA?

Of course they ignored things like node advantage, but who cares? ;)

Meanwhile, industry veterans were claiming something different, and it turns out they were right:

https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...

Asking which of x86 or ARM is faster/more energy efficient is like asking which syntax (letters) is faster: that of Rust, Java or C++?

And same as with CPUs, everything is up to the implementation: compiler, runtime/VM, libraries, etc.

UniverseHacker

10 hours ago

I'm curious how this new chip actually compares in power consumption to the Apple CPUs. Certainly at the time, when I went from an x86 MacBook to an M1, the ability to really work all day on fairly compute-intensive stuff, e.g. on an airplane or a park bench with no AC, was revolutionary.

com2kid

9 hours ago

> is like asking which syntax (letters) is faster - syntax of Rust, Java or C++?

This is actually a bad example because C-style decls are provably, objectively bad. They make parsing harder and, once the types are non-trivial, they are absurdly hard to read and write. The case in point is non-trivial function pointers in C. The syntax for declaring a function pointer of a type that returns a function pointer is hideous.

Meanwhile, here is how you define a function that returns a function that returns a string in a modern language (TypeScript):

    type NumberToStringFunction = (num: number) => () => string;
Compare that to C:

    typedef char* (*(*NumberToStringFunction)(int))(void);
> And same as with CPUs - everything is up to the implementation

It is very easy to add CPU features that place a hard limit on performance. That is why Arm64 dropped all the conditional stuff! (Any instruction that limits the potential branch prediction is going to severely impact potential CPU performance.) History is littered with failed CPU architectures that just couldn't scale up, Intel's famous folly being Itanium.

That said, x86 is mature and it dropped some of its less pleasant aspects with x64.

Also, IMHO drivers matter more for laptops than the CPU does. A bad driver keeping the GPU on without need, or just not being able to enter the lower sleep states in general, will kill the battery faster than anything else.

tester756

8 hours ago

>This is actually a bad example because C-style decls are provably, objectively bad. They make parsing harder and, once the types are non-trivial, they are absurdly hard to read and write. The case in point is non-trivial function pointers in C. The syntax for declaring a function pointer of a type that returns a function pointer is hideous.

The example is good; I just don't understand why you focused on compiler performance or developer experience. It doesn't determine a program's performance.

We were talking purely about the performance/energy efficiency of the generated binary, not other RELEVANT things like developer experience or low compilation times, because those are outside the scope of this discussion.

Yes, C++ is a poorly designed language, but the point that syntax (letters) doesn't determine a language's performance stands. The result is up to the implementation: compiler, runtime/VM, std libs, etc.

com2kid

7 hours ago

Ah, I didn't understand that your "syntax (faster)" referred to the compiler speed; I thought you were referring to the speed of development/engineering!

chipdart

9 hours ago

> This is actually a bad example because C style decls are provably, objectively, bad.

I found your comment extremely funny. You singled out the language which is not only one of the most popular languages ever designed but also the one whose syntax inspired, or was outright cloned by, the bulk of the world's most popular programming languages.

Yes, the programming language defined in the 70s isn't perfect and has a couple of kinks that could be improved. As does every single programming language.

But when you see a curly-bracket language, you see programming language designers yelling to the world that C got way more things right than any other language not derived from C, which is the exact opposite of your unbelievable claim.

com2kid

9 hours ago

> Yes, the programming language defined in the 70s isn't perfect and has a couple of kinks that could be improved. As does every single programming language.

Back in the 70s, they didn't know as much about writing parsers as we do now. The field was much younger.

There is a reason that Go's declaration syntax is not based on C's, despite Go being created by one of the co-creators of C.

> which is the exact opposite of your unbelievable claim.

I'm not making claims, I'm stating facts. Parsing C declarations is difficult. Writing C declarations is difficult. Reading non-trivial C declarations is difficult. For examples of this in action read https://en.wikipedia.org/wiki/Most_vexing_parse.

C++'s horrific syntactic complexity is the ultimate end of C-style decls. C++'s syntax is, again, objectively overly complicated for what it does. The reason it is overly complicated is the syntax it inherited from C.

C got some things right, mostly around ease of porting to different architectures. However, 50 years later we know a lot more about how to design programming languages.

C is not perfect, and even when it was created, better-designed programming languages existed. However, C was cheap and good enough, and because it was easy to port, it spread to lots of platforms.

FWIW, I started my career in C/C++ compilers, so I have first-hand experience with these topics. I love C as a language, but I also acknowledge it is not perfect. The machines it was designed to run on in the 70s were not the height of computer engineering, and C is not the height of programming language design.

baq

9 hours ago

Millions of flies can’t be wrong.

Or, more politely, a primer on how to confuse first-mover advantage with being any good.

C is abysmal by today's standards; it being the first popular language for weak computers is literally what's keeping it popular. Popularity is very valuable, but please don't confuse it with being good.

Dylan16807

9 hours ago

> your unbelievable claim

Please quote the specific claim you think is unbelievable.

(I assume you don't mean the declarations claim because you didn't even disagree with it.)

com2kid

9 hours ago

I'm presuming the poster either never took a compilers course, never had to write a parser, slept through their entire CS curriculum, or doesn't have a background in CS. (Which to be clear, doesn't have to involve a degree! The people who invented Computer Science sure as heck didn't have CS degrees!)

People forget that computer science has actual foundations built upon objective, scientifically backed work in multiple disciplines, including mathematics, linguistics, and philosophy.

We use those foundations to build the tools that we then use to engineer software. It is one of the few fields where practitioners are expected to know both the foundational theories underpinning the tools they build, as well as then use those tools to create absurdly complex projects.

Sadly some people skip over the theory part.

Dylan16807

9 hours ago

I don't think a lot of theory went into most of C's syntax. History is important but it doesn't make designs good.

And if we really want to get into how much C was copied, that's more because of familiarity than correctness. Look how many languages copied C's goofy precedence for bitwise operations, which was only there because they made a syntax change and didn't want to disrupt a few existing programs.

But all this aside, you didn't answer the question at all. What was their unbelievable claim?

com2kid

9 hours ago

> I don't think a lot of theory went into most of C's syntax. History is important but it doesn't make designs good.

Agreed. C is just what sort of worked at the time. 50 years on, we can do better. Theory is what lets us do so in a scientific way, instead of just throwing stuff at the wall and seeing what works.

Although, there is something to be said for testing out new syntax and seeing how it feels in the real world!

superjan

9 hours ago

It annoys me too, especially when the explanation is: "because RISC simpler so it is faster, duh". However, x64 is at a disadvantage for instruction decoding (it's difficult to decode multiple variable-length instructions in parallel), and secondly for its guarantees on memory ordering across cores. Both come with an extra burden Intel can't eliminate for backward compatibility.

baq

9 hours ago

And this disadvantage quantified translates to… 0.1% area, performance, perf/watt, something else?

ceronman

10 hours ago

x86 is certainly not dead, and I don't think it will be anytime soon, but they are still behind the Apple M3 in terms of performance per watt. And the M4 is about to arrive. I'm a bit disappointed because I really want more competition for Apple, but they're just not there yet, neither x86 nor Qualcomm.

grues-dinner

9 hours ago

Apple will also run into diminishing returns, but they will retain the real killer advantage over general-purpose CPU vendors that they have in other hardware areas: being able to unilaterally retire or rework old or misconceived parts of the architecture entirely in future versions. If they want to drop M1 support in some future macOS version, all it will take is a WWDC announcement that the next version simply won't work on that generation or earlier of machines.

adamc

9 hours ago

Yeah, that all makes sense. But having legacy software that you support has been a big advantage for x86 for, basically, forever. For a lot of purposes, that is way more important than performance per watt.

2OEH8eoCRo0

9 hours ago

I think they're running out of optimizations (hacks). They moved memory on package and bought the latest TSMC node. I guess they could keep buying the latest node but I don't expect any more large leaps.

TrainedMonkey

10 hours ago

I am going to observe that competition is good for consumers. However, using video playback for battery life / efficiency is sketchy, as all modern chips have specialized hardware for video decoding. Interestingly, Apple also uses video playback as a proxy for battery life... probably for the same make-the-number-go-bigger reason.

h0l0cube

9 hours ago

> (or since M1)

The bigger deal about the M-series performance and efficiency is the SoC design, not the ISA. This is something that could take off in the x86 world, though it stifles upgradeability.

noncoml

9 hours ago

"is like asking which syntax (letters) is faster - syntax of Rust, Java or C++"

Language syntax does affect the speed of the parser

tester756

8 hours ago

Correct, but I meant the performance of the generated program.

noncoml

6 hours ago

I got that, but it's not a good analogy. It's the parser that is to the language what the CPU is to the instruction set

senttoschool

10 hours ago

Actually, Apple's M3 and even Qualcomm's X Elite are significantly ahead of the new Intel chip in raw performance and especially perf/watt.

Cinebench R24 ST[0]:

* M3: 12.7 points/watt, 141 score

* X Elite: 9.3 points/watt, 123 score

* Intel Ultra 7 258V (new): 5.36 points/watt, 120 score

* AMD HX 370: 3.74 points/watt, 116 score

* AMD 8845HS: 3.1 points/watt, 102 score

* Intel 155H: 3.1 points/watt, 102 score

Cinebench R24 MT[0]:

* M3: 28.3 points/watt, 598 score

* X Elite: 22.6 points/watt, 1033 score

* AMD HX 370: 19.7 points/watt, 1213 score

* Intel Ultra 7 258V (new): 17.7 points/watt, 602 score

* AMD 8845HS: 14.8 points/watt, 912 score

* Intel 155H: 14.5 points/watt, 752 score

PCMark did a battery life comparison using identical Dell XPS 13s[1]:

* X Elite: 1,168 minutes, performance of 204,333 in Procyon Office

* Intel Ultra 7 256V (new): 1,253 minutes, performance of 123,000 in Procyon Office

* Meteor Lake 155H: 956 minutes, performance of 129,000 in Procyon Office

Basically, Intel's new chip gets 7% more battery life than the X Elite, but the X Elite is 66% faster while on battery. In other words, Intel's new chip throttles heavily to get that battery life.
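
For anyone checking the arithmetic, both percentages fall straight out of the figures above:

    // Derived from the Dell XPS 13 numbers quoted above.
    const xElite = { minutes: 1168, procyonOffice: 204333 };
    const ultra7 = { minutes: 1253, procyonOffice: 123000 };

    const runtimeGain = ultra7.minutes / xElite.minutes - 1;          // ~0.073 -> ~7% longer runtime
    const perfGain = xElite.procyonOffice / ultra7.procyonOffice - 1; // ~0.66  -> ~66% faster on battery

    console.log((runtimeGain * 100).toFixed(1) + "%"); // "7.3%"
    console.log((perfGain * 100).toFixed(1) + "%");    // "66.1%"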

> Of course they ignored things like node advantage, but who cares? ;)

Intel's new chip is using TSMC's N3B in the compute tile, same as the M3 and better than the X Elite's N4P.

> Where are all those people who for years (or since M1) were claiming that x86 is dead because the ARM ISA (magically) offers significantly better energy efficiency than the x86 ISA?

I'm still here.

------

[0]Data for M3, X Elite, AMD, Meteor Lake taken from the best scores available here: https://www.notebookcheck.net/AMD-Zen-5-Strix-Point-CPU-anal...

[0]Data for Core Ultra 7 taken from here: https://www.notebookcheck.net/Asus-Zenbook-S-14-UX5406-lapto...

[1]https://youtu.be/QB1u4mjpBQI?si=0Wyf-sohY9ZytQYK&t=2648

nahnahno

an hour ago

The efficiency tests are garbage. Notebookcheck are comparing whole system power draw and equating it with SOC power draw, when in reality the SOC may draw a fraction of total system power, especially under single core workloads. Take those numbers with a full truck of salt.

UniverseHacker

10 hours ago

Am I interpreting this correctly: the M3 still uses only roughly half the power of the new Intel CPU discussed here?

senttoschool

9 hours ago

It doesn't necessarily use half the power. But it does have greater than 2x the perf/watt, and it has noticeably faster ST performance.

UniverseHacker

9 hours ago

Aren't those roughly equivalent in a cpu which dynamically varies its clock speed and power consumption in response to compute demand?

wtallis

9 hours ago

Performance vs. power across a CPU's operating range is not a linear relationship, which is why a naive perf/watt metric like the one Notebookcheck computes at each chip's top operating point is almost worthless for comparing efficiency. You need to at the very least normalize to either the same power or the same performance, but preferably reviewers should be measuring and reporting a full perf/power curve for each chip instead of just one data point. Geekerwan seems to be the only reviewer that understands this, and they mostly focus on phones.
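
Here is a minimal sketch of the "normalize to the same power" idea: measure each chip at several power levels, interpolate, and compare scores at one chosen wattage. The curves below are made-up placeholders, not measurements:

    // Compare two chips at the same power level using linear interpolation
    // between measured (watts, score) points. All numbers here are hypothetical.
    type Point = { watts: number; score: number };

    function scoreAtWatts(curve: Point[], watts: number): number {
      const sorted = [...curve].sort((a, b) => a.watts - b.watts);
      for (let i = 1; i < sorted.length; i++) {
        const lo = sorted[i - 1];
        const hi = sorted[i];
        if (watts >= lo.watts && watts <= hi.watts) {
          const t = (watts - lo.watts) / (hi.watts - lo.watts);
          return lo.score + t * (hi.score - lo.score);
        }
      }
      throw new Error("requested power is outside the measured range");
    }

    const chipA: Point[] = [{ watts: 5, score: 60 }, { watts: 15, score: 110 }, { watts: 30, score: 140 }];
    const chipB: Point[] = [{ watts: 5, score: 70 }, { watts: 15, score: 100 }, { watts: 30, score: 150 }];
    console.log(scoreAtWatts(chipA, 15), scoreAtWatts(chipB, 15)); // iso-power (15 W) comparison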

senttoschool

2 hours ago

> Which is why a naive perf/watt metric like the one Notebookcheck computes at each chip's top operating point is almost worthless for comparing efficiency.

It isn't worthless. It clearly gives a good enough picture of efficiency to draw conclusions. It's not like Apple and Qualcomm drastically slow their chips down in order to get better perf/watt. No. They have better raw performance than Intel's chips regardless of perf/watt.

You can't even get perf/watt curves on Apple's A series and M series of chips because it's impossible to manually control the wattage given to the SoC. On PCs, you can do that. But not on iPhones and Macs. Therefore, Geekerwan's curves are not real curves for Apple chips - just projections.

rangestransform

9 hours ago

nope, IIRC power scales with the square of clock speed

Sakos

6 hours ago

Cinebench is really a terrible benchmark and not indicative of real-world numbers for performance or efficiency (particularly not for any of my use cases). I'll wait for better reviews and benchmarks before deciding who's "won".

senttoschool

3 hours ago

Geekbench shows the same gap in performance. Cinebench has historically favored Intel chips more than Arm.

2OEH8eoCRo0

10 hours ago

It's early. I'm sure they'll be here moving goalposts and cherry-picking some new stat that's suddenly a deal breaker.

user

9 hours ago

[deleted]

user

9 hours ago

[deleted]

cortesoft

10 hours ago

I mean, any laptop could have 7 days' worth of battery life with a big enough battery.