Interviewing Intel's Chief Architect of x86 Cores

118 points, posted 6 days ago
by ryandotsmith

13 Comments

brucehoult

7 hours ago

Oh em gee ... what a contentless interview.

"We made it wider and deeper".

Gosh. Why didn't anyone think about doing that before?

jng

4 hours ago

He is no Jim Keller, and the mostly[1] automated transcript makes it read cringe, but it is not at all devoid of content.

Some examples of very interesting, non-obvious content:

* Even if store ports are kept fixed (2 in his example), adding store address generators (up to 4 in his example) actually improves performance, because it frees up load port dependencies.

* Within the same core, they use two different styles of load/store address contention mechanisms, which he describes as two tables, one with explicit "allows" and the other with explicit "denies" -- which of course end up converging (I understand this refers to two different encodings which vary in what is stored; a toy sketch of the two encodings follows this list).

* Between cores, they have completely separate teams which reach different designs for things like this.

* It was interesting to me to discover how isolated the different core design teams work (which makes sense).

* It was interesting to me to picture the load/store address contention subsystem, which must be quite complex and needs to be really fast.
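To make that allow/deny point concrete, here is a toy sketch of the two encodings (entirely my own construction in Python, not Intel's mechanism): one table defaults to blocking a load from bypassing an older, unresolved store unless the pair is explicitly allowed, the other defaults to letting it bypass unless the pair is explicitly denied.

    # Toy illustration, not Intel's design: two equivalent encodings of
    # "may this load issue ahead of this older, not-yet-executed store?"

    class AllowTable:
        """Remembers (load_pc, store_pc) pairs observed to be independent."""
        def __init__(self):
            self.allowed = set()

        def learn_independent(self, load_pc, store_pc):
            self.allowed.add((load_pc, store_pc))

        def may_bypass(self, load_pc, store_pc):
            # Conservative default: block unless explicitly allowed.
            return (load_pc, store_pc) in self.allowed

    class DenyTable:
        """Remembers (load_pc, store_pc) pairs observed to conflict."""
        def __init__(self):
            self.denied = set()

        def learn_conflict(self, load_pc, store_pc):
            self.denied.add((load_pc, store_pc))

        def may_bypass(self, load_pc, store_pc):
            # Aggressive default: bypass unless explicitly denied.
            return (load_pc, store_pc) not in self.denied

    # Tiny usage example with made-up instruction addresses:
    a, d = AllowTable(), DenyTable()
    a.learn_independent(0x40, 0x10)
    d.learn_conflict(0x48, 0x18)
    print(a.may_bypass(0x40, 0x10), d.may_bypass(0x48, 0x18))  # True False

The two differ only in their default answer before training; given the same history of conflicts and non-conflicts they converge on the same decisions, which is how I read the "end up converging" remark.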

And I'll stop the list there -- there is more, on different types of workloads: gaming workloads being similar to DB workloads, and more similar to each other than to SPEC benchmarks, and so on.

Just go read the interview if you're interested in CPU design!

[1] mostly automated: at least the dialog name labels seem to be hand-edited, as one of them has a typo

pixelpoet

3 hours ago

I did the transcription, but not the dialogs and labels etc. So I can say with certainty that it wasn't automated :)

What made the transcription "cringe"? I'd like to believe it's accurate.

brucehoult

3 hours ago

You're right, the things you list do contain fresh information. Though the similarity between game logic and business logic is not a new observation ... and web browsers are in the same ballpark too. I think it's a code size vs data size thing. SPEC programs mostly have a relatively small amount of code, gcc being an obvious exception. And I guess Blender in the 2017 FP suite.

saagarjha

6 hours ago

Because that costs power and area.

brucehoult

5 hours ago

And it still does.

And the last generation was wider and deeper than the one before it, also costing power and area.

The question that should be asked ... but which would never be answered ... is "What was it that you changed that REQUIRED and ALLOWED you to go wider and deeper?"

It's not a new process node every time.

There's no NEED to have a massive reorder buffer unless you can decode and dispatch that number of instructions in the time it takes for a load to arrive from whichever level of the memory hierarchy you're optimising for. And there's no POINT if you're often going to get a misprediction within that number of instructions. OK, so wider decode is one component of that. Is there a difference in memory latency as well?

Wider decode past 3 or 4 instructions increasingly means that you can't just end your packet of decoded instructions at the first branch -- as you get wider, you increasingly have to both parse past a conditional branch and predict more than one branch in the same decode cycle. You'll also get into branches that jump to other instructions in the same decode group (either forward or backward).

There are all kinds of complications there, with no doubt interesting solutions, that go far beyond "we went wider and deeper".
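For what it's worth, that sizing argument fits on the back of an envelope. The numbers below are my own illustrative assumptions, not figures from the interview:

    # Back-of-envelope sketch; all parameter values are illustrative assumptions.

    def rob_entries_needed(dispatch_width, load_latency_cycles):
        # In-flight instructions needed to keep dispatching while one
        # long-latency load (whichever cache level you optimise for) is
        # outstanding.
        return dispatch_width * load_latency_cycles

    def useful_window(branch_freq, predictor_accuracy):
        # Expected instructions until the first mispredicted branch; beyond
        # this, a bigger window mostly buffers work that gets thrown away.
        return 1.0 / (branch_freq * (1.0 - predictor_accuracy))

    def branches_per_decode_group(decode_width, branch_freq):
        # Average branches the front end must predict in a single decode cycle.
        return decode_width * branch_freq

    print(rob_entries_needed(8, 40))          # 320 entries to hide a 40-cycle load
    print(useful_window(0.2, 0.99))           # ~500 insns between mispredicts
    print(branches_per_decode_group(8, 0.2))  # 1.6 -> need >1 prediction/cycle

So the big window only pays off if the predictor keeps mispredicts rarer than one per window, and an 8-wide decoder at a typical branch density has to predict more than one branch per decode cycle -- exactly the complications above.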

porridgeraisin

4 hours ago

https://chatgpt.com/share/68ef6cc3-1c48-8013-a545-905af89fbc...

I asked ChatGPT to give a contentful summary of the interview, in case anyone is interested. It seems to be more or less accurate, albeit surface level.

It gets the "why" but not the "how". Maybe someone here can prompt it further to speculate on the "how". I don't think I'll be able to verify its output well enough to do that.

mort96

3 hours ago

I'm not sure what you expect to get out of this. How do you make a "contentful summary" of a contentless interview? Where do you get the content from?

porridgeraisin

2 hours ago

By using general knowledge to explain, e.g., what adding a store address unit accomplishes in the context of the rest of the interview. Did you even read the chat?

MBCook

2 minutes ago

That doesn’t add useful content. It adds definitions. That’s just padding.

Only the interviewee can add content.

I’m also of the opinion “I asked ChatGPT for a summary” type comments are very low effort and don’t add to the discussion.

misja111

2 hours ago

Well, isn't Intel mostly kept alive by capital injections from the US government and NVidia nowadays? How much content did you expect from a straw puppet?

BoredPositron

3 hours ago

Odd read, especially after that preamble: "The transcript has been edited for readability and conciseness."

Not a lot of novel information either.

norin

7 hours ago

Yeah, sort of strange.