Tinygrad will be the next Linux and LLVM

23 points, posted 4 hours ago
by alvivar

24 Comments

saagarjha

an hour ago

Well, neither Linux nor LLVM loudly proclaimed that they would be the next Internet or GUI. So I am inclined to believe that this will not be the case, and that the person making the proclamation might be a little full of himself.

WoodenChair

an hour ago

> While there may be a legacy Linux running in a VM to manage all your cloud phoning spyware, the core functionality of the lifelike device is boot to neural network.

No, I do not think future devices will be "boot to neural network." Traditional algorithms still have a place. Your robot vacuum cleaner (his example) may still use A* to plan routes, and quicksort to rank your cleaning runs by energy usage.
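
A rough sketch of both in plain Python (the obstacle grid and the energy numbers are made up for illustration):

    import heapq

    def a_star(blocked, start, goal):
        # 4-connected grid, unit step cost, Manhattan-distance heuristic
        def h(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0, start)]
        best = {start: 0}
        while frontier:
            _, g, pos = heapq.heappop(frontier)
            if pos == goal:
                return g  # cheapest path length, found deterministically
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (pos[0] + dx, pos[1] + dy)
                if nxt in blocked or best.get(nxt, g + 2) <= g + 1:
                    continue
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))

    print(a_star({(1, 1)}, (0, 0), (2, 2)))  # -> 4 moves around the obstacle

    # ...and a plain comparison sort for "most energy used first"
    runs = [("kitchen", 12.5), ("hall", 3.1), ("lounge", 7.8)]
    print(sorted(runs, key=lambda r: r[1], reverse=True))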

> Without CPUs, we can be freed from the tyranny of the halting problem.

Not sure what this means, but I think it still makes sense to have a CPU directing things, as in current architectures. You don't just have your neural engine; you also have your GPU, audio system, input devices, etc., and those need a controller. Something needs to coordinate.

mikewarot

2 hours ago

He's got the kernel of a good idea. Deterministic data flows are a good thing. We keep almost getting there, with things like dataflow architectures, FPGAs, etc. But there's always a premature optimization for the silicon instead of the whole system. This leads to failure, over and over.

He's wrong about using an LLM for general-purpose compute. Using math instead of logic isn't a good thing for many use cases. You don't want a database, or an FFT in a radar system, to hallucinate, for example.

My personal focus is on homogeneous, clocked, bit-level systolic arrays.[2] I'm starting to get the feeling the idea is really close to being a born secret,[1] though, as it might enable anyone to really make high-performance chips on any fab node. (A toy sketch follows the links below.)

[1] https://en.wikipedia.org/wiki/Born_secret

[2] https://github.com/mikewarot/Bitgrid
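
Here's a toy of what I mean, in Python: a clocked, homogeneous grid where every cell applies the same bit-level rule each tick. (A loose illustration only, not Bitgrid's actual cell design.)

    W, H = 8, 8
    grid = [[0] * W for _ in range(H)]
    grid[3][3] = 1  # seed one bit

    def tick(g):
        # Double-buffered update: every cell applies the same boolean rule
        # to its four neighbours, once per clock. Fully deterministic.
        nxt = [[0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                nxt[y][x] = (g[(y - 1) % H][x] ^ g[(y + 1) % H][x]
                             ^ g[y][(x - 1) % W] ^ g[y][(x + 1) % W])
        return nxt

    for _ in range(3):
        grid = tick(grid)
    print(sum(map(sum, grid)))  # identical on every run and every machine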

KeplerBoy

2 hours ago

You could still build an FFT in tinygrad, and it would be as deterministic as its matmuls (so not bitwise deterministic, due to the non-associativity of floating-point math and the fact that GPUs don't guarantee execution order, but we are okay with that). The matmuls in the NNs don't hallucinate.
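
The non-associativity bit, for anyone who hasn't seen it, in plain Python:

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0 -- the 1.0 vanishes below the ulp of 1e16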

mikewarot

an hour ago

TinyGrad is GeoHot's system/compiler for mapping neural networks onto hardware. He consistently makes this one point: because the exact number of cycles is known in advance, everything can be scheduled statically; there's no need for branch prediction or that type of thing, as in a CPU.

Essentially, he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.
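
A toy version of that shape in Python (my illustration, not tinygrad's actual IR):

    import operator

    # A "program" as a topologically sorted DAG of binary logic ops.
    # No branches, no loops: the node count is the cycle count, fixed
    # before anything runs.
    program = [
        ("t0", operator.and_, "a", "b"),
        ("t1", operator.xor, "a", "b"),
        ("t2", operator.or_, "t0", "t1"),
    ]

    def run(env):
        for out, op, x, y in program:  # always exactly len(program) steps
            env[out] = op(env[x], env[y])
        return env["t2"]

    print(run({"a": 1, "b": 0}))  # 1, in exactly 3 steps, every time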

The bit about LLMs is a distraction, in my opinion.

zevv

an hour ago

> he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.

So how is this different from digital logic synthesis for CPLDs/FPGAs, or chip design, which we have been doing for decades?

mikewarot

an hour ago

FPGAs are (prematurely) optimized for the wrong things: latency and utilization. The hardware is heterogeneous, and there isn't one standard chip. Plus they tend to be expensive.

The idea is to be able to compile/run like you can now with your von Neumann machine.

FPGA compile runs can sometimes take days! And of course, chips take months and quite a bit of money for each try through the loop.

skybrian

2 hours ago

I don’t understand the LLVM comparison. Is it somehow a compiler backend for conventional programming languages? Can you run C or Rust code?

fuhsnn

2 hours ago

Me neither. It's like saying AI dependency is the next freedom.

melodyogonna

2 hours ago

Makes me wonder if he knows what LLVM does.

If I understand him correctly: if everything becomes a neural network, then he expects most neural networks to use Tinygrad.

TimSchumann

2 hours ago

> Without CPUs, we can be freed from the tyranny of the halting problem.

Can someone please explain to me what this even means in this context?

Serious question.

mikewarot

an hour ago

Think of it as unwinding a program all the way until it's just a list of instructions. You can know exactly how long that program will take, and it will always take that same time.
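
In Python terms, something like:

    # A loop with a fixed trip count...
    def dot_loop(a, b):
        acc = 0.0
        for i in range(4):
            acc += a[i] * b[i]
        return acc

    # ...unwound into a straight-line list of operations: 4 multiplies and
    # 4 adds, so the running time is known exactly before it ever executes.
    def dot_unrolled(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3]

    print(dot_unrolled([1, 2, 3, 4], [5, 6, 7, 8]))  # 70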

krisoft

an hour ago

But will it always solve the task? Because without that, it is trivially easy to “solve” the halting problem by just declaring that the Turing machine halts after X steps.

chenzhekl

2 hours ago

I don't know why I should switch from PyTorch to Tinygrad as a researcher and practitioner. For kernel fusion, there is torch.compile. Not to mention the large ecosystem behind PyTorch, and that almost every paper today is published with a PyTorch implementation. Probably where Tinygrad shines is bare-metal platforms?
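
For reference, the torch.compile route looks like this (a minimal sketch assuming PyTorch 2.x; the toy function is mine):

    import torch

    def f(x):
        # matmul followed by two pointwise ops -- a fusion candidate
        return torch.relu(x @ x) + 1.0

    cf = torch.compile(f)  # TorchInductor fuses the pointwise ops where it can
    x = torch.randn(256, 256)
    print(torch.allclose(f(x), cf(x), atol=1e-5))  # same result, fewer kernels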

krackers

2 hours ago

> tinygrad has a hardware abstraction layer, a scheduler, and memory management. It's an operating system

Doesn't every ML framework have that?

almostgotcaught

2 hours ago

Nah, not like he's talking about - TF and PT definitely punt all that down to TensorRT or HIP or whatever. Doesn't mean there's anything novel here - just that TF and PyTorch don't do it.

WithinReason

an hour ago

The only reason neural networks don't have control flow is that they are not very good. They are incredibly inefficient, and the only way to properly solve that is to introduce control flow. For example: https://arxiv.org/abs/2311.10770
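
Very loosely, the shape of that idea (my own simplification of conditional, tree-routed execution, not the paper's actual fast-feedforward layer):

    import torch, torch.nn as nn

    class TreeRouted(nn.Module):
        # Route each input down a depth-d tree of learned decisions so that
        # only one of 2**d leaf linears actually runs: control flow in a NN.
        def __init__(self, dim, depth=3):
            super().__init__()
            self.nodes = nn.ModuleList(nn.Linear(dim, 1) for _ in range(2 ** depth - 1))
            self.leaves = nn.ModuleList(nn.Linear(dim, dim) for _ in range(2 ** depth))
            self.depth = depth

        def forward(self, x):  # x: (dim,) -- one sample, for clarity
            i = 0
            for _ in range(self.depth):
                right = bool((self.nodes[i](x) > 0).item())  # hard branch
                i = 2 * i + 1 + right
            return self.leaves[i - (2 ** self.depth - 1)](x)

    m = TreeRouted(16)
    print(m(torch.randn(16)).shape)  # only 4 of the 15 linears executed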

akoboldfrying

an hour ago

>Without CPUs, we can be freed from the tyranny of the halting problem.

In the same way that we can be freed of the tyranny of being able to write a for loop.

carrja99

an hour ago

Isn’t this the guy who joined Twitter as an intern to “fix” search?