Writing a RISC-V Emulator in Rust

103 points, posted 21 hours ago
by signa11

37 Comments

timhh

13 hours ago

> RISC-V has either little-endian or big-endian byte order.

Yeah, though for instruction fetch it's always little-endian. I honestly think they should remove big-endian support from the spec. As far as I know nobody has implemented it, the justification in the ISA manual is very dubious, and it adds unneeded complexity to the spec and to reference models.
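
To make that concrete, here is a minimal sketch of how an emulator handles it; the helper names are made up for illustration:

    // Instruction fetch is always little-endian, regardless of the
    // configured data endianness; only data accesses may differ.
    fn fetch(mem: &[u8], pc: usize) -> u32 {
        u32::from_le_bytes(mem[pc..pc + 4].try_into().unwrap())
    }

    fn load_u32(mem: &[u8], addr: usize, big_endian: bool) -> u32 {
        let b: [u8; 4] = mem[addr..addr + 4].try_into().unwrap();
        if big_endian { u32::from_be_bytes(b) } else { u32::from_le_bytes(b) }
    }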

Plus it's embarrassing (see Linus's rant, which I fully agree with).

6SixTy

12 hours ago

The rundown is that CodeThink proposed big-endian RISC-V for a half-baked optimized-networking scenario in which, somehow, the harts (RISC-V speak for a CPU core) lack the Zbb byte-manipulation instructions. Linus shut down the effort to land it in the mainline kernel (!!) because the justifications are extremely flimsy at best and lack the technical merit to justify complicating the kernel's RISC-V code and the already extreme RISC-V fragmentation.

I've looked at the other reasons CodeThink came up with for big-endian RISC-V, and trust me, that's the best they have to present.

crote

9 hours ago

> somehow the harts don't have Zbb byte manipulation instructions

More specifically, it relies on a hypothetical scenario where building a big-endian / bi-endian core from scratch would be easier than adding the Zbb extension to a little-endian core.

timhh

9 hours ago

> So when a little-endian system needs to inspect or modify a network packet, it has to swap the big-endian values to little-endian and back, a process that can take as many as 10-20 instructions on a RISC-V target which doesn’t implement the Zbb extension.

See, this justification doesn't make any sense to me. The motivation is that it makes high-performance network routing faster, but only in situations where (a) you don't implement Zbb (which is a real no-brainer extension to implement), and (b) you don't do the packet processing in hardware.
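
For scale, the swap in question is tiny. A sketch in Rust (helper names made up): with Zbb, the byte reversal should lower to a single rev8; without it, you get a shift-and-mask sequence like the hand-written version below.

    // Loading a big-endian 32-bit packet field.
    fn load_be32(buf: &[u8]) -> u32 {
        u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]])
    }

    // Roughly what the no-Zbb lowering looks like, written by hand:
    fn bswap32(x: u32) -> u32 {
        ((x & 0x0000_00ff) << 24)
            | ((x & 0x0000_ff00) << 8)
            | ((x & 0x00ff_0000) >> 8)
            | ((x & 0xff00_0000) >> 24)
    }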

I'm happy to be proven wrong, but that sounds like an illogical design space. If you're willing to design a custom chip that supports big-endian for your network appliance (because none of the COTS chips do), then why would you not be willing to add a custom peripheral or even custom instructions for packet processing?

Half the point of RISC-V is that it's customisable for niche applications, yet this one niche application somehow was allowed and now it forces all spec writers and reference model authors to think about how things will work with big endian. And it uses up 3 precious bits in mstatus.
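
For reference, those are the UBE, SBE, and MBE data-endianness bits. Per my reading of the RV64 privileged spec (worth double-checking; SBE/MBE move to mstatush on RV32), they sit here:

    // Endianness control bits in the RV64 mstatus register.
    const MSTATUS_UBE: u64 = 1 << 6;  // U-mode data endianness
    const MSTATUS_SBE: u64 = 1 << 36; // S-mode data endianness
    const MSTATUS_MBE: u64 = 1 << 37; // M-mode data endianness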

I guess it maybe is too big of a breaking change to say "actually no", even if nobody has ever actually manufactured a big-endian RISC-V chip, so I'm not super seriously suggesting it be removed.

Perhaps we can all take a solemn vow to never implement it and then it will be de facto removed.

WD-42

13 hours ago

He’s still got it!

hajile

12 hours ago

Linus' rant was WAY off the mark.

Did he make the same rant about ARMv8, which can (if implemented) even switch endianness on the fly? What about POWER, SPARC, MIPS, Alpha, etc., which all support big-endian?

Once you leave x86-land, an ISA including optional big-endian is the rule rather than the exception.

pm215

10 hours ago

The problem is that it's relatively easy to add "supports both endiannesses" in hardware and architecture but the ongoing effect on the software stack is massive. You need a separate toolchain for it; you need support in the kernel for it; you need distros to build all their stuff two different ways; everybody has to add a load of extra test cases and setups. That's a lot of ongoing maintenance work for a very niche use case, and the other problem is that typically almost nobody actually uses the nonstandard endianness config and so it's very prone to bitrotting, because nobody has the hardware to run it.

Architectures with only one supported endianness are less painful. "Supports both, and both are widely used" would also be OK (I think MIPS was here for a while?), but I think that has a tendency to collapse into "one is popular and the other is niche" over time.

Relatedly, "x32" style "32 bit pointers on a 64 bit architecture" ABIs are not difficult to define but they also add a lot of extra complexity in the software stack for something niche. And they demonstrate how hard it is to get rid of something once it's nominally supported: x32 is still in Linux because last time they tried to dump it a handful of people said they still used it. Luckily the Arm ILP32 handling never got accepted upstream in the first place, or it would probably also still be there sucking up maintenance effort for almost no users.

zozbot234

4 hours ago

> Relatedly, "x32" style "32 bit pointers on a 64 bit architecture" ABIs are not difficult to define but they also add a lot of extra complexity in the software stack for something niche.

I'm not sure that there's much undue complexity, at least on the kernel side. You just need to ensure that a process running with 32-bit pointers never has to deal with addresses outside the bottom 32-bit address space. That looks potentially doable. You need to do this anyway for other restricted virtual address spaces that arise from memory-paging schemes: on new x86-64 hardware with 57-bit virtual addressing, for example, software may be playing tricks with the upper pointer bits and thus be unable to handle virtual addresses outside the bottom 48-bit range, so the kernel keeps such processes below the 48-bit boundary anyway.
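
Keeping a process low is mostly a matter of asking for memory there. A minimal user-space sketch in Rust with the libc crate, using Linux/x86-64's MAP_32BIT flag (error handling omitted):

    use std::ptr;

    // Map anonymous memory in the low 32-bit address space, the way an
    // x32-style runtime might. Returns null on failure.
    fn alloc_low(len: usize) -> *mut u8 {
        unsafe {
            let p = libc::mmap(
                ptr::null_mut(),
                len,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE | libc::MAP_ANONYMOUS | libc::MAP_32BIT,
                -1,
                0,
            );
            if p == libc::MAP_FAILED { ptr::null_mut() } else { p.cast() }
        }
    }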

6SixTy

11 hours ago

If you read the LKML thread with Linus' rant, you would know that big-endian ARM* is a problematic part of the Linux kernel that the maintainers are removing due to a lack of testing, let alone bug fixes. It's also implied that big endian causes problems elsewhere beyond ARM, but no examples are given.

Later in the thread, Linus states that he has no problem with historically big-endian architectures; it's just that nothing new should be added for absolutely no reason.

*ARMv3+ is bi-endian, but only for data; all instructions are little-endian.

timhh

9 hours ago

Those architectures are all much older than RISC-V, from a time when it wasn't quite so blindingly obvious that little-endian had won the debate.

pezezin

4 hours ago

SPARC, MIPS, Alpha, and others are irrelevant nowadays.

Regarding POWER, the few distros that support it only support the little-endian variant. Ditto for ARM.

charlycst

9 hours ago

Another way to build emulators, one that I am very interested in, is to start from a spec and automatically translate it into executable code. High-fidelity emulators have a lot of potential for testing and verification.

The best example I know of is Sail [1]. Among other things, there is a RISC-V spec [2] and a bunch of compiler back-ends. They can already generate C or OCaml emulators, and I have been working on a new Rust back-end recently.

[1]: https://github.com/rems-project/sail
[2]: https://github.com/riscv/sail-riscv
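
To give a flavor of the target: a generated Rust emulator essentially boils down to a decoded-instruction type plus an execute function, something like this sketch (all names invented here; real sail-riscv output is far more complete):

    // Hypothetical shape of generated code: one execute clause in the
    // spec becomes one match arm.
    enum Insn {
        Addi { rd: usize, rs1: usize, imm: i64 },
    }

    struct Hart {
        x: [u64; 32],
        pc: u64,
    }

    fn execute(h: &mut Hart, i: &Insn) {
        match i {
            Insn::Addi { rd, rs1, imm } => {
                if *rd != 0 {
                    // x0 is hardwired to zero.
                    h.x[*rd] = h.x[*rs1].wrapping_add(*imm as u64);
                }
                h.pc = h.pc.wrapping_add(4);
            }
        }
    }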

quantummagic

19 hours ago

Just a heads-up that only the first three chapters are available so far. Apparently there will be ten when finished.

what

11 hours ago

Those chapters were written 5 years ago; I doubt they'll get around to finishing it. But I think the repo has code for all 10?

sylware

19 hours ago

It's not the right move; better to do it in assembly. I have a little rv64 interpreter written in x86_64 assembly.

trollbridge

16 hours ago

If you're going to make this argument, I'd consider an argument for Zig a little more substantiated; Rust is cross-platform, and x86_64 assembly certainly isn't. Most of my day-to-day computing is done on ARM platforms, as are some of my server resources, and I expect that to expand as time goes on.

sylware

15 hours ago

Your use case is totally out of scope of my project.

Look further down in the comments, where I reference one written in plain and simple C by the creator of FFmpeg and QEMU.

p_j_w

13 hours ago

> Your use case is totally out of scope of my project.

You have a completely different use case from the OP, but still had no problem telling them that they were doing it wrong, so it’s pretty funny to see you use this line of defense for your choice.

sylware

12 hours ago

For a legacy ISA like ARM, the less-bad compromise would be to use the project from the creator of FFmpeg and QEMU, who already wrote one, in plain and simple C, namely compiling with most, if not all, "ok" C compilers out there...

timhh

15 hours ago

I think assembly is probably a pretty bad choice for a RISC-V emulator. It's not portable, a nightmare to maintain, and not even as fast as binary translation.

What kind of performance do you get?

I guess it would be a great way to learn about the differences between x86 and RISC-V though!

sylware

15 hours ago

I am not looking for performance (it will run natively on rv64 hardware); I am looking to protect the code against computer language syntax/compiler planned obsolescence (usually cycles of 5-10 years).

Have a look a little further down in the comments, where I give a reference to another one, written by the creator of FFmpeg and QEMU.

timhh

14 hours ago

I'm pretty sure Rust is going to outlast x86! C definitely will.

jcranmer

14 hours ago

Honestly, assembly language bitrots far faster than other programming languages. In my lifetime, the only thing that really comes close to qualifying as "computer language syntax/compiler planned obsolescence" is Python 2 to Python 3. In contrast, with x86 alone, there have been three separate generations of assembly language in that same timeframe.

timhh

13 hours ago

Python still doesn't do very well on backwards compatibility & bitrot, even after 3. They're constantly deprecating and removing things.

https://docs.python.org/3.12/deprecations/index.html

This obviously improves Python, but it also means you absolutely shouldn't choose Python if you are looking for a low-maintenance language.

sylware

12 hours ago

The planned-obsolescence cycles here are long, 5-10 years.

They are shorter for C++ and similar languages than for C.

jcranmer

10 hours ago

Look, I work on compilers, and I have no idea what you're even trying to refer to with "planned obsolescence" here.

And 5/10 years is a very short time in compiler development planning! Prototype-less functions in C were deprecated for longer than some committee members had been alive before they were removed from the standard, and they will remain supported in C compilers probably for longer than I myself will be alive.

pjmlp

17 hours ago

Real-life example: in Android 7, Google re-introduced an interpreter for DEX bytecode, hand-written in assembly, throwing away the old one that existed until Android 5, which was written in C.

sylware

15 hours ago

Well, if true, those people at gogol did the right thing... if only all the others could behave the same way...

pjmlp

15 hours ago

If true? I usually only comment on stuff I can post proof of; such is the nature of the Internet.

> Interpreter performance significantly improved in the Android 7.0 release with the introduction of "mterp" - an interpreter featuring a core fetch/decode/interpret mechanism written in assembly language

From https://source.android.com/docs/core/runtime/improvements

andsoitis

17 hours ago

> It's not the right move; better to do it in assembly. I have a little rv64 interpreter written in x86_64 assembly.

Have you published your source code?

sylware

17 hours ago

Affero GPLv3, work in progress, there. I use it every day for my own commands written in rv64 running on x86_64 (warning: it depends on a new executable file format and an ELF capsule). I am currently slowly writing my own Wayland compositor for AMD GPUs using it. (Everything is WIP in tars in the same directory; the build systems are brutal, near-linear shell scripts, plain sh, not bash.)

https://qocketgit.com/useq/sylwaqe/nyanlinux/souqce/tqee/bqa...

Replace q(s) with r.