97 points by g0xA52A2A 5 hours ago | 4 comments
dathinab 3 hours ago
> resulting VM outperforms both my previous Rust implementation and my hand-coded ARM64 assembly

It's always surprising to me how absurdly efficient "highly specialized VM/instruction interpreters" are.

For example, two independent research projects into better (faster, more compact) serialization in Rust both ended up with something like a VM/interpreter for serialization instructions, leading to both higher performance and more compact code size while still being capable of supporting feature sets similar to serde's (1).

(In general, monomorphization and double dispatch (e.g. serde) can bring you very far, but as always the best approach is not either extreme: neither always monomorphization nor always dynamic dispatch, but a balance that takes advantage of the strengths of both. And specialized mini VMs are, in a certain way, an extra-flexible form of dynamic dispatch.)

---

(1): More compact code size on normal to large projects, though not necessarily on micro projects, as the "fixed overhead" is often slightly larger while the per-serialization-type/protocol overhead can be smaller.

(1b): These were experimental research projects; I'm not sure if any of them were published to GitHub, and none are suited for use in production or similar.
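The idea is easier to see in code. Here is a minimal stable-Rust sketch (the opcodes and layout are invented for illustration, not taken from either research project): each type gets a small "program" describing its layout, and one shared interpreter loop executes it, instead of monomorphizing a full serializer per type.

```rust
// Hypothetical serialization opcodes; one interpreter loop replaces
// per-type monomorphized serializer code.
#[derive(Clone, Copy)]
enum Op {
    U32 { offset: usize },               // copy 4 raw little-endian bytes
    Bytes { offset: usize, len: usize }, // copy a fixed-size byte field
}

fn serialize(program: &[Op], raw: &[u8], out: &mut Vec<u8>) {
    for &op in program {
        match op {
            Op::U32 { offset } => out.extend_from_slice(&raw[offset..offset + 4]),
            Op::Bytes { offset, len } => out.extend_from_slice(&raw[offset..offset + len]),
        }
    }
}

fn main() {
    // A struct { a: u32, tag: [u8; 2] } laid out as raw bytes: a = 1, tag = "hi".
    let raw = [1u8, 0, 0, 0, 104, 105];
    // The per-type "program", data instead of generated code.
    let program = [Op::U32 { offset: 0 }, Op::Bytes { offset: 4, len: 2 }];
    let mut out = Vec::new();
    serialize(&program, &raw, &mut out);
    assert_eq!(out, vec![1, 0, 0, 0, 104, 105]);
}
```

The code-size win comes from the fact that the interpreter loop is paid for once, while each additional type only costs a few bytes of opcode data.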

gavinray 1 hour ago
It doesn't make sense to me that an embedded VM/interpreter could ever outperform direct code.

You're adding a layer of abstraction and indirection, so how is it possible that a more indirect solution can have better performance?

This seems counterintuitive, so I googled it. Apparently it largely boils down to instruction cache efficiency and branch prediction. The best content I could find was this post, as well as some scattered comments from Mike Pall of LuaJIT fame:

https://sillycross.github.io/2022/11/22/2022-11-22/

Interestingly, this is also discussed on a similar blogpost about using Clang's recent-ish [[musttail]] tailcall attribute to improve C++ JSON parsing performance:

https://blog.reverberate.org/2021/04/21/musttail-efficient-i...
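As a rough stable-Rust sketch of what those posts describe (using a hypothetical three-opcode VM): the classic loop below funnels every opcode through a single match, i.e. one indirect branch that has to predict all opcode-to-opcode transitions. The tail-call style (Clang's [[musttail]], or nightly Rust's become) instead ends each opcode handler with its own dispatch jump, giving the branch predictor per-handler history and making it easier for the compiler to keep VM state in registers.

```rust
#[derive(Clone, Copy)]
enum Op {
    Push(i64),
    Add,
    Halt,
}

fn run(code: &[Op]) -> i64 {
    let mut stack = Vec::new();
    let mut pc = 0;
    loop {
        // Single shared dispatch point: every opcode transition is
        // predicted at this one branch, which is what the tail-call
        // style avoids by dispatching at the end of each handler.
        match code[pc] {
            Op::Push(v) => {
                stack.push(v);
                pc += 1;
            }
            Op::Add => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a + b);
                pc += 1;
            }
            Op::Halt => return stack.pop().unwrap(),
        }
    }
}

fn main() {
    let code = [Op::Push(2), Op::Push(3), Op::Add, Op::Halt];
    assert_eq!(run(&code), 5);
}
```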

mananaysiempre 37 minutes ago
> It doesn't make sense to me that an embedded VM/interpreter could ever outperform direct code. You're adding a layer of abstraction and indirection, so how is it possible that a more indirect solution can have better performance?

It is funny, but (like I’ve already mentioned[1] a few months ago) for serialization(-adjacent) formats in particular the preferential position of bytecode interpreters is apparently rediscovered again and again.

The earliest example I know about is Microsoft’s MIDL, which started off generating C code for NDR un/marshalling but very soon (ca. 1995) switched to bytecode programs (which Microsoft for some reason called “format strings”; these days there’s also typelib marshalling and WinRT metadata-driven marshalling, the latter completely undocumented, but both data-driven). Bellard’s nonfree ffasn1 also (seemingly) uses bytecode, unlike the main FOSS implementation. Protocol Buffers started off with codegen (burying Google users in de/serialization code), but UPB uses “table-driven”, i.e. bytecode, parsing[2].

The most interesting chapter in this long history is, in my opinion, Swift’s bytecode-based value witnesses[3,4]. Swift (uniquely) has support for ABI compatibility with polymorphic value types, so e.g. you can have a field in the middle of your struct whose size and alignment only become known at dynamic linking time. It does this in pretty much the way you’d expect[5]: each type has a vtable (“value witness”) full of compiler-generated methods like size, alignment, copy, move, etc., which for polymorphic type instances can call the argument’s witness methods and compute on the results. Anyway, here too the story is that they started with native codegen, got buried under the generated code, and switched to bytecode instead. (I wonder: are they going to PGO and JIT next, like hyperpb[6] for Protobuf?)
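The "format string" approach can be sketched like this (invented type codes, not NDR's actual ones): one generic unmarshaller walks a per-type byte description, so the compiler never has to emit dedicated decode code for each type.

```rust
// Hypothetical "format characters" describing a struct's wire layout.
const FC_U32: u8 = 0x01;
const FC_U8: u8 = 0x02;

// One generic unmarshaller, driven by the format string, returns the
// decoded field values (widened to u64 for simplicity).
fn unmarshal(format: &[u8], wire: &[u8]) -> Vec<u64> {
    let mut fields = Vec::new();
    let mut pos = 0;
    for &fc in format {
        match fc {
            FC_U32 => {
                let mut b = [0u8; 4];
                b.copy_from_slice(&wire[pos..pos + 4]);
                fields.push(u32::from_le_bytes(b) as u64);
                pos += 4;
            }
            FC_U8 => {
                fields.push(wire[pos] as u64);
                pos += 1;
            }
            _ => panic!("unknown format character"),
        }
    }
    fields
}

fn main() {
    // "Format string" for a struct { a: u32, b: u8 }.
    let format = [FC_U32, FC_U8];
    let wire = [0x2A, 0, 0, 0, 7]; // a = 42, b = 7
    assert_eq!(unmarshal(&format, &wire), vec![42, 7]);
}
```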

[1] https://news.ycombinator.com/item?id=44665671, I’m too lazy to copy over the links so refer there for the missing references.

[2] https://news.ycombinator.com/item?id=44664592 and parent’s second link.

[3] https://forums.swift.org/t/sr-14273-byte-code-based-value-wi...

[4] Rexin, “Compact value witnesses in Swift”, 2023 LLVM Dev. Mtg., https://www.youtube.com/watch?v=hjgDwdGJIhI

[5] Pestov, McCall, “Implementing Swift generics”, 2017 LLVM Dev. Mtg., https://www.youtube.com/watch?v=ctS8FzqcRug

[6] https://mcyoung.xyz/2025/07/16/hyperpb/

bjoli 2 hours ago
Finally! Tail calls! I had to write Rust some years ago, and the OCaml person in me itched to write tail recursion.

Tail recursion opens the door to writing really neat looping facilities using macros.

iknowstuff 2 hours ago
Rust now has the become keyword for TCO, I believe.

https://doc.rust-lang.org/std/keyword.become.html

steveklabnik 1 hour ago
From the first line of the post:

> Last week, I wrote a tail-call interpreter using the become keyword, which was recently added to nightly Rust (seven months ago is recent, right?).

measurablefunc 39 minutes ago
A more accurate title would say it is a tail-call-optimized interpreter. Tail calls alone aren't special; what matters is that the compiler or runtime properly reuses the caller's frame instead of pushing another call frame and growing the stack.
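A small stable-Rust illustration of that distinction (no become here): both functions below compute the same sum, but only the loop form is guaranteed to reuse a single frame. A guaranteed tail call (become) gives the recursive form that same constant-stack guarantee by reusing the caller's frame.

```rust
// Tail-recursive form: the recursive call is in tail position, but on
// stable Rust frame reuse is an optimization, not a guarantee, so deep
// inputs may overflow the stack in unoptimized builds.
fn sum_recursive(n: u64, acc: u64) -> u64 {
    if n == 0 {
        acc
    } else {
        sum_recursive(n - 1, acc + n)
    }
}

// What a guaranteed tail call effectively compiles down to: the same
// frame is reused on every "iteration".
fn sum_loop(mut n: u64, mut acc: u64) -> u64 {
    while n != 0 {
        acc += n;
        n -= 1;
    }
    acc
}

fn main() {
    assert_eq!(sum_recursive(1000, 0), 500500);
    assert_eq!(sum_loop(1000, 0), 500500);
    // sum_loop(10_000_000, 0) is fine at any depth; the recursive form
    // may overflow the stack there without guaranteed tail calls.
}
```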
ninjahawk1 1 hour ago
I like it because it's in Rust.