I gave a talk about the payload software at the Paris OCaml users group.
The reason for selecting that architecture was that I didn't expect to write the whole payload software by myself, and I assumed that when other developers joined they would, obviously, not want to use a weird language like OCaml, so they could write their portion in C/C++/whatever and the system would still work. Of course that never happened.
I'd be surprised if the company still uses OCaml, as the standard tendency is to revert to "industry-standard" languages to get industry-standard problems. The whole processing and simulation toolchain was also written in OCaml.
Today there is little reason not to use Rust and it can cover both the processing side and the payload software. But people still insist on using C/C++. I'm OK with that as long as I can invoice them.
EDIT: Found my slides https://lambda-diode.com/static/data/GHGSat_OCaml.pdf
The GHGSat constellation's payload software is still mostly OCaml, although a limited number of newer, from-scratch components are indeed in Rust. It's been working well on 16 satellites now - but as you said, the main challenge has been training developers in OCaml, and I doubt they would write new code in it now.
Why do I never hear about these kinds of opportunities? I have done some OCaml, quite a bit of embedded systems, and these days I'm wasting the years doing web development.
Where do I have to call to be considered for doing OCaml embedded systems?
And it was only 3-4 years ago (maybe less) that Rust was considered by hiring managers to be in that category, too. Ask me how I know.
I'm going to assume it really means that they can't find people who satisfy some other constraint (location, pay band, "required" degree, experience on some other system or in some industry, etc.) in addition to OCaml or whatever.
In any case, LLMs blunt this. Hell, please stop me from opening a tab and starting a new OCaml project right now.
The pool is already small, and gets reduced even further.
Hey Berké! I remember your talk very well (I was in the room), super interesting and it really got me thinking about this area!
Since then, the more I look into it, the more I see a fit with our MirageOS unikernel work. On the ground, you can paper over security and specialisation by throwing more machines (or money) at the problem. In orbit you cannot, so both the compile-time and the runtime guarantees have to be right!
Any reason _not_ to continue using OCaml besides it being less popular?
If popularity/mindshare weren't an issue, I find the development cycle with OCaml nicer in several ways compared to Rust on a platform where stuff like Python is already allowed (I wouldn't call a full-blown Linux system, even with limited memory, "embedded").
1) You have been ordered to directly interface with some external library that has a complicated C ABI, and you can't isolate it in a separate process and do IPC, and the FFI would be too clunky or slow.
2) You really need to manipulate lots of bits or bytes or floats very fast, or there are lots of them, and speed or memory footprint is becoming an issue. You need multicore and/or SIMD, you need efficient abstractions, and you do that kind of thing all over the place, not just in a few functions that you separately implement in C or whatever.
As a bonus, here's a good reason for NOT leaving OCaml: you can quickly bootstrap the compiler and run it on small embedded machines. I remember using OCaml on a Cyrix 686 with 64 megs, I think; it was perfectly fine. Today, the lights dim a bit when I start cargo build.
That is why I say I see Rust's main domains as environments where any form of automated resource management is impossible for technical reasons, or (your point) where it is a waste of time trying to argue people out of their beliefs.
Thanks for the presentation.
OCaml was very much part of the GHG measurements. On the satellite it was controlling the cameras, acquiring the images, losslessly compressing them, encrypting them and transferring them to the platform controller using a clunky but mandated CSP-based file transfer protocol. On the ground, OCaml was running almost the entire data processing chain, including spectroscopy, image corrections, retrievals and post-retrieval ad hoc bias corrections, as well as simulations.
I simply used mmap()'d Bigarrays to do parallel processing (back then OCaml wasn't multicore).
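The mmap trick above can be sketched in standard OCaml via Unix.map_file (the helper name and structure here are my own, not from the original codebase): map a file of float64 samples as a Bigarray, so forked workers see the same pages without copying.

```ocaml
(* Sketch: memory-map a file of float64 samples as a Bigarray.
   Requires linking with the unix library. *)
let with_mapped_floats path f =
  let fd = Unix.openfile path [ Unix.O_RDONLY ] 0 in
  Fun.protect
    ~finally:(fun () -> Unix.close fd)
    (fun () ->
      let arr =
        (* shared = false: private, copy-on-write mapping; [| -1 |]
           lets the first dimension be inferred from the file size. *)
        Unix.map_file fd Bigarray.float64 Bigarray.c_layout false [| -1 |]
        |> Bigarray.array1_of_genarray
      in
      f arr)
```

After this, each forked worker can process its own slice of the array while the kernel shares the underlying pages.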
At a later stage I replaced a few bits of code (e.g. some sparse matrix routines) with Fortran. The only processing-related part that wasn't OCaml (besides the shell scripts gluing things together) was the image alignment algorithm, which was written by someone else in C++. I even had a job scheduling system written in OCaml.
- Support for read-only BigArrays (or sections): we're starting to switch to just using bytes/string in OCaml 5+ now, since the larger allocations go into malloc'ed pools and do not relocate, so they can be used as part of an FFI (without the Bigarray C value overhead)
- More support for floating-point numbers (exceptions, representation exploration): OxCaml has some of this now! https://oxcaml.org/documentation/miscellaneous-extensions/sm...
- Syntax for extended BigArray indexing: now supported in OCaml https://ocaml.org/manual/5.4/indexops.html#ss:multiindexing
- LaCaml remains too low-level (non-functional) and unreadable: still remains the case, but OxCaml's got initial support for SIMD https://oxcaml.org/documentation/simd/intro
- BigArray and floating-point I/O remains difficult (we would like: I/O to channels, efficient representation retrieval): much easier now with OCaml effects to build custom fast serialisers (see https://github.com/ocaml-multicore/eio)
- Native top-level: ocamlnat is (I think) shipped in OxCaml, but you can also run a wasm toplevel
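The multi-index syntax mentioned in the list above is plain OCaml (4.10+): you define index operators with the `;..` form, and `a.%{i;j}` desugars to a call taking an int array. A minimal sketch:

```ocaml
(* User-defined multi-index operators over Bigarray.Genarray. *)
let ( .%{;..} ) = Bigarray.Genarray.get
let ( .%{;..}<- ) = Bigarray.Genarray.set

let demo () =
  let a =
    Bigarray.Genarray.create Bigarray.float64 Bigarray.c_layout [| 2; 2 |]
  in
  a.%{0;1} <- 42.0;  (* desugars to set a [|0;1|] 42.0 *)
  a.%{0;1}           (* desugars to get a [|0;1|] *)
```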
The size variants for floats and integers are definitely appreciated.
For the "read-only BigArrays": At the time I didn't know any Rust, but today that would simply be passing a mutable or immutable reference. Similar to the Fortran in/out designators in some way. I think that's pretty important when you have some complicated numerical code, sometimes with in-place modification.
Since there is a "zero_alloc checker", maybe a similar kind of annotation exists or could be added? Something like
    let foo (x : [@readonly]) = ...
      x.{0} <- 1.23
               ^ Attempt to write to read-only array

> Switching to OxCaml with exclave_ stack_ annotations drops p99.9 latency from 29 ns to 9 ns per packet on the dispatch hot path, and removes GC pressure entirely (394 minor GCs to zero over 25 million packets). Throughput is comparable [...]
I got a similar result with my 'httpz' stack a few months ago (https://anil.recoil.org/notes/oxcaml-httpz), which my website's been running on without drama. And, I gotta say, OxCaml's a surprisingly robust compiler for being packed full of bleeding-edge extensions: not a single crash on my infra is attributable to a compiler bug (plenty of bad OCaml code, but none due to a compilation bug).

GCed languages do not have to be slow if you keep the garbage to only where it is necessary (or where you can allocate once and never collect).
Lisp Machines dialects (Genera, TI, Xerox) had primitives for stack allocation.
Then we had Cedar, CLU, Oberon and all its descendants, Modula-2+, Modula-3, Eiffel, Sather, and probably others during the last century.
Ironically, the final design for Valhalla in Java seems to be quite close to what Eiffel already had in 1986.
Mentality only gets changed by people pushing against "this is how it has always been".
Also great to see the OCaml improvements, as my first ML was Caml Light.
Having never been in this situation, I wonder how difficult it is to bend a garbage-collected language to behave like a non-garbage-collected one.
Many GC languages do so.
The hard part is that the difference is part of the type system, and you might need to refactor some code moving between value and reference types.
Hey dsab! I agree, but CCSDS is what we have today. We need to support it properly first if we ever want to extend or transition away. It also doesn't help that there's no good open-source implementation of the whole stack, especially the SDLS part, which makes the transition even harder.
On the type-safety side, I found typed combinators really useful for describing parsing and serialising (see my earlier post on ocaml-wire[1]), and keeping the protocol logic pure (separate from I/O) makes the whole thing much easier to test and reason about. OCaml's fuzzing support pairs really well with types too. This is basically the nqsb-TLS approach [2], which has held up in ocaml-tls for a decade.
[1] https://gazagnaire.org/blog/2026-03-31-ocaml-wire.html [2] https://www.usenix.org/conference/usenixsecurity15/technical...
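The typed-combinator style described above can be sketched in a few lines (this is illustrative only, not the ocaml-wire API): a codec pairs an encoder with a decoder that returns the parsed value plus leftover input, and combinators compose codecs so the frame layout reads like a type.

```ocaml
(* A codec is an encoder plus a decoder returning (value, rest). *)
type 'a codec = {
  encode : 'a -> string;
  decode : string -> ('a * string) option;
}

(* A one-byte unsigned integer field. *)
let u8 : int codec = {
  encode = (fun n -> String.make 1 (Char.chr (n land 0xff)));
  decode =
    (fun s ->
      if String.length s >= 1 then
        Some (Char.code s.[0], String.sub s 1 (String.length s - 1))
      else None);
}

(* Sequence two codecs: encode both, decode one after the other. *)
let pair a b = {
  encode = (fun (x, y) -> a.encode x ^ b.encode y);
  decode =
    (fun s ->
      match a.decode s with
      | None -> None
      | Some (x, rest) -> (
          match b.decode rest with
          | None -> None
          | Some (y, rest') -> Some ((x, y), rest')));
}
```

Because both directions live in one value, round-trip properties (decode after encode yields the original value) are trivial to fuzz, which is the pairing with types mentioned above.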
I taught a course on concurrent programming based on OCaml 5 and OxCaml where almost all of the code in the teaching materials was vibe coded. I reviewed all of the code (because I was teaching it to a class of 50+ students) and frankly the agent writes better O(x)Caml (mostly) than me.
There's not that much downside since the annotations only change the performance characteristics of the program, and the static type system rejects inconsistent annotations.
I don't even have a good conjecture about why this is the case, but right now all my assisted coding is in MLs for this reason.
https://spacebook.com/explorer?scene=8451c006-9e1a-4943-8202...
[1]: https://noelwelsh.com/posts/a-quick-introduction-to-oxcaml/
http://toastytech.com/guis/cedar.html
Unfortunately those attempts end up failing due to human reasons, not technical ones.
Hah, I was reading it as `0x`, a common prefix indicating hexadecimal, though I can't say my brain made any leap as to why "0xCAML" would be any more hex than standard.
For the rest of us, languages with automatic resource management are perfectly usable in systems programming.
Hey Maksadbek! Great question. It's a trade-off between speed of writing and trust in what you wrote, and OCaml (especially OxCaml) sits at a really good point on that curve.
Ada/SPARK has the strongest verification story and decades of space heritage, but the development cost is higher. Rust would work too, but I actively want a GC by default with the option to turn it off on the hot path. That is exactly what OxCaml's mode system gives you: zero minor GCs on the dispatch loop in the post, while the rest stays GC-managed. Haskell is great for type-driven design, but its runtime cost model is harder to reason about for low-jitter work.
Plus, the OCaml ecosystem gave me solid foundations on both fronts. For the protocol stack: MirageOS-style clean separation between wire serialisation, pure state-machine management and I/O, with ML modules and GADTs that map naturally onto protocol state machines. For the crypto: mirage-crypto for OCaml-facing primitives (fiat-crypto under the elliptic curves), and libcrux for ML-DSA-65 post-quantum signing. The CCSDS and BPv7/BPSec layers themselves I had to write from scratch (my earlier posts walk through how), and 20 years of OCaml muscle memory definitely helped!
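The "GADTs map naturally onto protocol state machines" point above can be illustrated with a toy session type (this is my own sketch, not the actual code from the posts): indexing the session type by a phantom state makes invalid transitions fail to typecheck.

```ocaml
(* Phantom state indices: a session is either idle or active. *)
type idle
type active

type _ session =
  | Idle : idle session
  | Active : int -> active session  (* payload: sequence counter *)

(* Only an idle session can be started. *)
let start (s : idle session) : active session =
  match s with Idle -> Active 0

(* Only an active session can send; each send bumps the counter. *)
let send (s : active session) : active session =
  match s with Active n -> Active (n + 1)

let seq (s : active session) : int =
  match s with Active n -> n
```

With this encoding, `send Idle` is a compile-time type error rather than a runtime protocol violation, which is the property you want before code flies.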
That is quite an affirmation! I would like to see OCaml being there.
Rust is clearly well positioned for deeply embedded work, and has actual C/C++ level performance. Given AI coding assistance, Rust is looking more and more approachable...and of course faster processors and compiler improvements will solve the compilation speed issue over time.
All that said, there's nothing wrong with a fast, safe language with ML syntax!
(One dark horse in all this is Mojo, which may provide Rust level safety with a more ergonomic language, and a much faster compiler...)