1. Go. When I saw that code I wrote almost a decade ago still compiles and runs, I decided to use Go for everything. There were some initial troubles when I started using it back then, but now it's painless.
2. Haskell, which I use for DSLs and state machines.
3. Bash for all deployment scripts and everything.
4. TypeScript, well for the frontend.
Lately, I’ve been using Go and SQLite for nearly everything.
I don't think I have any motivation to look at any other language.
I gave up on Java, Python, Ruby, Rust, C++, and C# long ago.
Fun fact:
Same thing for the cloud: I just don't use managed cloud services anymore, only VMs or dedicated servers. I've found that when you want to run a service for a decade or more, you've got to run it yourself if you don't want it to cost a lot in the long run.
I manage a few MongoDB and PostgreSQL clusters. Most of the apps, like our email marketing tool (which sends thousands of emails each day), are simple Go apps + SQLite using less than 512 MB of RAM.
Same for SaaS billing, the solution is entirely written in Go and uses Postgres. (I didn’t feel safe here using SQLite for this for a multi-tenant setup.)
Our chat/ticketing system is SQLite + Go. Deployment is easy: just upload the cross-compiled Go binary + a systemd service file; Alloy picks up the logs and ships them to Grafana, which holds all the alerts.
I don't need to worry about "speed" for anything I do in Go, unlike Ruby/Python.
When something has to be correct, I model it in Haskell, as its rich type system helps you write correct code. The setup is not as painless as Go's, but performance is decent.
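A minimal sketch of that style (all names are hypothetical, not from the original comment): model the states and events as ADTs and write a total transition function, so GHC's exhaustiveness checking (e.g. -Wincomplete-patterns) flags any unhandled case.

```haskell
-- Hypothetical door state machine: ADTs + a total transition function.
-- Every (state, event) pair is written out, so GHC can verify totality.
data DoorState = Open | Closed | Locked deriving (Eq, Show)
data Event     = Push | Pull | TurnKey  deriving (Eq, Show)

step :: DoorState -> Event -> DoorState
step Open   Push    = Closed
step Open   Pull    = Open
step Open   TurnKey = Open      -- can't lock an open door
step Closed Push    = Closed
step Closed Pull    = Open
step Closed TurnKey = Locked
step Locked TurnKey = Closed
step Locked Push    = Locked
step Locked Pull    = Locked

main :: IO ()
main = print (foldl step Open [Push, TurnKey, TurnKey, Pull])  -- prints Open
```

Running a sequence of events is just a fold over `step`, and adding a new state makes the compiler point at every transition you forgot.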
I write good documentation and deployment instructions right into the monorepo. For a small team this is more than enough, imho.
No Docker, no Kubernetes; just simple scripts + Grafana + Prometheus + Loki, with Alloy/node_exporter for collection. Life couldn't be any simpler than this.
Especially regarding Bash.
I used to be at a few companies where most developers just couldn't or wouldn't write in more than one language, and it was always a pain to maintain the different runtimes, languages, packages, and internal dependencies of things that could have been a 20-line Bash script, yet had to be maintained and updated from time to time.
I understand people have their own limitations and reasons, but having to constantly deal with “wrong tool for the job” for the thousandth time gets frustrating.
Especially in cases where four different languages were used across the company because different people had different preferences. The worst case was Python/Ruby/C#/JavaScript.
I get that Bash is not perfect, but I enjoy its simplicity and directness, and the multitude of problems I've seen caused by not using it has shown me it's the better tradeoff.
This makes me sad and sounds very naive. AWK is a fantastic language on its own and should be called out when used as such.
Shell for the scripts. I haven't tried to work through much DSL as I really am not a fan of DSLs. Maybe I'll give haskell a shot again to see if it sticks.
Even though we have different tastes in languages and are in completely different ecosystems, TypeScript is still the lingua franca lol.
In theory most websites could be done with statically rendered HTML and CSS and maybe a little JS (but not as a requirement), with noscript fallback flows. MPAs are fine for most things, and noscript fallbacks can be added fairly systematically; in many cases it isn't that difficult. It's just that these days not many people bother or care.
The idea of a language with most of the batteries for a web server built in is nice. I've never considered Golang, but it is compelling. I'll have to check it out. Though Rust keeps catching my eye.
This is actually the biggest pain point I am running into as well, which significantly slows down the speed of deployment.
All in all, Java is pretty unique in the level of backwards compatibility it provides; I don't think any other language is comparable, especially since it offers both source and binary compatibility.
While this distinction is often useful, here we have to think about it from the perspective of users: you press the button to upgrade your toolchain, and code that formerly worked stops working. If a language supported upgrading your compiler/interpreter separately from your standard library then that would be different, but generally a standard library version is considered tightly coupled to a language version.
But the platform itself is extremely backwards compatible: you can find some old jar file created for a university course that still runs without an issue. Of course, if you have a bunch of libraries that do stuff like touch internal details (the sun.misc.Unsafe package can access internals and raw memory), you lose some of that compatibility. Recently even that has been locked down more and more, so maybe that's one reason for your experience (e.g. previously one could make a private field accessible and read it; now you can only do that if you explicitly pass a flag).
And given that Minecraft was (is?) proprietary, reverse engineered code base where plugins were hacked into, I guess this brittleness makes sense.
And fewer dependencies, and fewer vulnerabilities (if any at all, depending on your few dependencies).
Go is "only" a pain when you want to use your own copy of packages (because `replace` directives are always ignored everywhere except on the "root" package), and whenever you want to work with private Git repositories outside of the forges that have hardcoded config in the Go code (like GitHub) (because Go assumes there's an HTTPS server, and the only way to force it to use only SSH is with ugly workarounds AFAIK).
But despite this I still prefer it for personal projects because I can come back after not touching it for years, and the most I need to do is maybe update `golang.org/x/net` or something like that.
I dabbled with Rust some years ago. I think it is an excellent choice for sudo-rs and such, but for GUI and web apps I (perhaps I'm too stupid) end up with Arc<Mutex<...>> soup.
Certainly replacing a microservices morass with a single bare metal server running a single static go binary makes good fiscally responsible sense for a startup/MVP. But how does a CTO make the case that "we don't need React" when the developer can just get Claude to smack the React app around with a trout until it does what you want?
Basecamp may have done it but I get the feeling that's a major outlier.
We have a 1:1 copy of the app: on the JVM with Spring Boot it uses 2 GB of RAM; the Go version runs in 512 MB and is blazingly fast.

Of course it's possible to tune the Java app, but why bother, when we get the same low resource usage and better performance in Go from the get-go, while still writing naive, dumb code?

Deployment is super simple in Go: upload a single cross-compiled binary and you're done.
Rust needs a lot more effort to write correct code than Go in my experience. We get the same performance out of Go, with much less effort. At some point, it's just cheaper to start one extra instance than perform some low-level optimisation; modern hardware is fast enough that Rust-level optimisation is rarely needed for what we do.
In that context, I think Go might be a better, or at least more realistic, compromise in most cases.
If I use a JVM language, running my test suite takes 10 to 30 seconds. With Rust it spends 3 seconds compiling and half a second to run 250 tests.
The irritating parts of Rust are more related to bloated libraries like serde that insist on generating code, which massively slows down compilation for not much benefit.
Sounds like a bad build tool.
well there's your answer, isn't it?
> Java is a resource hog when you use patterns and libraries popular in Java land
Which Java land? The Java 8 land, which is all about design-pattern hell? Or modern Java in 2026, which is largely about terseness and functional programming? From the tone, they're referring to the former.
> Deployment is super simple in Go, upload a single cross compiled binary it's done
To me this just sounds like OP is unaware of simple things like jlink or jpackage, and their idea of deploying a java application probably involves launching an IDE.
> But when you'll code the same thing in Go using the same method
The same method would mean using a "Spring Boot for Go". Or, conversely, doing a clean implementation in pure Java or with equivalent lightweight libraries. If all you want to do is basic calculations, you don't compare a computer to a calculator and then complain the computer is heavy and slow to boot.
I agree and appreciate that there's a lot of legacy bloated java libraries out there which are "popular" and possibly for the wrong reasons, and that this is a problem. But, that aside, they're comparing building a bespoke lightweight tool from scratch in a language they enjoy, to using a bloated framework they can't be bothered to fine-tune on a language they associate with pre-2000 patterns. Just say you're more familiar with Go than modern Java and enjoy it more and leave it at that. Java has made great leaps into becoming a very beautiful language recently, and biased rants like this aren't helping.
If you're trying to avoid the cloud, like OP, which hosting option is suitable for Clojure? What do you use? I believe Clojure (on the JVM) has higher RAM requirements?

And Go has pocketbase.io, which looks quite interesting. Do you know whether something similar exists for Clojure, or is it straightforward enough to compose your own from various Clojure libs?
Curious if you've tried to use agents to read / write Haskell and how the experience has been?
In the realtime/high-assurance systems world, where garbage collection can be a huge source of non-determinism and overhead, we don't have great options.
Zig is really the only language (idk about Odin?) trying to take the same approach that C did in giving you absolute control over a minimally abstracted CPU model. Us folks who need/want maximum control/performance should be allowed to have nice things too.
I can highly recommend it, especially because you have Haskell experience (you get all the usual suspects, like ADTs, exhaustive pattern matching, etc., in Lisette code). It has a fast compiler too, and produces human-readable Go code. It also comes with great tooling out of the box (formatter/LSP, etc.).
But, flipping the script, if you want to see something like Zig's `Io` interface in Haskell then have a look at my capability system Bluefin, particularly Bluefin.IO. The equivalent of Zig's `Io` is called `IOE` and you can't do IO without it!
https://hackage-content.haskell.org/package/bluefin-0.5.1.0/...
Regarding custom allocators and such, well, that could fit into the same pattern, in principle, since capabilities/regions/lifetimes are pretty much the same pattern. I don't know how one would plug that into Haskell's RTS.
[1] Languages designed around capability passing often have other features, like capture checking to ensure capabilities aren't used outside the scope where they are active. There are only two such languages I know of: Effekt (see https://effekt-lang.org/tour/captures) and Scala 3 (see https://docs.scala-lang.org/scala3/reference/experimental/cc...). However, this is not core to the idea of capability passing.
I don't see how it's true in any meaningful sense. It seems about the same as stating that any function is an example of the reader monad.
The whole point of monads in programming languages is as an _abstraction_ that allows one to ignore internals like how the IO token is passed around.
Maybe Zig is a language for people who are scared of abstraction. Otherwise they'd presumably be using something more powerful like Rust.
fn Maybe(comptime T: type) type {
    return union(enum) {
        value: T,
        nothing,

        const Self = @This();
        pub fn just(the_val: T) Self { return .{ .value = the_val }; }
        // renamed from `nothing`: a decl can't share its name with a field
        pub fn none() Self { return .nothing; }
    };
}
Over this?

    data Maybe a = Just a | Nothing

For a plain optional, Zig has builtin syntax:

    var value: ?T = null;

Write: value = 10;

Read: if (value) |x| ... (the capture is const; to mutate in place, capture a pointer: if (value) |*x| x.* += 1;)

The annoyingness of the thing you tried to do in Zig is a feature: it's a "don't do this, you will confuse the reader" signal. As for optionals, it's a pattern so common that it's worth having builtin optimizations, for example @sizeOf(*T) == @sizeOf(usize) but @sizeOf(?*T) != @sizeOf(?usize). If optionals were a general sum type, you wouldn't be able to make these optimizations easily without extra information.
If the article says "functional programmers should take a look at Zig", and Zig makes algebraic data types hard, then maybe they shouldn't use it.
If you even say "the annoyingness is a feature, use zig the way it is intended to be used" then that's another signal for functional programmers that they won't be able to use zig the same way they use functional languages.
zig makes stupid metaprogramming tricks on algebraic types annoying (not hard).
So, being precise: Zig is not necessarily annoying for FP programmers (my main tool of the trade is Elixir). Zig is made to be annoying for architecture astronauts.
Rust has these optimizations (called "niche optimizations") for all sum types. If a type has any unused or invalid bit patterns, then those can be used for enum discriminants, e.g.:
- References cannot be null, so the zero value is a niche
- References must be aligned properly for the target type, so a reference to a type with alignment 4 has a niche in the bottom 2 bits
- bool only uses two values of the 256 in a byte, so the other 254 form a niche
There are limitations, though, in that you still must be able to create and pass around pointers to values contained within an enum, and so the representation of a type cannot change just because it's placed within an enum. So, for example, the following enum is one byte in size:
enum Foo {
    A(bool),
    B
}

Variant A uses the valid bool values 0 and 1, whereas variant B uses some other bit pattern (maybe 2). But this enum must be two bytes in size:
enum Foo {
    A(bool),
    B(bool)
}

...because bool always has bit patterns 0 and 1, so it's not possible for an invalid value for A's fields to hold a valid value for B's fields.

You also can't stuff niches in padding bytes between struct fields, because code that operates on the struct is allowed to clobber the padding.
In Rust, which is arguably also a low level language, it looks like this:
enum Option<T> {
    None,
    Some(T),
}

In Zig, that means being able to use the language itself to express type-level computations, instead of Rust's angle brackets, trait constraints, and derive syntax, or C++ templates.
Sure, it won’t beat a language with sugar for the exact thing you’re doing, but the whole point is that you’re a layer below the sugar and can do more.
Option<T> is trivial. But Tuple<N>? Parameterizing a struct by layout, AoS vs SoA? Compile time state machines? Parser generators? Serialization? These are likely where Zig would shine compared to the others.
So Zig/C/C++/Rust all have ways to specify when and where allocations should happen, as well as the memory layout of objects.
Expressivity is a completely different axis on which these low-level languages separate. C has ultra-low expressivity; you can barely create any meaningful abstraction there. Zig is much better, at the price of a remarkably small amount of extra language complexity. And C++ and Rust have a huge amount of extra language complexity for the high expressivity they provide (having to be expressive even about low-level details makes e.g. Rust more complex as a language than a similar GC'd language would be, but this is a necessity).
As for this particular case, I don't really see a level difference here, both languages can express the same memory layout here.
Zig’s comptime is the primitive. Sum types, generics, etc. are things you can build on top.
The original example is the type-level equivalent of looking at:
int foo() {
    return 4;
}

and saying “why do I need all this function and return ceremony when I can just write the number 4 verbatim?”

I don't see how any of that becomes easier in the Zig case. It's just extra syntactic ceremony. The Rust version conveys the exact same information.
Foo<T> where for<'a> T: Bar<'a, baz(): Send>
Information dense, but every new feature needs language design work. Zig lets you express arbitrary logic, loops, conditionals, etc. It’s lower level of abstraction than a type constraints DSL.
For example, adding “the method in this trait is Send” to Rust’s DSL took a whole RFC and new syntax. The Zig equivalent could be implemented with an if statement on a type at comptime.
Or how about the transformation of an async function into a state machine. Years of work, deep compiler integration, no way to write such transforms yourself. Same with generators, which still aren’t stable. I’d really like to be able to write these things like any other program.
If you don’t want or need to express things at this lower level of abstraction, fair, same reason most people stick to scripting languages and don’t think about memory layout. But “extra ceremony” is really underselling it.
But there's literally none of that in the example we're talking about. It's just an inert datatype declaration. And if anything the Zig version is more abstract - for the Rust version I have to understand <T>, whereas for the Zig version I have to understand comptime, Self, and @.
It's dependency injection. And yes, you can model dependencies like a monad, but most people, even in less pure FP langs, don't.

I don't really say this just to be a pedant, but if you're an FP enjoyer, you will be disappointed if you get the picture that Zig is FP-like, outside of a few squint-and-it-looks-like things.
And he does admit you may have to squint, to appreciate the fp capabilities provided by Zig.
For example Swift enums, while in some ways clunky, can do a decent job both as newtypes and as sum types (unlike Java enums, which are a fixed collection of instances of the same class).
Here; I am not the only one who refers to it as dependency injection:
https://daily.dev/blog/zig-async-io-io-uring-zig-0-16-rethin...
"Zig 0.16 introduces std.Io, a flexible I/O abstraction that uses dependency injection, similar to the Allocator interface"
I don't think this even qualifies as correlation.
I would take another look at Common Lisp if I were the author. Manual memory management is very much an option where you need it.
In my opinion, the concept of automaton is fundamental and it deserves equal standing with the concept of function (even if it is a higher level concept that is built upon that of function).
I believe that functional programming is preferable wherever it is naturally applicable, and most programs have components of this kind, but most complete application programs, i.e. those which do input and output actions, are automata, not functions, and it is better not to attempt to disguise this with tricks that provide no benefits.
Therefore, I prefer a programming language that has a pure functional subset, allowing the use of that subset where desirable, but which also has standard imperative features (e.g. assignment), to be used where appropriate.
The truth is somewhere in the middle but it’s interesting how many ostensibly technical disputes seem to come down to placement on this philosophical axis.
> a function from a set X to a set Y assigns to each element of X exactly one element of Y.

[https://en.wikipedia.org/wiki/Function_(mathematics)]

If you write this as a monad, you get very similar syntax to procedural code.
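For instance (a base-only Haskell sketch, names mine): do-notation over IO reads almost exactly like procedural variable assignment, even though each line is still just monadic bind.

```haskell
import Data.IORef

-- do-notation over IO: each line reads like a procedural statement,
-- but it desugars to plain >>= under the hood.
bump :: IO Int
bump = do
  ref <- newIORef (0 :: Int)  -- "declare a variable"
  modifyIORef ref (+ 1)       -- "mutate it"
  readIORef ref               -- "read it back"

main :: IO ()
main = bump >>= print  -- prints 1
```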
An exception is different to an Either result type. Exceptions short circuit execution and walk up the call tree to the nearest handler. They also have very different optimization in practice (eg in C++)
In practice you use something like an exception monad, which makes this a lot more ergonomic, since you don't need to carry a case distinction around for every unwrap: an exception monad essentially has an implicit passthrough that says "if it's a value, apply the function; if it's an exception, just keep it". You only need to "catch" the exception if you actually need the value. In this case the exception monad is not that different from annotating a function with "throws": your calling function either needs its own throws (= error monad wrapper), in which case exceptions just roll through, or you remove the throws but now need to handle the exception explicitly (= unwrap the monad).
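A small Haskell sketch of that passthrough (parseAge/checkAdult are made-up names for illustration): Either's Monad instance is exactly the rule "a Left rolls through untouched; a Right feeds the next step".

```haskell
-- Hypothetical validation pipeline over Either.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] -> Right n
  _         -> Left ("not a number: " ++ s)

checkAdult :: Int -> Either String Int
checkAdult n
  | n >= 18   = Right n
  | otherwise = Left "too young"

-- No case analysis between steps: a Left short-circuits the rest,
-- much like an uncaught exception propagating past a `throws` method.
validate :: String -> Either String Int
validate s = do
  n <- parseAge s
  checkAdult n

main :: IO ()
main = do
  print (validate "42")   -- Right 42
  print (validate "abc")  -- Left "not a number: abc"
```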
I don't mind escape hatches - as long as they're visible/greppable in the source code. You can always write undefined/error/panic/trace directives while you're coding, then come back and remove them later.
This feels like the direction Algebraic Effects might take us.
> ...
> What facilities does the language provide me to create correct-by-construction systems and how easily can I program the type-system.
Isn't programming the type-system orthogonal to the program's domain in the same way that manual memory management is?
In which case, what's the term for the "proper sum types and pattern matching" flavour of things?
- most of them are dynamically typed (thus they don't need sum types, as there are no static types). The ones that have gradual type systems likely implement some form of them (off the top of my head I can only remember Typed Racket, and I think it implements them through union types)
- not all lisps lean functional: I believe that's mostly a prerogative of scheme and clojure (and their descendants); something like CL is a lot more procedural, iirc
- in most lisps, thanks to macros, you probably don't need the language to support some sort of match construct out of the box: just implement it as a macro [1]
In general the "proper sum types" side of functional programming is just the statically typed one, but even in dynamically typed FP languages you end up adopting sum type-esque patterns, like elixir's error handling (which closely resembles the usual Either/Result type, just built out of tuples and atoms rather than a predefined type), and I assume many lisps adopt similar patterns as well
I think you're conflating two things: "no compile-time type checking" and "no sum types" are different. Sum types are about modeling data as "one of these variants". You can do that in any language; the difference is whether the compiler enforces exhaustive handling or not. Clojure (for example) absolutely has the equivalent of sum types, just expressed idiomatically rather than enforced by a compiler: multimethods, keywords as tags, or tuple vectors can all represent tagged unions. Malli and Spec both provide sum types with validation (it just happens at runtime).
- Go - backend + CLIs
- TypeScript - frontend, occasionally zx for more complex scripts
- Nushell as my scripting language (I’ve been relentlessly using it everywhere I can instead of bash/zsh and man it is such an improvement)
I heard so much good stuff about both Zig and Rust and would love to eventually get to know one of them.
Yesterday I noticed I still don't know how to write
ls | where modified < ((date now) - 3wk) | each { |fn| rm $fn.name }
in zsh after all these years.

    find . -type f -mtime +21 -delete

Or if you have fd:

    fd -t f --changed-before 3w -X rm

This makes me feel old.
    dir | ? LastWriteTime -lt (Get-Date).AddDays(-21) | del

PowerShell is not case sensitive (but nushell and bash are), so this works too:

    dir | ? lastwritetime -lt (get-date).adddays(-21) | del

I actually ship stuff in Haskell, believe it or not. I also think Zig is very cool and have played around with it quite a bit. Yes, garbage collection hurts performance, but the reality is that the overwhelming majority of software does not suffer from the performance gap between well-written code in a reasonably performant functional GC language and a highly performant language with manual memory management. It's just not important. Not having to deal with the cognitive overhead of managing memory, and being able to work only in domain-specific abstractions, is a massive win for developer productivity, code-base simplicity, and correctness.
I think OxCaml's approach of opting in to more direct control of performance is interesting. I also think it's great that many functional patterns are making their way into imperative-first languages. Language selection is always about trade-offs for your specific use case. My team writes Haskell instead of Rust because Haskell is plenty fast for our use case and we don't have to write lifetime annotations everywhere and think about borrowing. If we needed more performance we would have no choice but to explore other languages and sacrifice some developer experience and productivity; that's very reasonable. I'm also not saying performance doesn't matter (if you're writing for loops in Python, stop). But this read to me like "because better performance exists with manual memory management, all garbage collectors are bad, so I'll force Zig to be something it's not in order to gain performance I probably don't need", which to me is an odd take. A more measured way of thinking about this might be: it can be useful to leverage functional patterns where appropriate in low-level languages, if you find yourself needing to write code in one.
It happens that in Rust you also don't have to write lifetime annotations everywhere. Depending on how your code is structured, the compiler infers lifetimes very well. In my current project we have lifetime annotations in very few places.
I opened the network log, disabled cache and reloaded to see it only transferred 8kb.
Keep up the good work!
> Monads are not some kind of obscure math-y thing that only the big brains think are necessary. No, instead monads are a fundamental abstract algebraic description of imperative programming as a computational context.
Yep, as a non-big-brainer, I definitely get it now. :)
Can comptime blow up compile times? Does it have arbitrary cutoffs like C++ template depth?
Zig tackles the halting problem a bit differently by putting the evaluation cutoff in userspace through the compiler builtin function `@setEvalBranchQuota`. You bump up the quota as you see fit.
Can you elaborate? There are only, what, 11 data types in Elixir?
[a: 1, b: 2] == [{:a, 1}, {:b, 2}]
Or maybe atom vs string keys in maps?
%{a: 1} vs %{"b" => 1}
Or keyword lists always needing to come last in lists?
[some: :value, :another] # error
[:another, some: :value] # valid
Or maybe something else entirely. Those are just things I remember having to lookup repeatedly when I was first learning elixir.
Honestly this sounds like monad bullshit. That's a struct/class/ADT/whatever you want to call it, they existed since forever. The only idea Zig had was that maybe we shouldn't make them global instances.
Monad transformers are one solution to this. This lets you write the composition rules for m2 once, and then reuse them for every m1. A solution, but boilerplatey.
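A sketch of that reuse with ExceptT from the transformers package (fetchConfig is a made-up example): the Either-style short-circuiting is written once, in ExceptT's Monad instance, and layered over any inner monad, here IO.

```haskell
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Control.Monad.Trans.Class  (lift)

-- ExceptT layers "Either e" short-circuiting over any inner monad m.
-- The composition rule lives in ExceptT's Monad instance and is reused
-- for every choice of m; IO below is just one instantiation.
fetchConfig :: String -> ExceptT String IO Int
fetchConfig key = do
  lift (putStrLn ("looking up " ++ key))  -- an inner-monad (IO) action
  if key == "port"
    then pure 8080
    else throwE ("unknown key: " ++ key)  -- short-circuits the rest

main :: IO ()
main = do
  r <- runExceptT (fetchConfig "port")
  print r  -- Right 8080
```

The boilerplate the comment mentions shows up as the `lift` calls (and grows with each extra layer), but the error-composition logic itself is never rewritten.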
I don't understand algebraic effects quite as well, but my understanding is that they do simply compose.
I haven't heard anyone writing code in Elixir complain about performance issues.
btw we do sometimes bitch about performance :)
I'm aware
I am definitely in the minority here, but I am not a fan of the kind of meta-programming that Zig and Rust offer, with Rust being especially atrocious. In the two decades I've been programming I can count on one hand the number of times meta-programming was an appropriate solution to a problem I had. Every time I reached for it, I got bit. There's a reason "when in doubt, use brute force" is sage advice, it may not be fast and glamorous, but it'll be a hell of a lot less opaque.
Odin is also my favorite language in its class. It’s genuinely a gem.
In addition to the normal value-to-value, type-to-type, and type-to-value functions, comptime lets you write static value-to-type functions.

With full dependent types, you can additionally write dynamic value-to-type functions, completing the value-to-type corner.

So in terms of typing strength: plain Haskell < Zig < dependently typed languages.
Ok, Zig is great. But won't it still suffer from the same headwinds as every other "better" language, namely that the industry won't adopt it? They have too much installed base and just want to hire Java/C#/etc. developers.
Why write:
EqPoint.eql(a, c)
When you can write:
Point.eql(a, c)