I'm making a game engine that uses rollback netcode for its multiplayer architecture. As far as I can tell, no physics engine supports incremental rollback so far. That means the entire physics engine state has to be snapshotted every frame, which makes large worlds basically infeasible with rollback netcode. I've made a physics engine which only snapshots the changes, so now I think you can have large worlds, as long as most of the world is static. I think that's true in most cases: when you're walking around a big spaceship, for example, all the walls, tables, control panels etc. don't really move. I wrote up a bit of a post to describe some of the cool things I discovered while making my own physics engine.
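To give a feel for the idea, here's a toy sketch of snapshotting only the changes (names and structure are illustrative, not my engine's actual internals): each frame records the prior state of only the bodies that moved, so rolling back replays those deltas newest-first instead of copying the whole world.

```typescript
// Toy sketch: snapshot only the bodies that changed each frame,
// so rollback cost scales with activity, not world size.
type BodyState = { x: number; y: number; vx: number; vy: number };

class DeltaHistory {
  // frame -> (bodyId -> that body's state *before* this frame changed it)
  private deltas = new Map<number, Map<number, BodyState>>();

  record(frame: number, bodyId: number, before: BodyState): void {
    let frameDeltas = this.deltas.get(frame);
    if (!frameDeltas) {
      frameDeltas = new Map();
      this.deltas.set(frame, frameDeltas);
    }
    // Only the first write per frame matters: it holds the pre-frame state.
    if (!frameDeltas.has(bodyId)) frameDeltas.set(bodyId, { ...before });
  }

  // Undo frames newest-first until the world is back at targetFrame.
  rollback(world: Map<number, BodyState>, fromFrame: number, targetFrame: number): void {
    for (let f = fromFrame; f > targetFrame; f--) {
      const frameDeltas = this.deltas.get(f);
      if (!frameDeltas) continue; // nothing moved that frame
      for (const [bodyId, before] of frameDeltas) {
        world.set(bodyId, { ...before });
      }
    }
  }
}
```

The static walls, tables and control panels never appear in the delta maps at all, which is why a big mostly-static world stays cheap to rewind.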
The determinism is partly possible because WebAssembly is deterministic (except for a few known cases https://github.com/WebAssembly/design/blob/main/Nondetermini...), and partly because I’m making sure to use my own trigonometric functions, and the entire game simulation is executed single threaded with a known order of execution.
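The reason for custom trig is that the spec leaves library functions like Math.sin free to differ in the last bits between platforms, while plain +, -, *, / on IEEE-754 doubles are bit-exact everywhere. A rough illustration of a deterministic sine (not my exact code) looks like this:

```typescript
// Rough illustration of a deterministic sine: a fixed-length Taylor series
// built only from +,-,*,/ on IEEE-754 doubles, which round identically on
// every platform, unlike libm's sin.
const PI = 3.141592653589793;

function detSin(x: number): number {
  // Range-reduce into [-PI, PI] so the series converges quickly.
  x = x % (2 * PI);
  if (x > PI) x -= 2 * PI;
  if (x < -PI) x += 2 * PI;
  // Taylor series: x - x^3/3! + x^5/5! - ... with a fixed term count,
  // so every machine performs the exact same sequence of roundings.
  let term = x;
  let sum = x;
  for (let n = 1; n <= 9; n++) {
    term *= -x * x / ((2 * n) * (2 * n + 1));
    sum += term;
  }
  return sum;
}
```

A fixed iteration count matters as much as the arithmetic: an "iterate until converged" loop could exit after a different number of steps on different inputs and break bit-exactness guarantees you were relying on elsewhere.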
If you meant to ask how it compares to non-deterministic physics engines: they might be faster on the physics itself but would be slower on the rollback, and I think in most reasonably sized games the slow rollback would dominate, so they would be slower overall. But you wouldn’t make a rollback netcode game with a world that size anyway, at least maybe not until now, so it’s a bit of a false comparison. They’re each good at their own use cases.
If I understood correctly, the aim of the engine is to lower the in-memory size of the history of game states, by only snapshotting the delta. I'm also curious what would happen if, instead, you'd just run any deterministic snapshottable physics engine, and delta compressed the history on the fly. I think this is how, for example, Braid works.
Might be that it doesn't work, that running the delta check on two big enough snapshots would be too slow, and that's what this engine fixes. But would love to hear if it was considered.
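To be concrete about what I'm imagining (just a sketch, not any engine's real format): XOR each new snapshot buffer against the previous one, then run-length encode, so unchanged bytes cost almost nothing.

```typescript
// Sketch of on-the-fly delta compression of full snapshots: XOR against the
// previous snapshot (unchanged bytes become 0), then run-length encode the
// zero runs. Assumes both snapshots have the same length.
function deltaEncode(prev: Uint8Array, curr: Uint8Array): number[] {
  const out: number[] = [];
  let zeros = 0;
  for (let i = 0; i < curr.length; i++) {
    const d = curr[i] ^ prev[i];
    if (d === 0) {
      zeros++;
    } else {
      out.push(zeros, d); // run of unchanged bytes, then one changed byte
      zeros = 0;
    }
  }
  out.push(zeros); // trailing run of unchanged bytes
  return out;
}

function deltaDecode(prev: Uint8Array, encoded: number[]): Uint8Array {
  const out = new Uint8Array(prev.length);
  let i = 0, k = 0;
  while (k < encoded.length) {
    let zeros = encoded[k++];
    while (zeros-- > 0) { out[i] = prev[i]; i++; } // copy unchanged run
    if (k < encoded.length) { out[i] = prev[i] ^ encoded[k++]; i++; } // apply one change
  }
  return out;
}
```

For a mostly-static world the encoded history would be tiny, but the encoder still has to scan the full snapshot every frame, which is the cost I suspect might not scale.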
He does this by rolling back and re-simulating only a subset of the world, greatly reducing the amount of CPU required. It's cool that he's approaching this by adding support for it in the physics engine itself, vs. making it something that each game has to do itself.
Delta compression is an unrelated technique which reduces the amount of bandwidth sent from server to client, by sending only the differences between the snapshot at baseline frame n and the current snapshot at frame m on the server.
Just want to clear this up for anybody trying to follow along. Bringing in delta compression is an unrelated thing (but somewhat similar conceptually). It might confuse people to talk about these things at the same time, if they're really just trying to understand what the author is doing in the article.
cheers
- Glenn
The entire gamestate has to be rolled back when using this style of netcode, regardless of bandwidth; reducing the size of snapshots in memory can also make it faster to rebuild.
EDIT: I misunderstood the previous comment, I think you are rolling back everything that changed, and not rolling back objects which were "static" in that timeframe.
Just to add to the general discussion for everyone following along - rollback netcode only sends inputs around, not state, so it doesn't really have much to send. I think I'm doing about 1.5 KB per second. When you point your mouse it sends that data in 10 bytes. There's not a lot to delta compress.
This way if one input packet gets lost, the very next one getting through will have all the inputs for the last 1-2 seconds, and this greatly improves how well your game will play under packet loss.
When you do this, you can even encode inputs from left -> right and, in effect, delta encode them within the packet! Inputs don't change that much, so you can get smart with the encoding and optimize it down to basically nothing.
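A rough sketch of what I mean, with invented names (assuming consecutive frames and a button bitmask per input): write the newest input in full, then XOR each older input against its newer neighbour, so repeated inputs encode as zeros.

```typescript
// Rough sketch (names invented): each packet carries the last N input frames.
// The newest is written in full; each older frame is XORed against its
// newer neighbour, so unchanged inputs become zeros that compress away.
// Assumes history holds consecutive frames, oldest first.
type InputFrame = { frame: number; buttons: number }; // buttons: bitmask

function encodeInputs(history: InputFrame[]): number[] {
  if (history.length === 0) return [];
  const newest = history[history.length - 1];
  const out = [newest.frame, newest.buttons];
  for (let i = history.length - 2; i >= 0; i--) {
    // Delta against the next-newer frame; repeats become 0.
    out.push(history[i].buttons ^ history[i + 1].buttons);
  }
  return out;
}

function decodeInputs(packet: number[]): InputFrame[] {
  if (packet.length === 0) return [];
  const frames: InputFrame[] = [{ frame: packet[0], buttons: packet[1] }];
  for (let k = 2; k < packet.length; k++) {
    const prev = frames[frames.length - 1];
    frames.push({ frame: prev.frame - 1, buttons: prev.buttons ^ packet[k] });
  }
  return frames.reverse(); // back to oldest-first order
}
```

Since a held button produces identical inputs for many frames in a row, a 1-2 second history usually collapses to a short run of zeros after this pass.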
I would see rare bursts of packet delay for ~1 second that would quickly resolve. In a rollback game where inputs are predicted well, oftentimes this would be unnoticeable.
I send up to 2s worth of input history every frame to handle these lag spikes. I also confirm inputs received, so in practice usually players are only sending a handful of recent unconfirmed inputs (with the 2s buffer available if unconfirmed inputs pile up due to lag).
My guess is their 2s window is for similar reasons, as a buffer for rare connection issues. Even if lag spikes are incredibly rare, they need to be handled for a reliable player experience.
Some people might find it interesting - I’m relaying packets from peer-to-peer using Cloudflare Realtime, which is like a one-to-many broadcast system for WebRTC. Each peer sends their input packet to Cloudflare, then Cloudflare forwards that on to the 10 other players (for example). It’s cool because (a) 10x less upload bandwidth from the peer (b) people’s IP addresses are not revealed to their peers and (c) Cloudflare is in 400 datacenters around the world so it adds minimal latency. Cloudflare Realtime is a really cool system that maybe more web game developers should look into!
Unfortunately, I’m paying for all the bandwidth that goes through Cloudflare Realtime and so I have perhaps over-optimised for minimising bandwidth by sending only one input per packet. The other part of my equation is I’m getting my server to broadcast authoritative batches of inputs every 100ms or so via TCP, so if a packet gets lost, every peer will eventually receive the input, but it might be a bit slow, and it will cause a big rollback that might be noticeable.
Reading your comment makes me think it might not be as expensive as I thought, and maybe I can play around with how long of an input period I resend for. Perhaps there is a better balance to strike between cost and reliability. So thanks for bringing this up!
Combining player control, multiplayer, non-player control, and physics is one of the tougher problems. I got it handled (enough) for my project, but I'd be very interested to read the source if Easel's physics engine gets open-sourced.
I picked game dev specifically because I wanted to build some things I had envisioned and found it challenging. And in the beginning each new concept within 3d modeling, optimisation, shaders, physics, lighting, shadows & rendering felt intractable and unmasterable.
Now, I have a basic working understanding of nearly everything that goes into traditional 3d game dev. Except the very cutting edge stuff. And have mastered things I was struggling with 2 years ago.
And recently I felt something that scared me. It was the feeling that within 2 years, I’ll have lost the excitement and challenges that learning game dev has brought me these past few years. And I could see how, to someone experienced, this was as boring as full-stack web dev was for me.
Declarative state and reactivity, lexical lifetimes and ownership, etc. Really curious how you set it all up and what prior art was your primary inspiration.
The export basically creates a page with an HTML iframe that embeds the hosted version of your game on easel.games, so that all the multiplayer and leaderboards continue to work.
Thanks for your interest!