> It's possible that the outcomes might vary between two games
So just to clarify: in League, the 'dt' passed into each 'simulation step' is NOT constant? Isn't this kinda crazy? Your later articles talk about floating-point imprecision. Couldn't this variance in dt create odd behaviour and ultimately contribute to weird things like 'character movement speed' not being _exactly_ the same between games? (really small differences, but still...)
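To illustrate what I mean (a made-up minimal demo, nothing to do with League's actual code): integrating the same speed over the same total time gives slightly different floats depending on how the time is chopped up, because float addition isn't associative.

```cpp
#include <cstdio>

int main() {
    const float speed = 350.0f;          // e.g. units/second
    float fixedPos = 0.0f, varPos = 0.0f;

    // 1000 constant steps of 1/30 s.
    for (int i = 0; i < 1000; ++i)
        fixedPos += speed * (1.0f / 30.0f);

    // Same total time (500 pairs summing to 2/30 s), but jittery steps.
    for (int i = 0; i < 500; ++i) {
        varPos += speed * (1.0f / 30.0f + 0.005f);
        varPos += speed * (1.0f / 30.0f - 0.005f);
    }

    // The two positions differ in the low bits despite identical wall time.
    printf("%.6f vs %.6f\n", fixedPos, varPos);
    return 0;
}
```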
And beyond that, how do the client and the server synchronize with each other if the frame #s represent different positions in time? My mind is blown right now...
Note: I've worked on many networked games and have written rollback/resimulate/replay code. I don't really understand _why_ League wouldn't use a fixed time step. What's the advantage? In our games, the rendering of course uses the real dt passed in from the OS, but the simulation step is always fixed (see the sketch below). This means that during a replay, your computer could render the outcome differently if your frame rate is different, but the raw simulation is always the same.
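Concretely, the loop I mean is the standard fixed-timestep accumulator (a sketch with placeholder names, not code from any shipping game):

```cpp
#include <chrono>

// Simulation always advances in constant SIM_DT steps, so it's repeatable;
// rendering consumes the real OS dt and interpolates between sim states.
constexpr double SIM_DT = 1.0 / 60.0;

void Simulate(double dt)  { /* deterministic game logic */ }
void Render(double alpha) { /* blend prev/current sim state by alpha */ }

void RunLoop() {
    using Clock = std::chrono::steady_clock;
    double accumulator = 0.0;
    auto last = Clock::now();

    while (true) {
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - last).count();
        last = now;

        // Step the simulation zero or more times with a fixed dt.
        while (accumulator >= SIM_DT) {
            Simulate(SIM_DT);
            accumulator -= SIM_DT;
        }

        // Present at whatever the frame rate happens to be.
        Render(accumulator / SIM_DT);
    }
}
```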
For context, to show I at least have some idea what I'm talking about, I made this replay system (and the game has rollback multiplayer):
https://gooberdash.winterpixel.io/?play=5b74f7c0-8591-40dc-b...
I haven't played a lot of League, but I always assumed it would use deterministic lockstep networking (like its predecessor, Warcraft 3).
We're recording the integer clocks, though, and those don't change between runs. While game code converts things like (QPC ticks over tick-rate) to floating point, we don't sum those numbers directly. Instead, we internally store times as the raw integers and convert them to floats on demand (typically when an Update function asks for "elapsedTime" or a timer asks for a "timeSpan", like time since the start of the game).
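In sketch form, the idea is something like this (illustrative names, not our actual code):

```cpp
#include <cstdint>

// Times are stored as raw integer ticks (e.g. from QueryPerformanceCounter)
// and only converted to floating point at the point of use, so float
// rounding error never accumulates back into the stored clock.
struct GameClock {
    int64_t startTicks  = 0;
    int64_t nowTicks    = 0;
    int64_t ticksPerSec = 0;   // e.g. from QueryPerformanceFrequency

    // Exact integer math; identical across runs given identical inputs.
    void Advance(int64_t ticks) { nowTicks += ticks; }

    // Derived on demand; the double is never stored or summed.
    double ElapsedSeconds() const {
        return double(nowTicks - startTicks) / double(ticksPerSec);
    }
};
```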
LoL and TFT don't use a synchronized-simulation (lockstep, sometimes called peer-to-peer) networking model. LoL is Client-Server, meaning the replication is explicit and not based purely on playing back client inputs. This gives us more control over things like network visibility, LODs, and latency compensation at a feature-by-feature level, at the cost of increased complexity. Most of the games I've built over the years use this model, and the LoL team is super comfortable with it.
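To make the contrast concrete, here's roughly what each model puts on the wire (hypothetical message shapes, not LoL's actual protocol):

```cpp
#include <cstdint>
#include <vector>

// Lockstep: peers exchange only inputs; every machine runs the same
// deterministic simulation, so game state itself is never sent.
struct LockstepPacket {
    uint32_t tick;
    uint8_t  playerId;
    uint16_t encodedInput;   // move/attack commands for this tick
};

// Client-server: the server simulates authoritatively and explicitly
// replicates state, which lets it filter per recipient (fog of war,
// network LOD, lag compensation) at the cost of bandwidth and complexity.
struct ReplicatedUnitState {
    uint32_t unitId;
    float    x, y;
    float    health;
};

struct ServerSnapshot {
    uint32_t tick;
    std::vector<ReplicatedUnitState> visibleUnits;   // culled per client
};
```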
The GameClients are not deterministic in the way that the GameServer is, though they're naturally closer to that ideal since the GameServer itself is typically predictable.
Don't get me wrong, there's a time and place for lockstep replication, and LoL probably could have gone that way. I wasn't there when that direction was picked, but I would have likely made the same choice as my predecessors, knowing what I do about our approach to competitive integrity.
All this stuff predates ECS and a fully specified definition of what a live-service, continent-spanning MOBA is. All the tradeoffs make sense to me. The real question is: would it have been possible to define an engine solution that looks more like Overwatch, in the absence of a fully specified game? I feel like that is ECS's greatest weakness.
I feel like you're skipping a step needed to make this comment make sense: explaining why using the Overwatch model would be better, and why an ECS would need to be introduced at all.