algorithmsRcool
7 hours ago
This article undermines itself a bit by introducing an optimistic concurrency primitive that it calls "fence tokens", aka content hashes, data versions, or etags, which are all the same concept.
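For what it's worth, the mechanics are identical regardless of what you call the token. A toy in-memory sketch (all names here are mine, not from the article):

```python
# Toy sketch of a version-token (etag-style) conditional write.
# In a real system the version check and write happen atomically
# on the server/database side.

class VersionConflict(Exception):
    pass

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0  # the "fence token" / etag / data version

    def read(self):
        # Hand back the value along with the token it was read at.
        return self.value, self.version

    def compare_and_set(self, new_value, expected_version):
        # Succeeds only if nobody committed since our read;
        # otherwise the caller has to re-read and retry.
        if self.version != expected_version:
            raise VersionConflict(expected_version, self.version)
        self.value = new_value
        self.version += 1
```

Whether the token is a hash of the content or a monotonic counter only changes how you detect staleness, not the protocol.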
Distributed locking, like all pessimistic concurrency, has its place, but it comes with serious scaling concerns depending on how granular the locks are. Long-lived locks block other writers for large chunks of time, destroying latency. Very fine-grained locks eventually become dominated by networking overhead, since each logical write/update requires two to three round trips to the database for the lock alone.
Conversely, optimistic concurrency under high contention can lead to excessive waste, since clients must re-read and retry every operation that fails on a conflict.
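You can see the wasted work directly with a toy contended counter (illustrative code, not from the article; the internal lock just stands in for the database's atomic conditional write):

```python
import threading

# Toy demo of optimistic-retry waste under contention: every failed
# compare-and-set is a round trip that must be thrown away and redone.

class Counter:
    def __init__(self):
        self.value = 0
        self.version = 0
        self._lock = threading.Lock()  # stands in for server-side atomicity

    def read(self):
        with self._lock:
            return self.value, self.version

    def cas(self, new_value, expected_version):
        with self._lock:
            if self.version != expected_version:
                return False  # conflict: caller must retry
            self.value = new_value
            self.version += 1
            return True

def increment(counter):
    retries = 0
    while True:
        value, version = counter.read()
        if counter.cas(value + 1, version):
            return retries
        retries += 1  # wasted work: re-read and try again

counter = Counter()
threads = [threading.Thread(
               target=lambda: [increment(counter) for _ in range(100)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every increment eventually lands, but the retry count grows with contention
```

All 800 increments succeed eventually; the cost shows up as the retries that had to be discarded, which is exactly the failure mode under high contention.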
The only way to win is to reduce the scope of your writes/updates so you avoid contention as much as possible.