>You can't trust anything "will resolve in time period X"
As written, this statement just means you can't trust anything at all. You still have to choose a time period at some point.
My (pedantic) argument is that timestamps/dates/counters have a range determined by the number of bits of storage they consume and the tick resolution. These ranges can be exceeded, and it's not reasonable for every piece of software in the chain to invent its own way to store time, or counters, etc.
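To make that concrete, here's a quick back-of-the-envelope sketch (Go, purely illustrative; the 1 ms tick and the 32-bit widths are just common examples, not anything from a particular system):

```go
package main

import "fmt"

func main() {
	// An unsigned 32-bit counter ticking once per millisecond
	// (GetTickCount-style) wraps after 2^32 ticks.
	const ticks = uint64(1) << 32
	fmt.Printf("32-bit 1ms counter wraps after %.1f days\n",
		float64(ticks)/1000/60/60/24) // ~49.7 days

	// A signed 32-bit count of seconds since the Unix epoch
	// overflows at 2^31 - 1 seconds: the year-2038 problem.
	fmt.Printf("32-bit signed Unix time overflows ~%.0f years after 1970\n",
		float64(1<<31-1)/(365.25*24*60*60)) // ~68 years
}
```

Same point either way: once you pick a width and a tick resolution, you've picked an expiry date.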
I've seen my fair share of issues from processes with uptimes of over a year, and some with uptimes of five years. Of course the wisdom there is just "don't do that, you should restart for maintenance at some point anyway," which is true, but it still means we are living with a system that will, in theory, break after a certain period of time, and we are sidestepping that by restarting the process for other reasons.
You can have liveness without a timeout. Think about it. Say you set a timeout of 1 minute in your application to transfer 500 MB over a 100 Mbps link. This normally takes 40 s, and since this is that machine's sole job, it fails fast.
One day, an operator is updating some cabling and switches you over to a 10 Mbps link for a few hours. The same transfer now takes closer to 400 s, so during that window every single one of your transfers is going to fail, even though if you were to inspect the socket, you'd see it still making progress.
This is why we put timeouts on the socket, not on the application. The socket knows whether or not it is still alive, but your application may not.
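A minimal sketch of what that looks like in practice (Go; copyWithProgressTimeout is a made-up name for illustration, not a standard library function). The deadline is re-armed before every read, so the transfer only fails when the socket stops making progress, never just because the total time exceeded some fixed budget:

```go
package transfer

import (
	"io"
	"net"
	"time"
)

// copyWithProgressTimeout reads from conn until EOF. It fails only if no
// bytes arrive for idleTimeout, not if the whole transfer takes too long.
func copyWithProgressTimeout(dst io.Writer, conn net.Conn, idleTimeout time.Duration) error {
	buf := make([]byte, 32*1024)
	for {
		// Re-arm the deadline before each read: as long as the link
		// delivers *something* within each idle window, we keep going,
		// however slow the link has become.
		if err := conn.SetReadDeadline(time.Now().Add(idleTimeout)); err != nil {
			return err
		}
		n, err := conn.Read(buf)
		if n > 0 {
			if _, werr := dst.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil // transfer complete
		}
		if err != nil {
			return err // a timeout here means the socket truly stalled
		}
	}
}
```

On the 10 Mbps day, the 500 MB transfer takes ~400 s instead of 40 s, but every individual read still completes well inside the idle window, so nothing fails.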
Yeah... it has felt kind of ridiculous over the years how many times I have tracked some bug I was experiencing down to a timeout someone added in the code of a project I was working with. I have come to the conclusion that the fix is always to remove the timeout: the existence of a timeout is inherently a bug, not a feature, and if your design fundamentally relies on a timeout to function, then the design is also inherently flawed.