> Linux Kernel is incredibly robust, has never been a broken mess, and has basically no type enforcement
Yes, as a result of strong development practices, a high skill floor, and work being done by a small number of people knowledgeable in the domain. These things are not mutually exclusive with type checking.
> In general, all typing does is move error checking into the compiler/preprocessor instead of testing
Which is an enormously powerful thing. Requiring tests to catch those errors means a test has to exist for every case, which makes your tests responsible not only for checking behaviour but simultaneously for checking types.
Compile-time type checking essentially eliminates the need for type-based tests, and it does so automatically for all code that exists, whereas checking types with tests has to be opted into case by case.
You also don't get the benefit of the compiler automatically rejecting the types you don't support (likely far more than the ones you do); covering that at test time is practically impossible, so you end up testing only the valid path. I've never seen a codebase that checks the negative paths for the hundreds or thousands of types that aren't supported.
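A minimal sketch of the point above, using Python with type hints (the function and its names are illustrative): a static checker such as mypy rejects every ill-typed call site automatically, including the "negative path" cases no one wrote a test for.

```python
# With annotations, a static checker flags wrong-type calls before
# anything runs; no test has to exist for each bad case.

def apply_discount(price_cents: int, percent: int) -> int:
    """Return price_cents reduced by percent (0-100), rounded down."""
    return price_cents * (100 - percent) // 100

# A checker rejects this line statically, before any test runs:
# apply_discount("100", 10)   # error: incompatible type "str"; expected "int"

# A test suite, by contrast, only covers the calls someone thought to write:
assert apply_discount(10_000, 10) == 9_000
```

Running `mypy` on the file catches the commented-out call without executing any code; a test suite would only catch it if someone had written that specific negative-path test.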
None of this is to say I'm against tests as end-to-end contracts, but moving type checking to compile time gives you a lot of extra kinds of assertions for free that you likely don't get from having tests to check types.
> There is a reason why NodeJS was the most used language
And the reason was? AFAIK Node came along as a runtime for using a familiar language outside of the browser. Coupled with a single-threaded, event-driven concurrency model out of the box, it was an enormously practical/easy choice, both in terms of language familiarity for developers and for the workloads it was given.
I dunno where this idea comes from that code erroring out is somehow catastrophic. If you pass an object of the wrong type to a function, that mistake is very easy to fix. If you are structuring your code with crazy inheritance to where this error can get hidden, that's solely a you problem.
> I dunno where this idea comes from that a code erroring out is somehow catastrophic.
I mean if it's a medical device it might not be great?
> If you pass a wrong object type to a function, that mistake is very easy to fix.
And with compile time checks you can avoid ever having to get to the point where you have to fix it.
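A hypothetical sketch of the failure mode being avoided (names are illustrative): without a static check, the wrong-type call only surfaces as a runtime exception, possibly deep in a call stack and long after the code shipped.

```python
# Without static checking, this mistake is only discovered at runtime.

def total_cents(amount: float) -> int:
    """Convert a currency amount to whole cents."""
    return round(amount * 100)

try:
    total_cents("19.99")  # a str slips through; "19.99" * 100 is string repetition
except TypeError as exc:
    print(f"caught only at runtime: {exc}")

# With the annotation plus a checker (e.g. mypy), the bad call is
# rejected before the code ever runs, so there is nothing to fix later.
```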
> Everyone keeps harping on type safety, but it just doesn't play out in reality.
If you ignore the vast number of cases where it does, and cherry pick an extraordinarily non-representative example like the Linux kernel.
> This is just laughable. Clearly you have extremely little experience with Python.
Or you have extremely little experience with the use cases where it applies, extremely little knowledge of the ongoing effort by the Python developers to address it, and think that ignorant mocking is an argument.
> And time spent on designing and writing type safe code is almost equivalent to time spent writing tests that serve as an end-to-end contract.
Do you write tests for every third-party function that interacts with your code, so that it never fails at runtime after a version bump?
How do you guarantee that your own refactoring is exhaustively covered by the prior tests you've written for the old version?
You don't need a test for every function. You probably want every function call covered by a test, though, otherwise you have untested code.
The exact granularity is a debate that has gone on for a long time. Nowadays, people seem to prefer larger tests that cover more code in one go so as to avoid lots of mocking / stubbing. Super granular tests tend to be reserved for libraries with no internal state.
While what you say could be argued, this is both an insufficient argument against, and irrelevant to, the post you’re commenting on.