sedatk
a day ago
The book actually looks great, and I agree with the first sentiment about S.H.I.T., but the author incorrectly assumes that tests are used as a reliability indicator. Maybe some do use them for that, but most teams don't. Tests are regression detectors. They are there to prevent you from causing unintended behavior changes while modifying other parts of the code. Tests never tell you whether your code, or even your spec, is reliable. They just tell you that the code adheres to the given spec.
I also dislike TDD, but for a different reason: it incorrectly assumes that the spec comes before the code. Writing code is a design act too. I talk about that in Street Coder.
rhdunn
21 hours ago
I have a pragmatic approach to TDD, i.e. it doesn't practically matter whether you write the code or the tests first, as long as the relevant code has tests:
1. write the test code first (possibly with a skeleton implementation) if you want to get an idea/feel for how the class/code is intended to be used (a sketch of this follows the list);
2. write the code first if you need to;
3. ensure that you have at least one test at the point where the code is minimally functional.
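A minimal sketch of point 1, assuming Python and pytest (the Temperature class and its methods are made up for illustration): the test is written first, against a skeleton, to feel out how the class is meant to be used.

    import pytest

    class Temperature:
        def __init__(self, celsius: float):
            self.celsius = celsius

        def to_fahrenheit(self) -> float:
            raise NotImplementedError  # skeleton: the test below drives the design

    def test_boiling_point_conversion():
        t = Temperature(celsius=100.0)
        assert t.to_fahrenheit() == pytest.approx(212.0)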
More generally:
1. don't aim for 100% code coverage (around 80-90% should be sufficient);
2. test a representative example and appropriate boundary conditions;
3. don't mock classes/code you control... the tests should be as close to the real thing as possible, otherwise when the mocked code changes your tests will break and/or not pick up the changes to the logic -- Note: if wiring up service classes, try to use the actual implementations where possible (first sketch after the list);
4. use a fan in/out approach where relevant... i.e. once you have tests for the various states/cases in class A (e.g. lexing a number: '1000', '1e6', '1E6'), you only need to test the cases that are relevant to class B (e.g. token types such as integer/decimal/double, not every lexical variant) (second sketch after the list);
5. test against publicly accessible APIs, etc... i.e. wherever possible, don't test/access internal state; look for/test publicly visible behaviour (e.g. don't check that the start and end pointers are equal, check that is_empty() is true and length() is 0) -- Note: testing against internals is subject to implementation changes, whereas public API changes should be documented/properly versioned (third sketch after the list).
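Here's roughly what I mean by point 3, as a Python sketch (PriceCalculator and TaxRule are hypothetical names): wire the real collaborator into the test rather than a mock, so the test still exercises the actual logic when that logic changes.

    import pytest

    class TaxRule:
        def apply(self, amount: float) -> float:
            return amount * 1.20  # 20% tax

    class PriceCalculator:
        def __init__(self, tax_rule: TaxRule):
            self.tax_rule = tax_rule

        def total(self, net: float) -> float:
            return self.tax_rule.apply(net)

    def test_total_uses_real_tax_rule():
        # real TaxRule, not a mock: if the tax logic changes, this test notices
        calc = PriceCalculator(TaxRule())
        assert calc.total(10.0) == pytest.approx(12.0)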
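The second sketch, for point 4 (a toy lexer standing in for class A and a toy consumer standing in for class B): the lexer tests fan in over the lexical variants, while the downstream test only cares about token types.

    from enum import Enum, auto

    class TokenType(Enum):
        INTEGER = auto()
        DOUBLE = auto()

    def lex_number(text: str):
        # toy lexer: classify a numeric literal
        if any(c in text for c in ".eE"):
            return (TokenType.DOUBLE, text)
        return (TokenType.INTEGER, text)

    # class A: cover the lexical variants here...
    def test_lexer_number_variants():
        assert lex_number("1000")[0] is TokenType.INTEGER
        assert lex_number("1e6")[0] is TokenType.DOUBLE
        assert lex_number("1E6")[0] is TokenType.DOUBLE

    # class B: ...so downstream only the token types matter, not every variant
    def describe(token):
        token_type, _ = token
        return "whole number" if token_type is TokenType.INTEGER else "floating point"

    def test_describe_by_token_type():
        assert describe(lex_number("1000")) == "whole number"
        assert describe(lex_number("1e6")) == "floating point"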
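And a third sketch for point 5 (a toy Buffer with an internal index pair): assert on the publicly visible behaviour, not on the internal state.

    class Buffer:
        def __init__(self):
            # implementation detail: start/end indices
            self._start = 0
            self._end = 0

        def is_empty(self) -> bool:
            return self._start == self._end

        def length(self) -> int:
            return self._end - self._start

    def test_new_buffer_is_empty():
        buf = Buffer()
        # don't assert buf._start == buf._end (internal state);
        # assert the public behaviour instead
        assert buf.is_empty()
        assert buf.length() == 0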
d-lisp
21 hours ago
Businesses do business, but there are endeavours to make tests serve as reliability indicators, and in some (critical) domains you do write them for exactly that purpose. Write tests the way test theory intended: as formal verification.
There is software for which writing code is a design act, and there is software for which you write specs before anything else. I don't know whether a) they are the same, b) they are different, or c) one is better than the other.
fristovic
21 hours ago
I'm sorry I was not familiar with your game sir... Street Coder was never on my radar but right now it is the first thing I'm buying when the learning budget at my company resets in a couple of days!
kelnos
20 hours ago
I didn't read it as the author assuming tests are used as a reliability indicator. I really liked the example of the discounts applied to the shopping cart: it shows the difference between systems thinking and spec-checklist thinking (for lack of a better term). Writing individual tests for specific scenarios that failed (and then were fixed in isolation) is a reactive approach that fails to consider or understand the system as a whole.
Recognizing and understanding that there's a larger problem with discounts is systems thinking. Fixing the code so that all discounts are applied in a predictable order, rather than just fixing the specific issue reported by a user, is systems thinking. Ditching the individual tests that independently cover the user-reported bug inputs/outputs, and replacing them with a test that covers the actual discount application ordering intended, expected, and (hopefully) implemented by the code, is systems thinking.
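To make that concrete, here's roughly what I imagine such a test looks like; this is my own sketch, not the book's code, and apply_discounts and the ordering rule (percentages before fixed amounts) are invented for illustration.

    import pytest

    def apply_discounts(price: float, discounts: list[tuple[str, float]]) -> float:
        # intended rule: percentage discounts are applied before fixed-amount ones,
        # regardless of the order in which they were attached to the cart
        ordered = sorted(discounts, key=lambda d: 0 if d[0] == "percent" else 1)
        for kind, value in ordered:
            if kind == "percent":
                price *= (1 - value)
            else:
                price -= value
        return price

    def test_discount_application_order_is_predictable():
        discounts = [("fixed", 5.0), ("percent", 0.10)]
        # one test pins down the ordering property for all discounts,
        # instead of one test per user-reported combination
        assert apply_discounts(100.0, discounts) == apply_discounts(100.0, list(reversed(discounts)))
        assert apply_discounts(100.0, discounts) == pytest.approx(85.0)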
Maybe that doesn't (or does?) illustrate the "Stop Hunting In Tests" concept, but I thought it was important nonetheless.