llm_nerd
5 hours ago
While print-type debugging has a place, the reason there are a lot of articles dissuading the practice is the observed reality that people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools.
This isn't just an assumption I'm making: years of being in developer leadership roles, and then watching a couple of my own sons learning the practice, has shown me in hundreds of cases that if print-type debugging is seen, a session demonstrating how to use the debugger to its fullest will be a very rewarding effort. Even experienced developers from great CS programs sometimes are shocked to see what a debugger can do.
Walk the call stack! See the parameters and values, add watches, set conditional breakpoints to catch that infrequent situation. What! It remains eye-opening again and again for people.
Not far behind is finding a peer trying to eyeball complexity to optimize, and showing them the magic of profilers...
PittleyDunkin
5 hours ago
> the reason there are a lot of articles dissuading the practice is the observed reality that people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools.
While perhaps this is true of some sort of junior developer, I have both written my own debuggers and still lean heaviest on print debugging. It's trivially reproducible, incurs basically zero mental overhead, and can be reviewed by another person. Any day I break out a debugger is a bleak day indeed.
Profilers are much easier to argue for as it is very difficult for one to produce equivalent results without also producing something that looks an awful lot like a profiler. But in most cases the mechanisms you mention are just straight unnecessary and are mostly a distraction from a successful debugging session.
Edit: in addition to agreeing with a sibling comment that suggests different problems naturally lend themselves more to debugging (e.g. when writing low-level code a debugger is difficult to replace), I'd also like to suggest a third option languages can take: excellent runtime debugging à la Lisp conditions. If you don't have to unwind the stack to catch an exception, if in fact you can modify the runtime context at runtime and resume execution, you quickly get the best of both worlds without having to maintain an often extremely complex tool that replicates an astonishing amount of the language itself, often imperfectly.
lolinder
5 hours ago
I find that which tools I need changes immensely depending on what kinds of projects I'm working on.
When debugging parsers for my toy programming languages print debugging is less helpful and I make heavy use of all the debug tools you mention. The same goes for most types of business logic—writing a test and stepping through it in the debugger is usually the way to go.
But when troubleshooting odd behavior in a complex web app, the inverse is true—there are usually many possible points where the failure could occur, and many layers of function calls and API calls to check, which means that sticking a debug statement prematurely slows down your troubleshooting a lot. It's better to sprinkle logs everywhere, trigger the unexpected behavior, and then skim the logs to see where things stop making sense.
In general I think there are two conditions that make the difference between the debugger or print being better:
* Do you already know which unit is failing?
* Is there concurrency involved?
If you don't yet know the failing unit and/or the failing part of the code is concurrent, the debugger will not help you as much as logs will. You could use logs to narrow down the surface area until you know where the failure is and you've eliminated concurrency, but you shouldn't jump straight to the debugger.
llm_nerd
3 hours ago
I think we need to differentiate between printf style debugging and considered, comprehensive logging. Ideally logging with logging levels. While the latter might seem to fall under the same umbrella -- both are printing some sort of text artifact history of execution -- the latter is long-term and engineered, and the former is generally reactionary.
e.g. LOG(INFO_LEVEL, "Service startup") and printf("Here11") are completely different situations.
Indeed, the very submission is arguing for printf style debugging instead of logging. Like it uses it as the alternative.
Real-world projects should have logging. It should have configurable logging levels such that a failing project in the wild can be configured to a higher logging level and you can gather up a myriad of logs from a massive, heterogenous cross-runtimes and platforms project and trace through to figure out where things went awry. But that isn't print debugging or the subject of this discussion.
lolinder
2 hours ago
> Indeed, the very submission is arguing for printf style debugging instead of logging. Like it uses it as the alternative.
Yeah, this is a bogus distinction they're drawing. Logging and printf style debugging are the same thing at different phases of the software lifecycle, which means they can't be alternatives to each other because they can't exist in the same space at the same time.
As soon as your printfs are deployed to prod, they become (bad) logs, and conversely your "printf debugging" may very well actually use your log library, not printf itself.
mark_undoio
4 hours ago
Have you ever been able to try https://replay.io time travel debugging as an alternative to conventional logging?
Last time I tried it you were able to add logging statements "after the fact" (i.e. after reproducing the bug) and see what they would have printed. I believe they also have the ability to act like a conventional debugger.
I think they're changing some aspects of their business model but the core record / replay tech is really cool.
ponector
4 hours ago
This. Last week at work I was investigating an odd flaky behavior. There was no way to do it with a debugger. I added logging to every suspicious place and ran all 40 containers of our distributed monolith in local Docker. It turned out there was a race condition between consumption of Kafka messages and REST calls.
marcosdumay
4 hours ago
If your parsers are pure¹, REPR testing and state-transition logging (trying X, X rejected, trying Y, Y succeeded with input "abc") will beat any other tool by such a margin that it will feel like they aren't even in the same competition.
1 - If your parsers are not pure, you either have a very weird application or should change that.
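A hypothetical Python sketch of that style (all names invented): a pure tokenizer whose only side channel is an explicit trace list, producing exactly the "trying X, X rejected, Y succeeded" narrative described above.

```python
# A pure parser: no side effects except an explicit trace of state
# transitions, in the "trying X / X rejected / Y succeeded" style.

def parse_int(s, pos, trace):
    trace.append(f"trying int at {pos}")
    end = pos
    while end < len(s) and s[end].isdigit():
        end += 1
    if end == pos:
        trace.append("int rejected")
        return None
    trace.append(f"int succeeded with input {s[pos:end]!r}")
    return int(s[pos:end]), end

def parse_word(s, pos, trace):
    trace.append(f"trying word at {pos}")
    end = pos
    while end < len(s) and s[end].isalpha():
        end += 1
    if end == pos:
        trace.append("word rejected")
        return None
    trace.append(f"word succeeded with input {s[pos:end]!r}")
    return s[pos:end], end

def parse_token(s, pos=0):
    trace = []
    for rule in (parse_int, parse_word):
        result = rule(s, pos, trace)
        if result is not None:
            return result[0], trace
    trace.append("no rule matched")
    return None, trace

value, trace = parse_token("abc")
print(value)
print("\n".join(trace))
```

Because the parser is pure, the trace for a failing input is reproducible every single run, which is what makes this beat stepping through by hand.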
Macha
5 hours ago
I do sometimes use print debugging despite having (in my opinion) a decent knowledge of the debuggers in the toolchains I use. Part of it is that you could set a conditional breakpoint for a condition, if you know what it is, but sometimes you're just probing to see what differs from expectations, and putting every expectation into a conditional breakpoint is a pain with most debugger UIs. In theory you could use logpoints instead of print statements, but again the UI for this is often a pain compared to just typing in print. And even when you get to the breakpoint, you'll increasingly run into the dreaded "this variable has been optimised away" in modern languages, and a breakpoint also doesn't give you a history of how it got there. Maybe if rewind debugging were more commonly supported that would help, but it isn't.
Also suspending a thread to peek around is more likely to hide timing bugs than the extra time spent doing IO to print.
arkh
4 hours ago
The thing is, in interpreted-language land print debugging has a power no debugger gives you: live debugging on your production instance.
Something is broken in prod, and you cannot reproduce it in your test environment because you think it may be due to a config (some signing keys, maybe) you can't check. And it looks like someone forgot to put logs around whatever is the problem.
You can either spend multiple hours trying to reproduce it and maybe find the cause, or take 5 minutes, bash into one of your nodes, add some logging live, and have a result right now: either you have your culprit or your hunch is false.
plorkyeran
11 minutes ago
If you can ssh into one of your production nodes and modify the code to add some logging, you can also attach a debugger to your production node.
gwervc
4 hours ago
With modern JS stacks being somewhat compiled anyway, and the backend often being C# or Java, I don't think that case is very applicable. Not to mention that developers logging on to production servers and making whatever changes they want is a huge red flag.
waitforit
3 hours ago
> The thing is, in interpreted languages land print debugging has a power no debugger gives you: live debugging on your production instance.
Remote debugging is a thing that exists.
macNchz
4 hours ago
I’ve definitely seen an undercurrent of “I don’t need the crutch of a debugger” sort of attitudes online over the years; it never really made sense to me. It can be painful pairing with someone who keeps adding print statements one at a time and repeating the 15-step process to get to them when they could have put in a breakpoint right out of the gate.
I still print stuff plenty, but when the source of an issue is not immediately obvious I’m reaching for the debugger asap.
lolinder
4 hours ago
> It can be painful pairing with someone who keeps adding print statements one at a time and repeating the 15 step process to get to them when they could have put in a breakpoint right out of the gate.
This does sound painful, but this is not what most people who advocate for print debugging are advocating for.
If I'm only going to add one print statement, that's obviously a place where a breakpoint would serve. When I do print debugging, it's precisely because I haven't narrowed down the problem that far yet—I may have ten theories, not one, so I need ten log statements to test all ten theories at the same time.
Print debugging is most useful when the incorrect behavior could be in one of many pieces of a complex system, and you can use it to rapidly narrow down which of those pieces is actually the culprit.
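A toy Python illustration of that workflow (all layer names invented): one tagged probe per suspect layer, so a single reproduction shows where the data stops making sense. Here, the transform layer never converts the string price, so the render step duplicates the string instead of doubling a number.

```python
# One tagged probe per suspect layer. A single run prints all of them;
# skimming the output narrows down which layer corrupts the data.
DEBUG = True

def probe(tag, value):
    if DEBUG:
        print(f"[{tag}] {value!r}")
    return value

def fetch():
    # Suspect 1: does the data arrive correctly? (It does.)
    return probe("fetch", {"price": "19.99"})

def transform(record):
    # Suspect 2: the bug -- the string price is never converted to a number.
    return probe("transform", {"price": record["price"]})

def render(record):
    # Suspect 3: multiplying a string repeats it instead of doubling it.
    return probe("render", record["price"] * 2)

result = render(transform(fetch()))
```

One reproduction prints all three probes at once; a breakpoint would have required guessing the right layer up front.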
elegantlie
4 hours ago
I wouldn't categorize debuggers as a crutch, for "lazy minds", or anything like that. Everyone should use the tools they feel most productive with.
However, at least personally, I've also felt that there was a lot of truth to that Ken Thompson quote. Something along the lines of: "when your program has a bug, the first thing you should do is turn off the computer and think deeply."
Basically, a bug is where your mental model of the code has diverged from what you've actually written. I think about the symptoms I'm observing, and I try to reason about where in the code it could happen and what it could be.
The suggestion in the parent comment that I'm just too stupid to look into or learn about debuggers is so condescending and just plain wrong. I've looked into them, I know how to use them, I can use them when I want to. I simply tend not to, because they don't solve any problem that I have.
Also, the implication that I don't use completely unrelated tools like profilers is equally asinine. Debuggers and profilers are two completely different tools that solve completely different problems. I use profilers almost every day of my career because it solves an actual problem that I have.
llm_nerd
4 hours ago
"The suggestion in the parent comment that I'm just too stupid"
If your insecurity leads you to misrepresent what someone actually said so disgustingly, maybe Hacker News isn't for you.
zeta0134
2 hours ago
I use a mix of strategies depending on the target platform. Right now, for nearly all of my hobby projects, the target platform is an old processor running on a weird game system without enough memory to run a real debugger and with no ability to expose that debugger's state to the PC. In these cases, I can't even really use printf (where would it print to?) and must instead rely on painting the debug information to the screen somehow. It's a wild and wacky set of techniques.
Of course, I pair this with a modern emulator for the target platform where I can at least see my disassembly, set breakpoints, watch memory values. But when I'm working on some issue that I can only reproduce on hardware, we get to bust out all the fun manual toys, because I just don't have anything else available. On the very worst days, it's "run this routine to crash on purpose and paint the screen pink. Okay, how far into the code do we get before that stops happening? Move the crash handler forward and search. (Each time we do this we are flashing an eeprom and socketing that into the board again.)"
d0mine
5 hours ago
Performance optimization is an excellent example of exactly the opposite: thinking first and building a model of the code before adding a trace, metric, or log call beats mindless debugging. Interactive debugging has its uses, but they may be fewer than they appear at first, and it encourages the wrong things (local focus, irrelevant details). You should first ask yourself how fast the code should be and why (build the model), and only then measure. Learning happens when you get unexpected results. There is limited utility in running profilers without a thought.
llm_nerd
4 hours ago
You should consider performance/efficiency at all stages. And that consideration should be based on an informed feedback loop where assumptions are validated and proven empirically. What developers think are performance patterns often wildly diverges from reality.
The scenario I gave is when there are performance problems with a developed project (you know -- where a profiler is actually usable, after they've already decided on an approach and implemented the code) and the developer is effectively guessing at the issues, doing an iterative optimize-this-part, then run-and-see-if-it's-fixed pattern. This is folly 100% of the time, yet it's a common pattern.
RedShift1
5 hours ago
I've used both and most of the time I'm still print debugging, because the big advantage of print debugging is that it shows you exactly the kind of information you're looking for and nothing else.
bloomingkales
5 hours ago
I kinda want to push back against the blanket statement that there are articles pushing back on print debugging. That implies there’s well known mind share thinking about it?
Is it real mind share? Is it bullshit?
Print debugging is the literal pocket knife of debugging.
llm_nerd
5 hours ago
There are loads of articles discouraging print debugging, and it's a very real thing that people fight against (and for). Print style debugging is the first thing most programmers learn, and for some it absolutely becomes a bad habit.
And to be clear, print debugging and pervasive, configurable logging are very different things, and the latter is hugely encouraged (even with logging levels), while the former is almost always suboptimal. Being able to have your client turn on "DEBUG" logging and send you the logs after some abnormal behaviour is supremely useful. Doing printf("Here!") in one's project is not, or at least not remotely as useful as better approaches.
bluGill
4 hours ago
Print 'here' is very useful. I get a log of how many times that function is called. If I log some data, I get how that data changes over time. Those are powerful tools.
llm_nerd
21 minutes ago
FWIW, many debuggers have facilities to do precisely this. The JetBrains debuggers allow you to set a breakpoint that -- in a non-stopping way -- simply logs every time it was passed, or logs whatever values you want it to log as an expression. So in one potentially non-stopping run you get an output of all of it.
julik
3 hours ago
The availability of tools is severely dependent on the runtime + language. With most of my work being in interpreted languages, it's just way easier to use either a REPL or print statements, as getting good debugging to work involves having Just That Particular (often commercial) IDE, Just That Particular Version of the runtime (often outdated), etc. These things frequently break, and before you've gotten it all working again you have spent so much time that using it over a REPL just isn't worth it. I never made the effort to master GUI-less debuggers like gdb, though.
That said, on one project I did have a semi-decent experience with a debugger for PHP (couple of decades back) and when it worked - it was great. PHP didn't have much of a REPL then, though.
llm_nerd
3 hours ago
Absolutely true that not all runtimes and languages have the same level of tooling. But the state of tooling has dramatically improved and keeps improving.
I use PyCharm for my projects including Python, for instance, and it has absolutely fantastic debugging facilities. I wouldn't want to use an IDE that lacked this ability, and my time and the projects are too valuable to go without. Similar debugging facilities are there for Lua, PHP, Typescript/JavaScript, and on and on. Debuggers can cross processes and even machines. Debuggers can walk through your stored procedures or queries executing on massive database systems.
Several times in this thread, and in the submission, people have referenced Brian Kernighan's preference for print debugging over debuggers. He said it in 1979 (when there was basically an absence of automated debugging facilities), and he repeated it in an interview in 1999. It's used as an appeal to authority, and I think it's just massively obsolete.
As someone who fought with debuggers in the year 2000, they were absolute dogshit. Resource limitations meant that using a debugger meant absolutely glacial runtimes and a high probability that everything would just crash into a heap dump. They were only usable for the tiniest toy projects and the simplest scenarios. As things got bigger it was back to printf("Here1111!").
That isn't the case anymore. My IDEs are awesomely comprehensive and capable. My machine has seemingly infinite processor headroom where even a 1000x slowdown in the runtime of something is entirely workable. And it has enough memory to effortlessly trace everything with ease. It's a new world, baby.
mark_undoio
5 hours ago
Another thing I think leads people to print debugging - which is both a strength and a weakness of the approach:
It minimises the mental effort to get to the next potential clue. And programmers are naturally drawn to that because:
1. True focus is a limited resource, so it's usually a good strategy to do the mentally laziest thing at each stage if you're facing a hard problem.
2. It always feels like the next time might be it - the final clue.
But these can lead to a trap when you don't quickly converge on an answer and end up in a cycle of waiting for compilation repeatedly whilst not making progress.
insane_dreamer
4 hours ago
> people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools
not always; sometimes print debugging is much more time efficient due to the very slow runtime required to run compute-intensive programs in debug mode. I'll sometimes forego the debugger capabilities in order to get a quick answer.
Thaxll
4 hours ago
Sometimes it's easier to add 3 printf() than to run your project under a debugger.
baq
4 hours ago
You’re absolutely right - but it’s worth mentioning that print debugging is the only sanity-preserving way to debug distributed systems (spans are basically super fancy prints) or systems which need to run at full speed (optimized builds) to reproduce bugs… sometimes the easy way is the only way.
orwin
3 hours ago
Isn't it better to use a logger than print statements for distributed systems? Maybe I'm putting too much logging everywhere, but distributed systems are typically a use case where a bug can appear, be 'fixed' (or 'fix itself'), then re-appear two months later (a heisenbug). In this case, DEBUG=true and relaunching the app with a logger is often better imho (and if your logger is good, it prints to stdout/stderr when your app is launched locally).
baq
an hour ago
Absolutely - if you want to make this distinction. I put logger.debug() and print() in the same bucket; if you're fancy, you've configured your print to emit logs or configured your linter to forbid print calls.
_0ffh
5 hours ago
Speaking from my own experience I'm not so sure that printf debuggers just have "incomplete knowledge [...] of modern debugging tools". I use printf (or the file-based equivalent, log files) quite a lot, but nobody can accuse me of not knowing good debugging environments.
Also, what's "modern" about "Walk the call stack! See the parameters and values, add watches, set conditional breakpoints"? Those are all things we had many decades ago (for some languages, at least). If anything, many modern debugging environments are fat and clunky compared with some of the ones from way back when. What has greatly improved, though, are time-travel debuggers, because we didn't use to have the necessary amounts of memory a lot of the time.
So please refrain from calling people with different preferences uneducated. [Ed. I retract this bit, though I think it is not unreasonable to associate lack of knowledge with lack of education (not necessarily formal education!) I don't want to quibble over semantics.]
dgfitz
4 hours ago
> So please refrain from calling people with different preferences uneducated.
OP said:
> … people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools.
I am educated. I have a metric fuckton of incomplete knowledge in all areas of life.
You’re poking at something that wasn’t said.
kazinator
4 hours ago
I've never been able to successfully debug anything with a conditional breakpoint or watch, in spite of knowing about these things and trying.
(Well, other than my own conditional breakpoint features built into the code, doing things like programmatically trigger a breakpoint whenever an object with a specific address (that being settable in the debugger interactively) passes certain points in the garbage collector.)
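A hedged Python rendering of that idea (the original is presumably native code; `WATCH_ADDR` and `gc_checkpoint` are made-up names): a programmatic breakpoint that fires only when one specific object address, settable interactively, passes a checkpoint.

```python
# Conditional breakpoint built into the code rather than the debugger.
WATCH_ADDR = None  # set interactively from the debugger or a REPL

def gc_checkpoint(obj):
    """Drop into the debugger only when the watched object passes through."""
    if WATCH_ADDR is not None and id(obj) == WATCH_ADDR:
        breakpoint()  # enters pdb, only for the one object under suspicion
```

With `WATCH_ADDR` unset the checkpoint is a cheap no-op, so it can stay in a hot path (like a collector loop) permanently.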
dgfitz
41 minutes ago
I work on a codebase daily that has multiple threads per app, and two event loops.
We have successfully sorted out how to manage both loops, and set effective breakpoints to debug efficiently. We also log extensively.
I know it’s possible because I do it every day.
llm_nerd
4 hours ago
I said such users often have incomplete knowledge. Are there exceptions? Sure. Of course there are.
"Those are all things we had many decades ago"
I didn't claim this is some new invention, though. Though as someone who has been a heavy user of debuggers for DECADES, debuggers have dramatically improved in usability and scenarios where they are useful.
"So please refrain from calling people with different preferences uneducated."
But...I didn't. In fact I specifically noted that graduates of excellent CS programs often haven't experienced how great the debuggers in the platforms they target are.
We all have incomplete knowledge about a lot of things.
_0ffh
2 hours ago
Yeah, it's still the same point of "if only they know what I know". And no need to shout, so have I.
llm_trw
4 hours ago
It got put in VS Code so mediocre developers can finally use it. Like all things, they cargo-cult it because they don't know how to use it properly.
PittleyDunkin
4 hours ago
That's a hell of an assumption to make and I don't quite understand your reasoning. Debuggers are complex, true, and often people don't understand their potential. People, however, approach learning code and associated tools from many different directions, backgrounds, assumptions, and biases. If I were to read beyond your words and guess why you're so emphatic, I sense that you're coming from a distinct background (I won't bother guessing) where this is either obscured from you or where you've been allowed to forget.
IshKebab
4 hours ago
Comparatively modern. Printf debugging is clearly a less modern tool than a fully integrated debugger, even though as you say those are decades old.
gwervc
4 hours ago
I very much agree with this view. Print debugging is still useful in some cases, for example in game programming, where the state of a lot of objects changes rapidly and debugging a single update isn't enough to reproduce the situation.
HPsquared
5 hours ago
Similar vibe to showing office workers how to use Vlookup in Excel. Or fields in Word.
IshKebab
4 hours ago
Exactly this. Nobody is looking down on print debugging. Everybody uses it. People are looking down on those that stop at print debugging and never reach for a full debugger.
It's particularly annoying on projects that are set up without considering proper debuggers, because often it's impossible or difficult to use them, e.g. if your program is started via a complicated bash script or makefile rather than directly.
tom_
4 hours ago
If you use Visual Studio on Windows, you have one nice option for multi-process debugging: https://marketplace.visualstudio.com/items?itemName=vsdbgpla... - auto-attach the debugger to child processes as they are invoked. I've also found this good in the past for debugging some kinds of client/server setup by having a little wrapper program that runs the combination of clients and servers required on your local PC.
All the processes end up being debugged simultaneously in the same instance of the debugger, which I've found makes light work of certain types of annoying bug. You might need a mode where the timeouts are disabled, though!
kazinator
4 hours ago
In the embedded world, you use remote debugging (like gdbserver on the target and gdb on a development machine). There are issues like some third party pieces being debugged that are not part of your application, and not built in a way that plays along with your debugging environment. Those pieces may be started not simply by shell scripts, but C code, which hard codes some of their arguments and whatnot. You need networking for remote debugging, but the problem you're debugging might occur before networking is up on the target.
anal_reactor
2 hours ago
The problem with debuggers is that they're scoped to one particular technology, and by the time I learn how to use one, I'm already in a new project, doing new things. Meanwhile, print is almost universal.