The snippets are not false, but there's so much context missing that it's easy to make things worse, especially for beginners, who seem to be the target audience.
First, this guide should emphasize the need to measure before doing anything: django-silk, django-debug-toolbar, etc.
Of course, measure after the optimizations too, and measure in production with an APM.
Second, some only work sometimes: select_related / prefetch_related / iterator can produce giant SQL queries with nested joins all over the place and end up exploding RAM usage. They help at first, but soon enough you pay for any missing SQL knowledge or naive relationships (rough sketch at the end of this comment).
Third, caching without taking the context into account will probably lead to data corruption one way or another. Debugging stale cache issues is not fun, since you cannot reproduce them easily.
Fourth, Celery is a whole new world, which requires workers, retries, idempotent logic, etc.
Finally, scaling is also about code: architecture, good practices, basic algorithms, etc.
I'll end by linking to more complete resources:
- https://docs.djangoproject.com/en/5.1/topics/performance/
- https://loadforge.com/guides/the-ultimate-guide-to-django-pe...
- https://medium.com/django-unleashed/django-application-perfo...
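To illustrate the second point, here is a rough sketch (Author/Book are made-up models, not from the article):

```python
# Made-up models, just to show the shapes involved (would live in models.py).
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=200)

class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

# N+1 pattern: one query for the books, then one extra query per book.
for book in Book.objects.all():
    print(book.author.name)

# select_related folds the author into a single JOIN -- usually a win, but
# stacking these across deep or wide relationships is how you get the giant
# nested-join queries mentioned above.
for book in Book.objects.select_related("author"):
    print(book.author.name)

# iterator() streams rows instead of holding the whole result set in memory;
# it helps RAM, but it skips the queryset cache, so iterating twice means
# two queries.
for book in Book.objects.select_related("author").iterator(chunk_size=2000):
    print(book.author.name)
```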
> scaling is also about code
Which is darn hard if you are a beginner in a framework; loops in loops still bite me after reality runs the integration test for me. This is especially true when you try to do a simple thing as a beginner. By scaling I am just talking about normal production, going from 2 developers to a couple of thousand customers.
To my mind it's a part where the Django guide could be expanded a bit, in order to help scaffold a simple but "open to the future" code architecture.
For instance I would warn against fat models and propose a very light "service pattern" architecture, something like the sketch below.
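A rough sketch of what I mean (Order/OrderLine and place_order are made-up names):

```python
# services.py -- plain functions holding the business logic, so models stay
# thin and views stay trivial. All names here are illustrative.
from django.db import transaction

from .models import Order, OrderLine

@transaction.atomic
def place_order(*, customer, items):
    """Create an order and its lines in a single transaction."""
    order = Order.objects.create(customer=customer)
    OrderLine.objects.bulk_create(
        [OrderLine(order=order, product=product, quantity=qty)
         for product, qty in items]
    )
    return order
```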
100%. Always measure before you optimise for performance. Lots of times the “fast” solution is slower.
If you need a fast solution then add an integration test so that the system stays fast.
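For example, Django's assertNumQueries lets you pin a query budget in a test so regressions show up in CI (the "dashboard" URL name is made up):

```python
from django.test import TestCase
from django.urls import reverse

class DashboardQueryBudgetTest(TestCase):
    def test_query_count_stays_flat(self):
        # Fails loudly if someone reintroduces an N+1. The number is whatever
        # you measured after optimising, not a magic constant.
        with self.assertNumQueries(4):
            self.client.get(reverse("dashboard"))
```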
> Third, caching without taking the context into account will probably lead to data corruption one way or another.
One can only hope it's data corruption and not a sensitive data leak.
Probably 80% of notable performance problems I’ve seen in the kinds of systems that things like Django and Ruby get used for have been terrible queries or patterns of use for databases (I’ve seen 1,000x or worse costs for this versus something more-correct) and nearly all of the other 20% has been areas that plainly just needed some pretty straightforward caching.
The nice thing about that is that spotting those, and the basic approach to fixing them, if not the exact implementation details, are cross-platform skills that apply basically anywhere.
I actually can’t recall any other notable performance problems in those sorts of systems, over the years. Those are so common and the fixes so effective I guess the rest has just never rated attention. I’ve seen different problems in long-lived worker processes though (“make it streaming—everything becomes streaming when scale gets big enough” is the usual platform-agnostic magic bullet in those cases)
A bunch of TFA is basically about those things, so I’m not correcting it, more like nodding along.
Oh wait, I just thought of another I’ve seen: serving large files through a scripting language, as in, reading them in and writing them back out with a scripting language. You run into trouble at even modest scale. There’s a magic response header for that: make Nginx or Apache or whatever serve the file for you. It’s a fix that typically means deleting a bunch of code and replacing it with one or two lines. Or else just use S3 and maybe signed URLs like the rest of the world. Problem solved.
> Probably 80% of notable performance problems I’ve seen in the kinds of systems that things like Django and Ruby get used for have been terrible queries or patterns of use for databases (I’ve seen 1,000x or worse costs for this versus something more-correct)
ActiveRecord pattern saves you a few lines of code now, and explodes your foot off later.
I have had to combine files into a zipped file on demand before. It is hard to avoid the inherent slowness of that.
I have Django code which creates a tar file on the fly from a list of requested files and works well. It doesn't use intermediate storage. The tar format can be pretty simple. I got most of the way into implementing an uncompressed zip version, but then I realised that tar was good enough for my site.
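Not my exact code, but the general shape is something like this (paths and names are illustrative; tarfile's "w|" stream mode only ever calls write() on the buffer):

```python
import os
import tarfile
from django.http import StreamingHttpResponse

class _ChunkBuffer:
    """Write-only buffer: tarfile writes into it, the generator drains it."""
    def __init__(self):
        self._chunks = []
    def write(self, data):
        self._chunks.append(data)
        return len(data)
    def drain(self):
        chunks, self._chunks = self._chunks, []
        return b"".join(chunks)

def stream_tar(request):
    # In practice the paths would come from the request / database.
    paths = ["/srv/files/a.txt", "/srv/files/b.txt"]

    def generate():
        buf = _ChunkBuffer()
        # "w|" is tarfile's sequential stream mode: no seeking, no temp file.
        with tarfile.open(fileobj=buf, mode="w|") as tar:
            for path in paths:
                tar.add(path, arcname=os.path.basename(path))
                yield buf.drain()
        yield buf.drain()  # the trailing tar blocks written on close

    response = StreamingHttpResponse(generate(), content_type="application/x-tar")
    response["Content-Disposition"] = 'attachment; filename="files.tar"'
    return response
```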
Mmm. If you had the right library, might be able to stream it as it’s being created which might help at least with perceived performance, but yeah, that’s a fun one.
Interesting, was there a business reason to not do that in the background somewhere?
Yeah, very non-technical users that won't check their email or click on a notification when the zip file is ready for them.
Knowing SQL and how relational databases actually work is one of the best superpowers a backend developer can have.
If you want to go deeper than your database manual, the best place is Andy Pavlo's DB course, freely available on YouTube. I don't write databases, but after watching it I understand trade-offs and performance considerations much better, and feel much more comfortable reading the PostgreSQL manual.
The magic header is probably X-Accel-Redirect
Ah thanks, I thought it was a figure of speech or something :')
Yeah, or the kinda-better-named “x-sendfile” on apache2. Same effect.
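Roughly like this on the nginx side, assuming an `internal` location named /protected/ that maps to the files on disk:

```python
from django.http import HttpResponse

def protected_download(request, name):
    # ...authenticate / authorise access to `name` here...
    response = HttpResponse()
    # Hand the actual file transfer to nginx: it serves the file from the
    # internal /protected/ location, Django never reads the bytes.
    # On Apache with mod_xsendfile the header is X-Sendfile instead.
    response["X-Accel-Redirect"] = f"/protected/{name}"
    response["Content-Disposition"] = f'attachment; filename="{name}"'
    return response
```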
At first I wanted to criticize the post, buuut after finishing reading it I actually liked it. Very concise and practical
ps - I didn’t know about the template “cache” directive
FWIW, I'd advise against template caching. It's awkward to cache bust, and a network round trip to your cache will almost certainly be more expensive than the Python operations to render the template, even with stock Django templating which is slow.
The only place it's possibly worth it is if you do a lot of database queries from your template rendering, and you're therefore caching database results (as rendered text). In that case, it's an easy patch. However, a much better solution is to fetch all database results up front.
In my previous company we had a very significant Django codebase with plenty of templating, and found that using the templating system for (lazy loaded) database queries or caching was more hassle than it was worth and avoided it as much as possible. Treating template rendering as a pure CPU bound function was always better.
It'd be faster to retrieve from a cache than to make a round-trip to a DB to get the data needed for templating.
My point was that you shouldn't be doing DB queries in the template. If you're doing the DB queries before templating then you should also be doing the cache queries before templating too.
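i.e. something along these lines (Product and the cache key are made up):

```python
from django.core.cache import cache
from django.shortcuts import render

from .models import Product  # made-up model

def product_list(request):
    # Fetch (or cache) the data up front, then treat rendering as pure CPU work.
    products = cache.get("product-list")
    if products is None:
        products = list(Product.objects.select_related("category"))
        cache.set("product-list", products, timeout=300)
    return render(request, "shop/product_list.html", {"products": products})
```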
Nice post, thanks for sharing!
It would be nice to include the generated sql queries along with the code samples though. I've been on a similar path recently and being able to see the queries was really helpful (even the ones that failed!).
Don't store secrets in settings.py. Typically you'd inject those from secrets management as environment variables.
And also, when possible, try to use a key manager over environment variables.
Using a library like keyring [1] is a significant step up from a .env file sitting in your dev environment.
In other words:
- Store secrets in settings.py (bad)
- Store secrets in .env file (better)
- Store secrets in OS-level key vault (even better)
When the secrets are in a plaintext .env file, that file can get leaked in many non-obvious ways. Your antivirus uploads a copy, your IT department runs backups, someone on the team clones your git repo to a OneDrive/Dropbox folder and puts the .env file there. Then any of those services that has a leak, or any of the services those services use has a leak (improperly configured S3 bucket, etc), your secrets are leaked.
[1] https://github.com/jaraco/keyring
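A minimal sketch of the environment-variable approach in settings.py, with keyring as the local-dev alternative (variable names like DJANGO_SECRET_KEY are just examples):

```python
# settings.py -- read secrets from the environment instead of hard-coding them.
import os

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # KeyError (fail fast) if unset
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app"),
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ["DB_PASSWORD"],
        "HOST": os.environ.get("DB_HOST", "localhost"),
    }
}

# For local development, a keyring lookup avoids a plaintext .env file entirely:
#   import keyring
#   DB_PASSWORD = keyring.get_password("myapp", "db_password")
```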
The basic outline of this post isn’t bad, the problem is that’s all there is - a basic outline. If you haven’t dealt with these problems before, the checklists are meaningless. If you HAVE dealt with these problems before, the checklists are redundant.
This sort of article seems perfectly poised to be useless to beginners (no context, doesn't tell you how to use the things) and experts (no nuance, just listing basic features) alike. Who is it for? Why does it exist? Why is it posted here?
SEO and marketing to sell their product is the reason it exists.
So I shouldn't have my business logic done in Django templates?