You can amortize the write speed significantly by not committing often, either in the sense of a SQL `COMMIT` or in the sense of doing a _synchronous_ `COMMIT`. You could commit every N seconds, say, for some sufficiently large N, or you could commit after N seconds of idle time and no more than M seconds since the last commit. You can also disable `fsync()`, commit often, and re-enable `fsync()` once every N seconds. There are many tactics you can use for data where some loss due to power failure is tolerable.
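A minimal sketch of the "commit every N seconds" tactic, using Python's stdlib `sqlite3` (the filename, table schema, and N value are all illustrative, not from any real system):

```python
import sqlite3
import time

# Commit at most every N seconds instead of per insert, trading durability
# of the last few seconds of data for much higher write throughput.
N = 5  # seconds between synchronous commits; tune for your loss tolerance

conn = sqlite3.connect("logs.db")
conn.execute("CREATE TABLE IF NOT EXISTS log (ts REAL, msg TEXT)")

last_commit = time.monotonic()

def append(msg):
    global last_commit
    conn.execute("INSERT INTO log VALUES (?, ?)", (time.time(), msg))
    if time.monotonic() - last_commit >= N:
        conn.commit()  # one synchronous commit amortizes many inserts
        last_commit = time.monotonic()

for i in range(10000):
    append(f"event {i}")
conn.commit()  # flush whatever is still pending at shutdown
```

The "commit after N seconds of idle time, but no more than M seconds since the last commit" variant is the same loop with a second timer.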
I.e., you can probably get pretty close to the storage device's max sustained write throughput, though with some losses to write amplification, e.g., from the B-tree itself and from any indices you might want to maintain.
Write amplification from the B-tree can be amortized by committing infrequently (which is why I listed that _first_ above). Though there should be little need, because SQLite3 already amortizes B-tree write amplification by using a write-ahead log (WAL), so be sure to enable the WAL for this sort of application.
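Enabling the WAL is a one-line pragma (filename is illustrative); note it's a persistent setting on the database file, so later connections keep it:

```python
import sqlite3

conn = sqlite3.connect("logs.db")
# Switch to write-ahead logging; the pragma returns the mode now in effect.
# This setting is stored in the database file and survives reconnects.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
```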
Write amplification from indices can be amortized by partitioning your tables by time ranges, using a VIEW to unify the partitions, and creating an index on a partition only when it closes to new log entries. This approach makes reads slower when searching newer log entries, but those will probably all fit in memory, so it's not a problem if you have a large enough page cache.
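A sketch of that partition-plus-VIEW scheme (table names, the daily partitioning, and the index are all hypothetical choices for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One partition table per day; names are illustrative.
for day in ("20240101", "20240102"):
    cur.execute(f"CREATE TABLE log_{day} (ts REAL, msg TEXT)")

# Readers query a single VIEW that unifies all partitions.
cur.execute("""
    CREATE VIEW log AS
      SELECT * FROM log_20240101
      UNION ALL
      SELECT * FROM log_20240102
""")

# The older partition is closed to new entries, so index it now; the
# current partition stays index-free to keep the write path cheap.
cur.execute("CREATE INDEX idx_20240101_ts ON log_20240101 (ts)")

# New entries go straight into the open (unindexed) partition.
cur.execute("INSERT INTO log_20240102 VALUES (2.0, 'new entry')")
rows = cur.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

When a day rolls over you'd create the next partition table, recreate the VIEW to include it, and build the index on the partition that just closed.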
Now I've not built anything like this so I can't say for sure, but I suspect that one could get very aggressive with these techniques and reach a sustained write rate of around 75% of the storage device(s)' sustained write rate.
Turning off fsync is pretty dangerous since a crash could corrupt the database. You might think you would just lose a couple seconds of data, but that's only true if writes are applied in order.
E.g., if some data is moved from page A to page B, you normally write B with the new data, fsync, and then write A without the data. Without fsync, you might only write page A, and you would lose that data. This might happen to an internal data structure and corrupt the whole database.
You could also disable sync writes for the WAL but enable them for the main DB.
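In SQLite that split is exactly what `PRAGMA synchronous=NORMAL` does in WAL mode: WAL commits skip the per-transaction fsync, but the database file is still synced at checkpoints, so a power loss can drop recent transactions without corrupting the database (filename illustrative):

```python
import sqlite3

conn = sqlite3.connect("logs.db")
conn.execute("PRAGMA journal_mode=WAL")
# In WAL mode, NORMAL skips the fsync on each WAL commit but still syncs
# before a checkpoint copies pages back into the main database file.
conn.execute("PRAGMA synchronous=NORMAL")
level = conn.execute("PRAGMA synchronous").fetchone()[0]  # 0=OFF 1=NORMAL 2=FULL
```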
I don't think this is going to be an issue as Linux has a built-in rate limiter.
This is a core design challenge for all logging systems. It is why there are mechanisms for intentionally dropping messages to relieve queue pressure, and optimizations around the use of io_uring. Conversely, because logging systems can drop messages, this is one of the primary reasons for "MARK"-type mechanisms (https://lists.debian.org/debian-user/1998/09/msg00915.html).
That's not GP's question. GP wants to know how high the write rate can be regardless of the systemd log rate limiter, likely so as to be able to increase that rate limit!