For a couple of years I had a process running on a PC without ECC that simply filled a large buffer with a bit pattern that was 50% ones and 50% zeros, and then periodically scanned the buffer for changes. It never found a changed bit.
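The core of it was nothing fancier than this (a from-memory sketch, not the original code; the 0xAA pattern gives alternating bits, i.e. half ones and half zeros, and the buffer size and scan interval here are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define PATTERN  0xAA            /* 10101010: 50% ones, 50% zeros */
    #define BUF_SIZE (1UL << 30)     /* 1 GiB -- an arbitrary choice */

    int main(void)
    {
        unsigned char *buf = malloc(BUF_SIZE);
        if (!buf) { perror("malloc"); return 1; }
        memset(buf, PATTERN, BUF_SIZE);
        /* mlock(buf, BUF_SIZE) would keep the buffer resident in DRAM
           instead of letting it get swapped out (needs <sys/mman.h>) */

        for (;;) {
            sleep(3600);             /* rescan once an hour */
            for (size_t i = 0; i < BUF_SIZE; i++) {
                if (buf[i] != PATTERN) {
                    printf("bit flip at offset %zu: 0x%02x\n", i, buf[i]);
                    buf[i] = PATTERN;   /* restore and keep watching */
                }
            }
        }
    }

One caveat: without locking the buffer in memory, pages that get swapped out and back in are re-read from disk rather than from DRAM, which would mask flips.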
I used a 2008 Mac Pro at work and a 2009 Mac Pro at home for several years. They had ECC memory. I would periodically check the RAM status and I never saw anything that said an error had been corrected.
For that big-buffer test I don't remember exactly how large the buffer was, but I do remember that my calculations, based on the error rate data I could find, said that a buffer of that size should have seen a few errors per year.
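The back-of-the-envelope math is just error rate x bits x time. Something like this, where the FIT rate and buffer size are stand-in numbers for illustration, not the figures I actually used (published per-Mbit DRAM rates vary by orders of magnitude between studies):

    #include <stdio.h>

    int main(void)
    {
        /* FIT = failures per 1e9 device-hours. This per-Mbit rate is
           only an illustrative assumption, not a measured figure. */
        double fit_per_mbit   = 5.0;
        double buffer_mbits   = 8.0 * 1024 * 8;   /* an 8 GiB buffer, say */
        double hours_per_year = 24 * 365;

        double errors_per_year =
            fit_per_mbit * 1e-9 * buffer_mbits * hours_per_year;
        printf("expected errors/year: %.2f\n", errors_per_year);
        return 0;
    }

With those made-up numbers you'd expect roughly 3 errors a year; at a rate 10x lower, the expectation drops well below one per year, and seeing nothing would be unremarkable.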
But I think all the error rate data I found came from servers with a lot of RAM sitting in data centers. I wonder whether that environment is more prone to RAM errors than a typical home computer's.
A buffer like that is much too small to catch any errors, unless you dedicate more than half of your installed memory to it: a random flip only lands inside the buffer in proportion to the fraction of RAM the buffer covers, so e.g. a 1 GB buffer in a 16 GB machine would only ever see about 1 in 16 of the machine's errors.
Typical memory error rates for a computer with only 8 to 32 GB of DRAM might be at most a few errors per year when the modules are new. For some aged memory modules, after several years of use, the error rate can increase a lot and become noticeable. For me this has been the main benefit of ECC: the ability to detect early which memory modules must be replaced to avoid data corruption.
Besides the errors from cosmic radiation, which depend mainly on the altitude of the location, there are also errors caused by electrical noise from the environment. The latter may be more frequent in data centers and in industrial computers.
Memory sizes are constantly growing, as semiconductor feature sizes continue to shrink. It stands to reason that random energetic particles (cosmic-ray neutrons, gamma rays, etc.) have a higher chance of flipping a bit on modern memory than on stuff from 15 years ago.