Many of the best cell phone sensors are off-the-shelf Sony sensors that individuals can buy in reasonable quality. The “magic” of cell phone cameras is the combination of these decent sensors, a lot of processing (that you don’t want), and really amazing lenses (that, in a microscope application, you’re replacing with your own; it's better to go microscope optics to focal plane than microscope optics to cell phone optics to focal plane). Certainly at the hobbyist level cell phone cameras are amazing, but I suspect even “advanced hobbyists” or whatever would prefer the same sensor in a C-mount.
A raw sensor is clearly not easy to just hook up to a computer.
For example, look at the top-end Raspberry Pi sensor. It's a pathetic 12MP. That's like a ten-year-old phone or so?
I think the processing is also not to be entirely dismissed. There is frame stacking that extends the dynamic range, plus compression and other complex DSP that is necessary (b/c 50MP of raw pixel data is a ton of data to pull off the sensor). Realistically, you can probably only do some of that in software.
You can do all of these things in software, and it is done. It's important to have control over the process so you get quantitative data at the other end, and not just a pretty picture. Also, noise should not be discounted as a very good reason to use lower-megapixel sensors. If you want a pretty picture, by all means use a cellphone, but you can't really use or trust the result for many scientific purposes.
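For what it's worth, the frame-stacking part really is accessible in open software. Here's a minimal sketch using OpenCV's Mertens exposure fusion; the filenames are placeholders, and it assumes you've already captured a bracketed stack of the same field:

    import cv2
    import numpy as np

    # Hypothetical bracketed captures of the same field at different exposures.
    frames = [cv2.imread(f) for f in ("under.png", "mid.png", "over.png")]

    # Mertens fusion merges the stack without needing exposure metadata.
    fused = cv2.createMergeMertens().process(frames)

    # Output is float roughly in [0, 1]; clip and save as 8-bit.
    cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype("uint8"))

It won't match a phone vendor's tuned pipeline, but you control every step and can keep the raw frames for quantitative work.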
I think this is just not realistic. "Pretty pictures" are actually more important, even in science. The vast majority of the time you're not using pixel values or exact color characteristics in a scientific sense; you just want a clear, high-res image of what you're looking at so that you can ID the pollen, plankton, or whatever it is. The algorithms in phone cameras are some of the most advanced available. Sure, you could in theory reproduce them in software, but realistically there are no open-source codebases that can recreate the same level of dynamic range that a high-end phone company's software stack does.
I take a pic with my iPhone and I can ID the things on my slide much better than with the color-accurate, high-end Olympus scientific sensor. And in the end, that's what matters most.
A 4K TV is about 8.3MPix (3840 × 2160 = 8,294,400). You would need extremely good optics and a very high-resolution display to make full use of a 12MPix camera, let alone more.
(Note I am an OpenFlexure Maintainer)
Camera sensors are very rarely the limiting factor for a microscope unless you are in pretty exotic modes where speed, timing, or low-light conditions are important. The key reason it is often better to use something like a Raspberry Pi camera than a phone is that you know exactly what sensor you have and can design for it. There are also benefits to not having a phone lens in front of the sensor, which would force you to add extra lenses acting as eyepieces to view a virtual image. But using the picamera with either a microscope objective and a tube lens (or, in the low-cost version, just the picamera lens and a spacer), we can get diffraction-limited performance in a really small, light footprint. (More detail on the optics for nerds: https://build.openflexure.org/openflexure-microscope/v7.0.0-... )
However, the camera/sensor isn't the clever bit. The main benefit of OpenFlexure is the automated stage. The range of motion is small and the motion is slow, so it really isn't the right microscope for looking at something like a bug leg. But if you want to take loads of high-resolution images with a high-powered objective and stitch them into a composite image (or take time-lapses, automatically autofocusing regularly), we are considerably smaller, more affordable, and more customisable than commercial alternatives, with lots of options for scripting.
As an example of what is possible, check out this multi-gigapixel composite image of a cervical smear, and the resolution when you zoom in: https://images.openflexure.org/cap_demo/viewer.html
Note, this was collected with an experimental branch of the software (open source, of course). We need to do some tidying and bug fixes before it is ready for release.
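If anyone wants to play with the stitching idea on their own tiles, OpenCV's "scans" mode is a reasonable starting point. A rough sketch (this is not the OpenFlexure pipeline, and the tile filenames are placeholders):

    import cv2

    # Hypothetical tiles captured as the stage steps across the slide.
    tiles = [cv2.imread(f"tile_{i:03d}.png") for i in range(12)]

    # SCANS mode fits an affine model, suited to a flat sample on a moving stage.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, composite = stitcher.stitch(tiles)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("composite.png", composite)
    else:
        print(f"Stitching failed with status {status}")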
I mean... that seems like a very valuable piece of technology, but I feel it ideally would be hardware-agnostic? If you had a piece of software that takes video of a camera passing over a near-static field and generates a composite high-res image, that'd be very useful.
You could then have either an automated stage, or a hand-operated one, or just move your slide by hand under the microscope.
I think the camera can then be an RPi, a phone, or anything else.
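That hardware-agnostic version is roughly doable with off-the-shelf tools. A sketch, assuming you have a video of the slide drifting slowly and steadily under the objective (the filename and sampling rate are made up):

    import cv2

    # Hypothetical video of the slide being moved by hand under the objective.
    cap = cv2.VideoCapture("slide_scan.mp4")
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % 15 == 0:  # sample every 15th frame so neighbours still overlap
            frames.append(frame)
        i += 1
    cap.release()

    # Same SCANS-mode stitcher as in the sketch above.
    status, composite = cv2.Stitcher_create(cv2.Stitcher_SCANS).stitch(frames)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("composite.png", composite)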
People have been mounting off-the-shelf cameras to microscopes since before digital cameras existed.
>But we noticed that if you took a cheap old school microscope and stuck an iPhone on the lens the resulting images were infinitely more crisp vivid and high-res
That's the core reason why the Foldscope is so popular. It really does work well.