bhaney
6 hours ago
> produce full-color images that are equal in quality to those produced by conventional cameras
I was really skeptical of this since the article conveniently doesn't include any photos taken by the nano-camera, but there are examples [1] in the original paper that are pretty impressive.
[1] https://www.nature.com/articles/s41467-021-26443-0/figures/2
roelschroeven
5 hours ago
Those images are certainly impressive, but I don't agree with the statement "equal in quality to those produced by conventional cameras": they're quite obviously lacking in sharpness and color.
neom
4 hours ago
Conventional ultra-thin lens cameras are mostly endoscopes, so it's up against this: https://www.endoscopy-campus.com/wp-content/uploads/Neuroend...
jvanderbot
3 hours ago
Just curious, what am I looking at here?
neom
3 hours ago
My education is on the imaging side, not the medical side, but I believe it's this: https://www.mayoclinic.org/diseases-conditions/neuroendocrin... plus this: https://emedicine.medscape.com/article/176036-overview?form=... It looks like it was shot with this: https://vet-trade.eu/enteroscope/218-olympus-enteroscope-sif...
dylan604
2 hours ago
There's one of those Taboola-type ads going around with a similar image that suggests it's a close-up of belly fat. Given the source and their propensity for using images unrelated to the topic, I'm not sure that's what it really is.
card_zero
4 hours ago
I wonder how they took pictures with four different cameras from the exact same position at the exact same point in time. Maybe the chameleon was staying very still, and maybe the flowers were indoors and that's why they didn't move in the breeze, and they used a special rock-solid mount that kept all four cameras perfectly aligned with microscopic precision. Or maybe these aren't genuine demonstrations, just mock-ups, and they didn't even really have a chameleon.
gcanyon
an hour ago
Given the size of their camera, you could glue it to the center of another camera’s lens with relatively insignificant effect on the larger camera’s performance.
cliffy
3 hours ago
Camera rigs exist for this exact reason.
dylan604
2 hours ago
What happens when you go too far in distrusting what you see/read/hear on the internet? Simple logic gets tossed out like the baby with the bathwater.
Now, here's the rig I'd love to see with this: take a hundred of them and position them like a bug's eye to see what could be done with that. There'd be so much overlapping coverage that 3D would be possible, yet the parallax would be so small that it makes me wonder how much depth would be discernible (rough numbers sketched below).
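For a rough sense of scale: stereo depth error grows as dZ ≈ Z² · δd / (f · B), so a millimeter-scale baseline resolves depth only at very close range. A back-of-envelope sketch, where the baseline, focal length, and matching-noise numbers are illustrative guesses, not values from the paper:

```python
# Back-of-envelope stereo depth resolution for a tiny-baseline "bug eye" rig.
# All parameters are illustrative assumptions, not values from the paper.

def depth_error(depth_m, baseline_m, focal_px, disparity_noise_px=0.1):
    """Approximate stereo depth error: dZ ~ Z^2 * dd / (f * B)."""
    return depth_m**2 * disparity_noise_px / (focal_px * baseline_m)

# Hypothetical rig: cameras 1 mm apart, ~500 px focal length, 0.1 px matching noise.
for z in (0.05, 0.5, 5.0):  # 5 cm, 50 cm, 5 m
    print(f"at {z:4.2f} m: depth error ~ {depth_error(z, 1e-3, 500):.4f} m")
```

At 5 cm the error is around half a millimeter; at 5 m it's on the order of the distance itself, i.e. no usable depth.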
baxtr
4 hours ago
Also interesting: the paper is from 2021.
Intralexical
5 hours ago
> Ultrathin meta-optics utilize subwavelength nano-antennas to modulate incident light with greater design freedom and space-bandwidth product over conventional diffractive optical elements (DOEs).
Is this basically a visible-wavelength beamsteering phased array?
itishappy
3 hours ago
Yup. It's also passive. The nanostructures act like delay lines.
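To make the analogy concrete: focusing a metalens means giving the antenna at radius r a fixed phase of φ(r) = (2π/λ)(f − √(r² + f²)), which is exactly the profile an RF phased array would program into its phase shifters. A minimal sketch, where the design wavelength, focal length, and antenna pitch are illustrative assumptions, not the paper's values:

```python
import numpy as np

# A metalens as a passive phased array: a subwavelength grid of fixed
# "delay lines", each applying the phase needed to focus at focal length f.
lam = 550e-9    # assumed design wavelength (green light)
f = 1e-3        # assumed focal length: 1 mm
pitch = 350e-9  # assumed subwavelength antenna spacing
n = 501         # antennas per side

coords = (np.arange(n) - n // 2) * pitch
x, y = np.meshgrid(coords, coords)
r = np.hypot(x, y)

# Hyperbolic phase profile, wrapped into [0, 2*pi); each entry is the
# delay the corresponding nanostructure must impose.
phase = ((2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2))) % (2 * np.pi)
print(phase.shape)  # (501, 501) delay map
```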
mrec
2 hours ago
Interesting. This idea appears pretty much exactly at the end of Bob Shaw's 1972 SFnal collection Other Days, Other Eyes. The starting premise is the invention of "slow glass" that looks like an irrelevant gimmick but ends up revolutionizing all sorts of things, and the final bits envisage a disturbing surveillance society with these tiny passive cameras spread everywhere.
It's a good read; I don't think the extrapolation of one technical advance has ever been done better.
andrepd
5 hours ago
How does this work? If it's just reconstructing the images with a neural network, à la Samsung pasting a picture of the moon when it detected a white disc in the image, it's not very impressive.
nateroling
5 hours ago
I had the same thought, but it sounds like this operates at a much lower level than that kind of thing:
> Then, a physics-based neural network was used to process the images captured by the meta-optics camera. Because the neural network was trained on metasurface physics, it can remove aberrations produced by the camera.
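The classical analogue of "trained on metasurface physics" is deconvolution with a known point-spread function: if you know exactly how the optics smear each point, you can invert the smearing. A toy Wiener-filter sketch (the Gaussian PSF is a stand-in; this is not the authors' pipeline):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Invert a known blur in the Fourier domain: X = H* Y / (|H|^2 + K)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + noise_power)))

# Toy scene blurred by a Gaussian PSF standing in for metasurface aberrations.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
yy, xx = np.mgrid[-64:64, -64:64]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print(f"mean error: {np.abs(restored - scene).mean():.4f}")
```

The learned version goes further because the real PSF varies across the field of view and with wavelength, but the principle, inverting known physics rather than hallucinating content, is the same.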
Intralexical
5 hours ago
I'd like to see some examples showing how it does when taking a picture of completely random fractal noise. That should show it's not just trained to reconstruct known image patterns.
Generally it's probably wise to be skeptical of anything that appears to get around the diffraction limit.
brookst
4 hours ago
I believe the claim is that the NN is trained to reconstruct pixels, not images. As in so many areas, the diffraction limit is probabilistic, so combining information from multiple overlapping samples with NNs trained on known diffracted -> accurate pairs may well recover information.
You’re right that it might fail on noise with resolution fine enough to break assumptions from the NN training set. But that’s not a super common application for cameras, and traditional cameras have their own limitations.
Not saying we shouldn’t be skeptical, just that there is a plausible mechanism here.
neom
4 hours ago
We've had very good chromatic aberration correction since I got a degree in imaging technology, and that was over 20 years ago, so I'd imagine it's not particularly difficult for whatever flavour of ML you care to name.
Intralexical
35 minutes ago
My concern would be that if it can't produce accurate results on a random noise test, then how do we trust that it actually produces accurate results (as opposed to merely plausible results) on normal images?
Multilevel fractal noise specifically would give an indication of how fine you can go.
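Such a chart is straightforward to generate: noise with a 1/f^β power spectrum has structure at every spatial scale, so the finest scale at which the reconstruction stays accurate marks the camera's real resolution limit. A minimal sketch, with β and size as arbitrary choices:

```python
import numpy as np

def fractal_noise(size=512, beta=1.8, seed=0):
    """Test chart with a 1/f^beta power spectrum: detail at every scale."""
    rng = np.random.default_rng(seed)
    fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size))
    freq = np.hypot(fx, fy)
    freq[0, 0] = 1.0  # avoid division by zero at DC
    spectrum = (rng.standard_normal((size, size)) +
                1j * rng.standard_normal((size, size))) / freq ** (beta / 2)
    img = np.real(np.fft.ifft2(spectrum))
    img -= img.min()
    return img / img.max()  # normalized to [0, 1]

chart = fractal_noise()
print(chart.shape, chart.min(), chart.max())
```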