barishnamazov
13 hours ago
I love posts that peel back the abstraction layer of "images." It really highlights that modern photography is just signal processing with better marketing.
A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data. In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.
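If you want to play with that idea, here's a toy bilinear demosaic (nothing like what dcraw or a real ISP actually does, and the function name is just for illustration) that at least makes the 2:1 green sampling of an RGGB mosaic easy to see:

    # Toy sketch: bilinear demosaicing of an RGGB mosaic via normalized
    # convolution. Real pipelines are far more sophisticated, but note that
    # green starts with twice as many samples as red or blue.
    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_rggb(raw):
        """raw: 2D float array holding an RGGB Bayer mosaic -> HxWx3 RGB."""
        h, w = raw.shape
        r = np.zeros((h, w)); r[0::2, 0::2] = 1                      # red sites
        g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1   # green: half of all sites
        b = np.zeros((h, w)); b[1::2, 1::2] = 1                      # blue sites

        kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
        rgb = np.empty((h, w, 3))
        for i, mask in enumerate((r, g, b)):
            # Weighted sum of the known neighbours, divided by the sum of
            # weights at known sites, i.e. plain bilinear interpolation.
            num = convolve(raw * mask, kernel, mode="mirror")
            den = convolve(mask, kernel, mode="mirror")
            rgb[..., i] = num / den
        return rgb
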
Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.
shagie
11 hours ago
> A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data.
From the classic file format "ppm" (portable pixel map) the ppm to pgm (portable grayscale map) man page:
https://linux.die.net/man/1/ppmtopgm
The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.
You'll note the relatively high value of green there, making up nearly 60% of the luminosity of the resulting grayscale image.
I also love the quote in there...
Quote
Cold-hearted orb that rules the night
Removes the colors from our sight
Red is gray, and yellow white
But we decide which is right
And which is a quantization error.
(context for the original - https://www.youtube.com/watch?v=VNC54BKv3mc )
boltzmann-brain
9 hours ago
Funnily enough that's not the only mistake he made in that article. His final image is noticeably different from the camera's output image because he rescaled the values in the first step. That's why the dark areas look so crushed, eg around the firewood carrier on the lower left or around the cat, and similarly with highlights, e.g. the specular highlights on the ornaments.
After that, the next most important problem is the fact he operates in the wrong color space, where he's boosting raw RGB channels rather than luminance. That means that some objects appear much too saturated.
So his photo isn't "unprocessed", it's just incorrectly processed.
tpmoney
6 hours ago
I didn’t read the article as implying that the final image the author arrived at was “unprocessed”. The point seemed to be that the first image was “unprocessed” but that the “unprocessed” image isn’t useful as a “photo”. You only get a proper “picture” of something after you do quite a bit of processing.
integralid
6 hours ago
Definitely what the author means:
>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” then the original: they are different renditions of the same data.
viraptor
5 hours ago
That's not how I read it. As in, this is an incidental comment. But the unprocessed version is the raw values from the sensors visible in the first picture, the processed are both the camera photo and his attempt at the end.
eloisius
an hour ago
This whole post read like an in-depth response to people who claim things like “I don’t do any processing to my photos” or feel some kind of purist shame about doing so. It’s a weird chip some amateur photographers have on their shoulders, but even pros “process” their photos and have done so all the way back to the beginning of photography.
svara
an hour ago
But mapping raw values to screen pixel brightness already entails an implicit transform, so arguably there is no such thing as an unprocessed photo (that you can look at).
Conversely the output of standard transforms applied to a raw Bayer sensor output might reasonably be called the "unprocessed image", since that is what the intended output of the measurement device is.
akx
5 hours ago
If someone's curious about those particular constants, they're the PAL Y' matrix coefficients: https://en.wikipedia.org/wiki/Y%E2%80%B2UV#SDTV_with_BT.470
skrebbel
2 hours ago
> The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.
Seriously. We can trust linux man pages to use the same 1-letter variable name for 2 different things in a tiny formula, can't we?
liampulles
4 hours ago
The bit about the green over-representation in camera color filters is partially correct. Human color sensitivity varies a lot from individual to individual (and not just amongst individuals with color blindness), but general statistics indicate we are most sensitive to red light.
The main reason is that green does indeed overwhelmingly contribute to perceptual luminance (over 70% in sRGB once gamma corrected: https://www.w3.org/TR/WCAG20/#relativeluminancedef) and modern demosaicking algorithms will rely on both derived luminance and chroma information to get a good result (and increasingly spatial information, e.g. "is this region of the image a vertical edge").
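For reference, the relative luminance definition from that WCAG link works out to roughly this (a small sketch; the function names are just for illustration):

    # WCAG 2.0 relative luminance, per the linked definition: linearize the
    # sRGB channels first, then weight them. Green alone contributes ~71%.
    def srgb_to_linear(c):
        # c is a channel value in the range 0..1
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(r, g, b):
        rl, gl, bl = (srgb_to_linear(x) for x in (r, g, b))
        return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl

    # Pure green is far "brighter" than pure red or pure blue:
    # relative_luminance(0, 1, 0) ~= 0.715 vs. ~0.213 (red) and ~0.072 (blue).
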
I believe small neural networks are the current state of the art (e.g. trained to reverse a 16x16 color filter pattern for the given camera). What is currently in use by modern digital cameras is all trade secret stuff.
NooneAtAll3
3 hours ago
> we are most sensitive to red light
> green does indeed overwhelmingly contribute to perceptual luminance
so... if luminance contribution is different from "sensitivity" to you - what do you imply by sensitivity?
liampulles
2 hours ago
Upon further reading, I think I am wrong here. My confusion was that I read that over 60% of the cones in one's eye are "red" cones (which is a bad generalization), and there is more nuance here.
Given equal power red, blue, or green light hitting our eyes, humans tend to rate green "brighter" in pairwise comparative surveys. That is why it is predominant in a perceptual luminance calculation converting from RGB.
Though there are many more L-cones (which react most strongly to "yellow" light, not "red"; also "many more" varies across individuals) than M-cones (which react most strongly to a "greenish cyan"), the combination of these two cone types (which make up ~95% of the cones in the eye) means that we are able to sense green light much more efficiently than other wavelengths. S-cones (which react most strongly to "purple") are very sparse.
devsda
2 hours ago
Is it related to the fact that monkeys/humans evolved around dense green forests?
frumiousirc
35 minutes ago
Well, plants and eyes long predate apes.
Water is most transparent in the middle of the "visible" spectrum (green). It absorbs red and scatters blue. The atmosphere has a lot of water as does, of course, the ocean which was the birth place of plants and eyeballs.
It would be natural for both plants and eyes to evolve to exploit the fact that there is a green notch in the water transparency curve.
Edit: after scrolling, I find more discussion on this below.
delecti
13 hours ago
I have a related anecdote.
When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.
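For the curious, the difference was roughly this (a toy sketch, not the actual ad-generation code or the exact FFmpeg invocation):

    # Toy sketch of the mistake vs. the fix: naive channel averaging vs.
    # Rec. 601 luma weights (the same ones quoted from the ppmtopgm man
    # page above), quantized to the 16 gray levels of an e-ink panel.
    import numpy as np

    def gray_naive(rgb):   # the "just average the channels" version
        return rgb.mean(axis=-1)

    def gray_luma(rgb):    # weighted luma, much closer to perceived brightness
        return rgb @ np.array([0.299, 0.587, 0.114])

    def to_4bit(gray):     # map 0..255 onto 16 gray levels
        return np.round(gray / 255.0 * 15).astype(np.uint8)
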
isoprophlex
5 hours ago
I wouldn't worry about it too much, looking at ads is always a shitty experience. Correctly grayscaled or not.
barishnamazov
12 hours ago
I don't think Kindle ads were available in my region in 2015 because I don't remember seeing these back then, but you're a lucky one to fix this classic mistake :-)
I remember trying out some of the home-made methods while I was implementing a creative work section for a school assignment. It’s surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent in a 4-bit display.
kccqzy
12 hours ago
I remember using some photo editing software (Aperture I think) that would allow you to customize the different coefficients and there were even presets that give different names to different coefficients. Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.
acomjean
8 hours ago
>Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.
I went to a photoshop conference. There was a session on converting color to black and white. Basically at the end the presenter said you try a bunch of ways and pick the one that looks best.
(people there were really looking for the “one true way”)
I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter.
reactordev
12 hours ago
If you really want that old school NTSC look: 0.3R + 0.59G + 0.11B
These are the coefficients I use regularly.
ycombiredd
11 hours ago
Interesting that the "NTSC" look you describe is essentially rounded versions of the coefficients quoted in the comment mentioning ppm2pgm. I don't know the lineage of the values you used of course, but I found it interesting nonetheless. I imagine we'll never know, but it would be cool to be able to trace the path that led to their formula, as well as the path to you arriving at yours.
zinekeller
11 hours ago
The NTSC color coefficients are the grandfather of all luminance coefficients.
It had to be precisely defined because of the requirements of backwards-compatible color transmission (YIQ is the common abbreviation for the NTSC color space, I being ~reddish and Q being ~blueish). Basically, they treated B&W (technically monochrome) pictures the way B&W film and video tubes treated them: great in green, average in red, and poor in blue.
A bit unrelated: pre-color transition, the makeup used was actually slightly greenish too (which appears nicely in monochrome).
ycombiredd
10 hours ago
Cool. I could have been clearer in my post; as I understand it actual NTSC circuitry used different coefficients for RGBx and RGBy values, and I didn't take time to look up the official standard. My specific pondering was based on an assumption that neither the ppm2pgm formula nor the parent's "NTSC" formula were exact equivalents to NTSC, and my "ADHD" thoughts wondered about the provenance of how each poster came to use their respective approximations. While I write this, I realize that my actual ponderings are less interesting than the responses generated because of them, so thanks everyone for your insightful responses.
reactordev
9 hours ago
There are no stupid questions, only stupid answers. It’s questions that help us understand and knowledge is power.
shagie
10 hours ago
Regarding "the grandfather of all luminance coefficients" ... https://www.earlytelevision.org/pdf/ntsc_signal_specificatio... from 1953.
Page 5 has:
Eq' = 0.41 (Eb' - Ey') + 0.48 (Er' - Ey')
Ei' = -0.27(Eb' - Ey') + 0.74 (Er' - Ey')
Ey' = 0.30Er' + 0.59Eg' + 0.11Eb'
The last equation has those coefficients.
zinekeller
10 hours ago
I was actually researching why PAL YUV has the same(-ish) coefficients, while forgetting that PAL is essentially a refinement of the NTSC color standard (PAL stands for phase-alternating line, which solved much of the color drift NTSC suffered from early in its life).
adrian_b
2 hours ago
It is the choice of the 3 primary colors and of the white point which determines the coefficients.
PAL and SECAM use different color primaries than the original NTSC, and a different white, which leads to different coefficients.
However, the original color primaries and white used by NTSC had become obsolete very quickly so they no longer corresponded with what the TV sets could actually reproduce.
Eventually even for NTSC a set of primary colors was used that was close to that of PAL/SECAM, which was much later standardized by SMPTE in 1987. The NTSC broadcast signal continued to use the original formula, for backwards compatibility, but the equipment processed the colors according to the updated primaries.
In 1990, Rec. 709 standardized a set of primaries intermediate between those of PAL/SECAM and of SMPTE, which was later also adopted by sRGB.
reactordev
11 hours ago
I’m sure it has its roots in amiga or TV broadcasting. ppm2pgm is old school too so we all tended to use the same defaults.
Like q3_sqrt
brookst
11 hours ago
Even old school chemical films were the same thing, just in a different domain.
There is no such thing as “unprocessed” data, at least that we can perceive.
adrian_b
an hour ago
True, but there may be different intentions behind the processing.
Sometimes the processing has only the goal of compensating for the defects of the image sensor and of the optical elements, in order to obtain the most accurate information about the light originally coming from the scene.
Other times the goal of the processing is just to obtain an image that appears best to the photographer, for some reason.
For casual photographers, the latter goal is typical, but in scientific or technical applications the former goal is frequently encountered.
Ideally, a "raw" image format is one where the differences between it and the original image are well characterized and there are no additional unknown image changes done for an "artistic" effect, in order to allow further processing when having either one of the previously enumerated goals.
kdazzle
8 hours ago
Exactly - film photographers heavily process(ed) their images from the film processing through to the print. Ansel Adams wrote a few books on the topic and they’re great reads.
And different films and photo papers can have totally different looks, defined by the chemistry of the manufacturer and however _they_ want things to look.
acomjean
7 hours ago
Excepting slide photos: no real adjustment is possible once taken (a more difficult medium than negative film, which you can adjust a little when printing).
You’re right about Ansel Adams. He “dodged and burned” extensively (lightened and darkened areas when printing). Photoshop kept the dodge and burn names on some tools for a while.
https://m.youtube.com/watch?v=IoCtni-WWVs
When we printed for our college paper we had a dial that could adjust the printed contrast of our black and white “multigrade” paper a bit (it added red light). People would mess with the processing to get different results too (cold / sepia toned). It was hard to get exactly what you wanted, and I kind of see why digital took over.
mradalbert
3 hours ago
Also worth noting that manufacturers advertise photodiode count as the sensor resolution. So if you have a 12 Mp sensor, then your green resolution is 6 Mp and blue and red are 3 Mp each.
yzydserd
an hour ago
Another tangent. Bryce Bayer is the dad of a HN poster. https://news.ycombinator.com/item?id=12111995 https://news.ycombinator.com/item?id=36043826
JumpCrisscross
7 hours ago
> modern photography is just signal processing with better marketing
I pass on a gift I learned of from HN: Susan Sunday’s “On Photography”.
raphman
5 hours ago
Thanks! First hit online: https://www.lab404.com/3741/readings/sontag.pdf
Out of curiosity: what led you to write "Susan Sunday" instead of "Susan Sontag"? (for other readers: "Sonntag" is German for "Sunday")
integralid
6 hours ago
And this is just what happens for a single frame. It doesn't even touch computational photography[1].
[1] https://dpreview.com/articles/9828658229/computational-photo...
cataflam
2 hours ago
Great series of articles!
mwambua
7 hours ago
> The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data
How does this affect luminance perception for deuteranopes? (Since their color blindness is caused by a deficiency of the cones that detect green wavelengths)
fleabitdev
3 hours ago
Protanopia and protanomaly shift luminance perception away from the longest wavelengths of visible light, which causes highly-saturated red colours to appear dark or black. Deuteranopia and deuteranomaly don't have this effect. [1]
Blue cones make little or no contribution to luminance. Red cones are sensitive across the full spectrum of visible light, but green cones have no sensitivity to the longest wavelengths [2]. Since protans don't have the "hardware" to sense long wavelengths, it's inevitable that they'd have unusual luminance perception.
I'm not sure why deutans have such a normal luminous efficiency curve (and I can't find anything in a quick literature search), but it must involve the blue cones, because there's no way to produce that curve from the red-cone response alone.
[1]: https://en.wikipedia.org/wiki/Luminous_efficiency_function#C...
[2]: https://commons.wikimedia.org/wiki/File:Cone-fundamentals-wi...
doubletwoyou
5 hours ago
The cones are the colour sensitive portion of the retina, but only make up a small percent of all the light detecting cells. The rods (more or less the brightness detecting cells) would still function in a deuteranopic person, so their luminance perception would basically be unaffected.
Also there’s something to be said about the fact that the eye is a squishy analog device, and so even if the medium wavelengths cones are deficient, long wavelength cones (red-ish) have overlap in their light sensitivities along with medium cones so…
fleabitdev
2 hours ago
The rods are only active in low-light conditions; they're fully active under the moon and stars, or partially active under a dim street light. Under normal lighting conditions, every rod is fully saturated, so they make no contribution to vision. (Some recent papers have pushed back against this orthodox model of rods and cones, but it's good enough for practical use.)
This assumption that rods are "the luminance cells" is an easy mistake to make. It's particularly annoying that the rods have a sensitivity peak between the blue and green cones [1], so it feels like they should contribute to colour perception, but they just don't.
[1]: https://en.wikipedia.org/wiki/Rod_cell#/media/File:Cone-abso...
volemo
5 hours ago
It’s not that their M-cones (middle, i.e. green) don’t work at all, their M-cones responsivity curve is just shifted to be less distinguishable from their L-cones curve, so they effectively have double (or more) the “red sensors”.
formerly_proven
an hour ago
> It really highlights that modern photography is just signal processing with better marketing.
Displaying linear sensor data on a logarithmic output device to show how heavily images are processed is an (often featured) sleight of hand, however.
f1shy
5 hours ago
> The human eye is most sensitive to green light,
This argument is very confusing: if it is most sensitive, less intensity/area should be necessary, not more.
matsemann
31 minutes ago
Yeah, was thinking the same. If we're more sensitive, why do we need double the sensors? Just have 1:1:1, and we would see more of the green anyway. Won't it be too much if we do 1:2:1, when we're already more sensitive to green?
Lvl999Noob
4 hours ago
Since the human eye is most sensitive to green, it will find errors in the green channel much more easily than in the others. This is why you need _more_ green data.
gudzpoz
5 hours ago
Note that there are two measurement systems involved: first the camera, and then the human eyes. Your reasoning could be correct if there were only one: "the sensor is most sensitive to green light, so less sensor area is needed".
But it is not the case, we are first measuring with cameras, and then presenting the image to human eyes. Being more sensitive to a colour means that the same measurement error will lead to more observable artifacts. So to maximize visual authenticity, the best we can do is to make our cameras as sensitive to green light (relatively) as human eyes.
jamilton
8 hours ago
Why that ratio in particular? I wonder if there’s a more complex ratio that could be better.
shiandow
2 hours ago
This ratio allows for a relatively simple 2x2 repeating pattern. That makes interpolating the values immensely simpler.
Also you don't want the red and blue to be too far apart, reconstructing the colour signal is difficult enough as it is. Moire effects are only going to get worse if you use an even sparser resolution.
dheera
12 hours ago
This is also why I absolutely hate, hate, hate it when people ask me whether I "edited" a photo or whether a photo is "original", as if trying to explain away nice-looking images as fake.
The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, calibrations, is not a crime.
beezle
8 hours ago
As a mostly amateur photographer, it doesn't bother me if people ask that question. While I understand the point that the camera itself may be making some 'editing' type decisions on the data first, a) in theory each camera maker has attempted to calibrate the output to some standard, and b) the public would expect two photos taken at the same time with the same model camera to look identical. That differs greatly from what often can happen in "post production" editing - you'll never find two that are identical.
vladvasiliu
an hour ago
> public would expect two photos taken at same time with same model camera should look identical
But this is wrong. My not-too-exotic 9-year-old camera has a bunch of settings which affect the resulting image quite a bit. Without going into "picture styles", or "recipes", or whatever they're called these days, I can alter saturation, contrast, and white balance (I can even tell it to add a fixed alteration to the auto WB and tell it to "keep warm colors"). And all these settings will alter how the in-camera produced JPEG will look, no external editing required at all.
So if two people are sitting in the same spot with the same camera, who's to say they both set them up identically? And if they didn't, which produces the "non-processed" one?
I think the point is that the public doesn't really understand how these things work. Even without going to the lengths described by another commenter (local adjust so that there appears to be a ray of light in that particular spot, remove things, etc), just playing with the curves will make people think "it's processed". And what I described above is precisely what the camera itself does. So why is there a difference if I do it manually after the fact or if I tell the camera to do it for me?
integralid
6 hours ago
You and other responders to GP disagree with TFA:
>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” then the original: they are different renditions of the same data.
dsego
3 hours ago
I don't think it's the same. For me personally, I don't like heavily processed images. Not in the sense that they need processing to look decent or to convey the perception of what it was like in real life, but in the sense that the edits change the reality in a significant way, so it affects the mood and the experience. For example, you take a photo on a drab cloudy day, but then edit the white balance to make it seem like golden hour, or brighten a part to make it seem like a ray of light was hitting that spot. Adjusting the exposure, touching up slightly, that's all fine, depending on what you are trying to achieve of course. But what I see on instagram or shorts these days is people comparing their raws and edited photos, and without the edits the composition and subject would be just mediocre and uninteresting.
gorgolo
41 minutes ago
The “raw” and unedited photo can be just as or even more unrealistic than the edited one though.
Photographs can drop a lot of the perspective, feeling and colour you experience when you’re there. When you take a picture of a slope on a mountain (on a ski piste, for example), it always looks much less impressive and steep on a phone camera. Same with colours. You can be watching an amazing scene in the mountains, but when you take a photo with most cameras, the colours are more dull and it just looks flatter. If a filter enhances it and makes it feel as vibrant as the real life view, I’d argue you are making it more realistic.
The main message I get from OP’s post is precisely that there is no “real unfiltered / unedited image”: you’re always imperfectly capturing something your eyes see, but with a different balance of colours, a different detector sensitivity to a real eye etc… and some degree of postprocessing is always required to make it match what you see in real life.
foldr
42 minutes ago
This is nothing new. For example, Ansel Adams’s famous Moonrise, Hernandez photo required extensive darkroom manipulations to achieve the intended effect:
https://www.winecountry.camera/blog/2021/11/1/moonrise-80-ye...
Most great photos have mediocre and uninteresting subjects. It’s all in the decisions the photographer makes about how to render the final image.
gorgolo
3 hours ago
I noticed this a lot when taking pictures in the mountains.
I used to have a high resolution camera on a cheaper phone and then later switched to an iPhone. The latter produced much nicer pictures; my old phone just produced very flat-looking pictures.
People say that the iPhone camera automatically edits the images to look better. And in a way I notice that too. But that’s the wrong way of looking at it; the more-edited picture from the iPhone actually corresponds more to my perception when I’m actually looking at the scene. The white of the snow and glaciers and the deep blue sky really does look amazing in real life, and when my old phone captured it as a flat and disappointing looking photo with less postprocessing than an iPhone, it genuinely failed to capture what I can see with my eyes. And the more vibrant post processed colours of an iPhone really do look more like what I think I’m looking at.
to11mtm
12 hours ago
JPEG with OOC processing is different from JPEG OOPC (out-of-phone-camera) processing. Thank Samsung for forcing the need to differentiate.
seba_dos1
12 hours ago
I wrote the raw Bayer to JPEG pipeline used by the phone I write this comment on. The choices on how to interpret the data are mine. Can I tweak these afterwards? :)
Uncorrelated
5 hours ago
I found the article you wrote on processing Librem 5 photos:
https://puri.sm/posts/librem-5-photo-processing-tutorial/
Which is a pleasant read, and I like the pictures. Has the Librem 5's automatic JPEG output improved since you wrote the post about photography in Croatia (https://dosowisko.net/l5/photos/)?
seba_dos1
15 minutes ago
Yes, these are quite old. I've written a GLSL shader that acts as a simple ISP capable of real-time video processing and described it in detail here: https://source.puri.sm/-/snippets/1223
It's still pretty basic compared to hardware accelerated state-of-the-art, but I think it produces decent output in a fraction of a second on the device itself, which isn't exactly a powerhouse: https://social.librem.one/@dos/115091388610379313
Before that, I had an app for offline processing that was calling darktable-cli on the phone, but it took about 30 seconds to process a single photo with it :)
to11mtm
11 hours ago
I mean it depends, does your Bayer-to-JPEG pipeline try to detect things like 'this is a zoomed in picture of the moon' and then do auto-fixup to put a perfect moon image there? That's why there's some need to differentiate between SOOC's now, because Samsung did that.
I know my Sony gear can't call out to AI because the WIFI sucks like every other Sony product and barely works inside my house, but also I know the first ILC manufacturer that tries to put AI right into RAW files is probably the first to leave part of the photography market.
That said I'm a purist to the point where I always offer RAWs for my work [0] and don't do any photoshop/etc. D/A, horizon, bright adjust/crop to taste.
Where phones can possibly do better is the smaller size and true MP structure of a cell phone camera sensor, which makes it easier to handle things like motion blur and rolling shutter.
But I have yet to see anything that gets closer to an ILC for true quality than the decade+ old PureView cameras on Nokia phones, probably partially because they often had large enough sensors.
There's only so much computation can do to simulate true physics.
[0] - I've found people -like- that. TBH, it helps that I tend to work cheap or for barter type jobs in that scene, however it winds up being something where I've gotten repeat work because they found me and a 'photoshop person' was cheaper than getting an AIO pro.
fc417fc802
10 hours ago
There's a difference between an unbiased (roughly speaking) pipeline and what (for example) JBIG2 did. The latter counts as "editing" and "fake" as far as I'm concerned. It may not be a crime but at least personally I think it's inherently dishonest to attempt to play such things off as "original".
And then there's all the nonsense BigTech enables out of the box today with automated AI touch ups. That definitely qualifies as fakery although the end result may be visually pleasing and some people might find it desirable.
make3
11 hours ago
it's not a crime, but applying post processing in an overly generous way that goes a lot further than replicating what a human sees does take away from what makes pictures interesting vs other mediums imho: that they're a genuine representation of something that actually happened.
if you take that away, a picture is not very interesting. it's hyperrealistic, so not super creative a lot of the time (compared to eg paintings), & it doesn't even require the mastery of other mediums to achieve hyperrealism
Eisenstein
11 hours ago
Do you also want the IR light to be in there? That would make it more of 'genuine representation'.
BenjiWiebe
10 hours ago
Wouldn't be a genuine version of what my eyes would've seen, had I been the one looking instead of the camera.
I can't see infrared.
ssl-3
8 hours ago
Perhaps interestingly, many/most digital cameras are sensitive to IR and can record, for example, the LEDs of an infrared TV remote.
But they don't see it as IR. Instead, this infrared information just kind of irrevocably leaks into the RGB channels that we do perceive. With the unmodified camera on my Samsung phone, IR shows up kind of purple-ish. Which is... well... it's fake. Making invisible IR into visible purple is an artificially-produced artifact of the process that results in me being able to see things that are normally ~impossible for me to observe with my eyeballs.
When you generate your own "genuine" images using your digital camera(s), do you use an external IR filter? Or are you satisfied with knowing that the results are fake?
lefra
4 hours ago
Silicon sensors (which is what you'll get in all visible-light cameras as far as I know) are all very sensitive to near-IR. Their peak sensitivity is around 900nm. The difference between cameras that can see or not see IR is the quality of their anti-IR filter.
The green filter in your Samsung phone's Bayer matrix probably blocks IR better than the blue and red ones do.
Here's a random spectral sensitivity for a silicon sensor:
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRkffHX...
Eisenstein
10 hours ago
But the camera is trying to emulate how it would look if your eyes were seeing it. In order for it to be 'genuine' you would need not only the camera to be genuine, but also the OS, the video driver, the viewing app, the display and the image format/compression. They all do things to the image that are not genuine.
make3
7 hours ago
"of what I would've seen"
thousand_nights
12 hours ago
the bayer pattern is one of those things that makes me irrationally angry, in the true sense, based on my ignorance of the subject
what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)
MyOutfitIsVague
11 hours ago
Green is in the center of the visible spectrum of light (notice the G in the middle of ROYGBIV), so evolution should theoretically optimize for green light absorption. An interesting article on why plants typically reflect that wavelength and absorb the others: https://en.wikipedia.org/wiki/Purple_Earth_hypothesis
bmitc
9 hours ago
Green is the highest energy light emitted by our sun, from any part of the entire light spectrum, which is why green appears in the middle of the visible spectrum. The visible spectrum basically exists because we "grew up" with a sun that blasts that frequency range more than any other part of the light spectrum.
cycomanic
3 hours ago
That comment does not make sense. Do you mean the sun emits its peak intensity at green? (I don't believe that is true either, but at least it would be a physically sensical statement.) To clarify why the statement does not make sense: the energy of light is directly proportional to its frequency, so saying that green is the highest energy light the sun emits is saying the sun does not emit any light at frequencies higher than green, i.e. no blue light, no UV... That's obviously not true.
imoverclocked
8 hours ago
I have to wonder what our planet would look like if the spectrum shifts over time. Would plants also shift their reflected light? Would eyes subtly change across species? Of course, there would probably be larger issues at play around having a survivable environment … but still, fun to ponder.
milleramp
11 hours ago
Several reasons:
- Silicon efficiency (QE) peaks in the green.
- The green spectral response curve is close to the luminance curve humans see, like you said.
- Twice the pixels increase the effective resolution in the green/luminance channel; the color channels in YUV contribute almost no details.
Why are YUV or other luminance-chrominance color spaces important for an RGB input? Because many processing steps and encoders work in YUV colorspaces. This wasn't really covered in the article.
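As a rough sketch of that last point (assuming the BT.601 full-range coefficients, not any particular encoder's exact math): convert RGB to Y'CbCr, then 4:2:0-subsample only the chroma planes while luma keeps full resolution.

    # Rough illustration: RGB -> Y'CbCr (assumed BT.601 full-range matrix),
    # then 4:2:0 chroma subsampling. Luma stays at full resolution; each
    # chroma plane drops to half resolution in both directions.
    import numpy as np

    def rgb_to_ycbcr(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.169 * r - 0.331 * g + 0.500 * b
        cr =  0.500 * r - 0.419 * g - 0.081 * b
        return y, cb, cr

    def subsample_420(plane):
        # Average each 2x2 block of a chroma plane.
        h, w = plane.shape
        return plane[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))
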
shiandow
2 hours ago
You think that's bad? Imagine finding out that all video still encodes colour at half resolution simply because that is how analog tv worked.
Renaud
8 hours ago
Not sure why it would evoke such strong sentiments, but if you don’t like the Bayer filter, know that some true monochrome cameras don’t use it and make every sensor pixel available to the final image.
For instance, the Leica M series have specific monochrome versions with huge resolutions and better monochrome rendering.
You can also modify some cameras and remove the filter, but the results usually need processing. A side effect is that the now exposed sensor is more sensitive to both ends of the spectrum.
NetMageSCW
6 hours ago
Not to mention that there are non-Bayer cameras, ranging from the Sigma Foveon and Quattro sensors, which use stacked sensors to separate color entirely differently, to the Fuji EXR and X-Trans sensors.
japanuspus
3 hours ago
If the Bayer pattern makes you angry, I imagine it would really piss you off to realize that the whole concept of encoding an experienced color by a finite number of component colors is fundamentally species-specific and tied to the details of our specific color sensors.
To truly record an appearance without reference to the sensory system of our species, you would need to encode the full electromagnetic spectrum from each point. Even then, you would still need to decide on a cutoff for the spectrum.
...and hope that nobody ever told you about coherence phenomena.
bstsb
12 hours ago
hey, not accusing you of anything (bad assumptions don't lead to a conducive conversation) but did you use AI to write or assist with this comment?
this is totally out of my own self-interest, no problems with its content
sho_hn
12 hours ago
Upon inspection, the author's personal website used em dashes in 2023. I hope this helped with your witch hunt.
I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.
brookst
11 hours ago
Phew. I have published work with em dashes, bulleted lists, “not just X, but Y” phrasing, and the use of “certainly”, all from the 90’s. Feel sorry for the kids, but I got mine.
qingcharles
4 hours ago
I'm grandfathered in too. RIP the hyphen crew.
mr_toad
10 hours ago
> I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.
At least Robespierre needed two sentences before condemning a man. Now the mob is lynching people on the basis of a single glyph.
bstsb
an hour ago
wasn't talking about the em dashes (i use them myself) but thanks anyway :)
ozim
5 hours ago
I started to use the — dash so that algos skip my writing, thinking it was AI generated.
ekidd
12 hours ago
I have been overusing em dashes and bulleted lists since the actual 80s, I'm sad to say. I spent much of the 90s manually typing "smart" quotes.
I have actually been deliberately modifying my long-time writing style and use of punctuation to look less like an LLM. I'm not sure how I feel about this.
disillusioned
12 hours ago
Alt + 0151, baby! Or... however you do it on MacOS.
But now, likewise, having to bail on emdashes. My last differentiator is that I always close set the emdash—no spaces on either side, whereas ChatGPT typically opens them (AP Style).
piskov
12 hours ago
Just use some typography layout with a separate layer. Eg “right alt” plus “-” for m-dash
Russians have been using this for at least 15 years.
qingcharles
4 hours ago
I'm a savage, I just copy-paste them from Unicode sites.
ksherlock
11 hours ago
On the mac you just type — for an em dash or – for an en dash.
xp84
6 hours ago
Is this a troll?
But anyway, it’s option-hyphen for an en-dash and opt-shift-hyphen for the em-dash.
I also just stopped using them a couple years ago when the meme about AI using them picked up steam.
ajkjk
12 hours ago
found the guy who didn't know about em dashes before this year
also your question implies a bad assumption even if you disclaim it. if you don't want to imply a bad assumption the way to do that is to not say the words, not disclaim them
bstsb
an hour ago
didn't even notice the em dashes to be honest, i noticed the contrast framing in the second paragraph and the "It's impressive how" for its conclusion.
as for the "assumption" bit, yeah fair enough. was just curious of AI usage online, this wasn't meant to be a dig at anyone as i know people use it for translations, cleaning up prose etc
barishnamazov
an hour ago
No offense taken, but realize that a good number of us folks who have learned English as a second language have been taught to write this way (especially in an academic setting). LLMs' writing is like that of people, not the other way around.
reactordev
12 hours ago
The hatred mostly comes from TTS models not properly pausing for them.
“NO EM DASHES” is common system prompt behavior.
xp84
6 hours ago
You know, I didn’t think about that, but you’re right. I have seen so many AI narrations where it reads the dash exactly like a hyphen, actually maybe slightly reducing the inter-word gap. Odd, the kinds of “easy” things such a complicated and advanced system gets wrong.