In some ways this has been a point of discussion since the earliest days of photography, and it has periodically become controversial, as for instance with the substantial amount of darkroom manipulation that Ansel Adams used on his most famous images, such as Half Dome.
That being said, I always admired his response, which was that he was simply using the tools available to bring the essence of what HE saw to others.
Or for that matter Cartier-Bresson (one of my all-time favorites), he of “decisive moment” fame, never claimed that his images were anything other than a capture of a moment in time, but that for him they best captured what he saw/experienced.
Perhaps what’s different this time is both the hugely larger number of images being captured and the possible unease that what people are trying to capture is being manipulated without a clear understanding of the extent of what’s being done.
And of course there are the purists (the kind who shoot ONLY in RAW mode) who argue that anything else isn’t “faithful” to what they saw, but to me that’s also a weak argument going back to the film days, when something as simple as your choice of film, such as Tri-X black and white or Kodachrome, inherently alters/modifies/enhances the image you are trying to capture.
Interesting read regardless,
The Pixel 8 and the what-is-a-photo apocalypse - The Verge
PS: The use of the term “apocalypse” is nothing but clickbait, which otherwise detracts from a worthwhile discussion.
It’s a complex ‘issue’.
I think some photographers (or ‘photographers’ for some of them) take it too far and produce bad pictures (pictures, not really photos).
But then there’s the fact that most smartphone cameras have a main lens wider than human vision. You also need to take into account that a human can look around and ‘memorise’ the scene as a whole.
Then there are things like exposure that many cameras are worse than humans at dealing with, but some are better… which is better?
Myself, I see it as ‘photos’ that are either straight from the camera, or edited (ideally lightly) to make the image like how it was/would be seen by a human.
And then ‘pictures’ that are the more artistic side. Often heavily edited. Sometimes to try to make the image look the way the person who took it wanted it to look.
I don’t really care for ‘pictures’. If I wanted that, I’d just look at a painting.
I don’t know how you ever get photos of what your eyes see in the first place - pinks that come out orange (sunset); purples turned to blue, or vice versa…I think photography continually fails me (or I it)…
There may be more to that than you realize. There is a decent amount of experimental evidence that there is wide variation in how people experience/perceive color, and that a certain object looking “green” is only understood that way because we’ve been taught that light in the roughly 495–570 nanometer range is “green”, but perceptually my dark green may be your navy blue.
This is at least somewhat supported by, for instance, different people’s color preferences for clothes, cars, etc.
It also may explain why many believe a cooler color balance on their displays is more “true to life”, even though it is not technically accurate, and it in fact takes learned experience to prefer the more “accurate” presentation.
That’s interesting, but wouldn’t that mean when looking at spectrums, people would perceive various colour bands as having different relative widths?
For example, these label markers might be off, if my “green band” extends further into blue. And maybe some people can see into the gray bands, having extended violets and reds.
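For reference, the band boundaries being discussed can be sketched as a simple lookup. The cutoff values below are approximate, conventional textbook figures (an assumption of this sketch, not something anyone in the thread specified), and the whole point of the discussion is that where one person’s “green” ends and “blue” begins may differ perceptually:

```python
# Approximate, conventional wavelength bands (in nanometers) for the
# visible spectrum. These boundaries are rough textbook conventions,
# not perceptual truths -- individual perception may shift them.
BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def color_name(wavelength_nm: float) -> str:
    """Return the conventional color name for a wavelength, or a note
    that it falls outside the visible range."""
    for lo, hi, name in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside visible range"

print(color_name(532))  # typical green laser pointer -> "green"
print(color_name(340))  # ultraviolet -> "outside visible range"
```

Shifting the 495 nm boundary up or down a few nanometers models exactly the kind of person-to-person variation being hypothesized here.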
Yes, that’s the hypothesis, though skewed toward the ultraviolet end of the spectrum, possibly due to shorter cones; e.g., there is some evidence that some people with milder forms of color blindness can also see slightly further toward that end of the spectrum due to having shorter but more numerous cones in the eye.
Again, this is mostly theory, as it’s apparently very hard to confirm conclusively due to other eye structures that also affect color vision, such as the shape and composition of the retina.