It does bring up a larger point that I sometimes have to remind myself of as well: these are PERSONAL devices, and you should set them and use them to your preferences, not necessarily to what is accurate/correct. (Especially in my case, when I go into an office and see a display set to maximum brightness and contrast.)
I do, however, believe “accurate” is what the devices should deliver as a starting point, and let the user take it from there.
It’s all in what you’re used to + personal preference. Modern computational photography is much closer to what people see with their eyes. It adds in the dynamic range that film and sensors were always too limited to show. Back in the day, high contrast was unavoidable so it became part of the “art”. It was always an artificial distortion, however.
Sort of agree, but visual perception is a very tricky and also very personal thing, and there are strong elements of bias, plus the human tendency to take whatever you are first primarily exposed to as the “reference” in your brain for what is correct.
I think we’ve touched on this in other threads, but people rarely pick accurate as “best” unless they have taken the time to truly learn and expose themselves to what is accurate.
Audio salespeople learned this a long time ago: the louder of two pairs of speakers was almost always judged by people to sound better.
Or, more modern and relevant to the subject at hand, Samsung figured this out early on with their smartphones, where to this day they still crank up the default saturation. And in a side-by-side glance the S phones looked “better” than the other “droids” of the day. For that matter, they still do that against the Google Pixels, which, whatever else you might say about them, have trended in the last few generations toward pretty “accurate” displays.
I had a family friend who was one of my photo mentors, and who earned that qualification by working as a young man as an assistant for Ansel Adams.
Adams, along with having (IMHO) an unparalleled eye for what makes a beautiful image, was also the absolute master of the darkroom. And his mantra was that he wanted to convey how it “felt” when he captured an image.
So in many cases he was compensating for the technical limitations of the equipment, but he also would “enhance” an image to better convey that feeling, such as increasing the contrast in a stormy sky to amp up the drama of the shot.
And FWIW, we’ve heard from both Google and Apple engineers that some of the decisions made in computational photography, especially with HDR, reflect what has been shown to be more “pleasing” than accurate.
Agreed, though I was mostly writing in reference to contrast, which was due to the lack of dynamic range in photographic technology. It’s an objective fact that human eyes + brain can “see” far more dynamic range (shadows show their detail more, highlights blown out less) than the traditional photography the article writer was extolling.
Yes, definitely, but conversely with displays, for example, which is my home domain so to speak: unless they have been educated, people almost unerringly pick the display with the highest contrast as having the “sharpest” and most “realistic” image…
One reason so many are in love with OLED, IMHO, is its inherent high contrast.
Well, that’s preference and lack of exposure to accurate monitors, as you said. Same as the article writer thinking that impenetrably dark shadows are more “realistic” in photos—it’s what they grew up with.
All that aside, I agree that a modern camera should default to the most objectively accurate photo possible then give the user alternate interpretation settings. Much like how neolithic photographers could opt for Fujifilm or Kodachrome or even various B&W films to obtain the look they sought.
Or rather, not default but have the objectively accurate mode be a (sticky) option itself since as you say, most people like a more vibrant look out of the box.
I always immediately switched my Samsung devices to “natural” colors.
Edit: and I see that my iPhone 13 Pro, iOS 16, is set to “standard” out of the box, which has the dynamic range the article writer dislikes so much. But you can swipe over to the very next options pane for “rich contrast”, which might have made them happy and removed the motivation for the article.
A little bit sidetracking, but this reminds me of the other day when I had a golden sunrise while overhead dark clouds were dropping a fine drizzle. It created a nice rainbow, to which the following top photo didn’t do full justice to my eye (iPhone 13 Pro “standard” shot). Yet when I texted the shot to a friend he immediately responded, “photoshop!” because, “rainbows never show up that well in cameras”. He’s my age and he grew up with rainbow pics like the lower image (which I did “photoshop”, ironically).
That painting is also notoriously difficult to photograph properly with digital. One Nikon engineer told me that part of Nikon’s testing of image sensors included photographs of it, and that they all failed in at least one aspect or another.
As an artist (I paint watercolors), the development of HDR computational photography has been a godsend. I’m always trying to capture reference photos that come at least close to what my eye sees in regard to color, and it’s tough. I’m always tweaking photos just after I take them: to my eye, Apple tends to have them too cool and too saturated, for example, with the shadows too dark. Later, I can decide what I want to alter for effect, but I never liked being stuck with the reference photos dictating how I saw things. And I always disliked having to take two photos with different exposures. Such a pain.
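For anyone curious, the old two-exposure workflow can be sketched as a toy exposure fusion: weight each frame by how well exposed each pixel is (close to mid-gray), then blend. This is a deliberately simplified illustration with made-up sample values, not how any actual phone does it:

```python
import numpy as np

def fuse_exposures(under: np.ndarray, over: np.ndarray) -> np.ndarray:
    """Toy exposure fusion: weight each pixel by how 'well exposed' it is
    (close to mid-gray 0.5), then blend. Inputs are floats in [0, 1]."""
    def weight(img):
        # Gaussian-style weight peaking at mid-gray
        return np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))

    w_u, w_o = weight(under), weight(over)
    return (w_u * under + w_o * over) / (w_u + w_o)

# Tiny made-up 1-D "images": shadows crushed in one frame,
# highlights blown in the other
under = np.array([0.00, 0.10, 0.45, 0.80])
over  = np.array([0.20, 0.40, 0.90, 1.00])
print(fuse_exposures(under, over))
```

The fused result leans toward whichever frame has the better-exposed pixel at each position, which is the basic idea behind merging brackets.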
I have an iPhone 13 mini, and it takes pretty d*mn good photos all in all. WAY better than my old gen 1 SE from 2015, which was adequate at best. I thought the article was complaining about stuff that a single click or two would have fixed. Meh.
Capturing what the eye “sees”, as Steve refers to, is quite complex. For instance, under ordinary circumstances the average human eye has about 4x the dynamic range (the range of light to dark) of the best imaging sensors.
The same applies to color: under the right circumstances, the average 8-year-old can discern almost 3x more subtle variations of color than the best 10-bit displays are capable of showing.
I much prefer the ability to take as ‘natural/neutral’ a photo as possible.
If I want to tweak it, then I want to do it my way, later. I’d prefer access to the algorithms and computation afterwards rather than while taking photos.
I have nothing against computational photography being done on the fly. We already have that option (or rather, in a lot of cases, it and no ‘natural/neutral’ option). If people want that ease of use, good for them.
And yes, we already have filters to apply afterwards, etc., and LUTs. I’d like to have access to what the on-the-fly stuff does in post, though.
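On the LUT point, for anyone who hasn’t used one: a 1-D per-channel LUT is just a lookup table you interpolate through, which is why having the camera’s adjustments exposed that way would make them reproducible in post. A minimal sketch with a made-up 5-entry S-curve (not any real camera profile):

```python
import numpy as np

def apply_lut_1d(channel: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a 1-D LUT to one channel of float values in [0, 1],
    linearly interpolating between LUT entries."""
    # LUT sample positions, spread evenly across [0, 1]
    xs = np.linspace(0.0, 1.0, lut.size)
    return np.interp(channel, xs, lut)

# Hypothetical contrast-boosting S-curve: darkens shadows, lifts highlights
lut = np.array([0.0, 0.15, 0.5, 0.85, 1.0])
pixels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(apply_lut_1d(pixels, lut))  # maps each value through the S-curve
```

With the actual on-the-fly curves exposed like this, the same look could be applied (or inverted) after the fact instead of being baked into the shot.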