An alternate take on computational photography

There is an element of tilting at windmills here, but I think she also has some valid points.

That being said, she understates how much the hardware has advanced from the 2020 iPhone SE to the iPhone 13 Pro.

It’s time to bring contrast back to our smartphone photos - The Verge

It does bring up a larger point that I sometimes have to remind myself of as well: these are PERSONAL devices, and you should set them up and use them to your preferences, not necessarily to what is accurate/correct. (Especially in my case when I go into an office and see a display set to maximum brightness and contrast.)

I do, however, believe "accurate" is what the devices should deliver as a starting point, and then let the user take it from there.

2 Likes

It’s all in what you’re used to + personal preference. Modern computational photography is much closer to what people see with their eyes. It adds in the dynamic range that film and sensors were always too limited to show. Back in the day, high contrast was unavoidable so it became part of the “art”. It was always an artificial distortion, however.
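In case it helps picture what "adding in the dynamic range" means in practice: phone HDR pipelines typically merge several bracketed exposures with per-pixel weights. Here is a minimal, illustrative sketch of that idea (a toy Mertens-style fusion, not Apple's or Google's actual pipeline; the `load_linear_frames` helper is hypothetical):

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Toy exposure fusion: blend bracketed frames (float arrays in [0, 1])
    using per-pixel 'well-exposedness' weights, in the spirit of Mertens-style
    fusion. Real phone pipelines also align frames, denoise, and tone-map."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    # Weight pixels by how close they are to mid-gray (0.5): clipped shadows
    # and blown highlights in any one frame get little influence on the result.
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12  # avoid divide-by-zero
    return np.sum([w * f for w, f in zip(weights, frames)], axis=0) / total

# Hypothetical usage with three bracketed shots of the same scene:
# under, normal, over = load_linear_frames(...)  # assumed helper, not shown
# merged = fuse_exposures([under, normal, over])
```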

1 Like

Sort of agree, but visual perception is a very tricky and also very personal thing. There are strong elements of bias, plus the human tendency to take whatever you are first primarily exposed to as the "reference" in your brain for what is correct.

I think we’ve touched on this in other threads, but people rarely pick accurate as “best” unless they have taken the time to truly learn and expose themselves to what is accurate.

Audio salespeople learned this a long time ago: the louder of two pairs of speakers was almost always judged by people to sound better.

Or, more modern and relevant to the subject at hand, Samsung figured this out early on with their smartphones, where to this day they still crank up the default saturation. In a side-by-side glance the S phones looked "better" than the other "droids" of the day. For that matter, they still do that against the Google Pixels, which, whatever else you might say about them, have trended in the last few generations toward pretty "accurate" displays.

I had a family friend who was one of my photo mentors, and who earned that qualification by working as a young man as an assistant to Ansel Adams.

Adams, along with having (IMHO) an unparalleled eye for what makes a beautiful image, was also the absolute master of the darkroom. And his mantra was that he wanted to convey how it "felt" when he captured an image.

So in many cases he was compensating for the technical limitations of the equipment, but he also would “enhance” an image to better convey that feeling, such as increasing the contrast in a stormy sky to amp up the drama of the shot.

And FWIW, we’ve heard from both Google and Apple engineers that some of the decisions made in computational photography, especially with HDR, reflect what has been shown to be more "pleasing" rather than accurate.

2 Likes

Agreed, though I was mostly writing in reference to contrast, which was due to the lack of dynamic range in photographic technology. It’s an objective fact that human eyes + brain can "see" far more dynamic range (shadows show their detail more, highlights are blown out less) than the traditional photography the article writer was extolling.

Yes, definitely, but conversely with displays, for example, which is my home domain so to speak: unless they have been educated, people almost unerringly pick the display with the highest contrast as having the "sharpest" and most "realistic" image…

One reason so many are in love with OLED, IMHO, is their inherently high contrast.

1 Like

Well, that’s preference and lack of exposure to accurate monitors, as you said. Same as the article writer thinking that impenetrably dark shadows are more “realistic” in photos—it’s what they grew up with.

All that aside, I agree that a modern camera should default to the most objectively accurate photo possible then give the user alternate interpretation settings. Much like how neolithic photographers could opt for Fujifilm or Kodachrome or even various B&W films to obtain the look they sought.

Or rather, not default but have the objectively accurate mode be a (sticky) option itself since as you say, most people like a more vibrant look out of the box.

I always immediately switched my Samsung devices to “natural” colors.

Edit: and I see that my iPhone 13 Pro, iOS 16, is set to “standard” out of the box, which has the dynamic range the article writer dislikes so much. But you can swipe over to the very next options pane for “rich contrast”, which might have made them happy and removed the motivation for the article.

1 Like

A little bit of a sidetrack, but this reminds me of the other day when I had a golden sunrise while overhead dark clouds were dropping a fine drizzle. It created a nice rainbow that, to my eye, the top photo below (an iPhone 13 Pro "standard" shot) doesn’t do full justice to. Yet when I texted the shot to a friend he immediately responded, "photoshop!" because "rainbows never show up that well in cameras". He’s my age and he grew up with rainbow pics like the lower image (which I did "photoshop", ironically).

Honestly, it took a bit of a tweak to make it look more like what my eye saw, subjectively:

:person_shrugging:

3 Likes

I always loved shooting and printing in Cibachrome. Cibachrome was clearly a distortion of reality but it made for far more dramatic photographs.

3 Likes

Agreed, each film had its own unique look that was sometimes better suited for particular subjects. I liked Kodachrome for people and city shots, but liked Ektachrome for landscapes.

And of course Kodacolor was the go-to standard for prints.

There are film profiles in some of the photo editing apps that attempt to simulate the more popular film stocks, but to my eyes they all seem to be lacking in one way or another.

I do think starting out with film made me a better photographer though.

If you EVER want to truly test your perception and visual acuity, go to a Van Gogh exhibit that is showing The Starry Night…it will blow you away…

That painting is also notoriously difficult to photograph properly with digital. One Nikon engineer told me that part of Nikon’s testing of image sensors included photographs of it, and that they all failed in at least one aspect or another.

No flash allowed of course

The colors literally glow and are so intense. We weren’t even allowed to take non-flash photos. I was lucky to see it twice (Atlanta and New Haven).

1 Like

As an artist (I paint watercolors), I’ve found the development of HDR computational photography to be a godsend. I’m always trying to capture reference photos that come at least close to what my eye sees in regard to color, and it’s tough. I’m always tweaking photos just after I take them; to my eye, Apple tends to make them too cool and too saturated, for example, with the shadows too dark. Later, I can decide what I want to alter for effect, but I never liked being stuck with the reference photos dictating how I saw things. And I always disliked having to take two photos with different exposures. Such a pain.

I have an iPhone 13 mini, and it takes pretty d*mn good photos all in all. WAY better than my old first-generation SE from 2016, which was adequate at best. I thought the article was complaining about stuff that a single click or two would have fixed. Meh.

2 Likes

Anyone who is interested in digital art photography, which I suspect is many in our forum, really should take the time to learn a bit about it.

There is more than a bit of "art" to it as well as science, as it leans heavily on visual perception as well as straight-up data manipulation.

This is a decent overview/primer on the subject.

What Is Computational Photography? (howtogeek.com)

As Steve refers to, capturing what the eye "sees" is quite complex. For instance, under ordinary circumstances the average human eye has about 4x the dynamic range (the range from light to dark) of the best imaging sensors.

The same applies to color: under the right circumstances the average 8-year-old can discern almost 3x as many subtle variations of color as the best 10-bit displays are capable of showing.
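For anyone who wants to put those figures in the units displays and cameras are usually specced in, here is a small back-of-the-envelope sketch (the 4,000:1 sensor ratio is just an assumed example figure, and "4x" is read here as 4x the linear light-to-dark ratio):

```python
import math

# Dynamic range is usually quoted in "stops": each stop doubles the
# light-to-dark ratio, so stops = log2(contrast ratio).
def ratio_to_stops(ratio):
    return math.log2(ratio)

# Illustrative only: if a sensor covered a 4,000:1 ratio (an assumed figure),
# then 4x that linear ratio corresponds to only 2 extra stops.
sensor_ratio = 4_000
print(ratio_to_stops(sensor_ratio))      # ~12 stops
print(ratio_to_stops(sensor_ratio * 4))  # ~14 stops (2 stops more)

# A 10-bit panel has 2**10 = 1024 code values per channel; the claim above is
# that a young eye can distinguish roughly 3x that many gradations.
print(2 ** 10)      # 1024
print(3 * 2 ** 10)  # 3072
```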

1 Like

I much prefer the ability to take as "natural/neutral" a photo as possible.

If I want to tweak it, then I want to do it my way, later. I’d prefer access to the algorithms and computation afterwards rather than while taking photos.

I have nothing against computational photography being done on the fly. We already have that option (or rather, in a lot of cases, it and no "natural/neutral" option). If people want that ease of use, good for them.

And yes, we already have filters to apply afterwards, etc., and LUTs. I’d like to have access to what the on-the-fly stuff does in post, though.
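For what it’s worth, a basic LUT of the kind mentioned above is just a per-code-value remapping. A minimal sketch (the S-curve here is my own illustrative choice, not any particular app’s filter):

```python
import numpy as np

def apply_lut_8bit(image, lut):
    """Apply a 1D look-up table to an 8-bit RGB image (H x W x 3, uint8).
    `lut` is a 256-entry uint8 array mapping input code values to output."""
    return lut[image]

# Example: a gentle S-curve that adds midtone contrast, loosely in the spirit
# of a "rich contrast" style, built from a smoothstep function (my own choice).
x = np.linspace(0.0, 1.0, 256)
s_curve = x * x * (3.0 - 2.0 * x)  # smoothstep: darker shadows, brighter highlights
contrast_lut = np.clip(np.round(s_curve * 255), 0, 255).astype(np.uint8)

# photo = ...                                  # any 8-bit RGB image as a numpy array
# punchier = apply_lut_8bit(photo, contrast_lut)
```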

This may be an "old" topic, but with my recent study of current/upcoming phone choices, your comments made me wonder:

Is what I see “real” or just a construct of what my eyes and untold generations of evolution made the world look like to humans?

1 Like

Me as well, which is why I have always shot in "raw" mode on my DSLR.

Unfortunately, on smartphones that isn’t really an option: even with the so-called "raw" option on the iPhone Pro or the Samsung S22/S23 Ultra, there is still significant processing on the "front end" before anything is written out.

And if anything, Samsung has "doubled down" on that with the S23 Ultra and Fold 5 cameras: in our testing, getting "accurate" output in anything other than "perfect" lighting and exposure situations (like shooting an 18% gray card under studio lighting) produces highly variable results.

Especially with the Fold 5, which Samsung says is "improved" over the Fold 4: the change seems to be totally software driven, as the sensors are the same, but the output shows "punched up" color compared to the Fold 4.
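For the curious, here is roughly the kind of gray-card sanity check described above, sketched out (the crop coordinates and the single-gamma inverse are simplifying assumptions, not the actual test procedure):

```python
import numpy as np

def srgb_encode(linear):
    """Standard sRGB transfer curve (linear light -> display code value, 0-1)."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

def gray_card_error_stops(photo, box):
    """Estimate exposure error in stops from a crop containing an 18% gray card.
    `photo` is an 8-bit sRGB RGB image; `box` = (y0, y1, x0, x1) for the card."""
    y0, y1, x0, x1 = box
    patch = photo[y0:y1, x0:x1].astype(np.float64) / 255.0
    measured = patch.mean()
    expected = float(srgb_encode(0.18))  # ~0.46 for an 18% reflectance card
    # Undo the transfer curve (rough single-gamma inverse) before comparing in stops.
    to_linear = lambda v: v ** 2.4
    return np.log2(to_linear(measured) / to_linear(expected))

# err = gray_card_error_stops(test_shot, (400, 600, 800, 1000))  # hypothetical crop
# print(f"{err:+.2f} stops from nominal")
```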