why computational photography will never replace a “real” camera

When I speak of computational photography, I’m speaking of any Apple iPhone since the iPhone 7, when Apple’s computational photography, with its capability of blurring the background behind a subject, came into its own. And when I speak of a “real” camera, I am of course speaking of any of my micro four thirds cameras from Olympus and Panasonic, along with the lenses that are part of the micro four thirds system. I’ll use two of the photos from the last post.

This photo of Bo was taken with the Olympus Pen F and the Olympus M.Zuiko 12-40mm/2.8 PRO zoom, with the lens at its widest aperture, f/2.8. Focus is right on the eye, and the depth of field is deep enough to keep the long whiskers above, to either side of, and on the muzzle in sharp focus. Focus begins to fall off around the nose, and then further back, starting at the front of the ears and moving toward the body. Bo’s right forepaw is also in reasonable focus, lending additional interest to the overall composition. As your eye travels down Bo’s body, the out-of-focus effect grows increasingly, and pleasingly, blurry.

Interchangeable lenses can render a much nicer focus falloff than any of today’s smartphone cameras, whether from Apple or the various Android makers. That’s just the way lenses work in the real world. If I’d taken this photo with my iPhone 11 Pro Max, you’d have seen Bo’s entire head in focus, with everything behind it suddenly out of focus. It would have been a tossup whether his right forepaw would have been rendered in or out of focus. The effect would have been subtly disconcerting, at least to any photographer who’s been using an ILC for some time.

The second photo of Bo was taken with the Panasonic Lumix G9 and the Lumix 30mm/2.8 macro lens at f/2.8. I’d caught Bo outside in the shadows, checking out the squirrels playing up in a nearby tree (hence the overall blue tint). Once again you can see the plane of sharpest focus around the eyes, with focus gradually falling off around his left cheek, and then becoming even blurrier moving down his crouched body. Lens physics being physics, the same effect is produced regardless of who manufactured the lens.
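The gradual falloff both photos show is what the standard thin-lens blur-circle approximation predicts. As a rough sketch of why the phone has to synthesize its blur instead (the focal lengths, f-numbers, and sensor widths below are illustrative assumptions, not measurements from these specific cameras):

```python
# Thin-lens blur-circle approximation (standard optics, not from the post):
# a point at distance d, with the lens focused at distance s, renders as a
# disk of diameter  c = f^2 * |d - s| / (N * d * (s - f)).
# All camera parameters below are illustrative assumptions.

def blur_circle_mm(f_mm, n_stop, focus_m, subject_m):
    """Blur-disk diameter on the sensor, in millimetres."""
    f_m = f_mm / 1000.0
    c_m = (f_m ** 2) * abs(subject_m - focus_m) / (
        n_stop * subject_m * (focus_m - f_m))
    return c_m * 1000.0

# 40mm at f/2.8 on micro four thirds (~17.3mm sensor width), focused at 1m;
# blur for a background point 0.5m behind the subject, as a fraction of frame width
m43 = blur_circle_mm(40, 2.8, 1.0, 1.5) / 17.3

# Hypothetical ~6mm f/1.8 smartphone main camera (~5.6mm sensor width), same scene
phone = blur_circle_mm(6, 1.8, 1.0, 1.5) / 5.6

print(f"background blur, as fraction of frame width: "
      f"m43 {m43:.4f} vs phone {phone:.4f}")
```

Under these assumptions the real lens blurs the background by roughly an order of magnitude more of the frame than the tiny phone lens does optically, which is why the phone's "portrait" blur is computed, and why it cuts in abruptly rather than falling off smoothly.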

This isn’t to say that smartphone photography is terrible. Far from it. Some of my best photographs were taken with my iPhone of wide-field subjects, primarily landscapes. I have a pair of truly gorgeous photos taken in the Garden of the Gods in Colorado that I wouldn’t trade for anything I might have gotten out of the Olympus camera I had with me at the same time. It was a lucky combination of subject, composition, and lighting that allowed me to capture those images with my iPhone 7 Plus. And having the camera built into the smartphone brings benefits that have already been written about ad infinitum.

But if I need that nice sharpness roll-off effect, then I’m going to turn to my ILC every single time. That’s just the nature of the tools, and why you should always use the proper tool.

One other observation: both images are straight out of camera, with only my copyright notice added and the image shrunk for publication on this blog. No special post-production manipulation. Every manufacturer has learned color science well enough to render post-production software moot unless you have a very specific use case for it. Practice long enough with your camera and you’ll find it can produce color and luminance in an image that is far more than good enough; it’s fantastic.
