I've got a time machine I picked up at a swap meet this weekend. Let's fire it up and have some fun, shall we?
Before I get in, I'm going to grab a few cameras and my MacBook Pro Retina with my current image conversion tools. Hmm, let's pick the Canon EOS M, the Nikon Coolpix P7800, and, oh, I don't know, maybe a Sony A3000. The first is a camera that didn't sell until Canon put it on fire sale at ridiculous prices. The second is a state-of-the-art compact that's not getting a lot of attention. The third is a DSLR wannabe that has been trimmed down to a wicked price point.
Let's set the time machine to 2003, ten years ago. Just to remind everyone, the D70 wasn't out yet, and the 4MP D2h was Nikon's big announcement that year. The top Coolpix of the time was the 5400. Mirrorless? Not even a blip on the radar.
BZZZ. ZZZFF. PFFFT. Ding.
And here we are in 2003.
The photographers I'm meeting are all agog about basically one thing: the image quality I'm demonstrating with my from-the-future cameras. "Oooh, look at all those pixels!" "Wow, you can shoot in low light?" "Wait a second, how is your Mac processing all that so quickly?"
Virtually everyone is super impressed with the image quality. Indeed, they salivate over it and want that level of quality from their own cameras.
Did anyone comment on the twist LCD on the Coolpix? Nope. Coolpix had twist-and-shoot back in the '90s. Did anyone comment on the focus performance? Nope. None of these cameras beats the top cameras of 2003. Did anyone comment on all the new features? Not really, though someone did ask why my still cameras all had video capabilities.
"Wait, filenames are still 8.3 in the future?" "Wait, we still have silly scene modes that don't really work well?" "What do you mean I need new lenses?" "Gee, Photoshop seems to have more buttons and windows."
On the other hand, everyone liked Lightroom once I demonstrated it to them ;~).
BZZZ. ZZZFF. PFFFT. Ding.
Okay, I'm back from my little experiment. What did I learn? Or, more importantly, what didn't the camera makers learn?
Simple. Sensor iteration, like CPU iteration, is pretty much a given. Move forward some number of months and there's an improvement. Move forward many years and there's an obvious improvement. That's the way semiconductor technology works. What you're delivering today isn't as good as what you're in beta test with in the labs, and what you're in beta test with isn't as good as what you're still in development with. The curves of semiconductor progress are relatively predictable and it's rare that we see something appear that's very far from the prediction curve.
Which means this: you could simply replace the sensor in a 2003 camera with a 2005 sensor, then a 2007 sensor, then a 2009 sensor, all the way to the current 2013 sensor, and get a "better camera." In essence, that's what the camera makers have been doing. Okay, I'll be harsher: in essence, that's basically all that the camera makers have been doing. Sure, they add some trivial iterative features here and there, including ones they intentionally left out in earlier models so they'd have something to add later. But the "big" changes have been rare and often not exactly what the user asked for (can you say video capabilities in a still camera?).
It's not surprising to me that we've seen a return to retro-type camera designs lately. The designs of cameras of the '80s weren't broken, so why reinvent them? A few camera makers seem to have come to that conclusion recently, though they then immediately set about the same small, iterative feature creep that no one is really asking for.
Meanwhile, the whole notion of "taking a photograph" is still left mostly undisrupted by the camera makers (though not by the smartphone makers). As I pointed out a decade ago, the Japanese camera makers still think of photography much like they did in film: take a picture, then set it off down a process that doesn't involve the camera. The camera is only an input device, ever. And a pretty dumb one at that.
Here's one of the most absurd things about our current cameras: they have HDMI outputs. Which means that we can connect them directly to big-screen TVs. Of course, our cameras are taking 4:3 or 3:2 images for the most part, but the TV screen is 16:9, so we have a cognitive dissonance right from the get-go. But let's say for a moment that we take our 3:2 images into our computer and edit them into 1920 x 1080 JPEGs: can we put them back on the camera and play them on our TV screen? Nope.
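The aspect-ratio mismatch above is simple arithmetic. As a rough sketch (plain Python, using the numbers from this paragraph; the function name and example dimensions are mine, not from any camera's firmware), here's how much of the screen a 3:2 still gives up when letterboxed onto a 16:9 panel:

```python
def fit_letterbox(img_w, img_h, scr_w, scr_h):
    """Scale an image to fit a screen without cropping.

    Returns the displayed size and the fraction of the screen
    left as black bars (pillarbox/letterbox)."""
    scale = min(scr_w / img_w, scr_h / img_h)
    disp_w, disp_h = img_w * scale, img_h * scale
    unused = 1 - (disp_w * disp_h) / (scr_w * scr_h)
    return disp_w, disp_h, unused

# A 3:2 still (say 3000 x 2000 pixels) on a 1920 x 1080 (16:9) TV:
w, h, wasted = fit_letterbox(3000, 2000, 1920, 1080)
print(round(w), round(h), f"{wasted:.0%}")  # 1620 1080 16%
```

So even when the camera and the TV do talk to each other, roughly a sixth of that big screen is black bars.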
Part of the problem is the DCF standard, a standard that I pointed out was already outdated and incomplete when it first appeared. Today, it's as if someone were forcing everyone to run punch cards on their laptop computer, that's how bad DCF has turned out to be. And if DCF is terrible, the Blu-ray-inspired AVCHD is a warren of absurdity.
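To see just how constrained DCF is, here's a small sketch of a check against its 8.3 naming pattern (my reading of the DCF rules, simplified and not an official validator; the extension list is trimmed for illustration): four leading characters, a four-digit frame number, and a three-character extension, all uppercase.

```python
import re

# DCF basic file name: 4 chars (A-Z, 0-9, underscore) + a 4-digit
# number (0001-9999), plus a short uppercase extension. That's it.
# (Extension list simplified here to the common still-image cases.)
DCF_FILE = re.compile(r"^[A-Z0-9_]{4}(?!0000)\d{4}\.(JPG|TIF)$")

def is_dcf_name(name: str) -> bool:
    """Return True if the filename fits the DCF 8.3 pattern."""
    return bool(DCF_FILE.match(name))

print(is_dcf_name("DSC_0001.JPG"))   # True: classic camera-style name
print(is_dcf_name("IMG_12345.JPG"))  # False: nine characters won't fit 8.3
print(is_dcf_name("sunset.jpg"))     # False: no room for a real name
```

A decade from the standard's debut, your "futuristic" camera still can't write a file called sunset.jpg.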
What's stalling the camera market is, well, the cameras. Sure, at some level we'll all buy a new camera that delivers more image quality. But what we're not buying new cameras for is making our image production process easier, faster, or more compelling, because they don't.
I was initially excited by Sony's opening up the API on their latest cameras. Then I looked at the APIs they've publicly disclosed versus the ones that are needed: we aren't there yet. Not by a long shot. And don't get me started on Android-based cameras: to date no one has demonstrated exactly why you'd want one over the same camera without Android (but with WiFi). Oh wait, someone has: the smartphone makers.
Wake up, Japan. The next "image standards" are coming from players like Amazon, Apple, Google, and Microsoft. Already I can take a shot with my iPhone, edit it to my heart's content, and play it back on all sorts of devices (no cable required) without getting that dreaded "Image can't be displayed" message. Already I can use my phone to move images where I want them, when I want it to happen. There's more coming. Soon. Just not from Japan, apparently.