Nikon's D600 – FX goes Prosumer

D600 with 24-85

Nikon announced the D600 at 5am today, confirming rumours which were beaten only by Apple’s iPhone 5 leaks for accuracy.

The 24Mp entrant seems to be part of a ‘full-frame fever’ undoubtedly driven by Sony’s CMOS sensor development and pricing and, more crucially, by packaging the definitive 35mm format to appeal to mainstream consumers.

Despite matching the D3X’s resolution, the D600 is a very different sensor and package. Will this be the camera to push Nikon’s DSLR market share to over 50%?

The current DSLR line-up at Nikon is quite striking, not only for capability but also for positioning, with a substantial gap between the highly specified DX-crop D7000 and the 36Mp professional D800 bodies. The middle ground retains the D300s, almost identical in price to the D7000 but qualifying for Nikon Pro User status and now one of Nikon’s oldest DSLR bodies. The D600 fits at the upper end of that gap, with an SRP of £1955.99 in the UK for the body.

For that price, you get a tightly controlled feature set, a compact, lightweight body and sensor capabilities that exceed the state of the art of just two years ago, when the D3X was in demand, in short supply, and retailing at over twice the D600’s figure. A quick play at the launch venue suggests that the specified native ISO range – peaking at 6400, rather than the D3X’s 1600 – is very usable. The body weighs only 760g, using a magnesium upper and rear shell and offering weathersealing similar to the D800’s.

Advances in processing, video and OS make themselves felt instantly. FX- and DX-crop HD video recording with HDMI output for uncompressed streams and sophisticated audio monitoring, a native ISO range of 100 to 6400 (extendable from 50 to 25,600) and in-body raw editing are all very compelling features regardless of resolution. The D600 manages 5.5-6fps in full-frame mode, and shoots to two UHS-I SDHC cards.

The 100% viewfinder is bright and, despite using a square window without a blanking filter rather than the round type found on previous FX bodies, seems very similar to the D800’s. The eyepoint may be a further slight reduction, but without detailed specifications that’s a hard one to call.

A true pentaprism is used – expected, perhaps, in a full-frame high-end body but fighting an increasing trend for electronic viewfinders.

A compact body presents a few ergonomic challenges, and Nikon have tackled the control interface with the experience you’d hope for after the clear new direction shown in the D4. Gentle slopes define the shutter release area, with joystick, function buttons and the standard buttons beside the 3.2″ screen (which features a clip-on protector). A mode dial provides consumer-style selection of scene modes, with a drive dial below it that includes the IR remote mode, supported by receivers on both the front and rear of the body as per the D7000.

Nevertheless, the D600 is a consumer package. It’s a high-end one, but it carries a 1/4000th shutter, a horizontal-axis level only, and consumer interface sockets (the compact remote/GPS port rather than the screw-in port of the pro bodies, and no PC-sync socket). Unlike the D800, the D600 has USB 2. At launch it seemed that the WT4 wireless tethering solution was not supported, but some of the launch material suggests that it is, alongside the low-cost WU-1b introduced specifically for the D600.

The Android remote control application for the WU-1b (below) is already available; an iOS version will follow before the end of September 2012. It offers rather less control than Camera Control Pro, but does provide a live-view relay and release function.

That WU designation has been seen before, on the similar accessory for the determinedly consumer (and best-selling) D3200. It’s a wireless broadcast unit slightly more sophisticated than using an Eye-Fi card, and at £64 is almost a tenth of the SRP of the WT4. It sacrifices many of the camera control functions (though triggering is possible), and is mainly intended to transmit and share images via Android or iOS devices. It’s a shame that this split exists in Nikon’s line, as the WT4’s full-fat networking and storage solution is more than many studio photographers need – they would probably find the basic transfer/triggering of the WU-style units very useful on the pro bodies.

Mapping the planes

Samsung has a patent, and a plan, for using two lenses with triangulation (image offset) depth detection between two images in what is roughly a stereo pair.
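The patent itself does not publish an algorithm, but the geometry it relies on is standard stereo triangulation: the offset (disparity) of a detail between the two images is inversely proportional to its distance. A minimal sketch, with all names and parameters purely illustrative:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulate per-pixel depth from stereo disparity.

    disparity_px : offset of matched detail between the two images, in pixels
    focal_px     : lens focal length expressed in pixels
    baseline_mm  : separation of the two lenses

    Classic relation: depth = focal * baseline / disparity.
    Zero or negative disparity (no match / infinity) maps to NaN.
    """
    d = np.where(disparity_px > 0, disparity_px, np.nan)
    return focal_px * baseline_mm / d

# A detail offset by 10px, with a 1000px focal length and 20mm baseline,
# sits 2000mm from the camera.
depth = depth_from_disparity(np.array([10.0]), focal_px=1000.0, baseline_mm=20.0)
```

Matching the details between the two frames (to get the disparity in the first place) is the hard part; the division above is the easy part.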

Pentax also have a system on the new Q range which takes more than one exposure, changes the focus point between them, and uses this to evaluate the focus map and create bokeh-like effects. Or so the pre-launch claims for this system indicate, though the process is not described. It’s almost certain to be a rapid multishot method, and it could equally well involve blending a sharp image with a defocused one.
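If it is indeed a rapid multishot method, the simplest version would compare local sharpness between the two exposures and label each pixel by which focus setting renders it more crisply. A sketch of that guess – the `near_far_map` name and the gradient-energy measure are my own assumptions, not Pentax’s published process:

```python
import numpy as np

def near_far_map(focused_near, focused_far):
    """Crude two-exposure focus map: for each pixel, ask which frame
    is locally sharper, using gradient energy as the sharpness measure.
    Returns True where the near-focused frame wins."""
    def grad_energy(img):
        gy, gx = np.gradient(img.astype(float))
        return gx * gx + gy * gy
    return grad_energy(focused_near) > grad_energy(focused_far)
```

A real implementation would smooth the energy over a neighbourhood and handle ties, but even this per-pixel comparison separates an in-focus edge from its defocused counterpart.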

In theory, the sweep panorama function of Sony and some other cameras can be used to do exactly the same thing – instead of creating a 3D 16:9 shot it could create a depth-mapped focus effect in a single shot. 3D is possible with sweep pans by simply taking two frames from the multi-shot pan separated by a certain amount, so the lens positions for the frames are separated enough to be stereographic. 3D ‘moving’ pans (scrolling on the TV screen) can be compared to delaying the playback of the left-eye view and shifting the position of subject detail to match the right. But like 16:9 pans, they are just two JPEGs.

All these methods including the Samsung concept can do something else which is not yet common – they can alter any other parameter, not just focus blur. They could for example change the colour balance or saturation so that the focused subject stands out against a monochrome scene, or so the background to a shot is made darker or lighter than the focused plane, or warmer in tone or cooler – etc. Blur is just a filter, in digital image terms. Think of all the filters available from watercolour or scraperboard effects to noise reduction, sharpening, blurring, tone mapping, masking – digital camera makers have already shown that the processors in their tiny cameras can handle such things pretty well.
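To make the point concrete, here is a minimal sketch of one such depth-gated filter – fading the scene to monochrome away from the focused plane. The function and its parameters are hypothetical, not any manufacturer’s implementation; swap the desaturation for blur, tone mapping or any other filter and the structure stays the same:

```python
import numpy as np

def depth_gated_desaturate(rgb, depth, focus_depth, tolerance):
    """Keep colour near the focused plane; fade to greyscale elsewhere.

    rgb         : H x W x 3 float image
    depth       : H x W per-pixel depth map
    focus_depth : depth of the subject plane
    tolerance   : depth distance over which colour falls off to grey
    """
    lum = rgb.mean(axis=-1, keepdims=True)          # crude luminance
    grey = np.repeat(lum, 3, axis=-1)
    # weight is 1.0 at the focus plane, falling linearly to 0 with distance
    w = np.clip(1.0 - np.abs(depth - focus_depth) / tolerance, 0.0, 1.0)
    w = w[..., None]
    return w * rgb + (1.0 - w) * grey
```

The depth map is doing all the work: the filter itself is an ordinary per-pixel blend, which is exactly why in-camera processors already handle this class of effect comfortably.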

Once a depth map exists there’s almost no limit to the manipulation possible. Samsung only scratches the surface by proposing it be used for the esoteric and popular bokeh enhancement (a peculiarly Japanese obsession which ended up going viral and infecting the entire world of images). I can easily imagine a distance-mapped filter turning your background scene into a Monet or a van Gogh, while applying a portrait skin-smoothing process to your subjects.

Any camera with two lenses in stereo configuration should also, in theory, be able to focus using a completely different method to existing off-sensor AF – using the two lenses exactly like a rangefinder with two windows. So far this has not been implemented.

Way back – 40 years ago – I devised a rangefinder optical design in which you can see nothing at all at the focus point unless the lens is correctly focused. It works well enough for a single spot, the image detail showing the usual double coincident effect when widely out of focus, but blacking out when nearly in focus and suddenly becoming visible only when focus is perfect. I had the idea of making a chequerboard pattern covering an entire image, so that the viewfinder would reveal the focused subject and blank out the rest of the scene, but a little work with pencil and paper quickly shows why it wouldn’t work like that. The subject plane would have integrity; other planes would not all black out, but would create an interestingly chaotic mess with phase-related black holes.

Samsung’s concept, in contrast, could isolate the subject entirely – almost as effectively as green screen techniques. It would be able to map the outline of a foreground subject like a newsreader by distance, instead of relying on the colour matte effect of green or blue screen technology. This would free film makers and TV studios from the restraints of chroma-keyed matting (not that you really want the newsreader wearing a green tie).

The sensitivity of the masking could be controlled by detecting the degree of matched image detail offset and its direction (the basic principle of stereographic 3D) – or perhaps more easily by detecting exactly coincident detail, in the focused plane. Photoshop’s snap-to for layers works by detecting a match and so do the stitching functions used for sweep and multi shot in-camera panorama assembly. Snap-to alignment of image data is a very mature function.
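That offset-detection step is a solved problem. A standard way to find the shift of matched detail between two patches is phase correlation, which is one of the techniques stitching and snap-to alignment can build on; coincident detail in the focused plane shows up as a zero offset. A self-contained sketch (function name my own):

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (row, col) offset of b relative to a by
    phase correlation: normalise the cross-power spectrum so only the
    phase (i.e. the shift) survives, then find the correlation peak."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.abs(F) + 1e-12                       # keep phase, drop magnitude
    corr = np.fft.ifft2(F).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # fold wrapped peak positions back into a signed offset
    return tuple(i - s if i > s // 2 else i for i, s in zip(idx, corr.shape))
```

Run on two frames of a stereo pair (or two slices of a sweep pan), the per-patch version of this gives exactly the offset field a depth map is built from.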

Just when you think digital photography has rung all the bells and blown all the whistles, the tones of an approaching calliope can be heard rolling down the river…

– David Kilpatrick