The truth about 24 megapixels

There is a rumour, which the ides of August may stab in the back or elevate to divine truth, that the coming Alpha 77 will have 24 megapixels.

Because of this rumour, there is a lot of very negative discussion going round to the effect that 24MP on APS-C is far too much and the results will be poor (etc).

Well, they may be, if you think Canon’s results are poor – you can judge that for yourself, try a Canon. But they do not have 24MP sensors!

They also do not have APS-C sensors, in the same way that Sony does. They have smaller APS-C sensors with lots of pixels cut off all round the edges. Sony has chunky big APS-C sensors with acres of extra pixels to spare. This is a slight exaggeration of the situation, but hey, I may as well join in the mood of unrestrained opinion!

Facts: Canon’s 18-megapixel sensor makes images 3456 x 5184 pixels in size (give or take a few, depending on your raw processor). Fact: their smaller 1.6X factor sensor measures 22.3 x 14.9mm. Fact: Canon states it is approximately a 19 megapixel sensor with 18 megapixel final output.

Facts: Sony’s 16.2 megapixel sensor measures 23.5 x 15.6mm and into this packs 3264 x 4912 pixels (active area).

If you made a current Canon pixel-pitch sensor the same 1.5X size as a Sony sensor, it would be around 19.7 megapixels active from a 21 megapixel total. If you put Canon pixels on an existing Sony 1.5X sensor, you would be up to 3618 x 5463 pixels – while 24 megapixels needs 4000 x 6000.

Clearly it’s not the quantum leap some people think, just a quantum leapfrog over Canon’s back with the benefit of the larger sensor. And it’s worth considering that APS-C covers sensor sizes up to a true 24 x 16mm, for Super-35 video use, and that such sensors have already been made. A few wide-angle lenses and zooms might be a bit tight on the image circle, but that half millimetre one way, 0.4mm the other way, adds up to a surprising number of pixels, enough to take the 19.7 megapixels up to 20.7 megapixels without changing from Canon’s current pixel pitch.
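The pixel-pitch arithmetic above is easy to check with a quick sketch – the sensor dimensions and pixel counts are the ones quoted in the text, and the results land within rounding of the figures given:

```python
# Scale Canon's current pixel pitch (18MP active on a 22.3 x 14.9mm sensor)
# onto larger sensor sizes and count the resulting pixels.
# All dimensions are the figures quoted in the article.

def scaled_megapixels(width_mm, height_mm, px_per_mm_w, px_per_mm_h):
    """Active megapixels for a sensor of the given size at the given pitch."""
    return (width_mm * px_per_mm_w) * (height_mm * px_per_mm_h) / 1e6

# Canon's pitch: 5184 pixels across 22.3mm, 3456 pixels across 14.9mm
pitch_w = 5184 / 22.3   # roughly 232 pixels per mm
pitch_h = 3456 / 14.9   # roughly 232 pixels per mm

sony_apsc = scaled_megapixels(23.5, 15.6, pitch_w, pitch_h)  # ~19.7-19.8 MP
super_35  = scaled_megapixels(24.0, 16.0, pitch_w, pitch_h)  # ~20.7 MP

print(f"Sony-size APS-C at Canon pitch: {sony_apsc:.1f} MP")
print(f"Full 24 x 16mm at Canon pitch:  {super_35:.1f} MP")
```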

So don’t panic. The chances are that 24 megapixels on proper, big Sony APS-C will perform very well. If you’ve got the glass and the technique to make it…

– David Kilpatrick


Mapping the planes

Samsung has a patent and a plan for using two lenses with triangulation (image offset) depth detection between two images in what is roughly a stereo pair. Here’s a link:

http://www.photographybay.com/2011/07/19/samsung-working-on-dslr-like-bokeh-for-compact-cameras/

Pentax also have a system on the new Q range which takes more than one exposure, changes the focus point between them, and uses this to evaluate the focus map and create bokeh-like effects. Or so the pre-launch claims for this system indicate, though the process is not described. It’s almost certain to be a rapid multishot method, and it could equally well involve blending a sharp image with a defocused one.
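Since the process is not described, here is only a guess at how a two-exposure focus map might be built – comparing local sharpness between a near-focused and a far-focused frame. The toy one-dimensional data and the gradient-based sharpness measure are purely illustrative assumptions, not Pentax's actual method:

```python
import numpy as np

# Sketch of the two-exposure idea: wherever the near-focused frame is
# locally sharper than the far-focused frame, the subject is near.
# Gradient magnitude stands in as a crude sharpness measure.

def sharpness(img):
    """Per-pixel sharpness estimate: absolute local gradient."""
    return np.abs(np.diff(img, append=img[-1]))

near = np.array([0, 1, 0, 1, 0, 0.5, 0.5, 0.5])    # detail in left half
far  = np.array([0.5, 0.5, 0.5, 0.5, 0, 1, 0, 1])  # detail in right half

focus_map = sharpness(near) > sharpness(far)  # True where subject is near
```

A real implementation would compare sharpness over small windows of a 2-D image, but the principle – a per-pixel vote between differently focused frames – is the same.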

In theory, the sweep panorama function of Sony and some other cameras could be used to do exactly the same thing – instead of creating a 3D 16:9 shot, it could create a depth-mapped focus effect in a single shot. 3D is already possible with sweep pans, simply by taking two frames from the multi-shot pan separated by a certain amount, so that the lens positions for the two frames are far enough apart to form a stereo pair. 3D ‘moving’ pans (scrolling on the TV screen) work comparably, delaying the playback of the left-eye view and shifting the position of subject detail to match the right. But like 16:9 pans, they are just two JPEGs.

All these methods including the Samsung concept can do something else which is not yet common – they can alter any other parameter, not just focus blur. They could for example change the colour balance or saturation so that the focused subject stands out against a monochrome scene, or so the background to a shot is made darker or lighter than the focused plane, or warmer in tone or cooler – etc. Blur is just a filter, in digital image terms. Think of all the filters available from watercolour or scraperboard effects to noise reduction, sharpening, blurring, tone mapping, masking – digital camera makers have already shown that the processors in their tiny cameras can handle such things pretty well.
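As a toy illustration of depth driving a parameter other than blur, here is a sketch in which depth controls desaturation, so the focused plane keeps its colour while the background goes monochrome. The image and depth map are hypothetical random data – in practice the camera would supply both:

```python
import numpy as np

# Once a per-pixel depth map exists, any parameter can be varied with
# depth, not just blur. Here depth drives desaturation: the focused
# plane (depth 0) keeps full colour, the far plane (depth 1) goes mono.

rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))                # toy RGB image, values in [0, 1]
depth = np.linspace(0, 1, 16).reshape(4, 4)  # 0 = focused plane, 1 = far

mono = image.mean(axis=2, keepdims=True)     # greyscale version
weight = depth[..., None]                    # desaturation strength per pixel
result = (1 - weight) * image + weight * mono

# At depth 0 a pixel is untouched; at depth 1 it is fully monochrome.
```

Swap the monochrome blend for a warming curve, a darkening, or a painterly filter and the same three lines deliver any of the effects described above.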

Once a depth map exists there’s almost no limit to the manipulation possible. Samsung only scratches the surface by proposing it be used for the esoteric yet popular bokeh enhancement (a peculiarly Japanese obsession which ended up going viral and infecting the entire world of images). I can easily imagine a distance-mapped filter turning your background scene into a Monet or a van Gogh, while applying a portrait skin-smoothing process to your subjects.

Any camera with two lenses in stereo configuration should also, in theory, be able to focus using a completely different method to existing off-sensor AF – using the two lenses exactly like a rangefinder with two windows. So far this has not been implemented.

Way back – 40 years ago – I devised a rangefinder optical design in which nothing at all can be seen at the focus point unless the lens is correctly focused. It works well enough for a single spot: the image detail shows the usual double coincident effect when widely out of focus, blacks out when nearly in focus, and suddenly becomes visible only when focus is perfect. I had the idea of making a chequerboard pattern covering an entire image, so that the viewfinder would reveal the focused subject and blank out the rest of the scene, but a little work with pencil and paper quickly shows why it wouldn’t work like that. The subject plane would have integrity, but other planes would not all black out – they’d create an interestingly chaotic mess with phase-related black holes.

Samsung’s concept, in contrast, could isolate the subject entirely – almost as effectively as green screen techniques. It would be able to map the outline of a foreground subject like a newsreader by distance, instead of relying on the colour matte effect of green or blue screen technology. This would free film makers and TV studios from the constraints of chroma-keyed matting (not that you really want the newsreader wearing a green tie).

The sensitivity of the masking could be controlled by detecting the degree of matched image detail offset and its direction (the basic principle of stereographic 3D) – or perhaps more easily by detecting exactly coincident detail, in the focused plane. Photoshop’s snap-to for layers works by detecting a match and so do the stitching functions used for sweep and multi shot in-camera panorama assembly. Snap-to alignment of image data is a very mature function.
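The offset-detection idea can be sketched in a few lines – toy one-dimensional ‘views’ rather than camera data, with plain cross-correlation standing in for whatever matching the stitching firmware actually uses:

```python
import numpy as np

# Detail in the focused plane coincides between the two views, while
# other planes are shifted by an amount related to their distance.
# Cross-correlation recovers that shift, much as snap-to and panorama
# stitching routines align matching detail.

signal = np.zeros(50)
signal[20:25] = 1.0               # a patch of image detail in the left view

shift = 7
other = np.roll(signal, shift)    # same detail as seen by the second lens

corr = np.correlate(other, signal, mode="full")
best = corr.argmax() - (len(signal) - 1)   # recovered offset

print(best)  # prints 7; an offset of 0 would place the detail in the focused plane
```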

Just when you think digital photography has rung all the bells and blown all the whistles, the tones of an approaching calliope can be heard rolling down the river…

– David Kilpatrick


Pentax 645D Japan announced

Hasselblad has their Ferrari edition, Leica has the Ti and other special editions – and now Pentax, already making a veritable rainbow of K-r colours, are adorning their flagship 645D medium format system with a classical Japanese finish. The 645D is pretty much what happens when you approach medium format without preconceptions and with a lot of experience in the consumer digital market: combine a K-series DSLR body design with a medium-format digital sensor, and you get DSLR usability with medium-format images.

Produced to celebrate Pentax’s win of the 2011 Camera GP – with the 645D crowned “Camera of the Year” – the 645D Japan has a hand-finished lacquer body treatment and is available strictly to order for a two-month period; delivery could take four months.

The release also confirms support for the O-GPS1 module, though its star-tracking function will not be applicable to this fixed-sensor camera.

Official release after the break.
