Mapping the planes

Samsung has a patent, and a plan, for using two lenses with triangulation (image offset) depth detection between two images forming what is roughly a stereo pair. Here’s a link:

http://www.photographybay.com/2011/07/19/samsung-working-on-dslr-like-bokeh-for-compact-cameras/

Pentax also have a system on the new Q range which takes more than one exposure, changes the focus point between them, and uses the difference to build a focus map and create bokeh-like effects. Or so the pre-launch claims for this system indicate; the process itself is not described. It is almost certainly a rapid multishot method, and it could equally well involve blending a sharp image with a defocused one.
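
If it is the blending variant, it is easy to sketch. Here is a minimal Python illustration, assuming two grayscale float images of the same scene – one focused on the subject, one deliberately defocused. The whole pipeline is my guesswork, not Pentax’s published method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def local_sharpness(img, sigma=3.0):
    """Per-pixel sharpness: locally averaged absolute Laplacian."""
    return gaussian_filter(np.abs(laplace(img)), sigma)

def focus_map(sharp_shot, defocused_shot, sigma=3.0):
    """Crude focus map: near 1.0 where refocusing cost the most detail
    (the subject), near 0.0 where it changed little (the background)."""
    diff = local_sharpness(sharp_shot, sigma) - local_sharpness(defocused_shot, sigma)
    return np.clip(diff / (diff.max() + 1e-8), 0.0, 1.0)

def fake_bokeh(sharp_shot, defocused_shot):
    """Blend the two exposures through the focus map: the subject stays
    crisp, the background borrows the real optical blur of shot two."""
    m = focus_map(sharp_shot, defocused_shot)
    return m * sharp_shot + (1.0 - m) * defocused_shot
```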

In theory, the sweep panorama function of Sony and some other cameras could be used to do exactly the same thing – instead of creating a 3D 16:9 shot, it could create a depth-mapped focus effect in a single sweep. 3D is already possible with sweep pans by simply taking two frames from the multi-shot pan separated by a certain interval, so that the lens positions for the frames are far enough apart to be stereographic. 3D ‘moving’ pans (scrolling across the TV screen) amount to delaying the playback of the left-eye view and shifting the position of subject detail to register with the right. But like 16:9 pans, they are just two JPEGs.
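
The mechanics are almost embarrassingly simple to sketch. A toy Python illustration, assuming `frames` is the sequence of overlapping shots a sweep pan records, and that the camera moves a roughly constant distance between shots – both assumptions mine, not Sony’s documented behaviour:

```python
def stereo_pair(frames, index, baseline_frames=4):
    """Pick two shots from a sweep whose capture positions are separated
    by `baseline_frames` steps - far enough apart to act as a stereo pair."""
    left = frames[index]
    right = frames[index + baseline_frames]
    return left, right

# A 'moving' 3D pan is just the same pairing applied at every position -
# in effect one eye's stream delayed relative to the other:
# pairs = [stereo_pair(frames, i) for i in range(len(frames) - 4)]
```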

All these methods, including the Samsung concept, can do something else which is not yet common – they can alter any other parameter, not just focus blur. They could, for example, change the colour balance or saturation so that the focused subject stands out against a monochrome scene, or render the background darker or lighter than the focused plane, or warmer in tone or cooler. Blur is just a filter, in digital image terms. Think of all the filters available, from watercolour or scraperboard effects to noise reduction, sharpening, blurring, tone mapping and masking – digital camera makers have already shown that the processors in their tiny cameras can handle such things pretty well.
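
To make the point concrete, here is a short Python sketch of a single depth map driving three different treatments – blur, desaturation and darkening. The depth map, its 0-to-1 convention and the function names are all illustrative, not any manufacturer’s pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def background_mask(depth, threshold=0.2):
    # Everything clearly behind the focused plane (depth 0.0) counts
    # as background, easing in from the threshold.
    return np.clip((depth - threshold) / (1.0 - threshold), 0.0, 1.0)

def depth_blur(img, depth, max_sigma=8.0):
    # Crude fake bokeh: blend in a heavily blurred copy where the mask
    # says background. (A real pipeline would vary blur with depth.)
    blurred = gaussian_filter(img, sigma=(max_sigma, max_sigma, 0))
    m = background_mask(depth)[..., None]
    return (1 - m) * img + m * blurred

def depth_desaturate(img, depth):
    # Focused subject in colour, background fading to monochrome.
    grey = img.mean(axis=2, keepdims=True)
    m = background_mask(depth)[..., None]
    return (1 - m) * img + m * grey

def depth_darken(img, depth, amount=0.5):
    # Background rendered darker than the focused plane for emphasis.
    m = background_mask(depth)[..., None]
    return img * (1.0 - amount * m)
```

Note that the mask is computed once; every effect after that is simple per-pixel arithmetic, which is well within the reach of in-camera processors.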

Once a depth map exists there’s almost no limit to the manipulation possible. Samsung only scratches the surface by proposing it be used for the esoteric yet popular pursuit of bokeh enhancement (a peculiarly Japanese obsession which ended up going viral and infecting the entire world of images). I can easily imagine a distance-mapped filter turning your background scene into a Monet or a van Gogh, while applying a portrait skin-smoothing process to your subjects.

Any camera with two lenses in stereo configuration should also, in theory, be able to focus using a completely different method to existing off-sensor AF – using the two lenses exactly like a rangefinder with two windows. So far this has not been implemented.
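
Nothing stops it in principle – the geometry is ordinary stereo triangulation. A rough Python sketch; the focal length and baseline figures are invented purely for illustration:

```python
import numpy as np

def patch_offset(left, right, max_shift=40):
    """Find the horizontal shift (in pixels) that best aligns a small
    grayscale patch seen through one lens with the other lens's view -
    the digital equivalent of turning a rangefinder wheel."""
    best, best_err = 0, np.inf
    h, w = left.shape
    for d in range(max_shift):
        err = np.mean((left[:, d:] - right[:, :w - d]) ** 2)
        if err < best_err:
            best, best_err = d, err
    return best

def distance_from_disparity(d_pixels, focal_px=2000.0, baseline_m=0.04):
    """Classic rangefinder relation: distance = focal length x baseline
    / disparity. Constants here are hypothetical, for illustration."""
    return np.inf if d_pixels == 0 else focal_px * baseline_m / d_pixels
```

As with a mechanical rangefinder, accuracy falls away with distance, since a large change in subject distance eventually amounts to a fraction of a pixel of offset.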

Way back – 40 years ago – I devised a rangefinder optical design in which you could see nothing at all at the focus point unless the lens was correctly focused. It worked well enough for a single spot: the image detail showed the usual double coincident effect when widely out of focus, blacked out when nearly in focus, and suddenly became visible only when focus was perfect. I had the idea of making a chequerboard pattern covering an entire image, so that the viewfinder would reveal the focused subject and blank out the rest of the scene, but a little work with pencil and paper quickly showed why it wouldn’t work like that. The subject plane would have integrity, but the other planes would not all black out – they’d create an interestingly chaotic mess with phase-related black holes.

Samsung’s concept, in contrast, could isolate the subject entirely – almost as effectively as green screen techniques. It would be able to map the outline of a foreground subject like a newsreader by distance, instead of relying on the colour matte effect of green or blue screen technology. This would free film makers and TV studios from the restraints of chroma-keyed matting (not that you really want the newsreader wearing a green tie).
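
Pulling a matte by distance is straightforward once the map exists. A minimal Python sketch, assuming a per-pixel depth map in metres from the two-lens system; the thresholds and feathering are illustrative, not anything Samsung describes:

```python
import numpy as np

def depth_key(depth, near=0.5, far=2.5, feather=0.15):
    """Alpha matte pulled by distance instead of colour: 1.0 inside the
    near..far slab (the presenter), easing to 0.0 outside it so hair
    and edges get a soft rather than hard cut."""
    alpha = np.ones(depth.shape, dtype=float)
    alpha *= np.clip((depth - (near - feather)) / feather, 0.0, 1.0)
    alpha *= np.clip(((far + feather) - depth) / feather, 0.0, 1.0)
    return alpha

# Composite over any background, no green screen required:
# out = alpha[..., None] * foreground + (1 - alpha[..., None]) * background
```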

The sensitivity of the masking could be controlled by detecting the degree of matched image detail offset and its direction (the basic principle of stereographic 3D) – or perhaps more easily by detecting exactly coincident detail in the focused plane. Photoshop’s snap-to for layers works by detecting a match, and so do the stitching functions used for sweep and multi-shot in-camera panorama assembly. Snap-to alignment of image data is a very mature function.
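
The coincident-detail route is attractive because it needs no disparity search at all: in the focused plane the two views already agree at zero offset. A sketch using local normalised cross-correlation – the window size and the zero-disparity assumption are mine, not Photoshop’s or any camera maker’s:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coincidence_map(left, right, window=9):
    """Local normalised cross-correlation of the two grayscale views at
    zero shift: scores near 1.0 only where detail is exactly coincident,
    i.e. in the focused plane."""
    mu_l = uniform_filter(left, window)
    mu_r = uniform_filter(right, window)
    cov = uniform_filter(left * right, window) - mu_l * mu_r
    var_l = uniform_filter(left ** 2, window) - mu_l ** 2
    var_r = uniform_filter(right ** 2, window) - mu_r ** 2
    return cov / np.sqrt(np.clip(var_l * var_r, 1e-8, None))
```

Thresholding that map higher or lower is exactly the sensitivity control described above.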

Just when you think digital photography has rung all the bells and blown all the whistles, the tones of an approaching calliope can be heard rolling down the river…

– David Kilpatrick


Sony invent 3D movie HDTV camera

Sony Corporation today announced the development of a single lens 3D camera technology capable of recording natural and smooth 3D images of even fast-moving subject matter such as sports, at 240fps (frames per second).  This technology combines a newly developed optical system for single lens 3D camera which captures the left and right images simultaneously, together with existing high frame rate (HFR) recording technology to realize 240fps 3D filming.  Sony will demonstrate a prototype model incorporating this technology at “CEATEC JAPAN 2009”, to be held at Makuhari Messe convention center in Chiba city, Japan, from October 6th.
High Frame Rate Single Lens 3D Camera Technology (prototype)