In the late 1890s, a British film pioneer named William Friese-Greene filed a patent for a 3-D movie process. The process produced two offset images which, when viewed stereoscopically, were combined by the brain into a single picture with apparent depth. On June 10, 1915, Edwin S. Porter and William E. Waddell presented tests to an audience at the Astor Theater in New York City. In red-green anaglyph, the audience was shown three reels of tests, which included rural scenes, test shots of Marie Doro, a segment of John Mason playing a number of passages from Jim the Penman (a film released by Famous Players-Lasky that year, but not in 3-D), Oriental dancers, and a reel of footage of Niagara Falls. However, according to Adolph Zukor in his 1953 autobiography The Public Is Never Wrong: My 50 Years in the Motion Picture Industry, nothing was produced in this process after these tests.

The stereoscope was improved by Louis Jules Duboscq, and a famous picture of Queen Victoria was displayed at The Great Exhibition in 1851. In 1855 the Kinematoscope, a stereo animation camera, was invented. The first anaglyph movie (using the red-and-blue glasses invented by Louis Ducos du Hauron) was produced in 1915, and in 1922 the first 3D movie was shown to a public audience. Stereoscopic 3D television was demonstrated for the first time on August 10, 1928, by John Logie Baird at his company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using electro-mechanical and cathode-ray tube techniques. In 1935 the first 3D color movie was produced, and by the Second World War stereoscopic 3D still cameras for personal use were already fairly common.
In the 1950s, when TV became popular in the United States, many 3D movies were produced. The first, Bwana Devil from United Artists, was screened across the US in 1952. One year later, in 1953, came House of Wax, which also featured stereophonic sound. Alfred Hitchcock shot Dial M for Murder in 3D, but to maximize profits the movie was released in 2D, because not all cinemas were able to display 3D films. The Soviet Union also developed 3D films, with Robinzon Kruzo, its first full-length 3D movie, in 1946.
Much later, in 2009, television stations began airing 3D serials based on the same technology as 3D movies.

There are several techniques for producing and displaying 3D moving pictures. The basic requirement is to display offset images that are filtered separately to the left and right eye. Two strategies have been used to accomplish this: have the viewer wear eyeglasses that filter the separate offset images to each eye, or have the light source split the images directionally into the viewer's eyes (no glasses required). Display technologies for presenting stereoscopic image pairs to the viewer fall into two broad classes.
Single-view displays project only one stereo pair at a time. Multi-view displays either use head tracking to change the view depending on the viewing angle, or simultaneously project multiple independent views of a scene for multiple viewers (automultiscopic); such multiple views can be created on the fly using the 2D plus depth format.
Various other display techniques have been described, such as holography, volumetric display and the Pulfrich effect, which was used by Doctor Who for Dimensions in Time in 1993, by 3rd Rock From The Sun in 1997, and by the Discovery Channel's Shark Week in 2000, among others.
Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by the same distance as between a person's pupils. If we imagine projecting an object point in a scene along the line of sight (for each eye in turn) to a flat background screen, we can describe the location of its image with simple algebra. In rectangular coordinates, with the screen lying in the Y-Z plane (the Z axis upward and the Y axis to the right) and the viewer centered on the X axis, the Y screen coordinate is the sum of two terms, one accounting for perspective and the other for binocular shift, while the Z coordinate has only the perspective term. Perspective scales the Y and Z coordinates of the object point by a factor of D/(D-x), while binocular shift contributes an additional term (to the Y coordinate only) of s*x/(2*(D-x)), where D is the distance from the system origin to the viewer (right between the eyes), s is the eye separation (about 7 centimeters), and x is the true X coordinate of the object point. The binocular shift is positive for the left-eye view and negative for the right-eye view. For very distant object points, the eyes look along essentially the same line of sight; for very near objects, they may become excessively "cross-eyed". However, for scenes in the greater portion of the field of view, a realistic image is readily achieved by superposition of the left and right images (using the polarization method or the synchronized shutter-lens method), provided the viewer is not too near the screen and the left and right images are correctly positioned on the screen. Digital technology has largely eliminated the inaccurate superposition that was a common problem in the era of traditional stereoscopic films.
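To make the geometry concrete, here is a minimal Python sketch of the projection just described. The function name and the sample numbers (a viewer 3 m from the origin, 7 cm eye separation) are illustrative choices of ours, not from the source.

```python
def screen_coords(x, y, z, D=300.0, s=7.0):
    """Project an object point (x, y, z) onto the screen (the Y-Z plane
    at x = 0), once for each eye. D is the origin-to-viewer distance and
    s the eye separation, both in cm. Returns ((yL, zL), (yR, zR))."""
    persp = D / (D - x)            # perspective scaling factor
    shift = s * x / (2 * (D - x))  # binocular shift, Y coordinate only
    y_left,  z_left  = persp * y + shift, persp * z  # shift positive for left eye
    y_right, z_right = persp * y - shift, persp * z  # negative for right eye
    return (y_left, z_left), (y_right, z_right)

# An object point 100 cm behind the screen (x = -100):
left, right = screen_coords(-100.0, 20.0, 10.0)
print(left, right)  # the two views differ only in the Y coordinate
```

As the text notes, for distant points x/(D-x) tends toward -1 and the two views converge; the disparity between them grows as the object approaches the viewer.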
Multi-view capture uses arrays of many cameras to capture a 3D scene through multiple independent video streams. Plenoptic cameras, which capture the light field of a scene, can also be used to capture multiple views with a single main lens. Depending on the camera setup, the resulting views can either be displayed on multi-view displays or passed on for further image processing.
After capture, stereo or multi-view image data can be processed to extract 2D plus depth information for each view, effectively creating a device-independent representation of the original 3D scene. This data can be used to aid inter-view image compression or to generate stereoscopic pairs for multiple different view angles and screen sizes.
2D plus depth processing can even be used to recreate 3D scenes from a single view and to convert legacy film and video material to a 3D look, though a convincing effect is harder to achieve and the resulting image often resembles a set of flat cardboard cut-outs.
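As an illustration of how a stereo pair can be generated from 2D plus depth data, here is a simplified Python sketch of depth-image-based rendering; the function and parameter names are hypothetical. It shifts each pixel horizontally in proportion to its depth value, and real converters also inpaint the occlusion gaps this naive version leaves behind.

```python
import numpy as np

def depth_to_stereo(image, depth, max_disparity=16):
    """Synthesise a left/right pair from one view plus a depth map.
    image: (H, W, 3) array; depth: (H, W) floats in [0, 1], 1 = nearest.
    Each pixel is shifted horizontally by a disparity proportional to
    its depth. Occlusion filling is deliberately omitted here."""
    h, w = depth.shape
    disparity = (depth * max_disparity / 2).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for yy in range(h):
        for xx in range(w):
            d = disparity[yy, xx]
            if 0 <= xx + d < w:
                left[yy, xx + d] = image[yy, xx]   # near pixels move right
            if 0 <= xx - d < w:
                right[yy, xx - d] = image[yy, xx]  # and left, respectively
    return left, right
```

Because every pixel in a region of constant depth receives the same shift, objects keep their flat appearance, which is one reason single-view conversions tend toward the cardboard look described above.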


In the 1950s, there was a craze for 3D movies. They weren't well made by any means - most were sci-fi or horror B-movies - but they certainly made a splash.
Film producers were trying to get their audiences back into the cinemas and away from their new-fangled TVs, and movies like Creature From the Black Lagoon and It Came From Outer Space served that role well.
The earliest 3D movies (the first was a 1922 flick called The Power of Love) used a red/green anaglyph dual-strip system. Before discussing what this means, let's take a look at how we see in three dimensions and perceive depth.
Stereoscopic vision
Our eyes are roughly 2.5 to 3 inches apart. This separation means that the image each eye receives is slightly different. The light from distant objects reaches each eye along roughly parallel paths, whereas the light from nearby objects arrives at different angles (the nearer the object, the greater the difference between the angles). This is known as convergence.
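To put rough numbers on this, here is a short Python sketch (our own illustration, assuming a 6.5 cm eye separation) that computes the vergence angle - the angle between the two lines of sight - for objects at various distances.

```python
import math

def vergence_angle_deg(distance_cm, eye_sep_cm=6.5):
    """Angle between the two eyes' lines of sight to a point straight
    ahead, from the geometry of an isosceles triangle."""
    return math.degrees(2 * math.atan(eye_sep_cm / (2 * distance_cm)))

for d in (25, 100, 1000, 10000):  # from reading distance to 100 m
    print(f"{d:>6} cm: {vergence_angle_deg(d):6.3f} degrees")
```

The angle falls from roughly 15 degrees for a book held at reading distance to a few hundredths of a degree at 100 m, which is why light from distant objects is effectively parallel.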
The other process that's going on is focusing. When looking at distant objects, the lens in the eye is relaxed (or rather, the muscles that squeeze the lens are relaxed). The closer the object, the more work the lens has to do to keep it in focus. The brain uses all this effort, plus the image recorded by the light-sensitive cells in the eye (the rods and cones) to produce depth perception.
When we're out and about, walking around, we're unaware of the amount of work that's going on to stop us accidentally walking into doorframes or walls. The eyes are continually feeding information to the brain, which it interprets as 'this object is close, that one is further away'.
In essence, the convergence and focus points coincide for scenes viewed in the real world. When we look at a normal TV screen or a monitor, there is no depth perception - our eyes are simply focused on the screen, as if we're looking at a flat object (which, of course, we are). There's no convergence needed for the 2D image on the screen either - it's just flat.
So how do we turn it into something with depth? The early 3D movies made use of convergence (and ignored focus). If a camera recorded the same scene through two lenses positioned 3 inches or so apart onto two separate strips of film, the two films could then be played back in sync - one film for the viewer's left eye and the other for the right eye.
But how do we ensure that each eye only sees what it's supposed to?
Early techniques
The early answer was to play back the black-and-white film in two different colours on the same screen. The film for the left eye was shown in red and the film for the right eye in blue (or cyan, to be more precise).
With the naked eye you'd see the scene blurred between red and cyan, but through glasses with a red left lens and a cyan right lens you'd see something completely different. The red lens lets through only the red light and absorbs the cyan; the cyan lens lets through only the cyan light and absorbs the red.
Each eye would therefore only see the scene in the colour meant for it, so the left eye would see the left film and the right eye the right film. This system is known as the anaglyph technique, and is a passive system.
It works well for black-and-white movies, since there's no colour in the scene to be incorrectly absorbed and confuse the viewer. You soon forget about the colour cast.
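The digital equivalent is simple colour-channel selection. The sketch below (our own, assuming two same-sized 8-bit RGB arrays) takes the red channel from the left-eye image and the green and blue channels from the right-eye image, matching the red-left/cyan-right glasses described above.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Compose a red/cyan anaglyph: the red channel comes from the
    left-eye image, green and blue (together, cyan) from the right."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]  # channel 0 = red
    return anaglyph
```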

For an example of an anaglyph image, have a look at the image above while wearing a pair of red/cyan glasses (available cheaply on eBay). Because the light reaching the eyes obeys the 'distant objects send light in parallel, near objects at an angle' rule, the brain can perceive an illusion of depth through convergence.
However, the eye is only able to focus on the screen - there is nothing else there to focus on. A 3D movie will show things 'closer' and 'further away', but we can't focus on whatever we want to - we can only see in focus what the director wants us to concentrate on.
For shock value, this generally means objects that seem to come close to the viewer's face. This difference between the convergence and focus points in 3D movies means that you're likely to experience eye strain and headaches if you watch something in 3D for too long, because your eyes are trying to do a lot of work that isn't necessary.
Polarised light
Moving back to 3D movies, the next big invention was the use of polarised light. Polarised light vibrates in a single plane, whereas the light waves in normal sunlight, for example, oscillate in many planes - some horizontal, some vertical, most in between.
The lenses in polarised glasses only let through light in a single plane, which is a handy way of reducing the amount of light that reaches your eyes in bright sunlight.
This time, the projectors display the left and right image streams using polarised light (each projector essentially has a big polarising filter in front of it), with the left images shown in horizontally polarised light and the right in vertically polarised light. The viewer wears glasses whose left lens passes only horizontally polarised light and whose right lens passes only vertically polarised light. Each lens therefore only lets through the light with the correct polarisation for that eye.
Providing the viewers keep their heads vertical, they'll see a 3D effect because each of their eyes sees a different set of images. Again, it's all about convergence rather than focus, so the same drawbacks (eye strain and headaches) can appear. However, this time there's no colour cast to the movie.
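The head-tilt requirement follows from Malus's law: a linear polariser transmits a fraction of the light equal to the squared cosine of the angle between the light's plane and the filter's axis. Here is a short Python sketch (our own illustration) of how much of the wrong eye's image leaks through as the head tilts away from vertical.

```python
import math

def crosstalk_fraction(tilt_degrees):
    """Fraction of the 'wrong' eye's light leaking through a linear
    polariser when the head is tilted. With no tilt the filters are
    fully crossed (90 degrees apart), so leakage is zero."""
    theta = math.radians(90 - tilt_degrees)
    return math.cos(theta) ** 2  # Malus's law

for tilt in (0, 5, 15, 30, 45):
    print(f"{tilt:>2} deg tilt: {crosstalk_fraction(tilt)*100:5.1f}% leakage")
```

Even a 15-degree tilt lets several percent of the wrong image through, producing visible ghosting; this sensitivity is why later cinema systems moved to circular polarisation, which tolerates head tilt far better.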
Polarised 3D projection actually dates back to demonstrations in the 1930s, but it took off in the early to mid 1950s, when it quickly supplanted the old-fashioned anaglyph (two-colour) system; anaglyph has since been relegated mainly to static images rather than films.