How Different 3D Technologies Work


Contents

  • CHAPTER 1: INTRODUCTION
  • CHAPTER 2: HUMAN VISION
  • CHAPTER 3: Early Stereoscopic History (1838 – 1920)
  • Chapter 4: Early 3D Feature Films
  • 4.1 The first 3D feature film
  • 4.2 The first active-shutter 3D film
  • 4.3 The first polarised 3D film
  • Chapter 5: ‘Golden Age’ of 3D
  • Chapter 6: Occasional 3D films
  • Chapter 7: The Second ‘Golden Age’
  • CHAPTER 8: 3D TECHNOLOGIES
  • 8.1 – 3D capture and display methods
  • 8.2 – Ghosting & light efficiency
  • 8.3 – Colour separation – anaglyph
  • 8.4 – Colour Separation – Dolby 3D
  • 8.5 – Active shutter
  • 8.6 – Polarisation
  • 8.7 – Polarisation – RealD & MasterImage
  • 8.8 – IMAX 3D
  • 8.9 – Autostereoscopic displays
  • 8.10 Comparison of technologies
  • Chapter 9: 3D Cinematography
  • 9.1 Interaxial distance
  • 9.2 Convergence
  • 9.3 Stereoscopic window
  • 9.4 Depth budget
  • 9.5 Matching size of screen
  • Chapter 10: Creating 3D content
  • 10.1 Computer Generated Images (CGI)
  • 10.2 Dual camera filmmaking
  • 10.3 2D-to-3D conversion
  • Chapter 11: 3D in the home
  • 11.1 Displays
  • 11.2 Blu-ray 3D
  • 11.3 3DTV
  • Chapter 12: Is 3D bad for you?
  • Chapter 13: Implications of 3D – conclusions and recommendations
  • 13.1 Filmmakers
  • 13.2 Cinemas
  • 13.3 Health & safety
  • 13.4 Studios
  • 13.5 3D combating piracy
  • 13.6 The public
  • 13.7 Overview
  • GLOSSARY

CHAPTER 1: INTRODUCTION

This report will focus on how different 3D technologies work. It will cover the entire workflow: recording the action, encoding the footage, playing back the media via a cinema projector or television and, finally, how the audience views the 3D film or video, whether through specially designed glasses or an autostereoscopic television.

At present the most popular way to view 3D media is with specialised glasses, the main types being active shutter glasses, passive polarised glasses and colour-separation-based glasses.

Wearing glasses to watch a film is often mentioned as a negative aspect of 3D. There is a technology, called autostereoscopy, that allows 3D to be watched on screens without any additional glasses; this will also be examined.

The health impacts that result from watching 3D will also be examined, along with factors that will prevent a person from being able to correctly view 3D images.

If 3D films become the norm there will be impacts on the entire industry, from studios and cinemas to smaller production companies and independent producers, and these will be examined.

A good place to start this report is to examine how two of the highest-profile media companies are currently viewing 3D technology.

Phil McNally, stereoscopic supervisor at Disney-3D and DreamWorks, was quoted as saying,

‘…consider that all technical progress in the cinema industry brought us closer to the ultimate entertainment experience: the dream. We dream in colour, with sound, in an incoherent world with no time reference. The cinema offers us a chance to dream awake for an hour. And because we dream in 3D, we ultimately want the cinema to be a 3D experience not a flat one.'(Mendiburu, 2009)

In the BBC Research White Paper: The Challenges of Three-Dimensional Television, 3D technology is referred to as

‘…a continuing long-term evolution of television standards towards a means of recording, transmitting and displaying images that are indistinguishable from reality'(Armstrong, Salmon, & Jolly, 2009)

It is clear from both of these high-profile sources that the industry is taking the evolution of 3D very seriously. As a result, this is a topic that is not only very interesting but will also be at the cutting edge of technological advances for the next few years.

This report will cover the following:

  • What the term 3D means with reference to film and video
  • A look at the history of 3D in film
  • How 3D technology works
  • The implications of 3D for the film business and for cinemas
  • The methods used to create the media, and the ways in which the 3D image is recreated for the viewer

The reason I have chosen this topic for my project is that I am very interested in the new media field, and 3D video, when coupled with high definition film and video, is a field that is growing rapidly. On 2 April 2009 Sky broadcast the UK’s first live event in the 3D TV format: a live concert by the pop group Keane, sent via the company’s satellite network using polarisation technology.

Traditionally we view films and television in two dimensions; in essence we view the media as a flat image. In real life we see everything in three dimensions, because each eye receives a slightly different image; the brain combines these, allowing us to judge depth and build a 3D picture of the world. (This is explained further in Chapter 2.)

There is a high level of industrial relevance to this topic, as 3D technology coupled with high definition digital delivery is at the cutting edge of mainstream digital media consumption. Further evidence of this is that the sports broadcaster ESPN will be launching a new channel, ESPN 3D, in North America in time for this year’s football World Cup.

In January 2009 the BBC produced a Research White Paper on this subject entitled The Challenges of Three-Dimensional Television. Over the next couple of years they predict that 3D will start to be introduced in the same way that HD (High Definition) digital television is currently being phased in, with pay-per-view movies and sports being the first to take advantage of it.

Sky have announced that their existing Sky+HD boxes will be able to receive the 3D signals, so customers will not even need to update their equipment to receive the 3D channel that Sky will start broadcasting later this year.

On Sunday 31 January 2010, Sky broadcast a live Premier League football match between Arsenal and Manchester United in 3D for the first time, to selected pubs across the country, which Sky equipped with LG’s new 47-inch LD920 3D TVs. These televisions use passive glasses, similar to the ones used in cinemas, as opposed to the more expensive active glasses, which are also an option. (The differences between active and passive technologies are explained in Chapter 8.)

It is also worth noting that at the 2010 Golden Globe awards, on accepting his award for ‘Best Picture’ for the 3D box office hit Avatar, the Canadian director James Cameron declared 3D to be ‘the future’.

At the time of writing this report (27/01/2010), the 3D film Avatar has just overtaken Titanic (also a James Cameron film) to become the highest grossing movie of all time, with worldwide takings of $1.859 billion. This is being credited to the film’s outstanding takings in 3D: in America, 80% of the film’s box office revenue has come from the 3D version of its release.

In an industry where ‘money talks’, these figures will surely lead to a dramatic increase in the production of 3D films, and as a result Avatar could prove to be one of the most influential films of all time.

After completing this dissertation I hope to have a wide knowledge base on the subject, which will hopefully appeal to companies that I approach about employment once I have graduated.

In the summer of 2010, when I will be looking for jobs, I believe that many production companies will have some knowledge of 3D technology and will be aware that in the near future it may be something they have to consider adopting, in the way that many are already adopting, or soon will adopt, HD into their workflow.

In order to ensure that I complete this project to a high standard it is important that I gain a complete understanding of the topic and study a variety of different sources when compiling my research.

3D media itself is not a new concept, so there is a wide range of books and articles on the theory of 3D and stereoscopy, along with anaglyphs.

However, in recent years there has been a resurgence of 3D in relation to film and TV. This is due mainly to digital video and film production making it easier and cheaper to create and manage the two channels needed for three-dimensional video production.

It has proved more difficult to study books and papers on this most recent resurgence of 3D because it is still happening and evolving all the time. I have read various research white papers on the subject, which are cited in the Bibliography, and I have also used websites and blogs along with some recently published books; one of the problems with such a fast-moving technological field, though, is that books quickly become outdated.

CHAPTER 2: HUMAN VISION

In the real world we see in three dimensions, as opposed to the two dimensions we have become accustomed to when watching TV or a cinema screen. Human vision works in three dimensions because we normally have two eyes that both focus on an object; in the brain the two images are fused into one, and from the differences between them we can work out depth. This process is called stereopsis. All of these calculations happen in the brain without us ever noticing, so we see the world in three dimensions completely naturally.

The reason we see in 3D is stereoscopic depth perception. Various complex calculations go on in our brains, and this, coupled with real-world experience, allows the brain to judge depth. Without it, it would be impossible to tell whether something was very small or just very far away.

As humans, we have learnt to judge depth even from a single viewpoint. This is why a person with only one eye can still manage most things that a person with two eyes can do, and it is also why, when watching a 2D film, you can still get a good sense of depth.

The term for depth cues based on only one viewpoint is monoscopic depth cues.

One of the most important of these is our own experience, which relates to perspective and the relative size of objects. In simple terms, we have become accustomed to objects being certain sizes: we expect buildings to be very big, humans smaller and insects smaller still. So if we can see all three of these next to each other and they appear to be the same size, then the insect must be much closer than the person, and both the insect and the person must be much closer than the building (see figure 1).

The perspective depth cue (shown in figure 1) was backed up by an experiment carried out by Ittelson in 1951. He had volunteers look through a peephole at playing cards; the cards were the only thing they could see, so no other depth cues were available. ‘There were actually three different-sized playing cards (normal size, half-size, and double size), and they were presented one at a time at a distance of 2.3 metres away. The half-sized playing card was judged to be 4.6 metres away from the observer, whereas the double-sized card was thought to be 1.3 metres away. Thus, familiar size had a large effect on distance judgement’ (Eysenck, 2002).
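
As a rough worked example of Ittelson’s numbers (a sketch only; the simple familiar-size formula below is an approximation for illustration, not taken from the experiment write-up):

    # Rough sketch of the familiar-size depth cue from Ittelson's experiment.
    # Assumption: perceived distance ~ actual distance * (familiar size / presented size),
    # because a smaller-than-expected retinal image is read as "further away".

    ACTUAL_DISTANCE_M = 2.3  # all cards were really presented at 2.3 metres

    def perceived_distance(size_ratio: float) -> float:
        """size_ratio = presented card size / normal card size."""
        return ACTUAL_DISTANCE_M / size_ratio

    print(perceived_distance(0.5))  # half-size card   -> 4.6 m  (observers judged 4.6 m)
    print(perceived_distance(2.0))  # double-size card -> 1.15 m (observers judged 1.3 m)
    print(perceived_distance(1.0))  # normal card      -> 2.3 m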

Another very effective monoscopic depth cue is referred to as occlusion or interposition. This is where one object overlaps another. If a person is standing behind a tree then you will be able to see all of the tree but only part of the person, which tells us that the tree is nearer to us than the person.

One of the most important single-view depth cues is called motion parallax. It works on the basis that if a person moves their head, and therefore their eyes, then nearer objects, whilst not physically moving, will appear to move more than objects in the distance. This is the method astronomers use to measure the distances of stars and planets. It is an extremely important method of judging depth and is used extensively in 3D filmmaking.

In filmmaking, lighting is often described as one of the key elements in giving the picture ‘depth’, and this is because it is a monoscopic depth cue. In real life the main light source has, for millennia, been the sun, and humans have learnt to judge depth from the shadows cast by an object. In 2D films shadows are often used to convey depth: casting them across actors’ faces allows viewers to see the recesses and expressions being portrayed.

So far all of the methods described for determining depth have been monoscopic; they work independently within each eye. If these were the only methods for determining depth there would be no need for 3D films, as they would add nothing that could not be recreated with a single camera lens. This is not the case, however: many of the more advanced methods human vision uses for judging depth require both eyes, and these are called stereoscopic depth cues.

Many stereoscopic depth cues are based on the feedback the brain gets when the eye muscles are used to concentrate vision on a particular point.

One of the main stereoscopic depth cues is called convergence; this refers to the way the eyes rotate in order to fixate on an object (see figure 2).

If the focus is on a near object, the eyes rotate inwards and the angle of convergence between the two lines of sight is large; if the focus is on a distant object the lines of sight are nearly parallel and the convergence angle is small.
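
The convergence angle can be estimated with basic trigonometry. The sketch below assumes an eye separation of roughly 6.5 cm (about 2.5 inches, the figure used for camera spacing later in this report):

    import math

    # Convergence angle for two eyes fixating a point straight ahead.
    # Assumes an interocular separation of 6.5 cm; the angle shrinks rapidly
    # as the fixated object moves further away.

    EYE_SEPARATION_M = 0.065

    def convergence_angle_deg(distance_m: float) -> float:
        # Each eye rotates by atan((separation / 2) / distance); the total angle is twice that.
        return math.degrees(2 * math.atan((EYE_SEPARATION_M / 2) / distance_m))

    for d in (0.25, 1.0, 10.0, 100.0):
        print(f"object at {d:6.2f} m -> convergence angle {convergence_angle_deg(d):5.2f} degrees")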

It is far less strain on the eye muscles to look at objects far away, where the eyes are almost parallel; by comparison, looking at very close objects for any length of time causes the eye muscles to ache. This is a very important factor to consider when creating 3D films, because no matter how good the film is, if it hurts the audience it will not go down well.

A second stereoscopic depth cue is called accommodation. This is the way our eyes change focus when we look at objects at different distances, and it is very closely linked with convergence.

Usually when we look at an object very close up, our eyes rotate to point towards the object (convergence) and at the same time change focus (accommodation). Using the ciliary body muscles in the eye, the lens changes shape to alter its focal length, in much the same way that a camera lens is refocused.

In everyday life convergence and accommodation usually happen together. The fact that we can, if we wish, converge our eyes without changing focus is what makes 3D films possible. When you are sat in a cinema all of the action is projected onto the screen in front of you, so this is where your eyes need to focus. With 2D films the screen is also where your eyes converge, but with 3D films this is not the case. When watching a 3D film the focus never moves from the screen, otherwise the whole picture would blur, but objects appear to be in front of and behind the screen, so your eyes must change their convergence to look at these objects without altering their focus from the screen.

It has been suggested that this decoupling of accommodation and convergence is the reason for eye strain when watching a 3D picture, as your eyes are doing something they are not in the habit of doing (see Chapter 12: Is 3D bad for you?).

It is also worth noting that while our monoscopic depth cues work at almost any range, this is not the case with stereoscopic depth cues. As objects become further away they no longer appear noticeably different in each eye, so the brain has no difference from which to calculate depth.

‘The limit occurs in the 100 to 200-yard range, as our discernment asymptotically tends to zero. In a theatre, we will hit the same limitation, and this will define the "depth resolution" and the "depth range" of the screen’. (Mendiburu, 2009)

This means that when producing a 3D film you have to be aware that the usable depth range is not infinite; it is limited to roughly 100-200 yards.

CHAPTER 3: Early Stereoscopic History (1838 – 1920)

Three-dimensional films are not a new phenomenon: ‘Charles Wheatstone discovered, in 1838, that the mechanism responsible for human depth perception is the distance separating the retinas of our eyes.’ (Autodesk, 2008)

In a 12,000-word research paper presented to the Royal Society of Great Britain, Wheatstone described ‘the stereoscope and claimed as a new fact in his theory of vision the observation that two different pictures are projected on the retinas of the eyes when a single object is seen’ (Zone, 2007).

Included in the paper were a range of line drawings presented as stereoscopic pairs; these were designed to be viewed in 3D using Wheatstone’s invention, the stereoscope.

Wheatstone was not the first person to look at the possibility of receiving separate views in each eye, ‘In the third century B.C, Euclid in his treatise on Optics observed that the left and right eyes see slightly different views of a sphere'(Zone, 2007). However, Wheatstone was the first person to create a device to be able to re-create 3D images.

Between 1835 and 1839 photography was being developed thanks to the work of William Fox Talbot, Nicephore Niepce and Louis Daguerre.

Once Wheatstone became aware of the photographic pictures that were available he requested some stereoscopic photographs to be made for him. Wheatstone observed that ‘it has been found advantageous to employ, simultaneously, two cameras fixed at the proper angular positions'(Zone, 2007).

This was the start of stereoscopic photography.

Between 1850 and 1860 various people began work on combining stereoscopic photography with machines that would display a series of images in rapid succession, using persistence of vision to create a moving 3D image. These were the first glimpses of 3D motion.

In 1891 a French scientist, Louis Ducos du Hauron, patented the anaglyph: a method of encoding the left and right images in two different colours. Glasses with matching colour filters over opposite eyes then block one image from each eye, so the viewer perceives a single image, but in 3D.

Another method used at this time to create 3D was proposed by John Anderton, also in 1891. Anderton’s system was to use polarisation techniques to split the image into two separate light paths and then employ a similar polarisation technique to divert a separate image to each eye on viewing.

One of the main advantages of polarisation over anaglyphs is that no colour information is lost, because both images retain the original colour spectrum. They do, however, lose luminance. A silver screen is commonly required, and it serves two purposes: the specially designed screen preserves the separate polarisation of each image, and it also reflects more light than a conventional screen, compensating for the loss of luminance.

During 1896 and 1897 2D motion pictures started to take off, and by 1910, after a lot of initial experimentation, the creative conventions of film that we recognise today, such as cuts and framing, had started to become evident.

In 1920 Jenkins, an inventor who worked hard to create a method of presenting stereoscopic motion pictures, was quoted as saying, ‘Stereoscopic motion pictures have been the subject of considerable thought and have been attained in several ways…but never yet have they been accomplished in a practical way. By practical, I mean, for example without some device to wear over the eyes of the observer.’ (Zone, 2007)

It is worth noting that this problem of finding a ‘practical’ method of viewing 3D has still to a large extent not been solved.

Chapter 4: Early 3D Feature Films

(1922 – 1950)

4.1 The first 3D feature film

The first 3D feature film, The Power of Love, was released in 1922 and exhibited at the Ambassador Hotel Theatre in Los Angeles. ‘Popular Mechanics magazine described how the characters in the film "did not appear flat on the screen, but seemed to be moving about in locations which had depth exactly like the real spots where the pictures were taken"’ (Zone, 2007).

The Power of Love was exhibited with red/green glasses, using a dual-strip anaglyph method of 3D projection. (Anaglyphs are explained in chapter 8.3.)

The film was shot on a custom-made camera invented by Harry K. Fairall, who was also the film’s director. ‘The camera incorporated two films in one camera body’. (Symmes, 2006)

The Power of Love was the first film to be viewed with anaglyph glasses, and also the first to use dual-strip projection.

Also in 1922, William Van Doren Kelley designed his own camera rig, based on the Prizma colour system which he had invented in 1913. The Prizma 3D colour method worked by capturing two different colour channels through filters placed over the lenses; in this way he made his own version of the red/blue anaglyphic print. Kelley’s ‘Movies of the Future’ was shown at the Rivoli Theatre in New York City.

4.2 The first active-shutter 3D film

A year later, in 1923, the first alternate-frame 3D projection system was unveiled. It used a technology called ‘Teleview’, which blocked the left and right eyes alternately in sync with the projector, thereby allowing the viewer to see two separate images.

Teleview was not an original idea, but up to this point no one had been able to get the theory to actually work in a practical way that would allow for films to be viewed in a cinema. This is where Laurens Hammond comes in.

Hammond designed a system in which two standard projectors were hooked up to their own AC generators running at 60Hz; adjusting the AC frequency would increase or decrease the speed of the projectors.

‘The left film was in the left projector and right film in the right. The projectors were in frame sync, but the shutters were out of phase sync.'(Symmes, 2006) This meant that the left image was shown, then the right image.

The viewing device was attached to the seats in the theatre. ‘It was mounted on a flexible neck, similar to some adjustable "gooseneck" desk lamps. You twisted it around and centred it in front of your face, kind of like a mask floating just in front of your face.’ (Symmes, 2006)

The viewing device consisted of a circular mask with a view piece for each eye plus a small motor that moved a shutter across in front of either the left or right eye piece depending on the cycle of current running through it. All of the viewing devices were powered by the same AC generator as the projectors meaning that they were all exactly in sync.

One of the major problems Hammond had to overcome was that, at the time, film was displayed at 16 frames per second. With this method of viewing the frame rate is effectively halved, and 8 frames per second resulted in a very noticeable flicker.

To overcome this, Hammond cut each frame up into three flashes, so the new ‘sequence was: 1L-1R-1L-1R-1L-1R 2L-2R-2L-2R-2L-2R and so on. Three alternate flashes per eye on the screen.’ (Symmes, 2006)

This method of separating and repeating frames effectively increased the overall flash rate, thereby eradicating the flicker.

Only one film was produced using this method. It was called M.A.R.S and was displayed at the Selwyn Theatre in New York City in December 1922. The reason the technology didn’t catch on was not the image itself; the underlying theory has changed very little from the Teleview method to the current active-shutter methods, which will be explained later.

As with a lot of 3D methods, the reason this one did not become mainstream was the viewing apparatus that was required. Although existing projectors could be modified simply by linking them to a separate AC generator, the required headsets needed a great deal of investment and time to install. Every seat in the theatre had to be fitted with a headset, adjusted in front of the audience member, and each headset had to be wired into the seat and connected to the AC generator so that it stayed perfectly in sync.

These problems have since been overcome with wireless technologies such as Bluetooth as will be explained later.

4.3 The first polarised 3D film

The next, and arguably one of the most important, advancements in 3D technology came in 1929, when Edwin H. Land worked out a way of using polarised filters (Polaroid) together with images to create stereo vision. (Find more on polarisation in chapter 8.6.)

‘Land’s polarizing material was first used for projection of still stereoscopic images at the behest of Clarence Kennedy, an art history instructor at Smith College who wanted to project photo images of sculptures in stereo to his students’. (Zone, 2007)

In 1936 Beggar’s Wedding was released in Italy; it was the first stereoscopic feature to include sound and was exhibited using Polaroid filters.

The first American film to use polarising filters was shot in 1939 and entitled In Tune With Tomorrow; it was a 15-minute short which shows ‘through stop motion, a car being built piece-by-piece in 3D with the added enhancement of music and sound effects’. (Internet Movie Database, 2005)

Between 1939 and 1952 3D films continued to be made, but with the Great Depression and the onset of the Second World War the cinema industry’s output was restricted by finances, and because 3D films were more expensive to make their numbers fell.

Chapter 5: ‘Golden Age’ of 3D

(1952 – 1955)

‘With cinema ticket sales plummeting from 90 million in 1948 to 40 million in 1951’ (Sung, 2009), largely put down to the television becoming common in people’s front rooms, the cinema industry needed to find a way to bring viewers back to the big screen, and 3D was seen as a way of offering something extra to make them return.

In 1952 the first colour 3D film, Bwana Devil, was released; it was the first of many stereoscopic films to follow over the next few years. The combination of 3D and colour attracted a new audience to 3D films.

Between 1950 and 1955 there were far more 3D films produced than at any other time before or since, with the possible exception of the years from 2009 onwards, as the cinema industry tries to fight back once again against falling figures, this time caused by home entertainment systems, video-on-demand, and legal and illegal movie downloads.

Towards the end of the ‘Golden Age’, around 1955, the fascination with 3D began to fade. There were a number of reasons for this, one of the main ones being that in order for a film to be seen in 3D it had to be shown from two reels at the same time; the two reels had to be exactly in sync or the effect would be lost and the audience would get headaches.

Chapter 6: Occasional 3D films

(1960 – 2000)

Between 1960 and 2000 there were sporadic resurgences in 3D. These were down to new technologies becoming available.

In the late 1960s the invention of a single-strip 3D format initiated a revival, as it meant that dual projectors could no longer drift out of sync and cause eye strain. The first version of this single-strip format to be used was called Space-Vision 3D, and it worked on an ‘over and under’ basis: the frame was split horizontally into two, and during playback the two halves were separated using a prism and polarised glasses.

However, there were major drawbacks with Space-Vision 3D. Due to the design of the cameras required to film in this format, the only major lens that was compatible was the Bernier lens. ‘The focal length of the Bernier optic is fixed at 35mm and the interaxial at 65mm. Neither may be varied, but convergence may be altered’ (Lipton, 1982). This obviously restricted the creative filmmaking options, and as a result it was soon superseded by a new format called Stereovision.

Stereovision was similar to Space-Vision 3D in that it split the frame in two; unlike Space-Vision, though, the frame was split vertically and the two halves were placed side by side. During projection these frames were put through an anamorphic lens, stretching them back to their original size. Stereovision also made use of the polarising method introduced by Land in 1929.

A film made using this process, The Stewardesses, was released in 1969; it cost only $100,000 to make but grossed $26,000,000 at the cinema (Lipton, 1982). Understandably the studios were very interested in this profit margin, and as a result 3D once again became an attractive prospect for them.

Until fairly recently films were still shot and edited using traditional (i.e. non-digital) film techniques. This made manipulating 3D films quite difficult, and the lack of control over the full process made 3D less appealing to filmmakers.

‘The digitisation of post-processing and visual effects gave us another surge in the 1990’s. But only full digitisation, from glass to glass – from the camera’s to projector lenses – gives 3D the technological biotope it needs to thrive’ (Mendiburu, 2009).

Chapter 7: The Second ‘Golden Age’

of 3D (2004 – present)

In 2003 James Cameron released Ghosts of the Abyss, the first full-length 3D feature film to use the Reality Camera System, which was specially designed around new high definition digital cameras. These digital cameras meant that the old techniques used with 3D film no longer restricted the workflow, and the whole process could be done digitally from start to finish.

The next groundbreaking film was Robert Zemeckis’s 2004 animated film The Polar Express, which was also shown in IMAX theatres. It was released simultaneously in 2D and 3D, and the 3D cinemas took on average 14 times more money than the 2D cinemas.

The cinemas once again took note, and since The Polar Express was released in 2004, digital 3D films have become more and more prominent.

IMAX are no longer the only cinemas capable of displaying digital 3D films. A large proportion of conventional cinemas have made the switch to digital, and this switch has enabled 3D films to be exhibited in a wide range of venues.

CHAPTER 8: 3D TECHNOLOGIES

8.1 – 3D capture and display methods

Each type of stereoscopic display projects the combined left and right images together onto a flat surface, usually a television or cinema screen. The viewer must then have a way of decoding this combined image, separating it into left and right images and relaying each to the correct eye. In the majority of cases the device used to split the image is a pair of glasses.

There are two broad categories of method: passive and active. With passive methods the two images are combined into one and the glasses split this back into separate images for the left and right eye; the glasses are cheap to produce and the expense usually lies in the equipment used to project the image. The second category is active display. This works by sending the alternating images in very quick succession (L-R-L-R-L-R), with the glasses periodically blocking the appropriate eyepiece; this is done at such a fast rate that the picture appears continuous in both eyes.

There are various different types of encoding encapsulated within each of the two methods mentioned above.

The encoding can use colour separation (anaglyph, Dolby 3D), time separation (active glasses) or polarisation (RealD). A separate approach, autostereoscopy, requires no glasses at all: the display itself directs a different image to each eye.

In cinemas across the world there are currently several formats used to display 3D films; three of the main providers are RealD, IMAX and Dolby 3D.

Once a 3D film has been finished by the studio it needs to be prepared for exhibition in various different formats; this can include, amongst other things, colour grading and anti-ghosting processes.

At present there is no universally agreed format for capturing or playing back 3D films; as a result there are several different versions, which are explained below.

The large majority of the latest wave of 3D technologies send the image using a single projector, removing the old problem of out-of-sync left and right images. The methods that do use dual projectors are much more sophisticated than the older versions used for anaglyphic films, so they too have eradicated the old synchronisation problems.

8.2 – Ghosting & light efficiency

When two channels of images (left and right) are blended into one frame, whether using passive or active systems, errors occur that have to be managed. Most of the systems examined below tend to be good at one thing or the other and have incorporated methods to counter the problems that arise.

The two main issues that arise from blending frames together (passive) and showing alternating frames (active) are ghosting and light efficiency.

Ghosting refers to the leakage of images between eyes. ‘No 3D projection system perfectly isolates the left and right images. There is always some leaking from one eye to the other’ (Mendiburu, 2009).

On most systems this leakage is minimal and not a problem. However, when it rises above a couple of percent the images appear to ghost or blur, and it is especially noticeable on high-contrast images.

The RealD polarisation method is the most affected by ghosting, due to the way it splits the images. RealD have incorporated a method of reducing this problem, a solution called ‘ghost-busting’.

The ‘ghost-busting’ process works by calculating the pattern of light expected to leak between the eyes; this is then subtracted from the original image. The drawback is that it reduces the overall dynamic range of the image by the amount subtracted.
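
A minimal sketch of the idea behind ghost-busting follows; the constant leakage fraction and the NumPy representation are assumptions for illustration, not RealD’s actual processing:

    import numpy as np

    # Illustrative ghost-busting: pre-subtract the light expected to leak in
    # from the other eye, so the leak added back by the projection system
    # roughly cancels out. Values are normalised 0.0-1.0 image intensities.

    LEAK = 0.04  # assumed crosstalk fraction (about 4% leaking between eyes)

    def ghost_bust(this_eye: np.ndarray, other_eye: np.ndarray) -> np.ndarray:
        # Subtracting can push dark pixels below zero, which is why the
        # technique costs some dynamic range: those values must be clipped.
        return np.clip(this_eye - LEAK * other_eye, 0.0, 1.0)

    left = np.random.rand(4, 4)
    right = np.random.rand(4, 4)
    left_out, right_out = ghost_bust(left, right), ghost_bust(right, left)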

Colour separation methods such as Dolby-3D and anaglyph both also suffer from ghosting but to a lesser extent.

Poor light efficiency is another major flaw that has had to be overcome with all of the 3D display methods.

Colour separation methods suffer because by their very nature they have to filter certain colour ranges that enter each eye in order to create two images.

In the case of active shutter displays the light levels are diminished even more. Each eye is periodically blocked, which cuts the light by 50%, and the dark time between frames means that the overall light level is approximately 20% of the original.
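
As a rough back-of-the-envelope illustration of how an active-shutter system ends up at around 20% light efficiency (only the 50% and 20% figures come from the text above; the individual factors are assumptions):

    # Illustrative light-efficiency estimate for an active-shutter system.
    # Each eye is blocked half the time (50%), and further losses come from
    # the dark time between frames and the transmission of the LC lenses.

    duty_cycle = 0.5          # each eye sees the screen only half the time
    dark_time_factor = 0.8    # assumed fraction of each cycle the image is actually lit
    lens_transmission = 0.5   # assumed transmission of the LC shutter when "open"

    efficiency = duty_cycle * dark_time_factor * lens_transmission
    print(f"approximate light reaching each eye: {efficiency:.0%}")  # ~20%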

In polarised systems the eyepieces filter out light of the wrong polarisation, which has a similar effect of reducing the light levels.

One solution to the low light problem is the installation of silver screens in cinemas; these reflect more light than standard screens, thereby increasing the light levels.

8.3 – Colour separation – anaglyph

Anaglyphs are an example of passive 3D because the method works by combining the two images into one, then relying on the glasses to separate the signal into two channels.

Anaglyphs are one of the oldest methods of displaying 3D images, and the glasses are the cheapest type to mass produce. The fact that they cost so little to manufacture is the reason that, when 3D is mentioned, most people think of these red and blue glasses.

The anaglyph, proposed by D’Almeida (in one form at least) in 1858, used complementary-coloured filters over the left and right lenses to superimpose both images on a screen. Viewing devices with red and green lenses separated the images and selected the appropriate view for each eye (Lipton, 1982).

One of the problems with separating the colour channels in this way to portray 3D is that it reduces the overall luminance of the image. In addition, each eye sees only part of the colour spectrum, so it is not a full representation of the original image.
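
As an illustration of how simple the anaglyph encoding is, the sketch below builds a red/cyan anaglyph from left and right images; it is a generic example, not the exact process used for the early red/green films:

    import numpy as np

    def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
        """Combine two RGB images (H x W x 3, values 0-255) into one red/cyan anaglyph.

        The red channel carries the left-eye view; green and blue carry the
        right-eye view. Red/cyan glasses then route each part to the correct eye,
        at the cost of each eye seeing only part of the colour information.
        """
        anaglyph = np.empty_like(left_rgb)
        anaglyph[..., 0] = left_rgb[..., 0]    # red   <- left eye
        anaglyph[..., 1] = right_rgb[..., 1]   # green <- right eye
        anaglyph[..., 2] = right_rgb[..., 2]   # blue  <- right eye
        return anaglyph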

Anaglyphs were most widely used for films in the early days of 3D. When Polaroid filters started to be introduced in the late 1930s it quickly became apparent that the disadvantages of anaglyphs, such as poor colour reproduction, were much less apparent in the new technology, and as a result anaglyphs started to be phased out of cinemas.

Today anaglyph glasses are still used because of their cost effectiveness, although they are mainly used for comic books and stereographic photography, not for moving pictures.

8.4 – Colour Separation – Dolby 3D

Dolby-3D is one of the most advanced technologies currently employed in the 3D market in terms of image quality.

It uses the same basic principle as anaglyph technology, although in a much more advanced form that produces greatly improved results. Like anaglyph, Dolby 3D is a passive method.

Dolby believed there were two key requirements for 3D to succeed in cinemas. They argued that if 3D is to be widely accepted, the technology needs to be portable and easily installed and moved from screen to screen. That way a film can be released on the largest screen with the highest seating capacity, and once it has been out for a while the equipment can easily be moved to a smaller screen, so the 3D release can still be shown while freeing up the larger screen for a conventional 2D film. This mobility is a major advantage over systems that require the installation of a special screen.

The second key requirement identified was the need for passive glasses, which require significantly less maintenance than active glasses that need charging or battery replacement.

‘Dolby 3D uses a “wavelength triplet” technique originally developed by the German company Infitec, specialists in 3D visualisation for computer-aided design. In this technique, the red, green and blue primary colours used to construct the image in the digital cinema projector are each split into two slightly different shades. One set of primaries is then used to construct the left eye image, and one for the right’ (Slater, 2008).

The splitting of the image into two sets of primary shades is done by inserting a filter wheel inside the digital projector. Unlike the polarising method, the separation is carried out before the image is created, because the wheel is placed between the lamp and the DLP (Digital Light Processing) imaging chip. According to Dolby this creates a better image than mounting a filter in the image path, which is the method RealD employs.
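
The wavelength-triplet idea can be illustrated with two slightly offset sets of red, green and blue primaries. The wavelengths below are purely illustrative assumptions, not the actual Infitec/Dolby filter specifications:

    # Illustrative "wavelength triplet" colour separation (values are made up,
    # not Dolby's real passbands). Each eye gets a complete R, G, B set, just
    # built from slightly different wavelengths, so full colour survives.

    LEFT_PRIMARIES_NM = {"red": 629, "green": 532, "blue": 446}   # assumed
    RIGHT_PRIMARIES_NM = {"red": 615, "green": 518, "blue": 432}  # assumed

    def passes_left_filter(wavelength_nm: float, tolerance_nm: float = 5.0) -> bool:
        # The left-eye lens only transmits light close to the left-eye primaries.
        return any(abs(wavelength_nm - w) <= tolerance_nm
                   for w in LEFT_PRIMARIES_NM.values())

    print(passes_left_filter(629))  # True  - left-eye red gets through
    print(passes_left_filter(615))  # False - right-eye red is blocked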

The filter wheel can be engaged at the press of a switch, converting the projector from 2D to 3D, and because the screen is the same as for 2D projection it is a very easy task to convert a screen from one format to the other.

‘Very advanced wavelength filters are used in the glasses to ensure that each eye only sees the appropriate image. As each eye sees a full set of red, green and blue primary colours, the 3D image is recreated authentically with full and accurate colours using a regular white cinema screen’ (Slater, 2008).

Projectors in this format run at a very high frame rate, typically 144 frames per second, because the effective rate per eye is halved when the frames are split between the left and right eyes.

A further advantage of this technology is that, because the glasses use wavelength filters and, unlike active shutter glasses, do not need to be battery powered, they are cheaper to run and maintain. However, they are still not as cheap as polarised glasses, due to the complex structure of the filters, which cost a lot to produce.

One of the major advantages of Dolby 3D is that it does not reduce the luminance level of the image, which is a side effect of both the active shutter and polarisation methods, and it also has very high quality colour reproduction.

As the luminance is not reduced it means that unlike polarised methods, there is no need to install a special silver screen to boost the overall light level.

8.5 – Active shutter

This method involves periodically shutting off one eyepiece and then the other; it is done at such a fast rate that, thanks to persistence of vision, the viewer should not notice any change.

The glasses are synced with the projector using infra-red, Bluetooth, DLP Link or similar means, ensuring that the timing of each eyepiece shutting off exactly matches the image being projected, so that a 3D image is perceived.

This is the same principle that was used in 1923, when Laurens Hammond came up with Teleview. However, where Teleview failed due to technical restrictions, current technology has overcome the earlier limitations.

The new LC (liquid crystal) shutter glasses work by darkening alternating eyepieces, but unlike the Teleview system they do it using a polarising filter and a liquid crystal layer. When a voltage is applied to an eyepiece the filter and crystal become dark; when there is no voltage it is transparent. This alternating darkening is done in sync with the refresh rate of the screen, thereby creating a stereoscopic image.
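
A simplified model of the frame and shutter timing is sketched below; it illustrates the principle only and is not any particular manufacturer’s protocol:

    # Simplified frame-sequential timing for active-shutter glasses.
    # The display alternates L-R-L-R...; the glasses darken the opposite
    # eyepiece in lockstep with the display's refresh.

    def shutter_schedule(num_frames: int):
        """Yield (frame_shown, left_lens_open, right_lens_open) per refresh."""
        for i in range(num_frames):
            showing_left = (i % 2 == 0)
            yield ("L" if showing_left else "R", showing_left, not showing_left)

    for frame, left_open, right_open in shutter_schedule(6):
        print(f"display shows {frame}: left lens {'open' if left_open else 'dark'}, "
              f"right lens {'open' if right_open else 'dark'}")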

This technology is mainly used in home 3D systems, as the glasses are more expensive to produce and it would not be economically viable for cinemas to purchase large quantities to distribute to cinemagoers. Unlike passive glasses they have to be powered, usually by a battery, which would create additional problems in cinemas when people’s glasses run out of charge.

However, XpanD, probably the world’s largest producer of LC active shutter glasses, is trying to buck the trend. The company has developed a comparatively cheap digital projection system that still maintains a high quality image. This lets cinemas save money by not having to purchase special silver screens, as is the case with RealD, although the saving is cancelled out by the increased cost of using and maintaining the LC shutter glasses.

One of the main advantages of this method of 3D display is that it reduces ghosting, which is a problem with most display types. XpanD have also countered the usual low light issues associated with active glasses by using a very fast shutter.

8.6 – Polarisation

When light travels from a source it has electric and magnetic field vectors that oscillate perpendicular to the direction of travel; in un-polarised light the orientation of these oscillations varies randomly as the light moves away from its origin.

‘if such a polarising filter is held over the right projector lens, the light for the right image will be polarised in a plane (perpendicular to the filter surface) that can be controlled by rotating the filter in its plane’ (Lipton, 1982).

A practical polarising material was first discovered in 1852 by William Bird Herapath, albeit only in a basic form. This science was built on by Anderton during the 1890s, who first ‘suggested that the use of polarised light for image selection for stereographic projection’ (Lipton, 1982) was possible. But it was not until 1929 that Land worked out a method of producing a new type of polarising filter (Polaroid) capable of working with moving images, creating the first polarised stereoscopic images and then films.

At present there is the option of using either linear or circular polarisation. In circular polarisation the projector polarises each image in a set rotational direction (clockwise or anti-clockwise); the left and right lenses of the glasses then each pass only light polarised in the corresponding direction, allowing just one of the two images through to each eye.

Tests have shown that circularly polarised images usually offer better separation of the individual channels than linearly polarised images, but the filters required are more expensive to produce than the linear versions.

If linear polarised lenses are being used, the viewer will achieve the best results if their eyes are kept level; if the head is tilted the 3D effect starts to degrade. This is far less noticeable with circularly polarised lenses.
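
The effect of head tilt on linear polarisation can be estimated with Malus’s law (transmitted intensity falls with the square of the cosine of the angle between the filter and the light’s polarisation). A small sketch, assuming ideal filters:

    import math

    # Malus's law: transmitted fraction = cos^2(angle between polariser axes).
    # With linear polarisation the wanted image dims and the unwanted image
    # leaks in as the viewer tilts their head; ideal circular polarisation
    # is unaffected by tilt. Ideal filters are assumed throughout.

    def linear_crosstalk(tilt_deg: float) -> float:
        """Fraction of the wrong eye's image leaking through a tilted linear lens."""
        return math.sin(math.radians(tilt_deg)) ** 2

    for tilt in (0, 5, 10, 20):
        print(f"head tilt {tilt:2d} deg -> ~{linear_crosstalk(tilt):.1%} leakage into the wrong eye")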

8.7 – Polarisation – RealD & MasterImage

RealD is the most widely used cinema projection system across the globe. In the UK, Cineworld use equipment from RealD. This format uses a single projector and circular polarised glasses.

The stereo digital signal is decoded and sent to the projector; it is then beamed out at 24fps (frames per second) for each eye, which equates to 48fps.

Each of these frames is flashed three times, so 24fps per eye x 2 eyes x 3 flashes equates to a combined rate of 144 frames per second.

The projector buffers the left and right image and projects them in alternation, at a rate of 144 frames per second, presenting three “flashes” of each frame (Cowan, 2007).

The signal is projected through a RealD Z Screen placed in front of the projector lens; this Z Screen polarises the image, and the glasses then allow only the required image channel through each filter.

A large part of RealD’s success is due to the use of polarised glasses, which makes it a cheap option for cinemas: the glasses are easy to produce and are quite often disposed of after each use. These ‘throw-away’ glasses raise another problem, however. As 3D films become more popular, very large quantities of plastic glasses could end up in landfill, and at a time when ‘green’ policies are at the forefront of political decisions this may mean that a method of recycling the glasses will have to be adopted.

One of the drawbacks of this technology is the loss of luminance in the picture; cinemas usually need to install a silver screen to compensate, which reflects approximately 2.4 times as much light as a standard cinema screen. Installing these screens is an additional cost that the cinemas have to absorb.

During the colour grading phase of the editing process the light levels of the image usually have to be raised to compensate for the overall loss of light when the viewer is watching the film.

RealD has the advantage of reproducing colours very effectively, because the same colours are sent to each eye via the polarisation method.

MasterImage is a similar method to RealD, in that it uses polarised glasses.

The main advantage that MasterImage holds over RealD centres on the hardware required to project the image.

MasterImage use an easily portable device consisting of a ‘High efficiency rotating circular polarizing filter which provides left and right image separation and bright richly coloured 3D images’ (Masterimage, 2009).

This device can be moved from screen to screen as needed, although a specialised silver screen is still required because of the luminance lost in the polarising process, so it is not as mobile as the company makes out.

8.8 – IMAX 3D

IMAX were the first company to introduce mainstream 3D films; their system was originally intended for analogue film.

The IMAX 3D methods differ slightly from those previously mentioned in that they use two projectors, one each for the left and right images. IMAX 3D is available digitally in some theatres while others still use film, whereas all of the previous techniques are digital only.

With IMAX 3D the image is shot using two cameras if it is being made specifically for IMAX 3D theatres, and all 3D films for this format are played back through two projectors. This reduces the luminance issues present in the other formats.

‘As of 2010 the linear polarized filter system has become the Imax 3D standard. Linear polarization has a significant disadvantage compared to circularized polarization used in other systems such as Dolby 3D and RealD; with linear polarization you lose the 3D effect if you tilt your head. You may even need to experiment to get the best position for normal viewing’ (3D Forums, 2009).

In addition, due to the large screen sizes used in IMAX cinemas, ghosting and focusing problems have been reported with 3D in this format; however, these have been counteracted by the immersive experience of watching the 3D film on such a large screen.

8.9 – Autostereoscopic displays

One thing all of the previous methods of viewing 3D have in common is that they require the user to wear glasses. For many people this is a disadvantage, as the public have become so familiar with watching a standard 2D film, where no extra add-ons are needed. You could argue that having the filters required for 3D right next to your eyes gives the best possible reproduction of the image, but the average person, who neither has nor wants any knowledge of the workings of 3D, will probably care very little about this.

Autostereoscopic technology is built into the screen and requires no glasses to view the 3D image.

There are two main types of technology exploited in autostereoscopic screens: lenticular lenses and parallax barriers.

The lenticular lens approach works by placing a cylindrical lens over each pair of pixels (left and right image); the lens then directs each image to either the left or the right eye.

For this display to work the viewer has to stand a set distance from the device. If they stand too far away the image will miss their eye-line and they will not be able to see a 3D image.

‘In the parallax barrier a mask is placed over the LCD display which directs light from alternate pixel columns to each eye’ (3D Forums, 2009). One of the major advantages of this technology is that it can easily be switched from 2D to 3D, because the mask is a liquid crystal layer that becomes transparent when the current running through it is turned off.

‘Although this technology is currently in existence today it is expensive and there are not too many companies developing it’ (Totally 3D, 2009).


8.10 Comparison of technologies

It is clear that there are advantages and disadvantages to all of the available formats, and there is a place for each of them as they all have different uses. Anaglyphs’ great strength is that they are the easiest to produce and the glasses are the cheapest to make; while they do not look good for films, stereo photography and comic books remain areas where these glasses are used.

At the other end of the scale are the liquid crystal active shutter glasses. These are the most expensive, due to the electronics involved, and at present are only being considered for home 3D systems.

TYPE OF 3D | 3D METHOD | ADVANTAGES | DISADVANTAGES
Anaglyph | Colour separation | Very cheap glasses | Poor colour reproduction; worst image quality of all the 3D methods
Dolby 3D | Colour separation | Tilting the head does not affect the 3D image; no silver screen needed to boost light levels | Can result in colour bleeding between eyes
XpanD | LC active shutter | Good colour reproduction | Low light levels; expensive glasses; needs a very high frame rate to avoid flicker
RealD | Circular polarisation | Viewer able to tilt head without losing the 3D image | More expensive than linear polarised glasses; needs a silver screen to boost light levels
IMAX 3D | Linear polarisation | Extremely large screen creates an engulfing experience; linear polarised glasses are cheaper than circular polarised glasses | Eye level needs to be kept horizontal or the 3D image reproduction suffers
Auto-stereoscopic | Auto-stereoscopic | No need for glasses or any other filter, as the decoding filter is built into the screen | Very expensive; poor viewing angle

Chapter 9: 3D Cinematography

Cinematography is the art of controlling how a film looks; it includes both shooting and editing the film.

3D film uses all the techniques you would expect of 2D film, such as focus, lighting and sound, but there are also aspects unique to 3D, and controlling these is vital to creating an effective three-dimensional film.

9.1 Interaxial distance

The first of these is the interaxial distance: the amount of space between the two cameras.

The standard distance that most directors start with and work from is about 2.5 inches, which is then altered to achieve the desired effect. This distance is used because it is the same as the separation between our own eyes, allowing us to see the 3D world in much the same manner as we would if we were physically standing there.

Orthostereoscopy is one case where you would not alter this 2.5-inch interaxial distance. This method of 3D filmmaking is designed to replicate the way human vision works exactly, so the 2.5-inch human eye separation must be maintained at all times. It is not commonly used, however, and conventional 3D filmmaking does not restrict the altering of this distance.

By adjusting this distance you are, in effect, widening or narrowing the difference between the images that each eye receives. This has a scaling effect on anything displayed in the virtual space.

Moving the cameras further apart makes objects appear smaller, while pushing them closer together makes them appear larger.

Extremes of these effects are known as hyper-stereoscopy and hypo-stereoscopy. Hyper-stereoscopy is where the cameras are so widely spaced that everything in the image appears to become a miniature. At the other end of the scale is hypo-stereoscopy, where the cameras are so close together that the result is almost a 2D image; the objects appear flat, which is why it is sometimes referred to as ‘cardboarding’.
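
The scaling effect can be seen from the small-angle relationship between interaxial distance and the difference between the two views; a rough sketch, assuming parallel cameras and treating interaxial divided by distance as a stand-in for disparity:

    # Rough illustration of why interaxial distance changes perceived scale.
    # With parallel cameras, the angular difference between the two views of a
    # point is approximately interaxial / distance (small-angle approximation),
    # so doubling the interaxial doubles the disparity - the same signal the
    # eyes would get from a scene half the size, which reads as a miniature.

    def relative_disparity(interaxial_m: float, distance_m: float) -> float:
        return interaxial_m / distance_m

    human_eyes = relative_disparity(0.065, 10.0)   # ~2.5 in eye spacing, object 10 m away
    wide_rig   = relative_disparity(0.130, 10.0)   # cameras twice as far apart

    print(wide_rig / human_eyes)  # ~2.0: twice the depth signal, so the scene appears scaled down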

With larger cameras it is often physically impossible to get the two lenses 2.5 inches apart; this is solved by setting the cameras up pointing into mirrors that reflect the image onto the cameras’ sensors.

9.2 Convergence

The next big control that directors have over the look of a film is the way that the cameras converge. Earlier in this report it was described how human eyes converge in order to focus on an object.

In a similar way, the director has the option of changing the angle of the two cameras relative to an object in a scene. There are two ways of doing this: either on set, by physically angling the cameras, or in post-production, using a process called Horizontal Image Translation (HIT). Both methods have points for and against them.

The benefit of converging on set is that it is cheaper and requires less post-production. The drawback is that it can introduce a phenomenon known as keystoning, where the left and right edges of the two images no longer match as they should. It occurs because angling the cameras inwards towards the point of focus inevitably means that one side of each image is closer to its camera while the opposite side becomes further away. When extreme, keystoning can be very uncomfortable to watch.

The alternative to converging on set is to do it in post, using HIT. This process gives the director much more control over the angle of convergence. It works by shifting the images left or right to move them out of line. The drawbacks of this method are that it is more costly and that the scene has to be shot wider than it will be displayed. Overshooting is necessary because after the image has been shifted the pixels that have moved off screen are lost, and the overlapping images must be cropped so that the two frames match again.
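
A minimal sketch of Horizontal Image Translation on two image arrays follows; the shift amount and the NumPy representation are illustrative, not a production workflow:

    import numpy as np

    def horizontal_image_translation(left: np.ndarray, right: np.ndarray, shift_px: int):
        """Shift the two views apart horizontally and crop them back to a common width.

        Shifting changes where the images align, which moves the convergence
        point; the columns pushed off the edge are lost, which is why the
        footage must be shot wider than the final delivery frame.
        """
        width = left.shape[1]
        left_shifted = left[:, shift_px:]             # left view moves left
        right_shifted = right[:, : width - shift_px]  # right view moves right
        return left_shifted, right_shifted

    left = np.zeros((1080, 2048, 3), dtype=np.uint8)   # shot wider than the 1920 delivery frame
    right = np.zeros((1080, 2048, 3), dtype=np.uint8)
    l, r = horizontal_image_translation(left, right, shift_px=24)
    print(l.shape, r.shape)  # both cropped to (1080, 2024, 3)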

Until real-time HIT correction is possible, live events filmed in 3D will have to rely on physically converging the cameras to create and set the depth.

If the cameras are not converged and stay parallel there will still be a 3D effect, but the furthest point back in the 3D image will be level with the screen.

When the cameras are angled inwards and converge on an object then that object becomes level with the screen and anything behind that convergence point will appear behind the screen in the virtual 3D space (see figure 3: Convergence example).

9.3 Stereoscopic window

One of the big differences between how the viewer sees 2D and 3D is that with a 2D medium the viewer is looking at a flat picture whose edges are defined by the physical edges of the image or cinema screen. With 3D the screen instead becomes a window: the viewer can see objects behind the screen or in front of it.

One of the problems that has to be overcome in 3D is the breaking of this stereoscopic window, which creates an uncomfortable viewing sensation.

In a 2D film, if an object is half in the frame, both eyes see this and the brain understands that the other half lies outside the frame. In 3D, if a prominent object such as a person sits half in and half out of the frame, each eye sees a different amount of that person, and the brain has difficulty compositing the left and right views into the single image needed for 3D.

To fix this problem it is sometimes necessary to mask a small portion of the side of either the left or right image in order to make the edges of the two images match. This process is performed in post-production.
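
A minimal sketch of that masking step, assuming numpy image arrays. Which eye is masked, which side, and how wide the strip is all depend on the shot; the convention in the comment and the 24-pixel strip below are illustrative assumptions, not figures from this report.

```python
import numpy as np

def mask_frame_edge(frame: np.ndarray, side: str, width_px: int) -> np.ndarray:
    """Black out a vertical strip on one side of one eye's frame so that both
    eyes see the frame edge cut the scene off in the same place. One common
    convention: for an object crossing the left edge in front of the screen,
    the strip goes on the left side of the left-eye image."""
    out = frame.copy()
    if side == "left":
        out[:, :width_px] = 0
    elif side == "right":
        out[:, -width_px:] = 0
    else:
        raise ValueError("side must be 'left' or 'right'")
    return out

left_eye = np.full((1080, 1920, 3), 255, dtype=np.uint8)
left_eye_masked = mask_frame_edge(left_eye, side="left", width_px=24)
```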

9.4 Depth budget

Inside the cinema there are limits to where 3D content can comfortably be placed; if the images exceed these limits they can cause eye strain and headaches.

The optimum Z position for an image is at screen depth. As the image moves behind the screen, or in the opposite direction towards the audience, it gradually becomes more uncomfortable because of our limited ability to control convergence and accommodation independently (as discussed in chapter 2). There are also areas at the extreme sides of the screen where only one eye can see the image, and these are also uncomfortable to view.

One of the main jobs of a 3D cinematographer is to try to fit the whole range of vision available in the real world into the stereoscopic comfort zone available in the cinema.

One solution for preserving the range of 3D space available is to ‘float’ the stereoscopic window. For this to make sense, remember that the entire range from the nearest to the furthest comfortable point is a fixed amount, say ‘x’ feet. If an image is being shown extremely close to the audience, the furthest point available is the position of that closest image plus ‘x’. To solve this, the stereoscopic window is floated nearer to the audience, so that the screen appears closer than it actually is. The same applies, in reverse, for objects that need to be set far back behind the screen.
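
The bookkeeping behind this can be sketched as follows. The idea of expressing parallax as a percentage of screen width, the comfort limits, and the notion of sliding the whole depth range (for example with HIT) are illustrative assumptions to make the idea concrete, not figures or procedures taken from this report.

```python
def float_depth_range(near_pct: float, far_pct: float, budget=(-2.0, 2.0)):
    """Illustrative only: parallax is measured as a percentage of screen width,
    negative values in front of the screen, positive behind it; the comfort
    limits are placeholders, not industry figures.

    If the scene's depth range fits inside the budget but sits too far forward
    or too far back, slide the whole range until it is inside."""
    lo, hi = budget
    if far_pct - near_pct > hi - lo:
        raise ValueError("scene depth range exceeds the available depth budget")
    shift = 0.0
    if near_pct < lo:
        shift = lo - near_pct          # push the whole scene back
    elif far_pct > hi:
        shift = hi - far_pct           # pull the whole scene forward
    return near_pct + shift, far_pct + shift

# A scene reaching 3% of screen width in front of the screen gets pushed back by 1%:
print(float_depth_range(-3.0, 0.5))    # (-2.0, 1.5)
```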

9.5 Matching size of screen

It is important, when decisions are being made about the level of depth in the 3D image, that the final output medium is considered. If the film has been made to be screened on a 5-foot screen but is instead played on a 10-foot screen, the depth in the image will be doubled.

This can push the extremes of the 3D beyond comfortable levels and result in an uncomfortable viewing experience.
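
The reason is that parallax is effectively stored as a fraction of the image width, so the physical separation between the left and right images, and with it the perceived depth, scales directly with screen width. A minimal arithmetic sketch, with illustrative numbers:

```python
def on_screen_parallax_mm(parallax_fraction: float, screen_width_mm: float) -> float:
    """Physical separation on screen for a point whose parallax is stored as a
    fraction of image width."""
    return parallax_fraction * screen_width_mm

# The same 1% parallax on a 5 ft (~1524 mm) and a 10 ft (~3048 mm) wide screen:
print(on_screen_parallax_mm(0.01, 1524))   # ~15 mm
print(on_screen_parallax_mm(0.01, 3048))   # ~30 mm -- double the screen, double the separation
```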

chapter 10: Creating 3d content

There are three basic ways to create 3D content. The first two methods involve creating new content in 3D formats, these are: computer generated images (CGI) and stereoscopic film making using two cameras.

The final method is converting existing 2D material into 3D.

10.1 Computer Generated Images (CGI)

Of all of these methods, the one that offers the most control is CGI. This type of 3D work is done digitally on a computer, and the animator has absolute control, or as close to it as possible, over every aspect of the 3D space: building virtual environments and controlling convergence, motion parallax and focus.

Building 3D content this way is very time consuming but it does allow you to be extremely accurate with all of the necessary variables used in 3D.

This complete control over the image explains why a large proportion of recent 3D films have been of the animated CGI type.

Since animated films moved from being hand drawn to computer generated over the last couple of decades, most animated worlds are already built in three dimensions. Adding a second camera and moving it a few inches away from the original is therefore a relatively simple step, so creating true stereoscopic 3D animation is not much of a jump, provided the virtual environment is built in 3D to begin with.
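
As a rough illustration of that step, using generic vector maths rather than any particular animation package: deriving the stereo pair amounts to offsetting the existing virtual camera along its right axis. Offsetting each camera by half the interaxial distance is one common convention, assumed here.

```python
import numpy as np

def add_stereo_camera(position: np.ndarray, forward: np.ndarray, up: np.ndarray,
                      interaxial: float):
    """Return left/right camera positions offset half the interaxial distance
    either side of an existing (mono) virtual camera, along its right axis."""
    right_axis = np.cross(forward, up)
    right_axis = right_axis / np.linalg.norm(right_axis)
    half = 0.5 * interaxial * right_axis
    return position - half, position + half

# A mono camera at eye height looking down -Z becomes a stereo pair ~6.35 cm apart.
left_cam, right_cam = add_stereo_camera(
    position=np.array([0.0, 1.7, 0.0]),
    forward=np.array([0.0, 0.0, -1.0]),
    up=np.array([0.0, 1.0, 0.0]),
    interaxial=0.0635,
)
print(left_cam, right_cam)
```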

10.2 Dual camera filmmaking

The second method is camera based 3D. This is where on set you have two cameras positioned together at the required distance apart and converged at the required angle.

Using this method two cameras are connected together in a 3D rig. As with the CGI method there are many variables that can alter the 3D image which is being captured.

If the filmmaker is going to set the convergence on set, it is very important to consider the size of the screen that the final image will be portrayed on, as decisions taken when filming will affect the size of the 3D window.

If the amount of depth has been set manually during filming by setting the convergence and distance between cameras, it is much more difficult to adjust the image for a different screen size.

The advantage of filming with two cameras on set is that there are two real viewpoints to work from; this should provide the most detailed 3D image, as more real data is available.

The downside to filming with two cameras in 3D is that all of the costs related to capture, storage and editing are doubled, as there is twice the amount of data to be processed.

In addition to this it is necessary to employ a crew that has specific 3D knowledge, and as it is such a relatively new medium at present, the costs of specialist crew will be much higher.

10.3 2D-to-3D conversion

It is possible to convert existing 2D footage into 3D. It is however, a very expensive process when done to a high level.

There are several steps that can be taken to convert the picture.

One of the most powerful methods involves cutting the image up into sections and then, frame by frame, manipulating and overlapping those sections. The overlapping creates parallax (nearer objects move by a greater distance) and occlusions (further objects are hidden by nearer ones), thereby generating a sense of 3D. This frame-by-frame rotoscoping is a very lengthy process.
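
A compact sketch of the cut-and-shift idea, assuming each cut-out layer is an RGBA numpy array with an assigned parallax in pixels. It is illustrative only; a real conversion pipeline inpaints the gaps the shifts leave behind rather than wrapping the image around as np.roll does here.

```python
import numpy as np

def composite_layers(layers, frame_shape):
    """layers: list of (rgba_image, parallax_px), ordered far to near.
    Shift each layer horizontally by its parallax and composite it over the
    result so far: nearer layers move further and occlude what is behind them."""
    h, w = frame_shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for rgba, parallax_px in layers:
        shifted = np.roll(rgba, parallax_px, axis=1)        # wraps round; fine for a sketch
        alpha = shifted[..., 3:4].astype(np.float32) / 255.0
        out = out * (1 - alpha) + shifted[..., :3].astype(np.float32) * alpha
    return out.astype(np.uint8)

# One eye keeps the original layers unshifted; the other is rebuilt with
# per-layer parallax, e.g. background 0 px, foreground 2 px.
h, w = 4, 8
bg = np.zeros((h, w, 4), dtype=np.uint8); bg[..., :3] = 40;  bg[..., 3] = 255
fg = np.zeros((h, w, 4), dtype=np.uint8); fg[:, 2:4, :3] = 200; fg[:, 2:4, 3] = 255
right_eye = composite_layers([(bg, 0), (fg, 2)], (h, w))
```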

A further method utilises the Pulfrich effect, named after the German inventor Carl Pulfrich, who discovered that if the light reaching one eye is slightly reduced, objects moving horizontally appear to be displaced in depth, moving along the Z axis.

The main method predominantly used in big-budget conversions is 3D reconstruction and projection. This is done by digitally modelling the 3D environment, laying the original 2D frame on top of it and then creating a virtual viewpoint for the second channel. When combined, the two channels produce the stereo vision.

The American company In-Three is one of the best known of the high-end companies offering this conversion service. It was responsible for converting Tim Burton's 2010 film ‘Alice in Wonderland’ into 3D.

The company has named its method of conversion Dimensionalization, which exploits all of the above methods to varying degrees.

‘With the Dimensionalization process, one eye’s view remains the original image. For the other eye, objects from the original image are altered to achieve the perception of relative depth. In effect, Dimensionalization creates a virtual second camera’ (DeJohn, Drees, Seigle, & Susinno, 2007).

One of the major factors that has to be considered when converting an image from 2D to 3D is the depth of field used in the original 2D image.

‘A wide depth of field can benefit from Detailed Dimensionalization because every element in the shot is well defined’ (DeJohn, Nelson, & Seigle, 2007). If the original frame of the 2D feature has all of its objects in focus because of a very wide depth of field, it is possible to isolate each object and give it a 3D position in front of or behind the screen.

On the other hand, if the depth of field is very narrow, it is only possible to give the main focused subject a new 3D position in front of the screen; the rest of the out-of-focus objects have to be placed at screen level.

In-Three has worked out that ‘When an object is shifted to the left in the right image, the right eye will track the object to the left in order to keep the object in its line of sight. This creates the impression the objects are closer to the image’ (DeJohn, Drees, Seigle, & Susinno, 2007). Similarly, if the left and right images remain unmoved the object will appear at screen level, and if the object is shifted to the right in the right image it will appear behind the screen.

One of the main advantages of the conversion process is that the amount of 3D can be tailored to the screen size the work is being made for. As mentioned in a previous chapter, doubling or halving the screen size changes the amount of depth by a proportional amount.

The disadvantage of single-camera 3D via conversion is that the quality will probably never be quite as good as with two cameras, simply because only half the data is available and the second camera has to be created virtually. With two cameras, the second real image already exists.

Another drawback is that it is limited to footage that has already been captured. Because of the extreme amount of processing that has to be applied to the image, real-time conversion is very difficult, so live events will not be able to be converted in this way, for a while at least.

Converting a 3D film to 2D, on the other hand, is a much simpler process: all that is required is to display one channel instead of both, which results in a 2D reproduction of the 3D movie.

chapter 11: 3D in the home

11.1 Displays

As with the switch from black and white to colour, then standard definition to high definition and analogue to digital; the switch from 2D to 3D will in most cases require new hardware.

It is predicted to be a rapid growth area for technology companies in the years to come as people start to purchase television sets that allow them to watch this exciting new form of media, which up till now has only been available in cinemas.

‘With revenue from 3D TV display sales projected to grow by 95% annually, from $140 million in 2008 to $15.8 billion in 2015, 3D TV is likely to be big business in years to come’ (World Vision, 2009).

With the large sums of money at stake, companies are understandably investing huge sums to come up with what they see as the best or most profitable way of selling 3D capable televisions to the public.

A few different types of 3D display are available at present. Just as there are different methods of projecting 3D in cinemas, the same applies to consumer displays: active shutter, polarisation and colour separation, with the addition of autostereoscopic displays, which at present are impractical for cinema screens.

The three main types of 3D displays which are commercially available at present are Digital Light Processing (DLP), Plasma and LCD.

11.2 Blu-ray 3D

An important date for the future of 3D in the home was December 17th 2009, the day the Blu-ray Disc Association (BDA) announced the finalised ‘Blu-ray 3D™’ specification.

‘The Blu-ray 3D specification calls for encoding 3D video using the Multiview Video Coding (MVC) codec, an extension to the ITU-T H.264 Advanced Video Coding (AVC) codec currently supported by all Blu-ray Disc players’ (Blu-ray Disc Association, 2009).

This means that new Blu-ray 3D discs will be compatible with standard Blu-ray players, although these will only play the 2D version of the media.

The format uses MPEG-4 MVC compression to reduce the combined file size of the left and right views while maintaining full 1080p resolution for each. Even at full resolution, this typically adds only around 50% more data compared with the 2D Blu-ray format.

The MVC compression works by ‘utilizing combined temporal and inter-view prediction’ (Smolic, 2008). It is based on the fact that the two camera viewpoints are very similar: data that is identical in both views only needs to be sent once, and because only certain parts of the image differ between the views, a large amount of repeated data can be discarded.
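
The real MVC codec does block-based prediction inside the H.264 framework, which is far more involved than can be shown here, but the core idea of inter-view prediction can be illustrated with a toy sketch: the second view is stored only as its difference from the first, so regions identical in both views carry almost no data.

```python
import numpy as np

def encode_right_view(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Toy stand-in for inter-view prediction: store only how the right view
    differs from the left. Identical regions become zero and compress to
    almost nothing."""
    return right.astype(np.int16) - left.astype(np.int16)

def decode_right_view(left: np.ndarray, residual: np.ndarray) -> np.ndarray:
    return (left.astype(np.int16) + residual).clip(0, 255).astype(np.uint8)

left = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
right = left.copy()
right[:, 100:110] = 0                               # only a small region differs between views
residual = encode_right_view(left, right)
print(np.count_nonzero(residual) / residual.size)   # tiny fraction of samples carry data
assert np.array_equal(decode_right_view(left, residual), right)
```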

The importance of a unified format is that there is now an agreed set of guidelines for encoding and delivering high-definition 3D to consumers.

A further key point is that Blu-ray 3D will display stereoscopic pictures on any compatible 3D display (LCD, plasma, etc.) using any of the available 3D viewing technologies; it is not tied to one specific method of 3D display.

11.3 3DTV

Watching 3D content at home is not just restricted to Blu-ray 3D. 3DTV is starting to be introduced across the world.

In Japan, 3D content is broadcast daily on the cable channel BS 11. In the UK, Sky have announced that their new channel Sky 3D will launch in April 2010, with current Sky+HD customers able to receive the 3D channel without upgrading their Sky equipment. There are similar stories in America, with ESPN and the Discovery Channel planning 3D ventures in the very near future.

Currently there are different methods of sending the 3D signal to televisions. Resolution and bandwidth are the two key attributes that vary between formats, and they are intrinsically linked: increasing the resolution creates a better quality image, but at the expense of increased bandwidth.

Checkerboard, panel (side-by-side or top/bottom) and line-interleaved formats require the least bandwidth, as the per-eye resolution is lower.

There are also full-resolution formats, which produce a higher image resolution but require more bandwidth. Simulcast, MPEG's Multiview Video Coding (MVC) and 2D+Depth are examples of these.

A standard 3D frame requires twice as much bandwidth as a comparable 2D frame, because separate left and right images are carried. As a result it is often necessary to compress the data, both to save bandwidth and to accommodate existing 2D equipment.

Spatial compression is one such method of reducing bandwidth.

This can be done by sub-sampling the left- and right-eye images and then packing them into a single 2D image frame. The sub-sampled images can be packed in a top/bottom, side-by-side, line/column-interleaved or checkerboard fashion (Zou, 2009). The downsides of spatial compression are that it is not directly compatible with existing 2D displays, and that it greatly reduces resolution, as the two stereoscopic frames must fit into a single 2D frame.
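
A minimal numpy sketch of the side-by-side variant (the other packings differ only in how the samples are arranged): each eye is sub-sampled to half its horizontal resolution and the two halves share one 2D-sized frame.

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Take every other column of each eye and pack the halves into a single
    frame of the original size; halving the horizontal resolution per eye is
    the price of fitting a 2D-sized frame."""
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def unpack_side_by_side(frame: np.ndarray):
    """Split the packed frame back into (half-width) left and right views."""
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]

left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.ones((1080, 1920, 3), dtype=np.uint8)
packed = pack_side_by_side(left, right)           # still 1080 x 1920
half_left, half_right = unpack_side_by_side(packed)
print(packed.shape, half_left.shape)              # (1080, 1920, 3) (1080, 960, 3)
```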

Time multiplexing is an alternative method of carrying the 3D signal. Its great advantage is that it retains the full resolution of each frame while still fitting a conventional 2D frame structure. It works by doubling the frame rate, so that alternate left and right images are transmitted one after the other at twice the original rate.
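
A tiny sketch of the idea, with frames represented by placeholder labels rather than real video data: the left and right sequences are interleaved into one stream at double the original rate, so each frame keeps its full resolution.

```python
def time_multiplex(left_frames, right_frames):
    """Interleave full-resolution left and right frames: L0, R0, L1, R1, ...
    The stream runs at twice the original frame rate."""
    stream = []
    for left, right in zip(left_frames, right_frames):
        stream.append(("L", left))
        stream.append(("R", right))
    return stream

# 24 fps per eye becomes a 48 fps alternating stream.
print(time_multiplex([f"L{i}" for i in range(3)], [f"R{i}" for i in range(3)]))
```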

2D+Depth is another format; its major advantage is that the signal can also be used by 2D displays. It works by sending the original 2D frame together with a per-pixel depth map carried as a metadata stream. The decoding system then has the option of using the metadata or ignoring it, treating the transmission as a 3D or a 2D signal respectively.
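
A heavily simplified sketch of how a depth-aware receiver might use that metadata; real decoders use proper depth-image-based rendering with hole filling, and the maximum shift below is an arbitrary illustrative value. A 2D display would simply show the colour frame and discard the depth.

```python
import numpy as np

def synthesise_second_view(colour: np.ndarray, depth: np.ndarray, max_shift_px: int = 8) -> np.ndarray:
    """Shift each pixel horizontally in proportion to its depth value
    (0 = far, 255 = near). The gaps this leaves are ignored here but would be
    filled in a real decoder."""
    h, w = depth.shape
    out = np.zeros_like(colour)
    shifts = (depth.astype(np.int32) * max_shift_px) // 255
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(xs + shifts[y], 0, w - 1)
        out[y, new_x] = colour[y]
    return out

colour = np.random.randint(0, 256, (270, 480, 3), dtype=np.uint8)
depth = np.zeros((270, 480), dtype=np.uint8)
depth[:, 200:280] = 255                 # one near object, everything else at infinity
second_view = synthesise_second_view(colour, depth)
```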

The final method is colour separation, based on anaglyph, the oldest method of encoding stereoscopic images. The two images are merged into one and then separated again with coloured glasses. It offers the poorest image quality and is extremely unlikely to be used for mainstream home 3D exhibition.

Chapter 12: is 3D bad for you?

When 3D is mentioned it is often followed by comments from people about how it causes eye-strain and headaches, but is this just a rumour or does it hold any truth?

A recent study by the Vision Science Program at the University of California, Berkeley, found that looking at 3D displays did cause more fatigue, eye strain and headaches than looking at a 2D display.

In a study published in March 2008 in the Journal of Vision, Banks and his team of researchers had 11 study participants view a monitor that independently controlled the convergence and accommodation distance. Each of two sessions lasted about 45 minutes. "After the inconsistent (convergence and accommodation) conditions, people reported more fatigue, eye strain and headaches," said Banks (Berkeley University of California, 2010).

As discussed in earlier chapters, when the eyes focus on an object the brain has become accustomed to adjusting convergence and accommodation simultaneously. In 3D films, convergence works independently of accommodation, which, according to the above study, does cause eye strain. It is feasible, though, that the skill could be learned over time, so that the more 3D displays and films a person watches, the easier and less uncomfortable viewing becomes. That in turn raises the question of whether it should ever be painful to watch a film at all.

More evidence for the negative side effects of 3D comes from Dr Michael Rosenberg, an ophthalmology professor at Northwestern University Feinberg School of Medicine in Chicago, who was quoted as saying: ‘There are a lot of people walking around with very minor eye problems, for example a minor muscle imbalance, which under normal circumstances, the brain deals with naturally. These people are confronted with an entirely new sensory experience. That translates into greater mental effort, making it easier to get a headache’ (Daily Mail, 2010).

It has also been mentioned in a previous chapter how the depth budget plays an important part in 3D films. This sets the limits between the maximum and minimum levels of 3D in any given scene; exceeding it can cause discomfort because of the extremes of convergence the eyes have to cope with. This is why 3D content has to be scaled to the screen size the medium is intended for: increasing or decreasing the screen size has the same scaling effect on the level of ‘3D-ness’ in the scene.

On the other end of the scale are the studios and manufacturers of the 3D equipment. They claim that the headaches that people associate with 3D are due to either badly made films or the old ‘red and green’ style anaglyph glasses, and that modern technology has eliminated this problem.

This opinion is backed up in the book ‘3D Movie-Making’, by the renowned 3D expert Bernard Mendiburu, who also claims, ‘a digital 3D movie should not give you a headache (unless the director made an awful film) – and not hurting the audience tends to be a key issue when you’re selling entertainment’ (Mendiburu, 2009).

Chapter 13: implications of 3D – conclusions and recommendations

13.1 Filmmakers

With the market for 3D films currently booming, directors that shoot high budget films are understandably under pressure from the studios (their bosses) to make as much money as possible.

With Avatar recently becoming the highest-grossing film of all time, studios will no doubt be pressing for more films to be released in the 3D format.

As has been documented in this report, it is perfectly possible to convert 2D films to 3D. Although the process is expensive (approximately $15,000 a minute, according to Tim Sassoon, director of Sassoon Film Design), it will be seen as a worthwhile step if it doubles the box office takings, more than recouping the fee for the process.

However, according to In-Three, the company responsible for converting the 2010 film Alice in Wonderland from 2D to 3D, some shots are more suitable for conversion than others, one example being shots with a very wide depth of field. This could lead to a trend in which 2D films destined for conversion are full of wide depth-of-field scenes, so that the frames can be pulled apart in post-production and given greater depth.

There are other factors in 3D filmmaking that affect the quality of the stereoscopic image. One of these is the type of lens being used. As with 2D films, the lens makes a big difference to the image: in flat 2D films the lens alters the depth of field, while in 3D it affects the ‘roundness’ of the image. When long lenses are used, the 3D looks flat, which induces the cardboarding phenomenon; short lenses create a rounder look, which is much better for 3D. This could lead to short lenses being used much more often than is currently the case.

For smaller-budget filmmakers and producers, many of whom are still not shooting in HD, the costs of the additional equipment plus the extra knowledge needed to film in 3D mean that it will simply not be a possibility for some time to come. In addition, at present the overwhelming majority of smaller and medium-sized clients (website videos, conferences, wedding videos, etc.) will not want videos in 3D, as they will not see the need or will not be able to justify the additional costs.

13.2 Cinemas

Most of the methods used for exhibiting 3D films rely on digital processes. For large cinemas this is not much of an issue, as most of them already use digital projectors, and those that do not can absorb the costs fairly easily. One way of doing this is to charge extra for 3D films, which is what most cinemas in England appear to be doing.

Different methods of 3D need different equipment in order to exhibit the material properly. RealD, one of the most prevalent formats in use, requires a special silver screen that preserves the polarisation of the projected light and boosts the reflected light levels. This is an additional cost for cinemas.

All of the methods require a special projector or an add-on to the existing projector. This is an additional cost for the cinemas.

These extra costs, however, will be a small price to pay if the new enhanced films bring in a larger income.

For small cinemas, the jump to 3D might prove impossible because of the costs involved. If, in future, the public decides that 2D films no longer have the pull they used to, these smaller cinemas could end up going out of business as their income falls.

13.3 Health & safety

The one thing that is essential for stereoscopic viewing is that the viewer has two eyes, and that both the left and right eyes have good vision.

‘Depending on which expert you listen to, between 2 and 12 percent of all viewers are unable to appreciate video shown in 3D’ (Media College, 2010). This is a large portion of the target audience which the studios will have to take into account.

In order for 3D to become the ‘norm’, there needs to be a way for these people to watch the same screening in 2D. A viewer with sight in only one eye can watch a 3D film through the glasses and effectively see it in 2D, as only one channel is being received. However, for someone with two working eyes but poor stereo vision, which can result in headaches, watching the film in this way will not help.

The University of California, Berkeley recently carried out a study into 3D and found that viewing stereoscopic films does cause more fatigue, eye strain and headaches than viewing 2D films. The studios and manufacturers of 3D products put this down to badly made films rather than the technology itself, and argue that as better-made films appear, the viewing experience will become easier.

Whichever is true, the success or failure of 3D films and 3DTV will come down to consumers and what they are willing to pay for. While researching this report online, it was far easier to find pages, forums and sites describing how 3D causes side effects such as headaches than it was to find research saying there are none. This negative press, no matter what the studios say, is bound to influence people when they decide whether to spend money on the latest 3D technology.

13.4 Studios

The aim of studios is to make as much money as possible. For them 3D is a very appealing prospect: a new and exciting way of getting customers to come and spend their money at cinemas.

One avenue that is being explored by studios that is bound to mean extremely big profits for them is the re-release of old 2D films in 3D.

One of the first examples of this came in 2009, when Toy Story (1995) and Toy Story 2 (1999) were re-released in the Disney 3D format. It was such a commercial success that the run was extended.

There is also talk of a re-release of James Cameron's box office hit Titanic (1997) in a 3D format.

According to USA Today, Cameron was quoted as saying ‘We’re targeting spring of 2012 for the release (of a 3D version of Titanic), which is the 100 year anniversary of the sailing of the ship. It’s never going to be as good as if you shot it in 3D, but think of it as sort of 2.8D.’

The conversion process was documented in chapter 10.

From the studios' point of view there is a large repertoire of existing films that can be converted and exhibited to new audiences in 3D, or ‘2.8D’ as Cameron puts it. The potential for profit is immense, as a great deal of the cost of making an original film is removed, so it is an avenue that is bound to be explored. This could mean that less money and time is spent on creative new projects and more on recycling old content.

13.5 3D combating piracy

The studios have been extremely concerned about piracy in recent years, as more and more people illegally download films from the internet or copy DVDs. For them, 3D is not only an exciting new format to get people into cinemas; arguably more important is that it is an excellent way of combating piracy.

‘According to DreamWorks Animation CEO Jeffrey Katzenberg about 90 per cent of piracy today occurs when people bring a camcorder into a screening and they shoot it – and that "won’t work with 3D."’ (Cugnini, 2009)

Because two separate images are projected onto the screen, recording them with a single camera is effectively the same as looking at the 3D image without glasses: there is no way to decode the two pictures, so the recording would be blurry and unpleasant to watch.

13.6 The public

Later this year 3D television programs will start to be introduced on British television for the first time in digital HD. This is, as quoted in a BBC White Paper ‘…a continuing long-term evolution of television standards towards a means of recording, transmitting and displaying images that are indistinguishable from reality’ (Armstrong, Salmon, & Jolly, 2009).

For 3D to become widely accepted it may be necessary to agree a unified format for 3D display in the home. At present various methods are being used, and most types of glasses are not compatible with other systems. This means that if, for example, you have five or six people round to watch the World Cup on the new ESPN 3D channel, which is exactly what the company must be hoping, you will need that many pairs of glasses for that one system. If you then watch the next game at a different person's house on a different 3D system, you will need another five or six pairs. If a universal format were agreed, people could carry their own glasses with them. In practice this will be very difficult to achieve, however, as many large companies have invested great sums in their own designs.

History has shown, though, that when there are competing options one usually comes to the forefront and leaves the others to be forgotten. This was the case in the 1980s with VHS and Betamax, and again more recently with Blu-ray and HD DVD.

A further obstacle to viewing 3D content at home at present is the cost involved. With the current economic situation, spending a thousand pounds or more on a new 3D-ready TV set might not be at the top of people's priorities, especially when many have only recently spent money on an HD-ready TV.

13.7 Overview

In the near future it is highly likely that people will consider 3D the way they now consider sound and colour in movies. It will become the ‘norm’ for a film to be shot and exhibited in three dimensions, and anything that is not will be done for an artistic or creative reason.

It is also highly likely that there will be a large increase in 2D films being re-released in 3D due to the easy profits available to the studios. Star Wars 3D is surely going to be in the cinemas before long.

It is possible that 3D will come to be used in different ways for different genres of film, in the same way that colour is now. In western ‘cowboy’ films the images usually have a yellow tint, something that has become a trademark of the genre through the years. It is highly likely that the same will happen with 3D: in cartoons, for example, the 3D effects can be bold and in your face, while in serious dramas 3D can be used in a much more ‘grown up’ way, adding subtle depth to the picture without drawing attention to itself.

In terms of watching 3D at home, one of the problems that needs to be overcome is when people want to watch television together, but the situation arises where some choose to watch it in 2D and others in 3D. There is very little written at present about how this is going to be overcome.

All of the TV manufacturers mention how 3D can be switched on and off, allowing you to watch in 2D whenever you want, but not how both formats can be watched at the same time. With most of the formats mentioned, a 3D signal creates a blurry image on screen that only looks sharp through the glasses, so anyone watching without glasses, or without the ability to see 3D, cannot simply watch in 2D instead.

A good summary of the state of 3D film at present can be found in a review from 1915, as recounted in Ray Zone's book on the origins of 3D film.

‘When the first publicly exhibited stereoscopic motion pictures were shown in 1915 at the Astor Theatre in New York, Lyne Denig, a reviewer for Moving Picture World, wrote, "These pictures would appeal first by reason of their novelty, then because of the wonderful effects obtained, and after that, when they had become familiar, there would be the same old demand for an interesting story"’ (Zone, 2007).

Although these comments are almost 100 years old, the point being made is still very relevant today. It has clearly got a lot easier to produce and edit 3D film, but it is going to be the story that is the most important aspect once the novelty has worn off.

GLOSSARY

3D

An object that consists of three dimensions (length, width and height).

Accommodation depth cue

A stereoscopic depth cue, refers to the way our eyes change focus.

Active shutter glasses

Type of 3D glasses. Decodes 3D images by alternately shutting off the left and right eyepieces.

Alternate-frame 3D projection

A method of using a single projector to display a stereoscopic film. Works by displaying alternate left and right frames.

Anaglyphs

A composite picture printed in two colours that produces a three-dimensional image when viewed through spectacles having lenses of corresponding colours.

Auto-stereoscopic / autostereoscopy

A method of viewing three-dimensional images which does not require any glasses.

Ciliary body muscles

Muscles which control the eye's accommodation and convergence.

Colour grading

The process of altering the colours in an image or film. Different formats need different colour balances.

Colour separation glasses

Type of 3D glasses. Decodes 3D images by using opposite colour filters as used in the projection. Red and cyan are commonly used colours.

Convergence depth cue

A stereoscopic depth cue; the brain receives depth information from the eye muscles that rotate the eyes around the vertical (Y) axis when converging on objects at different depths.

Depth Budget

The limit of comfortable 3D viewing in a stereoscopic film.

DLP-Link Protocol

DLP-Link is a communication protocol that uses the DLP chip inside DLP TVs and DLP projectors to synchronise active shutter glasses.

Dual-strip projection

A method of displaying stereoscopic films using two projectors, one each for the left and right images.

Frame rate

The number of frames shown per second on screen. The film standard is 24 fps (frames per second).

Ghosting

This is where the image intended for one eye bleeds into the frame seen by the other. It is a problem that may need to be reduced using an ‘anti-ghosting’ process. Some formats suffer from this more than others.

HD (high definition)

A video signal with a higher resolution than standard definition, offering up to five times the picture detail.

(HIT) Horizontal Image Translation

A digital post-production process where convergence can be altered.

Hyper-stereoscopy

An effect which is created in 3D filmmaking when the interaxial distance is extremely wide.

Hypo-stereoscopy

An effect which is created in 3D filmmaking when the interaxial distance is extremely narrow.

Interaxial distance

The distance between the two cameras in a 3D production.

Keystoning

A problem that can arise in 3D filmmaking when the cameras are angled inwards; the left and right edges of the two images no longer match.

Motion parallax

A monoscopic depth cue, nearer objects move a greater distance relative to further objects.

Monoscopic depth cues

A method of working out depth by using only one viewpoint. Either one eye or one camera lens.

Occlusion (interposition) depth cue

A monoscopic depth cue; overlapping objects are perceived as being closer than the objects behind them.

Over and Under

A method of squeezing two frames into a single frame; they are printed one above the other.

Passive polarised glasses

Type of 3D glasses. Decodes 3D images using opposing polarising filters matching those used for the left and right channels of the projection.

Perspective depth cue

A monoscopic depth cue; distance is judged from an object's apparent size.

Polarisation

A method for viewing 3D images, often used in cinemas. Glasses are usually very cheap and often disposable.

Reality Camera System

A specially designed camera system, created for the 2003 James Cameron film Ghosts of the Abyss. It utilised HD video.

Single strip 3D

A method of displaying 3D using a single projector.

Space-Vision

The first single-strip 3D projection method, invented in the 1960s; it utilised over-and-under encoding.

Stereoscope

Invented in 1838 by Charles Wheatstone, an apparatus used to view stereoscopic images.

Stereoscopic depth cues

Depth cues based on two sources, left and right eyes.

Stereoscopic depth perception

The method used to see depth by using stereopsis.

Stereopsis

Often referred to as ‘depth perception’. It is the process of receiving a slightly different image in each eye, which results in a 3D view.

Stereoscopy

The process of recreating a virtual 3D image. First demonstrated by Sir Charles Wheatstone in 1838.

Stereovision

A single-strip 3D method used in the 1960s; it split the two frames vertically.

Stereoscopic Window

The area that a 3D film is viewed through.

Teleview

A method of cinema projection invented in 1923, uses the alternate-frame technology

Time Separation

The type of encoding used in active 3D glasses. Dual images are switched off and on periodically.

Video-on-demand

A service that allows you to view video whenever you want to see it; usually provided as part of a package (Virgin, Sky, etc.).
