CHAPTER-1 AIMS OF THE 3D TV PROJECT
Project Aims to Create 3D Television by 2020
Tokyo - Imagine watching a football match on a TV that not only shows the players in three dimensions but also lets you experience the smells of the stadium and maybe even pat a goal scorer on the back. Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia. The targeted "virtual reality" television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor. "Can you imagine hovering over your TV to watch Japan versus Brazil in the finals of the World Cup as if you are really there?" asked Yoshiaki Takeuchi of Japan's Ministry of Internal Affairs and Communications. While companies, universities and research institutes around the world have made some progress on reproducing 3D images suitable for TV, developing the technologies to create the sensations of touch and smell could prove the most challenging, Takeuchi said in an interview with Reuters. Researchers are looking into ultrasound, electric stimulation and wind pressure as potential technologies for touch. Such a TV would have a wide range of potential uses. It could be used in home-shopping programs, allowing viewers to "feel" a handbag before placing their order, or in the medical industry, enabling doctors to view or even perform simulated surgery on 3D images of someone's heart. The future TV is part of a larger national project under which Japan aims to promote "universal communication," a concept whereby information is shared smoothly and intelligently regardless of location or language.
Takeuchi said an open forum covering a broad range of technologies related to universal communication, such as language translation and advanced Web search techniques, could be established by the end of this year. Researchers from several top firms, including Matsushita Electric Industrial Co. Ltd. and Sony Corp., contributed to a report on the project released last month. The ministry plans to request a budget of more than 1 billion yen to help fund the project in the next fiscal year starting in April 2006.
CHAPTER-2 INTRODUCTION
Three-dimensional TV is expected to be the next revolution in the history of television. The researchers implemented a 3D TV prototype system with real-time acquisition, transmission and 3D display of dynamic scenes. They developed a distributed, scalable architecture to manage the high computation and bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience.
2.1 Why 3D TV
The evolution of visual media such as cinema and television is one of the major hallmarks of our modern civilization. In many ways, these visual media now define our modern lifestyle. Many of us are curious: what is our lifestyle going to be in a few years? What kind of films and television are we going to see? Although cinema and television both evolved gradually over decades, there were stages that were, at the time, seen as revolutions: 1) at first, films were silent, then sound was added; 2) cinema and television were initially black-and-white, then color was introduced; 3) computer imaging and digital special effects have been the latest major novelty.
So the question is: what is the next revolution in cinema and television going to be? If we look at these stages closely, we can notice that all types of visual media have been evolving closer to the way we see things in real life. Sound, color and computer graphics brought a good part of it, but in real life we constantly see objects around us at close range, we sense their location in space, and we see them from different angles as we change position. This has not been possible in ordinary cinema. Movie images lack true dimensionality and limit our sense that what we are seeing is real. Nearly a century ago, in the 1920s, the great film director Sergei Eisenstein said that the future of cinematography was the 3D motion picture. Many other cinema pioneers thought the same way. Even the Lumière brothers experimented with three-dimensional (stereoscopic) images, using two films tinted in red and blue (or green) and projected simultaneously onto the screen. Viewers saw stereoscopic images through glasses tinted in the opposite colors, but the resulting image was black-and-white, as in the first feature-length stereoscopic film "The Power of Love" (1922, USA, dir. H. Fairall).
CHAPTER-3 BASICS OF 3D TV
Humans gain three-dimensional information from a variety of cues. Two of the most important ones are binocular parallax and motion parallax.
3.1 Binocular Parallax
Binocular parallax means that, for any point you fixate, the images on the two eyes are slightly different. These two different images nonetheless allow us to perceive a stable visual world. Binocular parallax refers to the ability of the eyes to see a solid object, and a continuous surface behind that object, even though the eyes receive two different views.
3.2 Motion Parallax
Motion parallax is the change in the information at the retina caused by the relative movement of objects as the observer moves to the side (or moves the head sideways). Motion parallax varies depending on the distance of the observer from the objects. The observer's movement also causes occlusion (the covering of one object by another), and as the movement changes, so does the occlusion. This provides a powerful cue to the distance of objects from the observer; for example, when you sit in a moving train, the trees outside appear to move in the direction opposite to yours. Wheatstone was able to scientifically prove the link between parallax and depth perception using the stereoscope, the world's first three-dimensional display device. This raises the question of what depth perception, stereoscopic images and the stereoscope actually are. Let's look at these terms.
3.2.1 Depth Perception
Depth perception is the visual ability to perceive the world in three dimensions, a trait common to many higher animals, and it allows the beholder to accurately gauge the distance to an object and to distinguish objects in a visual field. The small distance between our eyes gives us stereoscopic depth perception: the brain combines the two slightly different images into one 3D image. It works most effectively for distances up to about 18 feet. For objects at a greater distance, our brain uses relative size and motion to determine depth. Figure 3.1 illustrates depth perception.
Fig.3.1 Depth Perception
As shown in the figure, each eye captures its own view, and the two separate images are sent to the brain for processing. When the two images arrive simultaneously in the back of the brain, they are united into one picture. The mind combines the two images by matching up the similarities and adding in the small differences. Those small differences between the two images add up to a big difference in the final picture! The combined image is more than the sum of its parts: it is a three-dimensional stereo picture. The word "stereo" comes from the Greek word "stereos", which means firm or solid. With stereo vision you see an object as solid in three spatial dimensions (width, height and depth, or x, y and z). It is the added perception of the depth dimension that makes stereo vision so rich and special.
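The binocular cue described above can be made concrete with the standard relation for an idealized, rectified pair of eyes or cameras: a point whose two images differ by a horizontal disparity d, seen with baseline B and focal length f, lies at depth Z = f·B/d. A minimal sketch of that relation (the focal length and disparities below are illustrative numbers, not measurements from the system described later):

```python
# Depth from binocular disparity for an idealized, rectified stereo pair.
# Z = f * B / d, where
#   f = focal length in pixels, B = eye/camera separation (baseline),
#   d = horizontal disparity in pixels between the two views.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for one matched point (illustrative values only)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the viewer")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # Roughly human-like numbers: about 6.5 cm between the eyes.
    for d in (40.0, 10.0, 2.0):   # larger disparity corresponds to a nearer object
        z = depth_from_disparity(focal_px=800.0, baseline_m=0.065, disparity_px=d)
        print(f"disparity {d:5.1f} px  ->  depth {z:5.2f} m")
```

Larger disparities correspond to nearer objects, which is why the cue weakens with distance and, as noted above, other cues take over beyond roughly 18 feet.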
3.2.2 Stereographic Images
A stereographic image pair consists of two pictures taken with a spatial or temporal separation that are then arranged to be viewed simultaneously. When viewed this way, they provide the sense of a three-dimensional scene by exploiting the innate capability of the human visual system to detect three dimensions. Figure 3.2 shows a stereographic image pair.
Fig.3.2 Stereoscopic Images
As you can see, a stereoscopic image is composed of a right-perspective frame and a left-perspective frame, one for each eye. When your right eye views the right frame and your left eye views the left frame, your brain perceives a true 3D view.
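One simple way to present such a left/right pair on an ordinary 2D screen is the tinted-glasses (anaglyph) method mentioned earlier in connection with the Lumière brothers: one view feeds the red channel, the other the green and blue channels, and colored filters route each view to the correct eye. A minimal sketch, assuming two same-sized images named left.png and right.png exist (the file names are placeholders):

```python
# Build a red-cyan anaglyph from a left/right stereo pair (illustrative only).
import numpy as np
from PIL import Image

def make_anaglyph(left_path: str, right_path: str) -> Image.Image:
    left = np.asarray(Image.open(left_path).convert("RGB"), dtype=np.uint8)
    right = np.asarray(Image.open(right_path).convert("RGB"), dtype=np.uint8)
    if left.shape != right.shape:
        raise ValueError("left and right frames must have the same dimensions")
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]      # red channel from the left view
    out[..., 1:] = right[..., 1:]   # green and blue channels from the right view
    return Image.fromarray(out)

if __name__ == "__main__":
    make_anaglyph("left.png", "right.png").save("anaglyph.png")
```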
Fig.3.3 Stereoscopes
3.2.3 Stereoscope
The stereoscope is an optical device for creating stereoscopic (three-dimensional) effects from flat (two-dimensional) images; D. Brewster first constructed the stereoscope in 1844. It is provided with lenses, under which two equal images are placed, so that one is viewed with the right eye and the other with the left. Observed at the same time, the two images merge
into a single virtual image which, as a consequence of our binocular vision, appears to be three-dimensional. For those wondering what "stereoscopic" is all about: viewing stereoscopic images gives an enhanced depth perception, similar to the depth perception we get in real life and the same effect that IMAX 3D and many computer games now provide.
3.3 Holographic Images
A holographic image is a luminous, 3D, transparent, colored and non-material image appearing out of a 2D medium called a hologram. A holographic image cannot be viewed without the proper lighting. Holographic images can be viewed in virtual space (behind the film plane), in real space (in front of the film plane), or in both at once. They may be orthoscopic, that is, have the same appearance of depth and parallax as the original 3D scene, or pseudoscopic, in which the scene depth is inverted. Holographic images do not cast a shadow, since they are non-material, and they can only be viewed under suitable illumination.
CHAPTER-4 OVERVIEW OF THE SYSTEM
3D video usually refers to stored animated sequences, whereas 3D TV includes real-time acquisition, coding and transmission of the dynamic scene. In this seminar we present the first end-to-end 3D TV system with 16 independent high-resolution views and an autostereoscopic display. The researchers used hardware-synchronized cameras to capture multiple perspectives of the scene and developed a fully distributed architecture with clusters of PCs on the sender and receiver sides. The system is scalable in the number of acquired, transmitted and displayed video streams, and its architecture is flexible enough to enable a broad range of research in 3D TV. It provides enough viewpoints and enough pixels per viewpoint to produce a believable and immersive 3D experience. The system makes the following contributions:
1. Distributed architecture
2. Scalability
3. Multiview video rendering
4. High-resolution 3D display
5. Computational alignment for 3D display
4.1 Model Based System
One approach to 3D TV is to acquire multiview video from sparsely arranged cameras and to use some model of the scene for view interpolation. Typical scene models are per-pixel depth maps, the visual hull, or a prior model of the acquired objects, such as human body shapes, as shown in Figure 4.1.
Fig.4.1 Interpolations
It has been shown that even coarse scene models improve the image quality during view synthesis. It is possible to achieve very high image quality with a two-layer image representation that includes automatically extracted boundary mattes near depth discontinuities. The Blue-C system consists of a room-sized environment with real-time capture and a spatially immersive display. All 3D video systems provide the ability to interactively control the viewpoint, a feature that has been termed free-viewpoint video by the MPEG Ad-Hoc Group on 3D Audio and Video (3DAV). Real-time acquisition of scene models for general, real-world scenes is very difficult: many systems do not provide real-time end-to-end performance, and those that do are limited to simple scenes with only a handful of objects. A dense light field representation does not require a scene model, but on the other hand it requires more storage and transmission bandwidth. Light field systems are therefore the next topic.
4.2 Light Field System
A light field represents radiance as a function of position and direction in regions of space free of occluders. The light field describes the amount of light traveling through every point in 3D space in every possible direction; it varies with the wavelength λ, the position x and the unit direction vector ω. In this context the ultimate goal, which Gavin Miller called the "hyper display", is to capture a time-varying light field passing through a surface and to emit the same light field through another surface with minimum delay. Acquisition of dense,
dynamic light fields has only recently become feasible. Some systems use a bundle of optical fibers in front of a high-definition camera to capture multiple views simultaneously. The problem with a single camera is that its limited resolution greatly reduces the number and resolution of the acquired views. A dense array of synchronized cameras, connected to a cluster of PCs, yields high-resolution light fields; one such camera array consists of up to 128 cameras and special-purpose hardware to compress and store all the video data in real time. Most light field cameras allow interactive navigation and manipulation of the dynamic scene. Now, let's move on to the architecture of the 3D TV system.
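Before moving on, the light field idea can be made concrete with the common two-plane parameterization: each ray is indexed by its intersection (u, v) with a camera plane and (s, t) with an image plane, so a discretized light field is a 4D array of radiance samples, and rendering a new view amounts to resampling rays from it. A minimal sketch of nearest-neighbor ray lookup in such an array (the array sizes are illustrative, not those of the acquisition system described next):

```python
# Nearest-neighbor lookup in a discretized two-plane light field L[u, v, s, t, rgb].
import numpy as np

# Illustrative light field: 16 horizontal camera positions (u), 1 vertical (v),
# and a 64x48 image plane (s, t) with RGB samples.
rng = np.random.default_rng(0)
light_field = rng.random((16, 1, 64, 48, 3), dtype=np.float32)

def sample_ray(lf: np.ndarray, u: float, v: float, s: float, t: float) -> np.ndarray:
    """Return the radiance of the stored ray nearest to the requested one."""
    U, V, S, T, _ = lf.shape
    iu = int(round(np.clip(u, 0, U - 1)))
    iv = int(round(np.clip(v, 0, V - 1)))
    i_s = int(round(np.clip(s, 0, S - 1)))
    i_t = int(round(np.clip(t, 0, T - 1)))
    return lf[iu, iv, i_s, i_t]

# Example: a ray leaving camera position u=7.3 toward image-plane sample (20.6, 11.2).
print(sample_ray(light_field, u=7.3, v=0.0, s=20.6, t=11.2))
```

In practice, a denser camera spacing means less interpolation between stored rays, which is why the dense arrays mentioned above give the best results.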
CHAPTER-5 ARCHITECTURE OF 3D TV
Figure 5.1 shows a schematic representation of the 3D TV system.
Fig.5.1 3D TV System
The whole system consists of three main blocks:
1. Acquisition
2. Transmission
3. Display unit
The system consists mostly of commodity components that are readily available today. Note that the overall architecture accommodates different display types. Let's look at the three blocks one after another.
5.1 Acquisition
The acquisition stage consists of an array of hardware-synchronized cameras. Small clusters of cameras are connected to the producer PCs. The producers capture live, uncompressed video streams and encode them using standard MPEG coding. The compressed video is then broadcast on separate channels over a transmission network, which could be digital cable, satellite TV or the Internet.
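The producer side is thus a simple per-camera pipeline: grab a frame, encode it temporally, and hand the compressed stream to that camera's broadcast channel. The sketch below shows only the structure; Frame, Mpeg2Encoder and Channel are hypothetical placeholder names, not a real capture or encoding API:

```python
# Skeleton of one producer process handling one camera. The grab/encode/send
# pieces are placeholders that show the structure only, not real capture or
# MPEG-2 APIs.
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int
    timestamp: float
    pixels: bytes                     # uncompressed frame data from the camera

class Mpeg2Encoder:                   # placeholder for a per-stream temporal encoder
    def encode(self, frame: Frame) -> bytes:
        return frame.pixels           # a real encoder would emit a compressed bitstream

class Channel:                        # placeholder for one broadcast channel (cable, satellite, IP)
    def __init__(self, channel_id: int) -> None:
        self.channel_id = channel_id

    def send(self, payload: bytes) -> None:
        print(f"channel {self.channel_id}: sent {len(payload)} bytes")

def run_producer(camera_id: int, frames: list[Frame]) -> None:
    """Encode each captured frame independently and broadcast it on its own channel."""
    encoder = Mpeg2Encoder()
    channel = Channel(camera_id)
    for frame in frames:
        channel.send(encoder.encode(frame))

if __name__ == "__main__":
    demo = [Frame(camera_id=0, timestamp=t / 12.0, pixels=b"\x00" * 1024) for t in range(3)]
    run_producer(0, demo)
```

Encoding each stream independently like this is what makes the acquisition side scale with the number of cameras, a point taken up again in the transmission section.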
As explained above, each camera captures progressive high-definition video in real time. The system uses 16 Basler A101fc color cameras with 1300x1030-pixel, 8-bits-per-pixel CCD sensors. You may be wondering what CCD image sensors and MPEG coding are.
5.1.1 CCD Image Sensors
Charge-coupled devices (CCDs) are electronic devices that are capable of transforming a light pattern (image) into an electric charge pattern (an electronic image). The CCD consists of several individual elements that can collect, store and transport electrical charge from one element to another. This, together with the photosensitive properties of silicon, is used to design image sensors. Each photosensitive element then represents a picture element (pixel). With semiconductor technologies and design rules, structures are made that form lines or matrices of pixels. One or more output amplifiers at the edge of the chip collect the signals from the CCD. An electronic image is obtained by exposing the sensor to a light pattern and then applying a series of pulses that transfer the charge of one pixel after another to the output amplifier, line after line. The output amplifier converts the charge into a voltage, and external electronics transform this output signal into a form suitable for monitors or frame grabbers. CCDs have extremely low noise figures. Figure 5.2 shows a CCD sensor.
Fig.5.2 CCD Image Sensor
A CCD image sensor can be a color sensor or a monochrome sensor. In a color image sensor, an integral RGB color filter array provides color responsivity and separation; a monochrome image sensor senses only in black and white. An important environmental parameter to consider is the operating temperature.
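The readout sequence described above, shifting charge packets line by line toward a single output amplifier that converts each packet to a voltage, can be mimicked in a few lines of array code. A toy sketch (the 4x6 charge pattern is invented purely for illustration):

```python
# Toy simulation of CCD readout: charges are shifted line by line toward a
# single output amplifier, which converts each charge packet to a value.
import numpy as np

charge = np.arange(24, dtype=float).reshape(4, 6)  # illustrative charge pattern (4 lines x 6 pixels)

def read_out(sensor: np.ndarray) -> list[float]:
    """Return pixel values in the order a serial CCD register would deliver them."""
    values = []
    for line in sensor:                 # each line is shifted into the serial register
        for charge_packet in line:      # then shifted pixel by pixel to the amplifier
            values.append(float(charge_packet))  # charge-to-voltage conversion
    return values

print(read_out(charge)[:8])  # first eight samples leaving the amplifier
```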
5.1.2 MPEG-2 Encoding
MPEG-2 is an extension of the MPEG-1 international standard for digital compression of audio and video signals. MPEG-2 is directed at broadcast formats at higher data rates; it provides extra algorithmic 'tools' for efficiently coding interlaced video, supports a wide range of bit rates and provides for multichannel surround sound coding. MPEG-2 aims to be a generic video coding system supporting a diverse range of applications. Different algorithmic 'tools', developed for many applications, have been integrated into the full standard. Implementing all the features of the standard in every decoder would be unnecessarily complex and a waste of bandwidth, so a small number of subsets of the full standard, known as profiles and levels, have been defined. A profile is a subset of algorithmic tools, and a level identifies a set of constraints on parameter values (such as picture size and bit rate). A decoder that supports a particular profile and level is only required to support the corresponding subset of the full standard and set of parameter constraints.
The cameras are connected by an IEEE-1394 High Performance Serial Bus to the producer PCs. The maximum transmitted frame rate at full resolution is 12 frames per second. Two cameras each are connected to one of the eight producer PCs. All PCs in this prototype have 3 GHz Pentium 4 processors, 2 GB of RAM, and run Windows XP. The Basler cameras were chosen primarily because they have an external trigger that allows complete control over the video timing. The researchers built a PCI card with a custom programmable logic device (CPLD) that generates the synchronization signal for all the cameras. So, what is a PCI card?
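Before turning to the PCI card, the profile-and-level idea can be illustrated with a small conformance check: a stream conforms to a given point if its parameters stay within that level's ceilings. The numeric limits below are the commonly quoted ones for Main Profile @ Main Level and are included only as an example, not taken from this report:

```python
# Illustrative profile/level conformance check in the spirit of MPEG-2 profiles
# and levels. The limits below are the commonly quoted ceilings for
# Main Profile @ Main Level and are shown only as an example.
from dataclasses import dataclass

@dataclass
class Level:
    name: str
    max_width: int
    max_height: int
    max_fps: float
    max_bitrate_bps: int

MAIN_LEVEL = Level("Main Profile @ Main Level", 720, 576, 30.0, 15_000_000)

def conforms(width: int, height: int, fps: float, bitrate_bps: int, level: Level) -> bool:
    """True if the stream parameters stay within the level's constraints."""
    return (width <= level.max_width and height <= level.max_height
            and fps <= level.max_fps and bitrate_bps <= level.max_bitrate_bps)

# A standard-definition broadcast stream fits; the 1300x1030 camera streams of this
# system would exceed the picture-size constraint of this particular level.
print(conforms(720, 576, 25.0, 6_000_000, MAIN_LEVEL))    # True
print(conforms(1300, 1030, 12.0, 6_000_000, MAIN_LEVEL))  # False
```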
5.1.3 PCI Card
The power and speed of computer components has increased at a steady rate since desktop computers were first developed decades ago. Software makers create new applications capable of utilizing the latest advances in processor speed and hard drive capacity, while hardware makers rush to improve components and design new technologies to keep up with the demands of high-end software.
Fig.5.3 PCI Card
There is one element, however, that often escapes notice: the bus. Essentially, a bus is a channel or path between the components in a computer. Having a high-speed bus is as important as having a good transmission in a car: if you have a 700-horsepower engine combined with a cheap transmission, you cannot get all that power to the road. There are many different types of buses; the one of interest here is the Peripheral Component Interconnect (PCI) bus.
All 16 cameras are individually connected to the synchronization card, which is plugged into one of the producer PCs. Although it is possible to use software synchronization, precise hardware synchronization is considered essential for dynamic scenes. Note that the price of the acquisition cameras can be high, since they will mostly be used in TV studios. The 16 cameras are arranged in a regularly spaced linear array, as shown in Figure 5.4.
Fig.5.4 Array of 16 Cameras
The optical axis of each camera is roughly perpendicular to a common camera plane. It is impossible to align multiple cameras precisely, so standard calibration procedures are used to determine the intrinsic and extrinsic camera parameters. In general, the cameras could be arranged arbitrarily, because light field rendering in the consumers is used to synthesize new views. A densely spaced array provides the best light field capture, but high-quality reconstruction filters could be used if the light field is undersampled.
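The intrinsic and extrinsic parameters mentioned here are typically estimated by photographing a known target (such as a checkerboard) with each camera and fitting a pinhole model to the detected corners. The sketch below uses OpenCV purely as a generic illustration of such a procedure, not as the report's actual calibration pipeline; the file pattern and board dimensions are placeholders:

```python
# Generic single-camera calibration from checkerboard images (illustrative only).
import glob
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners of the checkerboard (placeholder size)
SQUARE_SIZE = 0.025       # checkerboard square size in meters (placeholder)

# 3D coordinates of the checkerboard corners in the board's own frame.
object_corners = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
object_corners[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

object_points, image_points, image_size = [], [], None
for path in glob.glob("calib_cam00_*.png"):          # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        object_points.append(object_corners)
        image_points.append(corners)
        image_size = gray.shape[::-1]

if not object_points:
    raise SystemExit("no usable calibration images found")

# Intrinsics (camera matrix, distortion) plus per-view extrinsics (rvecs, tvecs).
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)
print("reprojection RMS error:", rms)
```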
5.2 Transmission
Transmitting 16 uncompressed video streams with 1300x1030 resolution and 24 bits per pixel at 30 frames per second requires roughly 14.4 Gb/s of bandwidth, which is well beyond current broadcast capabilities. For compression and transmission of dynamic multiview video data there are two basic design choices: either the data from multiple cameras is compressed jointly using spatial or spatio-temporal encoding, or each video stream is compressed individually using temporal encoding. The first option offers higher compression, since there is a lot of coherence between the views. However, it requires a centralized processor to compress multiple video streams. This compression-hub architecture is not scalable, since the addition of more views will eventually overwhelm the internal bandwidth of the encoder. The researchers therefore decided to use temporal encoding of the individual video streams on distributed processors. This strategy has other advantages: existing broadband protocols and compression standards do not need to be changed for immediate real-world 3D TV experiments, and the system can plug into today's digital TV broadcast infrastructure and co-exist in perfect harmony with 2D TV. Because they did not have access to digital broadcast equipment, they implemented the modified architecture shown in Figure 5.5.
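Before looking at the modified setup, the raw-bandwidth figure quoted at the start of this section is easy to verify: multiply pixels per frame by bits per pixel, frame rate and number of streams. A quick sketch of that arithmetic (the total lands in the 14-15 Gb/s range, matching the figure above, with the exact value depending on whether decimal or binary prefixes are used):

```python
# Raw bandwidth of 16 uncompressed camera streams: pixels x bits x fps x streams.
width, height = 1300, 1030
bits_per_pixel = 24
fps = 30
streams = 16

bits_per_second = width * height * bits_per_pixel * fps * streams
print(f"{bits_per_second / 1e9:.1f} Gb/s (decimal prefix)")    # ~15.4
print(f"{bits_per_second / 2**30:.1f} Gib/s (binary prefix)")  # ~14.4
```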
Fig.5.5 Modified System
Eight producer PCs are connected by gigabit Ethernet to eight consumer PCs. Video streams at full camera resolution (1300x1030) are encoded with MPEG-2 and immediately decoded on the producer PCs. This essentially corresponds to a broadband network with infinite bandwidth and almost zero delay. The gigabit Ethernet provides all-to-all connectivity between decoders and consumers, which is important for the distributed rendering and display implementation. So, what is gigabit Ethernet?
5.2.1 Gigabit Ethernet
Gigabit Ethernet is a high-speed form of Ethernet (the most widely installed LAN technology) that can provide data transfer rates of about 1 gigabit per second (Gbps). It provides the capacity for server interconnection, campus backbone architecture and the next generation of super-user workstations, with a seamless upgrade path from existing Ethernet implementations.
5.3 Decoder & Consumer Processing
The receiver side is responsible for generating the appropriate images to be displayed. The system needs to be able to provide all possible views to the end users at every instant. The decoder receives a compressed video stream, decodes it, and stores the current
uncompressed source frame in a buffer, as shown in Figure 5.6. Each consumer has a virtual video buffer (VVB) with data from all current source frames (i.e., all acquired views at a particular time instant).
Fig.5.6 Block Diagram of Decoder and Consumer Processing
The consumer then generates a complete output image by processing image pixels from multiple frames in the VVB. Due to bandwidth and processing limitations it would be impossible for each consumer to receive the complete source frames from all the decoders; this would also limit the scalability of the system. One option is a one-to-one mapping between cameras and projectors, but it is not very flexible: the cameras need to be equally spaced, which is hard to achieve in practice, and this method cannot handle the case where the number of cameras and projectors is not the same. Another, more flexible approach is to use image-based rendering to synthesize views at the correct virtual camera positions. The system uses unstructured lumigraph rendering on the consumer side, choosing a plane that is roughly in the center of the depth of field; the virtual viewpoints for the projected images are chosen at even spacing. Now focus on the processing for one particular consumer, i.e., one particular view. For each pixel o(u, v) in the output image, the display controller can determine the view number v and the position (x, y) of each source pixel s(v, x, y) that contributes to it. To generate output views from the incoming video streams, each output pixel is a linear combination of k source pixels:

o(u, v) = Σ_i w_i · s(v_i, x_i, y_i),  i = 1, ..., k          (1)
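Equation (1) is a per-pixel weighted sum over k source pixels scattered across the views held in the VVB. A minimal sketch of that blending step (the array sizes, the choice k = 2, the uniform weights and the random pixel mapping are illustrative and only make the example self-contained; in the real system the mapping and weights come from the controller, as described below):

```python
# Per-pixel blending of k source pixels into one output pixel, in the spirit of
# equation (1): o(u, v) = sum_i w_i * s(v_i, x_i, y_i).
import numpy as np

views, height, width, k = 16, 48, 64, 2
rng = np.random.default_rng(1)

vvb = rng.random((views, height, width, 3), dtype=np.float32)  # virtual video buffer

# Illustrative lookup tables: for each output pixel, which k source pixels to read
# (view index v, row y, column x) and how to weight them. Weights sum to 1 per pixel.
src_v = rng.integers(0, views, size=(height, width, k))
src_y = rng.integers(0, height, size=(height, width, k))
src_x = rng.integers(0, width, size=(height, width, k))
weights = np.full((height, width, k), 1.0 / k, dtype=np.float32)

def blend_output(vvb: np.ndarray) -> np.ndarray:
    """Compute every output pixel as a weighted sum of its k source pixels."""
    gathered = vvb[src_v, src_y, src_x]              # shape (height, width, k, 3)
    return (weights[..., None] * gathered).sum(axis=2)

print(blend_output(vvb).shape)   # (48, 64, 3)
```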
The blending weights w can be pre-computed by the controller based on the virtual view information. The controller sends the positions (x, y) of the k source pixels to each decoder v for pixel selection. The index c of the requesting consumer is sent to the decoder for pixel routing from the decoders to the consumer. Optionally, multiple pixels can be buffered in the decoder for pixel-block compression before being sent over the network. The consumer decompresses the pixel blocks and stores each pixel in VVB number v at position (x, y). Each output pixel requires data from k source frames, which means that the maximum bandwidth on the network to the VVB is k times the size of the output image times the number of frames per second (fps). This can be substantially reduced if pixel-block compression is used, at the expense of more processing. To provide scalability, it is important that this bandwidth is independent of the total number of transmitted views. The processing requirements in the consumer are extremely simple: it only needs to compute equation (1) for each output pixel. The weights are pre-computed and stored in a lookup table, and the memory requirement is k times the size of the output image. Assuming simple pixel-block compression, consumers can easily be implemented in hardware, which means that decoders, network and consumers could be combined on one printed circuit board. Let's move on to the different types of display.
CHAPTER-6 MULTIVIEW AUTOSTEREOSCOPIC DISPLAYS
6.1 Holographic Displays
It is widely acknowledged that Dennis Gabor invented the hologram in 1948 while working on an electron microscope; he coined the word and received a Nobel Prize for inventing holography in 1971. A holographic image is truly three-dimensional: it can be viewed from different angles without glasses. This innovation could be a new revolution, a new era of holographic cinema and of holographic media as a whole. Holographic techniques were first applied to image display by Leith and Upatnieks in 1962. In holographic reproduction, interference fringes on the holographic surface diffract light from an illumination source to reconstruct the light wave front of the original object. A hologram that displays a continuous analog light field has long been considered the "holy grail" of 3D TV. The most recent device, the Mark-II Holographic Video Display, uses acousto-optic modulators, beam splitters, moving mirrors and lenses to create interactive holograms. In more recent systems, moving parts have been eliminated by replacing the acousto-optic modulators with LCDs, focused light arrays, optically addressed spatial light modulators or digital micromirror devices. Figure 6.1 shows a holographic image.
Fig.6.1 Holographic Image
All current holo-video devices use single-color laser light. To reduce the amount of display data they provide only horizontal parallax. The display hardware is very large in relation to the size of the image, so holographic display of dynamic natural scenes cannot yet be done in real time.
6.2 Holographic Movies
The laboratory of Professor Victor Komar (NIKFI) has developed the world's first holographic equipment capable of projecting genuine three-dimensional holographic films, as well as holographic slides and real objects, for multiple viewers simultaneously. This holographic technology was primarily designed for cinema, but it has many uses in advertising and show business as well. In parallel, a new 3D digital image processing and projection technology has been developed; it can be used to create modern 3D digital movie theaters and for the computer modeling of 3D virtual realities. On the same principle, a 3D color TV system has already been tested. In all cases the audience can see colorful 3D images without inconvenient accessories. These technologies have received worldwide recognition, including an Oscar for Technical Achievement in Hollywood, a Nika Film Award in Moscow, and endorsement from MIT's Media Lab, among others.
6.2.1 Volumetric Displays
Volumetric displays use a medium to fill or scan a three-dimensional space and individually address and illuminate small volume elements (voxels). However, volumetric systems produce transparent images that do not provide a fully convincing three-dimensional experience. Furthermore, they cannot correctly reproduce the light field of a natural scene because of
their limited color reproduction and lack of occlusion. The design of large-size volumetric displays also poses some difficult obstacles.
6.2.2 Parallax Displays
Parallax displays emit spatially varying directional light. Much of the early 3D display research focused on improvements to Wheatstone's stereoscope. In 1903, F. Ives used a plate with vertical slits as a barrier over an image with alternating strips of left-eye/right-eye images; the resulting device is called a parallax stereogram. To extend the limited viewing angle and restricted viewing position of the stereogram, Kanolt and H. Ives used narrower slits and a smaller pitch between the alternating image strips. These multiview images are called parallax panoramagrams. Stereograms and panoramagrams provide only horizontal parallax.
Lippmann proposed using an array of spherical lenses instead of slits. This is frequently called a "fly's eye" lens sheet, and the resulting image is called an integral photograph. An integral photograph is a true planar light field with directionally varying radiance per pixel, but integrals sacrifice significant spatial resolution in both dimensions to gain full parallax. Researchers in the 1930s introduced the lenticular sheet, a linear array of narrow cylindrical lenses called lenticules. Lenticular images found widespread use for advertising, CD covers and postcards. To improve the native resolution of the display, H. Ives invented the multi-projector lenticular display in 1931: he painted the back of a lenticular sheet with diffuse paint and used it as a projection surface for 39 slide projectors. The high output resolution, the large number of views and the large physical dimensions of the display described here likewise lead to a very immersive 3D display.
Other research in parallax displays includes time-multiplexed and tracking-based systems. In time multiplexing, multiple views are projected at different time instances using a sliding window or LCD shutter; this inherently reduces the frame rate of the display and may lead to noticeable flickering. Head-tracking designs are mostly used to display stereo images, although they could also be used to introduce some vertical parallax in multiview lenticular displays. Today's commercial autostereoscopic displays use variations of parallax barriers or lenticular sheets placed on top of LCD or plasma screens. Parallax barriers generally reduce some of the brightness and sharpness of the image. The projector-based 3D display described here currently has a native resolution of 12 million pixels.
Fig.6.2 Images of a scene from the viewer side of the display (top row) and as seen from some of the cameras (bottom row).
6.2.3 Multi-Projector Displays
Multi-projector displays offer very high resolution, flexibility, excellent cost performance, scalability and large-format images. Graphics rendering for multi-projector systems can be efficiently parallelized on clusters of PCs using, for example, the Chromium API. Projectors also provide the necessary flexibility to adapt to non-planar display geometries. Precise manual alignment of the projector array is tedious and becomes downright impossible for more than a handful of projectors or for non-planar screens. Some systems use cameras in the loop to automatically compute the relative projector poses for automatic alignment. Here, a static camera is used for automatic image alignment and brightness adjustment of the projectors.
CHAPTER-7 3D DISPLAY
This is a brief explanation that should sort out some of the confusion about the many 3D display options that are available today: how they work, and what the relative tradeoffs of each technique are. Figure 7.1 shows a diagram of the multi-projector 3D display with lenticular sheets.
Fig.7.1 Projection-type lenticular 3D displays
The system uses 16 NEC LT-170 projectors with 1024x768 native output resolution. This is less than the resolution of the acquired and transmitted video, which has 1300x1030 pixels; however, HDTV projectors are much more expensive than commodity projectors, and commodity projectors have a compact form factor. Of the eight consumer PCs, one is dedicated as the controller. The consumers are identical to the producers except for a dual-output graphics card that is connected to two projectors; the graphics card is used only as an output device. For the rear-projection system shown in the figure, two lenticular sheets are mounted back-to-back with optical diffuser material in the center. The front-projection system uses only one lenticular sheet, with a retro-reflective front-projection screen material of flexible fabric mounted on the back. The photographs show the rear- and front-projection arrangements.
Fig.7.2 Rear Projection and Front Projection
The projection-side lenticular sheet of the rear-projection display acts as a light multiplexer, focusing the projected light as thin vertical stripes onto the diffuser. Considering each lenticule to be an ideal pinhole camera, the stripes capture the view-dependent radiance of a three-dimensional light field. The viewer-side lenticular sheet acts as a light de-multiplexer and projects the view-dependent radiance back to the viewer. The single lenticular sheet of the front-projection screen both multiplexes and demultiplexes the light. The two key parameters of lenticular sheets are the field of view (FOV) and the number of lenticules per inch (LPI). The system uses 72" x 48" lenticular sheets with a 30-degree FOV and 15 LPI, and the optical design of the lenticules is optimized for multiview 3D display. The number of viewing zones of a lenticular display is related to its FOV: for example, a FOV of 30 degrees leads to 180/30 = 6 viewing zones.
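These two parameters fix the basic geometry: the FOV determines how many viewing zones tile the 180-degree half-space in front of the screen, and the lenticule pitch (1/LPI) divided by the number of views gives the width available to each view's stripe behind a lenticule. A small sketch of that arithmetic using the numbers quoted above (the per-view stripe width is a geometric estimate, not a figure from the report):

```python
# Basic lenticular-display geometry from the quoted parameters:
# 30-degree field of view, 15 lenticules per inch, 16 views.
fov_degrees = 30.0
lenticules_per_inch = 15.0
num_views = 16

viewing_zones = 180.0 / fov_degrees                # zones tiling the 180-degree half-space
lenticule_pitch_in = 1.0 / lenticules_per_inch     # width of one lenticule, in inches
stripe_width_in = lenticule_pitch_in / num_views   # width available to one view behind a lenticule

print(f"viewing zones: {viewing_zones:.0f}")                       # 6
print(f"lenticule pitch: {lenticule_pitch_in * 25.4:.3f} mm")      # ~1.693 mm
print(f"per-view stripe width: {stripe_width_in * 25.4:.3f} mm")   # ~0.106 mm
```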
7.1 3D TV for the 21st Century
Interest in 3D has never been greater. The amount of research and development on 3D photographic, motion picture and television systems is staggering: over 1000 patent applications have been filed in these areas in the last ten years, and there are also hundreds of technical papers and many unpublished projects. I have worked with numerous systems for 3D video and 3D graphics over the last 20 years and have developed and marketed many products. In order to give some historical
perspective, I'll start with an account of my 1985 visit to Exposition 85 in Tsukuba, Japan. I spent a month in Japan visiting with 3D researchers and attending the many 3D exhibits at the Tsukuba Science Exposition. The exposition was one of the major film and video events of the century, with a good chunk of its 2.5 billion dollar cost devoted to state-of-the-art audiovisual systems in more than 25 pavilions. There was the world's largest IMAX screen, Cinema-U (a Japanese version of IMAX), OMNIMAX (a dome-projection version of IMAX using fisheye lenses) in 3D, numerous 5, 8 and 10 perforation 70mm systems (several with fisheye-lens projection onto domes and one in 3D), single, double and triple 8-perforation 35mm systems, live high-definition (1125-line) TV viewed on HDTV sets and HDTV video projectors (and played on HDTV video discs and VTRs), and giant outdoor video screens culminating in Sony's 30-meter diagonal Jumbotron (also presented in 3D). Included in the 3D feast at the exposition were four 3D movie systems, two 3D TV systems (one without glasses), a 3D slide show, a Pulfrich demonstration (synthetic 3D created by a dark filter in front of one eye), about 100 holograms of every type, size and quality (the Russians' were best), and 3D slide sets, lenticular prints and embossed holograms for purchase. Most of the technology, from a robot that read music and played the piano to the world's largest tomato plant, was developed in Japan in the two years before the exposition, but most of the 3D hardware and software was the result of collaboration between California and Japan. It was the chance of a lifetime to compare practically all of the state-of-the-art 2D and 3D motion picture and video systems, tweaked to perfection and running 12 hours a day, seven days a week. After describing the systems at Tsukuba, I will survey some of the recent work elsewhere in the world and suggest likely developments during the next decade.
CHAPTER-8 CONCLUSION
Most of the key ideas for the 3D TV system presented in this paper have been known for decades, such as lenticular screens, multi-projector 3D displays and camera arrays for acquisition. This system, however, is the first to provide enough viewpoints and enough pixels per viewpoint to produce an immersive and convincing 3D experience. One area of future research is to improve the optical characteristics of the 3D display computationally; this concept is the computational display. Another area of future research is precise color reproduction of natural scenes on multiview displays.