US20090153652A1 - Depth dependent filtering of image signal
- Publication number: US20090153652A1 (application US 12/095,176)
- Authority: US (United States)
- Prior art keywords: image, depth, spatial, view, filtering
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N13/128—Adjusting depth or disparity
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
- G06T15/005—General purpose rendering architectures
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/279—Image signal generators from 3D object models, the virtual viewpoint locations being selected by the viewers or determined by tracking
- H04N13/305—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays, using lenticular lenses, e.g. arrangements of cylindrical lenses
- H04N13/317—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays, using slanted parallax optics
Definitions
- the invention relates to a method of rendering image data for a multi-view display.
- the invention relates to a method of rendering image data for a multi-view display by means of a depth dependent spatial filter.
- the invention further relates to a multi-view display, to a signal rendering system and to computer readable code for implementing the method.
- a multi-view display is a display capable of presenting to a viewer, different images depending upon the view direction, so that an object in an image may be viewed from different angles.
- An example of a multi-view display is an auto-stereoscopic display capable of presenting a viewer's left eye with a different image than the right eye.
- Various multi-view display technologies exist, one such technology is lenticular based.
- a lenticular display is a parallax 3D display capable of showing multiple images for different horizontal viewing directions. This way, the viewer can experience, e.g., motion parallax and stereoscopic cues.
- One problem relating to multi-view displays is that images for different view-directions may overlap, thereby giving rise to ghost images, or cross-talk between images.
- Another problem relates to the fact that the number of view-directions may be relatively small, typically eight or nine, which may give rise to aliasing effects in some view-directions.
- the published US patent application US 2003/0117489 discloses a three dimensional display and method of reducing crosstalk between left and right eye images of a 3D auto-stereoscopic display.
- the disclosed method of reducing crosstalk is based on adding a base level of grey to every pixel of both the left and right images so as to raise the background grey level.
- the present invention seeks to provide improved means for rendering image data for a multi-view display, and it may be seen as an object of the invention to provide an effective filtering technique that ameliorates the perceived image quality for a viewer, or user, of a multi-view display.
- Preferably, the invention alleviates, mitigates or eliminates one or more of the above or other disadvantages singly or in any combination.
- a method of rendering image data for a multi-view display, comprising steps of receiving an image signal representing a first image, spatially filtering the first image to provide a second image, and sampling the second image to a plurality of sub-images.
- the image data is typically rendered for proper presentation.
- the rendering may be needed since the image may be based on 2D image data projected to the viewer in such a way that the viewer perceives a spatial, or 3D, dimension of the image.
- For each view-direction of an image a sub-image of the image as seen from that view-direction is generated, and the sub-images are projected into the associated view-direction.
- the rendering process typically comprises several operations or steps, e.g. depending upon the input format of the image data, the display apparatus, the type of image data, etc.
- Image data of a first image is provided in a first step. This first step need not be a first step of the entire rendering process.
- the first image is typically in a format including image plus depth data, or an associated depth map may be provided with the image data, so that the 3D image data may be determined.
- the inventor had the insight that spatial filtering for improving the perceived image quality, especially in terms of crosstalk and aliasing effects, is conventionally performed in the output domain, i.e. at a rendering stage where an input image has already been sampled, at least to some degree, for multi-view display.
- By spatially filtering the first image signal to provide a second image, and sampling the second image to a plurality of sub-images for multi-view, artefacts such as crosstalk and aliasing effects are dealt with in the input domain on a single image instead of in the output domain on a plurality of images, thereby dealing with artefacts in an efficient way.
- the method effectively deals with the reduction of artefacts, such as crosstalk and aliasing artefacts, thereby rendering further pre-processing or post-processing to remove or diminish crosstalk or aliasing artefacts unnecessary.
- band-pass filtering may be done by low-pass filtering, high-pass filtering and/or a combination of the two; these are well-known filtering techniques which may be implemented in a variety of ways, thereby ensuring a robust and versatile implementation.
- the high-pass filter amplifies high frequencies, e.g. the frequencies below the Nyquist frequency.
- the optional features as defined in dependent claim 8 are advantageous since by updating the depth of the image elements of the second image an improved handling of viewpoint changes may be provided.
- the depth is updated by setting the depth of the image element of the second image to a value between the depth of the image element of the first image and the depth of the image element of the second image.
- In this way, when an image element of the second image is substantially composed of foreground and only a little background, the depth may be set to a value substantially towards the depth of the foreground; the gradual depth transition softens the depth edge.
- In an embodiment, the depth value may be set to the maximum of the depth of the image element of the first image and the depth of the image element of the second image.
- the optional feature as defined in claim 11 is advantageous since the 2.5D video image format is a standard and widely used format.
- a multi-view display device comprising:
- the display device being a multi-view display device enhanced with the rendering method of the first aspect. It is an advantage of the present invention that the multi-view display device may either be a display device originally provided with the functionality according to the first aspect of the invention, or a display device not originally provided with that functionality but subsequently enhanced with the functionality of the present invention.
- the input module, the rendering module and the output module may be provided as a signal rendering system according to the third aspect of the invention.
- According to a fourth aspect of the invention there is provided computer readable code for implementing the method according to the first aspect.
- FIG. 1 illustrates the principle of a 3D lenticular display
- FIG. 2 shows a sketch of a lenticular display in top view
- FIG. 3 illustrates crosstalk between sub-images from neighboring views
- FIG. 4 illustrates an example of blurring of an image as a result of a camera focused at a particular depth
- FIGS. 5A-5C illustrate an embodiment of a mapping of a first image to a second image in a spatial filtering process
- FIGS. 6A and 6B illustrate an example of a 2D input image with associated depth map
- FIGS. 7A and 7B illustrate a depth dependent spatial filtering of the input image and a shifted view of the same image (or scene),
- FIGS. 8A and 8B illustrate a depth dependent spatial filtering of the input image where a visibility factor has been applied and a shifted view of the filtered image
- FIGS. 9A and 9B illustrate a depth dependent spatial filtering of the input image where an adjusted visibility factor has been applied and a shifted view of the filtered image
- FIGS. 10A and 10B illustrate a spatially filtered depth map and depth dependent spatial filtering of the input image with the filtered depth map
- FIGS. 11A and 11B illustrate a depth dependent spatial filtering of the input image for a 1D situation and a shifted view of the filtered image
- FIGS. 12A-12F illustrate aspects concerning the application of high-pass spatial filtering.
- FIGS. 1 and 2 illustrate embodiments of a multi-view display, namely sketches of a 3D lenticular display as seen from the side ( FIG. 1 ) and in top view ( FIG. 2 ).
- FIG. 1 illustrates the principle of a 3D lenticular display.
- a lenticular display is based on an LCD panel display 1 , in front of which lenses 2 are attached.
- the lenses ensure that for a specific viewing angle Θ, the viewer 3 only sees a subset of the pixels of the underlying LCD. If appropriate values are set for the subsets of pixels associated with the various viewing directions, the viewer will see different images from different viewing directions: from the illustrated angle the viewer 3 sees a center view of the image, whereas from the view-angle denoted 6 the viewer would see a side view of the image.
- Each lens covers a number of pixels 4 , 5 and projects them out, as illustrated by the number of pixels denoted 7 .
- the viewer sees one subset of pixels 4 with the right eye and another subset of pixels 5 with the left eye. A 3D experience is thereby obtained.
- FIG. 2A shows a sketch of a lenticular display in top view.
- the display comprises an array of display elements 20 or pixels, such as a conventional LC matrix display panel, where the pixels are arranged in groups, each group being associated with a view direction of an image. Each group of pixels constitutes a sub-image, each sub-image being associated with a view direction.
- An optical element, i.e. the lenses, directs light emitted from the pixels, so that light emitted from a group of pixels is directed into an angular distribution associated with the view direction of the group, thereby providing separate images to a viewer's eyes.
- the lenticular lenses are in the illustrated embodiment arranged at a slight angle or slanted with respect to the columns of the pixels, so that their main longitudinal axis is at an angle with respect to the column direction of the display elements. In this configuration the viewer will see the points sampled along a direction 22 of the lens.
- In a nine-view display, nine images, one for each view direction, are concurrently computed and shown on the groups of pixels associated with the sub-images. When a pixel is lit, the entire lens above the pixel is illuminated 21 (this is shown in FIG. 2B), so that for a specific view direction it is the entire lens above the pixel that is seen emitting the color of that pixel.
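The grouping of pixels into views on a slanted-lenticular display can be sketched as a simple index mapping. This is not taken from the patent: the modulo formula and the slant of one pixel every three rows are illustrative assumptions.

```python
def view_number(x, y, n_views=9):
    """Illustrative sub-pixel-to-view mapping for a slanted lenticular:
    the lens axis shifts by one pixel every three rows, so pixels sampled
    along the slanted direction belong to the same view."""
    return (x + y // 3) % n_views
```

With such a mapping, the pixels assigned to view k across all rows form the group whose light the lens directs into the k-th angular distribution.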
- FIGS. 1 and 2 describe an LCD-lenticular display; it is, however, to be understood that the invention is not limited to this type of display.
- the invention may also be applied with such displays as barrier-type displays, and the matrix display panel may be other than an LC panel, such as other forms of spatial light modulators, or other types of display panels such as electroluminescent or plasma panels.
- the visibility of sub-images from neighboring views from a single viewing direction may cause artefacts such as crosstalk.
- FIG. 3 shows the visible light intensity, I, as a function of the view angle for a 4⅔-display, i.e. for a display where each lens covers 4⅔ pixels in the horizontal direction. It is seen that the angular distributions 30 - 34 from different sub-images overlap.
- the perceived image of a viewer is the sum of the light from each angular distribution, and it may be seen that for this particular example, three sub-images contribute to the perceived image of each viewing direction.
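The summation over overlapping angular distributions can be modelled as a weighted sum of neighbouring sub-images. A minimal sketch, assuming three contributing views and illustrative weights:

```python
import numpy as np

def perceived_image(sub_images, weights=(0.2, 0.6, 0.2)):
    """Model the perceived image for one viewing direction as a weighted
    sum of the centre sub-image and its two neighbours (three views
    contribute, as in the 4 2/3-pixel display of FIG. 3). The weight
    values are illustrative, not measured."""
    out = np.zeros_like(np.asarray(sub_images[0], dtype=float))
    for img, w in zip(sub_images, weights):
        out += w * np.asarray(img, dtype=float)  # crosstalk contribution
    return out
```

The off-centre weights are exactly the crosstalk: the larger they are, the more ghosting a sharp edge produces in neighbouring views.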
- the inventor of the present invention has appreciated that by appropriate spatial filtering, problems relating to crosstalk, ghost imaging and aliasing may be removed or at least diminished. Furthermore, by spatially filtering the input image before the image is rendered for multi-view display, only a single image needs to be filtered (and possibly a depth map, in accordance with certain embodiments), thereby providing an efficient way of handling spatial filtering of multi-view image data.
- Depth dependent spatial filtering is done to counter crosstalk and/or aliasing effects.
- However, depth dependent spatial filtering of an input image which is subsequently rendered for different viewpoints may introduce new artefacts in the rendering, such as artefacts relating to foreground and background objects mixing in the rendered images with shifted viewpoint, thereby diminishing the perceived image quality of the 3D image at the different viewpoints.
- the depth dependent filtering of the image may be such that a blurring of the image is consistent with a blur introduced by a camera focused at a particular depth; this is illustrated in FIG. 4.
- the Figure shows a scene from the TV-series Star Trek Enterprise, showing two actors 41 , 42 in front of a blurry background 44 .
- the actor denoted 42 is also out of focus and as a consequence of this, blurred.
- From the shoulder of the actor denoted 41 it is clear that background does not blur over foreground objects, as a sharp outline 43 of the shoulder is seen.
- the shoulder outline 45 of the actor denoted 42 is, however, blurred, showing that foreground objects do blur over background objects.
- An image rendering process following these rules of what blurs over what in an image leads to an increased perceived spatial dimension in a 3D image.
- the band-pass filter is typically a low-pass or a high-pass filter.
- the low-pass filter mitigates problems, typically alias problems, related to sampling the intensity function into a low number of sub-images, such as eight or nine, depending upon the number of views of the display.
- the high-pass filter mitigates problems relating to crosstalk imposing blur in the view direction.
- a combination of high-pass filtering and low-pass filtering may be performed to optimize the perceived image quality, or the filters may be applied separately.
- FIGS. 5A-5C illustrate an embodiment of a mapping of a first image to a second image in a spatial filtering process.
- an image signal representing a first image comprising 3D image data is received or provided.
- the 3D image data may be represented in any suitable coordinate representation.
- the image is described in terms of a spatial coordinate set referring to a position in image plane, and a depth of the image in a direction perpendicular to the image plane. It is, however, to be understood that alternative coordinate representations may be envisioned.
- the filtering may be input-driven, and for each input pixel, the input pixel also being referred to as the source element, the difference in depth between the source element and a reference depth is determined.
- the reference depth is set to the depth layer in the image which is in focus, or which should remain in focus.
- the depth difference is then used as a measure for the strength of the spatial filter.
- the strength of the filter may in an embodiment be the number of pixels affected by the intensity of the source element, i.e. as the size of the set of image elements of the second image.
- the size of the set of image elements may be the radius of a distribution filter, distributing the intensity of the source element to the set of destination elements.
- In the following, the source element and destination element are referred to as source pixel and destination pixel, respectively.
- the intensity value of a pixel 52 is distributed over a set of pixels in the second image 53 , the second image being an updated version of the first image 50 . For each pixel 52 , the intensity of the pixel is distributed to a set of pixel elements surrounding the pixel, where the size of the set, or the radius, r, of the area 55 of affected pixels, is determined from d-dref.
- the distribution filter may be a cubic b-spline.
- Near the reference depth, the radius of the distribution filter is small, so destination pixels only receive a contribution from the corresponding source pixel.
- Far from the reference depth, source pixel intensities are distributed over large areas and they mix, resulting in a blur.
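The input-driven distribution described above can be sketched in one dimension. The box-shaped window (the patent suggests e.g. a cubic B-spline) and the gain constant relating depth difference to radius are simplifying assumptions:

```python
import numpy as np

def depth_blur_1d(intensity, depth, d_ref, gain=2.0):
    """Depth-dependent blur, input-driven (1D sketch). Each source
    pixel's intensity is distributed over a window of destination pixels
    whose radius grows with |d - d_ref|; destination values are
    normalized by the accumulated weights."""
    n = len(intensity)
    out = np.zeros(n)
    weight = np.zeros(n)
    for i in range(n):
        # Filter strength: radius of the affected area, from d - d_ref.
        r = int(round(gain * abs(depth[i] - d_ref)))
        for j in range(max(0, i - r), min(n - 1, i + r) + 1):
            out[j] += intensity[i]   # distribute source intensity
            weight[j] += 1.0         # track weights for normalization
    return out / np.maximum(weight, 1e-9)
```

Pixels at the reference depth get radius zero and pass through unchanged; pixels far from it spread over, and mix with, many neighbours.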
- the visibility factor equals zero when the destination pixel is much closer to the viewpoint than the source pixel, thereby ensuring that background does not blur over foreground.
- the visibility factor equals one when the source pixel is much closer to the viewpoint than the destination pixel, and has a gradual transition between the two values.
- the distances between the source pixel, the destination pixel and the viewpoint may be evaluated from their spatial coordinates, for example by comparing the distance between the source pixel and the viewpoint and the distance between the destination pixel and the viewpoint to one or more minimum distances, so as to determine when the source pixel is much closer than the destination pixel to the viewpoint, and vice versa.
- the visibility factor has the effect that destination colors have to be normalized according to the summation of weights, since the sum of weights cannot be held constant beforehand.
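A possible form of the visibility factor, with the two hard limits and the gradual transition described above. The linear ramp, the transition width, and the convention that a larger depth value means closer to the viewpoint are assumptions for illustration:

```python
def visibility(d_src, d_dst, transition=0.1):
    """Visibility factor: 0 when the destination pixel is much closer to
    the viewpoint than the source pixel (background must not blur over
    foreground), 1 when the source pixel is much closer, with a gradual
    transition in between. Larger depth value = closer to the viewpoint."""
    t = (d_src - d_dst) / transition
    return min(1.0, max(0.0, 0.5 + 0.5 * t))
```

Inside the distribution loop, each contribution would be multiplied by this factor, and the destination color finally divided by the sum of weights, since that sum can no longer be held constant beforehand.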
- a depth-dependent spatial filter can nevertheless be used both for depth-dependent blurring (low-pass filtering) and for depth-dependent sharpening (high-pass filtering), which is discussed in connection with FIG. 12.
- FIGS. 6 to 11 illustrate the effect of applying a depth filter in accordance with the present invention to a scene from the game Quake™.
- In FIGS. 7A and 7B, the background blurs over the foreground. This may e.g. be seen by the white background area which blurs over the pillar 70 . This blurring has an effect for the shifted viewpoint ( FIG. 7B ), since some white area 71 may be seen even though it should not be visible from the specific view-angle. Blurring of background over foreground results in a halo-effect counteracting the occlusion.
- FIGS. 8A and 8B illustrate the depth-dependent blur with the visibility factor.
- FIG. 8A shows a blurring of the source image with the visibility factor, i.e. a destination image obtained with the visibility factor.
- the image of FIG. 8B is derived from the destination image of FIG. 8A for a shifted viewpoint, similar to FIG. 7B .
- Both in the destination image ( FIG. 8A ) and in the shifted viewpoint image ( FIG. 8B ), the artefacts discussed in connection with FIGS. 7A and 7B are removed.
- a sharp edge is obtained for the left side of the pillar 80 , and the pillar occludes the white area in the shifted view 81 .
- the de-occlusion that occurs with the object at the lower left corner 84 is not just a repetition of a background color, but of a color which for a large part is made up of foreground color, i.e. a semi-transparency is introduced.
- the visibility factor is modified so that source pixels only contribute to destination pixels of similar depth.
- the result of such a filtering is shown in FIGS. 9A and 9B . It may be seen that the halo-effect is removed, but the sharp edge of the foreground object 90 , 91 remains. Such sharp silhouette edges may result in double images due to crosstalk, even though the interior of the object is blurred.
- halo-effects are countered by filtering the depth map.
- the halo-effects 84 as seen in the lower left corner of FIG. 8B are largely due to foreground color being repeated, because the pixels that originally contained background colors only, contain a lot of foreground color after the blurring.
- the halo-artefacts are enlarged when shifted views are computed, because a color which consists for a large part of foreground color is now used to fill in a de-occlusion area.
- a solution which at least reduces the artefacts considerably, is to also filter the depth map itself, thereby ensuring that the artefacts are not enlarged as much by the rendering.
- Any destination pixel to which a foreground color is distributed should also have foreground depth, thereby avoiding that such a pixel will be used in the multi-view rendering to fill in de-occlusion areas.
- This can be done by applying a depth-dependent morphological filter: when a source pixel is distributed to a destination pixel, the depth of the destination pixel is set to the maximum of the depth of the source pixel and the previous depth of that destination pixel.
- In this way, depth information from background objects does not change the depth information of foreground objects (which will, for example, keep the depth transitions of the pillar to its background sharp, both in color and in depth).
- Alternatively, the updating of the depth map may be done by setting the depth of the destination pixel to a value between the depth of the source pixel and the depth of the destination pixel, instead of to the maximum value as mentioned above.
- the depth map is updated with the foreground depth to extend the foreground object.
- the result is shown in FIG. 10A , showing the updated depth map. Comparing the depth map of FIG. 6B and the depth map of FIG. 10A , the dilation of the foreground objects is clear (the object in the lower left corner 101 , but also in the background to the right of the pillar 100 ).
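The depth-dependent morphological filter can be sketched as follows, assuming a box window whose radius grows with the depth difference (an illustrative choice) and that a larger depth value means closer to the viewer:

```python
def dilate_depth_1d(depth, d_ref, gain=2.0):
    """Depth-dependent morphological filter (1D sketch): every
    destination pixel inside a source pixel's filter window takes the
    maximum of its current depth and the source depth, so foreground
    depth is dilated along with foreground color."""
    n = len(depth)
    out = list(depth)
    for i in range(n):
        # Same radius rule as the color blur: grows with |d - d_ref|.
        r = int(round(gain * abs(depth[i] - d_ref)))
        for j in range(max(0, i - r), min(n - 1, i + r) + 1):
            out[j] = max(out[j], depth[i])  # foreground depth wins
    return out
```

Any pixel that received foreground color in the blur now also carries foreground depth, so the multi-view renderer will not use it to fill de-occlusion areas.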
- the spatial filtering as discussed in connection with FIGS. 5 to 10 is a 2D filtering in the sense that the set of destination pixels is a set of pixels in an image plane. Such 2D filtering may be necessary in order to mimic the out-of-focus blur of a real camera, and thereby improve the perceived image quality of a viewer.
- Since the views of the display are distributed along horizontal viewing directions, a horizontal filter may, however, suffice.
- In this case, instead of the set of destination pixels being comprised within an area 55 as shown in FIG. 5C , the set of destination pixels extends along the horizontal direction on both sides of the source pixel.
- An example of an image and an image with a shifted viewpoint is shown in FIG. 11 for 1D horizontal depth dependent spatial filtering.
- FIG. 11A shows the image after the horizontal blur has been applied.
- FIG. 11B shows the situation with a shifted viewpoint for a case where the depth map has been filtered, as in connection with FIGS. 10A and 10B . Also in the 1D situation large halo artefacts are prevented from appearing.
- a high-pass filtering is typically applied in order to pre-compensate blurring of an image introduced later on, e.g. in connection with the multi-view rendering or sampling of the image.
- FIG. 12E shows the shifted version of FIG. 12D
- FIG. 12F illustrates a situation where crosstalk has been introduced, i.e. FIG. 12F is the combination of FIG. 12E (the shifted view) and FIG. 12D (the original view). As shown, the edge is still sharp, despite the crosstalk.
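Such pre-compensation can be approximated by unsharp masking: boosting the signal by its difference to a blurred copy, so that edges survive the blur that crosstalk later introduces. The 3-tap kernel and the boost factor are illustrative; the patent additionally makes the strength depth dependent:

```python
import numpy as np

def precompensate_1d(signal, alpha=0.5):
    """Pre-compensate an expected blur by high-pass boosting (unsharp
    masking): out = s + alpha * (s - blur(s)). Edges are over-sharpened
    so that they remain sharp after crosstalk mixes neighbouring views."""
    s = np.asarray(signal, dtype=float)
    blurred = np.convolve(s, [0.25, 0.5, 0.25], mode='same')
    return s + alpha * (s - blurred)
```

Flat regions pass through unchanged; only transitions are amplified, which is the behaviour sketched in FIGS. 12A-12F.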
- the signal including the image data to be presented to the viewer is inputted into an input module, as a first image signal.
- the depth dependent spatial filtering of the first image to provide a second image is conducted at a rendering module, the rendering module typically being a processor unit.
- the input module, rendering module and output module need not, but may, be separate entities.
- the rendering module may also apply additional rendering functions to the image data, e.g. the image data may be properly scaled to the view resolution, colors may be adjusted, etc.
- the rendering of the image signal may be done separately for different color components and the view-dependent intensity function may be determined for at least one color component of the image, and the band-pass filtering applied to the at least one color component of the image. For example, since in an RGB-signal the green component is the most luminous component, the spatial filtering may in an embodiment only be applied for the green component.
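Restricting the filtering to the green component might look as follows; the channel-last RGB layout and the generic filter callback are assumptions for illustration:

```python
import numpy as np

def filter_green_only(rgb, spatial_filter):
    """Apply a spatial filter to the green channel only: green is the
    most luminous component of an RGB signal, so filtering it alone
    captures most of the perceptual benefit at roughly a third of the
    cost. Assumes a channel-last (height, width, 3) layout."""
    out = np.array(rgb, dtype=float)  # work on a copy
    out[..., 1] = spatial_filter(out[..., 1])
    return out
```

The same wrapper could be extended to filter each color component with a different strength, as the text allows.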
Abstract
Description
- The inventor of the present invention has appreciated that an improved method of rendering image data is of benefit, and has in consequence devised the present invention.
- According to a first aspect of the present invention there is provided a method of rendering image data for a multi-view display, the method comprising steps of:
-
- receiving an image signal representing a first image, the first image comprising 3D image data,
- spatially filtering the first image signal to provide a second image signal, the second image signal representing a second image, the spatial filtering comprising a mapping between an image element of the first image and an image element of the second image, a strength of the spatial filter is determined by a reference depth of the first image and a depth of an image element of the first image,
- sampling the second image to a plurality of sub-images, each sub-image being associated with a view direction of the image.
- In a multi-view display, the image data is typically rendered for proper presentation. The rendering may be needed since the image may be based on 2D image data projected to the viewer in such a way that the viewer perceives a spatial, or 3D, dimension of the image. For each view-direction of an image, a sub-image of the image as seen from that view-direction is generated, and the sub-images are projected into the associated view-direction.
- The rendering process typically comprises several operations or steps, e.g. depending upon the input format of the image data, the display apparatus, the type of image data, etc. Image data of a first image is provided in a first step. This first step need not be a first step of the entire rendering process. The first image is typically in a format including image plus depth data, or an associated depth map may be provided with the image data, so that the 3D image data may be determined.
- The inventor had the insight that spatial filtering for improving the perceived image quality, especially in terms of crosstalk and aliasing effects, is typically performed in the output domain, i.e. at a rendering stage where an input image has already been sampled, at least to some degree, for multi-view display. By spatially filtering the first image signal to provide a second image, and sampling the second image to a plurality of sub-images for multi-view display, artefacts such as crosstalk and aliasing effects are dealt with in the input domain on a single image instead of in the output domain on a plurality of images, thereby dealing with artefacts in an efficient way.
- While filtering a single image in the input domain rather than a multitude of images in the output domain may be less perfect than full-blown filtering of the multitude of images in the output domain, most artefacts may still be avoided or diminished, and a low-cost alternative in terms of processing power and time may thereby be provided.
- Further advantages of the invention according to the first aspect include easy implementation in the rendering pipeline of images for multi-view display. The invention may be implemented in a separate pipeline step before the actual multi-view rendering, allowing for a more pipelined parallel implementation.
- Furthermore, the method effectively reduces artefacts, such as crosstalk and aliasing artefacts, thereby rendering pre-processing or post-processing to further remove or diminish such artefacts unnecessary.
dependent claims - The optional features as defined in dependent claim 8 are advantageous since by updating the depth of the image elements of the second image an improved handling of viewpoint changes may be provided. The depth is updated by setting the depth of the image element of the second image element to a value between the depth of the image element of the first image and the depth of the image element of the second image. In this way, when an image element of the second image would substantially be composed of foreground and only a little of background, the depth may be set to a value substantially towards the depth of the foreground, a gradual depth transition soften the depth edge. In an embodiment may the depth value be set to the maximum of the depth of the image element of the first image and the depth of the image element of the second image.
- The optional features as defined in dependent claim 10 are advantageous since by applying the spatial filter so that the image element of the first image and the set of image elements of the second image are aligned along a horizontal line of the first image, effects of the coarse sampling in the view direction, as well as crosstalk, may effectively be countered for a multi-view display projecting the different views in a plurality of horizontally orientated directions.
- The optional feature as defined in claim 11 is advantageous since the 2.5D video image format is a standard and widely used format.
- According to a second aspect of the invention, there is provided a multi-view display device comprising:
-
- a display panel comprising an array of display elements, the display elements being arranged in groups, each group being associated with a view direction of an image,
- an optical element for directing light emitted from the display panel, so that light emitting from a group of display elements is directed into an angular distribution associated with the view direction of the group,
- an input module for receiving a first image signal,
- a rendering module for spatially filtering the first image signal to provide a second image signal, the second image signal representing a second image, the spatial filtering comprising a mapping between an image element of the first image and an image element of the second image, a strength of the spatial filter being determined by a reference depth of the first image and a depth of an image element of the first image,
- an output module for sampling the second image to a plurality of sub-images, each sub-image being associated with a view direction of the image.
- The display device is a multi-view display device enhanced with the rendering method of the first aspect. It is an advantage of the present invention that the multi-view display device may either be a display device born with the functionality according to the first aspect of the invention, or a display device not born with this functionality which is subsequently enhanced with it.
- The input module, the rendering module and the output module may be provided as a signal rendering system according to the third aspect of the invention.
- According to a fourth aspect of the invention, there is provided computer readable code for implementing the method according to the first aspect.
- In general the various aspects of the invention may be combined and coupled in any way possible within the scope of the invention. These and other aspects, features and/or advantages of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
- Embodiments of the invention will be described, by way of example only, with reference to the drawings, in which
-
FIG. 1 illustrates the principle of a 3D lenticular display, -
FIG. 2 shows a sketch of a lenticular display in top view, -
FIG. 3 illustrates crosstalk between sub-images from neighboring views, -
FIG. 4 illustrates an example of blurring of an image as a result of a camera focused at a particular depth, -
FIGS. 5A-5C illustrate an embodiment of a mapping of a first image to a second image in a spatial filtering process, -
FIGS. 6A and 6B illustrate an example of a 2D input image with associated depth map, -
FIGS. 7A and 7B illustrate a depth dependent spatial filtering of the input image and a shifted view of the same image (or scene), -
FIGS. 8A and 8B illustrate a depth dependent spatial filtering of the input image where a visibility factor has been applied and a shifted view of the filtered image, -
FIGS. 9A and 9B illustrate a depth dependent spatial filtering of the input image where an adjusted visibility factor has been applied and a shifted view of the filtered image, -
FIGS. 10A and 10B illustrate a spatially filtered depth map and depth dependent spatial filtering of the input image with the filtered depth map, -
FIGS. 11A and 11B illustrate a depth dependent spatial filtering of the input image for a 1D situation and a shifted view of the filtered image, -
FIGS. 12A-12F illustrate aspects concerning the application of high-pass spatial filtering. -
FIGS. 1 and 2 illustrate embodiments of a multi-view display, namely sketches of a 3D lenticular display as seen from the side (FIG. 1) and in top view (FIG. 2). -
FIG. 1 illustrates the principle of a 3D lenticular display. A lenticular display is based on an LCD panel display 1, in front of which lenses 2 are attached. The lenses accommodate that for a specific viewing angle φ, the viewer 3 only sees a subset of the pixels of the underlying LCD. If appropriate values are set for the subsets of pixels associated with the various viewing directions, the viewer will see different images from different viewing directions, so that the viewer 3 sees a center view of the image from one view-angle, whereas the viewer would see a side view of the image from the view-angle denoted 6. - Each lens covers a number of pixels, and the viewer sees one subset of pixels 4 with the right eye and another subset of pixels 5 with the left eye. A 3D experience is thereby obtained. -
FIG. 2A shows a sketch of a lenticular display in top view. The display comprises an array of display elements 20, or pixels, such as a conventional LC matrix display panel, where the pixels are arranged in groups, each group being associated with a view direction of an image. Each group of pixels constitutes a sub-image, each sub-image being associated with a view direction. An optical element, i.e. the lenses, directs light emitted from the display panel, so that light emitted from a group of pixels is directed into an angular distribution associated with the view direction of the group, thereby providing separate images to the viewer's eyes. - The lenticular lenses are, in the illustrated embodiment, arranged at a slight angle, or slanted, with respect to the columns of the pixels, so that their main longitudinal axis is at an angle with respect to the column direction of the display elements. In this configuration the viewer will see the points sampled along a direction 22 of the lens. In a nine-view display, nine images, one for each view direction, are concurrently computed and shown on the groups of pixels associated with the sub-images. When a pixel is lit, the entire lens above the pixel is illuminated 21 (this is shown in FIG. 2B), so that for a specific view direction it is the entire lens above the pixel that is seen emitting the color of that pixel. -
FIGS. 1 and 2 describe an LCD-lenticular display; it is, however, to be understood that the invention is not limited to this type of display. For example, the invention may be applied to such displays as barrier-type displays, and the matrix display panel may be other than an LC panel, such as other forms of spatial light modulators, or other types of display panels such as electroluminescent or plasma panels. - The visibility of sub-images from neighboring views from a single viewing direction may cause artefacts such as crosstalk. This is illustrated in FIG. 3, showing the visible light intensity, I, as a function of the view angle for a 4⅔-display, i.e. for a display where each lens covers 4⅔ pixels in the horizontal direction. It is seen that the angular distributions 30-34 from different sub-images overlap. The perceived image of a viewer is the sum of the light from each angular distribution, and it may be seen that for this particular example, three sub-images contribute to the perceived image of each viewing direction. - The inventor of the present invention has appreciated that by appropriate spatial filtering, problems relating to crosstalk, ghost imaging and aliasing may be removed or at least diminished. Furthermore, by spatially filtering the input image before the image is rendered for multi-view display, only a single image needs to be filtered (and possibly a depth map, in accordance with certain embodiments), thereby providing an efficient way of handling spatial filtering of multi-view image data.
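The summation over overlapping angular distributions can be illustrated with a small numeric sketch. The Gaussian shape of the lobes and the chosen view angles are assumptions made for illustration only; FIG. 3 establishes merely that the lobes of neighboring sub-images overlap.

```python
import numpy as np

def perceived_intensity(phi, sub_image_intensities, view_angles, sigma=1.0):
    """Light reaching the eye at view angle phi: the sum over all
    sub-images of their intensity weighted by their angular lobe
    (here an assumed Gaussian profile per view)."""
    weights = np.exp(-0.5 * ((phi - np.asarray(view_angles)) / sigma) ** 2)
    return float(np.sum(weights * np.asarray(sub_image_intensities)))

angles = [0.0, 1.0, 2.0, 3.0, 4.0]      # nominal center angles of 5 views
lit_view_2 = [0.0, 0.0, 1.0, 0.0, 0.0]  # only sub-image 2 is lit
```

Evaluating `perceived_intensity` at the center angle of view 1 still returns a nonzero value even though only view 2 is lit: that leakage is the crosstalk the filtering is designed to counter.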
- Depth dependent spatial filtering is done to counter crosstalk and/or aliasing effects. However, depth dependent spatial filtering of an input image which is subsequently rendered for different viewpoints may introduce new artefacts during the rendering, such as artefacts arising because foreground and background objects mix in the rendered images with shifted viewpoints, thereby diminishing the perceived image quality of the 3D image at the different viewpoints.
- In order to provide a 3D image with high perceived image quality, the depth dependent filtering of the image may be such that a blurring of the image is consistent with the blur introduced by a camera focused at a particular depth; this is illustrated in FIG. 4. The Figure shows a scene from the TV-series Star Trek Enterprise, showing two actors 41 and 42 in front of a blurry background 44. The actor denoted 42 is also out of focus and, as a consequence of this, blurred. Looking at the shoulder of the actor denoted 41, it is clear that background does not blur over foreground objects, as a sharp outline 43 of the shoulder is seen. The shoulder outline 45 of the actor denoted 42 is, however, blurred, showing that foreground objects do blur over background objects. In an image rendering process, following these rules of what blurs over what in an image leads to an increased perceived spatial dimension in a 3D image. - The band-pass filter is typically a low-pass or a high-pass filter. The low-pass filter mitigates problems, typically alias problems, related to sampling the intensity function into a low number of sub-images, such as eight or nine, depending upon the number of views of the display. The high-pass filter mitigates problems relating to crosstalk imposing blur in the view direction. A combination of high-pass filtering and low-pass filtering may be performed to optimize the perceived image quality, or the filters may be applied separately.
-
FIGS. 5A-5C illustrate an embodiment of a mapping of a first image to a second image in a spatial filtering process. - Firstly, an image signal representing a first image comprising 3D image data is received or provided. The 3D image data may be represented in any suitable coordinate representation. In a typical coordinate representation, the image is described in terms of a spatial coordinate set referring to a position in the image plane, and a depth of the image in a direction perpendicular to the image plane. It is, however, to be understood that alternative coordinate representations may be envisioned.
- The filtering may be input-driven, and for each input pixel, the input pixel also being referred to as the source element, the difference in depth between the source element and a reference depth is determined. The reference depth is set to the depth layer in the image which is in focus, or which should remain in focus. The depth difference is then used as a measure for the strength of the spatial filter. The strength of the filter may in an embodiment be the number of pixels affected by the intensity of the source element, i.e. the size of the set of image elements of the second image. The size of the set of image elements may be given by the radius of a distribution filter, distributing the intensity of the source element to the set of destination elements. In the following, the source element and destination element may be referred to as source pixel and destination pixel, respectively.
-
FIG. 5A schematically illustrates the principle of using the difference between the depth of a given pixel and a reference depth, d-dref, as a measure of the radius, r, of a distribution filter. The same is illustrated in FIGS. 5B and 5C, but concretized with a section of a matrix display panel, such as an LCD display, comprising a number of image elements or pixels 51. A reference depth is set for the entire image, and for each pixel a depth of the pixel is determined. The intensity value of a pixel 52 is distributed to a set of pixels in the second image 53, the second image being an updated version of the first image 50. For each pixel 52, the intensity of the pixel is distributed to a set of pixel elements surrounding the pixel, where the size of the set, or the radius, r, of the area 55 of affected pixels is determined from d-dref. The intensity of a pixel in the set of pixels 55 may be determined as Ip := Ip + f(r)*Iq, where Ip is the intensity of the pixel, i.e. the output intensity being accumulated, f(r) is the distribution function and Iq is the intensity of the source pixel at a distance r from the destination. The distribution filter may be a cubic b-spline. - For areas where the depth values are near the reference depth, the radius of the distribution filter is small, so destination pixels only receive a contribution from the corresponding source pixel. For areas where the depth differs much from the reference value, source pixel intensities are distributed over large areas and mix, resulting in a blur.
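The distribution of source intensities can be sketched as follows. The flat kernel (standing in for the cubic b-spline f(r)), the radius scale and the per-destination normalization are illustrative assumptions.

```python
import numpy as np

def depth_dependent_blur(src, depth, d_ref, scale=1.0):
    """Input-driven distribution filter in the spirit of FIGS. 5A-5C.
    Each source pixel q spreads its intensity over an area 55 whose
    radius r grows with |d - d_ref|; accumulation follows
    Ip := Ip + f(r) * Iq. A flat kernel replaces the b-spline here."""
    h, w = src.shape
    dst = np.zeros((h, w))
    wsum = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            r = int(round(scale * abs(depth[y, x] - d_ref)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            f = 1.0 / ((y1 - y0) * (x1 - x0))   # equal weight over the area
            dst[y0:y1, x0:x1] += f * src[y, x]  # Ip := Ip + f(r) * Iq
            wsum[y0:y1, x0:x1] += f
    return dst / wsum                           # normalize accumulated weight
```

With a uniform depth map equal to the reference depth, every radius is zero and the output reproduces the input, matching the in-focus behavior described above.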
- To generate a blur that is consistent with the blur introduced by cameras focused on a particular depth, a visibility factor, v, is multiplied onto the distribution function, so that Ip := Ip + f(r)*Iq*v. The visibility factor equals zero when the destination pixel is much closer to the viewpoint than the source pixel, thereby ensuring that background does not blur over foreground. The visibility factor equals one when the source pixel is much closer to the viewpoint than the destination pixel, and has a gradual transition between the two values. The distances between the source pixel, the destination pixel and the viewpoint may be evaluated from their spatial coordinates, for example by comparing the distance between the source pixel and the viewpoint and the distance between the destination pixel and the viewpoint to one or more minimum distances, so as to determine when the source pixel is much closer than the destination pixel to the viewpoint, and vice versa. The visibility factor has the effect that destination colors have to be normalized according to the summation of weights, since the sum of weights cannot be held constant beforehand.
- In the following, embodiments relating to depth-dependent blur are addressed. A depth-dependent spatial filter can nevertheless be used both for depth-dependent blurring (low-pass filtering) and for depth-dependent sharpening (high-pass filtering), which is discussed in connection with FIG. 12. -
FIGS. 6 to 11 illustrate the effect of applying a depth filter in accordance with the present invention on a scene from the game Quake™. -
FIG. 6A illustrates the scene as used in the game, and FIG. 6B illustrates the depth map of the image (or scene). In the depth map, the grey-scale corresponds to disparity, so that bright objects are closer than dark objects. The reference depth is set to the pillar 60 in the middle. In the following images, the image of FIG. 6A is the first image to be mapped into a second image by a spatial filtering according to the present invention. The image of FIG. 6A in combination with the depth map of FIG. 6B is hereafter referred to as the source image. The second image may hereafter be referred to as the destination image. -
FIGS. 7A and 7B illustrate the depth-dependent blur without the visibility factor. FIG. 7A shows a blurring of the source image without the visibility factor, i.e. a destination image obtained without the visibility factor. The image of FIG. 7B is derived from the destination image of FIG. 7A for a shifted viewpoint using the depth map shown in FIG. 6B. - It is seen in FIG. 7A that the background is blurring over the foreground. This may e.g. be seen by the white background area which blurs over the pillar 70. This blurring has an effect for the shifted viewpoint (FIG. 7B), since some white area 71 may be seen even though it should not be visible from the specific view-angle. Blurring of background over foreground results in a halo-effect counteracting the occlusion. -
FIGS. 8A and 8B illustrate the depth-dependent blur with the visibility factor. FIG. 8A shows a blurring of the source image with the visibility factor, i.e. a destination image obtained with the visibility factor. The image of FIG. 8B is derived from the destination image of FIG. 8A for a shifted viewpoint, similar to FIG. 7B. - Both in the destination image (FIG. 8A) and the shifted viewpoint image (FIG. 8B), the artefacts discussed in connection with FIGS. 7A and 7B are removed. A sharp edge is obtained for the left side of the pillar 80, and the pillar occludes the white area in the shifted view 81. - However, halo-artefacts remain for silhouettes where foreground blurs over background. The de-occlusion that occurs with the object at the lower left corner 84 is not just a repetition of a background color, but of a color which for a large part is made up of foreground color, i.e. a semi-transparency is introduced.
- In an embodiment, the visibility factor is modified so that source pixels only contribute to destination pixels of similar depth. The result of such a filtering is shown in
FIGS. 9A and 9B . It may be seen that the halo-effect is removed, but the sharp edge of theforeground object - In another embodiment, halo-effects are countered by filtering the depth map. The halo-
effects 84 as seen in the lower left corner ofFIG. 8B are largely due to foreground color being repeated, because the pixels that originally contained background colors only, contain a lot of foreground color after the blurring. The halo-artefacts are enlarged when shifted views are computed, because a color which consist for a large part of foreground color is now used to fill in a de-occlusion area. - A solution, which at least reduces the artefacts considerably, is to also filter the depth map itself, thereby ensuring that the artefacts are not enlarged as much by the rendering.
- Any destination pixel to which a foreground color is distributed, should also have foreground depth, thereby avoiding that such a pixel will be used in the multi-view rendering to fill in de-occlusion areas. This can be done by applying a depth-dependent morphological filter: when an source pixel is distributed to a destination pixel, the depth of the destination pixel is set to the maximum of the depth of the source pixel and the previous depth of that destination pixel. This naturally follows the visibility criterion: depth information from background objects does not change depth information of foreground objects (which for example will keep the depth transitions of for example the pillar to its background sharp, both in color and in depth). In general, may the updating of the depth map be done by instead of setting the depth of the destination pixel to the maximum value as mention above, to set the depth of the destination pixel to a value between the depth of the source pixel and the depth of the destination pixel.
- In a situation where the image filter blurs foreground over background, the depth map is updated with the foreground depth to extend the foreground object. The result is shown in
FIG. 10A , showing the updated depth map. Comparing the depth map ofFIG. 6B and the depth map ofFIG. 10A , the dilation of the foreground objects is clear (the object in the lowerleft corner 101, but also in the background to the right of the pillar 100). - Using this filtered depth map, along with the filtered image from
FIG. 8A results in the alternate view shown inFIG. 10B . Somehalos 102 are still visible, due to the semi-transparency of the blurred edges which are now rendered at foreground depth, and also due to de-occlusions now being filled with color information originating from quite far from the edge, but by far not as severe as those depicted inFIG. 8B . - The spatial filtering as discussed in connection with
FIGS. 5 to 10 is a 2D filtering in the sense that the set of destination pixels is a set of pixels in an image plane. Such 2D filtering may be necessary in order to mimic the out-of-focus blur of a real camera, and thereby improve the perceived image quality of a viewer. However, to counter effects of coarse sampling in the view direction as may be present on multi-view display devices, as well as crosstalk, a horizontal filter may suffice. In a horizontal filter, or a 1D filter, instead of the set of destination pixels being comprised within anarea 55 as shown inFIG. 5C , the set of destination pixels extend along the horizontal direction on both sides of the source pixel. An example of an image and an image with a shifted viewpoint is shown inFIG. 11 for 1D horizontal depth dependent spatial filtering. As can be seen when comparingFIG. 11A toFIG. 6A , the horizontal blur has been applied.FIG. 11B shows the situation with a shifted viewpoint for a case where the depth map has been filtered, as in connection withFIGS. 10A and 10B . Also in the 1D situation large halo artefacts are prevented from appearing. - In horizontal filtering vertical halo-effects are avoided for shifted viewpoints. An example of a vertical halo-effect that is avoided in this situation may be seen be comparing the top of the
pillar 110 onFIG. 11B andFIG. 7B . OnFIG. 7B vertical halo-effects are introduced by the shifting of viewpoint. - A high-pass filtering is typically applied in order to pre-compensate blurring of an image introduced later on, e.g. in connection with the multi-view rendering or sampling of the image.
-
FIG. 12A schematically illustrates an input image with an edge. Multi-view rendering will shift this image according to depth, so a second view will be a shifted version of the image (assuming constant depth in that area in this case). This is shown in FIG. 12B. On a multi-view display exhibiting crosstalk, a viewer will not purely see one view (say view 1), but a mix of the view one should see and the neighboring views. As an example, FIG. 12C illustrates the situation where ⅛th of a neighboring view is seen, i.e. FIG. 12C illustrates the combination of ⅛th of a neighboring view (FIG. 12B) and ⅞th of the view itself (FIG. 12A). The edge is split over two smaller steps, i.e. the edge is blurred.
FIG. 12D .FIG. 12E shows the shifted version ofFIG. 12D , andFIG. 12E illustrates a situation where crosstalk has been introduced, i.e.FIG. 12F is the combination ofFIG. 12E (the shifted view) andFIG. 12D (the original view). As shown, the edge is still sharp, despite the crosstalk. - For high-pass filtering areas which have a depth similar to the reference depth no or only little sharpening occurs, as the difference between the reference depth and the depth of the area increases, the radius, or extent, of the area affected by the sharpening increases, matching the distance between edges in neighboring views.
- In an embodiment, the signal including the image data to be presented to the viewer is inputted into an input module, as a first image signal. The depth dependent spatial filtering of the first image to provide a second image is conducted at a rendering module, the rendering module typically being a processor unit. The input module, rendering module and output module, need not, but may, be separate entities.
- The rendering module may also apply additional rendering functions to the image data, e.g. the image data may be properly scaled to the view resolution, colors may be adjusted, etc. The rendering of the image signal may be done separately for different color components and the view-dependent intensity function may be determined for at least one color component of the image, and the band-pass filtering applied to the at least one color component of the image. For example, since in an RGB-signal the green component is the most luminous component, the spatial filtering may in an embodiment only be applied for the green component.
- The invention can be implemented in any suitable form including hardware, software, firmware or any combination of these. The invention or some features of the invention can be implemented as computer software running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit, or may be physically and functionally distributed between different units and processors.
- Although the present invention has been described in connection with preferred embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims.
- In this section, certain specific details of the disclosed embodiment are set forth for purposes of explanation rather than limitation, so as to provide a clear and thorough understanding of the present invention. However, it should be understood readily by those skilled in this art, that the present invention may be practiced in other embodiments which do not conform exactly to the details set forth herein, without departing significantly from the spirit and scope of this disclosure. Further, in this context, and for the purposes of brevity and clarity, detailed descriptions of well-known apparatus, circuits and methodology have been omitted so as to avoid unnecessary detail and possible confusion.
- Reference signs are included in the claims, however the inclusion of the reference signs is only for clarity reasons and should not be construed as limiting the scope of the claims.
Claims (13)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05111632 | 2005-12-02 | ||
EP05111632 | 2005-12-02 | ||
EP05111632.5 | 2005-12-02 | ||
PCT/IB2006/054456 WO2007063477A2 (en) | 2005-12-02 | 2006-11-27 | Depth dependent filtering of image signal |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2006/054456 A-371-Of-International WO2007063477A2 (en) | 2005-12-02 | 2006-11-27 | Depth dependent filtering of image signal |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/038,007 Continuation US9595128B2 (en) | 2005-12-02 | 2013-09-26 | Depth dependent filtering of image signal |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090153652A1 true US20090153652A1 (en) | 2009-06-18 |
US8624964B2 US8624964B2 (en) | 2014-01-07 |
Family
ID=38043042
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/095,176 Active 2030-08-22 US8624964B2 (en) | 2005-12-02 | 2006-11-27 | Depth dependent filtering of image signal |
US14/038,007 Active 2027-08-27 US9595128B2 (en) | 2005-12-02 | 2013-09-26 | Depth dependent filtering of image signal |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/038,007 Active 2027-08-27 US9595128B2 (en) | 2005-12-02 | 2013-09-26 | Depth dependent filtering of image signal |
Country Status (5)
Country | Link |
---|---|
US (2) | US8624964B2 (en) |
EP (1) | EP1958459B1 (en) |
JP (1) | JP2009519625A (en) |
CN (1) | CN101322418B (en) |
WO (1) | WO2007063477A2 (en) |
US9843736B2 (en) | 2016-02-26 | 2017-12-12 | Essential Products, Inc. | Image capture with a camera integrated display |
US9848106B2 (en) | 2010-12-21 | 2017-12-19 | Microsoft Technology Licensing, Llc | Intelligent gameplay photo capture |
US9870024B2 (en) | 2015-10-30 | 2018-01-16 | Essential Products, Inc. | Camera integrated into a display |
US10102789B2 (en) | 2015-10-30 | 2018-10-16 | Essential Products, Inc. | Mobile device with display overlaid with at least a light sensor |
AU2015264559B2 (en) * | 2014-05-20 | 2019-10-24 | Lumenco, Llc | Slant lens interlacing with linearly arranged sets of lenses |
US20200281462A1 (en) * | 2017-10-27 | 2020-09-10 | Canon Kabushiki Kaisha | Ophthalmic imaging apparatus, control method for ophthalmic imaging apparatus, and computer-readable medium |
US10979695B2 (en) * | 2017-10-31 | 2021-04-13 | Sony Corporation | Generating 3D depth map using parallax |
US10986255B2 (en) | 2015-10-30 | 2021-04-20 | Essential Products, Inc. | Increasing display size by placing optical sensors beneath the display of an electronic device |
US11274928B2 (en) * | 2015-08-03 | 2022-03-15 | Tomtom Global Content B.V. | Methods and systems for generating and using localization reference data |
US11590416B2 (en) | 2018-06-26 | 2023-02-28 | Sony Interactive Entertainment Inc. | Multipoint SLAM capture |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BRPI0912063A2 (en) * | 2008-08-04 | 2016-01-05 | Humaneyes Technologies Ltd | "Method for preparing a lenticular imaging article, lenticular imaging article and system for preparing a lenticular imaging article" |
JP5527856B2 (en) | 2008-09-25 | 2014-06-25 | コーニンクレッカ フィリップス エヌ ヴェ | 3D image data processing |
KR101526866B1 (en) * | 2009-01-21 | 2015-06-10 | 삼성전자주식회사 | Method of filtering depth noise using depth information and apparatus for enabling the method |
US8284236B2 (en) * | 2009-02-19 | 2012-10-09 | Sony Corporation | Preventing interference between primary and secondary content in a stereoscopic display |
US8803950B2 (en) * | 2009-08-24 | 2014-08-12 | Samsung Electronics Co., Ltd. | Three-dimensional face capturing apparatus and method and computer-readable medium thereof |
CN101930606A (en) * | 2010-05-14 | 2010-12-29 | 深圳市海量精密仪器设备有限公司 | Field depth extending method for image edge detection |
KR101329969B1 (en) * | 2010-07-09 | 2013-11-13 | 엘지디스플레이 주식회사 | Liquid crystal display device and method for driving local dimming thereof |
JP5968107B2 (en) * | 2011-09-01 | 2016-08-10 | キヤノン株式会社 | Image processing method, image processing apparatus, and program |
CN103139577B (en) | 2011-11-23 | 2015-09-30 | 华为技术有限公司 | The method and apparatus of a kind of depth image filtering method, acquisition depth image filtering threshold |
EP2600616A3 (en) * | 2011-11-30 | 2014-04-30 | Thomson Licensing | Antighosting method using binocular suppression. |
CN104041027A (en) | 2012-01-06 | 2014-09-10 | 奥崔迪合作公司 | Display Processor For 3d Display |
CN102621702B (en) * | 2012-02-20 | 2013-08-28 | 山东科技大学 | Method and system for naked eye three dimensional (3D) image generation during unconventional arrangement of liquid crystal display pixels |
CN104205827B (en) * | 2012-03-30 | 2016-03-16 | 富士胶片株式会社 | Image processing apparatus and method and camera head |
JP5687803B2 (en) | 2012-05-09 | 2015-03-25 | 富士フイルム株式会社 | Image processing apparatus and method, and imaging apparatus |
WO2014013405A1 (en) * | 2012-07-20 | 2014-01-23 | Koninklijke Philips N.V. | Metadata for depth filtering |
US9967538B2 (en) | 2013-11-04 | 2018-05-08 | Massachusetts Institute of Technology | Reducing view transition artifacts in automultiscopic displays |
US9756316B2 (en) * | 2013-11-04 | 2017-09-05 | Massachusetts Institute Of Technology | Joint view expansion and filtering for automultiscopic 3D displays |
CN112367512B (en) * | 2021-01-14 | 2021-04-02 | 广州市诺以德医疗科技发展有限公司 | RDK stereoscopic vision detection system and use method thereof |
KR20220157147A (en) | 2021-05-20 | 2022-11-29 | 삼성전자주식회사 | Method and apparatus for processing an image |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63308684A (en) * | 1987-02-05 | 1988-12-16 | Hitachi Ltd | Extracting method for feature of gradation image |
US4925294A (en) * | 1986-12-17 | 1990-05-15 | Geshwind David M | Method to convert two dimensional motion pictures for three-dimensional systems |
US5148502A (en) * | 1988-02-23 | 1992-09-15 | Olympus Optical Co., Ltd. | Optical image input/output apparatus for objects having a large focal depth |
US5537144A (en) * | 1990-06-11 | 1996-07-16 | Revfo, Inc. | Electro-optical display system for visually displaying polarized spatially multiplexed images of 3-D objects for use in stereoscopically viewing the same with high image quality and resolution |
US5671264A (en) * | 1995-07-21 | 1997-09-23 | U.S. Philips Corporation | Method for the spatial filtering of the noise in a digital image, and device for carrying out the method |
US5684890A (en) * | 1994-02-28 | 1997-11-04 | Nec Corporation | Three-dimensional reference image segmenting method and apparatus |
US5751927A (en) * | 1991-03-26 | 1998-05-12 | Wason; Thomas D. | Method and apparatus for producing three dimensional displays on a two dimensional surface |
US5914728A (en) * | 1992-02-28 | 1999-06-22 | Hitachi, Ltd. | Motion image display apparatus |
US5954414A (en) * | 1996-08-23 | 1999-09-21 | Tsao; Che-Chih | Moving screen projection technique for volumetric three-dimensional display |
US6166720A (en) * | 1997-08-01 | 2000-12-26 | Lg Semicon Co., Ltd. | Color LCD driver with a YUV to RGB converter |
US20020109701A1 (en) * | 2000-05-16 | 2002-08-15 | Sun Microsystems, Inc. | Dynamic depth-of-field emulation based on eye-tracking |
US20030117489A1 (en) * | 1998-05-02 | 2003-06-26 | Sharp Kabushiki Kaisha | Display controller, three dimensional display, and method of reducing crosstalk |
US20030234795A1 (en) * | 2002-06-24 | 2003-12-25 | Samsung Electronics Co., Ltd. | Apparatus and method for converting of pixels from YUV format to RGB format using color look-up tables |
EP1388817A1 (en) * | 2002-08-08 | 2004-02-11 | GE Medical Systems Global Technology Company LLC | Three-dimensional spatial filtering apparatus and method |
WO2004023348A1 (en) * | 2002-09-03 | 2004-03-18 | 4D-Vision Gmbh | Method for simulating optical components for the stereoscopic production of spatial impressions |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0034686A1 (en) * | 1980-02-20 | 1981-09-02 | Lady Bea Enterprises, Inc. | Method and apparatus for generating and processing television signals for viewing in three dimensions and moving holograms |
US4895431A (en) * | 1986-11-13 | 1990-01-23 | Olympus Optical Co., Ltd. | Method of processing endoscopic images |
JPH038681A (en) | 1989-06-02 | 1991-01-16 | Mitsubishi Electric Corp | Main rope slippage detecting device of elevator |
JP3008681B2 (en) | 1992-07-14 | 2000-02-14 | 松下電器産業株式会社 | Image blur processing device |
JP3058769B2 (en) * | 1992-09-01 | 2000-07-04 | 沖電気工業株式会社 | 3D image generation method |
JPH06319152A (en) * | 1993-05-06 | 1994-11-15 | Mitsubishi Electric Corp | Time spatial picture filter |
US6005607A (en) * | 1995-06-29 | 1999-12-21 | Matsushita Electric Industrial Co., Ltd. | Stereoscopic computer graphics image generating apparatus and stereoscopic TV apparatus |
JP2003296748A (en) * | 2002-03-29 | 2003-10-17 | Sony Corp | Image processor and method thereof |
JP4485951B2 (en) * | 2002-10-23 | 2010-06-23 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 3D video signal post-processing method |
JP4740135B2 (en) * | 2003-09-17 | 2011-08-03 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | System and method for drawing 3D image on screen of 3D image display |
CN1641702A (en) * | 2004-01-13 | 2005-07-20 | 邓兴峰 | Method for designing stereo image from planar image |
WO2006120607A2 (en) | 2005-05-13 | 2006-11-16 | Koninklijke Philips Electronics N.V. | Cost effective rendering for 3d displays |
EP1946566B1 (en) | 2005-11-04 | 2010-09-01 | Koninklijke Philips Electronics N.V. | Rendering of image data for multi-view display |
2006
- 2006-11-27 EP EP06831955.7A patent/EP1958459B1/en active Active
- 2006-11-27 JP JP2008542900A patent/JP2009519625A/en active Pending
- 2006-11-27 CN CN2006800453787A patent/CN101322418B/en active Active
- 2006-11-27 WO PCT/IB2006/054456 patent/WO2007063477A2/en active Application Filing
- 2006-11-27 US US12/095,176 patent/US8624964B2/en active Active
2013
- 2013-09-26 US US14/038,007 patent/US9595128B2/en active Active
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090141022A1 (en) * | 2007-11-24 | 2009-06-04 | Tom Kimpe | Sensory unit for a 3-dimensional display |
US9225949B2 (en) * | 2007-11-24 | 2015-12-29 | Barco, N.V. | Sensory unit for a 3-dimensional display |
US20110025822A1 (en) * | 2007-12-27 | 2011-02-03 | Sterrix Technologies Ug | Method and device for real-time multi-view production |
US8736669B2 (en) * | 2007-12-27 | 2014-05-27 | Sterrix Technologies Ug | Method and device for real-time multi-view production |
US20110199379A1 (en) * | 2008-10-21 | 2011-08-18 | Koninklijke Philips Electronics N.V. | Method and device for providing a layered depth model of a scene |
US20100315488A1 (en) * | 2009-06-16 | 2010-12-16 | Samsung Electronics Co., Ltd. | Conversion device and method converting a two dimensional image to a three dimensional image |
US9030530B2 (en) * | 2009-12-15 | 2015-05-12 | Thomson Licensing | Stereo-image quality and disparity/depth indications |
US20120249750A1 (en) * | 2009-12-15 | 2012-10-04 | Thomson Licensing | Stereo-image quality and disparity/depth indications |
US9106925B2 (en) * | 2010-01-11 | 2015-08-11 | Ubiquity Holdings, Inc. | WEAV video compression system |
US20110170607A1 (en) * | 2010-01-11 | 2011-07-14 | Ubiquity Holdings | WEAV Video Compression System |
US20120019625A1 (en) * | 2010-07-26 | 2012-01-26 | Nao Mishima | Parallax image generation apparatus and method |
US20120062709A1 (en) * | 2010-09-09 | 2012-03-15 | Sharp Laboratories Of America, Inc. | System for crosstalk reduction |
US9143765B2 (en) | 2010-09-28 | 2015-09-22 | Samsung Display Co., Ltd. | Three dimensional image display |
US9848106B2 (en) | 2010-12-21 | 2017-12-19 | Microsoft Technology Licensing, Llc | Intelligent gameplay photo capture |
US9635338B2 (en) * | 2011-05-05 | 2017-04-25 | Orange | Method for encoding and decoding integral images, device for encoding and decoding integral images and corresponding computer programs |
US20140085417A1 (en) * | 2011-05-05 | 2014-03-27 | Orange | Method for encoding and decoding integral images, device for encoding and decoding integral images and corresponding computer programs |
US8988453B2 (en) * | 2011-06-17 | 2015-03-24 | Lg Display Co., Ltd. | Stereoscopic image display device and driving method thereof |
US20120320036A1 (en) * | 2011-06-17 | 2012-12-20 | Lg Display Co., Ltd. | Stereoscopic Image Display Device and Driving Method Thereof |
US9418619B2 (en) | 2011-06-22 | 2016-08-16 | Sharp Kabushiki Kaisha | Image display device |
US20130050303A1 (en) * | 2011-08-24 | 2013-02-28 | Nao Mishima | Device and method for image processing and autostereoscopic image display apparatus |
US9313475B2 (en) | 2012-01-04 | 2016-04-12 | Thomson Licensing | Processing 3D image sequences |
US20130321596A1 (en) * | 2012-05-31 | 2013-12-05 | Superd Co. Ltd. | Method and system for reducing stereoscopic display crosstalk |
US9615084B2 (en) * | 2012-05-31 | 2017-04-04 | Superd Co. Ltd. | Method and system for reducing stereoscopic display crosstalk |
WO2014063085A1 (en) * | 2012-10-18 | 2014-04-24 | Williams Mark B | System and method for expectation maximization reconstruction for gamma emission breast tomosynthesis |
US20150260883A1 (en) * | 2012-11-30 | 2015-09-17 | Lumenco, Llc | Slant lens interlacing with linearly arranged sets of lenses |
US9383588B2 (en) * | 2012-11-30 | 2016-07-05 | Lumenco, Llc | Slanted lens interlacing |
US9482791B2 (en) * | 2012-11-30 | 2016-11-01 | Lumenco, Llc | Slant lens interlacing with linearly arranged sets of lenses |
US9052518B2 (en) * | 2012-11-30 | 2015-06-09 | Lumenco, Llc | Slant lens interlacing with linearly arranged sets of lenses |
US20140285884A1 (en) * | 2012-11-30 | 2014-09-25 | Lumenco, Llc | Slant lens interlacing with linearly arranged sets of lenses |
US20140153007A1 (en) * | 2012-11-30 | 2014-06-05 | Lumenco, Llc | Slanted lens interlacing |
US20170185228A1 (en) * | 2013-05-09 | 2017-06-29 | Spacee, LLC | System, Method, and Apparatus for an Interactive Container |
US10891003B2 (en) * | 2013-05-09 | 2021-01-12 | Omni Consumer Products, Llc | System, method, and apparatus for an interactive container |
US20150195504A1 (en) * | 2014-01-08 | 2015-07-09 | SuperD Co. Ltd | Three-dimensional display method and three-dimensional display device |
US9485488B2 (en) * | 2014-01-08 | 2016-11-01 | SuperD Co. Ltd | Three-dimensional display method and three-dimensional display device |
AU2015264559B2 (en) * | 2014-05-20 | 2019-10-24 | Lumenco, Llc | Slant lens interlacing with linearly arranged sets of lenses |
WO2016168783A1 (en) * | 2015-04-17 | 2016-10-20 | The Lightco Inc. | Methods and apparatus for filtering image data to reduce noise and/or generating an image |
CN105007476A (en) * | 2015-07-01 | 2015-10-28 | 北京邮电大学 | Image display method and device |
US11274928B2 (en) * | 2015-08-03 | 2022-03-15 | Tomtom Global Content B.V. | Methods and systems for generating and using localization reference data |
WO2017074880A1 (en) * | 2015-10-30 | 2017-05-04 | Essential Products, Inc. | Camera integrated into a display |
US11042184B2 (en) | 2015-10-30 | 2021-06-22 | Essential Products, Inc. | Display device comprising a touch sensor formed along a perimeter of a transparent region that extends through a display layer and exposes a light sensor |
US9864400B2 (en) | 2015-10-30 | 2018-01-09 | Essential Products, Inc. | Camera integrated into a display |
US9870024B2 (en) | 2015-10-30 | 2018-01-16 | Essential Products, Inc. | Camera integrated into a display |
US10062322B2 (en) | 2015-10-30 | 2018-08-28 | Essential Products, Inc. | Light sensor beneath a dual-mode display |
US10102789B2 (en) | 2015-10-30 | 2018-10-16 | Essential Products, Inc. | Mobile device with display overlaid with at least a light sensor |
US10432872B2 (en) | 2015-10-30 | 2019-10-01 | Essential Products, Inc. | Mobile device with display overlaid with at least a light sensor |
US9754526B2 (en) | 2015-10-30 | 2017-09-05 | Essential Products, Inc. | Mobile device with display overlaid with at least a light sensor |
US9767728B2 (en) | 2015-10-30 | 2017-09-19 | Essential Products, Inc. | Light sensor beneath a dual-mode display |
US9823694B2 (en) | 2015-10-30 | 2017-11-21 | Essential Products, Inc. | Camera integrated into a display |
US11204621B2 (en) | 2015-10-30 | 2021-12-21 | Essential Products, Inc. | System comprising a display and a camera that captures a plurality of images corresponding to a plurality of noncontiguous pixel regions |
US10986255B2 (en) | 2015-10-30 | 2021-04-20 | Essential Products, Inc. | Increasing display size by placing optical sensors beneath the display of an electronic device |
US9843736B2 (en) | 2016-02-26 | 2017-12-12 | Essential Products, Inc. | Image capture with a camera integrated display |
US20200281462A1 (en) * | 2017-10-27 | 2020-09-10 | Canon Kabushiki Kaisha | Ophthalmic imaging apparatus, control method for ophthalmic imaging apparatus, and computer-readable medium |
US10979695B2 (en) * | 2017-10-31 | 2021-04-13 | Sony Corporation | Generating 3D depth map using parallax |
US11590416B2 (en) | 2018-06-26 | 2023-02-28 | Sony Interactive Entertainment Inc. | Multipoint SLAM capture |
Also Published As
Publication number | Publication date |
---|---|
EP1958459A2 (en) | 2008-08-20 |
JP2009519625A (en) | 2009-05-14 |
WO2007063477A2 (en) | 2007-06-07 |
US9595128B2 (en) | 2017-03-14 |
US20140168206A1 (en) | 2014-06-19 |
WO2007063477A3 (en) | 2007-10-18 |
CN101322418B (en) | 2010-09-01 |
CN101322418A (en) | 2008-12-10 |
EP1958459B1 (en) | 2018-06-13 |
US8624964B2 (en) | 2014-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9595128B2 (en) | Depth dependent filtering of image signal | |
US9532038B2 (en) | Rendering of image data for multi-view display | |
CN106170084B (en) | Multi-view image display apparatus, control method thereof, and multi-view image generation method | |
JP5366547B2 (en) | Stereoscopic display device | |
US8405708B2 (en) | Blur enhancement of stereoscopic images | |
US9185432B2 (en) | Method and system for encoding a 3D image signal, encoded 3D image signal, method and system for decoding a 3D image signal | |
US9183670B2 (en) | Multi-sample resolving of re-projection of two-dimensional image | |
CN107430782A (en) | Method for being synthesized using the full parallax squeezed light field of depth information | |
EP1991963A2 (en) | Rendering an output image | |
WO2012094077A1 (en) | Multi-sample resolving of re-projection of two-dimensional image | |
US9990738B2 (en) | Image processing method and apparatus for determining depth within an image | |
US10805601B2 (en) | Multiview image display device and control method therefor | |
US10931927B2 (en) | Method and system for re-projection for multiple-view displays | |
US20240054619A1 (en) | Differently correcting images for different eyes | |
US20030052899A1 (en) | Dynamic spatial warp | |
CN115244570A (en) | Merging split pixel data to obtain deeper depth of field | |
CN116866540A (en) | Image rendering method, system, device and storage medium | |
TWI463434B (en) | Image processing method for forming three-dimensional image from two-dimensional image | |
Miyazawa et al. | Real-time interactive 3D computer stereography for recreational applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: KONINKLIJKE PHILIPS ELECTRONICS N V, NETHERLANDS; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BARENBRUG, BART GERARD BERNARD;REEL/FRAME:021009/0504; Effective date: 20070802 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
FPAY | Fee payment | Year of fee payment: 4 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 8 |
AS | Assignment | Owner name: LEIA INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KONINKLIJKE PHILIPS N.V.;REEL/FRAME:065385/0825; Effective date: 20231024 |