WO2015123775A1 - Systems and methods for incorporating a real image stream in a virtual image stream - Google Patents
- Publication number
- WO2015123775A1 (PCT/CA2015/050124)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual
- physical
- image stream
- camera
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0176—Head mounted characterised by mechanical features
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
Definitions
- the following relates generally to wearable technologies, and more specifically to systems and methods for incorporating a real image stream in a virtual image stream.
- AR augmented reality
- VR virtual reality
- a system for generating an augmented reality image stream combining virtual features and a physical image stream.
- the system comprises: (a) an image camera mounted to a head mounted display, the image camera configured to capture the physical image stream of a physical environment within its field of view; (b) a depth camera mounted to the head mounted display, the depth camera configured to capture depth information for the physical environment within its field of view; (c) a processor configured to: (i) obtain the physical image stream and the depth information; (ii) align the physical image stream with the depth information; (d) a graphics engine configured to: (i) model the virtual features in a virtual map; (ii) model the physical features in the virtual map based on the depth information; (iii) capture the modelled virtual and physical features within a field of view of a virtual camera, the field of view of the virtual camera corresponding to the field of view of the image camera; and (e) a shader configured to generate a virtual image stream incorporating the physical image stream by: (i) colouring the visible portions of the modelled physical features by assigning the colour values from the physical image stream to their corresponding locations on the visible portions of the modelled physical features; and (ii) colouring the visible portions of the modelled virtual features according to parameters for the virtual map.
- the system may further display the virtual image stream incorporating the physical image stream, wherein the head mounted display system further comprises a display to display the virtual image stream.
- the depth camera and image camera may be jointly provided by a stereo camera, and the processor may be configured to determine depth information from the stereo camera. The processor may be further configured to provide the parameters for the virtual map to the shader.
- a method for generating an augmented reality image stream combining virtual features and a physical image stream comprising: obtaining a physical image stream of a physical environment within a field of view of an image camera; obtaining depth information for the physical image stream; modelling the physical environment in the physical image stream to a virtual map according to the depth data; modelling virtual features to the virtual map; obtaining a virtual image stream of the virtual map within a field of view of a virtual camera having a field of view corresponding to the field of view of the image camera; colouring visible portions of the modelled physical environment in the virtual image stream by assigning colour values from corresponding portions of the physical image stream; and colouring the visible portions of the modelled virtual features according to parameters for the virtual map.
- the method may further comprise aligning the depth information to the physical image stream.
- the method may still further comprise translating the depth information from physical coordinates to virtual map coordinates.
- a method for generating an augmented reality image combining virtual features and a physical image comprises: obtaining a physical image of a physical environment within a field of view of an image camera; obtaining depth information for the physical image; modelling virtual features to a virtual map; capturing the virtual features from a virtual camera having a field of view corresponding to the field of view of the image camera; pasting the physical image to a rear clipping plane in the field of view of the virtual camera;
- a method for generating an augmented reality image combining virtual features and a physical image comprises: capturing a physical image of the physical environment in a physical field of view; obtaining depth information for the physical image; modelling at least one virtual feature to be placed in a virtual view frustum overlaying the physical field of view, the view frustum having a virtual depth limited by a far clipping plane; providing the physical image and the at least one virtual feature to a rendering engine defining a notional virtual camera having the virtual view frustum; and instructing the rendering engine to: (i) apply the physical image at the far clipping plane; and (ii) render points of the virtual feature for which the depth information indicates that no physical feature has a depth less than the virtual depth of the points of the virtual feature.
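The occlusion rule in the method above can be sketched as a simple per-point test. This is a hypothetical illustration, not the claimed implementation; the function name and the use of `None` for "no depth return" are assumptions.

```python
# Hypothetical sketch of the per-point occlusion test: a virtual-feature
# point is rendered only when no physical feature at the same image
# location lies nearer the camera than the point's virtual depth.

def should_render(virtual_depth, physical_depth):
    """physical_depth is the depth of the physical feature at the same
    (x, y) location, or None when the depth camera saw nothing there."""
    return physical_depth is None or physical_depth >= virtual_depth

print(should_render(5.0, None))  # open space in front: draw the feature
print(should_render(5.0, 2.0))   # a nearer physical feature occludes it
```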
- FIG. 1 illustrates a head mounted display for generating and displaying AR to a user thereof;
- FIG. 2 is a schematic diagram of components of the head mounted display illustrated in FIG. 1;
- FIG. 3 illustrates a field of view of a notional camera simulated by a graphics engine configured to render a virtual image stream depicting an AR for display by a head mounted display;
- FIG. 4 illustrates an exemplary incorporation of a real image stream into a virtual image stream generated by a graphics engine;
- FIG. 5 illustrates an exemplary scenario in which a graphics engine incorporates a physical image stream into a virtual image stream;
- FIG. 6 illustrates a method for incorporating a physical image stream into a virtual image stream as exemplified in FIG. 5;
- FIG. 7 illustrates another exemplary scenario in which a graphics engine incorporates a physical image stream into a virtual image stream; and
- FIG. 8 illustrates another method for incorporating a physical image stream into a virtual image stream as exemplified in FIG. 7.
- any module, unit, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
- Computer storage media may include volatile and non-volatile, removable and non- removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
- Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto.
- any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified.
- AR augmented reality
- AR includes: the interaction by a user with real physical features and structures along with virtual features and structures overlaid thereon; and the interaction by a user with a fully virtual set of features and structures that are generated to include renderings of physical features and structures and that may comply with scaled versions of physical environments to which virtual features and structures are applied, which may alternatively be referred to as an "enhanced virtual reality”.
- VR virtual reality
- referring to FIG. 1, an exemplary HMD 12 configured as a helmet is shown;
- the HMD 12, which may be worn by a user occupying a physical environment, may comprise: a processor 130 in communication with one or more of the following components: (i) a graphics processing unit 133 having a graphics engine to generate a virtual image stream representing AR; (ii) a memory 131 to store data used and generated by the processor 130; (iii) a depth camera 127 to capture depth information for the physical environment within its field of view; (iv) an image camera 123 to capture a physical image stream of the physical environment within its field of view; (v) a display 122 to display the virtual image stream to the user; and (vi) a power source 103, such as, for example, a battery, to provide power to the components.
- a graphics processing unit 133 having a graphics engine to generate a virtual image stream representing AR
- a memory 131 to store data used and generated by the processor 130
- a depth camera 127 to capture depth information for the physical environment within its field of view
- an image camera to capture a physical image stream of the physical environment within its field of view
- Fig. 2 illustrates the components of the HMD 12 shown in Fig. 1 in schematic form.
- the memory 131 is accessible by the processor 130.
- the processor communicates with the image camera 123 and the depth camera 127 to obtain, respectively, a physical image stream (i.e., a "real image stream") and depth information for the physical environment.
- the processor is further in communication with a graphics processing unit 133 (GPU) having a graphics engine 137 and a graphics engine plugin 135.
- the graphics engine plugin may facilitate communication between the processor 130 and the graphics engine 137.
- the graphics engine 137 obtains the physical image stream and depth information associated with the physical image stream from the processor 130, either directly, or as data stored by the processor 130 to the memory 131.
- the graphics engine 137 generates a virtual image stream and provides the virtual image stream to the display 122.
- the foregoing components are powered by the power source 103.
- although the power source 103 is shown as being electrically coupled to the processor 130, the power source may be electrically coupled directly to the remaining ones of the foregoing components.
- the processor provides the depth information and the physical image stream to the graphics engine 137, for example, as a pixel map and a depth map, respectively, and the graphics engine 137 uses the depth information to generate models within a virtual environment of the captured physical environment alongside virtual features.
- the shader 139 obtains the models from the graphics engine 137 and colours the models to provide a virtual image stream.
- the image camera 123 may be any suitable image camera, such as, for example, a stereo camera or a monovision camera, suited to capture the physical environment within its field of view to generate a physical image stream.
- the field of view of the image camera 123 is defined by parameters which may be continuously provided from the image camera 123 to the processor 130, or which may be predetermined and stored in the memory 131. For example, if the image camera 123 has a fixed field of view, the parameters of the field of view are fixed and may be stored in the memory 131 and accessible to the GPU 133 and processor 130.
- the depth camera 127 may be any suitable depth camera or scanner, such as, for example, a range finder, a time-of-flight camera, a LIDAR scanner, radar scanner or scanning laser range finder operable to capture depth information for the physical environment surrounding the HMD and provide the depth information to the processor 130.
- the field of view of the depth camera 127 intersects at least a region of the field of view of the image camera 123.
- the field of view of the depth camera 127 substantially overlaps the field of view of the image camera 123.
- the image camera 123 may be a stereo camera operable to provide depth information based on epipolar geometry, such that the depth camera 127 may be redundant.
- in such embodiments, the processor 130 may obtain sufficient depth information from the image camera 123 alone, so that the depth camera 127 may not be required.
- the processor 130 obtains the physical image stream from the image camera 123 and the depth information from the depth camera 127.
- the processor 130 is configured to align the depth information from the depth camera 127 with the physical image stream captured by the image camera 123 such that, for any region, such as a pixel, within the physical image stream, the processor 130 may determine the corresponding position of the region in world coordinates relative to the image camera 123.
- the processor 130 aligns the physical image stream with the depth information according to any suitable calibration technique.
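One common way to perform such an alignment is to transform each depth sample into the image camera's frame and project it through a pinhole model. The intrinsics (fx, fy, cx, cy) and extrinsics (R, t) below are hypothetical stand-ins for the stored calibration data, not values from the patent.

```python
# Hypothetical sketch: align depth samples with image pixels by rigidly
# transforming each 3D point into the image camera's frame (R, t) and
# projecting it through a pinhole model (fx, fy, cx, cy).

def project_depth_to_image(points, R, t, fx, fy, cx, cy):
    """Map 3D points from the depth camera's frame to (u, v, z) pixel
    coordinates in the image camera."""
    out = []
    for (x, y, z) in points:
        # Rigid transform into the image camera's frame.
        xi = R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0]
        yi = R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1]
        zi = R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2]
        if zi <= 0:                     # behind the image camera
            continue
        out.append((fx * xi / zi + cx, fy * yi / zi + cy, zi))
    return out

# Co-located cameras (identity extrinsics) as a simplification.
IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pixels = project_depth_to_image([(0.0, 0.0, 2.0)], IDENTITY,
                                [0.0, 0.0, 0.0], 500.0, 500.0, 320.0, 240.0)
```

With the real fixed mounting offset between the two cameras, R and t would come from the spatial relationship stored in the memory 131.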
- the image camera 123 and the depth camera 127 are mounted to the HMD 12 at a fixed position and orientation with respect to each other.
- the spatial relationship between the depth camera 127 and image camera 123 may be defined in the memory 131 for use by the processor 130 in performing the calibration.
- Calibration may be according to any suitable calibration technique, such as the technique described in Canessa et al., "Calibrated depth and color cameras for accurate 3D interaction in a stereoscopic augmented reality environment", Journal of Visual Communication and Image Representation, Volume 25, Issue 1, January 2014, Pages 227-237. If the image camera 123 is a stereo camera, the processor 130 may not calibrate the depth camera 127 to the image camera 123, since the image camera 123 may provide sufficient depth information on its own.
- the processor 130 may transform the world coordinates associated to the pixels to the graphics engine coordinate system using a suitable transformation technique.
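As a minimal illustration of such a transformation, assuming the camera's world coordinates are right-handed with Y down while the graphics engine uses a left-handed, Y-up convention (a common mismatch, assumed here for illustration), the conversion can reduce to an axis flip and a scale:

```python
# Hypothetical sketch: convert camera-space world coordinates (right-handed,
# Y down) to an engine convention (left-handed, Y up) with a uniform scale.
# The particular axis conventions are assumptions for illustration.

def world_to_engine(p, scale=1.0):
    x, y, z = p
    # Flipping Y both points the axis up and switches handedness.
    return (x * scale, -y * scale, z * scale)

corner = world_to_engine((1.0, 2.0, 3.0))
```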
- the processor 130 may store the transformed and/or un- transformed coordinates and pixel values to the memory 131 for subsequent retrieval by the graphics engine 137.
- the processor 130 calls the graphics engine 137, such as, for example, the Unity 3D™ engine, to generate a virtual image stream comprising rendered graphics representing a 3D virtual reality environment.
- the display 122 of the HMD 12 obtains the virtual image stream and displays it to the user.
- the virtual image stream may comprise computer generated imagery (CGI), such as, for example, virtual characters, virtual environments or virtual effects representing a 3D VR.
- CGI computer generated imagery
- the graphics engine 137 may simulate a notional camera 301 (referred to herein as a virtual camera) to capture the virtual image stream as it would be captured by a real camera occupying the virtual environment.
- the virtual camera 301 may have the properties of a mono or stereo camera, in accordance with the image camera 123 of the HMD 12. It will be appreciated that a stereo virtual camera may render a virtual image stream which the user will perceive as 3D provided the display 122 of the HMD 12 is suitably configured to display a 3D image stream.
- the virtual camera 301 further comprises a view frustum, which is defined as the region lying within the field of view between a far clipping plane 313 and a near clipping plane 311.
- the virtual image stream only comprises virtual features lying within the view frustum.
- the graphics engine 137 is configurable by the processor 130 to define a view frustum having specific properties.
- the processor 130 preferably configures the graphics engine to define a view frustum defined by a field of view having parameters corresponding to the field of view of the image camera 123.
- the parameters for the field of view of the image camera 123 may be obtained from the image camera 123, or may be predefined in the memory 131. In either event, the parameters are obtained by the processor 130 to configure the graphics engine 137 so that the graphics engine 137 accordingly defines the field of view of the virtual camera 301.
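If the stored parameters are pinhole intrinsics, the field-of-view angle the graphics engine needs for the virtual camera can be derived as below. The focal length and image height are hypothetical numbers for illustration.

```python
import math

# Sketch: derive the virtual camera's vertical field of view from pinhole
# intrinsics so the view frustum matches the image camera's field of view.

def vertical_fov_degrees(fy, image_height):
    """Vertical FOV, in degrees, for a focal length fy given in pixels."""
    return math.degrees(2.0 * math.atan(image_height / (2.0 * fy)))

fov = vertical_fov_degrees(fy=240.0, image_height=480)
```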
- the graphics engine 137 may incorporate the physical image stream into the virtual image stream alongside virtual elements.
- the processor 130 may therefore route the physical image stream from the image camera 123 to the graphics engine 137 for incorporation into the virtual image stream.
- the physical image stream may depict features which are dispersed throughout 3 dimensions of the physical environment.
- Certain available graphics engines incorporate the physical image stream by pasting the physical image stream to a far clipping plane as a 2D background texture. This has generally been done as the physical image stream has traditionally been used to display only bounds of the physical environment.
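A sketch of the geometry behind that approach: to fill the far clipping plane, the pasted quad must span the frustum's cross-section at that depth, which follows from the vertical field of view and aspect ratio. The names and numbers below are illustrative assumptions.

```python
import math

# Sketch: a quad textured with the physical image must span the frustum's
# cross-section at the far clipping plane to act as a full background.

def far_plane_quad(fov_y_deg, aspect, far):
    """Return the (width, height) of the frustum at depth `far`."""
    height = 2.0 * far * math.tan(math.radians(fov_y_deg) / 2.0)
    return height * aspect, height

w, h = far_plane_quad(fov_y_deg=90.0, aspect=16 / 9, far=100.0)
```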
- FIG. 4 illustrates an exemplary scenario in which the incorporation of a physical image stream 401 into a virtual image stream 411 as a background texture pasted to the far clipping plane 313 may provide an inaccurate representation to a user.
- the virtual feature 413 is shown as having a depth of one pixel, but it will be appreciated that a virtual feature may have any depth.
- the physical image stream 401 depicts a person 403 standing in front of a background feature 405.
- the distance in the physical environment of the person 403 relative to the background feature 405 corresponds to a distance in graphics engine coordinates that is greater than the distance between the virtual feature 413 and the far clipping plane 313, such that the person 403 should appear in the virtual image stream 411 as though located closer to the virtual camera 301 than the virtual feature 413, which lies at Z = Zv, but further from the virtual camera 301 than the near clipping plane 311.
- because the graphics engine 137 merely pastes the entire physical image stream 401 as a background texture on the far clipping plane 313, i.e., behind the virtual feature 413, the person 403 appears to be standing behind the virtual feature 413 in the virtual image stream 411.
- the processor calls the graphics engine 137 to selectively display or not display pixels of the virtual feature 413 and the physical image stream 401 in the virtual image stream 511 so that, for any two or more pixels within the view frustum having identical X and Y coordinates, the graphics engine selects display of the pixel having the lower Z-value, according to the method illustrated in Fig. 6.
- the physical image stream 401 may be understood as comprising a background element 405 and a person 403 standing in front of the background element 405 by a distance in world coordinates (as determined by the depth camera of the HMD), which translates to a distance in the graphic engine coordinates that is greater than the distance between the virtual feature 413 and the far clipping plane 313.
- the representation 403' illustrates the relative position of the person 403 with respect to the background feature 405 in graphics engine coordinates; however, the graphics engine pastes the entire physical image stream 401, including the person 403, to the far clipping plane 313.
- the processor 130 or the memory 131 provides the Z coordinates corresponding to the elements within the physical image stream 401 to the graphics engine 137, as determined by the processor 130 based on the depth information, such that the graphics engine 137 can determine that the pixels representing the person 403 lie in the X-Y plane at Zp, i.e., closer to the virtual camera 301 than the virtual feature 413. Therefore, within the view frustum, for any point (Xn, Yn), there may be a plurality of pixels.
- for any point (Xn, Yn) having two or more pixels, the graphics engine 137 applies the colour corresponding to that point for whichever feature has the lowest Z value, i.e., the pixel which is nearest the virtual camera 301. If, at the point (Xn, Yn), a physical feature is nearer the virtual camera 301 than a virtual feature, the graphics engine 137 obtains the colour of the physical feature at the corresponding location in the physical image stream 401 and assigns that colour to the point (Xn, Yn) in the virtual image stream.
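The per-point selection described above behaves like a conventional Z-buffer test. A minimal sketch, assuming each candidate pixel is represented as a (Z, colour) pair keyed by its (X, Y) point (a data layout chosen here for illustration):

```python
# Sketch of the per-point depth test: keep whichever candidate pixel,
# physical or virtual, is nearest the virtual camera (lowest Z).

def composite(physical, virtual):
    """physical, virtual: dicts mapping (x, y) points to (z, colour)."""
    frame = {}
    for xy in set(physical) | set(virtual):
        candidates = [m[xy] for m in (physical, virtual) if xy in m]
        frame[xy] = min(candidates)[1]  # the lowest-Z pixel wins
    return frame

frame = composite(
    physical={(0, 0): (2.0, "person"), (1, 0): (9.0, "background")},
    virtual={(0, 0): (5.0, "virtual"), (1, 0): (5.0, "virtual")},
)
```

At (0, 0) the person at Z = 2 occludes the virtual feature at Z = 5, matching the scenario of Fig. 5.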
- Fig. 6 illustrates steps in a method for incorporating the physical image stream 401 into the virtual image stream 411 as previously described with reference to Fig. 5.
- the image camera 123 captures the physical image stream 401 depicting the physical environment within its field of view
- the depth camera 127 captures depth information of the physical environment within its field of view.
- the processor 130 obtains the physical image stream 401 and the depth information and aligns the physical image 401 and depth information, as previously described, to assign coordinates to features within the physical image stream 401.
- the processor 130 translates the assigned coordinates into graphics engine coordinates if necessary.
- the processor 130 calls the graphics engine to render the virtual image stream 411, to incorporate the physical image stream 401 into the virtual image stream 411, and to display whichever of any overlapping pixels is nearer the virtual camera 301 along the Z-axis.
- the processor 130 also calls the graphics engine 137 to define a virtual camera 301 having a field of view corresponding to the field of view of the image camera 123.
- the graphics engine 137 obtains the physical image stream 401 and the assigned coordinates while rendering virtual features 413.
- the graphics engine 137 determines which features in the physical image stream 401 overlap with the virtual features 413 from the point of view of the virtual camera 301.
- the graphics engine 137 determines which feature within the overlap is closer to the virtual camera 301 and includes the pixels for that feature in the virtual image stream while not including the overlapping pixels for the feature further from the virtual camera. At block 617 the graphics engine 137 provides the virtual image stream 411 to the display 122.
- the processor 130 obtains depth information from the depth camera 127 for the physical environment captured within the physical image stream.
- the processor 130 associates the depth information to corresponding regions within the physical image stream and either provides the associated information directly to the graphics engine 137 or stores it to the memory 131 for subsequent retrieval by the graphics engine 137.
- the processor 130 calls the graphics engine 137 to model the physical environment captured within the physical image stream 401 in the virtual environment using the depth information for the physical image stream 401.
- the graphics engine 137 models each feature from the physical image stream 401 within the virtual environment as a 3D model, such as, for example, a point cloud, polygonal mesh or triangular mesh.
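One way such a point cloud could be built, assuming the depth information arrives as a per-pixel depth map and pinhole intrinsics (both assumptions for illustration), is to back-project each depth sample:

```python
# Hypothetical sketch: back-project a per-pixel depth map into a point
# cloud, one way a 3D model of the physical scene might be built. The
# use of None for missing depth returns is an illustrative convention.

def depth_map_to_points(depth, fx, fy, cx, cy):
    points = []
    for v, row in enumerate(depth):           # v: pixel row index
        for u, z in enumerate(row):           # u: pixel column index
            if z is None:                     # no depth return here
                continue
            # Invert the pinhole projection to recover camera-space X, Y.
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

cloud = depth_map_to_points([[2.0, None]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

A polygonal or triangular mesh, as the passage mentions, could then be fitted over the resulting points.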
- the graphics engine further models the virtual feature 413 in the virtual environment so that all physical and virtual features are modelled in 3D within the virtual environment.
- the graphics engine 137 models the person 403 as a 3D model 703, the feature 405 as a 3D model 705, and the virtual feature 413 as a 3D model 713.
- the virtual camera 301 captures the virtual and physical features within its view frustum.
- the graphics engine 137 may model the person 403 in the virtual environment in 3D based on the depth information provided by the depth camera.
- the model 703 of the person 403 appears in grey for illustrative purposes.
- the graphics engine 137 determines which regions of the modelled virtual and physical features would be visible when captured by the virtual camera 301.
- the shader 139 assigns colours to the visible regions.
- the shader 139 obtains the colour values associated to the locations by the processor 130, as previously described. For example, if the processor 130 associates the colour black to a pixel located in graphics engine coordinates at (Xp, Yp, Zp), the shader 139 assigns the colour black to the surface of the model 703 of the person 403 where the model 703 intersects that point.
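A minimal sketch of this colour assignment, with the processor's association represented as a dictionary from engine-coordinate points to RGB tuples (a hypothetical data layout, not the patent's):

```python
# Sketch: use the physical image's colour where one was associated to a
# visible model point; otherwise fall back to the virtual map's parameters.

BLACK = (0, 0, 0)
WHITE = (255, 255, 255)

def colour_point(point, physical_colours, virtual_colour):
    """physical_colours maps engine-space points to RGB tuples."""
    return physical_colours.get(point, virtual_colour)

colours = {(1.0, 2.0, 3.0): BLACK}            # black pixel at (Xp, Yp, Zp)
surface = colour_point((1.0, 2.0, 3.0), colours, WHITE)
```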
- the shader 139 may further assign various colours to the visible surfaces of the model 713 of the virtual feature 413 captured by the virtual camera 301.
- the processor 130 may call the shader 139 to colour the virtual image stream 711 without reference to the colouring for the physical features from the physical image stream 401.
- the structure of the physical features may be depicted in the virtual image stream 711
- the surface colouration and, optionally, even texturing, of the physical elements may be depicted partially or entirely independently of the colouration and/or texture of the physical elements in the physical image stream 401.
- the processor 130 may call the graphics engine 137 to alter the models of the physical elements or omit modelling other physical elements so that the structures in the virtual image stream are partially, but not entirely, related to physical features captured within the physical image stream 401.
- Fig. 8 illustrates steps in a method for incorporating the physical image stream 401 into the virtual image stream 411 as previously described with reference to Fig. 7.
- the image camera 123 captures the physical image stream 401 depicting the physical environment within its field of view
- the depth camera 127 captures depth information of the physical environment within its field of view.
- the processor 130 obtains the physical image stream 401 and the depth information and aligns the physical image 401 and depth information, as previously described, to assign coordinates to regions within the physical image stream 401.
- the processor 130 translates the assigned coordinates into graphics engine coordinates if necessary.
- the processor 130 calls the graphics engine to model the virtual features 413 and physical features as 3D models within a virtual environment.
- the processor 130 also calls the graphics engine 137 to define a virtual camera 301 having a field of view corresponding to the field of view of the image camera 123.
- the graphics engine 137 obtains the assigned coordinates and renders the 3D models.
- the graphics engine 137 determines which regions of the models are visible to the virtual camera 301.
- the shader 139 obtains the modelled features and colours the visible portions of the physical features according to their corresponding colours in the physical image stream 401 and the visible portions of the virtual features according to parameters for the virtual environment as defined, for example, by the processor 130.
- the coloured depiction forms the virtual image stream 711.
- the shader 139 provides the virtual image stream 711 to the display 122.
- in the event that a physical feature and a virtual feature would occupy the same point in the virtual image stream, the processor calls the shader 139 to resolve such conflicts in favour of displaying the pixel of the physical feature. This resolution may, for example, enhance user safety by prioritising display of physical features which may pose safety hazards or physical obstacles which the user must navigate when moving throughout a physical environment.
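The safety-motivated tie-break can be sketched as follows; the function and its return labels are illustrative, not the patent's terminology.

```python
# Sketch of the safety tie-break: when physical and virtual pixels contend
# for the same point, equal depths resolve to the physical feature so real
# obstacles are never hidden from the user.

def resolve(physical_z, virtual_z):
    """Decide which pixel to display at a contested (x, y) point."""
    if physical_z <= virtual_z:   # ties favour the physical feature
        return "physical"
    return "virtual"

choice = resolve(physical_z=2.0, virtual_z=2.0)
```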
Abstract
Systems and methods are described for rendering an image stream combining real and virtual elements for display to a user equipped with a head mounted display as an augmented reality. The head mounted display comprises: an image camera to capture a physical image stream of the physical environment surrounding the user; a depth camera for capturing depth information for the physical environment; a processor for receiving the depth information and the physical image stream and associating the depth information with regions in the physical image stream; a graphics processing unit having a graphics engine to render a virtual image stream comprising virtual features alongside the physical image stream; and a display to display the virtual image stream to the user. In use, the processor calls the graphics engine to incorporate the physical image stream such that the relative depths of the virtual and physical features are represented.
Description
SYSTEMS AND METHODS FOR INCORPORATING A REAL IMAGE STREAM IN A VIRTUAL IMAGE STREAM

TECHNICAL FIELD
[0001] The following relates generally to wearable technologies, and more specifically to systems and methods for incorporating a real image stream in a virtual image stream.
BACKGROUND
[0002] The range of applications for augmented reality (AR) and virtual reality (VR) visualization has increased with the advent of wearable technologies and 3-dimensional (3D) rendering techniques. AR and VR exist on a continuum of mixed reality visualization.
SUMMARY
[0003] A system is provided for generating an augmented reality image stream combining virtual features and a physical image stream. The system comprises: (a) an image camera mounted to a head mounted display, the image camera configured to capture the physical image stream of a physical environment within its field of view; (b) a depth camera mounted to the head mounted display, the depth camera configured to capture depth information for the physical environment within its field of view; (c) a processor configured to: (i) obtain the physical image stream and the depth information; (ii) align the physical image stream with the depth information; (d) a graphics engine configured to: (i) model the virtual features in a virtual map; (ii) model the physical features in the virtual map based on the depth information; (iii) capture the modelled virtual and physical features within a field of view of a virtual camera, the field of view of the virtual camera corresponding to the field of view of the image camera; and (e) a shader configured to generate a virtual image stream incorporating the physical image stream by: (i) colouring the visible portions of the modelled physical features by assigning the colour values from the physical image stream to their corresponding locations on the visible portions of the modelled physical features; and (ii) colouring the visible portions of the modelled virtual features according to parameters for the virtual map.
[0004] The system may further display the virtual image stream incorporating the physical image stream, wherein the head mounted display system further comprises a display to display the virtual image stream.
[0005] The depth camera and image camera may be jointly provided by a stereo camera, and the processor may be configured to determine depth information from the stereo camera. The processor may be further configured to provide the parameters for the virtual map to the shader.
[0006] A method is provided for generating an augmented reality image stream combining virtual features and a physical image stream. The method comprises: obtaining a physical image stream of a physical environment within a field of view of an image camera; obtaining depth information for the physical image stream; modelling the physical environment in the physical image stream to a virtual map according to the depth information; modelling virtual features to the virtual map; obtaining a virtual image stream of the virtual map within a field of view of a virtual camera, the field of view of the virtual camera corresponding to the field of view of the image camera; colouring visible portions of the modelled physical environment in the virtual image stream by assigning colour values from corresponding portions of the physical image stream; and colouring the visible portions of the modelled virtual features according to parameters for the virtual map.
[0007] The method may further comprise aligning the depth information to the physical image stream. The method may still further comprise translating the depth information from physical coordinates to virtual map coordinates.
[0008] A method is provided for generating an augmented reality image combining virtual features and a physical image. The method comprises: obtaining a physical image of a physical environment within a field of view of an image camera; obtaining depth information for the physical image; modelling virtual features to a virtual map; capturing the virtual features from a virtual camera having a field of view corresponding to the field of view of the image camera; pasting the physical image to a rear clipping plane in the field of view of the virtual camera;
[0009] A method is provided for generating an augmented reality image combining virtual features and a physical image. The method comprises: capturing a physical image of the physical environment in a physical field of view; obtaining depth information for the physical image; modelling at least one virtual feature to be placed in a virtual view frustum overlaying the physical field of view, the view frustum having a virtual depth limited by a far clipping plane; providing the physical image and the at least one virtual feature to a rendering engine defining a notional virtual camera having the virtual view frustum; and instructing the rendering engine to: (i) apply the physical image at the far clipping plane; and (ii) render points of the virtual feature for which the depth information indicates that no physical feature has a depth less than the virtual depth of the points of the virtual feature.
[0010] These and other aspects are contemplated and described herein. It will be appreciated that the foregoing summary sets out representative aspects of systems and methods for incorporating a real (i.e., physical) image stream in a virtual image stream, to assist skilled readers in understanding the following detailed description.
DESCRIPTION OF THE DRAWINGS
[0011] A greater understanding of the embodiments will be had with reference to the Figures, in which:
[0012] Fig. 1 illustrates a head mounted display for generating and displaying AR to a user thereof;
[0013] Fig. 2 is a schematic diagram of components of the head mounted display illustrated in Fig. 1;
[0014] Fig. 3 illustrates a field of view of a notional camera simulated by a graphics engine configured to render a virtual image stream depicting an AR for display by a head mounted display;
[0015] Fig. 4 illustrates an exemplary incorporation of a real image stream into a virtual image stream generated by a graphics engine;
[0016] Fig. 5 illustrates an exemplary scenario in which a graphics engine incorporates a physical image stream into a virtual image stream;
[0017] Fig. 6 illustrates a method for incorporating a physical image stream into a virtual image stream as exemplified in Fig. 5;
[0018] Fig. 7 illustrates another exemplary scenario in which a graphics engine incorporates a physical image stream into a virtual image stream; and
[0019] Fig. 8 illustrates another method for incorporating a physical image stream into a virtual image stream as exemplified in Fig. 7.
[0020] DETAILED DESCRIPTION
[0021] It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practised without
these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0022] It will be appreciated that various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: "or" as used throughout is inclusive, as though written "and/or"; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description.
[0023] It will be appreciated that any module, unit, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non- removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Further, unless the context clearly indicates otherwise, any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors.
[0024] The present disclosure is directed to systems and methods for augmented reality (AR). However, the term "AR" as used herein may encompass several meanings. In the present disclosure, AR includes: the interaction by a user with real physical features and structures along with virtual features and structures overlaid thereon; and the interaction by a user with a fully virtual set of features and structures that are generated to include renderings of physical features and structures and that may comply with scaled versions of physical environments to which virtual features and structures are applied, which may alternatively be referred to as an "enhanced virtual reality". Further, the virtual features and structures could be dispensed with altogether, and the AR system may display to the user a version of the physical environment which solely comprises an image stream of the physical environment. Finally, a skilled reader will also appreciate that by discarding aspects of the physical environment, the systems and methods presented herein are also applicable to virtual reality (VR) applications, which may be understood as "pure" VR. For the reader's convenience, the following may refer to "AR" but is understood to include all of the foregoing and other variations recognized by the skilled reader.
[0025] Referring now to Fig. 1, an exemplary head mounted display (HMD) 12 configured as a helmet is shown;
however, other configurations are contemplated. The HMD 12, which may be worn by a user occupying a physical environment, may comprise: a processor 130 in communication with one or more of the following components: (i) a graphics processing unit 133 having a graphics engine to generate a virtual image stream representing AR; (ii) a memory 131 to store data used and generated by the processor 130; (iii) a depth camera 127 to capture depth information for the physical environment within its field of view; (iv) an image camera 123 to capture a physical image stream of the physical environment within its field of view; (v) a display 122 to display the virtual image stream to the user; and (vi) a power source 103, such as, for example, a battery, to provide power to the components.
[0026] Fig. 2 illustrates the components of the HMD 12 shown in Fig. 1 in schematic form. The memory 131 is accessible by the processor 130. The processor communicates with the image camera 123 and the depth camera 127 to obtain, respectively, a physical image stream (i.e., a "real image stream") and depth information for the physical environment. The processor is further in communication with a graphics processing unit 133 (GPU) having a graphics engine 137 and a graphics engine plugin 135. The graphics engine plugin may facilitate communication between the processor 130 and the graphics engine 137. The graphics engine 137 obtains the physical image stream and depth information associated with the physical image stream from the processor 130, either directly, or as data stored by the processor 130 to the memory 131.
The graphics engine 137 generates a virtual image stream and provides the virtual image stream to the display 122. The foregoing components are powered by the power source 103. Although the power source 103 is shown as being electrically coupled to the processor 130, the power source may be electrically coupled directly to the remaining ones of the foregoing components.
[0027] In embodiments, the processor provides the depth information and the physical image stream to the graphics engine 137, for example, as a pixel map and a depth map, respectively, and the graphics engine 137 uses the depth information to generate models within a virtual environment of the captured physical environment alongside virtual features. The shader 139 obtains the models from the graphics engine 137 and colours the models to provide a virtual image stream.
[0028] The image camera 123 may be any suitable image camera, such as, for example, a stereo camera or a monovision camera, suited to capture the physical environment within its field of view to generate a physical image stream. The field of view of the image camera 123 is defined by parameters which may be continuously provided from the image camera 123 to the processor 130, or which may be predetermined and stored in the memory 131. For example, if the image camera 123 has a fixed field of view, the parameters of the field of view are fixed and may be stored in the memory 131 and accessible to the GPU 133 and processor 130.
[0029] The depth camera 127 may be any suitable depth camera or scanner, such as, for example, a range finder, a time-of-flight camera, a LIDAR scanner, a radar scanner or a scanning laser range finder operable to capture depth information for the physical environment surrounding the HMD and provide the depth information to the processor 130. The field of view of the depth camera 127 intersects at least a region of the field of view of the image camera 123. Preferably, the field of view of the depth camera 127 substantially overlaps the field of view of the image camera 123. In embodiments, the image camera 123 may be a stereo camera operable to provide depth information based on epipolar geometry, such that the depth camera 127 may be redundant. If the image camera 123 is a stereo camera and the processor 130 is configured to calculate depth information based on the epipolar geometry of the image camera 123, then the processor may obtain sufficient depth information without requiring depth information from the depth camera 127, and the depth camera 127 may be omitted in such embodiments.
[0030] In use, the processor 130 obtains the physical image stream from the image camera 123 and the depth information from the depth camera 127. The processor 130 is configured to align
the depth information from the depth camera 127 with the physical image stream captured by the image camera 123 such that, for any region, such as a pixel, within the physical image stream, the processor 130 may determine the corresponding position of the region in world coordinates relative to the image camera 123. The processor 130 aligns the physical image stream with the depth information according to any suitable calibration technique. The image camera 123 and the depth camera 127 are mounted to the HMD 12 at a fixed position and orientation with respect to each other. The spatial relationship between the depth camera 127 and image camera 123 may be defined in the memory 131 for use by the processor 130 in performing the calibration. Calibration may be according to any suitable calibration technique, such as the technique described in Canessa et al., "Calibrated depth and color cameras for accurate 3D interaction in a stereoscopic augmented reality environment", Journal of Visual Communication and Image Representation, Volume 25, Issue 1, January 2014, Pages 227-237. If the image camera 123 is a stereo camera, the processor 130 may not calibrate the depth camera 127 to the image camera 123, since the image camera 123 may provide sufficient depth information on its own.
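The alignment described above can be sketched as a pinhole reprojection under the fixed extrinsics stored in memory. The intrinsic matrices, rotation `R`, and translation `t` below are illustrative assumptions for a minimal sketch, not values from the disclosure:

```python
import numpy as np

# Hypothetical fixed extrinsics between the depth camera and the image
# camera (both are assumptions for illustration).
R = np.eye(3)                      # cameras assumed parallel
t = np.array([0.05, 0.0, 0.0])     # depth camera assumed offset 5 cm laterally

# Hypothetical pinhole intrinsics (focal lengths and principal point).
K_depth = np.array([[525.0, 0.0, 320.0],
                    [0.0, 525.0, 240.0],
                    [0.0, 0.0, 1.0]])
K_image = K_depth.copy()           # assume identical intrinsics

def depth_pixel_to_image_pixel(u, v, z):
    """Map a depth-camera pixel (u, v) with depth z (metres) to the
    corresponding image-camera pixel, returning (u', v', z')."""
    # Back-project the depth pixel into a 3D point in depth-camera coords.
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Transform into the image camera's frame using the fixed extrinsics.
    p_image = R @ p_depth + t
    # Project the 3D point into the image camera.
    uvw = K_image @ p_image
    return uvw[0] / uvw[2], uvw[1] / uvw[2], p_image[2]

u2, v2, z2 = depth_pixel_to_image_pixel(320.0, 240.0, 2.0)
```

After this mapping, each image-stream pixel carries a depth value, which is the per-region association the processor 130 performs.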
[0031] If the graphics engine 137 defines its own coordinate system (the "graphics engine coordinate system") to render the virtual image stream, the processor 130 may transform the world coordinates associated with the pixels to the graphics engine coordinate system using a suitable transformation technique. The processor 130 may store the transformed and/or untransformed coordinates and pixel values to the memory 131 for subsequent retrieval by the graphics engine 137.
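Such a coordinate translation is typically a single homogeneous transform. The matrix `T` below, combining a hypothetical 1 m vertical offset with a handedness flip on the Z-axis, is an assumed example of a world-to-engine mapping, not one prescribed by the disclosure:

```python
import numpy as np

# Hypothetical world -> graphics-engine transform: the engine origin is
# assumed to sit 1 m above the world origin, with Z negated for a
# left-handed engine convention.
T = np.array([[1.0, 0.0,  0.0,  0.0],
              [0.0, 1.0,  0.0, -1.0],
              [0.0, 0.0, -1.0,  0.0],
              [0.0, 0.0,  0.0,  1.0]])

def world_to_engine(p):
    """Transform a world-coordinate point to graphics engine coordinates."""
    x, y, z, _ = T @ np.array([*p, 1.0])
    return np.array([x, y, z])

p_engine = world_to_engine([0.0, 1.0, 2.0])
```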
[0032] In use, the processor 130 calls the graphics engine 137, such as, for example, the Unity 3D™ engine, to generate a virtual image stream comprising rendered graphics representing a 3D virtual reality environment. The display 122 of the HMD 12 obtains the virtual image stream and displays it to the user. The virtual image stream may comprise computer generated imagery (CGI), such as, for example, virtual characters, virtual environments or virtual effects representing a 3D VR.
[0033] Referring now to Fig. 3, the graphics engine 137 may simulate a notional camera 301 (referred to herein as a virtual camera) to capture the virtual image stream as it would be captured by a real camera occupying the virtual environment. The virtual camera 301 may have the properties of a mono or stereo camera, in accordance with the image camera 123 of the HMD 12. It will be appreciated that a stereo virtual camera may render a virtual image stream which the user will perceive as 3D provided the display 122 of the HMD 12 is suitably configured
to display a 3D image stream. As shown, the virtual camera 301 has a notional field of view (in the case of a stereo camera, the notional field of view is a combination of the fields of view of each lens) defined by the pyramid projecting outwardly from the virtual camera 301 and having its apex at the centre of the lens 303 of the virtual camera, which may be defined as being located at Xc, Yc, Zc = 0, 0, 0 in the graphics engine coordinate system. The virtual camera 301 further comprises a view frustum, which is defined as the region lying within the field of view between a far clipping plane 313 and a near clipping plane 311. The virtual image stream only comprises virtual features lying within the view frustum. If, for example, a ray of light traverses the virtual world, only the portion lying within the view frustum would be depicted in the virtual image stream. The graphics engine 137 is configurable by the processor 130 to define a view frustum having specific properties. The processor 130 preferably configures the graphics engine to define a view frustum defined by a field of view having parameters corresponding to the field of view of the image camera 123. As previously described, the parameters for the field of view of the image camera 123 may be obtained from the image camera 123, or may be predefined in the memory 131. In either event, the parameters are obtained by the processor 130 to configure the graphics engine 137 so that the graphics engine 137 accordingly defines the field of view of the virtual camera 301.
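Matching the virtual camera's field of view to the image camera's can be sketched from pinhole parameters; the focal length, image height, and clipping distances below are assumed values for illustration only:

```python
import math

# Hypothetical image-camera parameters (assumptions, not disclosed values).
fy, height = 525.0, 480            # focal length (px) and image rows
near, far = 0.1, 100.0             # assumed near / far clipping distances

# The virtual camera's vertical field of view is derived from the image
# camera's intrinsics, so both cameras frame the same scene.
vertical_fov = 2.0 * math.degrees(math.atan((height / 2.0) / fy))

def in_view_frustum(z):
    """A point is renderable only if its depth lies between the near and
    far clipping planes (lateral frustum bounds omitted for brevity)."""
    return near <= z <= far
```

A graphics engine would be configured with `vertical_fov` (and the matching aspect ratio) so that the view frustum corresponds to the image camera's field of view, as paragraph [0033] describes.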
[0034] In order to provide the user with an AR experience, the graphics engine 137 may incorporate the physical image stream into the virtual image stream alongside virtual elements. The processor 130 may therefore route the physical image stream from the image camera 123 to the graphics engine 137 for incorporation into the virtual image stream. In various cases, the physical image stream may depict features which are dispersed throughout 3 dimensions of the physical environment. Certain available graphics engines incorporate the physical image stream by pasting the physical image stream to a far clipping plane as a 2D background texture. This has generally been done as the physical image stream has traditionally been used to display only bounds of the physical environment. However, it has been found that where the physical environment includes physical features not at the bounds of the physical environment, user perception of the displayed virtual image stream is often hindered by such treatment since any virtual features within the virtual image stream always appear to the user as though placed in front of the physical image stream.
[0035] Fig. 4 illustrates an exemplary scenario in which the incorporation of a physical image stream 401 into a virtual image stream 411 as a background texture pasted to the far clipping plane 313 may provide an inaccurate representation to a user. In the exemplary scenario, the
graphics engine 137 renders a virtual feature 413 located in the X-Y plane at Z=Zv, i.e., between the near clipping plane 311 and the far clipping plane 313. The virtual feature 413 is shown as having a depth of one pixel, but it will be appreciated that a virtual feature may have any depth. The physical image stream 401 depicts a person 403 standing in front of a background feature 405. As determined from the depth information for the physical image stream 401, the distance in the physical environment of the person 403 relative to the background feature 405 corresponds to a distance in graphics engine coordinates that is greater than the distance between the virtual feature 413 and the far clipping plane 313, such that the person 403 should appear in the virtual image stream 411 as though located closer to the virtual camera 301 than the virtual feature 413, which lies at Z=Zv, but further from the virtual camera 301 than the near clipping plane 311. However, since the graphics engine 137 merely pastes the entire physical image stream 401 as a background texture on the far clipping plane 313, i.e., behind the virtual feature 413, the person 403 actually appears to be standing behind the virtual feature 413 in the virtual image stream 411.
[0036] Referring now to Fig. 5, the processor calls the graphics engine 137 to selectively display or not display pixels of the virtual feature 413 and the physical image stream 401 in the virtual image stream 511 so that, for any two or more pixels within the view frustum having identical X and Y coordinates, the graphics engine selects display of the pixel having the lower Z value, according to the method illustrated in Fig. 6. For example, the physical image stream 401 may be understood as comprising a background element 405 and a person 403 standing in front of the background element 405 by a distance in world coordinates (as determined by the depth camera of the HMD), which translates to a distance in graphics engine coordinates that is greater than the distance between the virtual feature 413 and the far clipping plane 313. The representation 403' illustrates the relative position of the person 403 with respect to the background feature 405 in graphics engine coordinates; however, the graphics engine pastes the entire physical image stream 401, including the person 403, to the far clipping plane 313. The processor 130 or the memory 131 provides the Z coordinates corresponding to the elements within the physical image stream 401 to the graphics engine 137, as determined by the processor 130 based on the depth information, such that the graphics engine 137 can determine that the pixels representing the person 403 lie in the X-Y plane at Zp, i.e., closer to the virtual camera 301 than the virtual feature 413. Within the view frustum, therefore, there may be a plurality of pixels at any point Xn, Yn. Rather than preferring the pixels of the virtual feature 413 over the pixels of the physical image stream 401, as in Fig. 4, the graphics engine 137 assigns to any point Xn, Yn having two or more pixels the colour of whichever feature has the lowest Z value at that point, i.e., the pixel which is nearest the virtual camera 301. If, at the point Xn, Yn, a physical feature is nearer the virtual camera 301 than a virtual feature, the graphics engine 137 obtains the colour of the physical feature at the corresponding location in the physical image stream 401 and assigns that colour to the point Xn, Yn in the virtual image stream.
[0037] Fig. 6 illustrates steps in a method for incorporating the physical image stream 401 into the virtual image stream 411 as previously described with reference to Fig. 5. At block 601, the image camera 123 captures the physical image stream 401 depicting the physical environment within its field of view, and at block 603 the depth camera 127 captures depth information of the physical environment within its field of view. At block 605, the processor 130 obtains the physical image stream 401 and the depth information and aligns the physical image stream 401 and the depth information, as previously described, to assign coordinates to features within the physical image stream 401. At block 607, the processor 130 translates the assigned coordinates into graphics engine coordinates if necessary. At block 609, the processor 130 calls the graphics engine to render the virtual image stream 411, to incorporate the physical image stream 401 into the virtual image stream 411 and to display whichever of any overlapping pixels is nearer the virtual camera 301 along the Z-axis. The processor 130 also calls the graphics engine 137 to define a virtual camera 301 having a field of view corresponding to the field of view of the image camera 123. At block 611, the graphics engine 137 obtains the physical image stream 401 and the assigned coordinates while rendering virtual features 413. At block 613, the graphics engine 137 determines which features in the physical image stream 401 overlap with the virtual features 413 from the point of view of the virtual camera 301. At block 615, the graphics engine 137 determines which feature within the overlap is closer to the virtual camera 301 and includes the pixels for that feature in the virtual image stream while excluding the overlapping pixels for the feature further from the virtual camera. At block 617, the graphics engine 137 provides the virtual image stream 411 to the display 122.
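The per-pixel selection at blocks 613-615 amounts to a depth test between physical and virtual features sharing the same X, Y location. A minimal sketch with tiny 2x2 buffers (all colour and depth values are hypothetical):

```python
import numpy as np

# Illustrative 2x2 buffers: a blue virtual feature at Z=5, and a physical
# stream whose top row (the "person") is nearer than the virtual feature
# while the bottom row (the "background") is further away.
virtual_rgb = np.zeros((2, 2, 3)); virtual_rgb[..., 2] = 255    # blue
virtual_z = np.full((2, 2), 5.0)
physical_rgb = np.zeros((2, 2, 3)); physical_rgb[..., 0] = 255  # red
physical_z = np.array([[2.0, 2.0],    # person: nearer than Z=5
                       [9.0, 9.0]])   # background: further than Z=5

# For every overlapping pixel, display whichever feature has the lower
# Z value, i.e. is nearer the virtual camera.
nearer = physical_z < virtual_z
composite = np.where(nearer[..., None], physical_rgb, virtual_rgb)
```

The top row of `composite` takes the physical colour and the bottom row keeps the virtual colour, so the person correctly occludes the virtual feature.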
[0038] Referring now to Fig. 7, aspects of another method to incorporate the physical image stream 401 into a virtual image stream 711 are shown. As previously described, the processor 130 obtains depth information from the depth camera 127 for the physical environment captured within the physical image stream. The processor 130 associates the depth information to corresponding regions within the physical image stream and either provides the associated information directly to the graphics engine 137 or stores it to the memory 131 for subsequent retrieval by the graphics engine 137. The processor 130 calls the graphics engine 137 to model
the physical environment captured within the physical image stream 401 in the virtual environment using the depth information for the physical image stream 401. The graphics engine 137 models each feature from the physical image stream 401 within the virtual environment as a 3D model, such as, for example, a point cloud, polygonal mesh or triangular mesh. The graphics engine further models the virtual feature 413 in the virtual environment so that all physical and virtual features are modelled in 3D within the virtual environment. For example, the graphics engine 137 models the person 403 as a 3D model 703, the feature 405 as a 3D model 705, and the virtual feature 413 as a 3D model 713. The virtual camera 301 captures the virtual and physical features within its view frustum. For example, the graphics engine 137 may model the person 403 in the virtual environment in 3D based on the depth information provided by the depth camera. The model 703 of the person 403 appears in grey for illustrative purposes. Once the graphics engine 137 has modelled all features within the view frustum of the virtual camera 301, the graphics engine 137 determines which regions of the modelled virtual and physical features would be visible when captured by the virtual camera 301. The shader 139 then assigns colours to the visible regions. When colouring the visible regions of the physical elements, the shader 139 obtains the colour values associated to the locations by the processor 130, as previously described. For example, if the processor 130 associates the colour black to a pixel located in graphics engine coordinates at Xp, Yp, Zp, the shader 139 assigns the colour black to the surface of the model 703 of the person 403 where the model 703 intersects that point. The shader 139 may further assign various colours to the visible surfaces of the model 713 of the virtual feature 413 captured by the virtual camera 301.
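The modelling step above can be sketched as back-projecting each aligned depth pixel into a coloured 3D point, yielding a point cloud the graphics engine could then mesh and shade. The intrinsics and image size below are assumptions chosen for a tiny, runnable example:

```python
import numpy as np

# Hypothetical intrinsics for a 3x3 aligned image (assumed values).
K = np.array([[525.0, 0.0, 1.0],
              [0.0, 525.0, 1.0],
              [0.0, 0.0, 1.0]])

depth = np.full((3, 3), 2.0)          # aligned depth map, metres
colour = np.full((3, 3, 3), 128)      # aligned physical image (mid-grey)

# Back-project every pixel (u, v, depth) into a 3D point and carry its
# physical-image colour with it: each cloud row is [X, Y, Z, R, G, B].
us, vs = np.meshgrid(np.arange(3), np.arange(3))
pixels = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
points = (np.linalg.inv(K) @ pixels) * depth.reshape(-1)
cloud = np.concatenate([points.T, colour.reshape(-1, 3)], axis=1)
```

A shader colouring the visible surface of such a model would sample exactly these per-point colours, as described for the model 703 above.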
[0039] Alternatively, the processor 130 may call the shader 139 to colour the virtual image stream 711 without reference to the colouring for the physical features from the physical image stream 401. Although the structure of the physical features may be depicted in the virtual image stream 711, the surface colouration and, optionally, even texturing, of the physical elements may be depicted partially or entirely independently of the colouration and/or texture of the physical elements in the physical image stream 401. Further alternatively, the processor 130 may call the graphics engine 137 to alter the models of the physical elements or omit modelling other physical elements so that the structures in the virtual image stream are partially, but not entirely, related to physical features captured within the physical image stream 401.
[0040] Fig. 8 illustrates steps in a method for incorporating the physical image stream 401 into the virtual image stream 711 as previously described with reference to Fig. 7. At block 801, the image camera 123 captures the physical image stream 401 depicting the physical environment
within its field of view, and at block 803 the depth camera 127 captures depth information of the physical environment within its field of view. At block 805, the processor 130 obtains the physical image stream 401 and the depth information and aligns the physical image stream 401 and the depth information, as previously described, to assign coordinates to regions within the physical image stream 401. At block 807, the processor 130 translates the assigned coordinates into graphics engine coordinates if necessary. At block 809, the processor 130 calls the graphics engine to model the virtual features 413 and physical features as 3D models within a virtual environment. The processor 130 also calls the graphics engine 137 to define a virtual camera 301 having a field of view corresponding to the field of view of the image camera 123. At block 811, the graphics engine 137 obtains the assigned coordinates and renders the 3D models. At block 813, the graphics engine 137 determines which regions of the models are visible to the virtual camera 301. At block 815, the shader 139 obtains the modelled features and colours the visible portions of the physical features according to their corresponding colours in the physical image stream 401 and the visible portions of the virtual features according to parameters for the virtual environment as defined, for example, by the processor 130. The coloured depiction forms the virtual image stream 711. At block 817, the shader 139 provides the virtual image stream 711 to the display 122.
[0041] In the event that the graphics engine models a virtual feature having the same 3D coordinates as a physical feature, a conflict will arise such that the shader 139 must determine whether to display the colour of the virtual feature or the physical feature. In embodiments, the processor calls the shader 139 to resolve such conflicts in favour of displaying the pixel of the physical feature. This resolution may, for example, enhance user safety by prioritising display of physical features which may pose safety hazards or physical obstacles which the user must navigate when moving throughout a physical environment.
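The conflict resolution of [0041] reduces to a per-pixel depth comparison in which ties go to the physical feature. A minimal sketch (the function name and argument shapes are illustrative, not the patent's):

```python
def resolve_conflict(virtual_px, virtual_depth, physical_px, physical_depth):
    """Per-pixel conflict resolution as described in [0041]: when a virtual
    feature and a physical feature occupy the same depth, the physical pixel
    is displayed, prioritising real obstacles and safety hazards.
    """
    if physical_depth <= virtual_depth:  # '<=' resolves ties toward physical
        return physical_px
    return virtual_px
```

The same comparison applied over every pixel of the two rendered streams yields the composited virtual image stream 711.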
[0042] Although the foregoing has been described with reference to certain specific
embodiments, various modifications thereto will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the appended claims. The entire disclosures of all references recited above are incorporated herein by reference.
Claims
1. A system for generating an augmented reality image stream combining virtual features and a physical image stream, the system comprising:
a) an image camera mounted to a head mounted display, the image camera
configured to capture the physical image stream of a physical environment within its field of view;
b) a depth camera mounted to the head mounted display, the depth camera configured to capture depth information for the physical environment within its field of view;
c) a processor configured to:
i) obtain the physical image stream and the depth information;
ii) align the physical image stream with the depth information;
d) a graphics engine configured to:
i) model the virtual features in a virtual map;
ii) model the physical features in the virtual map based on the depth
information;
iii) capture the modelled virtual and physical features within a field of view of a virtual camera, the field of view of the virtual camera corresponding to the field of view of the image camera; and
e) a shader configured to generate a virtual image stream incorporating the physical image stream by:
i) colouring the visible portions of the modelled physical features by assigning the colour values from the physical image stream to their corresponding locations on the visible portions of the modelled physical features; and
ii) colouring the visible portions of the modelled virtual features according to parameters for the virtual map.
2. The system of claim 1 further for displaying the virtual image stream incorporating the physical image stream, the head mounted display system further comprising a display to display the virtual image stream.
3. The system of claim 1, wherein the depth camera and image camera are jointly provided by a stereo camera, and further wherein the processor is configured to determine depth information from the stereo camera.
4. The system of claim 1, wherein the processor is further configured to provide the parameters for the virtual map to the shader.
5. A method for generating an augmented reality image stream combining virtual features and a physical image stream, the method comprising:
a) obtaining a physical image stream of a physical environment within a field of view of an image camera;
b) obtaining depth information for the physical image stream;
c) modelling the physical environment in the physical image stream to a virtual map according to the depth data;
d) modelling virtual features to the virtual map;
e) obtaining a virtual image stream of the virtual map within a field of view of a
virtual camera having a field of view corresponding to the field of view of the image camera;
f) colouring visible portions of the modelled physical environment in the virtual image stream by assigning colour values from corresponding portions of the physical image stream; and
g) colouring the visible portions of the modelled virtual features according to
parameters for the virtual map.
6. The method of claim 5, further comprising aligning the depth information to the physical image stream.
7. The method of claim 5, further comprising translating the depth information from physical coordinates to virtual map coordinates.
8. A method for generating an augmented reality image combining virtual features and a physical image, the method comprising:
a) obtaining a physical image of a physical environment within a field of view of an image camera;
b) obtaining depth information for the physical image;
c) modelling virtual features to a virtual map;
d) capturing the virtual features from a virtual camera having a field of view
corresponding to the field of view of the image camera;
e) pasting the physical image to a rear clipping plane in the field of view of the
virtual camera.
9. The method of claim 8, further comprising aligning the depth information to the physical image.
10. The method of claim 8, further comprising translating the depth information from physical coordinates to virtual map coordinates.
11. A method for generating an augmented reality image combining virtual features and a physical image, the method comprising:
a) capturing a physical image of the physical environment in a physical field of view;
b) obtaining depth information for the physical image;
c) modelling at least one virtual feature to be placed in a virtual view frustum
overlaying the physical field of view, the view frustum having a virtual depth limited by a far clipping plane;
d) providing the physical image and the at least one virtual feature to a rendering engine defining a notional virtual camera having the virtual view frustum; and
e) instructing the graphics engine to:
i) apply the physical image at the far clipping plane; and
ii) render points of the virtual feature for which the depth information
indicates that no physical feature has a depth less than the virtual depth of the points of the virtual feature.
12. The method of claim 11, further comprising aligning the depth information to the physical image.
13. The method of claim 11, further comprising translating depth information from a coordinate system of the physical field of view to a coordinate system of the virtual field of view.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461941063P | 2014-02-18 | 2014-02-18 | |
US61/941,063 | 2014-02-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015123775A1 true WO2015123775A1 (en) | 2015-08-27 |
Family
ID=53877478
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2015/050124 WO2015123775A1 (en) | 2014-02-18 | 2015-02-18 | Systems and methods for incorporating a real image stream in a virtual image stream |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015123775A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020000986A1 (en) * | 1998-02-17 | 2002-01-03 | Sowizral Henry A. | Mitigating the effects of object approximations |
US20070038944A1 (en) * | 2005-05-03 | 2007-02-15 | Seac02 S.R.I. | Augmented reality system with real marker object identification |
US20120120200A1 (en) * | 2009-07-27 | 2012-05-17 | Koninklijke Philips Electronics N.V. | Combining 3d video and auxiliary data |
US20120139906A1 (en) * | 2010-12-03 | 2012-06-07 | Qualcomm Incorporated | Hybrid reality for 3d human-machine interface |
US20130120365A1 (en) * | 2011-11-14 | 2013-05-16 | Electronics And Telecommunications Research Institute | Content playback apparatus and method for providing interactive augmented space |
Cited By (91)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10528135B2 (en) | 2013-01-14 | 2020-01-07 | Ctrl-Labs Corporation | Wearable muscle interface systems, devices and methods that interact with content displayed on an electronic display |
US11009951B2 (en) | 2013-01-14 | 2021-05-18 | Facebook Technologies, Llc | Wearable muscle interface systems, devices and methods that interact with content displayed on an electronic display |
US11921471B2 (en) | 2013-08-16 | 2024-03-05 | Meta Platforms Technologies, Llc | Systems, articles, and methods for wearable devices having secondary power sources in links of a band for providing secondary power in addition to a primary power source |
US11644799B2 (en) | 2013-10-04 | 2023-05-09 | Meta Platforms Technologies, Llc | Systems, articles and methods for wearable electronic devices employing contact sensors |
US11079846B2 (en) | 2013-11-12 | 2021-08-03 | Facebook Technologies, Llc | Systems, articles, and methods for capacitive electromyography sensors |
US11666264B1 (en) | 2013-11-27 | 2023-06-06 | Meta Platforms Technologies, Llc | Systems, articles, and methods for electromyography sensors |
US10684692B2 (en) | 2014-06-19 | 2020-06-16 | Facebook Technologies, Llc | Systems, devices, and methods for gesture identification |
US10012829B2 (en) | 2014-06-25 | 2018-07-03 | Thalmic Labs Inc. | Systems, devices, and methods for wearable heads-up displays |
US10054788B2 (en) | 2014-06-25 | 2018-08-21 | Thalmic Labs Inc. | Systems, devices, and methods for wearable heads-up displays |
US9766449B2 (en) | 2014-06-25 | 2017-09-19 | Thalmic Labs Inc. | Systems, devices, and methods for wearable heads-up displays |
US10067337B2 (en) | 2014-06-25 | 2018-09-04 | Thalmic Labs Inc. | Systems, devices, and methods for wearable heads-up displays |
US9874744B2 (en) | 2014-06-25 | 2018-01-23 | Thalmic Labs Inc. | Systems, devices, and methods for wearable heads-up displays |
US9958682B1 (en) | 2015-02-17 | 2018-05-01 | Thalmic Labs Inc. | Systems, devices, and methods for splitter optics in wearable heads-up displays |
US10191283B2 (en) | 2015-02-17 | 2019-01-29 | North Inc. | Systems, devices, and methods for eyebox expansion displays in wearable heads-up displays |
US9989764B2 (en) | 2015-02-17 | 2018-06-05 | Thalmic Labs Inc. | Systems, devices, and methods for eyebox expansion in wearable heads-up displays |
US10031338B2 (en) | 2015-02-17 | 2018-07-24 | Thalmic Labs Inc. | Systems, devices, and methods for eyebox expansion in wearable heads-up displays |
US10613331B2 (en) | 2015-02-17 | 2020-04-07 | North Inc. | Systems, devices, and methods for splitter optics in wearable heads-up displays |
US10197805B2 (en) | 2015-05-04 | 2019-02-05 | North Inc. | Systems, devices, and methods for eyeboxes with heterogeneous exit pupils |
US10175488B2 (en) | 2015-05-04 | 2019-01-08 | North Inc. | Systems, devices, and methods for spatially-multiplexed holographic optical elements |
US10133075B2 (en) | 2015-05-04 | 2018-11-20 | Thalmic Labs Inc. | Systems, devices, and methods for angle- and wavelength-multiplexed holographic optical elements |
US10078219B2 (en) | 2015-05-28 | 2018-09-18 | Thalmic Labs Inc. | Wearable heads-up display with integrated eye tracker and different optical power holograms |
US10114222B2 (en) | 2015-05-28 | 2018-10-30 | Thalmic Labs Inc. | Integrated eye tracking and laser projection methods with holographic elements of varying optical powers |
US10078220B2 (en) | 2015-05-28 | 2018-09-18 | Thalmic Labs Inc. | Wearable heads-up display with integrated eye tracker |
US10139633B2 (en) | 2015-05-28 | 2018-11-27 | Thalmic Labs Inc. | Eyebox expansion and exit pupil replication in wearable heads-up display having integrated eye tracking and laser projection |
US10488661B2 (en) | 2015-05-28 | 2019-11-26 | North Inc. | Systems, devices, and methods that integrate eye tracking and scanning laser projection in wearable heads-up displays |
US10180578B2 (en) | 2015-05-28 | 2019-01-15 | North Inc. | Methods that integrate visible light eye tracking in scanning laser projection displays |
US10073268B2 (en) | 2015-05-28 | 2018-09-11 | Thalmic Labs Inc. | Display with integrated visible light eye tracking |
US10165199B2 (en) | 2015-09-01 | 2018-12-25 | Samsung Electronics Co., Ltd. | Image capturing apparatus for photographing object according to 3D virtual object |
WO2017039348A1 (en) * | 2015-09-01 | 2017-03-09 | Samsung Electronics Co., Ltd. | Image capturing apparatus and operating method thereof |
US10890765B2 (en) | 2015-09-04 | 2021-01-12 | Google Llc | Systems, articles, and methods for integrating holographic optical elements with eyeglass lenses |
US10877272B2 (en) | 2015-09-04 | 2020-12-29 | Google Llc | Systems, articles, and methods for integrating holographic optical elements with eyeglass lenses |
US10488662B2 (en) | 2015-09-04 | 2019-11-26 | North Inc. | Systems, articles, and methods for integrating holographic optical elements with eyeglass lenses |
US10718945B2 (en) | 2015-09-04 | 2020-07-21 | North Inc. | Systems, articles, and methods for integrating holographic optical elements with eyeglass lenses |
US10705342B2 (en) | 2015-09-04 | 2020-07-07 | North Inc. | Systems, articles, and methods for integrating holographic optical elements with eyeglass lenses |
US10656822B2 (en) | 2015-10-01 | 2020-05-19 | North Inc. | Systems, devices, and methods for interacting with content displayed on head-mounted displays |
US10228558B2 (en) | 2015-10-23 | 2019-03-12 | North Inc. | Systems, devices, and methods for laser eye tracking |
US10606072B2 (en) | 2015-10-23 | 2020-03-31 | North Inc. | Systems, devices, and methods for laser eye tracking |
US9904051B2 (en) | 2015-10-23 | 2018-02-27 | Thalmic Labs Inc. | Systems, devices, and methods for laser eye tracking |
WO2017112138A1 (en) * | 2015-12-21 | 2017-06-29 | Intel Corporation | Direct motion sensor input to rendering pipeline |
US10096149B2 (en) | 2015-12-21 | 2018-10-09 | Intel Corporation | Direct motion sensor input to rendering pipeline |
US10241572B2 (en) | 2016-01-20 | 2019-03-26 | North Inc. | Systems, devices, and methods for proximity-based eye tracking |
US10303246B2 (en) | 2016-01-20 | 2019-05-28 | North Inc. | Systems, devices, and methods for proximity-based eye tracking |
US10126815B2 (en) | 2016-01-20 | 2018-11-13 | Thalmic Labs Inc. | Systems, devices, and methods for proximity-based eye tracking |
US10451881B2 (en) | 2016-01-29 | 2019-10-22 | North Inc. | Systems, devices, and methods for preventing eyebox degradation in a wearable heads-up display |
US10437067B2 (en) | 2016-01-29 | 2019-10-08 | North Inc. | Systems, devices, and methods for preventing eyebox degradation in a wearable heads-up display |
US10151926B2 (en) | 2016-01-29 | 2018-12-11 | North Inc. | Systems, devices, and methods for preventing eyebox degradation in a wearable heads-up display |
US10365549B2 (en) | 2016-04-13 | 2019-07-30 | North Inc. | Systems, devices, and methods for focusing laser projectors |
US10365548B2 (en) | 2016-04-13 | 2019-07-30 | North Inc. | Systems, devices, and methods for focusing laser projectors |
US10365550B2 (en) | 2016-04-13 | 2019-07-30 | North Inc. | Systems, devices, and methods for focusing laser projectors |
CN107076998A (en) * | 2016-04-29 | 2017-08-18 | 深圳市大疆创新科技有限公司 | Wearable device and UAS |
CN107076998B (en) * | 2016-04-29 | 2020-09-01 | 深圳市大疆创新科技有限公司 | Wearable equipment and unmanned aerial vehicle system |
US11036050B2 (en) | 2016-04-29 | 2021-06-15 | SZ DJI Technology Co., Ltd. | Wearable apparatus and unmanned aerial vehicle system |
FR3052565A1 (en) * | 2016-06-10 | 2017-12-15 | Stereolabs | INDIVIDUAL VISUAL IMMERSION DEVICE FOR MOVING PERSON |
WO2017212130A1 (en) | 2016-06-10 | 2017-12-14 | Estereolabs | Individual visual immersion device for a moving person |
US10230929B2 (en) | 2016-07-27 | 2019-03-12 | North Inc. | Systems, devices, and methods for laser projectors |
US10277874B2 (en) | 2016-07-27 | 2019-04-30 | North Inc. | Systems, devices, and methods for laser projectors |
US10250856B2 (en) | 2016-07-27 | 2019-04-02 | North Inc. | Systems, devices, and methods for laser projectors |
US10459221B2 (en) | 2016-08-12 | 2019-10-29 | North Inc. | Systems, devices, and methods for variable luminance in wearable heads-up displays |
US10459223B2 (en) | 2016-08-12 | 2019-10-29 | North Inc. | Systems, devices, and methods for variable luminance in wearable heads-up displays |
US10459222B2 (en) | 2016-08-12 | 2019-10-29 | North Inc. | Systems, devices, and methods for variable luminance in wearable heads-up displays |
US10345596B2 (en) | 2016-11-10 | 2019-07-09 | North Inc. | Systems, devices, and methods for astigmatism compensation in a wearable heads-up display |
US10215987B2 (en) | 2016-11-10 | 2019-02-26 | North Inc. | Systems, devices, and methods for astigmatism compensation in a wearable heads-up display |
GB2555841A (en) * | 2016-11-11 | 2018-05-16 | Sony Corp | An apparatus, computer program and method |
US11138794B2 (en) | 2016-11-11 | 2021-10-05 | Sony Corporation | Apparatus, computer program and method |
GB2556114B (en) * | 2016-11-22 | 2020-05-27 | Sony Interactive Entertainment Europe Ltd | Virtual reality |
GB2556114A (en) * | 2016-11-22 | 2018-05-23 | Sony Interactive Entertainment Europe Ltd | Virtual reality |
US10459220B2 (en) | 2016-11-30 | 2019-10-29 | North Inc. | Systems, devices, and methods for laser eye tracking in wearable heads-up displays |
US10409057B2 (en) | 2016-11-30 | 2019-09-10 | North Inc. | Systems, devices, and methods for laser eye tracking in wearable heads-up displays |
US10365492B2 (en) | 2016-12-23 | 2019-07-30 | North Inc. | Systems, devices, and methods for beam combining in wearable heads-up displays |
US10663732B2 (en) | 2016-12-23 | 2020-05-26 | North Inc. | Systems, devices, and methods for beam combining in wearable heads-up displays |
US10718951B2 (en) | 2017-01-25 | 2020-07-21 | North Inc. | Systems, devices, and methods for beam combining in laser projectors |
US10437074B2 (en) | 2017-01-25 | 2019-10-08 | North Inc. | Systems, devices, and methods for beam combining in laser projectors |
US10437073B2 (en) | 2017-01-25 | 2019-10-08 | North Inc. | Systems, devices, and methods for beam combining in laser projectors |
US10969740B2 (en) | 2017-06-27 | 2021-04-06 | Nvidia Corporation | System and method for near-eye light field rendering for wide field of view interactive three-dimensional computer graphics |
US11747766B2 (en) | 2017-06-27 | 2023-09-05 | Nvidia Corporation | System and method for near-eye light field rendering for wide field of view interactive three-dimensional computer graphics |
US11635736B2 (en) | 2017-10-19 | 2023-04-25 | Meta Platforms Technologies, Llc | Systems and methods for identifying biological structures associated with neuromuscular source signals |
US11300788B2 (en) | 2017-10-23 | 2022-04-12 | Google Llc | Free space multiple laser diode modules |
US10901216B2 (en) | 2017-10-23 | 2021-01-26 | Google Llc | Free space multiple laser diode modules |
US11218685B2 (en) | 2018-03-28 | 2022-01-04 | Nokia Technologies Oy | Method, an apparatus and a computer program product for virtual reality |
WO2019185986A3 (en) * | 2018-03-28 | 2019-10-31 | Nokia Technologies Oy | A method, an apparatus and a computer program product for virtual reality |
US11308696B2 (en) | 2018-08-06 | 2022-04-19 | Apple Inc. | Media compositor for computer-generated reality |
US11804019B2 (en) | 2018-08-06 | 2023-10-31 | Apple Inc. | Media compositor for computer-generated reality |
US11941176B1 (en) | 2018-11-27 | 2024-03-26 | Meta Platforms Technologies, Llc | Methods and apparatus for autocalibration of a wearable electrode sensor system |
US11797087B2 (en) | 2018-11-27 | 2023-10-24 | Meta Platforms Technologies, Llc | Methods and apparatus for autocalibration of a wearable electrode sensor system |
US11907423B2 (en) | 2019-11-25 | 2024-02-20 | Meta Platforms Technologies, Llc | Systems and methods for contextualized interactions with an environment |
US11961494B1 (en) | 2020-03-27 | 2024-04-16 | Meta Platforms Technologies, Llc | Electromagnetic interference reduction in extended reality environments |
US11868531B1 (en) | 2021-04-08 | 2024-01-09 | Meta Platforms Technologies, Llc | Wearable device providing for thumb-to-finger-based input gestures detected based on neuromuscular signals, and systems and methods of use thereof |
WO2023071586A1 (en) * | 2021-10-25 | 2023-05-04 | 腾讯科技(深圳)有限公司 | Picture generation method and apparatus, device, and medium |
CN114821001A (en) * | 2022-04-12 | 2022-07-29 | 支付宝(杭州)信息技术有限公司 | AR-based interaction method and device and electronic equipment |
CN116843819B (en) * | 2023-07-10 | 2024-02-02 | 上海随幻智能科技有限公司 | Green curtain infinite extension method based on illusion engine |
CN116843819A (en) * | 2023-07-10 | 2023-10-03 | 上海随幻智能科技有限公司 | Green curtain infinite extension method based on illusion engine |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015123775A1 (en) | Systems and methods for incorporating a real image stream in a virtual image stream | |
CN107564089B (en) | Three-dimensional image processing method, device, storage medium and computer equipment | |
JP7443602B2 (en) | Mixed reality system with virtual content warping and how to use it to generate virtual content | |
US10083540B2 (en) | Virtual light in augmented reality | |
AU2013266187B2 (en) | Systems and methods for rendering virtual try-on products | |
US20200296348A1 (en) | Virtual Reality Parallax Correction | |
CN109829981B (en) | Three-dimensional scene presentation method, device, equipment and storage medium | |
US11830148B2 (en) | Reconstruction of essential visual cues in mixed reality applications | |
CN105611267B (en) | Merging of real world and virtual world images based on depth and chrominance information | |
WO2018208460A1 (en) | Holographic illustration of weather | |
EP3779892A1 (en) | Light-field image generation system, image display system, shape information acquisition server, image generation server, display device, light-field image generation method and image display method | |
KR101208767B1 (en) | Stereoscopic image generation method, device and system using circular projection and recording medium for the same | |
KR20190056694A (en) | Virtual exhibition space providing method using 2.5 dimensional object tagged with digital drawing element | |
US11961250B2 (en) | Light-field image generation system, image display system, shape information acquisition server, image generation server, display device, light-field image generation method, and image display method | |
Mori et al. | Diminished hand: A diminished reality-based work area visualization | |
JP7261121B2 (en) | Information terminal device and program | |
US20230243973A1 (en) | Real space object reconstruction within virtual space image using tof camera | |
de Sorbier et al. | Depth Camera to Generate On-line Content for Auto-Stereoscopic Displays | |
CN114782663A (en) | 3D image processing method and device, electronic equipment and readable storage medium | |
CN115767068A (en) | Information processing method and device and electronic equipment | |
CN113192208A (en) | Three-dimensional roaming method and device | |
CN116309854A (en) | Method, device, equipment, system and storage medium for calibrating augmented reality equipment | |
JP2022067171A (en) | Generation device, generation method and program | |
CN115909251A (en) | Method, device and system for providing panoramic all-around image of vehicle | |
Levin et al. | Some aspects of the geospatial reality perception in human stereopsis-based defense display systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15752718 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 15752718 Country of ref document: EP Kind code of ref document: A1 |