US20050083248A1 - Mobile face capture and image processing system and method - Google Patents

Mobile face capture and image processing system and method

Info

Publication number
US20050083248A1
US20050083248A1 (application US10/914,621)
Authority
US
United States
Prior art keywords
user
face
view
view images
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/914,621
Inventor
Frank Biocca
Jannick Rolland
George Stockman
Chandan Reddy
Miguel Figueroa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Central Florida
Michigan State University MSU
Original Assignee
University of Central Florida
Michigan State University MSU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Central Florida and Michigan State University (MSU)
Priority to US10/914,621
Publication of US20050083248A1
Assigned to BOARD OF TRUSTEES OPERATING MICHIGAN STATE UNIVERSITY and UNIVERSITY OF CENTRAL FLORIDA. Assignment of assignors interest (see document for details). Assignors: ROLLAND, JANNICK P.; BIOCCA, FRANK; FIGUEROA-VILLANEUVA, MIGUEL; REDDY, CHANDAN K.; STOCKMAN, GEORGE C.

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 27/0172 Head mounted characterised by optical features
    • A HUMAN NECESSITIES
    • A41 WEARING APPAREL
    • A41D OUTERWEAR; PROTECTIVE GARMENTS; ACCESSORIES
    • A41D 31/00 Materials specially adapted for outerwear
    • A41D 31/04 Materials specially adapted for outerwear characterised by special function or use
    • A41D 31/32 Retroreflective
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/194 Transmission of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0132 Head-up displays characterised by optical features comprising binocular systems
    • G02B 2027/0134 Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 2027/0178 Eyeglass type
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0179 Display position adjusting means not related to the information to be displayed
    • G02B 2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/286 Image signal generators having separate monoscopic and stereoscopic modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/398 Synchronisation thereof; Control thereof

Definitions

  • the present invention generally relates to computer-based teleconferencing in a networked virtual reality environment, and more particularly to mobile face capture and image processing.
  • Networked virtual environments allow users at remote locations to use a telecommunication link to coordinate work and social interaction.
  • Teleconferencing systems and virtual environments that use 3D computer graphics displays and digital video recording systems allow remote users to interact with each other, to view virtual work objects such as text, engineering models, medical models, play environments and other forms of digital data, and to view each other's physical environment.
  • a number of teleconferencing technologies support collaborative virtual environments which allow interaction between individuals in local and remote sites.
  • video-teleconferencing systems use simple video screens and wide screen displays to allow interaction between individuals in local and remote sites.
  • wide screen displays are disadvantageous because virtual 3D objects presented on the screen are not blended into the environment of the room of the users. In such an environment, local users cannot have a virtual object between them. This problem applies to representation of remote users as well. The location of the remote participants cannot be anywhere in the room or the space around the user, but is restricted to the screen.
  • Networked immersive virtual environments also present various disadvantages.
  • Networked immersive virtual reality systems are sometimes used to allow remote users to connect via a telecommunication link and interact with each other and virtual objects.
  • In many such systems the users must wear a virtual reality display where the user's eyes and a large part of the face are occluded. Because these systems only display 3D virtual environments, the user cannot see both the physical world of the site in which they are located and the virtual world which is displayed. Furthermore, people in the same room cannot see each others' full face and eyes, so local interaction is diminished. Because the face is occluded, such systems cannot capture and record a full stereoscopic view of remote users' faces.
  • Another teleconferencing system is termed CAVES.
  • CAVES systems use multiple screens arranged in a room configuration to display virtual information.
  • Such systems have several disadvantages.
  • In CAVES systems there is only one correct viewpoint; all other local users have a distorted perspective on the virtual scene. Scenes in the CAVES are only projected on a wall. So two local users can view a scene on the wall, but an object cannot be presented in the space between users.
  • These systems also use multiple rear screen projectors, and therefore are very bulky and expensive.
  • CAVES systems may also utilize stereoscopic screen displays. Stereoscopic screen display systems do not present 3D stereoscopic views that interpose 3D objects between local users of the system. These systems sometimes use 3D glasses to present a 3D view, but only one viewpoint is shared among many users often with perspective distortions.
  • there is a need for an augmented reality display that mitigates the above-mentioned disadvantages and has the capability to display virtual objects and environments, superimpose virtual objects on the “real world” scenes, provide “face-to-face” recording and display, be used in various ambient lighting environments, and correct for optical distortion, while minimizing computational power and time.
  • Faces have been captured passively in rooms instrumented with a set of cameras, where stereo computations can be done using selected viewpoints. Other objects can be captured using the same methods. Such hardware configurations are unavailable for mobile use in arbitrary environments, however.
  • Other work has shown that faces can be captured using a single camera and processing that uses knowledge of the human face. Either the face has to move relative to the camera, or assumptions of symmetry are employed.
  • Our approach is to use two cameras affixed to the head, which is necessary to convey non-symmetrical facial expressions, such as the closing of one eye and not the other, or the reflection of a fire on only one side of the face.
  • novel views have been synthesized by a panoramic system and/or by interpolating between a set of views. Producing novel views in a dynamic scenario was successfully shown for a highly rigid motion.
  • This work extended interpolation techniques to the temporal domain from the spatial domain.
  • a novel view at a new time instant was generated by interpolating views at nearby time intervals using spatio-temporal view interpolation, where a dynamic 3-D scene is modeled and novel views are generated at intermediate time intervals.
  • image processing procedures include receiving at least two side view images of a face of a user.
  • side view images are warped and blended into an output image of a face of a user as if viewed from a virtual point of view.
  • a video is produced in real time of output images from a video feed of side view images.
  • a teleportal system is provided.
  • a principal feature of the teleportal system is that single or multiple users at a local site and a remote site use a telecommunication link to engage in face-to-face interaction with other users in a 3D augmented reality environment.
  • Each user utilizes a system that includes a display such as a projection augmented-reality display and sensors such as a stereo facial expression video capture system.
  • the video capture system allows the participants to view a 3D, stereoscopic, video-based image of the face of all remote participants and hear their voices, to view the local participants unobstructed, and to view a room that blends physical and virtual objects which users can interact with and manipulate.
  • multiple local and remote users can interact in a room-sized space draped in a fine grained retro-reflective fabric.
  • An optical tracker preferably having markers attached to each user's body and digital video cameras at the site records the location of each user at a site.
  • a computer uses the information about each user's location to calculate the user's body location in space and create a correct perspective on the location of the 3D virtual objects in the room.
  • the projection augmented-reality display projects stereo images towards a screen which is covered by a fine grain retro-reflective fabric.
  • the projection augmented-reality display uses an optics system that preferably includes two miniature source displays, and projection-optics, such as a double Gauss form lens combined with a beam splitter, to project an image via light towards the surface covered with the retro-reflective fabric.
  • the retro-reflective fabric retro-reflects the projected light brightly and directly back to the eyes of the user. Because of the properties of the retro-reflective screen and the optics system, each eye receives the image from only one of the source displays. The user perceives a 3D stereoscopic image apparently floating in space.
  • the projection augmented-reality display and video capture system does not occlude vision of the physical environment in which the user is located.
  • the system of the present invention allows users to see both virtual and physical objects, so that the objects appear to occupy the same space.
  • the system can completely immerse the user in a virtual environment, or the virtual environment can be restricted to a specific region in space, such as a projection window or table top.
  • the restricted regions can be made part of an immersive wrap-around display.
  • FIG. 1 is a plan view of a first preferred embodiment of a teleportal system of the present invention showing one local user at a first site and two remote users at a second site;
  • FIG. 2 is a block diagram depicting the teleportal system of the present invention
  • FIG. 3 is a perspective view of the illumination system for a projection user-mounted display of the present invention.
  • FIG. 4 is a perspective view of a first preferred embodiment of a vertical architecture of the illumination system for the projection user-mounted display of the present invention
  • FIG. 5 is a perspective view of a second preferred embodiment of a horizontal architecture of the illumination system for the projection user-mounted display of the present invention.
  • FIG. 6 is a diagram depicting an exemplary optical pathway associated with a projection user-mounted display of the present invention.
  • FIG. 7 is a side view of a projection lens used in the projection augmented-reality display of the present invention.
  • FIG. 8 is a side view of the projection augmented-reality display of FIG. 4 mounted into a headwear apparatus
  • FIG. 9 is a perspective view of the video system in the teleportal headset of the present invention.
  • FIG. 10 is a side view of the video system of FIG. 9 ;
  • FIG. 11 is a top view of a video system of FIG. 9 ;
  • FIG. 12 a is an alternate embodiment of the teleportal site of the present invention with a wall screen
  • FIG. 12 b is another alternate embodiment of the teleportal site of the present invention with a spherical screen
  • FIG. 12 c is yet another alternate embodiment of the teleportal site of the present invention with a hand-held screen
  • FIG. 12 d is yet another alternate embodiment of the teleportal site of the present invention with body shaped screens;
  • FIG. 13 is a first preferred embodiment of the projection augmented-reality display of the present invention.
  • FIG. 14 is a side view of the projection augmented-reality display of FIG. 13 ;
  • FIG. 15 is a view of a face capture concept and images from a prototype head mounted display unit
  • FIG. 16 is a view of an experimental prototype of a face capture system
  • FIG. 17 is a view demonstrating behavior of a grid pattern
  • FIG. 18 is a view of face images captured during a calibration stage
  • FIG. 19 is a block diagram of an off-line calibration stage during synthesis of a virtual frontal view
  • FIG. 20 is a block diagram of an operational stage during synthesis of a virtual frontal view
  • FIG. 21 is a set of views illustrating generation of a frontal view during a calibration stage and reconstruction of the frontal image from a side view using a grid: (a) left image captured during the calibration stage; (b) operational left image warped into virtual image plus calibration stripes; and (c) operational left image without stripes;
  • FIG. 22 is a set of views illustrating: (a) a frontal view obtained from a camcorder; and (b) a virtual frontal view obtained as a reconstructed frontal view from transformation tables and a side image of FIG. 21 ( c );
  • FIG. 23 is a set of views of images considered for objective evaluation with a top row of real video frames compared to a bottom row of virtual video frames;
  • FIG. 24 is a set of views of a real video image on the left compared to a corresponding virtual video image on the right, wherein facial regions are compared using cross-correlation;
  • FIG. 25 is a set of views of a real video image on the left compared to a corresponding virtual video image on the right, wherein distances between facial feature points are considered using a Euclidean distance measure;
  • FIG. 26 is a set of views with a top row showing images captured using a left camera, a second row showing images captured using a right camera; a third row showing images captured using a camcorder placed in front of the face, and a final row showing virtual frontal views generated from images in the first two rows;
  • FIG. 27 is a set of views illustrating synchronization of eyelids during blinking, with real video displayed in a top row and virtual video illustrated in a bottom row;
  • FIG. 28 is a view identifying some feature points in a side image and a set of triangles formed using the feature points as vertices.
  • FIG. 1 depicts a teleportal system 100 using two display sites 101 and 102 .
  • Teleportal system 100 includes a first teleportal site or local site 101 and a second teleportal site or remote site 102 . It should be appreciated that additional teleportal sites can be included in teleportal system 100 .
  • first teleportal site 101 is described in detail below, it should further be appreciated that the second teleportal site 102 can be identical to the first teleportal site 101 . It should also be noted that the number of users and types of screens can vary at each site.
  • Teleportal sites 101 and 102 preferably include a screen 103 .
  • Screen 103 is made of a retro-reflective material such as beads-based or corner-cube based materials manufactured by 3M® and Reflexite Corporation.
  • the retro-reflective material is preferably gold which produces a bright image with adequate resolution.
  • other material which has metallic fiber adequate to reflect at least a majority of the image or light projected onto its surface may be used.
  • the retro-reflective material preferably provides about 98 percent reflection of the incident light projected onto its surface. The material retro-reflects light projected onto its surface directly back upon its incident path and to the eyes of the user.
  • Screen 103 can be a surface of any shape, including but not limited to a plane, sphere, pyramid, and body-shaped, for example, like a glove for a user's hand or a body suit for the entire body. Screen 103 can also be formed to a substantially cubic shape resembling a room, preferably similar to four walls and a ceiling which generally surround the users. In the preferred embodiment, screen 103 forms four walls which surround users 110 . 3D graphics are visible via screen 103 . Because the users can see 3D stereographic images, text, and animations, all surfaces that have retro-reflective property in the room or physical environment can carry information. For example, a spherical screen 104 is disposed within the room or physical environment for projecting images. The room or physical environment may include physical objects substantially unrelated to the teleportal system 100 . For example, physical objects may include furniture, walls, floors, ceilings and/or other inanimate objects.
  • local site 101 includes a tracking system 106 .
  • Tracking system 106 is preferably an optical or optical/hybrid tracking system which may include at least one digital video camera or CCD camera.
  • four digital video cameras 114 , 116 , 118 and 120 are shown.
  • several sets of three CCD arrays stacked up could be used for optical tracking.
  • Visual processing software processes teleportal site data acquired from digital video cameras 114 , 116 , 118 and 120 .
  • the software provides the data to the networked computer 107 a.
  • Teleportal site data for example, includes the position of users within the teleportal room.
  • Optical tracking system 106 further includes markers 96 that are preferably attached to one or more body parts of the user.
  • markers 96 are coupled to each user's hand, which is monitored for movement and position. Markers 96 communicate marker location data regarding the location of the user's head and hands. It should be appreciated that the location of any other body part of the user or object to which a marker is attached can be acquired.
  • Each headset preferably has displays and sensors.
  • Each teleportal headset 105 communicates with a networked computer.
  • teleportal headsets 105 of site 101 communicate with networked computer 107 a.
  • Networked computer 107 a communicates with a networked computer 107 b of site 102 via a networked data system 99 .
  • teleportal headsets can exchange data via the networked computers.
  • teleportal headset 105 can be connected via a wireless connection to the networked computers.
  • headset 105 can alternatively communicate directly to networked data system 99 .
  • the networked data system 99 can be, for example, the Internet, a dedicated telecommunication line connecting the two sites, or a wireless network connection.
  • FIG. 2 is a block diagram showing the components for processing and distribution of information of the present invention teleportal system 100 .
  • information can be processed and distributed from other sources that provide visual data which can be projected by teleportal system 100 .
  • Teleportal headset 105 includes at least one sensor array 220 which identifies and transmits the user's behavior.
  • sensor array 220 includes a facial capture system 203 (described in further detail with reference to FIGS. 9, 10 , and 11 ) that senses facial expression, an optical tracking system 106 that senses head motion, and a microphone 204 that senses voice and communication noise. It should be appreciated that other attributes of the user's behavior can be identified and transmitted by adding additional types of sensors.
  • Facial capture system 203 provides image signals based on the image viewed by a digital camera which are processed by a face-unwarping and image stitching module 207 . Images or “first images” sensed by face capture system 203 are morphed for viewing by users at remote sites via a networked computer. The images for viewing are 3D and stereoscopic such that each user experiences a perspectively correct viewpoint on an augmented reality scene. The images of participants can be located anywhere in space around the user.
  • correction of the distorted viewpoint is accomplished via image morphing to approximate a direct face-to-face view of the remote face.
  • Face-warping and image-stitching module 207 morphs images to the user's viewpoint.
  • the pixel correspondence algorithm or face warping and image stitching module 207 calculates the corresponding points between the first images to create second images for remote users.
  • Image data retrieved from the first images allows for a calculation of a 3D structure of the head of the user.
  • the 3D image is preferably a stereoscopic video image or a video texture mapping to a 3D virtual mesh.
  • the 3D model can display the 3D structure or second images to the users in the remote location.
  • Each user in the local and remote sites has a personal and correct perspective viewpoint on the augmented reality scene.
  • Optical tracking system 106 and microphone 204 provide signals to networked computer 107 that are processed by a virtual environment module 208 .
  • a display array 222 is provided to allow the user to experience the 3D virtual environment, for example via a projection augmented-reality display 401 and stereo audio earphones 205 which are connected to user 110 .
  • Display array 222 is connected to a networked computer.
  • a modem 209 connects a networked computer to network 99 .
  • FIGS. 3 through 5 illustrate a projection augmented-reality display 401 which can be used in a wide variety of lighting conditions, including indoor and outdoor environments.
  • a projection lens 502 is positioned to receive a beam from a beamsplitter 503 .
  • a source display 501, which is a reflective LCD panel, is positioned opposite projection lens 502 from beamsplitter 503.
  • source display 501 may be a DLP flipping mirror manufactured by Texas Instruments®.
  • Beamsplitter 503 is angled at a position less than ninety degrees from the plane in which projection lens 502 is positioned.
  • a collimating lens 302 is positioned to provide a collimating lens beam to beamsplitter 503 .
  • a mirror 304 is placed between collimating lens 302 and a surface mounted LCD 306 . Surface mounted LCD 306 provides light to mirror 304 which passes through collimating lens 302 and beamsplitter 503 .
  • Source display 501 transmits light to beamsplitter 503 .
  • FIG. 4 depicts a pair of the projection augmented-reality displays shown in FIG. 3; however, each of projection augmented-reality displays 530 and 532 is mounted in a vertical orientation relative to the head of the user.
  • FIG. 5 depicts a pair of projection augmented-reality displays of the type shown in FIG. 3; however, each of projection augmented-reality displays 534 and 536 is mounted in a horizontal orientation relative to the head of the user.
  • FIG. 6 illustrates the optics of projection augmented-reality display 500 relative to a user's eye 508 .
  • a projection lens 502 receives an image from a source display 501 located beyond the focal plane of projection lens 502 .
  • Source display 501 may be a reflective LCD panel. However, it should be appreciated that any miniature display including, but not limited to, miniature CRT displays, DLP flipping mirror systems and backlighting transmissive LCDs may be alternatively utilized.
  • Source display 501 preferably provides an image that is further transmitted through projection lens 502 . The image is preferably computer-generated.
  • a translucent mirror or light beamsplitter 503 is placed after projection lens 502 at preferably 45 degrees with respect to the optical axis of projection lens 502 ; therefore, the light refracted by projection lens 502 produces an intermediary image 505 at its optical conjugate and the reflected light of the beam-splitter produces a projected image 506 , symmetrical to intermediary image 505 about the plane in which light beamsplitter 503 is positioned.
  • a retro-reflective screen 504 is placed in a position onto which projected image 506 is directed. Retro-reflective screen 504 may be located in front of or behind projected image 506 so that rays hitting the surface are reflected back in the opposite direction and travel through beamsplitter 503 to user's eye 508 .
  • the reflected image is of a sufficient brightness which permits improved resolution. User's eye 508 will perceive projected image 506 from an exit pupil 507 of the optical system.
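  • As a rough check on this geometry (using the ideal thin-lens approximation rather than the actual multi-element projection lens), placing source display 501 a distance s_o beyond the lens, with s_o greater than the focal length f, produces a real intermediary image at the conjugate distance s_i given by $1/s_o + 1/s_i = 1/f$; the beamsplitter then folds this image into projected image 506, and the retro-reflective screen returns the light along its incident path toward exit pupil 507.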
  • FIG. 7 depicts a preferred optical form for projection lens 502 .
  • Projection lens 502 includes a variety of elements and can be accomplished with glass optics, plastic optics, or diffractive optics.
  • a non-limiting example of projection lens 502 is a double Gauss lens form comprising a first singlet lens 609, a second singlet lens 613, a first doublet lens 610, a second doublet lens 612, and a stop surface 611, which are arranged in series.
  • Projection lens 502 is made of a material which is transparent to visible light. The lens material may include glass and plastic materials.
  • FIG. 8 shows projection augmented-reality display 800 mounted to headwear or helmet 810 .
  • Projection augmented-reality display 800 is mounted in a vertical direction.
  • Projection augmented-reality display 800 can be used in various ambient light conditions, including, but not limited to, artificial light and natural sunlight.
  • light source 812 transmits light to source display 814 .
  • Projection augmented-reality display 800 provides optics to produce an image to the user.
  • FIGS. 9, 10 and 11 illustrate teleportal headset 105 of the present invention.
  • Teleportal headset 105 preferably includes a facial expression capture system 402 , ear phones 404 , and a microphone 403 .
  • Facial expression capture system 402 preferably includes digital video cameras 601 a and 601 b.
  • digital video cameras 601 a and 601 b are disposed on either side of the user's face 606, such that images covering the entire face are captured, which are then used to create one image of the complete face, or a 3D model of the complete face that can then be used to generate single images or stereo images for general viewpoints of the face 606.
  • Each video camera 601 a and 601 b is mounted to a housing 406 .
  • Housing 406 is formed as a temple section of the headset 105 .
  • each digital video camera 601 a and 601 b is pointed at a respective convex mirror 602 a and 602 b.
  • Each convex mirror 602 a and 602 b is connected to housing 406 and is angled to reflect an image of the adjacent side of the face.
  • Digital cameras 601 a and 601 b located on each side of the user's face 410 capture a first image or particular image of the face from each convex mirror 602 a and 602 b associated with the individual digital cameras 601 a and 601 b, respectively, such that a stereo image of the face is captured.
  • a lens 408 is located at each eye of user face 606 .
  • Lens 408 allows images to be displayed to the user, as the lens 408 is positioned at 45 degrees relative to the axis along which a light beam is transmitted from a projector.
  • Lens 408 is made of a material that reflects and transmits light. One preferred material is “half silvered mirror.”
  • FIGS. 12 a through 12 d show alternate configurations of a teleportal site of the present invention with various shaped screens.
  • FIG. 12 a illustrates an alternate embodiment of the teleportal system 702 in which retro-reflective fabric screen 103 is used on a room's wall so that a more traditional teleconferencing system can be provided.
  • FIG. 12 b illustrates another alternate embodiment of a teleportal site 704 in which a desktop system 702 is provided.
  • In desktop system 702, two users 110 observe a 3D object on a table top screen 708.
  • screen 708 is spherically shaped. All users in sight of the screen 708 can view the perspective projections at the same time from their particular positions.
  • FIG. 12 c shows yet another alternate embodiment of teleportal site 704 .
  • User 110 has a wearable computer forming a “magic mirror” configuration of teleportal site 704 .
  • Teleportal headset 105 is connected to a wearable computer 712 .
  • the wearable computer 712 is linked to the remote user (not shown) preferably via a wireless network connection.
  • a wearable screen includes a hand-held surface 714 covered with a retro-reflective fabric for the display of the remote user.
  • a “magic mirror” configuration of teleportal site 704 is preferred in the outdoor setting because it is mobile and easy to transport. In the “magic mirror” configuration, the user holds the surface 714, preferably via a handle, and positions the surface 714 over a space to view the virtual environment projected by the projection display of the teleportal headset 105.
  • FIG. 12 d shows yet another alternate embodiment of the teleportal site 810 .
  • a body shaped screen 812 is disposed on a person's body 814 .
  • Body shaped screen 812 can be continuous or substantially discontinuous depending upon the desire to cover certain body parts.
  • a body shaped screen 812 can be shaped for a patient's head, upper body, and lower body.
  • a body shaped screen 812 is beneficial for projecting images, such as those produced by MRI (or other digital images), onto the patient's body during surgery. This projection permits a surgeon or user 816 to better approximate the location of internal organs prior to invasive treatment.
  • Body shaped screen 812 can further be formed as gloves 816 , thereby allowing the surgeon to place his hands (and arms) over the body of the patient yet continue to view the internal image in a virtual view without interference of his hands.
  • FIGS. 13 and 14 show a first preferred embodiment of a projection augmented-reality display 900 which includes a pair of LCD displays 902 coupled to headwear 905 .
  • a pair of LCD displays 902 project images to the eyes of the users.
  • a microphone 910 is also coupled to headwear 905 to sense the user's voice.
  • an earphone 912 is coupled to headwear 905 .
  • a lens 906 covers the eyes of the user 914 but still permits the user to view the surroundings around her.
  • the glass lens 906 transmits and reflects light. In this manner, the user's eyes are not occluded by the lens.
  • One preferred material for the transparent glass lens 906 is a “half silvered mirror.”
  • HMD (helmet mounted display)
  • the complete HMD design includes display components that display remote faces and scenes to the wearer as well as reality augmentation for the wearer's environment.
  • the system and method of the present invention provides a virtual frontal video of the HMD wearer. This virtual video (VV) is synthesized by warping and blending the two real side view videos.
  • Side view as used herein should be interpreted as any offset view.
  • the angle with respect to the face does not have to be directly from the side.
  • the side view can be from an angle beneath or above the face.
  • although side views of users' faces are typically captured and used to form a virtual view, it should be readily understood that other parts of a user may also be captured, such as a user's hand.
  • a prototype HMD facial capture system has been developed. The development of the video processing reported here was isolated from the HMD device and performed using a fixed lab bench and conventional computer. Porting and integration of the video processing with the mobile HMD hardware can be accomplished in a variety of ways as further described below.
  • FIG. 16 illustrates a lab bench used to develop the mobile face capture and image processing system and method.
  • the bench was built to accommodate human subjects so they could keep their heads fixed relative to two cameras 1000 A and 1000 B and a structured light projector 1002 .
  • the two cameras 1000 A and 1000 B are placed so that their images are similar to those that can be obtained from the HMD optics.
  • the light projector 1002 is used to orient the head precisely and to obtain calibration data used in image warping.
  • a video camera (not shown) placed on top of the projector records the subject's face during each experiment for comparison purposes.
  • the prototype uses an Intel Pentium III processor running at 746 MHz with 384 MB RAM and two Matrox Meteor II standard cards.
  • the problem is to generate a virtual frontal view from two side views.
  • the projected light grid provides a basis for mapping pixels from the side images into a virtual image with the projector's viewpoint.
  • the grid is projected onto the face for only a few frames so that mapping tables can be built, and then is switched off for regular operation.
  • a global 3D coordinate system is denoted; however, it must be emphasized that 3D coordinates are not required for the task according to some embodiments of the present invention.
  • the transformation tables are generated using the grid pattern coordinates.
  • a rectangular grid is projected onto the face and the two side views are captured as shown in FIGS. 16 and 18 .
  • the location of the grid regions in the side images define where real pixel data is to be accessed for placement in the virtual video.
  • Coordinate transformation is done between the projector coordinate system (PCS) and the left camera coordinate system (LCS), and between the PCS and the right camera coordinate system (RCS).
  • an algorithm can map every pixel in the front view to the appropriate side view. By centering the grid on the face, the grid also supports the correspondence between LCS and RCS and the blending of their pixels.
  • The behavior of a single gridded cell in the original side view and the virtual frontal view is demonstrated in FIG. 17.
  • a grid cell in the frontal image maps to a quadrilateral with curved edges in the side image.
  • Bilinear interpolation is used to reconstruct the original frontal grid pattern by warping a quadrilateral into a square or a rectangle.
  • Equations 1 and 2 are four functions determined during the calibration stage and implemented via the transformation tables. These transformation tables are then used in the operational stage immediately after the grid is switched off. During operation, it is known for each pixel V[x, y] in which grid cell of the LCS or RCS it lies. Bilinear interpolation is then used on the grid cell corners to access an actual pixel value to be output to the VV; a code sketch of this lookup is given below.
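  • The following Python sketch illustrates the per-pixel lookup described above. It is an illustrative reconstruction, not code from the patent: the transformation table is assumed to be a dictionary mapping each virtual grid cell (row, column) to the four quadrilateral corner coordinates recorded in a side image during calibration, and the default cell dimensions loosely follow the prototype's 24-by-18 pixel grid cells.

```python
import numpy as np

def bilinear_map(quad, u, v):
    """Map normalized cell coordinates (u, v) in [0, 1]^2 to a point inside the
    quadrilateral quad = [p00, p10, p11, p01] (a 4x2 array of side-image
    coordinates) by bilinear interpolation of the four corners."""
    p00, p10, p11, p01 = np.asarray(quad, dtype=float)
    return ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
            + u * v * p11 + (1 - u) * v * p01)

def sample_bilinear(image, x, y):
    """Sample image (an H x W [x C] numpy array) at sub-pixel location (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, image.shape[1] - 1), min(y0 + 1, image.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bottom = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bottom

def warp_pixel(side_image, table, x, y, cell_w=18, cell_h=24):
    """Return the side-image value for virtual-view pixel (x, y); table maps a
    virtual grid cell (row, col) to the four quadrilateral corners recorded for
    that cell during calibration (the transformation table)."""
    row, col = y // cell_h, x // cell_w                    # grid cell containing (x, y)
    u, v = (x % cell_w) / cell_w, (y % cell_h) / cell_h    # position within the cell
    sx, sy = bilinear_map(table[(row, col)], u, v)
    return sample_bilinear(side_image, sx, sy)
```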
  • warping can still be accomplished in one step by making the point correspondences.
  • bicubic interpolation may be employed instead of bilinear interpolation.
  • subpixel coordinates and multiple pixel sampling can be used in cases where the face texture changes fast or where the face normal is away from the sensor direction.
  • a rectangular grid of dimension 400×400 is projected onto the face.
  • the grid is made by repeating three colored lines.
  • White, green and cyan colors proved useful because of their bright appearance over the skin color.
  • This combination of hues demonstrated good performance over a wide variety of skin pigmentations.
  • it is envisioned that other hues may be employed.
  • the first few frames have the grid projected onto the face before the grid is turned off.
  • One of the frames with the grid is taken and the transformation tables are generated.
  • the size of the grid pattern that is projected in the calibration stage plays a significant role in the quality of the video. This size was decided based on the trade-off between the quality of the video and execution time. An appropriate grid size was chosen based on trial and error.
  • the trial and error process started by projecting a sparse grid pattern onto the face and then increasing the density of the grid pattern. At one point, the increase in the density did not significantly improve the quality of the face image but consumed too much time. At that point, the grid was finalized with a grid cell size of row-width 24 pixels and column-width 18 pixels.
  • FIG. 18 shows the frames that are captured during the calibration stage of the experiment. This calibration step is feasible for use in collaborative rooms; however, it is envisioned that the calibration is applicable to mobile users as well.
  • FIG. 19 shows the off-line calibration stage during the synthesis of the virtual frontal view.
  • Projector 1002 projects grid pattern 1004 onto human face 1006 .
  • Grid lines reflect off of human face 1006 to left and right mirrors 1008 A and 1008 B, and from the mirrors to respective left and right cameras 1000 A and 1000 B.
  • Quadrilaterals of left and right calibration face images 1010 A and 1010 B are mapped to corresponding squares or rectangles of grid pattern 1004 to form left and right transformation tables 1012 . It is envisioned that more than two side views can be used, and that other polygonal shapes besides quadrilaterals may be employed.
  • a grid pattern of predetermined polygonal shapes is projected onto the face from a virtual point of view, side view images of the face are captured, and pixels enclosed by the polygons of captured side view images are mapped back to corresponding predetermined polygonal shapes of the grid pattern to form the transformation tables.
  • the side view imaging arrays may be integrated into a projection screen of the HMD, thus eliminating the mirrors while retaining fixed positions respective of and orientations toward sides of the user's face.
  • each virtual frontal frame is generated.
  • the algorithm reconstructs each (x, y) coordinate in the virtual view by accessing the corresponding location in the transformation table and retrieving the pixel in I_L (or I_R) using interpolation. Then a 1D linear smoothing filter is used to smooth the intensity across the vertical midline of the face. Without this smoothing, a human viewer usually perceives a slight intensity edge at the midline of the face.
  • FIG. 20 shows the complete block diagram of the operational phase. Transformation tables 1012 are used to warp left and right face images 1010 A and 1010 B into left warped face image 1014 A and right warped face image 1014 B. These portions of the virtual output image 1016 are then blended by mosaicking the face image. Post processing to linearly smooth the image is performed to result in a final virtual face image 1018. Since the transformation is based on the bilinear interpolation technique, each pixel can be generated only when it is inside four grid coordinate points. Because the grid is not defined well at the periphery of the face, the algorithm is unable to generate the ears and hair portion of the face. The results of the warping during the calibration and the operation stages are shown in FIGS. 21 through 23.
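  • A sketch of this operational stage, again as an assumed reconstruction rather than the patent's code, is shown below. It reuses warp_pixel and the transformation-table convention from the earlier sketch; which half of the virtual frame is fed by which side camera depends on the mirror geometry, so the left/right assignment in the loop is only an assumption.

```python
import numpy as np
# warp_pixel(...) is the per-pixel lookup defined in the earlier sketch.

def synthesize_frontal(left_img, right_img, left_table, right_table,
                       out_h=400, out_w=400, cell_w=18, cell_h=24, blend_px=5):
    """Build one virtual frontal frame from two side images using the per-cell
    transformation tables produced during calibration, then smooth the seam."""
    channels = left_img.shape[2] if left_img.ndim == 3 else 1
    virtual = np.zeros((out_h, out_w, channels), dtype=float)
    mid = out_w // 2
    for y in range(out_h):
        for x in range(out_w):
            # Assumed assignment: left half of the virtual view from the left
            # camera, right half from the right camera.
            img, table = (left_img, left_table) if x < mid else (right_img, right_table)
            cell = (y // cell_h, x // cell_w)
            if cell in table:            # periphery cells (ears, hair) are undefined
                virtual[y, x] = warp_pixel(img, table, x, y, cell_w, cell_h)
    # 1D linear smoothing filter applied horizontally in a narrow band around
    # the vertical midline to hide the blending seam.
    kernel = np.array([0.25, 0.5, 0.25])
    band = slice(mid - blend_px, mid + blend_px + 1)
    for y in range(out_h):
        for c in range(channels):
            virtual[y, band, c] = np.convolve(virtual[y, band, c], kernel, mode='same')
    return virtual.astype(np.uint8)
```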
  • frames with a gridded pattern can be deleted from the final output: these can be identified by a large shift in intensity when the projected grid is switched off.
  • a microphone recording of the voice of the user, stored in a separate .wav file, can be appended to the video file to obtain a final output.
  • Color balancing of the cameras can also be performed. Even though software-based approaches for color balancing can be taken, the color balancing in the present work is done at the hardware level. Before the cameras are used for calibration, they are balanced using the white balancing technique: a single white paper is shown to both cameras and the cameras are white balanced instantly. (A software sketch of an equivalent balance follows below.)
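  • A software analogue of that hardware white balance, offered only as an assumed alternative and not as the method used in the prototype, could scale each color channel so that a designated white reference region (such as the sheet of paper shown to both cameras) becomes neutral:

```python
import numpy as np

def white_balance(image, white_region):
    """White-patch balancing: white_region is a (row_slice, col_slice) pair
    selecting pixels that should appear neutral white; each channel is scaled
    so that the selected patch has equal channel means."""
    img = image.astype(float)
    patch = img[white_region].reshape(-1, 3)
    gains = patch.mean(axis=0)
    gains = gains.mean() / gains                 # per-channel correction factors
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Example (hypothetical region): balance both side cameras against the same patch.
# left_balanced = white_balance(left_frame, (slice(10, 60), slice(10, 60)))
# right_balanced = white_balance(right_frame, (slice(10, 60), slice(10, 60)))
```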
  • the virtual video of the face can be adequate to support the communication of identity, mental state, gesture, and gaze direction.
  • the real video frames from the camcorder and the virtual video frames were normalized to the same size of 200×200 and compared using cross correlation and interpoint distances between salient face features.
  • Five images that were considered for evaluation are shown in FIG. 23 .
  • Important items considered were the smoothness and accuracy of lips and eyes and their movements, the quality of the intensities, and the synchronization of the audio and video.
  • flaws looked for were breaks at the centerline of the face due to blending and other distortions that may have been caused by the sensing and warping process.
  • the interpoint measure is the mean of six Euclidean distances between facial feature points: $\frac{1}{6}\left[D_{af} + D_{bf} + D_{cf} + D_{cg} + D_{dg} + D_{eg}\right]$.
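  • Both objective measures can be computed as in the following sketch; the function names and the feature-point labels a through g are illustrative assumptions, with the six distance pairs taken from the expression above.

```python
import numpy as np

def normalized_cross_correlation(region_a, region_b):
    """Normalized cross-correlation between two same-sized facial regions,
    e.g. the eye or mouth region of a real frame and of a virtual frame."""
    a = region_a.astype(float).ravel()
    b = region_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mean_interpoint_distance(points):
    """Mean of the six Euclidean distances D_af, D_bf, D_cf, D_cg, D_dg, D_eg,
    where points is a dict of labeled 2D feature coordinates."""
    pairs = [('a', 'f'), ('b', 'f'), ('c', 'f'), ('c', 'g'), ('d', 'g'), ('e', 'g')]
    return float(np.mean([np.linalg.norm(np.subtract(points[p], points[q]))
                          for p, q in pairs]))
```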
  • Synchronization in the two videos is preferred in this application of the invention. Since two views of a face with lip movements are merged together, any small change in the synchronization has a high impact on the misalignment of the lips. This synchronization was evaluated based on sensitive movements such as eyeball movements and blinking eyelids. Similarly, mouth movements were examined in the virtual videos. FIGS. 26 to 27 show some of these effects.
  • the total computation time consists of (1) transferring the images into buffers, (2) warping by interpolating each of the grid blocks, and (3) linearly smoothing each output image.
  • the average time is about 60 ms per frame using a 746 MHz computer. Less than 30 ms would be considered to be real-time: this can be achieved with a current computer with clock rate of 2.6 GHz.
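  • As a rough scaling estimate (assuming, as an approximation, that the per-frame cost varies inversely with clock rate): 60 ms × (746 MHz / 2600 MHz) ≈ 17 ms per frame, comfortably below the 30 ms real-time threshold.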
  • Some implementations can require more power to mosaic training data into the video to account for features occluded from the cameras.
  • the algorithm being used can be made to work in real-time.
  • the working prototype has been tested on a diverse set of seven individuals. From comparisons of the virtual videos with real videos, it is expected that important facial expressions will be represented adequately and not distorted by more than 2%.
  • the HMD system implementing the image processing software of the present invention can support the intended telecommunication applications.
  • 3D texture-mapped face models can also be created by calibrating the cameras and projector in the WCS. 3D models present the opportunity for greater compression of the signal and for arbitrary frontal viewpoints, which are desired for virtual face-to-face collaboration.
  • structured light projection is an obtrusive step in the process and may be cumbersome in the field.
  • a generic mesh model of the face can also be employed.
  • the 3D modeling embodiments include one or more of the following: (a) a calibration method that does not depend upon structured light, (b) an output format that is a dynamic 3D model rather than just a 2D video, and (c) a real-time tracking method that identifies salient face points in the two side videos and updates both the 3D structure and the texture of the 3D model accordingly.
  • This model can be rendered rapidly by standard graphics software and displayed by standard graphics cards.
  • the mesh point 3D coordinates are available for a generic face. Scaling and deformation transformations can be used to instantiate this model for an individual wearing the Face Capture Head Mounted Display Units (FCHMDs).
  • the model can be viewed/rendered from a general viewpoint within the coverage of the cameras and not just from the central point in front of the face.
  • Triangles of the mesh can be texture-mapped to the sensed images and to other stored face data that may be needed to fill in for unimaged patches.
  • the 3D face model can be instantiated to fit a specific individual by one or more of the following: (1) choosing special points by hand on a digital frontal and profile photo; (2) choosing special points from the two side video frames of a neutral expression taken from the FCHMD, and enabling the wearer to make adjustments while viewing the resulting rendered 3D model.
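  • One concrete way to derive the scaling and deformation from such hand-chosen points is a least-squares similarity fit between the selected points and the corresponding generic-mesh vertices; the sketch below uses the standard Umeyama/Procrustes construction and is an illustrative approach, not an algorithm specified by the patent. Residual per-vertex offsets after the similarity fit can be retained as the deformation.

```python
import numpy as np

def fit_similarity(mesh_pts, user_pts):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping generic mesh points onto the user's selected points; both inputs
    are N x 3 arrays of corresponding 3D coordinates."""
    X = np.asarray(mesh_pts, dtype=float)
    Y = np.asarray(user_pts, dtype=float)
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    U, D, Vt = np.linalg.svd(Yc.T @ Xc / len(X))    # cross-covariance SVD
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                   # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / Xc.var(axis=0).sum()
    t = mu_y - s * R @ mu_x
    return s, R, t

def instantiate_mesh(generic_vertices, s, R, t, deformation=None):
    """Apply the similarity transform (plus optional per-vertex deformation
    offsets) to every vertex of the generic face mesh."""
    fitted = s * (np.asarray(generic_vertices, dtype=float) @ R.T) + t
    if deformation is not None:
        fitted = fitted + deformation
    return fitted
```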
  • standard rendering of the face model requires one or more of the following: (1) the set of triangles modeling the 3D geometry; (2) the two side images from the FCHMD; (3) a mapping of all vertices of each 3D triangle into the 2D coordinate space of one of the side images; (4) a viewpoint from which to view the 3D model; and (5) a lighting model that determines how the 3D model is illuminated.
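  • Those five rendering inputs can be collected into a simple data structure; the field names below are illustrative, not terminology from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class FaceRenderInputs:
    """Everything a standard renderer needs to draw the textured face model."""
    triangles: List[Tuple[int, int, int]]      # (1) vertex indices of each 3D triangle
    vertices: np.ndarray                       #     N x 3 mesh vertex coordinates
    side_images: Dict[str, np.ndarray]         # (2) the two FCHMD side images ('left', 'right')
    uv_map: Dict[int, Tuple[str, Tuple[float, float]]]  # (3) vertex -> (image key, 2D image coordinate)
    viewpoint: np.ndarray = field(default_factory=lambda: np.array([0.0, 0.0, 1.0]))   # (4) virtual camera position
    light_dir: np.ndarray = field(default_factory=lambda: np.array([0.0, 0.0, -1.0]))  # (5) simple directional light
```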
  • FIG. 28 illustrates the identification of some feature points in a side image and a set of triangles formed using the feature points as vertices. These triangles serve as bounding polygons for regions to be texture mapped to corresponding polygonally bounded regions of a generic mesh model.
  • the generic mesh model used is selected to maximize similarity between the feature points automatically recognized in the side view image and feature points of the mesh model as if viewed from the side.
  • scaling and deformation transformations already obtained for causing the generic mesh model to fit a particular user are next used to modify texture mapping of the generic mesh model to the side view images.
  • the resulting 3D model of the user's face can be rendered from a selected virtual point of view to result in an output image. Accordingly, input video streams of side view images can be used in realtime to produce a video stream of output images from a virtual point of view.
  • users communicating with one another may each wear an FCHMD, and the FCHMD can operate in a variety of ways.
  • side views of a first user's face can be transmitted to the second user's FCHMD, where they can be warped and blended to produce the 3D model, which is then rendered from a selected perspective to produce the output image.
  • the first user's FCHMD can warp and blend the side views to produce the 3D model, and transmit the 3D model to the second user's FCHMD where it can be rendered from a selected perspective to produce the output image.
  • the first user's FCHMD can warp and blend the side views, render the resulting 3D model from a selected perspective to produce the output image, and transmit the output image to the second user's FCHMD.
  • an image processing module external to the FCHMDs can perform some or all of the steps necessary to produce the output image from the side views. Further still, this external image processing module can be remotely located on a communications network, rather than physically located at a location of one or more of the users.
  • an FCHMD may be adapted to transmit to a remote location and/or receive from a remote location at least one of the following (a message-schema sketch follows this list of embodiments): (1) side view images; (2) user-specific scaling and deformation transformations; (3) position of a user's face in a common coordinate system of a collaborative, virtual environment; (4) a 3D model of a user's face; (5) a selection of a virtual point of view from which to render a user's face; and (6) an output image.
  • Supplemental image data obtained from a particular user or from training users can also be transmitted or received, and can even be integrated into the generic mesh models ahead of time.
  • the FCHMD does not have to transmit or receive one or more of each of the types of data listed above.
  • an FCHMD may only transmit and receive output images.
  • an FCHMD may transmit and receive only two data types, including output images together with position of a user's face in a common coordinate system of a collaborative, virtual environment.
  • an FCHMD will transmit and receive only side view images.
  • an FCHMD will transmit and receive only two data types, including side view images, together with position of a user's face in a common coordinate system of a collaborative, virtual environment.
  • an FCHMD will transmit and receive only 3D models of users' faces.
  • an FCHMD will transmit and receive only two data types, including 3D models of users' faces, together with position of a user's face in a common coordinate system of a collaborative, virtual environment.
  • where 3D models or side view images are transmitted and received, it may be the case that user-specific scaling and deformation transformations are transmitted and received at some point, perhaps during an initialization of collaboration.
  • one FCHMD can do most or all of the work for both FCHMDs, and receive side view images and face position data for a first user while transmitting output images or a 3D model for a second user. Accordingly, all of these embodiments and others that will be readily apparent to those skilled in the art are described above.
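  • One way to organize the data types enumerated above is a small tagged message schema, sketched below in Python; the type names and the example profile are assumptions for illustration, not a protocol defined by the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any

class PayloadType(Enum):
    SIDE_VIEW_IMAGES = auto()      # (1) raw left/right side view images
    USER_TRANSFORMS = auto()       # (2) user-specific scaling and deformation transformations
    FACE_POSITION = auto()         # (3) face position in the shared collaborative coordinate system
    FACE_MODEL_3D = auto()         # (4) instantiated 3D model of a user's face
    VIEWPOINT_SELECTION = auto()   # (5) selected virtual point of view
    OUTPUT_IMAGE = auto()          # (6) fully rendered output image

@dataclass
class FCHMDMessage:
    sender_id: str
    payload_type: PayloadType
    payload: Any                   # e.g. image arrays, mesh data, or transform parameters

# Example profile: an FCHMD that exchanges only output images and face position.
OUTPUT_ONLY_PROFILE = {PayloadType.OUTPUT_IMAGE, PayloadType.FACE_POSITION}
```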
  • the FCHMD optics/electronics of some embodiments can sense in real time the real expressive face of the wearer from the two side videos, and the software can create in real time an active 3D face model to be transmitted to remote collaborators.
  • the morphable model is trained for dynamic use on a population of users.
  • a diverse set of training users may wear the FCHMD and follow a script that induces a variety of facial expressions, while frontal video is also recorded.
  • This training set can support salient point tracking and also the substitution of real data for viewpoints that cannot be observed by the side cameras (inside the mouth, for example).
  • the training videos can record sequences of articulator movements that can be used during online FCHMD use.
  • let S be a set of shape vectors composed of the face surface points, and T a corresponding set of texture vectors.
  • $S_j = (x_1, y_1, z_1, \ldots, x_n, y_n, z_n)$  (1)
  • $T_j = (r_1, g_1, b_1, \ldots, r_n, g_n, b_n)$  (2)
  • the shape points contain, as a subset, the salient points of the shape mesh. Training the model can be accomplished by hand labeling of the mesh points for a diverse set of faces and multiframe video recording followed by principal components analysis to obtain a minimum spanning dimensionality.
  • the parameters $a_j$, $b_j$ represent the face p in terms of the training faces and the new illumination conditions and possibly slight variation in the camera view.
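  • A minimal sketch of this representation, assuming the training shape vectors S_j and texture vectors T_j of equations (1) and (2) have been collected and reduced by principal components analysis, is given below; the class interface is an assumption, but the coefficients a and b play the role of the patent's a_j, b_j.

```python
import numpy as np

class MorphableFaceModel:
    """PCA model over training shape vectors S_j and texture vectors T_j."""

    def __init__(self, shapes, textures, n_components=20):
        shapes = np.asarray(shapes, dtype=float)       # M x 3n rows (x1,y1,z1,...,xn,yn,zn)
        textures = np.asarray(textures, dtype=float)   # M x 3n rows (r1,g1,b1,...,rn,gn,bn)
        self.mean_shape = shapes.mean(axis=0)
        self.mean_texture = textures.mean(axis=0)
        # Principal components give a minimum spanning set of directions.
        _, _, vs = np.linalg.svd(shapes - self.mean_shape, full_matrices=False)
        _, _, vt = np.linalg.svd(textures - self.mean_texture, full_matrices=False)
        self.shape_basis = vs[:n_components]           # rows: principal shape directions
        self.texture_basis = vt[:n_components]         # rows: principal texture directions

    def reconstruct(self, a, b):
        """Face represented by shape coefficients a and texture coefficients b."""
        return (self.mean_shape + a @ self.shape_basis,
                self.mean_texture + b @ self.texture_basis)

    def project(self, shape, texture):
        """Recover coefficients a, b for an observed shape/texture pair."""
        return ((shape - self.mean_shape) @ self.shape_basis.T,
                (texture - self.mean_texture) @ self.texture_basis.T)
```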
  • Tracking of salient feature points can be accomplished to dynamically change the transformation tables and achieve a dynamic model.
  • the parameters of the model $a_j$, $b_j$ can be dynamically fit by optimizing the similarity between a model rendered using these parameters and the observed images.
  • $E(a_j, b_j) = \sum_{x, y} \left| I_{\mathrm{observed}}[x, y] - I_{\mathrm{rendered}}[x, y;\, a_j, b_j] \right|$  (3)
  • Fitting via hill-climbing is one designated optimization procedure in some embodiments so that small dynamic updates can be made to the model parameters for the next observed side video frames.
  • the FCHMD can be calibrated by finding the optimal fit between a parameterized model and the video data currently observed on the FCHMD. Once this fit is known, locations of the salient mesh points $(X_k, Y_k, Z_k)$ are known and thus a texture map is defined between the 3D mesh and the 2D images for that instant of time (current expression). Since iterative hill-climbing is used for the fitting procedure, it is expected that either some intelligent guess or some hand selection will be needed to initialize the fitting. A fully automatic procedure can be initialized from an average wearer's face determined from the training data.
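  • The fitting loop can be sketched as a greedy, coordinate-wise hill climb that minimizes the image error E(a_j, b_j) of equation (3); render_model below stands in for the standard graphics rendering described earlier, and the step size and iteration limits are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def image_error(obs_left, obs_right, ren_left, ren_right):
    """E(a, b): summed absolute difference between observed side images and the
    model rendered into the two camera views (equation 3)."""
    return (np.abs(obs_left.astype(float) - ren_left.astype(float)).sum()
            + np.abs(obs_right.astype(float) - ren_right.astype(float)).sum())

def hill_climb_fit(model, render_model, obs_left, obs_right, a, b,
                   step=0.05, max_iters=50):
    """Perturb one coefficient at a time and keep the change only if the error
    decreases; small per-frame updates let the model track the side videos."""
    params = np.concatenate([a, b])
    split = len(a)

    def error(p):
        ren_left, ren_right = render_model(model, p[:split], p[split:])
        return image_error(obs_left, obs_right, ren_left, ren_right)

    best = error(params)
    for _ in range(max_iters):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                e = error(trial)
                if e < best:
                    best, params, improved = e, trial, True
        if not improved:
            break                                   # local optimum for this frame
    return params[:split], params[split:], best
```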
  • the control software for the FCHMD can have a backup procedure so that the HMD wearer can initialize the fitting by manually choosing some salient face points, via the wearer viewing the video images and selecting points.

Abstract

Image processing procedures include receiving at least two side view images of a face of a user. In other aspects, side view images are warped and blended into an output image of a face of a user as if viewed from a virtual point of view. In further aspects, a virtual video is produced in real time of output images from a video feed of side view images.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation-in-part of U.S. patent application Ser. No. 09/748,761 filed on Dec. 22, 2000. The disclosure of the above application is incorporated herein by reference in its entirety for any purpose.
  • FIELD OF THE INVENTION
  • The present invention generally relates to computer-based teleconferencing in a networked virtual reality environment, and more particularly to mobile face capture and image processing.
  • BACKGROUND OF THE INVENTION
  • Networked virtual environments allow users at remote locations to use a telecommunication link to coordinate work and social interaction. Teleconferencing systems and virtual environments that use 3D computer graphics displays and digital video recording systems allow remote users to interact with each other, to view virtual work objects such as text, engineering models, medical models, play environments and other forms of digital data, and to view each other's physical environment.
  • A number of teleconferencing technologies support collaborative virtual environments which allow interaction between individuals in local and remote sites. For example, video-teleconferencing systems use simple video screens and wide screen displays to allow interaction between individuals in local and remote sites. However, wide screen displays are disadvantageous because virtual 3D objects presented on the screen are not blended into the environment of the room of the users. In such an environment, local users cannot have a virtual object between them. This problem applies to representation of remote users as well. The location of the remote participants cannot be anywhere in the room or the space around the user, but is restricted to the screen.
  • Networked immersive virtual environments also present various disadvantages. Networked immersive virtual reality systems are sometimes used to allow remote users to connect via a telecommunication link and interact with each other and virtual objects. In many such systems the users must wear a virtual reality display where the user's eyes and a large part of the face are occluded. Because these systems only display 3D virtual environments, the user cannot see both the physical world of the site in which they are located and the virtual world which is displayed. Furthermore, people in the same room cannot see each others' full face and eyes, so local interaction is diminished. Because the face is occluded, such systems cannot capture and record a full stereoscopic view of remote users' faces.
  • Another teleconferencing system is termed CAVES. CAVES systems use multiple screens arranged in a room configuration to display virtual information. Such systems have several disadvantages. In CAVES systems, there is only one correct viewpoint; all other local users have a distorted perspective on the virtual scene. Scenes in CAVES systems are projected only on a wall, so two local users can view a scene on the wall, but an object cannot be presented in the space between them. These systems also use multiple rear screen projectors, and therefore are very bulky and expensive. Additionally, CAVES systems may also utilize stereoscopic screen displays. Stereoscopic screen display systems do not present 3D stereoscopic views that interpose 3D objects between local users of the system. These systems sometimes use 3D glasses to present a 3D view, but only one viewpoint is shared among many users, often with perspective distortions.
  • Consequently, there is a need for an augmented reality display that mitigates the above mentioned disadvantages and has the capability to display virtual objects and environments, superimpose virtual objects on the “real world” scenes, provide “face-to-face” recording and display, be used in various ambient lighting environments, and correct for optical distortion, while minimizing computational power and time.
  • Faces have been captured passively in rooms instrumented with a set of cameras, where stereo computations can be done using selected viewpoints. Other objects can be captured using the same methods. Such hardware configurations are unavailable for mobile use in arbitrary environments, however. Other work has shown that faces can be captured using a single camera and processing that uses knowledge of the human face. Either the face has to move relative to the camera, or assumptions of symmetry are employed. Our approach is to use two cameras affixed to the head, which is necessary to convey non-symmetrical facial expressions, such as the closing of one eye and not the other, or the reflection of a fire on only one side of the face.
  • There is little overlap in the images taken from outside the user's central field of view, so the frontal view synthesized is a novel view. In previous work, novel views have been synthesized by a panoramic system and/or by interpolating between a set of views. Producing novel views in a dynamic scenario was successfully shown for a highly rigid motion. This work extended interpolation techniques to the temporal domain from the spatial domain. A novel view at a new time instant was generated by interpolating views at nearby time intervals using spatio-temporal view interpolation, where a dynamic 3-D scene is modeled and novel views are generated at intermediate time intervals.
  • There remains a need for a way to generate in real time a synthetic frontal view of a human face from two real side views.
  • SUMMARY OF THE INVENTION
  • In accordance with the present invention, image processing procedures include receiving at least two side view images of a face of a user. In other aspects, side view images are warped and blended into an output image of a face of a user as if viewed from a virtual point of view. In further aspects, a video is produced in real time of output images from a video feed of side view images.
  • In yet other aspects, a teleportal system is provided. A principal feature of the teleportal system is that single or multiple users at a local site and a remote site use a telecommunication link to engage in face-to-face interaction with other users in a 3D augmented reality environment. Each user utilizes a system that includes a display such as a projection augmented-reality display and sensors such as a stereo facial expression video capture system. The video capture system allows the participants to view a 3D, stereoscopic, video-based image of the face of all remote participants and hear their voices, view the local participants unobstructed, and view a room that blends physical with virtual objects that users can interact with and manipulate.
  • In one preferred embodiment of the system, multiple local and remote users can interact in a room-sized space draped in a fine grained retro-reflective fabric. An optical tracker preferably having markers attached to each user's body and digital video cameras at the site records the location of each user at a site. A computer uses the information about each user's location to calculate the user's body location in space and create a correct perspective on the location of the 3D virtual objects in the room.
  • The projection augmented-reality display projects stereo images towards a screen which is covered by a fine grain retro-reflective fabric. The projection augmented-reality display uses an optics system that preferably includes two miniature source displays, and projection-optics, such as a double Gauss form lens combined with a beam splitter, to project an image via light towards the surface covered with the retro-reflective fabric. The retro-reflective fabric retro-reflects the projected light brightly and directly back to the eyes of the user. Because of the properties of the retro-reflective screen and the optics system, each eye receives the image from only one of the source displays. The user perceives a 3D stereoscopic image apparently floating in space. The projection augmented-reality display and video capture system does not occlude vision of the physical environment in which the user is located. The system of the present invention allows users to see both virtual and physical objects, so that the objects appear to occupy the same space. Depending on the embodiment of the system, the system can completely immerse the user in a virtual environment, or the virtual environment can be restricted to a specific region in space, such as a projection window or table top. Furthermore, the restricted regions can be made part of an immersive wrap-around display.
  • Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will become more fully understood from the detailed description and the accompanying drawings, wherein:
  • FIG. 1 is a plan view of a first preferred embodiment of a teleportal system of the present invention showing one local user at a first site and two remote users at a second site;
  • FIG. 2 is a block diagram depicting the teleportal system of the present invention;
  • FIG. 3 is a perspective view of the illumination system for a projection user-mounted display of the present invention;
  • FIG. 4 is a perspective view of a first preferred embodiment of a vertical architecture of the illumination system for the projection user-mounted display of the present invention;
  • FIG. 5 is a perspective view of a second preferred embodiment of a horizontal architecture of the illumination system for the projection user-mounted display of the present invention;
  • FIG. 6 is a diagram depicting an exemplary optical pathway associated with a projection user-mounted display of the present invention;
  • FIG. 7 is a side view of a projection lens used in the projection augmented-reality display of the present invention;
  • FIG. 8 is a side view of the projection augmented-reality display of FIG. 4 mounted into a headwear apparatus;
  • FIG. 9 is a perspective view of the video system in the teleportal headset of the present invention;
  • FIG. 10 is a side view of the video system of FIG. 9;
  • FIG. 11 is a top view of a video system of FIG. 9;
  • FIG. 12 a is an alternate embodiment of the teleportal site of the present invention with a wall screen;
  • FIG. 12 b is another alternate embodiment of the teleportal site of the present invention with a spherical screen;
  • FIG. 12 c is yet another alternate embodiment of the teleportal site of the present invention with a hand-held screen;
  • FIG. 12 d is yet another alternate embodiment of the teleportal site of the present invention with body shaped screens;
  • FIG. 13 is a first preferred embodiment of the projection augmented-reality display of the present invention;
  • FIG. 14 is a side view of the projection augmented-reality display of FIG. 13;
  • FIG. 15 is a view of a face capture concept and images from a prototype head mounted display unit;
  • FIG. 16 is a view of an experimental prototype of a face capture system;
  • FIG. 17 is a view demonstrating behavior of a grid pattern;
  • FIG. 18 is a view of face images captured during a calibration stage;
  • FIG. 19 is a block diagram of an off-line calibration stage during synthesis of a virtual frontal view;
  • FIG. 20 is a block diagram of an operational stage during synthesis of a virtual frontal view;
  • FIG. 21 is a set of views illustrating generation of a frontal view during a calibration stage and reconstruction of the frontal image from a side view using a grid: (a) left image captured during the calibration stage; (b) operational left image warped into virtual image plus calibration stripes; and (c) operational left image without stripes;
  • FIG. 22 is a set of views illustrating: (a) a frontal view obtained from a camcorder; and (b) a virtual frontal view obtained as a reconstructed frontal view from transformation tables and a side image of FIG. 21(c);
  • FIG. 23 is a set of views of images considered for objective evaluation with a top row of real video frames compared to a bottom row of virtual video frames;
  • FIG. 24 is a set of views of a real video image on the left compared to a corresponding virtual video image on the right, wherein facial regions are compared using cross-correlation;
  • FIG. 25 is a set of views of a real video image on the left compared to a corresponding virtual video image on the right, wherein distances between facial feature points are considered using a Euclidean distance measure;
  • FIG. 26 is a set of views with a top row showing images captured using a left camera, a second row showing images captured using a right camera; a third row showing images captured using a camcorder placed in front of the face, and a final row showing virtual frontal views generated from images in the first two rows;
  • FIG. 27 is a set of views illustrating synchronization of eyelids during blinking, with real video displayed in a top row and virtual video illustrated in a bottom row; and
  • FIG. 28 is a view identifying some feature points in a side image and a set of triangles formed using the feature points as vertices.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
  • FIG. 1 depicts a teleportal system 100 using two display sites 101 and 102. Teleportal system 100 includes a first teleportal site or local site 101 and a second teleportal site or remote site 102. It should be appreciated that additional teleportal sites can be included in teleportal system 100. Although first teleportal site 101 is described in detail below, it should further be appreciated that the second teleportal site 102 can be identical to the first teleportal site 101. It should also be noted that the number of users and types of screens can vary at each site.
  • Teleportal sites 101 and 102 preferably include a screen 103. Screen 103 is made of a retro-reflective material such as bead-based or corner-cube-based materials manufactured by 3M® and Reflexite Corporation. The retro-reflective material is preferably gold, which produces a bright image with adequate resolution. Alternatively, other material which has metallic fiber adequate to reflect at least a majority of the image or light projected onto its surface may be used. The retro-reflective material preferably provides about 98 percent reflection of the incident light projected onto its surface. The material retro-reflects light projected onto its surface directly back upon its incident path and to the eyes of the user. Screen 103 can be a surface of any shape, including but not limited to a plane, sphere, pyramid, and body-shaped, for example, like a glove for a user's hand or a body suit for the entire body. Screen 103 can also be formed into a substantially cubic shape resembling a room, preferably similar to four walls and a ceiling which generally surround the users. In the preferred embodiment, screen 103 forms four walls which surround users 110. 3D graphics are visible via screen 103. Because the users can see 3D stereographic images, text, and animations, all surfaces that have retro-reflective properties in the room or physical environment can carry information. For example, a spherical screen 104 is disposed within the room or physical environment for projecting images. The room or physical environment may include physical objects substantially unrelated to the teleportal system 100. For example, physical objects may include furniture, walls, floors, ceilings and/or other inanimate objects.
  • With continued reference to FIG. 1, local site 101 includes a tracking system 106. Tracking system 106 is preferably an optical or optical/hybrid tracking system which may include at least one digital video camera or CCD camera. By way of example, four digital video cameras 114, 116, 118 and 120 are shown. By way of another example, several sets of three stacked CCD arrays could be used for optical tracking. Visual processing software (not shown) processes teleportal site data acquired from digital video cameras 114, 116, 118 and 120. The software provides the data to the networked computer 107 a. Teleportal site data, for example, includes the position of users within the teleportal room.
  • Optical tracking system 106 further includes markers 96 that are preferably attached to one or more body parts of the user. In the preferred embodiment, markers 96 are coupled to each user's hand, which is monitored for movement and position. Markers 96 communicate marker location data regarding the location of the user's head and hands. It should be appreciated that the location of any other body part of the user or object to which a marker is attached can be acquired.
  • Users 110 wear a novel teleportal headset 105. Each headset preferably has displays and sensors. Each teleportal headset 105 communicates with a networked computer. For example, teleportal headsets 105 of site 101 communicate with networked computer 107 a. Networked computer 107 a communicates with a networked computer 107 b of site 102 via a networked data system 99. In this manner, teleportal headsets can exchange data via the networked computers. It should be appreciated that teleportal headset 105 can be connected via a wireless connection to the networked computers. It should also be appreciated that headset 105 can alternatively communicate directly to networked data system 99. One type of networked data system 99 is the Internet, a dedicated telecommunication line connecting the two sites, or a wireless network connection.
  • FIG. 2 is a block diagram showing the components for processing and distribution of information of the present invention teleportal system 100. It should be appreciated that information can be processed and distributed from other sources that provide visual data which can be projected by teleportal system 100. For example, digital pictures of body parts, images acquired via medical imaging technology and images of other three dimensional (3D) objects. Teleportal headset 105 includes at least one sensor array 220 which identifies and transmits the user's behavior. In the preferred embodiment, sensor array 220 includes a facial capture system 203 (described in further detail with reference to FIGS. 9, 10, and 11) that senses facial expression, an optical tracking system 106 that senses head motion, and a microphone 204 that senses voice and communication noise. It should be appreciated that other attributes of the user's behavior can be identified and transmitted by adding additional types of sensors.
  • Each of sensors 203, 106 and 204 are preferably connected to networked computer 107 and sends signals to the networked computer. Facial capture system 203 sends signals to the networked computer. However, it should be appreciated that sensors 203, 106 and 204 can directly communicate with a networked data system 99. Facial capture system 203 provides image signals based on the image viewed by a digital camera which are processed by a face-unwarping and image stitching module 207. Images or “first images” sensed by face capture system 203 are morphed for viewing by users at remote sites via a networked computer. The images for viewing are 3D and stereoscopic such that each user experiences a perspectively correct viewpoint on an augmented reality scene. The images of participants can be located anywhere in space around the user.
  • Morphing distorts the stereo images to produce a viewpoint of preferably a user's moving face that appears different from the viewpoint originally obtained by facial capture system 203. The distorted viewpoint is accomplished via image morphing to approximate a direct face-to-face view of the remote face. Face-warping and image-stitching module 207 morphs images to the user's viewpoint. The pixel correspondence algorithm or face warping and image stitching module 207 calculates the corresponding points between the first images to create second images for remote users. Image data retrieved from the first images allows for a calculation of a 3D structure of the head of the user. The 3D image is preferably a stereoscopic video image or a video texture mapping to a 3D virtual mesh. The 3D model can display the 3D structure or second images to the users in the remote location. Each user in the local and remote sites has a personal and correct perspective viewpoint on the augmented reality scene. Optical tracking system 106 and microphone 204 provide signals to networked computer 107 that are processed by a virtual environment module 208.
  • A display array 222 is provided to allow the user to experience the 3D virtual environment, for example via a projection augmented-reality display 401 and stereo audio earphones 205 which are connected to user 110. Display array 222 is connected to a networked computer. In the preferred embodiment, a modem 209 connects a networked computer to network 99.
  • FIGS. 3 through 5 illustrate a projection augmented-reality display 401 which can be used in a wide variety of lighting conditions, including indoor and outdoor environments. With specific reference to FIG. 3, a projection lens 502 is positioned to receive a beam from a beamsplitter 503. A source display 501, which is a reflective LCD panel, is positioned opposite of projection lens 502 from beamsplitter 503. Alternatively, source display 501 may be a DLP flipping mirror manufactured by Texas Instruments®. Beamsplitter 503 is angled at a position less than ninety degrees from the plane in which projection lens 502 is positioned. A collimating lens 302 is positioned to provide a collimating lens beam to beamsplitter 503. A mirror 304 is placed between collimating lens 302 and a surface mounted LCD 306. Surface mounted LCD 306 provides light to mirror 304 which passes through collimating lens 302 and beamsplitter 503.
  • Source display 501 transmits light to beamsplitter 503. It should be appreciated that FIG. 4 depicts a pair of the projection augmented-reality displays shown in FIG. 3; however, each of projection augmented-reality displays 530 and 532 is mounted in a vertical orientation relative to the head of the user. Furthermore, FIG. 5 depicts a pair of projection augmented-reality displays of the type shown in FIG. 3; however, each of projection augmented-reality displays 534 and 536 is mounted in a horizontal orientation relative to the head of the user.
  • FIG. 6 illustrates the optics of projection augmented-reality display 500 relative to a user's eye 508. A projection lens 502 receives an image from a source display 501 located beyond the focal plane of projection lens 502. Source display 501 may be a reflective LCD panel. However, it should be appreciated that any miniature display including, but not limited to, miniature CRT displays, DLP flipping mirror systems and backlighting transmissive LCDs may be alternatively utilized. Source display 501 preferably provides an image that is further transmitted through projection lens 502. The image is preferably computer-generated. A translucent mirror or light beamsplitter 503 is placed after projection lens 502 at preferably 45 degrees with respect to the optical axis of projection lens 502; therefore, the light refracted by projection lens 502 produces an intermediary image 505 at its optical conjugate and the reflected light of the beam-splitter produces a projected image 506, symmetrical to intermediary image 505 about the plane in which light beamsplitter 503 is positioned. A retro-reflective screen 504 is placed in a position onto which projected image 506 is directed. Retro-reflective screen 504 may be located in front of or behind projected image 506 so that rays hitting the surface are reflected back in the opposite direction and travel through beamsplitter 503 to user's eye 508. The reflected image is of a sufficient brightness which permits improved resolution. User's eye 508 will perceive projected image 506 from an exit pupil 507 of the optical system.
  • FIG. 7 depicts a preferred optical form for projection lens 502. Projection lens 502 includes a variety of elements and can be accomplished with glass optics, plastic optics, or diffractive optics. A non-limiting example of projection lens 502 is a double Gauss lens form formed by a first singlet lens 609, a second singlet lens 613, a first doublet lens 610, a second doublet lens 612, and a stop surface 611, which are arranged in series. Projection lens 502 is made of a material which is transparent to visible light. The lens material may include glass and plastic materials.
  • Additionally, the projection augmented-reality display can be mounted on the head. More specifically, FIG. 8 shows projection augmented-reality display 800 mounted to headwear or helmet 810. Projection augmented-reality display 800 is mounted in a vertical direction. Projection augmented-reality display 800 can be used in various ambient light conditions, including, but not limited to, artificial light and natural sunlight. In the preferred embodiment, light source 812 transmits light to source display 814. Projection augmented-reality display 800 provides optics to produce an image to the user.
  • FIGS. 9, 10 and 11 illustrate teleportal headset 105 of the present invention. Teleportal headset 105 preferably includes a facial expression capture system 402, earphones 404, and a microphone 403. Facial expression capture system 402 preferably includes digital video cameras 601 a and 601 b. In the preferred embodiment, digital video cameras 601 a and 601 b are disposed on either side of the user's face 606, such that images covering the entire face are captured, which are then used to create one image of the complete face, or a 3D model of the complete face that can then be used to generate single images or stereo images for general viewpoints of the face 606.
  • Each video camera 601 a and 601 b is mounted to a housing 406. Housing 406 is formed as a temple section of the headset 105. In the preferred embodiment, each digital video camera 601 a and 601 b is pointed at a respective convex mirror 602 a and 602 b. Each convex mirror 602 a and 602 b is connected to housing 406 and is angled to reflect an image of the adjacent side of the face. Digital cameras 601 a and 601 b located on each side of the user's face 410 capture a first image or particular image of the face from each convex mirror 602 a and 602 b associated with the individual digital cameras 601 a and 601 b, respectively, such that a stereo image of the face is captured. A lens 408 is located at each eye of user face 606. Lens 408 allows images to be displayed to the user as the lens 408 is positioned at 45 degrees relative to the axis along which a light beam is transmitted from a projector. Lens 408 is made of a material that reflects and transmits light. One preferred material is “half silvered mirror.”
  • FIGS. 12 a through 12 d show alternate configurations of a teleportal site of the present invention with various shaped screens. FIG. 12 a illustrates an alternate embodiment of the teleportal system 702 in which retro-reflective fabric screen 103 is used on a room's wall so that a more traditional teleconferencing system can be provided. FIG. 12 b illustrates another alternate embodiment of a teleportal site 704 in which a desktop system 702 is provided. In desktop system 702, two users 110 observe a 3D object on a table top screen 708. In the preferred embodiment, screen 708 is spherically shaped. All users in sight of the screen 708 can view the perspective projections at the same time from their particular positions.
  • FIG. 12 c shows yet another alternate embodiment of teleportal site 704. User 110 has a wearable computer forming a “magic mirror” configuration of teleportal site 704. Teleportal headset 105 is connected to a wearable computer 712. The wearable computer 712 is linked to the remote user (not shown) preferably via a wireless network connection. A wearable screen includes a hand-held surface 714 covered with a retro-reflective fabric for the display of the remote user. A “magic mirror” configuration of teleportal site 704 is preferred in the outdoor setting because it is mobile and easy to transport. In the “magic mirror” configuration, the user holds the surface 714, preferably via a handle, and positions the surface 714 over a space to view the virtual environment projected by the projection display of the teleportal headset 105.
  • FIG. 12 d shows yet another alternate embodiment of the teleportal site 810. A body shaped screen 812 is disposed on a person's body 814. Body shaped screen 812 can be continuous or substantially discontinuous depending upon the desire to cover certain body parts. For example, a body shaped screen 812 can be shaped for a patient's head, upper body, and lower body. A body shaped screen 812 is beneficial for projecting images, such as that produced by MRI (or other digital images), onto the patient's body during surgery. This projecting permits a surgeon or user 816 to better approximate the location of internal organs prior to invasive treatment. Body shaped screen 812 can further be formed as gloves 816, thereby allowing the surgeon to place his hands (and arms) over the body of the patient yet continue to view the internal image in a virtual view without interference of his hands.
  • FIGS. 13 and 14 show a first preferred embodiment of a projection augmented-reality display 900 which includes a pair of LCD displays 902 coupled to headwear 905. In the preferred embodiment, a pair of LCD displays 902 project images to the eyes of the users. A microphone 910 is also coupled to headwear 905 to sense the user's voice. Furthermore, an earphone 912 is coupled to headwear 905. A lens 906 covers the eyes of the user 914 but still permits the user to view the surroundings around her. The glass lens 906 transmits and reflects light. In this manner, the user's eyes are not occluded by the lens. One preferred material for the transparent glass lens 906 is a “half silvered mirror.”
  • Communication of the expressive human face is important to tele-communication and distributed collaborative work. In addition to sophisticated collaborative work environments, there is a strong popular trend for the merger of cell phone and video functionality at consumer prices. At both ends of the technology spectrum, there is a problem producing quality video of a person's face without interfering with that person's ability to perform some task requiring both visual and motor attention. When the person is mobile, the technology of most collaborative environments is unusable. Referring now to FIG. 15, the solution proposed here is to modify a helmet mounted display (HMD) for minimally intrusive face capture. The prototype HMD has small mirrors held above the temples and viewed by small video cameras above the ears, creating a helmet that is balanced and light and with minimal occlusion of the wearer's field of view. The complete HMD design includes display components that display remote faces and scenes to the wearer as well as reality augmentation for the wearer's environment. The system and method of the present invention provides a virtual frontal video of the HMD wearer. This virtual video (VV) is synthesized by warping and blending the two real side view videos.
  • Side view as used herein should be interpreted as any offset view. Thus, the angle with respect to the face does not have to be directly from the side. Also, the side view can be from an angle beneath or above the face. Further, while side views of faces of users are typically captured and used from/in a virtual view, it should be readily understood that other parts of a user may also be captured, such as a user's hand.
  • A prototype HMD facial capture system has been developed. The development of the video processing reported here was isolated from the HMD device and performed using a fixed lab bench and conventional computer. Porting and integration of the video processing with the mobile HMD hardware can be accomplished in a variety of ways as further described below.
  • The prototype system was configured with off-the-shelf hardware and software components. FIG. 16 illustrates a lab bench used to develop the mobile face capture and image processing system and method. The bench was built to accommodate human subjects so they could keep their heads fixed relative to two cameras 1000A and 1000B and a structured light projector 1002. The two cameras 1000A and 1000B are placed so that their images are similar to those that can be obtained from the HMD optics. The light projector 1002 is used to orient the head precisely and to obtain calibration data used in image warping. In addition to the equipment shown in FIG. 16, a video camera (not shown) placed on top of the projector records the subject's face during each experiment for comparison purposes. The prototype uses an Intel Pentium III processor running at 746 MHz with 384 MB of RAM and two Matrox Meteor II cards.
  • In the experiment demonstrating feasibility of some embodiments of the present invention, several videos were taken for several volunteers so that the synthetic video could be compared to real video. One question posed was whether the synthetic frontal video would be of sufficient quality to support the applications intended for the HMD. The bench was set up for a general user and adjustments were made for individuals only when needed. Video and audio were recorded for each subject for offline processing.
  • The problem is to generate a virtual frontal view from two side views. The projected light grid provides a basis for mapping pixels from the side images into a virtual image with the projector's viewpoint. The grid is projected onto the face for only a few frames so that mapping tables can be built, and then is switched off for regular operation.
  • There are three 2D coordinate systems involved in creation of the virtual video. A global 3D coordinate system is denoted; however, it must be emphasized that 3D coordinates are not required for the task according to some embodiments of the present invention.
      • 1) World Coordinate System (WCS): for discussion only in some embodiments
      • 2) Left Camera Coordinate System (LCS): IL[s, t] is the left image with s, t coordinates.
      • 3) Right Camera Coordinate System (RCS): IR[u, v] is the right image with u, v coordinates.
      • 4) Projector Coordinate System (PCS): V[x, y] is the output virtual video image with coordinates defined by the projected grid.
  • During the calibration phase, the transformation tables are generated using the grid pattern coordinates. A rectangular grid is projected onto the face and the two side views are captured as shown in FIGS. 16 and 18. The location of the grid regions in the side images define where real pixel data is to be accessed for placement in the virtual video. Coordinate transformation is done between PCS and LCS and between PCS and RCS. Using transformation tables that store the locations of grid points, an algorithm can map every pixel in the front view to the appropriate side view. By centering the grid on the face, the grid also supports the correspondence between LCS and RCS and the blending of their pixels.
  • The behavior of a single gridded cell in the original side view and the virtual frontal view is demonstrated in FIG. 17. A grid cell in the frontal image maps to a quadrilateral with curved edges in the side image. Bilinear interpolation is used to reconstruct the original frontal grid pattern by warping a quadrilateral into a square or a rectangle.
    s = f_l(x, y) and t = g_l(x, y)   (1)
    u = f_r(x, y) and v = g_r(x, y)   (2)
  • Equations 1 and 2 represent four functions determined during the calibration stage and implemented via the transformation tables. These transformation tables are then used in the operational stage immediately after the grid is switched off. During operation, it is known for each pixel V[x, y] in which grid cell of LCS or RCS it lies. Bilinear interpolation is then used on the grid cell corners to access an actual pixel value to be output to the VV.
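  • By way of a non-limiting illustration, the table-driven warp can be sketched in a few lines of Python with numpy. For brevity, the sketch assumes a dense per-pixel table holding the fractional (s, t) or (u, v) source coordinates for every virtual pixel V[x, y], rather than the per-grid-cell corner tables described above; the function names bilinear_sample and warp_side_to_frontal are hypothetical and are not part of the actual implementation.

    import numpy as np

    def bilinear_sample(image, s, t):
        # Bilinearly interpolate image at fractional (row s, column t).
        # Coordinates are assumed to lie inside the image.
        s0, t0 = int(np.floor(s)), int(np.floor(t))
        s1, t1 = min(s0 + 1, image.shape[0] - 1), min(t0 + 1, image.shape[1] - 1)
        ds, dt = s - s0, t - t0
        top = (1 - dt) * image[s0, t0] + dt * image[s0, t1]
        bottom = (1 - dt) * image[s1, t0] + dt * image[s1, t1]
        return (1 - ds) * top + ds * bottom

    def warp_side_to_frontal(side_image, table):
        # table[x, y] = (s, t): source coordinates in the side image for
        # virtual pixel V[x, y], as determined during calibration.
        # side_image is assumed grayscale; color channels are handled the same way.
        height, width = table.shape[:2]
        out = np.zeros((height, width), dtype=np.float64)
        for x in range(height):
            for y in range(width):
                s, t = table[x, y]
                out[x, y] = bilinear_sample(side_image, s, t)
        return out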
  • In the case where convex mirrors, wide angle lenses, or equivalent sensors are employed to capture offset views of user faces, warping can still be accomplished in one step by making the point correspondences. However, in the case of strong nonlinear distortion, it is envisioned that bicubic interpolation may be employed instead of bilinear interpolation. It is also envisioned that subpixel coordinates and multiple pixel sampling can be used in cases where the face texture changes fast or where the face normal is away from the sensor direction.
  • Some implementation details are as follows. A rectangular grid of dimension 400×400 is projected onto the face. The grid is made by repeating three colored lines. White, green and cyan colors proved useful because of their bright appearance over the skin color. This combination of hues demonstrated good performance over a wide variety of skin pigmentations. However, it is envisioned that other hues may be employed. The first few frames have the grid projected onto the face before the grid is turned off. One of the frames with the grid is taken and the transformation tables are generated. The size of the grid pattern that is projected in the calibration stage plays a significant role in the quality of the video. This size was decided based on the trade-off between the quality of the video and execution time. An appropriate grid size was chosen based on trial and error. The trial and error process started by projecting a sparse grid pattern onto the face and then increasing the density of the grid pattern. At one point, the increase in the density did not significantly improve the quality of the face image but consumed too much time. At that point, the grid was finalized with a grid cell size of row-width 24 pixels and column-width 18 pixels. FIG. 18 shows the frames that are captured during the calibration stage of the experiment. This calibration step is feasible for use in collaborative rooms; however, it is envisioned that the calibration is applicable to mobile users as well.
  • FIG. 19 shows the off-line calibration stage during the synthesis of the virtual frontal view. Projector 1002 projects grid pattern 1004 onto human face 1006. Grid lines reflect off of human face 1006 to left and right mirrors 1008A and 1008B, and from the mirrors to respective left and right cameras 1000A and 1000B. Quadrilaterals of left and right calibration face images 1010A and 1010B are mapped to corresponding squares or rectangles of grid pattern 1004 to form left and right transformation tables 1012. It is envisioned that more than two side views can be used, and that other polygonal shapes besides quadrilaterals may be employed. Thus, a grid pattern of predetermined polygonal shapes is projected onto the face from a virtual point of view, side view images of the face are captured, and pixels enclosed by the polygons of captured side view images are mapped back to corresponding predetermined polygonal shapes of the grid pattern to form the transformation tables. It is envisioned that the side view imaging arrays may be integrated into a projection screen of the HMD, thus eliminating the mirrors while retaining fixed positions respective of and orientations toward sides of the user's face.
  • Using the transformation tables 1012 generated in the calibration phase, each virtual frontal frame is generated. The algorithm reconstructs each (x, y) coordinate in the virtual view by accessing the corresponding location in the transformation table and retrieving the pixel in IL (or IR) using interpolation. Then a 1D linear smoothing filter is used to smooth the intensity across the vertical midline of the face. Without this smoothing, a human viewer usually perceives a slight intensity edge at the midline of the face.
  • FIG. 20 shows the complete block diagram of the operational phase. Transformation tables 1012 are used to warp left and right face images 1010A and 1010B into left warped face image 1014A and right warped face image 1014B. These portions of the virtual output image 1016 are then blended by mosaicking the face image. Post processing to linearly smooth the image is performed to result in a final virtual face image 1018. Since the transformation is based on the bilinear interpolation technique, each pixel can be generated only when it is inside four grid coordinate points. Because the grid is not defined well at the periphery of the face, the algorithm is unable to generate the ears and hair portion of the face. The results of the warping during the calibration and operation stages are shown in FIGS. 21 through 23.
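  • As a rough sketch of the mosaicking and smoothing steps, assuming two already-warped frontal images of equal size (such as those produced by a warp like the one sketched earlier), the midline blend and the 1D smoothing can be approximated by a linear cross-fade over a narrow band at the vertical centerline. The band width and the function name blend_and_smooth are illustrative choices, not values from the prototype, whose exact filter may differ.

    import numpy as np

    def blend_and_smooth(left_warped, right_warped, band=8):
        # Take the left half from the left warp and the right half from the
        # right warp, then linearly cross-fade across a narrow band at the
        # midline so that no intensity edge is perceived at the face center.
        height, width = left_warped.shape[:2]
        mid = width // 2
        out = left_warped.astype(np.float64).copy()
        out[:, mid:] = right_warped[:, mid:]
        for i, col in enumerate(range(mid - band, mid + band)):
            alpha = (i + 0.5) / (2 * band)      # 0 = pure left, 1 = pure right
            out[:, col] = ((1 - alpha) * left_warped[:, col]
                           + alpha * right_warped[:, col])
        return out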
  • Some other post-processing can be included. For example, frames with a gridded pattern can be deleted from the final output: these can be identified by a large shift in intensity when the projected grid is switched off. Also, a microphone recording of the voice of the user, stored in a separate .wav file, can be appended to the video file to obtain a final output.
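  • One simple way to identify the grid-off transition, sketched below under the assumption that the projected grid noticeably raises the mean image intensity, is to look for a large drop in mean intensity between consecutive frames; the threshold value and the function name are hypothetical.

    import numpy as np

    def first_grid_off_frame(frames, threshold=15.0):
        # Return the index of the first frame captured after the projected
        # grid is switched off, detected as a large drop in mean intensity.
        means = [float(np.mean(frame)) for frame in frames]
        for i in range(1, len(means)):
            if means[i - 1] - means[i] > threshold:
                return i
        return 0    # fallback: no transition detected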
  • Color balancing of the cameras can also be performed. Even though software-based approaches to color balancing can be taken, the color balancing in the present work is done at the hardware level. Before the cameras are used for calibration, they are balanced using a white balancing technique: a single sheet of white paper is shown to both cameras, and the cameras are white balanced instantly.
  • The virtual video of the face can be adequate to support the communication of identity, mental state, gesture, and gaze direction. Some objective comparisons between the synthesized and real videos are reported below, plus a qualitative assessment.
  • The real video frames from the camcorder and the virtual video frames were normalized to the same size of 200×200 and compared using cross correlation and interpoint distances between salient face features. Five images that were considered for evaluation are shown in FIG. 23. Important items considered were the smoothness and accuracy of the lips and eyes and their movements, the quality of the intensities, and the synchronization of the audio and video. In particular, the flaws looked for were breaks at the centerline of the face due to blending and other distortions that may have been caused by the sensing and warping process.
  • 1) Normalized cross-correlation: The cross correlation between regions of the virtual image and real image was computed for rectangular regions containing the eyes and mouth (FIG. 24). As Table 1 shows, there was high correlation between the real and the virtual images taken at the same instant of time. Frames 2 and 3 shown in FIG. 23 contain facial expressions (eye and lip movements) that were quite different from the expression used during the calibration stage and the generated view gave a slightly lower correlation value when compared with the other frames. Also, the facial expressions in the first and fourth frames were similar to that of the expression in the calibration frame. Hence, these frames have a higher correlation value compared to the rest. The eye and lip regions were considered for evaluating the system because during any facial movement, these regions change significantly and are more important in communication.
    TABLE 1
    Results of Normalized Cross-Correlation Between the Real and the Virtual Frontal Views Applied in Regions Around the Eyes and Mouth

    Frame     left eye   right eye   mouth   eyes + mouth   complete
    Frame 1   0.988      0.987       0.993   0.989          0.989
    Frame 2   0.969      0.972       0.985   0.978          0.985
    Frame 3   0.969      0.967       0.992   0.978          0.986
    Frame 4   0.991      0.989       0.993   0.990          0.990
    Frame 5   0.985      0.986       0.992   0.988          0.989
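  • The correlation values reported in Table 1 can be reproduced in principle with a standard zero-mean normalized cross-correlation over matching regions, as sketched below; the exact normalization used in the experiment may differ, and the function name is illustrative.

    import numpy as np

    def normalized_cross_correlation(real_region, virtual_region):
        # Zero-mean normalized cross-correlation between two equally sized
        # regions (e.g., rectangles around the eyes or the mouth).
        a = real_region.astype(np.float64).ravel()
        b = virtual_region.astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        denominator = np.linalg.norm(a) * np.linalg.norm(b)
        return float(np.dot(a, b) / denominator) if denominator else 0.0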
  • 2) Euclidean distance measure: The differences in the normalized Euclidean distances between some of the most prominent feature points were computed. Each pair of feature points is chosen in such a way that one of them is relatively static with respect to the other. Among prominent feature points such as the corners of the eyes, the nose tip, and the corners of the mouth, the corners of the eyes are relatively static compared with the corners of the mouth. FIG. 25 shows the most prominent facial feature points and the distances between those points. Let R_ij represent the Euclidean distance between two feature points i and j in the real frontal image and V_ij represent the Euclidean distance between the same two feature points in the virtual frontal image. The difference in the Euclidean distance is D_ij = |R_ij − V_ij|. The average error ε for comparing the face images is defined by ε = (1/6)[D_af + D_bf + D_cf + D_cg + D_dg + D_eg].
    TABLE 2
    Euclidean Distance Measurement of the Prominent Facial Distances in the Real Image and Virtual Image and the Defined Average Error. All Dimensions Are in Pixels.

    Frame     D_af   D_bf   D_cf   D_cg   D_dg   D_eg   Error (ε)
    Frame 1   2.00   0.80   4.15   3.49   2.95   3.46   2.80
    Frame 2   0.59   3.00   0.79   4.91   0.63   0.80   1.79
    Frame 3   1.88   3.84   4.29   4.34   2.68   1.83   3.14
    Frame 4   1.09   2.97   2.10   6.33   3.01   4.08   3.36
    Frame 5   1.62   2.21   5.57   4.99   1.24   1.90   2.92
  • The results in Table 2 indicate small errors in the Euclidean distance measurements, of the order of 3 pixels in an image of size 200×200. The facial feature points in the five frames were selected manually, and hence some of the error may also be due to the instability of manual selection. One can note that the error values of D_cf and D_cg are larger than the others. This is probably because the nose tip is not located as robustly as the eye corners.
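  • For illustration, the average error ε defined above can be computed as sketched below, given manually selected feature point coordinates in the normalized 200×200 real and virtual frames; the point labels follow FIG. 25 and the function name is hypothetical.

    import numpy as np

    def average_feature_error(real_points, virtual_points, pairs):
        # real_points/virtual_points map a label ('a'..'g') to an (x, y)
        # coordinate; pairs lists the six compared pairs, e.g.
        # [('a','f'), ('b','f'), ('c','f'), ('c','g'), ('d','g'), ('e','g')].
        differences = []
        for i, j in pairs:
            r = np.linalg.norm(np.subtract(real_points[i], real_points[j]))
            v = np.linalg.norm(np.subtract(virtual_points[i], virtual_points[j]))
            differences.append(abs(r - v))
        return sum(differences) / len(differences)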
  • A preliminary subjective study was also performed. In general, the quality of the videos was assessed as adequate to support the variety of intended applications. The two halves of all the videos are well synchronized and color balanced. The quality of the audio is good and it has been synchronized well with the lip movements. Some observed problems were distortion in the eyes and teeth and in some cases a cross-eyed appearance. The face appears slightly bulged compared with the real videos, which is probably due to the combined radial distortions of the camera and projector lenses.
  • Synchronization of the two videos is important in the application of the invention. Since two views of a face with lip movements are merged together, any small change in synchronization has a high impact on the alignment of the lips. This synchronization was evaluated based on sensitive movements such as eyeball movements and blinking eyelids. Similarly, mouth movements were examined in the virtual videos. FIGS. 26 and 27 show some of these effects.
  • Analysis indicates that a real-time mobile system is feasible. The total computation time consists of (1) transferring the images into buffers, (2) warping by interpolating each of the grid blocks, and (3) linearly smoothing each output image. The average time is about 60 ms per frame using a 746 MHz computer. Less than 30 ms would be considered to be real-time: this can be achieved with a current computer with clock rate of 2.6 GHz. Some implementations can require more power to mosaic training data into the video to account for features occluded from the cameras.
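  • Assuming, roughly, that the per-frame cost scales inversely with processor clock rate, 60 ms × (746 MHz / 2600 MHz) ≈ 17 ms per frame, which is comfortably below the 30 ms real-time threshold.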
  • It can be concluded that the algorithm being used can be made to work in real-time. The working prototype has been tested on a diverse set of seven individuals. From comparisons of the virtual videos with real videos, it is expected that important facial expressions will be represented adequately and not distorted by more than 2%. Thus, the HMD system implementing the image processing software of the present invention can support the intended telecommunication applications.
  • It is envisioned that calibration using a projected grid can be used with the algorithms described above. 3D texture-mapped face models can also be created by calibrating the cameras and projector in the WCS. 3D models present the opportunity for greater compression of the signal and for arbitrary frontal viewpoints, which are desired for virtual face-to-face collaboration. Although technically feasible, structured light projection is an obtrusive step in the process and may be cumbersome in the field. Thus, a generic mesh model of the face can also be employed.
  • There is a problem due to occlusion in the blending of the two side images. Some facial surface points that should be displayed in the frontal image are not visible in the side images. For example, the two cameras cannot see the back of the mouth. It is envisioned that training data may be taken from a user and patched into the synthetic video, either for that user or for another, similar user. During training, the user can generate a basis for all possible future output material. The system can contain methods to index to the right material and blend it with the regular warped output. A related problem is that facial deformations that make significant alterations to the face surface may not be rendered well by the static warp. Examples are tongue thrusts and severe facial distortions. The static warp algorithm achieves good results for moderate facial distortion: It does not crash when severe cases are encountered, but the virtual video can show a discontinuity in important facial features.
  • Other embodiments of the present invention employ a 3D model as described below. The 3D modeling embodiments include one or more of the following: (a) a calibration method that does not depend upon structured light, (b) an output format that is a dynamic 3D model rather than just a 2D video, and (c) a real-time tracking method that identifies salient face points in the two side videos and updates both the 3D structure and the texture of the 3D model accordingly.
  • The 3D face model can be represented by a closed mesh of n points (x_i, y_i, z_i), i = 1, . . . , n, and a texture map. This model can be rendered rapidly by standard graphics software and displayed by standard graphics cards. The mesh point 3D coordinates are available for a generic face. Scaling and deformation transformations can be used to instantiate this model for an individual wearing the Face Capture Head Mounted Display Units (FCHMDs). The model can be viewed/rendered from a general viewpoint within the coverage of the cameras and not just from the central point in front of the face. Triangles of the mesh can be texture-mapped to the sensed images and to other stored face data that may be needed to fill in for unimaged patches.
  • The 3D face model can be instantiated to fit a specific individual by one or more of the following: (1) choosing special points by hand on a digital frontal and profile photo; (2) choosing special points from the two side video frames of a neutral expression taken from the FCHMD, and enabling the wearer to make adjustments while viewing the resulting rendered 3D model.
  • In some embodiments, standard rendering of the face model requires one or more of the following: (1) the set of triangles modeling the 3D geometry; (2) the two side images from the FCHMD; (3) a mapping of all vertices of each 3D triangle into the 2D coordinate space of one of the side images; (4) a viewpoint from which to view the 3D model; and (5) a lighting model that determines how the 3D model is illuminated.
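  • A minimal sketch of a container for these items is given below, assuming numpy arrays for images and vertices; the class name FaceMeshModel and the field layout are illustrative only and do not prescribe the data format used by the FCHMD software.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple
    import numpy as np

    @dataclass
    class FaceMeshModel:
        vertices: np.ndarray                        # (n, 3) mesh points (x, y, z)
        triangles: List[Tuple[int, int, int]]       # vertex indices of each 3D triangle
        side_images: List[np.ndarray]               # the two side images from the FCHMD
        # texture mapping: vertex index -> (side image index, (u, v) coordinates)
        uv_map: Dict[int, Tuple[int, Tuple[float, float]]] = field(default_factory=dict)
        viewpoint: Tuple[float, float, float] = (0.0, 0.0, 1.0)   # virtual camera position
        lighting: Dict[str, float] = field(default_factory=dict)  # simple lighting parameters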
  • FIG. 28 illustrates the identification of some feature points in a side image and a set of triangles formed using the feature points as vertices. These triangles serve as bounding polygons for regions to be texture mapped to corresponding polygonally bounded regions of a generic mesh model. On a frame by frame basis, the generic mesh model used is selected to maximize similarity between the feature points automatically recognized in the side view image and feature points of the mesh model as if viewed from the side. In some embodiments, scaling and deformation transformations already obtained for causing the generic mesh model to fit a particular user are next used to modify texture mapping of the generic mesh model to the side view images. Then, the resulting 3D model of the user's face can be rendered from a selected virtual point of view to result in an output image. Accordingly, input video streams of side view images can be used in realtime to produce a video stream of output images from a virtual point of view.
  • It is envisioned that users communicating with one another may each wear a FCHMD, and that the FCHMD can operate in a variety of ways. For example, side views of a first user's face can be transmitted to the second user's FCHMD, where they can be warped and blended to produce the 3D model, which is then rendered from a selected perspective to produce the output image. Also, the first user's FCHMD can warp and blend the side views to produce the 3D model, and transmit the 3D model to the second user's FCHMD where it can be rendered from a selected perspective to produce the output image. Further, the first user's FCHMD can warp and blend the side views, render the resulting 3D model from a selected perspective to produce the output image, and transmit the output image to the second user's FCHMD. Yet further, an image processing module external to the FCHMDs can perform some or all of the steps necessary to produce the output image from the side views. Further still, this external image processing module can be remotely located on a communications network, rather than physically located at a location of one or more of the users. Accordingly, a FCHMD may be adapted to transmit to a remote location and/or receive from a remote location at least one of the following: (1) side view images; (2) user-specific scaling and deformation transformations; (3) position of a user's face in a common coordinate system of a collaborative, virtual environment; (4) a 3D model of a user's face; (5) a selection of a virtual point of view from which to render a user's face; and (6) an output image. Supplemental image data obtained from a particular user or from training users can also be transmitted or received, and can even be integrated into the generic mesh models ahead of time.
  • It should be readily understood that the FCHMD does not have to transmit or receive one or more of each of the types of data listed above. For example, it is possible that an FCHMD may only transmit and receive output images. It is also possible that an FCHMD may transmit and receive only two data types, including output images together with position of a user's face in a common coordinate system of a collaborative, virtual environment. It is further possible that an FCHMD will transmit and receive only side view images. It is still further possible that an FCHMD will transmit and receive only two data types, including side view images, together with position of a user's face in a common coordinate system of a collaborative, virtual environment. It is yet further possible that an FCHMD will transmit and receive only 3D models of users' faces. It is still yet further possible that an FCHMD will transmit and receive only two data types, including 3D models of users' faces, together with position of a user's face in a common coordinate system of a collaborative, virtual environment. In the cases where 3D models or side view images are transmitted and received, it may be the case that user-specific scaling and deformation transformations are transmitted and received at some point, perhaps during an initialization of collaboration. It is additionally possible that one FCHMD can do most or all of the work for both FCHMDs, and receive side view images and face position data for a first user while transmitting output images or a 3D model for a second user. Accordingly, all of these embodiments and others that will be readily apparent to those skilled in the art are described above.
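  • Purely as an illustration of the combinations enumerated above, the kinds of data an FCHMD might exchange can be tagged as sketched below; the enumeration is hypothetical and does not define an actual transmission protocol.

    from enum import Enum, auto

    class FchmdPayload(Enum):
        # Kinds of data a FCHMD may transmit to or receive from a remote location.
        SIDE_VIEW_IMAGES = auto()
        SCALING_AND_DEFORMATION_TRANSFORMATIONS = auto()
        FACE_POSITION_IN_SHARED_COORDINATES = auto()
        FACE_MODEL_3D = auto()
        VIRTUAL_POINT_OF_VIEW = auto()
        OUTPUT_IMAGE = auto()
        SUPPLEMENTAL_IMAGE_DATA = auto()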
  • During operation, the FCHMD optics/electronics of some embodiments can sense in real time the real expressive face of the wearer from the two side videos, and the software can create in real time an active 3D face model to be transmitted to remote collaborators.
  • The morphable model is trained for dynamic use on a population of users. A diverse set of training users may wear the FCHMD and follow a script that induces a variety of facial expressions, while frontal video is also recorded. This training set can support salient point tracking and also the substitution of real data for viewpoints that cannot be observed by the side cameras (inside the mouth, for example). Moreover, the training videos can record sequences of articulator movements that can be used during online FCHMD use.
  • Let S be a set of shape vectors composed of the face surface points and a corresponding set T of texture vectors.
    S_j = (x_1, y_1, z_1, . . . , x_n, y_n, z_n)   (1)
    T_j = (r_1, g_1, b_1, . . . , r_n, g_n, b_n)   (2)
    The shape points contain, as a subset, the salient points of the shape mesh. Training the model can be accomplished by hand labeling of the mesh points for a diverse set of faces and multiframe video recording followed by principal components analysis to obtain a minimum spanning dimensionality.
  • Any face S_p, T_p in the population can be represented as S_p = Σ_{j=1..M} a_j S_j and T_p = Σ_{j=1..M} b_j T_j, with Σ_{j=1..M} a_j = 1 and Σ_{j=1..M} b_j = 1. The parameters a_j, b_j represent the face p in terms of the training faces, the new illumination conditions, and possibly slight variation in the camera view.
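  • The linear combination can be sketched as follows, assuming the training shapes and textures are stacked as rows of two M×3n matrices; normalizing the weights enforces the unit-sum constraints, and the function name combine_face is illustrative.

    import numpy as np

    def combine_face(shape_basis, texture_basis, a, b):
        # shape_basis, texture_basis: (M, 3n) arrays whose rows are S_j and T_j.
        # a, b: weight vectors of length M; each is renormalized to sum to one.
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        a = a / a.sum()
        b = b / b.sum()
        shape_p = a @ np.asarray(shape_basis, dtype=float)
        texture_p = b @ np.asarray(texture_basis, dtype=float)
        return shape_p, texture_p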
  • Tracking of salient feature points can be accomplished to dynamically change the transformation tables and achieve a dynamic model. The parameters of the model a_j, b_j can be dynamically fit by optimizing the similarity between a model rendered using these parameters, I_rendered, and the observed images I_observed, for example by minimizing
    E(a_j, b_j) = Σ_{x, y} | I_observed[x, y] − I_rendered[x, y] |²   (3)
    Fitting via hill-climbing is one designated optimization procedure in some embodiments so that small dynamic updates can be made to the model parameters for the next observed side video frames.
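  • A coordinate-wise hill-climbing update can be sketched as below, assuming an error callback that renders the model with the current weights and compares it with the observed side images as in equation 3; the step size, iteration limit, and function names are illustrative.

    import numpy as np

    def hill_climb_fit(a, b, error_fn, step=0.01, max_iterations=50):
        # Greedy hill climbing: perturb each weight by +/- step and keep the
        # change whenever the rendering error decreases; stop when no single
        # perturbation improves the fit or the iteration limit is reached.
        a = np.array(a, dtype=float)
        b = np.array(b, dtype=float)
        best = error_fn(a, b)
        for _ in range(max_iterations):
            improved = False
            for weights in (a, b):
                for j in range(len(weights)):
                    for delta in (step, -step):
                        weights[j] += delta
                        error = error_fn(a, b)
                        if error < best:
                            best, improved = error, True
                        else:
                            weights[j] -= delta   # revert the trial move
            if not improved:
                break
        return a, b, best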
  • The FCHMD can be calibrated by finding the optimal fit between a parameterized model and the video data currently observed on the FCHMD. Once this fit is known, locations of the salient mesh points (X_k, Y_k, Z_k) are known and thus a texture map is defined between the 3D mesh and the 2D images for that instant of time (current expression). Since iterative hill-climbing is used for the fitting procedure, it is expected that either some intelligent guess or some hand selection will be needed to initialize the fitting. A fully automatic procedure can be initialized from an average wearer's face determined from the training data. The control software for the FCHMD can have a backup procedure so that the HMD wearer can initialize the fitting by viewing the video images and manually selecting some salient face points.
  • The description of the invention is merely exemplary in nature and, thus, variations that do not depart from the gist of the invention are intended to be within the scope of the invention. Such variations are not to be regarded as a departure from the spirit and scope of the invention.

Claims (36)

1. An image processing method, comprising:
receiving at least two side view images of a face of a user;
warping and blending the side view images into an output image of the face of the user as if viewed from a virtual point of view; and
producing a virtual video in real time of output images from a video feed of side view images.
2. The method of claim 1, further comprising:
accessing a three-dimensional closed mesh model of points corresponding to salient facial feature points; and
warping and blending the side view images by texture mapping the side view images to the three-dimensional closed mesh model based on mappings of vertices of polygons of the mesh model into two-dimensional coordinate spaces of side view images.
3. The method of claim 2, further comprising instantiating a mesh model for an individual user by obtaining scaling and deformation transformations.
4. The method of claim 3, further comprising obtaining scaling and deformation transformations by choosing special points by hand on a digital frontal and profile photo.
5. The method of claim 3, further comprising obtaining scaling and deformation transformations by choosing special points from the two side images of a neutral facial expression of the user captured by imaging components of a head mounted display worn by the user, and enabling the user wearing the head mounted display to make adjustments that apply various scaling and deformation transformations while viewing a resulting output image rendered by the head mounted display.
6. The method of claim 2, further comprising:
receiving a selection of a virtual point of view from which to view a three-dimensional model of the face of the user; and
rendering the three-dimensional model from the virtual point of view based on the selection.
7. The method of claim 6, further comprising applying a lighting model that determines how the three-dimensional model appears to be illuminated.
8. The method of claim 2, further comprising dynamically fitting parameters of the mesh model by optimizing similarity between a side view image and a three-dimensional model rendered, using the parameters, from a virtual point of view corresponding to an actual point of view of the side view image.
9. The method of claim 8, further comprising fitting the parameters via hill-climbing so that incremental dynamic updates can be made to the model parameters for sequentially observed side video frames.
10. The method of claim 2, further comprising training a morphable model for dynamic use on a population of users, including capturing side views and a frontal view of a diverse set of training speakers.
11. The method of claim 10, further comprising:
hand labeling mesh points for a diverse set of faces and multiframe video recording; and
performing principal components analysis to obtain a minimum spanning dimensionality.
12. The method of claim 2, further comprising texture mapping triangles of the mesh model to stored face data as needed to fill in un-imaged patches.
13. The method of claim 1, further comprising:
accessing transformation tables for the side view images, wherein the transformation tables define rules for interpolating regions of the side view images into side portions of the output image;
warping the side view images based on the transformation tables, thereby producing the side portions of the output image; and
blending the side portions of the output image, thereby producing the output image.
14. The method of claim 13, further comprising creating the transformation tables by projecting a grid pattern onto a human face at least as if from the virtual point of view and mapping polygons of left and right calibration face images to corresponding polygons of the grid pattern.
15. The method of claim 13, wherein warping the side view images includes reconstructing coordinates in side portions of the output image by accessing corresponding locations in the transformation tables and retrieving pixels in the side view images using interpolation.
16. The method of claim 1, wherein receiving at least two side view images includes receiving side view images captured via at least two imaging components of a head mounted display worn by the user, said imaging components attached to said head mounted display and thereby obtaining fixed positions and orientations relative to the face of the user and adapted to receive at least two side views of the face of the user.
17. The method of claim 1, further comprising linearly smoothing the output image in order to smooth intensity across a vertical midline of the face.
18. An apparatus, comprising:
a head mounted display unit worn by a first user, the display unit rendering to the first user an output image of a face of a second user virtually interacting with the first user in a collaborative, virtual environment, wherein the output image has been formed, based on offset view images of the face of the second user, such that the face of the second user appears as if viewed from a virtual point of view; and
an input port receiving at least one of the following: (a) offset view images of the face of the second user; (b) user-specific scaling and deformation transformations specific to the second user; (c) position of the face of the second user in a common coordinate system of the collaborative, virtual environment; (d) a three-dimensional model of the face of the second user; (e) a selection of a virtual point of view from which to render the three-dimensional model of the face of the second user; and (f) an output image of the face of the second user.
19. The apparatus of claim 18, further comprising:
an array of at least two imaging components having fixed positions and orientations relative to a face of the first user and adapted to receive at least two offset views of the face of the first user; and
an output port transmitting at least one of the following: (a) offset view images of the face of the first user; (b) user-specific scaling and deformation transformations specific to the first user; (c) position of the face of the first user in the common coordinate system of the collaborative, virtual environment within which the first user and the second user virtually interact; (d) a three-dimensional model of the face of the first user; (e) a selection of a virtual point of view from which to render the three-dimensional model of the face of the first user; and (f) an output image of the face of the first user.
20. The apparatus of claim 19, further comprising an image processing module accessing a three-dimensional closed mesh model of points corresponding to salient facial feature points of offset view images of the face of the first user, and combining the offset view images of the face of the first user by texture mapping the offset view images of the face of the first user to the three-dimensional closed mesh model based on mappings of vertices of polygons of the mesh model into two-dimensional coordinate spaces of the offset view images of the face of the first user, thereby forming the three-dimensional model of the face of the first user.
21. The apparatus of claim 20, wherein said image processing module is further adapted to select a virtual point of view from which to view the three-dimensional model of the face of the first user based on positions of faces of the first user and the second user in a common coordinate system of the collaborative environment, and to render the three-dimensional model of the face of the first user from the virtual point of view, thereby forming the output image of the face of the first user.
22. The apparatus of claim 21, wherein said image processing module is adapted to linearly smooth the output image in order to smooth intensity across a vertical midline of the face of the first user.
23. The apparatus of claim 21, wherein said image processing module is adapted to apply a lighting model that determines how the three-dimensional model of the face of the first user appears to be illuminated.
24. The apparatus of claim 18, further comprising an image processing module adapted to select a virtual point of view from which to view the three-dimensional model of the face of the second user based on positions of faces of the first user and the second user in a common coordinate system of the collaborative environment, and to render the three-dimensional model of the face of the second user from the virtual point of view, thereby forming the output image of the face of the second user.
25. The apparatus of claim 24, wherein said image processing module is further adapted to access a three-dimensional closed mesh model of points corresponding to salient facial feature points of offset view images of the face of the second user, and to combine the offset view images of the face of the second user by texture mapping the offset view images of the face of the second user to the three-dimensional closed mesh model based on mappings of vertices of polygons of the mesh model into two-dimensional coordinate spaces of the offset view images of the face of the second user, thereby forming the three-dimensional model of the face of the second user.
26. The apparatus of claim 24, wherein said image processing module is further adapted to apply a lighting model that determines how the three-dimensional model appears to be illuminated.
27. The apparatus of claim 18, further comprising an image processing module adapted to linearly smooth the output image in order to smooth intensity across a vertical midline of the face.
28. An apparatus, comprising:
an array of at least two imaging components having fixed positions and orientations relative to a face of a first user and adapted to receive at least two offset views of the face of the first user; and
an output port transmitting at least one of the following: (a) offset view images of the face of the first user; (b) user-specific scaling and deformation transformations specific to the first user; (c) position of the face of the first user in a common coordinate system of a collaborative, virtual environment within which the first user and a second user virtually interact; (d) a three-dimensional model of the face of the first user; (e) a selection of a virtual point of view from which to render the three-dimensional model of the face of the first user; and (f) an output image of the face of the first user, wherein the output image of the face of the first user has been formed by combining offset view images of the face of the first user into an output image of the face of the first user as if viewed from a virtual point of view.
29. The apparatus of claim 28, further comprising an image processing module accessing a three-dimensional closed mesh model of points corresponding to salient facial feature points of offset view images of the face of the first user, and combining the offset view images of the face of the first user by texture mapping the offset view images of the face of the first user to the three-dimensional closed mesh model based on mappings of vertices of polygons of the mesh model into two-dimensional coordinate spaces of the offset view images of the face of the first user, thereby forming the three-dimensional model of the face of the first user.
30. The apparatus of claim 29, wherein said image processing module is further adapted to select a virtual point of view from which to view the three-dimensional model of the face of the first user based on positions of faces of the first user and the second user in a common coordinate system of the collaborative environment, and to render the three-dimensional model of the face of the first user from the virtual point of view, thereby forming the output image of the face of the first user.
31. The apparatus of claim 30, wherein said image processing module is adapted to linearly smooth the output image in order to smooth intensity across a vertical midline of the face of the first user.
32. The apparatus of claim 29, wherein said image processing module is adapted to apply a lighting model that determines how the three-dimensional model of the face of the first user appears to be illuminated.
33. Computer software, comprising:
first instructions receiving at least two offset view images of a contoured structure;
second instructions forming, from the offset view images, an output image of the contoured structure as if viewed from a virtual point of view.
34. The computer software of claim 33, wherein said second instructions are adapted to recognize feature points of the contoured structure in the offset view images, to access a three-dimensional closed mesh model of feature points similar to the recognized feature points, and to texture map the offset view images to the three-dimensional closed mesh model based on mappings of vertices of polygons of the mesh model into two-dimensional coordinate spaces of the offset view images, thereby forming a three-dimensional model of the contoured structure.
35. The computer software of claim 33, wherein said second instructions are further adapted to select a virtual point of view from which to view the three-dimensional model, and to render the three-dimensional model from the virtual point of view, thereby forming the output image.
36. Computer software, comprising:
a first set of instructions receiving at least two offset view images of a contoured structure;
a second set of instructions recognizing feature points of the contoured structure in the offset view images, accessing a three-dimensional closed mesh model of feature points similar to the recognized feature points, and texture mapping the offset view images to the three-dimensional closed mesh model based on mappings of vertices of polygons of the mesh model into two-dimensional coordinate spaces of the offset view images, thereby forming a three-dimensional model of the contoured structure.
US10/914,621 2000-12-22 2004-08-09 Mobile face capture and image processing system and method Abandoned US20050083248A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/914,621 US20050083248A1 (en) 2000-12-22 2004-08-09 Mobile face capture and image processing system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/748,761 US6774869B2 (en) 2000-12-22 2000-12-22 Teleportal face-to-face system
US10/914,621 US20050083248A1 (en) 2000-12-22 2004-08-09 Mobile face capture and image processing system and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/748,761 Continuation-In-Part US6774869B2 (en) 2000-12-22 2000-12-22 Teleportal face-to-face system

Publications (1)

Publication Number Publication Date
US20050083248A1 true US20050083248A1 (en) 2005-04-21

Family

ID=25010806

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/748,761 Expired - Fee Related US6774869B2 (en) 2000-12-22 2000-12-22 Teleportal face-to-face system
US10/914,621 Abandoned US20050083248A1 (en) 2000-12-22 2004-08-09 Mobile face capture and image processing system and method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/748,761 Expired - Fee Related US6774869B2 (en) 2000-12-22 2000-12-22 Teleportal face-to-face system

Country Status (3)

Country Link
US (2) US6774869B2 (en)
AU (1) AU2002232809A1 (en)
WO (1) WO2002052330A2 (en)

Cited By (98)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040119662A1 (en) * 2002-12-19 2004-06-24 Accenture Global Services Gmbh Arbitrary object tracking in augmented reality applications
US20060026626A1 (en) * 2004-07-30 2006-02-02 Malamud Mark A Cue-aware privacy filter for participants in persistent communications
US20070229396A1 (en) * 2006-03-30 2007-10-04 Rajasingham Arjuna Indraeswara Virtual navigation system for virtual and real spaces
US20080002859A1 (en) * 2006-06-29 2008-01-03 Himax Display, Inc. Image inspecting device and method for a head-mounted display
US20080089611A1 (en) * 2006-10-17 2008-04-17 Mcfadyen Doug Calibration Technique For Heads Up Display System
US20090052767A1 (en) * 2006-08-23 2009-02-26 Abhir Bhalerao Modelling
US20090137860A1 (en) * 2005-11-10 2009-05-28 Olivier Lordereau Biomedical Device for Treating by Virtual Immersion
US20090172756A1 (en) * 2007-12-31 2009-07-02 Motorola, Inc. Lighting analysis and recommender system for video telephony
US20100014770A1 (en) * 2008-07-17 2010-01-21 Anthony Huggett Method and apparatus providing perspective correction and/or image dewarping
US20110187563A1 (en) * 2005-06-02 2011-08-04 The Boeing Company Methods for remote display of an enhanced image
US20110227924A1 (en) * 2010-03-17 2011-09-22 Casio Computer Co., Ltd. 3d modeling apparatus, 3d modeling method, and computer readable medium
US20110234759A1 (en) * 2010-03-29 2011-09-29 Casio Computer Co., Ltd. 3d modeling apparatus, 3d modeling method, and computer readable medium
US20120105473A1 (en) * 2010-10-27 2012-05-03 Avi Bar-Zeev Low-latency fusing of virtual and real content
US20120263449A1 (en) * 2011-02-03 2012-10-18 Jason R. Bond Head-mounted face image capturing devices and systems
US20120293935A1 (en) * 2010-11-17 2012-11-22 Thomas Mallory Sherlock Wearable Computer System
US8494254B2 (en) * 2010-08-31 2013-07-23 Adobe Systems Incorporated Methods and apparatus for image rectification for stereo display
WO2013173728A1 (en) * 2012-05-17 2013-11-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display
US20150062125A1 (en) * 2013-09-03 2015-03-05 3Ditize Sl Generating a 3d interactive immersive experience from a 2d static image
US20150130702A1 (en) * 2013-11-08 2015-05-14 Sony Corporation Information processing apparatus, control method, and program
US9122053B2 (en) 2010-10-15 2015-09-01 Microsoft Technology Licensing, Llc Realistic occlusion for a head mounted augmented reality display
US9160906B2 (en) 2011-02-03 2015-10-13 Jason R. Bond Head-mounted face image capturing devices and systems
US9258525B2 (en) * 2014-02-25 2016-02-09 Alcatel Lucent System and method for reducing latency in video delivery
US9344612B2 (en) 2006-02-15 2016-05-17 Kenneth Ira Ritchey Non-interference field-of-view support apparatus for a panoramic facial sensor
US20160260260A1 (en) * 2014-10-24 2016-09-08 Usens, Inc. System and method for immersive and interactive multimedia generation
US9497380B1 (en) 2013-02-15 2016-11-15 Red.Com, Inc. Dense field imaging
US20160349839A1 (en) * 2015-05-27 2016-12-01 Sony Interactive Entertainment Inc. Display apparatus of front-of-the-eye mounted type
US20160360970A1 (en) * 2015-06-14 2016-12-15 Facense Ltd. Wearable device for taking thermal and visual measurements from fixed relative positions
WO2016198318A1 (en) * 2015-06-08 2016-12-15 Bitmanagement Software GmbH Method and device for generating data for a two or three-dimensional representation of at least one part of an object and for generating the two or three dimensional representation of at least the part of the object
US9536351B1 (en) * 2014-02-03 2017-01-03 Bentley Systems, Incorporated Third person view augmented reality
US20170091535A1 (en) * 2015-09-29 2017-03-30 BinaryVR, Inc. Head-mounted display with facial expression detecting capability
US20170116711A1 (en) * 2015-10-27 2017-04-27 Boe Technology Group Co., Ltd. Image reconstruction method and device, glasses device and display system
US20170128686A1 (en) * 2015-11-10 2017-05-11 Koninklijke Philips N.V. Determining information about a patients face
WO2017127832A1 (en) * 2016-01-20 2017-07-27 Gerard Dirk Smits Holographic video capture and telepresence system
US9753126B2 (en) 2015-12-18 2017-09-05 Gerard Dirk Smits Real time position sensing of objects
US9779750B2 (en) 2004-07-30 2017-10-03 Invention Science Fund I, Llc Cue-aware privacy filter for participants in persistent communications
US20170315359A1 (en) * 2014-10-21 2017-11-02 Koninklijke Philips N.V. Augmented reality patient interface device fitting appratus
US9810913B2 (en) 2014-03-28 2017-11-07 Gerard Dirk Smits Smart head-mounted projection system
US9852351B2 (en) 2014-12-16 2017-12-26 3Ditize Sl 3D rotational presentation generated from 2D static images
US9898866B2 (en) 2013-03-13 2018-02-20 The University Of North Carolina At Chapel Hill Low latency stabilization for head-worn displays
US9946076B2 (en) 2010-10-04 2018-04-17 Gerard Dirk Smits System and method for 3-D projection and enhancements for interactivity
US9968264B2 (en) 2015-06-14 2018-05-15 Facense Ltd. Detecting physiological responses based on thermal asymmetry of the face
US20180180894A1 (en) * 2016-12-23 2018-06-28 Realwear, Incorporated Interchangeable optics for a head-mounted display
US20180180891A1 (en) * 2016-12-23 2018-06-28 Samsung Electronics Co., Ltd. Electronic device and method of controlling the same
US10043282B2 (en) 2015-04-13 2018-08-07 Gerard Dirk Smits Machine vision for ego-motion, segmenting, and classifying objects
US10045699B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Determining a state of a user based on thermal measurements of the forehead
US10045726B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Selecting a stressor based on thermal measurements of the face
US10045737B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Clip-on device with inward-facing cameras
US10067230B2 (en) 2016-10-31 2018-09-04 Gerard Dirk Smits Fast scanning LIDAR with dynamic voxel probing
US10064559B2 (en) 2015-06-14 2018-09-04 Facense Ltd. Identification of the dominant nostril using thermal measurements
US10076250B2 (en) 2015-06-14 2018-09-18 Facense Ltd. Detecting physiological responses based on multispectral data from head-mounted cameras
US10076270B2 (en) 2015-06-14 2018-09-18 Facense Ltd. Detecting physiological responses while accounting for touching the face
US10080861B2 (en) 2015-06-14 2018-09-25 Facense Ltd. Breathing biofeedback eyeglasses
US10085685B2 (en) 2015-06-14 2018-10-02 Facense Ltd. Selecting triggers of an allergic reaction based on nasal temperatures
US10092232B2 (en) 2015-06-14 2018-10-09 Facense Ltd. User state selection based on the shape of the exhale stream
US10113913B2 (en) 2015-10-03 2018-10-30 Facense Ltd. Systems for collecting thermal measurements of the face
US20180316908A1 (en) * 2017-04-27 2018-11-01 Google Llc Synthetic stereoscopic content capture
US10130299B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Neurofeedback eyeglasses
US10130261B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Detecting physiological responses while taking into account consumption of confounding substances
US10130308B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Calculating respiratory parameters from thermal measurements
US10136856B2 (en) 2016-06-27 2018-11-27 Facense Ltd. Wearable respiration measurements system
US10136852B2 (en) 2015-06-14 2018-11-27 Facense Ltd. Detecting an allergic reaction from nasal temperatures
US10151636B2 (en) 2015-06-14 2018-12-11 Facense Ltd. Eyeglasses having inward-facing and outward-facing thermal cameras
US10154810B2 (en) 2015-06-14 2018-12-18 Facense Ltd. Security system that detects atypical behavior
US10159411B2 (en) 2015-06-14 2018-12-25 Facense Ltd. Detecting irregular physiological responses during exposure to sensitive data
US10216981B2 (en) 2015-06-14 2019-02-26 Facense Ltd. Eyeglasses that measure facial skin color changes
US10223834B2 (en) 2014-10-24 2019-03-05 Usens, Inc. System and method for immersive and interactive multimedia generation
US20190094981A1 (en) * 2014-06-14 2019-03-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US10261183B2 (en) 2016-12-27 2019-04-16 Gerard Dirk Smits Systems and methods for machine perception
US20190139240A1 (en) * 2016-12-06 2019-05-09 Activision Publishing, Inc. Methods and Systems to Modify a Two Dimensional Facial Image to Increase Dimensional Depth and Generate a Facial Image That Appears Three Dimensional
US10299717B2 (en) 2015-06-14 2019-05-28 Facense Ltd. Detecting stress based on thermal measurements of the face
US10324187B2 (en) 2014-08-11 2019-06-18 Gerard Dirk Smits Three-dimensional triangulation and time-of-flight based tracking systems and methods
US10331021B2 (en) 2007-10-10 2019-06-25 Gerard Dirk Smits Method, apparatus, and manufacture for a tracking camera or detector with fast asynchronous triggering
US10365493B2 (en) 2016-12-23 2019-07-30 Realwear, Incorporated Modular components for a head-mounted display
US10379220B1 (en) 2018-01-29 2019-08-13 Gerard Dirk Smits Hyper-resolved, high bandwidth scanned LIDAR systems
US10393312B2 (en) 2016-12-23 2019-08-27 Realwear, Inc. Articulating components for a head-mounted display
US10410372B1 (en) 2018-06-14 2019-09-10 The University Of North Carolina At Chapel Hill Methods, systems, and computer-readable media for utilizing radial distortion to estimate a pose configuration
US10473921B2 (en) 2017-05-10 2019-11-12 Gerard Dirk Smits Scan mirror systems and methods
US10523852B2 (en) 2015-06-14 2019-12-31 Facense Ltd. Wearable inward-facing camera utilizing the Scheimpflug principle
US10524667B2 (en) 2015-06-14 2020-01-07 Facense Ltd. Respiration-based estimation of an aerobic activity parameter
US10524696B2 (en) 2015-06-14 2020-01-07 Facense Ltd. Virtual coaching based on respiration signals
US10582190B2 (en) 2015-11-23 2020-03-03 Walmart Apollo, Llc Virtual training system
US10591605B2 (en) 2017-10-19 2020-03-17 Gerard Dirk Smits Methods and systems for navigating a vehicle including a novel fiducial marker system
US10620910B2 (en) 2016-12-23 2020-04-14 Realwear, Inc. Hands-free navigation of touch-based operating systems
CN111344184A (en) * 2017-11-24 2020-06-26 麦克赛尔株式会社 Head-up display device
US20200290458A1 (en) * 2017-12-06 2020-09-17 Jvckenwood Corporation Projection control device, head-up display device, projection control method, and non-transitory storage medium
US10871570B2 (en) * 2017-09-14 2020-12-22 Everysight Ltd. System and method for position and orientation tracking
US10885702B2 (en) * 2018-08-10 2021-01-05 Htc Corporation Facial expression modeling method, apparatus and non-transitory computer readable medium of the same
US10936872B2 (en) 2016-12-23 2021-03-02 Realwear, Inc. Hands-free contextually aware object interaction for wearable display
US11049476B2 (en) 2014-11-04 2021-06-29 The University Of North Carolina At Chapel Hill Minimal-latency tracking and display for matching real and virtual worlds in head-worn displays
US11055356B2 (en) 2006-02-15 2021-07-06 Kurtis John Ritchey Mobile user borne brain activity data and surrounding environment data correlation system
US11099716B2 (en) 2016-12-23 2021-08-24 Realwear, Inc. Context based content navigation for wearable display
US11189084B2 (en) 2016-07-29 2021-11-30 Activision Publishing, Inc. Systems and methods for executing improved iterative optimization processes to personify blendshape rigs
US11228709B2 (en) 2018-02-06 2022-01-18 Hewlett-Packard Development Company, L.P. Constructing images of users' faces by stitching non-overlapping images
US11238302B2 (en) 2018-08-01 2022-02-01 Samsung Electronics Co., Ltd. Method and an apparatus for performing object illumination manipulation on an image
US11402927B2 (en) 2004-05-28 2022-08-02 UltimatePointer, L.L.C. Pointing device
US11507216B2 (en) 2016-12-23 2022-11-22 Realwear, Inc. Customizing user interfaces of binary applications
US11829059B2 (en) 2020-02-27 2023-11-28 Gerard Dirk Smits High resolution scanning of remote objects with fast sweeping laser beams and signal recovery by twitchy pixel array
US11841997B2 (en) 2005-07-13 2023-12-12 UltimatePointer, L.L.C. Apparatus for controlling contents of a computer-generated image using 3D measurements

Families Citing this family (138)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030210336A1 (en) * 2001-05-09 2003-11-13 Sal Khan Secure access camera and method for camera control
US7088234B2 (en) * 2001-11-27 2006-08-08 Matsushita Electric Industrial Co., Ltd. Wearing information notifying unit
JP2004222254A (en) * 2002-12-27 2004-08-05 Canon Inc Image processing system, method, and program
US7333113B2 (en) 2003-03-13 2008-02-19 Sony Corporation Mobile motion capture cameras
US7218320B2 (en) * 2003-03-13 2007-05-15 Sony Corporation System and method for capturing facial and body motion
US8106911B2 (en) * 2003-03-13 2012-01-31 Sony Corporation Mobile motion capture cameras
US7573480B2 (en) * 2003-05-01 2009-08-11 Sony Corporation System and method for capturing facial and body motion
US20040189701A1 (en) * 2003-03-25 2004-09-30 Badt Sig Harold System and method for facilitating interaction between an individual present at a physical location and a telecommuter
GB2400667B (en) * 2003-04-15 2006-05-31 Hewlett Packard Development Co Attention detection
US7358972B2 (en) * 2003-05-01 2008-04-15 Sony Corporation System and method for capturing facial and body motion
US7361171B2 (en) 2003-05-20 2008-04-22 Raydiance, Inc. Man-portable optical ablation system
US7046151B2 (en) * 2003-07-14 2006-05-16 Michael J. Dundon Interactive body suit and interactive limb covers
US8173929B1 (en) 2003-08-11 2012-05-08 Raydiance, Inc. Methods and systems for trimming circuits
US9022037B2 (en) 2003-08-11 2015-05-05 Raydiance, Inc. Laser ablation method and apparatus having a feedback loop and control unit
US8921733B2 (en) 2003-08-11 2014-12-30 Raydiance, Inc. Methods and systems for trimming circuits
US20050065502A1 (en) * 2003-08-11 2005-03-24 Richard Stoltz Enabling or blocking the emission of an ablation beam based on color of target
US9971398B2 (en) * 2003-12-12 2018-05-15 Beyond Imagination Inc. Virtual encounters
US9948885B2 (en) 2003-12-12 2018-04-17 Kurzweil Technologies, Inc. Virtual encounters
US9841809B2 (en) * 2003-12-12 2017-12-12 Kurzweil Technologies, Inc. Virtual encounters
US20050130108A1 (en) * 2003-12-12 2005-06-16 Kurzweil Raymond C. Virtual encounters
US20050219695A1 (en) * 2004-04-05 2005-10-06 Vesely Michael A Horizontal perspective display
US20070182812A1 (en) * 2004-05-19 2007-08-09 Ritchey Kurtis J Panoramic image-based virtual reality/telepresence audio-visual system and method
WO2005119376A2 (en) 2004-06-01 2005-12-15 Vesely Michael A Horizontal perspective display
US7688283B2 (en) * 2004-08-02 2010-03-30 Searete Llc Multi-angle mirror
US7671823B2 (en) * 2004-08-02 2010-03-02 Searete Llc Multi-angle mirror
US7259731B2 (en) * 2004-09-27 2007-08-21 Searete Llc Medical overlay mirror
US7663571B2 (en) * 2004-08-02 2010-02-16 Searete Llc Time-lapsing mirror
US7679580B2 (en) * 2004-08-02 2010-03-16 Searete Llc Time-lapsing mirror
US9155373B2 (en) 2004-08-02 2015-10-13 Invention Science Fund I, Llc Medical overlay mirror
US7283106B2 (en) * 2004-08-02 2007-10-16 Searete, Llc Time-lapsing mirror
US7657125B2 (en) * 2004-08-02 2010-02-02 Searete Llc Time-lapsing data methods and systems
US7133003B2 (en) * 2004-08-05 2006-11-07 Searete Llc Cosmetic enhancement mirror
US20080001851A1 (en) * 2006-06-28 2008-01-03 Searete Llc Cosmetic enhancement mirror
US7714804B2 (en) * 2004-09-15 2010-05-11 Searete Llc Multi-angle mirror
US7683858B2 (en) * 2004-08-02 2010-03-23 Searete Llc Cosmetic enhancement mirror
US7876289B2 (en) 2004-08-02 2011-01-25 The Invention Science Fund I, Llc Medical overlay mirror
US7636072B2 (en) 2004-08-02 2009-12-22 Searete Llc Cosmetic enhancement mirror
US7705800B2 (en) * 2004-09-15 2010-04-27 Searete Llc Multi-angle mirror
US7679581B2 (en) 2004-08-02 2010-03-16 Searete Llc Medical overlay mirror
US20060126927A1 (en) * 2004-11-30 2006-06-15 Vesely Michael A Horizontal perspective representation
US20060250390A1 (en) * 2005-04-04 2006-11-09 Vesely Michael A Horizontal perspective display
WO2006121956A1 (en) * 2005-05-09 2006-11-16 Infinite Z, Inc. Biofeedback eyewear system
US8717423B2 (en) * 2005-05-09 2014-05-06 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
US7907167B2 (en) 2005-05-09 2011-03-15 Infinite Z, Inc. Three dimensional horizontal perspective workstation
WO2006128648A2 (en) * 2005-05-30 2006-12-07 Andreas Durner Electronic day and night vision spectacles
US7875132B2 (en) * 2005-05-31 2011-01-25 United Technologies Corporation High temperature aluminum alloys
US20070030211A1 (en) * 2005-06-02 2007-02-08 Honeywell International Inc. Wearable marine heads-up display system
US8135050B1 (en) 2005-07-19 2012-03-13 Raydiance, Inc. Automated polarization correction
US20070043466A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear
US20070040905A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear
BRPI0506340A (en) * 2005-12-12 2007-10-02 Univ Fed Sao Paulo Unifesp augmented reality visualization system with pervasive computing
US7522344B1 (en) * 2005-12-14 2009-04-21 University Of Central Florida Research Foundation, Inc. Projection-based head-mounted display with eye-tracking capabilities
TWI341692B (en) * 2005-12-26 2011-05-01 Ind Tech Res Inst Online interactive multimedia system and the trasnsmission method thereof
US8189971B1 (en) 2006-01-23 2012-05-29 Raydiance, Inc. Dispersion compensation in a chirped pulse amplification system
US9130344B2 (en) 2006-01-23 2015-09-08 Raydiance, Inc. Automated laser tuning
US7444049B1 (en) 2006-01-23 2008-10-28 Raydiance, Inc. Pulse stretcher and compressor including a multi-pass Bragg grating
US8232687B2 (en) 2006-04-26 2012-07-31 Raydiance, Inc. Intelligent laser interlock system
US7822347B1 (en) 2006-03-28 2010-10-26 Raydiance, Inc. Active tuning of temporal dispersion in an ultrashort pulse laser system
US20080007617A1 (en) * 2006-05-11 2008-01-10 Ritchey Kurtis J Volumetric panoramic sensor systems
US20080076489A1 (en) * 2006-08-07 2008-03-27 Plantronics, Inc. Physically and electrically-separated, data-synchronized data sinks for wireless systems
US20080062250A1 (en) * 2006-09-13 2008-03-13 X10 Wireless Technologies, Inc. Panoramic worldview network camera with instant reply and snapshot of past events
US20080186255A1 (en) * 2006-12-07 2008-08-07 Cohen Philip R Systems and methods for data annotation, recordation, and communication
US20100315524A1 (en) * 2007-09-04 2010-12-16 Sony Corporation Integrated motion capture
US7903326B2 (en) 2007-11-30 2011-03-08 Radiance, Inc. Static phase mask for high-order spectral phase control in a hybrid chirped pulse amplifier system
US8125704B2 (en) 2008-08-18 2012-02-28 Raydiance, Inc. Systems and methods for controlling a pulsed laser by combining laser signals
US8498538B2 (en) 2008-11-14 2013-07-30 Raydiance, Inc. Compact monolithic dispersion compensator
US20100228476A1 (en) * 2009-03-04 2010-09-09 Microsoft Corporation Path projection to facilitate engagement
US8494215B2 (en) * 2009-03-05 2013-07-23 Microsoft Corporation Augmenting a field of view in connection with vision-tracking
US8943420B2 (en) * 2009-06-18 2015-01-27 Microsoft Corporation Augmenting a field of view
US8717360B2 (en) 2010-01-29 2014-05-06 Zspace, Inc. Presenting a view within a three dimensional scene
WO2012037465A1 (en) 2010-09-16 2012-03-22 Raydiance, Inc. Laser based processing of layered materials
US8554037B2 (en) 2010-09-30 2013-10-08 Raydiance, Inc. Hybrid waveguide device in powerful laser systems
WO2012075155A2 (en) 2010-12-02 2012-06-07 Ultradent Products, Inc. System and method of viewing and tracking stereoscopic video images
US9595127B2 (en) * 2010-12-22 2017-03-14 Zspace, Inc. Three-dimensional collaboration
US8738754B2 (en) 2011-04-07 2014-05-27 International Business Machines Corporation Systems and methods for managing computing systems utilizing augmented reality
US8913086B2 (en) 2011-04-07 2014-12-16 International Business Machines Corporation Systems and methods for managing errors utilizing augmented reality
US8810598B2 (en) 2011-04-08 2014-08-19 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
US8786529B1 (en) 2011-05-18 2014-07-22 Zspace, Inc. Liquid crystal variable drive voltage
US10239160B2 (en) 2011-09-21 2019-03-26 Coherent, Inc. Systems and processes that singulate materials
US9349217B1 (en) * 2011-09-23 2016-05-24 Amazon Technologies, Inc. Integrated community of augmented reality environments
US8970960B2 (en) 2011-12-22 2015-03-03 Mattel, Inc. Augmented reality head gear
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US11493998B2 (en) 2012-01-17 2022-11-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US9501152B2 (en) 2013-01-15 2016-11-22 Leap Motion, Inc. Free-space user interface and control using virtual constructs
KR101922589B1 (en) * 2012-02-15 2018-11-27 삼성전자주식회사 Display apparatus and eye tracking method thereof
TWI584222B (en) * 2012-02-17 2017-05-21 鈺立微電子股份有限公司 Stereoscopic image processor, stereoscopic image interaction system, and stereoscopic image displaying method
US20130257686A1 (en) * 2012-03-30 2013-10-03 Elizabeth S. Baron Distributed virtual reality
JP6351579B2 (en) 2012-06-01 2018-07-04 ウルトラデント プロダクツ インク. Stereoscopic video imaging
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US20140176661A1 (en) * 2012-12-21 2014-06-26 G. Anthony Reina System and method for surgical telementoring and training with virtualized telestration and haptic holograms, including metadata tagging, encapsulation and saving multi-modal streaming medical imagery together with multi-dimensional [4-d] virtual mesh and multi-sensory annotation in standard file formats used for digital imaging and communications in medicine (dicom)
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US9626015B2 (en) 2013-01-08 2017-04-18 Leap Motion, Inc. Power consumption in motion-capture systems with audio and optical signals
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
US9702977B2 (en) 2013-03-15 2017-07-11 Leap Motion, Inc. Determining positional information of an object in space
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US10281987B1 (en) 2013-08-09 2019-05-07 Leap Motion, Inc. Systems and methods of free-space gestural interaction
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US9632572B2 (en) 2013-10-03 2017-04-25 Leap Motion, Inc. Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US9582516B2 (en) 2013-10-17 2017-02-28 Nant Holdings Ip, Llc Wide area augmented reality location-based services
US9996638B1 (en) 2013-10-31 2018-06-12 Leap Motion, Inc. Predictive information for free space gesture control and communication
US9672649B2 (en) 2013-11-04 2017-06-06 At&T Intellectual Property I, Lp System and method for enabling mirror video chat using a wearable display device
JP2017511615A (en) * 2013-11-27 2017-04-20 ウルトラデント プロダクツ インク. Video interaction between physical locations
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US9672416B2 (en) 2014-04-29 2017-06-06 Microsoft Technology Licensing, Llc Facial expression tracking
DE102014011590A1 (en) 2014-08-01 2016-02-04 Audi Ag Method for operating a virtual reality system and virtual reality system
JP6654625B2 (en) * 2014-08-04 2020-02-26 フェイスブック・テクノロジーズ・リミテッド・ライアビリティ・カンパニーFacebook Technologies, Llc Method and system for reconstructing occluded face parts in virtual reality environment
DE202014103729U1 (en) 2014-08-08 2014-09-09 Leap Motion, Inc. Augmented reality with motion detection
JP6574939B2 (en) * 2014-09-16 2019-09-18 ソニー株式会社 Display control device, display control method, display control system, and head-mounted display
GB2532465B (en) * 2014-11-19 2021-08-11 Bae Systems Plc Interactive control station
GB2532464B (en) 2014-11-19 2020-09-02 Bae Systems Plc Apparatus and method for selectively displaying an operational environment
EP3259739A4 (en) * 2015-02-17 2018-08-29 NEXTVR Inc. Methods and apparatus for generating and using reduced resolution images and/or communicating such images to a playback or content distribution device
US10362290B2 (en) 2015-02-17 2019-07-23 Nextvr Inc. Methods and apparatus for processing content based on viewing information and/or communicating content
EP3262488B1 (en) 2015-02-25 2021-04-07 BAE Systems PLC Apparatus and method for effecting a control action in respect of system functions
US9726885B2 (en) 2015-03-31 2017-08-08 Timothy A. Cummings System for virtual display and method of use
US9910275B2 (en) 2015-05-18 2018-03-06 Samsung Electronics Co., Ltd. Image processing for head mounted display devices
US10810797B2 (en) * 2015-05-22 2020-10-20 Otoy, Inc Augmenting AR/VR displays with image projections
US9520002B1 (en) 2015-06-24 2016-12-13 Microsoft Technology Licensing, Llc Virtual place-located anchor
US10701318B2 (en) 2015-08-14 2020-06-30 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US10304247B2 (en) 2015-12-09 2019-05-28 Microsoft Technology Licensing, Llc Third party holographic portal
ITUB20159701A1 (en) * 2015-12-23 2017-06-23 Massimiliano Bianchetti ASSISTIVE EQUIPMENT FOR THE DESIGN OF ENVIRONMENTS AND ENVIRONMENTAL DESIGN METHOD
JP6855493B2 (en) * 2016-02-23 2021-04-07 ジェラルド ディルク スミッツ Holographic video capture and telepresence system
WO2017172528A1 (en) 2016-04-01 2017-10-05 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
WO2017177019A1 (en) * 2016-04-08 2017-10-12 Pcms Holdings, Inc. System and method for supporting synchronous and asynchronous augmented reality functionalities
US10019831B2 (en) 2016-10-20 2018-07-10 Zspace, Inc. Integrating real world conditions into virtual imagery
US10223821B2 (en) 2017-04-25 2019-03-05 Beyond Imagination Inc. Multi-user and multi-surrogate virtual encounters
CN108700912B (en) * 2017-06-30 2022-04-01 广东虚拟现实科技有限公司 Method and system for operating a device through augmented reality
EP3649538A1 (en) * 2017-07-04 2020-05-13 ATLAS ELEKTRONIK GmbH Assembly and method for communicating by means of two visual output devices
US11009640B2 (en) * 2017-08-11 2021-05-18 8259402 Canada Inc. Transmissive aerial image display
US11368670B2 (en) * 2017-10-26 2022-06-21 Yeda Research And Development Co. Ltd. Augmented reality display system and method
CN111936910A (en) 2018-03-13 2020-11-13 罗纳德·温斯顿 Virtual reality system and method
CN108848366B (en) * 2018-07-05 2020-12-18 盎锐(上海)信息科技有限公司 Information acquisition device and method based on 3D camera
JP6745301B2 (en) * 2018-07-25 2020-08-26 株式会社バーチャルキャスト Content distribution system, content distribution method, computer program
US11002971B1 (en) * 2018-08-24 2021-05-11 Apple Inc. Display device with mechanically adjustable optical combiner
EP3825816A1 (en) 2019-11-22 2021-05-26 Koninklijke Philips N.V. Rendering a virtual object on a virtual user interface
US11272164B1 (en) * 2020-01-17 2022-03-08 Amazon Technologies, Inc. Data synthesis using three-dimensional modeling

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2612351A1 (en) * 1987-03-10 1988-09-16 Witzig Patrick System for simultaneous transmission of radiobroadcast orders and televised images
US4859030A (en) * 1987-07-29 1989-08-22 Honeywell, Inc. Helmet mounted display with improved brightness
DE3737972A1 (en) * 1987-11-07 1989-05-24 Messerschmitt Boelkow Blohm HELMET LOCATION DEVICE
WO1991004508A2 (en) * 1989-09-14 1991-04-04 General Electric Company Helmet mounted display
US6405072B1 (en) * 1991-01-28 2002-06-11 Sherwood Services Ag Apparatus and method for determining a location of an anatomical target with reference to a medical apparatus
FR2677463B1 (en) * 1991-06-04 1994-06-17 Thomson Csf COLLIMATE VISUAL WITH LARGE HORIZONTAL AND VERTICAL FIELDS, PARTICULARLY FOR SIMULATORS.
CA2174510A1 (en) * 1993-10-22 1995-04-27 John C. C. Fan Head-mounted display system

Patent Citations (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3034940A (en) * 1956-11-30 1962-05-15 British Cotton Ind Res Assoc Metallized fabrics
US3089917A (en) * 1961-08-21 1963-05-14 Anthony J Fernicola Means and method for stereoscopic television viewing
US4727365A (en) * 1983-08-30 1988-02-23 General Electric Company Advanced video object generator
US4727365B1 (en) * 1983-08-30 1999-10-05 Lockheed Corp Advanced video object generator
US5130794A (en) * 1990-03-29 1992-07-14 Ritchey Kurtis J Panoramic display system
US5572229A (en) * 1991-04-22 1996-11-05 Evans & Sutherland Computer Corp. Head-mounted projection display system featuring beam splitter and method of making same
US5418584A (en) * 1992-12-31 1995-05-23 Honeywell Inc. Retroreflective array virtual image projection screen
US5495576A (en) * 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
US5684935A (en) * 1993-02-25 1997-11-04 Hughes Electronics Rendering and warping image generation system and method
US5708529A (en) * 1993-03-02 1998-01-13 Olympus Optical Co., Ltd. Head-mounted image display apparatus
US5644324A (en) * 1993-03-03 1997-07-01 Maguire, Jr.; Francis J. Apparatus and method for presenting successive images
US5712732A (en) * 1993-03-03 1998-01-27 Street; Graham Stewart Brandon Autostereoscopic image display adjustable for observer location and distance
US5708449A (en) * 1993-10-07 1998-01-13 Virtual Vision, Inc. Binocular head mounted display system
US5510832A (en) * 1993-12-01 1996-04-23 Medi-Vision Technologies, Inc. Synthesized stereoscopic imaging system and method
US5642221A (en) * 1994-03-09 1997-06-24 Optics 1, Inc. Head mounted display system
US5673059A (en) * 1994-03-23 1997-09-30 Kopin Corporation Head-mounted display apparatus with color sequential illumination
US5751259A (en) * 1994-04-13 1998-05-12 Agency Of Industrial Science & Technology, Ministry Of International Trade & Industry Wide view angle display apparatus
US5812100A (en) * 1994-06-01 1998-09-22 Olympus Optical Co., Ltd. Image display apparatus
US5818462A (en) * 1994-07-01 1998-10-06 Digital Equipment Corporation Method and apparatus for producing complex animation from simpler animated sequences
US5739955A (en) * 1994-08-10 1998-04-14 Virtuality (Ip) Limited Head mounted display optics
US5808589A (en) * 1994-08-24 1998-09-15 Fergason; James L. Optical system for a head mounted display combining high and low resolution images
US5621572A (en) * 1994-08-24 1997-04-15 Fergason; James L. Optical system for a head mounted display using a retro-reflector and method of displaying an image
US6147805A (en) * 1994-08-24 2000-11-14 Fergason; James L. Head mounted display and viewing system using a remote retro-reflector and method of displaying and viewing an image
US5606458A (en) * 1994-08-24 1997-02-25 Fergason; James L. Head mounted display and viewing system using a remote retro-reflector and method of displaying and viewing an image
US5777795A (en) * 1994-10-17 1998-07-07 University Of North Carolina Optical path extender for compact imaging display systems
US5581271A (en) * 1994-12-05 1996-12-03 Hughes Aircraft Company Head mounted visual display
US5822127A (en) * 1995-05-15 1998-10-13 Hughes Electronics Low-cost light-weight head-mounted virtual-image projection display with low moments of inertia and low center of gravity
US5844573A (en) * 1995-06-07 1998-12-01 Massachusetts Institute Of Technology Image compression by pointwise prototype correspondence using shape and texture information
US5774129A (en) * 1995-06-07 1998-06-30 Massachusetts Institute Of Technology Image analysis and synthesis networks using shape and texture information
US5777794A (en) * 1995-09-26 1998-07-07 Olympus Optical Co., Ltd. Image display apparatus
US5742263A (en) * 1995-12-18 1998-04-21 Telxon Corporation Head tracking system for a head mounted display system
US5883606A (en) * 1995-12-18 1999-03-16 Bell Communications Research, Inc. Flat virtual displays for virtual reality
US5853240A (en) * 1995-12-22 1998-12-29 Sharp Kabushiki Kaisha Projector using a small-size optical system
US5790311A (en) * 1996-01-19 1998-08-04 Olympus Optical Co., Ltd. Ocular optics system having at least four reflections occurring between curved surfaces
US5790312A (en) * 1996-03-25 1998-08-04 Olympus Optical Co., Ltd. Optical system
US5886823A (en) * 1996-05-15 1999-03-23 Sony Corporation Optical visual apparatus
US6084979A (en) * 1996-06-20 2000-07-04 Carnegie Mellon University Method for creating virtual reality
US6064749A (en) * 1996-08-02 2000-05-16 Hirota; Gentaro Hybrid tracking for augmented reality using both camera motion detection and landmark tracking
US6229904B1 (en) * 1996-08-30 2001-05-08 American Alpha, Inc Automatic morphing photography booth
US6317127B1 (en) * 1996-10-16 2001-11-13 Hughes Electronics Corporation Multi-user real-time augmented reality system and method
US5886735A (en) * 1997-01-14 1999-03-23 Bullister; Edward T Video telephone headset
US6121953A (en) * 1997-02-06 2000-09-19 Modern Cartoons, Ltd. Virtual reality system for sensing facial movements
US6400364B1 (en) * 1997-05-29 2002-06-04 Canon Kabushiki Kaisha Image processing system
US6124825A (en) * 1997-07-21 2000-09-26 Trimble Navigation Limited GPS based augmented reality collision avoidance system
US6522312B2 (en) * 1997-09-01 2003-02-18 Canon Kabushiki Kaisha Apparatus for presenting mixed reality shared among operators
US6381346B1 (en) * 1997-12-01 2002-04-30 Wheeling Jesuit University Three-dimensional face identification system
US6278479B1 (en) * 1998-02-24 2001-08-21 Wilson, Hewitt & Associates, Inc. Dual reality system
US6433760B1 (en) * 1999-01-14 2002-08-13 University Of Central Florida Head mounted display with eyetracking capability
US6556196B1 (en) * 1999-03-19 2003-04-29 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. Method and apparatus for the processing of images
US6408257B1 (en) * 1999-08-31 2002-06-18 Xerox Corporation Augmented-reality display method and system
US20020075201A1 (en) * 2000-10-05 2002-06-20 Frank Sauer Augmented reality visualization device
US7127081B1 (en) * 2000-10-12 2006-10-24 Momentum Bilgisayar, Yazilim, Danismanlik, Ticaret, A.S. Method for tracking motion of a face
US6731434B1 (en) * 2001-05-23 2004-05-04 University Of Central Florida Compact lens assembly for the teleportal augmented reality system
US6919892B1 (en) * 2002-08-14 2005-07-19 Avaworks, Incorporated Photo realistic talking head creation system and method
US7106358B2 (en) * 2002-12-30 2006-09-12 Motorola, Inc. Method, system and apparatus for telepresence communications

Cited By (178)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040119662A1 (en) * 2002-12-19 2004-06-24 Accenture Global Services Gmbh Arbitrary object tracking in augmented reality applications
US7050078B2 (en) * 2002-12-19 2006-05-23 Accenture Global Services Gmbh Arbitrary object tracking augmented reality applications
US11416084B2 (en) * 2004-05-28 2022-08-16 UltimatePointer, L.L.C. Multi-sensor device with an accelerometer for enabling user interaction through sound or image
US11402927B2 (en) 2004-05-28 2022-08-02 UltimatePointer, L.L.C. Pointing device
US11755127B2 (en) 2004-05-28 2023-09-12 UltimatePointer, L.L.C. Multi-sensor device with an accelerometer for enabling user interaction through sound or image
US11409376B2 (en) 2004-05-28 2022-08-09 UltimatePointer, L.L.C. Multi-sensor device with an accelerometer for enabling user interaction through sound or image
US9779750B2 (en) 2004-07-30 2017-10-03 Invention Science Fund I, Llc Cue-aware privacy filter for participants in persistent communications
US9704502B2 (en) * 2004-07-30 2017-07-11 Invention Science Fund I, Llc Cue-aware privacy filter for participants in persistent communications
US20060026626A1 (en) * 2004-07-30 2006-02-02 Malamud Mark A Cue-aware privacy filter for participants in persistent communications
US8874284B2 (en) * 2005-06-02 2014-10-28 The Boeing Company Methods for remote display of an enhanced image
US20110187563A1 (en) * 2005-06-02 2011-08-04 The Boeing Company Methods for remote display of an enhanced image
US11841997B2 (en) 2005-07-13 2023-12-12 UltimatePointer, L.L.C. Apparatus for controlling contents of a computer-generated image using 3D measurements
US20090137860A1 (en) * 2005-11-10 2009-05-28 Olivier Lordereau Biomedical Device for Treating by Virtual Immersion
US7946974B2 (en) 2005-11-10 2011-05-24 Olivier Lordereau Biomedical device for treating by virtual immersion
US11055356B2 (en) 2006-02-15 2021-07-06 Kurtis John Ritchey Mobile user borne brain activity data and surrounding environment data correlation system
US9344612B2 (en) 2006-02-15 2016-05-17 Kenneth Ira Ritchey Non-interference field-of-view support apparatus for a panoramic facial sensor
US10120440B2 (en) * 2006-03-30 2018-11-06 Arjuna Indraeswaran Rajasingham Virtual navigation system for virtual and real spaces
US20070229396A1 (en) * 2006-03-30 2007-10-04 Rajasingham Arjuna Indraeswara Virtual navigation system for virtual and real spaces
US20150286278A1 (en) * 2006-03-30 2015-10-08 Arjuna Indraeswaran Rajasingham Virtual navigation system for virtual and real spaces
US9063633B2 (en) * 2006-03-30 2015-06-23 Arjuna Indraeswaran Rajasingham Virtual navigation system for virtual and real spaces
US20080002859A1 (en) * 2006-06-29 2008-01-03 Himax Display, Inc. Image inspecting device and method for a head-mounted display
US8170325B2 (en) * 2006-06-29 2012-05-01 Himax Display, Inc. Image inspecting device and method for a head-mounted display
GB2441228B (en) * 2006-08-23 2011-11-02 Univ Warwick Modelling
US20090052767A1 (en) * 2006-08-23 2009-02-26 Abhir Bhalerao Modelling
US20080089611A1 (en) * 2006-10-17 2008-04-17 Mcfadyen Doug Calibration Technique For Heads Up Display System
US7835592B2 (en) 2006-10-17 2010-11-16 Seiko Epson Corporation Calibration technique for heads up display system
US10331021B2 (en) 2007-10-10 2019-06-25 Gerard Dirk Smits Method, apparatus, and manufacture for a tracking camera or detector with fast asynchronous triggering
US10962867B2 (en) 2007-10-10 2021-03-30 Gerard Dirk Smits Method, apparatus, and manufacture for a tracking camera or detector with fast asynchronous triggering
US20090172756A1 (en) * 2007-12-31 2009-07-02 Motorola, Inc. Lighting analysis and recommender system for video telephony
US8411998B2 (en) 2008-07-17 2013-04-02 Aptina Imaging Corporation Method and apparatus providing perspective correction and/or image dewarping
US20100014770A1 (en) * 2008-07-17 2010-01-21 Anthony Huggett Method and apparatus providing perspective correction and/or image dewarping
US20110227924A1 (en) * 2010-03-17 2011-09-22 Casio Computer Co., Ltd. 3d modeling apparatus, 3d modeling method, and computer readable medium
US8482599B2 (en) * 2010-03-29 2013-07-09 Casio Computer Co., Ltd. 3D modeling apparatus, 3D modeling method, and computer readable medium
US20110234759A1 (en) * 2010-03-29 2011-09-29 Casio Computer Co., Ltd. 3d modeling apparatus, 3d modeling method, and computer readable medium
US8494254B2 (en) * 2010-08-31 2013-07-23 Adobe Systems Incorporated Methods and apparatus for image rectification for stereo display
US9946076B2 (en) 2010-10-04 2018-04-17 Gerard Dirk Smits System and method for 3-D projection and enhancements for interactivity
US9122053B2 (en) 2010-10-15 2015-09-01 Microsoft Technology Licensing, Llc Realistic occlusion for a head mounted augmented reality display
US20120105473A1 (en) * 2010-10-27 2012-05-03 Avi Bar-Zeev Low-latency fusing of virtual and real content
US9348141B2 (en) * 2010-10-27 2016-05-24 Microsoft Technology Licensing, Llc Low-latency fusing of virtual and real content
US9710973B2 (en) 2010-10-27 2017-07-18 Microsoft Technology Licensing, Llc Low-latency fusing of virtual and real content
CN102591449A (en) * 2010-10-27 2012-07-18 微软公司 Low-latency fusing of virtual and real content
US8896992B2 (en) * 2010-11-17 2014-11-25 Solatido Inc. Wearable computer system
US20120293935A1 (en) * 2010-11-17 2012-11-22 Thomas Mallory Sherlock Wearable Computer System
US20150077913A1 (en) * 2010-11-17 2015-03-19 Thomas Mallory Sherlock Wearable computer system
US9710015B2 (en) * 2010-11-17 2017-07-18 Solatido, Inc. Wearable computer system
US9160906B2 (en) 2011-02-03 2015-10-13 Jason R. Bond Head-mounted face image capturing devices and systems
US8573866B2 (en) * 2011-02-03 2013-11-05 Jason R. Bond Head-mounted face image capturing devices and systems
US20120263449A1 (en) * 2011-02-03 2012-10-18 Jason R. Bond Head-mounted face image capturing devices and systems
WO2013173728A1 (en) * 2012-05-17 2013-11-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display
US10365711B2 (en) 2012-05-17 2019-07-30 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display
US10939088B2 (en) 2013-02-15 2021-03-02 Red.Com, Llc Computational imaging device
US9497380B1 (en) 2013-02-15 2016-11-15 Red.Com, Inc. Dense field imaging
US9769365B1 (en) 2013-02-15 2017-09-19 Red.Com, Inc. Dense field imaging
US10277885B1 (en) 2013-02-15 2019-04-30 Red.Com, Llc Dense field imaging
US10547828B2 (en) 2013-02-15 2020-01-28 Red.Com, Llc Dense field imaging
US9898866B2 (en) 2013-03-13 2018-02-20 The University Of North Carolina At Chapel Hill Low latency stabilization for head-worn displays
US20150062125A1 (en) * 2013-09-03 2015-03-05 3Ditize Sl Generating a 3d interactive immersive experience from a 2d static image
US9990760B2 (en) * 2013-09-03 2018-06-05 3Ditize Sl Generating a 3D interactive immersive experience from a 2D static image
US20150130702A1 (en) * 2013-11-08 2015-05-14 Sony Corporation Information processing apparatus, control method, and program
US10254842B2 (en) * 2013-11-08 2019-04-09 Sony Corporation Controlling a device based on facial expressions of a user
US9536351B1 (en) * 2014-02-03 2017-01-03 Bentley Systems, Incorporated Third person view augmented reality
US9258525B2 (en) * 2014-02-25 2016-02-09 Alcatel Lucent System and method for reducing latency in video delivery
US9810913B2 (en) 2014-03-28 2017-11-07 Gerard Dirk Smits Smart head-mounted projection system
US10061137B2 (en) 2014-03-28 2018-08-28 Gerard Dirk Smits Smart head-mounted projection system
US20190094981A1 (en) * 2014-06-14 2019-03-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US10852838B2 (en) * 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US11507193B2 (en) * 2014-06-14 2022-11-22 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US11137497B2 (en) 2014-08-11 2021-10-05 Gerard Dirk Smits Three-dimensional triangulation and time-of-flight based tracking systems and methods
US10324187B2 (en) 2014-08-11 2019-06-18 Gerard Dirk Smits Three-dimensional triangulation and time-of-flight based tracking systems and methods
US10459232B2 (en) * 2014-10-21 2019-10-29 Koninklijke Philips N.V. Augmented reality patient interface device fitting apparatus
US20170315359A1 (en) * 2014-10-21 2017-11-02 Koninklijke Philips N.V. Augmented reality patient interface device fitting apparatus
US10223834B2 (en) 2014-10-24 2019-03-05 Usens, Inc. System and method for immersive and interactive multimedia generation
US10320437B2 (en) 2014-10-24 2019-06-11 Usens, Inc. System and method for immersive and interactive multimedia generation
US10256859B2 (en) * 2014-10-24 2019-04-09 Usens, Inc. System and method for immersive and interactive multimedia generation
US20160260260A1 (en) * 2014-10-24 2016-09-08 Usens, Inc. System and method for immersive and interactive multimedia generation
US11049476B2 (en) 2014-11-04 2021-06-29 The University Of North Carolina At Chapel Hill Minimal-latency tracking and display for matching real and virtual worlds in head-worn displays
US9852351B2 (en) 2014-12-16 2017-12-26 3Ditize Sl 3D rotational presentation generated from 2D static images
US10157469B2 (en) 2015-04-13 2018-12-18 Gerard Dirk Smits Machine vision for ego-motion, segmenting, and classifying objects
US10325376B2 (en) 2015-04-13 2019-06-18 Gerard Dirk Smits Machine vision for ego-motion, segmenting, and classifying objects
US10043282B2 (en) 2015-04-13 2018-08-07 Gerard Dirk Smits Machine vision for ego-motion, segmenting, and classifying objects
US10275021B2 (en) * 2015-05-27 2019-04-30 Sony Interactive Entertainment Inc. Display apparatus of front-of-the-eye mounted type
US20160349839A1 (en) * 2015-05-27 2016-12-01 Sony Interactive Entertainment Inc. Display apparatus of front-of-the-eye mounted type
US10546181B2 (en) 2015-06-08 2020-01-28 Bitmanagement Software GmbH Method and device for generating data for two-dimensional or three-dimensional depiction of at least part of an object and for generating the two-dimensional or three-dimensional depiction of the at least one part of the object
WO2016198318A1 (en) * 2015-06-08 2016-12-15 Bitmanagement Software GmbH Method and device for generating data for a two or three-dimensional representation of at least one part of an object and for generating the two or three dimensional representation of at least the part of the object
EP3789962A1 (en) * 2015-06-08 2021-03-10 Bitmanagement Software GmbH Method and device for generating data for two dimensional or three dimensional representation of at least part of an object and for generating the two or three-dimensional representation of at least part of the object
US10216981B2 (en) 2015-06-14 2019-02-26 Facense Ltd. Eyeglasses that measure facial skin color changes
US10130299B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Neurofeedback eyeglasses
US10136852B2 (en) 2015-06-14 2018-11-27 Facense Ltd. Detecting an allergic reaction from nasal temperatures
US10151636B2 (en) 2015-06-14 2018-12-11 Facense Ltd. Eyeglasses having inward-facing and outward-facing thermal cameras
US10154810B2 (en) 2015-06-14 2018-12-18 Facense Ltd. Security system that detects atypical behavior
US10064559B2 (en) 2015-06-14 2018-09-04 Facense Ltd. Identification of the dominant nostril using thermal measurements
US10159411B2 (en) 2015-06-14 2018-12-25 Facense Ltd. Detecting irregular physiological responses during exposure to sensitive data
US10165949B2 (en) * 2015-06-14 2019-01-01 Facense Ltd. Estimating posture using head-mounted cameras
US10045726B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Selecting a stressor based on thermal measurements of the face
US10085685B2 (en) 2015-06-14 2018-10-02 Facense Ltd. Selecting triggers of an allergic reaction based on nasal temperatures
US10045699B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Determining a state of a user based on thermal measurements of the forehead
US10130261B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Detecting physiological responses while taking into account consumption of confounding substances
US10080861B2 (en) 2015-06-14 2018-09-25 Facense Ltd. Breathing biofeedback eyeglasses
US10130308B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Calculating respiratory parameters from thermal measurements
US10524696B2 (en) 2015-06-14 2020-01-07 Facense Ltd. Virtual coaching based on respiration signals
US9968264B2 (en) 2015-06-14 2018-05-15 Facense Ltd. Detecting physiological responses based on thermal asymmetry of the face
US10045737B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Clip-on device with inward-facing cameras
US10524667B2 (en) 2015-06-14 2020-01-07 Facense Ltd. Respiration-based estimation of an aerobic activity parameter
US10523852B2 (en) 2015-06-14 2019-12-31 Facense Ltd. Wearable inward-facing camera utilizing the Scheimpflug principle
US10299717B2 (en) 2015-06-14 2019-05-28 Facense Ltd. Detecting stress based on thermal measurements of the face
US10076270B2 (en) 2015-06-14 2018-09-18 Facense Ltd. Detecting physiological responses while accounting for touching the face
US10376153B2 (en) 2015-06-14 2019-08-13 Facense Ltd. Head mounted system to collect facial expressions
US20160360970A1 (en) * 2015-06-14 2016-12-15 Facense Ltd. Wearable device for taking thermal and visual measurements from fixed relative positions
US9867546B2 (en) 2015-06-14 2018-01-16 Facense Ltd. Wearable device for taking symmetric thermal measurements
US10076250B2 (en) 2015-06-14 2018-09-18 Facense Ltd. Detecting physiological responses based on multispectral data from head-mounted cameras
US10092232B2 (en) 2015-06-14 2018-10-09 Facense Ltd. User state selection based on the shape of the exhale stream
US20170091535A1 (en) * 2015-09-29 2017-03-30 BinaryVR, Inc. Head-mounted display with facial expression detecting capability
CN108140105A (en) * 2015-09-29 2018-06-08 比纳里虚拟现实技术有限公司 Head-mounted display with countenance detectability
US10089522B2 (en) * 2015-09-29 2018-10-02 BinaryVR, Inc. Head-mounted display with facial expression detecting capability
US10113913B2 (en) 2015-10-03 2018-10-30 Facense Ltd. Systems for collecting thermal measurements of the face
US20170116711A1 (en) * 2015-10-27 2017-04-27 Boe Technology Group Co., Ltd. Image reconstruction method and device, glasses device and display system
US10398867B2 (en) * 2015-11-10 2019-09-03 Koninklijke Philips N.V. Determining information about a patient's face
US20170128686A1 (en) * 2015-11-10 2017-05-11 Koninklijke Philips N.V. Determining information about a patient's face
US10582190B2 (en) 2015-11-23 2020-03-03 Walmart Apollo, Llc Virtual training system
US10274588B2 (en) 2015-12-18 2019-04-30 Gerard Dirk Smits Real time position sensing of objects
US10502815B2 (en) 2015-12-18 2019-12-10 Gerard Dirk Smits Real time position sensing of objects
US9753126B2 (en) 2015-12-18 2017-09-05 Gerard Dirk Smits Real time position sensing of objects
US11714170B2 (en) 2015-12-18 2023-08-01 Samsung Semiconductor, Inc. Real time position sensing of objects
US9813673B2 (en) 2016-01-20 2017-11-07 Gerard Dirk Smits Holographic video capture and telepresence system
US10477149B2 (en) * 2016-01-20 2019-11-12 Gerard Dirk Smits Holographic video capture and telepresence system
US10084990B2 (en) 2016-01-20 2018-09-25 Gerard Dirk Smits Holographic video capture and telepresence system
US20190028674A1 (en) * 2016-01-20 2019-01-24 Gerard Dirk Smits Holographic video capture and telepresence system
WO2017127832A1 (en) * 2016-01-20 2017-07-27 Gerard Dirk Smits Holographic video capture and telepresence system
US10136856B2 (en) 2016-06-27 2018-11-27 Facense Ltd. Wearable respiration measurements system
US11189084B2 (en) 2016-07-29 2021-11-30 Activision Publishing, Inc. Systems and methods for executing improved iterative optimization processes to personify blendshape rigs
US10935659B2 (en) 2016-10-31 2021-03-02 Gerard Dirk Smits Fast scanning lidar with dynamic voxel probing
US10451737B2 (en) 2016-10-31 2019-10-22 Gerard Dirk Smits Fast scanning with dynamic voxel probing
US10067230B2 (en) 2016-10-31 2018-09-04 Gerard Dirk Smits Fast scanning LIDAR with dynamic voxel probing
US10650539B2 (en) * 2016-12-06 2020-05-12 Activision Publishing, Inc. Methods and systems to modify a two dimensional facial image to increase dimensional depth and generate a facial image that appears three dimensional
US10991110B2 (en) 2016-12-06 2021-04-27 Activision Publishing, Inc. Methods and systems to modify a two dimensional facial image to increase dimensional depth and generate a facial image that appears three dimensional
US11423556B2 (en) 2016-12-06 2022-08-23 Activision Publishing, Inc. Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
US20190139240A1 (en) * 2016-12-06 2019-05-09 Activision Publishing, Inc. Methods and Systems to Modify a Two Dimensional Facial Image to Increase Dimensional Depth and Generate a Facial Image That Appears Three Dimensional
US10365493B2 (en) 2016-12-23 2019-07-30 Realwear, Incorporated Modular components for a head-mounted display
US10936872B2 (en) 2016-12-23 2021-03-02 Realwear, Inc. Hands-free contextually aware object interaction for wearable display
US11507216B2 (en) 2016-12-23 2022-11-22 Realwear, Inc. Customizing user interfaces of binary applications
US10437070B2 (en) * 2016-12-23 2019-10-08 Realwear, Inc. Interchangeable optics for a head-mounted display
US11099716B2 (en) 2016-12-23 2021-08-24 Realwear, Inc. Context based content navigation for wearable display
US11409497B2 (en) 2016-12-23 2022-08-09 Realwear, Inc. Hands-free navigation of touch-based operating systems
US11340465B2 (en) 2016-12-23 2022-05-24 Realwear, Inc. Head-mounted display with modular components
US10816800B2 (en) * 2016-12-23 2020-10-27 Samsung Electronics Co., Ltd. Electronic device and method of controlling the same
US10620910B2 (en) 2016-12-23 2020-04-14 Realwear, Inc. Hands-free navigation of touch-based operating systems
US20180180894A1 (en) * 2016-12-23 2018-06-28 Realwear, Incorporated Interchangeable optics for a head-mounted display
US10393312B2 (en) 2016-12-23 2019-08-27 Realwear, Inc. Articulating components for a head-mounted display
US11947752B2 (en) 2016-12-23 2024-04-02 Realwear, Inc. Customizing user interfaces of binary applications
US20180180891A1 (en) * 2016-12-23 2018-06-28 Samsung Electronics Co., Ltd. Electronic device and method of controlling the same
US11327320B2 (en) 2016-12-23 2022-05-10 Samsung Electronics Co., Ltd. Electronic device and method of controlling the same
US11709236B2 (en) 2016-12-27 2023-07-25 Samsung Semiconductor, Inc. Systems and methods for machine perception
US10261183B2 (en) 2016-12-27 2019-04-16 Gerard Dirk Smits Systems and methods for machine perception
US10564284B2 (en) 2016-12-27 2020-02-18 Gerard Dirk Smits Systems and methods for machine perception
US11178385B2 (en) * 2017-04-27 2021-11-16 Google Llc Synthetic stereoscopic content capture
US11765335B2 (en) * 2017-04-27 2023-09-19 Google Llc Synthetic stereoscopic content capture
US10645370B2 (en) * 2017-04-27 2020-05-05 Google Llc Synthetic stereoscopic content capture
US20220030213A1 (en) * 2017-04-27 2022-01-27 Google Llc Synthetic stereoscopic content capture
US20180316908A1 (en) * 2017-04-27 2018-11-01 Google Llc Synthetic stereoscopic content capture
CN113923438A (en) * 2017-04-27 2022-01-11 谷歌有限责任公司 Composite stereoscopic image content capture
CN110546951A (en) * 2017-04-27 2019-12-06 谷歌有限责任公司 Composite stereoscopic image content capture
US11067794B2 (en) 2017-05-10 2021-07-20 Gerard Dirk Smits Scan mirror systems and methods
US10473921B2 (en) 2017-05-10 2019-11-12 Gerard Dirk Smits Scan mirror systems and methods
US10871570B2 (en) * 2017-09-14 2020-12-22 Everysight Ltd. System and method for position and orientation tracking
US10591605B2 (en) 2017-10-19 2020-03-17 Gerard Dirk Smits Methods and systems for navigating a vehicle including a novel fiducial marker system
US10935989B2 (en) 2017-10-19 2021-03-02 Gerard Dirk Smits Methods and systems for navigating a vehicle including a novel fiducial marker system
US11709369B2 (en) * 2017-11-24 2023-07-25 Maxell, Ltd. Head up display apparatus
CN111344184A (en) * 2017-11-24 2020-06-26 麦克赛尔株式会社 Head-up display device
US11597278B2 (en) * 2017-12-06 2023-03-07 Jvckenwood Corporation Projection control device, head-up display device, projection control method, and non-transitory storage medium
US20200290458A1 (en) * 2017-12-06 2020-09-17 Jvckenwood Corporation Projection control device, head-up display device, projection control method, and non-transitory storage medium
US10379220B1 (en) 2018-01-29 2019-08-13 Gerard Dirk Smits Hyper-resolved, high bandwidth scanned LIDAR systems
US10725177B2 (en) 2018-01-29 2020-07-28 Gerard Dirk Smits Hyper-resolved, high bandwidth scanned LIDAR systems
US11727544B2 (en) 2018-02-06 2023-08-15 Hewlett-Packard Development Company, L.P. Constructing images of users' faces by stitching non-overlapping images
US11228709B2 (en) 2018-02-06 2022-01-18 Hewlett-Packard Development Company, L.P. Constructing images of users' faces by stitching non-overlapping images
US10410372B1 (en) 2018-06-14 2019-09-10 The University Of North Carolina At Chapel Hill Methods, systems, and computer-readable media for utilizing radial distortion to estimate a pose configuration
US11238302B2 (en) 2018-08-01 2022-02-01 Samsung Electronics Co., Ltd. Method and an apparatus for performing object illumination manipulation on an image
US10885702B2 (en) * 2018-08-10 2021-01-05 Htc Corporation Facial expression modeling method, apparatus and non-transitory computer readable medium of the same
US11829059B2 (en) 2020-02-27 2023-11-28 Gerard Dirk Smits High resolution scanning of remote objects with fast sweeping laser beams and signal recovery by twitchy pixel array

Also Published As

Publication number Publication date
WO2002052330A3 (en) 2003-02-13
AU2002232809A1 (en) 2002-07-08
US6774869B2 (en) 2004-08-10
US20020080094A1 (en) 2002-06-27
WO2002052330A2 (en) 2002-07-04

Similar Documents

Publication Publication Date Title
US20050083248A1 (en) Mobile face capture and image processing system and method
JP6824279B2 (en) Head-mounted display for virtual reality and mixed reality with inside-out position, user body, and environmental tracking
US11928784B2 (en) Systems and methods for presenting perspective views of augmented reality virtual object
US8218825B2 (en) Capturing and processing facial motion data
US6806898B1 (en) System and method for automatically adjusting gaze and head orientation for video conferencing
US9779512B2 (en) Automatic generation of virtual materials from real-world materials
TW297985B (en)
JP4865093B2 (en) Method and system for animating facial features and method and system for facial expression transformation
JP4932951B2 (en) Facial image processing method and system
US20100110069A1 (en) System for rendering virtual see-through scenes
WO2022076020A1 (en) Few-shot synthesis of talking heads
US6965385B2 (en) Method for simulating and demonstrating the optical effects of glasses on the human face
US20230012909A1 (en) Non-uniform stereo rendering
JP2009506442A (en) Capture and process facial movement data
CN110060349B (en) Method for expanding field angle of augmented reality head-mounted display equipment
Reddy et al. Mobile face capture for virtual face videos
GB2351425A (en) Video conferencing apparatus
GB2351636A (en) Virtual video conferencing apparatus
Reddy A non-obtrusive head-mounted face capture system

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNIVERSITY OF CENTRAL FLORIDA, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BIOCCA, FRANK;ROLLAND, JANNICK P.;STOCKMAN, GEORGE C.;AND OTHERS;REEL/FRAME:020435/0772;SIGNING DATES FROM 20041207 TO 20080130

Owner name: BOARD OF TRUSTEES OPERATING MICHIGAN STATE UNIVERSITY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BIOCCA, FRANK;ROLLAND, JANNICK P.;STOCKMAN, GEORGE C.;AND OTHERS;REEL/FRAME:020435/0772;SIGNING DATES FROM 20041207 TO 20080130

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION