WO2010021972A1 - Surround structured lighting for recovering 3d object shape and appearance - Google Patents

Surround structured lighting for recovering 3d object shape and appearance

Info

Publication number
WO2010021972A1
Authority
WO
WIPO (PCT)
Prior art keywords
mirror
camera
appearance
light
shape
Prior art date
Application number
PCT/US2009/054007
Other languages
French (fr)
Inventor
Douglas Lanman
Daniel Crispell
Gabriel Taubin
Original Assignee
Brown University
Priority date
Filing date
Publication date
Application filed by Brown University
Publication of WO2010021972A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo

Abstract

Disclosed are methods and apparatus for recovering the shape and appearance of an object illuminated by a coded structured light pattern that is observed by a camera, which simultaneously captures multiple views of the object. In one embodiment a Fresnel lens and a data projector are used as the means of illuminating the object with surrounding structured light patterns. The projector projects several patterns and the camera captures images of the object for each pattern. The images are decoded to determine the correspondence between projector scan-lines and camera pixels. Combined with the calibration of the relative position of the various optical elements, including the projector and camera, the ray-plane intersection is used to determine the three-dimensional depth for each part of the surface. An additional image captured under ambient illumination is used to recover the appearance of the surface point seen by each camera pixel.

Description

SURROUND STRUCTURED LIGHTING FOR RECOVERING 3D OBJECT SHAPE AND APPEARANCE
BACKGROUND OF THE INVENTION
This invention generally relates to systems and methods for recovering the shape and appearance of objects from images captured under structured lighting. More particularly, this invention relates to systems and methods for recovering the shape and appearance of objects, where the systems comprise a single camera and a single projector displaying structured light patterns.
Reconstructing the shape and appearance of three-dimensional (3D) objects is also referred to in the prior art as "3D scanning." Many prior art 3D scanning methods are referred to as triangulation-based methods. Coded structured light is a particularly reliable and inexpensive triangulation-based method for reconstructing the shape and appearance of three-dimensional objects from image data. In its simplest form, a coded structured light system comprises a single calibrated projector-camera pair. By illuminating the object surface with a known sequence of coded images, the correspondence between projector pixels and camera pixels can be uniquely identified. One such sequence of coded images is known as Gray codes. Inokuchi et al. originally applied Gray codes to 3D scanning; see S. Inokuchi, K. Sato, and F. Matsuda, "Range imaging system for 3-D object recognition," International Conference on Pattern Recognition, 1984. In this system and method, each pattern is composed of a sequence of black and white stripes oriented along the projector scan-lines. Each projector scan-line corresponds to a projected light plane. By illuminating the object with a temporally-multiplexed sequence of increasingly-fine Gray code patterns, the corresponding projector scan-line can be identified for each camera pixel. Each image pixel defines a camera ray supported by a straight line in 3D. The intersection of each camera ray with its corresponding projected light plane determines the location of a 3D point on the subset of the surface illuminated by the Gray codes. The collection of all these 3D points is the result of the coded structured light method. Alternatively, a polygon mesh is reconstructed by interconnecting these 3D points to form polygon mesh faces.
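By way of illustration (this sketch is not part of the patent disclosure), the standard binary-reflected Gray code construction described above can be generated as follows; the function name, resolution values, and NumPy usage are choices made here for the example:

```python
import numpy as np

def gray_code_patterns(num_scanlines, width):
    """Return one black/white stripe image per Gray code bit, coarse to fine."""
    num_bits = int(np.ceil(np.log2(num_scanlines)))
    rows = np.arange(num_scanlines)
    gray = rows ^ (rows >> 1)  # binary-reflected Gray code of each scan-line
    patterns = []
    for bit in range(num_bits - 1, -1, -1):  # most significant bit first
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        # every pixel of a scan-line shares one value, so the stripes are
        # oriented along the projector scan-lines, as described above
        patterns.append(np.tile(stripe[:, None], (1, width)))
    return patterns

patterns = gray_code_patterns(num_scanlines=768, width=1024)  # e.g. XGA projector
```

Displaying these patterns in sequence and recording which stripes illuminate each camera pixel yields the per-pixel scan-line index used for the ray-plane triangulation.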
The main problem with the coded structured light methods as described in the previous paragraph is that only the portion of the object surface simultaneously illuminated by the Gray codes and visible from the camera can be reconstructed. To reconstruct other parts of the object surface, the projector and camera must be moved so that other parts of the object surface are simultaneously visible and illuminated. In general, this process needs to be repeated several times to cover the whole surface of the object, and then the various 3D scans produced at each relative position of camera and projector with respect to the object must be merged to produce a single integrated 3D scan.
An alternative to moving the projector and camera is to use mirrors to create virtual structured light projectors and virtual camera views.
The idea of using planar mirrors to create multiple virtual structured light projectors was first presented by Epstein et al.; see E. Epstein, M. Granger-Piche, and P. Poulin, "Exploiting mirrors in interactive reconstruction with structured light", Vision, Modeling, and Visualization, 2004. In their system, one or more planar mirrors are illuminated by a projector displaying a modified Gray code sequence which is invariant to mirror reflections. By visually tracking the relative camera, projector, and mirror positions and by interactively selecting a conservative object bounding box, the authors mask the projected patterns to ensure that each surface point is illuminated from a single direction in each image. While eliminating the need for multiplexing several projectors to obtain complete object models, this system still suffers from several limitations. Foremost, it increases the number of required patterns since the directly and indirectly viewed portions of the object surface cannot be illuminated simultaneously.
Several prior art methods use planar mirrors to obtain several virtual views of a 3D scene or object from a single image capture. As discussed by Gluckman and Nayar, the virtual cameras created by planar mirror reflections have several benefits over multiple camera systems, including automatic frame synchronization, color calibration, and identical intrinsic calibration parameters; see J. Gluckman and S. Nayar, "Planar Catadioptric Stereo: Geometry and Calibration", International Conference on Computer Vision and Pattern Recognition, 1999. While Gluckman and Nayar restricted their attention to stereo catadioptric systems in that work, Forbes et al. have explored a configuration comprising two planar mirrors oriented such that an object placed between the two mirrors will produce one real and four virtual viewpoints, resulting from the first and second reflections; see K. Forbes, F. Nicolls, G. de Jager, and A. Voigt, "Shape-from-silhouette with two mirrors and an uncalibrated camera", European Conference on Computer Vision, 2006. These authors obtained a complete 3D surface model by estimating the approximate visual hull defined by the five object silhouettes. A large number of methods to reconstruct the shape of objects from multiple silhouettes exist in the prior art. These methods are referred to as shape-from-silhouette methods. Some shape-from-silhouette methods approximate the visual hull of the object. The main problem with these methods is that they are unable to estimate the 3D position of points located inside concavities of the object surface. On the other hand, methods based on coded structured lighting can recover those points as long as they are visible from the camera and illuminated by the projector.
In conclusion, there is a need for a 3D scanning method which overcomes the limitations of existing coded structured light systems and methods, and is able to rapidly capture a full 360 degree 3D scan of an object surface using only a single camera and a single projector, without any repositioning of cameras and projectors or merging of multiple scans.
BRIEF SUMMARY OF THE INVENTION
Disclosed herein is a three-dimensional (3D) scanning apparatus to recover the shape and appearance of an object, comprising a camera, an illuminator, and a mirror surface. The camera and illuminator are placed in front of the object to be 3D scanned, and the mirror surface is placed behind the object. The camera and illuminator are pointed towards the object and the mirror surface. The illuminator projects light patterns on the object and on the mirror surface. The light patterns are composed of projected light rays. The mirror surface is a generalized cylinder with a mirror axis, and the projected light rays are perpendicular to the mirror axis. The illuminator and mirror surface are placed in such a way that most of the surface of the object is illuminated either directly or indirectly. While part of the surface of the object is illuminated by projected light rays emanating directly from the illuminator, other parts of the surface of the object are illuminated indirectly by reflected light rays resulting from projected light rays bouncing off the mirror surface after one or more mirror reflections. The camera has the object and part of the mirror surface simultaneously on view, resulting in the camera simultaneously observing the object from multiple directions. In a more preferred embodiment the mirror surface is composed of a left planar mirror and a right planar mirror. The left planar mirror and right planar mirror meet at the mirror axis, and are separated by a mirror angle of approximately 72 degrees. The camera captures a single camera view which includes one real image and four reflected images.
Disclosed herein is also a method to recover the shape and appearance of an object. The camera is used to capture camera views of the object under a given illumination pattern displayed by the illuminator. In the presently preferred embodiment, the illumination patterns correspond to a sequence of Gray code patterns. In a first step of the method the Gray coded images are captured using the 3D scanning apparatus disclosed herein. In a second step, for each image captured under a given structured light illumination pattern, each pixel is identified as being either illuminated or in shadow. In a third step, after processing multiple such images, the set of thresholded values for each camera pixel is analyzed to determine which projector scanline illuminated each pixel. Equivalently, since each projector scanline corresponds to one of the planes contained in a plurality of parallel light planes emitted by the illuminator, this third step determines which plane of light illuminates a given camera pixel. In a fourth step the object shape is estimated from the structured light sequence. In the preferred embodiment, ray-plane triangulation is used to reconstruct the three-dimensional coordinate of a point on the object surface for each camera pixel. Furthermore, these points can be analyzed to produce a polygon mesh to represent the object shape.
An optional additional image collected under ambient illumination, with no pattern projected by the illuminator, is used to further estimate the appearance (e.g., color) of the estimated object surface. Furthermore, a plurality of ambient images, collected for various poses of the object and ambient illumination, could be used to obtain a better approximation of the object appearance.
In summary, the set of illumination images and the ambient image provide sufficient information to reconstruct the shape and appearance of a 3D object placed within the system.
It is a first object and advantage of this invention to provide a system and method projecting structured illumination patterns which surround an object.
It is a second object and advantage of this invention to provide an improved system and method to obtain the 3D shape and photometric appearance of an object from one or more images.
It is a further object and advantage of this invention to provide a system and method for deriving the surface shape and appearance of an object from images collected by projecting structured illumination patterns wherein the projector is designed in order to surround an object using a multitude of optical elements.
These together with other objects of the invention, along with various features of novelty that characterize the invention, are pointed out with particularity in the claims annexed hereto and forming a part of this disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be had to the accompanying drawings and descriptive matter in which there is illustrated a preferred embodiment of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above set forth and other features of the invention are made more apparent in the ensuing Detailed Description of the Invention when read in conjunction with the attached Drawings, wherein:
FIG. 1 is an illustration depicting the presently preferred embodiment of a system for obtaining the shape and appearance of an object by capturing a set of images under varying projected structured light illumination patterns using a camera, a standard "pinhole" projector, a mirror surface, and a large aperture converging Fresnel lens.
FIG. 2 is an illustration depicting a general form of a system for obtaining the shape and appearance of an object by capturing a set of images under varying projected structured light illumination patterns.
FIGS. 3a and 3b are diagrams depicting possible geometric properties of the light source and means of bending the light rays in order to form the plurality of first parallel planes shown in FIG. 2.
FIG. 4 is an illustration of the first four Gray code structured light illumination patterns.
FIGS. 5a, 5b, and 5c are illustrations depicting a typical image collected by the invention under preferred illumination, wherein FIG. 5a illustrates a typical object under ambient illumination, FIG. 5b shows the same object under a Gray code structured light illumination as in FIG. 4, and FIG. 5c shows the decoded projector scan-lines, with darker colors indicating larger scan-line indices.
FIG. 6 is a logic flow diagram for the processing of the set of images computed in accordance with FIG. 4 into an estimate of the correspondence between camera pixels and projector scan-lines.
FIG. 7 is an illustration depicting a second embodiment of a system for obtaining the shape and appearance of an object by capturing a set of images under varying projected structured light illumination patterns, wherein an array of cameras replaces the single camera of FIG. 1.
DETAILED DESCRIPTION OF THE INVENTION
A presently preferred first embodiment of the invention is depicted in FIG. 1, a 3D scanning apparatus 100 to recover the shape and appearance of an object 120, comprising a camera 101, an illuminator 110, and a mirror surface 130; the camera 101 and illuminator 110 placed in front of an object 120 to be 3D scanned; the mirror surface 130 being placed behind the object 120; the camera 101 and illuminator 110 pointing towards the object 120 and the mirror surface 130; the illuminator 110 projecting light patterns 111 on the object 120 and mirror surface 130; the light patterns 111 composed of projected light rays 112, 113; the mirror surface 130 being a generalized cylinder with a mirror axis 133 as the generalized cylinder axis; the projected light rays 112, 113 being perpendicular to the mirror axis 133; the illuminator 110 and mirror surface 130 placed in such a way that most of the surface of the object 120 is illuminated either directly or indirectly; part of the surface of the object 120 being illuminated by projected light rays 113 emanating directly from the illuminator 110; other parts of the surface of the object 120 being illuminated indirectly by reflected light rays 114 resulting from projected light rays 112 bouncing off the mirror surface 130 after one or more mirror reflections; the camera 101 having the object 120 and part of the mirror surface 130 simultaneously on view, resulting in the camera 101 simultaneously observing the object 120 from multiple directions.
In the prior art, a generalized cylinder is a surface with an axis and a cross section. The axis is a straight line, and the cross section is a planar curve contained in a plane perpendicular to the axis. The generalized cylinder can be described as the result of sweeping the plane containing the cross section curve perpendicularly along the axis. As a result of this process, a point on a generalized cylinder can be described by two parameters: one along the axis, and the second one along the cross section. For example, two intersecting planes define a generalized cylinder where the axis is the intersecting line, and the cross section is the union of two intersecting lines perpendicular to the axis. The surfaces of so-called "cylindrical lenses" and "cylindrical mirrors" are other examples of generalized cylinders, where the cross sections are circles or parabolas. It is a well-known property that a plane tangent to a point on the surface of a generalized cylinder is parallel to the axis. Equivalently, the normal direction to a point on the surface of a generalized cylinder is a vector perpendicular to the axis. It follows that the reflection of a light ray perpendicular to the axis of a generalized cylinder mirror surface is also perpendicular to the axis.
In a more preferred embodiment the mirror surface 130 is composed of a left planar mirror 131 and a right planar mirror 132; the left planar mirror 131 and right planar mirror 132 meeting at the mirror axis 133; the left planar mirror 131 and right planar mirror 132 separated by a mirror angle 134 of approximately 72 degrees; the camera 101 capturing a single camera view 150; the single camera view 150 simultaneously measuring one real image 155 and four reflected images 151, 152, 153, and 154.
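The perpendicularity property invoked above can be checked numerically. The sketch below is illustrative only: it fixes the mirror axis along the z-axis, so every surface normal of the generalized cylinder has zero z-component, and verifies that reflecting any ray with zero z-component yields another such ray:

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

rng = np.random.default_rng(0)
for _ in range(1000):
    # random surface normal perpendicular to the axis (n_z = 0)
    theta = rng.uniform(0, 2 * np.pi)
    n = np.array([np.cos(theta), np.sin(theta), 0.0])
    # random projected ray perpendicular to the axis (d_z = 0)
    phi = rng.uniform(0, 2 * np.pi)
    d = np.array([np.cos(phi), np.sin(phi), 0.0])
    r = reflect(d, n)
    assert abs(r[2]) < 1e-12  # the reflected ray stays perpendicular to the axis
```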
Since the projected light ray 112 is perpendicular to the mirror axis 133, and the mirror surface 130 is a generalized cylinder with respect to the mirror axis 133, the reflected light ray 114 is also perpendicular to the mirror axis 133. A key feature of this invention is this property which prevents the reflected light ray 114 from interfering with other projected light rays 112, 113, or other reflected light rays 114 on the surface of the object 120, after they reflect off the mirror surface 130. Through this mechanism, this invention is able to significantly reduce the number of images required to capture the shape and appearance of the object 120, and avoid moving the object 120 with respect to the imaging system.
In a more preferred embodiment the projected light rays 112, 113, are contained in a plurality of parallel light planes 115, so that all the reflected light rays 114 corresponding to projected light rays 112 contained in the same light plane 115, also belong to the same light plane 115, and light rays 112, 113, 114 contained in different first parallel planes do not interfere with each other. In an even more preferred embodiment all the projected light rays 112, 113 belonging to the same parallel light plane 115 are projected with the same color.
In a preferred embodiment the illuminator 110 is an orthographic projector. An orthographic projector emits projected light rays 112, 113 which are all parallel to each other. In a more preferred embodiment the orthographic projector is composed of a pinhole projector and a coaxial convergent lens positioned so that one of its focal points coincides with the center of projection of the projector.
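The collimating behavior of this arrangement follows from first-order optics: a ray leaving the front focal point of a thin lens exits parallel to the optical axis. The following paraxial sketch (an idealized thin-lens model; the focal length and ray angles are arbitrary values chosen here, not figures from the patent) applies standard ray-transfer matrices to rays fanning out of the pinhole:

```python
import numpy as np

f = 0.5  # lens focal length in meters (arbitrary for the demo)
propagate = np.array([[1.0, f], [0.0, 1.0]])    # pinhole -> lens, one focal length
lens = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # thin lens of focal length f

for angle in np.linspace(-0.2, 0.2, 5):         # rays fanning out of the pinhole
    ray = np.array([0.0, angle])                # [height, angle] at the pinhole
    height, out_angle = lens @ propagate @ ray
    print(f"exit height {height:+.3f} m, exit angle {out_angle:+.6f} rad")
    # out_angle is 0 for every input angle: the exiting rays are all parallel
```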
A prior art method to implement an orthographic projector system using a data projector and a Fresnel lens was introduced by Nayar and Anand; see S. K. Nayar and V. Anand, "Projection Volumetric Display Using Passive Optical Scatterers", 2006. In their application, the orthographic projector illuminated passive optical scatterers to create a volumetric display. While demonstrating the construction of an orthographic projector using a Fresnel lens and a DLP projector, their invention only applies to the display of 3D content, not to 3D scanning methods.
FIG. 2 shows a preferred embodiment of the illuminator 110, comprising a light source 200, and a means of bending 220; the light source 200 emitting light source rays 210; the light source rays 210 being transformed into projected light rays 112, 113 by the means of bending 220; the projected light rays constituting the light pattern 111. In a more preferred embodiment the light source 200 is a data projector driven by a computer. In another more preferred embodiment the means of bending 220 is achieved by using a convergent lens. In an even more preferred embodiment the means of bending 220 is achieved by using a large aperture convergent Fresnel lens. In another more preferred embodiment, the means of bending 220 is achieved by using a cylindrical lens. In another more preferred embodiment, the means of bending 220 is achieved by using a plurality of mirror surfaces. In a more preferred embodiment, the means of bending 220 is achieved by using a plurality of planar mirror surfaces.
FIG. 3a shows a second embodiment of the illuminator 110, wherein components that are also found in FIG. 2 are numbered accordingly. In this embodiment, the light source rays 210 are contained in a plurality of source parallel light planes 300, which are subsequently transformed by the means of bending 220 into the plurality of parallel light planes 115.
FIG. 3b shows a third embodiment of the illuminator 110, wherein components that are also found in FIG. 2 are numbered accordingly. In this embodiment, the light source emits a single source parallel plane 310, which is subsequently transformed by the means of bending 220. In a more preferred embodiment the light source 200 which emits a single source parallel plane 310 is implemented using a laser stripe projector. In an even more preferred embodiment the means of bending 220 includes a mechanically-manipulated mirror in order to time-sequentially redirect the single source parallel plane into each plane of the plurality of parallel light planes 115.
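As a rough sketch of how such a time-sequential sweep might be driven (hypothetical throughout: it assumes the scanning mirror pivots at the focal point of a convergent lens so that a plane deflected by angle theta exits as a parallel plane at height f*tan(theta), and `set_galvo_angle` stands in for an unspecified mirror driver):

```python
import numpy as np

f = 0.5  # focal length of the convergent lens, in meters (assumed)
plane_heights = np.linspace(-0.15, 0.15, 31)  # desired parallel plane positions

for k, y in enumerate(plane_heights):
    theta = np.arctan2(y, f)  # mirror deflection that maps to height y after the lens
    # set_galvo_angle(theta)  # hypothetical command to the mirror driver
    print(f"plane {k:2d}: height {y:+.3f} m -> deflection {np.degrees(theta):+.2f} deg")
```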
The positions of the camera 101, the light source 200, the means of bending 220, and the mirror surface 130 must be calibrated with respect to a coordinate system defined with respect to the center of projection of the camera (or other global reference position). Any well-known calibration or measurement technique for obtaining camera and projector parameters and measuring object locations may be used. If the positions are suitably calibrated, the object 120 can be located at any point within a reconstruction volume within which points on the object surface are both imaged by the camera and illuminated by the projection system.
Camera 101 is used to capture camera views 150 of object 120 viewed under a given illumination pattern displayed by the illuminator 110. In the presently preferred embodiment, the illumination patterns correspond to a sequence of Gray code patterns, a few of which 400, 410, 420, 430, are depicted in FIG. 4. Each Gray code pattern is composed of alternating black and white stripes of increasingly-fine width. An exemplary Gray coded captured camera view 510 is shown in FIG. 5. These images are processed according to the logic flow diagram depicted in FIG. 6. In step 600 Gray coded images 510 are captured. In step 610, for each image captured under a given structured light illumination pattern, each pixel is identified as being either illuminated or in shadow. Any suitable pixel thresholding operation may be performed in step 610. In step 620, after processing multiple such images, the set of thresholded values for each camera pixel is analyzed to determine which projector scanline illuminated each pixel. Equivalently, since each projector scanline corresponds to one of the planes contained in the plurality of parallel light planes 115 shown in FIG. 1, step 620 determines which plane of light illuminates a given camera pixel. Image 520 is an exemplary labeling of projected scanlines produced by step 620. Any suitable analysis may be applied to decode the images to determine which plane illuminated a given camera pixel. For a Gray code structured light sequence, the Gray code decoding scheme may be employed. Any sequence which can be decoded in such a manner can be used within our invention. In step 630 the object shape is estimated from the structured light sequence. Any suitable prior-art method may be used for this purpose, such as the Gray code patterns shown in FIG. 4. In the preferred embodiment, ray-plane triangulation is used to reconstruct the three-dimensional coordinate of a point on the object surface for each camera pixel. In such a method, the projector scan-line illuminating a given pixel is determined in step 620. The calibration of the position of the camera, the projector, and the means of bending are used to determine the equation of the planes within the plurality of parallel light planes 115, as well as the equation of the line connecting the center of projection of the camera with a given pixel. The intersection of a plane with a camera ray provides an estimate of the three-dimensional coordinate of a point on the object surface. This analysis can be applied independently to each camera pixel, producing a set of points on the object surface that represent the object shape. Furthermore, these points can be analyzed to produce a polygon mesh to represent the object shape.
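A condensed sketch of steps 600 through 630 follows (not the patent's reference implementation; it assumes the captured grayscale images are ordered coarse-to-fine, that a per-pixel threshold has already been chosen as in step 610, and that calibration supplies each light plane as coefficients (n, d) of n.x + d = 0):

```python
import numpy as np

def decode_gray(images, threshold):
    """Steps 610-620: threshold each capture, then recover the per-pixel
    projector scan-line index by converting the Gray code back to binary."""
    gray = np.zeros(images[0].shape, dtype=np.uint32)
    for img in images:  # coarse-to-fine, i.e. most significant bit first
        gray = (gray << 1) | (img > threshold).astype(np.uint32)
    binary = gray.copy()
    shifted = gray >> 1
    while shifted.any():  # Gray -> binary is the XOR of all right shifts
        binary ^= shifted
        shifted >>= 1
    return binary  # scan-line index, hence light plane, per camera pixel

def ray_plane_point(origin, direction, plane):
    """Step 630: intersect camera ray origin + t * direction with the
    decoded light plane given as (n, d) with n . x + d = 0."""
    n, d = np.asarray(plane[:3]), plane[3]
    t = -(n @ np.asarray(origin) + d) / (n @ np.asarray(direction))
    return np.asarray(origin) + t * np.asarray(direction)
```

Each decoded index selects one plane from the plurality of parallel light planes 115, and intersecting that plane with the pixel's calibrated camera ray yields the surface point, as the paragraph above describes.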
An additional image 500 collected under ambient illumination, with no pattern projected by the illuminator 110, is used to further estimate the appearance (e.g., color) of the estimated object surface. The recovered set of three-dimensional points representing the object surface can be combined with the system calibration to determine which pixels in the ambient image correspond to a given point on the surface. This correspondence can be used to estimate the appearance of each three-dimensional point. This scheme is known as view-independent texture mapping. In general, any texture mapping or appearance modeling scheme could be applied, such as view-dependent texture mapping. Furthermore, a plurality of ambient images, collected for various poses of the object and ambient illumination, could be used to obtain a better approximation of the object appearance as shown in FIG. 7.
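The ambient-image appearance step admits a similarly compact sketch (illustrative only; `K`, `R`, and `t` denote the calibrated camera intrinsics and pose, names chosen here rather than taken from the patent, and the nearest-pixel lookup is one of several possible sampling choices):

```python
import numpy as np

def sample_appearance(points, ambient_image, K, R, t):
    """Return one color per reconstructed 3D point by projecting it into the
    ambient image (view-independent texture mapping)."""
    cam = R @ points.T + t[:, None]            # world -> camera coordinates
    pix = K @ cam                              # camera -> homogeneous pixel coords
    u = np.round(pix[0] / pix[2]).astype(int)  # perspective divide
    v = np.round(pix[1] / pix[2]).astype(int)
    h, w = ambient_image.shape[:2]
    u = np.clip(u, 0, w - 1)
    v = np.clip(v, 0, h - 1)
    return ambient_image[v, u]                 # nearest-neighbor color per point
```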
While there is shown and described herein certain specific structure embodying the invention, it will be manifest to those skilled in the art that various modifications and rearrangements of the parts may be made without departing from the spirit and scope of the underlying inventive concept and that the same is not limited to the particular forms herein shown and described except insofar as indicated by the scope of the appended claims.

Claims

WHAT IS CLAIMED:
1. An apparatus for illuminating an object comprising:
(a) a mirror having a surface, the mirror surface being positioned behind the object and being in the form of a generalized cylinder, the mirror surface being a generalized cylinder having a mirror axis as the generalized cylinder axis;
(b) an illuminator formed and positioned in front of the object and having an illumination point directed toward the object and the mirror surface so as to project light patterns composed of light rays perpendicularly to the mirror axis onto the object and the mirror surface in such a way that most of the surface of the object is illuminated either directly or indirectly, with part of the surface of the object being illuminated by projected light rays emanating directly from the illuminator, and other parts of the surface of the object being illuminated indirectly by reflected light rays resulting from projected light rays bouncing off the mirror surface after one or more mirror reflections.
2. An apparatus as in claim 1, where the illuminator comprises a light source which emits light source rays, and a means of bending the light source rays to produce the projected light rays.
3. An apparatus as in claim 2, where the light source is a data projector driven by a computer.
4. An apparatus as in claim 2, where the means of bending is a convergent lens.
5. An apparatus as in claim 4, where the convergent lens is a Fresnel lens.
6. An apparatus as in claim 2, where the means of bending is a telecentric lens.
7. An apparatus as in claim 2, where the means of bending is a cylindrical lens.
8. An apparatus as in claim 2, where the means of bending is a plurality of mirror surfaces.
9. An apparatus as in claim 8, where the mirror surfaces are planar mirror surfaces.
10. An apparatus as in claim 2, where the light source rays are contained in a plurality of source parallel planes.
11. An apparatus as in claim 10 where the plurality of source parallel planes is composed of a single source parallel plane.
12. An apparatus as in claim 11 where the means of bending includes a mechanically manipulated mirror in order to time-sequentially redirect the single source parallel plane into each plane of the plurality of parallel light planes.
13. An apparatus as in claim 1, where the illuminator is an orthographic projector.
14. A three-dimensional (3D) scanning apparatus to recover the shape and appearance of an object, comprising:
(a) a mirror having a surface, the mirror surface being positioned behind the object and being in the form of a generalized cylinder, the mirror surface being a generalized cylinder having a mirror axis as the generalized cylinder axis;
(b) an illuminator formed and positioned in front of the object and having an illumination point directed toward the object and the mirror surface so as to project light patterns composed of light rays perpendicularly to the mirror axis onto the object and the mirror surface in such a way that most of the surface of the object is illuminated either directly or indirectly, with part of the surface of the object being illuminated by projected light rays emanating directly from the illuminator, and other parts of the surface of the object being illuminated indirectly by reflected light rays resulting from projected light rays bouncing off the mirror surface after one or more mirror reflections; and
(c) a camera, formed and positioned to have the object and part of the mirror surface simultaneously on view, resulting in the camera simultaneously observing the object from multiple directions.
15. A method of recovering the shape of an object, comprising the steps of:
(a) illuminating the object with an apparatus as in claim 1;
(b) capturing a plurality of images of the object, each image corresponding to one light pattern;
(c) processing the plurality of images of the object to determine the correspondence between light pattern scanlines and image pixels, each light pattern scanline corresponding to a parallel light plane, each image pixel corresponding to a camera ray; and
(d) estimating the shape of the object from the structured light sequence using ray-plane triangulation, the shape comprising a plurality of shape points, each shape point resulting from the intersection of a parallel light plane and a corresponding camera ray.
16. A method as in claim 15, further comprising the step of reconstructing a polygon mesh from the plurality of shape points.
17. A method as in claim 15, wherein the processing step is based on determining which pixels are either illuminated or in shadow using a pixel thresholding operation.
18. A method as in claim 15, wherein the light pattern is a Gray code sequence, and the processing step is based on the Gray code decoding scheme.
19. A method as in claim 15, wherein each image of the object is captured by one of a plurality of cameras.
20. A method as in claim 15, wherein the plurality of images of the object are captured by a single camera.
21. A method as in claim 15, to recover the shape and appearance of an object, further comprising the step of: capturing an additional image under ambient illumination, with no pattern projected by the illuminator, to further estimate the appearance of the estimated object surface.
22. A method as in claim 21, wherein the reconstructed shape is represented as a plurality of points, and the appearance is represented as a three dimensional color vector per point.
23. A method as in claim 21, wherein the reconstructed shape is represented as a polygon mesh, and the appearance is represented as a three dimensional color vector per polygon mesh vertex.
24. A method as in claim 21, wherein the reconstructed shape is represented as a polygon mesh, and the appearance is represented as a texture mapping.
25. A method as in claim 24, wherein the texture mapping is a view-dependent texture mapping.
26. A method as in claim 15, to recover the shape and appearance of an object, further comprising the step of: capturing a plurality of additional images under different ambient illuminations, with no pattern projected by the illuminator, to further estimate the appearance of the estimated object surface.
27. A method as in claim 26, wherein the reconstructed shape is represented as a polygon mesh, and the appearance is represented as a view-independent texture mapping; the view-independent texture mapping reconstructed using a photometric stereo algorithm.
PCT/US2009/054007 2008-08-18 2009-08-17 Surround structured lighting for recovering 3d object shape and appearance WO2010021972A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US18930608P 2008-08-18 2008-08-18
US61/189,306 2008-08-18

Publications (1)

Publication Number Publication Date
WO2010021972A1 (en) 2010-02-25

Family

ID=41707420

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/054007 WO2010021972A1 (en) 2008-08-18 2009-08-17 Surround structured lighting for recovering 3d object shape and appearance

Country Status (1)

Country Link
WO (1) WO2010021972A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT511223A1 (en) * 2011-03-18 2012-10-15 A Tron3D Gmbh DEVICE FOR TAKING PICTURES OF THREE-DIMENSIONAL OBJECTS
US20120281087A1 (en) * 2011-05-02 2012-11-08 Faro Technologies, Inc. Three-dimensional scanner for hand-held phones
CN104296679A (en) * 2014-09-30 2015-01-21 唐春晓 Mirror image type three-dimensional information acquisition device and method
CN104506838A (en) * 2014-12-23 2015-04-08 宁波盈芯信息科技有限公司 Method, device and system for sensing depth of symbol array surface structured light
US9091529B2 (en) 2011-07-14 2015-07-28 Faro Technologies, Inc. Grating-based scanner with phase and pitch adjustment
US9170098B2 (en) 2011-07-13 2015-10-27 Faro Technologies, Inc. Device and method using a spatial light modulator to find 3D coordinates of an object
CN106556356A (en) * 2016-12-07 2017-04-05 西安知象光电科技有限公司 A kind of multi-angle measuring three-dimensional profile system and measuring method
CN106705896A (en) * 2017-03-29 2017-05-24 江苏大学 Electrical connector casing body defect detection device and method based on single camera omnibearing active vision
CN109285213A (en) * 2018-07-18 2019-01-29 西安电子科技大学 Comprehensive polarization three-dimensional rebuilding method
CN110514143A (en) * 2019-08-09 2019-11-29 南京理工大学 A kind of fringe projection system scaling method based on reflecting mirror
US10512508B2 (en) 2015-06-15 2019-12-24 The University Of British Columbia Imagery system
CN110672039A (en) * 2019-09-18 2020-01-10 南京理工大学 Object omnibearing three-dimensional measurement method based on plane reflector
CN111947598A (en) * 2020-07-24 2020-11-17 南京理工大学 360-degree three-dimensional human head measuring method based on plane reflector

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4206965A (en) * 1976-08-23 1980-06-10 Mcgrew Stephen P System for synthesizing strip-multiplexed holograms
US5127037A (en) * 1990-08-15 1992-06-30 Bynum David K Apparatus for forming a three-dimensional reproduction of an object from laminations
US5455689A (en) * 1991-06-27 1995-10-03 Eastman Kodak Company Electronically interpolated integral photography system
US5517603A (en) * 1991-12-20 1996-05-14 Apple Computer, Inc. Scanline rendering device for generating pixel values for displaying three-dimensional graphical images
US20020130820A1 (en) * 1998-04-20 2002-09-19 Alan Sullivan Multi-planar volumetric display system and method of operation
US20050259159A1 (en) * 1999-01-06 2005-11-24 Hideyoshi Horimai Apparatus and method for photographing three-dimensional image, apparatus and method for displaying three-dimensional image, and apparatus and method for converting three-dimensional image display position
US20020150288A1 (en) * 2001-02-09 2002-10-17 Minolta Co., Ltd. Method for processing image data and modeling device
US20030194131A1 (en) * 2002-04-11 2003-10-16 Bin Zhao Object extraction
US6947579B2 (en) * 2002-10-07 2005-09-20 Technion Research & Development Foundation Ltd. Three-dimensional face recognition
US20040151365A1 (en) * 2003-02-03 2004-08-05 An Chang Nelson Liang Multiframe correspondence estimation
US20040155877A1 (en) * 2003-02-12 2004-08-12 Canon Europa N.V. Image processing apparatus
US20040201584A1 (en) * 2003-02-20 2004-10-14 Binary Simplex, Inc. Spatial decomposition methods using bit manipulation
US20040207823A1 (en) * 2003-04-16 2004-10-21 Alasaarela Mikko Petteri 2D/3D data projector
US20050057569A1 (en) * 2003-08-26 2005-03-17 Berger Michael A. Static and dynamic 3-D human face reconstruction
US20080018855A1 (en) * 2004-03-22 2008-01-24 Larichev Andrey V Aberrometer Provided with a Visual Acuity Testing System
US20050259158A1 (en) * 2004-05-01 2005-11-24 Eliezer Jacob Digital camera with non-uniform image resolution
US20060017722A1 (en) * 2004-06-14 2006-01-26 Canon Europa N.V. Texture data compression and rendering in 3D computer graphics
US20070206836A1 (en) * 2004-09-06 2007-09-06 Bayerische Motoren Werke Aktiengesellschaft Device for the detection of an object on a vehicle seat
US20060066612A1 (en) * 2004-09-23 2006-03-30 Herb Yang Method and system for real time image rendering

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT511223B1 (en) * 2011-03-18 2013-01-15 A Tron3D Gmbh Device for taking pictures of three-dimensional objects
EP2499992A3 (en) * 2011-03-18 2013-05-22 a.tron3d GmbH Device for taking pictures of three-dimensional objects
US9101434B2 (en) 2011-03-18 2015-08-11 A.Tron3D Gmbh Device for recording images of three-dimensional objects
AT511223A1 (en) * 2011-03-18 2012-10-15 A Tron3D Gmbh Device for taking pictures of three-dimensional objects
US20120281087A1 (en) * 2011-05-02 2012-11-08 Faro Technologies, Inc. Three-dimensional scanner for hand-held phones
US9170098B2 (en) 2011-07-13 2015-10-27 Faro Technologies, Inc. Device and method using a spatial light modulator to find 3D coordinates of an object
US9091529B2 (en) 2011-07-14 2015-07-28 Faro Technologies, Inc. Grating-based scanner with phase and pitch adjustment
CN104296679A (en) * 2014-09-30 2015-01-21 唐春晓 Mirror-image three-dimensional information acquisition device and method
CN104506838A (en) * 2014-12-23 2015-04-08 宁波盈芯信息科技有限公司 Method, device and system for depth sensing with symbol-array surface structured light
US10512508B2 (en) 2015-06-15 2019-12-24 The University Of British Columbia Imagery system
CN106556356A (en) * 2016-12-07 2017-04-05 西安知象光电科技有限公司 Multi-angle three-dimensional profile measurement system and measurement method
CN106705896A (en) * 2017-03-29 2017-05-24 江苏大学 Electrical connector housing defect detection device and method based on single-camera omnidirectional active vision
CN106705896B (en) * 2017-03-29 2022-08-23 江苏大学 Electrical connector housing defect detection device and method based on single-camera omnidirectional active vision
CN109285213A (en) * 2018-07-18 2019-01-29 西安电子科技大学 Comprehensive polarization three-dimensional reconstruction method
CN110514143A (en) * 2019-08-09 2019-11-29 南京理工大学 Fringe projection system calibration method based on a plane mirror
WO2021027719A1 (en) * 2019-08-09 2021-02-18 南京理工大学 Reflector-based calibration method for fringe projection system
US11808564B2 (en) 2019-08-09 2023-11-07 Nanjing University Of Science And Technology Calibration method for fringe projection systems based on plane mirrors
CN110672039A (en) * 2019-09-18 2020-01-10 南京理工大学 Omnidirectional three-dimensional object measurement method based on a plane mirror
CN110672039B (en) * 2019-09-18 2021-03-26 南京理工大学 Omnidirectional three-dimensional object measurement method based on a plane mirror
CN111947598A (en) * 2020-07-24 2020-11-17 南京理工大学 360-degree three-dimensional human head measurement method based on a plane mirror
CN111947598B (en) * 2020-07-24 2022-04-01 南京理工大学 360-degree three-dimensional human head measurement method based on a plane mirror

Similar Documents

Publication Publication Date Title
WO2010021972A1 (en) Surround structured lighting for recovering 3d object shape and appearance
US6791542B2 (en) Modeling 3D objects with opacity hulls
US6831641B2 (en) Modeling and rendering of surface reflectance fields of 3D objects
US20190156557A1 (en) 3d geometric modeling and 3d video content creation
CN104335005B (en) 3D scanning and alignment system
US9363501B2 (en) Combining depth-maps from different acquisition methods
US6903738B2 (en) Image-based 3D modeling rendering system
US6455835B1 (en) System, method, and program product for acquiring accurate object silhouettes for shape recovery
Zhang et al. Rapid shape acquisition using color structured light and multi-pass dynamic programming
US6792140B2 (en) Image-based 3D digitizer
US6858826B2 (en) Method and apparatus for scanning three-dimensional objects
US20130038696A1 (en) Ray Image Modeling for Fast Catadioptric Light Field Rendering
US20140002610A1 (en) Real-time 3d shape measurement system
US20020057438A1 (en) Method and apparatus for capturing 3D surface and color thereon in real time
CN101198964A (en) Creating 3D images of objects by illuminating with infrared patterns
WO2009120073A2 (en) A dynamically calibrated self referenced three dimensional structured light scanner
CN107370950B (en) Focusing processing method, apparatus and mobile terminal
Lanman et al. Surround structured lighting: 3-D scanning with orthographic illumination
Lanman et al. Surround structured lighting for full object scanning
JP4335589B2 (en) Method for modeling a 3D object
Balzer et al. Cavlectometry: Towards holistic reconstruction of large mirror objects
Aliaga Digital inspection: An interactive stage for viewing surface details
JP3932776B2 (en) 3D image generation apparatus and 3D image generation method
Jost et al. Modeling 3d textured objects by fusion of multiple views
Wong et al. Multi-view 3D model reconstruction: exploitation of color homogeneity in voxel mask

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 09808660

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry into the European phase

Ref document number: 09808660

Country of ref document: EP

Kind code of ref document: A1