US20060017720A1 - System and method for 3D measurement and surface reconstruction - Google Patents

System and method for 3D measurement and surface reconstruction

Info

Publication number
US20060017720A1
US20060017720A1
Authority
US
United States
Prior art keywords
pattern
camera
projector
projecting
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/891,632
Inventor
You Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
City University of Hong Kong CityU
Original Assignee
City University of Hong Kong CityU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by City University of Hong Kong CityU filed Critical City University of Hong Kong CityU
Priority to US10/891,632 priority Critical patent/US20060017720A1/en
Assigned to CITY UNIVERSITY OF HONG KONG reassignment CITY UNIVERSITY OF HONG KONG ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, YOU FU
Publication of US20060017720A1 publication Critical patent/US20060017720A1/en
Priority to US12/269,124 priority patent/US8213707B2/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504 Calibration devices
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2509 Color coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present invention relates to a system and method for 3D measurement and surface reconstruction of an object, and in particular to a reconfigurable vision system and method.
  • the first issue is how to acquire the 3D data for reconstructing the object surface.
  • a laser range finder/scanner [1] is widely used for 3D surface data acquisition in industry.
  • pattern projections can be employed [2].
  • Portable 3D imaging systems based on a similar principle have also been designed recently.
  • the second issue is how to determine the next viewpoint for each view so that all the information about the object surface can be acquired in an optimal way.
  • This is also known as the NBV (Next Best View) problem, which determines the sensor direction (or pose) in the reconstruction process.
  • NBV Next Best View
  • viewpoint planning [3] for digitalization of 3D objects can be treated in different ways depending on whether or not the object's geometry is known beforehand [4,5].
  • conventional 3D reconstruction processes typically involve an incremental iterative cycle of viewpoint planning, digitizing, registration and view integration, conventionally based on the partial model reconstructed thus far.
  • based on the partial model reconstructed so far, the NBV algorithm then provides quantitative evaluations of the suitability of the remaining viewpoints.
  • the evaluation for each viewpoint is based on all visible surface elements of the object that can be observed.
  • the viewpoint with the highest visibility (evaluation score) is selected as the NBV.
  • the first problem is to determine the areas of the object which need to be sensed next and the second is to determine how to position the sensor to sample those areas.
  • Connolly [6] uses octree to represent object space, and the regions that have been scanned are labeled as seen, regions between the sensor and the surface are labeled as empty and all other regions are labeled as unseen.
  • a set of candidate viewpoints is enumerated at fixed increments around the object.
  • the Next Best View is calculated based on the evaluation of the visibility of each candidate viewpoint. This algorithm is computationally expensive and it does not incorporate the sensor geometry.
  • Maver and Bajcsy [7] presented a solution to the NBV problem for a specific scanning setup consisting of an active optical range scanner and a turntable.
  • unseen regions of the objects are represented as polygons. Visibility constraints for the sensor to view the unseen region are computed from the polygon boundaries.
  • this solution is limited to a particular sensor configuration.
  • Whaite and Ferrie [9] use the superellipsoid model to represent an object and define a shell of uncertainty.
  • the Next Best View is selected at the sensor position where the uncertainty of the current model fitted to the partial data points is the largest.
  • This algorithm enables uncertainty-driven exploration of an object to build a model.
  • the superellipsoid cannot accurately represent objects with a complex surface shape.
  • surface visibility constraints were not incorporated in the viewpoint planning process.
  • Reed and Allen [10] propose a target-driven viewpoint planning method.
  • the volume model is used to represent the object by extrusion and intersection operations.
  • the constraints such as sensor imaging constraints, model occlusion constraints and sensor placement constraints, are also represented as solid modeling volumes and are incorporated into the viewpoint planning.
  • the algorithm involves expensive computation on the solid modeling and intersection operation.
  • Scott [11] considers viewpoint planning as integer programming. Given a rough model of an unknown object, a sequential set of viewpoints is calculated to cover all surface patches of the object subject to a registration constraint. However, the object must be scanned before viewpoint planning to obtain this prior knowledge about the unknown object.
  • In many applications, a vision sensor often needs to move from one place to another and change its configuration for the perception of different object features.
  • a dynamic reconfigurable vision sensor is useful in such applications to provide an active view of the features.
  • a traditional vision sensor with fixed structure is often inadequate for the robot to perceive the object's features in an uncertain environment as the object distance and size are unknown before the robot sees the object.
  • a dynamically reconfigurable sensor may assist the robot in controlling the configuration and gaze at the object surfaces. For example, with a structured light system, the camera needs to see the object surface illuminated by the projector, to perform the 3D measurement and reconstruction task.
  • a calibration target/device is conventionally designed with a precision calibration fixture to provide a number of points whose world coordinates are precisely known [12]-[14]. With a planar calibration pattern, the target needs to be placed at several accurately known positions in front of the vision sensor. For dynamically reconfigurable vision systems, the vision system needs to have the ability of self-recalibration without requiring external 3D data provided by a precision calibration device.
  • Maybank and Faugeras [23] suggested the calibration of a camera using image correspondences in a sequence of images from a moving camera.
  • the kinds of constructions that could be achieved from a binocular stereo rig were further addressed in [24]. It was found that a unique projective representation of the scene up to an arbitrary projective transformation could be constructed if five arbitrary correspondences were chosen and an affine representation of the scene up to an arbitrary affine transformation could be constructed if four arbitrary correspondences were adopted.
  • the above-mentioned reconstruction methods are based on passive vision systems. As a result, they suffer from the ambiguity of correspondences between the camera images, which is a difficult problem to solve especially when free-form surfaces [33] are involved in the scene.
  • active vision may be adopted. Structured light or pattern projection systems have been used for this purpose. To reconstruct precisely a 3D shape with such a system, the active vision system consisting of a projector and a camera needs to be carefully calibrated [34, 35].
  • the traditional calibration procedure normally involves two separate stages: camera calibration and projector calibration. These individual calibrations are carried out off-line and they have to be repeated each time the setting is changed. As a result, the applications of active vision systems are limited, since the system configuration and parameters must be kept unchanged during the entire measurement process.
  • Jokinen [37] studied a self-calibration method based on multiple views, where the object is moved by steps. Several maps were acquired for the registration and calibration. The limitation of this method is that the object must be placed on a special device so that it can be precisely moved.
  • the present invention provides a method and system for the measurement and surface reconstruction of a 3D image of an object comprising projecting a pattern onto the surface to be imaged, examining distortion produced in the pattern by the surface, converting for example by a triangulation process the distortions produced in the pattern by the surface to a distance representation representative of the shape of the surface.
  • the surface shape of the 3D image may then be reconstructed, for example electronically such as digitally, for further processing.
  • the object is firstly sliced into a number of cross section curves, with each cross-section to be reconstructed by a closed B-spline curve. Then, a Bayesian information criterion (BIC) is applied for selecting the control point number of B-spline models. Based on the selected model, entropy is used as the measurement of uncertainty of the B-spline model to predict the information gain for each cross section curve. After obtaining the predicted information gain of all the B-spline models, the information gain of the B-spline models may be mapped into a view space. The viewpoint that contains maximal information gain for the object is then selected as the Next Best View. A 3D surface reconstruction may then be carried out.
  • BIC Bayesian information criterion
  • An advantage of one or more preferred embodiments of the invention is that the 3D information of a scene may be acquired at high speed by taking a single picture of the scene.
  • a complex 3D shape may be divided into a series of cross section curves, each of which represents the local geometrical feature of the object.
  • These cross section curves may be described by a set of parametric equations.
  • the most common methods include spline functions (e.g. the B-spline) [43], implicit polynomials [44], [45] and superquadrics (e.g. the superellipsoid) [46].
  • B-spline has the following main advantages:
  • FIG. 1 is a schematic block diagram of an active vision system according to an embodiment of the invention
  • FIG. 2 is a schematic diagram showing the geometrical relationships between the components of the vision system of the embodiment of FIG. 1 ;
  • FIG. 3 is a diagram illustrating the illumination projection in the embodiment of FIG. 1 ;
  • FIG. 4 is a block schematic illustrating the color encoding for identification of coordinates on the projector of the system of FIG. 1 ;
  • FIG. 5 a illustrates an ideal step illumination curve of the blur area and its irradiant flux in the system of FIG. 1 ;
  • FIG. 5 b illustrates a graph of illumination against distance showing an out of focus blur area and an irradiated area for the system of FIG. 1 ;
  • FIG. 6 illustrates a graph of illumination against distance showing the determination of the blur radius for the system of FIG. 1 ;
  • FIG. 7 illustrates a graph of the variation with distance of the point spread function showing the determination of the best-focused location
  • FIG. 8 is a schematic diagram of an apparatus incorporating the system of FIG. 1 ;
  • FIG. 9 is a flow diagram of a view planning strategy according to an embodiment of the invention.
  • FIG. 10 is a flow diagram of information entropy calculation for viewpoint planning according to an embodiment of the invention.
  • FIG. 11 b is a schematic illustration of a viewpoint representation.
  • FIG. 1 shows an active vision system according to a preferred embodiment of the invention.
  • the system comprises an LCD projector 1 adapted to cast a pattern of light onto an object 2 which is then viewed by a camera and processor unit 3 .
  • the relative position between the projector 1 and the camera in the camera and processing unit 3 has six degrees of freedom (DOF).
  • DOF degrees of freedom
  • the distortions in the beam line may be translated into height variations via triangulation, provided that the system, including the relative position between the projector 1 and the camera, has been calibrated.
  • the vision system may be self-recalibrated automatically if and when this relative position is changed.
  • the camera and processor unit 3 preferably includes a processor stage, as well as the camera, for processing the observed distortions in the projected pattern caused by the object 2 and associated data and for enabling and carrying out reconstruction.
  • the processor stage may be remotely located from the camera and may be connectable thereto to receive the data for processing and carrying out the reconstruction process.
  • FIG. 2 shows the geometrical relationship between the projector 1 , the object 2 and the camera of the system of FIG. 1 , and, in particular, the pattern projected by the projector 1 onto the object 2 and viewed by the camera 3 .
  • x_c = [λx_c, λy_c, λ]^T are the homogeneous coordinates on the camera image sensor plane, where λ ∈ R is an uncertain scalar
  • w_c = [X_c, Y_c, Z_c, 1]^T are the 3D coordinates of an object point from the view of the camera (FIG. 2), and
  • v_c is the distance between the image plane and the camera optical center
  • s_xy is the ratio between the horizontal and vertical pixel cell sizes
  • x_p = [λx_p, λy_p, λ]^T are the coordinates on the projector plane
  • λ ∈ R is also an uncertain scalar
  • w_p = [X_p, Y_p, Z_p, 1]^T are the 3D coordinates of the object point from the view of the projector (see FIG. 2), and
  • λx_p = h_1 w_c
  • the relative positions of the camera 3 and the projector 1 may be changed dynamically during run-time of the system.
  • the camera which is acting as a sensor
  • the recalibration means that the camera (sensor) has been calibrated before installation in the system, but it requires recalibration whenever the relative configuration changes.
  • the intrinsic parameters such as the focal lengths, scale factors, distortion coefficients will remain unchanged whereas the extrinsic parameters of the positions and orientations between the camera and projector have to be determined during the run-time of the system.
  • the whole calibration of the structured light system of FIG. 1 may be divided into two parts.
  • the first part concerns the calibration of the intrinsic parameters, including the focal lengths and optical centers; this is called static calibration and may be performed off-line in a static manner.
  • the second part deals with calibration of the extrinsic parameters of the relative position of the camera 3 and the projector 1 , and this is hereinafter referred to as self-recalibration.
  • the static calibration needs to be performed only once.
  • the self-recalibration is thus more important and needs to be performed online whenever the system configuration is changed during a measurement task.
  • the projected patterns can be in black/white (b/w) or in colors. In either case, a coding method is in general needed. For b/w projections, gray codes can be used with the stripe light planes, which allows robust identifications of the stripe index ( FIG. 3 ).
  • a cell's coordinates on the projector 1 can be immediately determined by the colours of its neighbouring cells in addition to its own when a source pattern is projected. Via a table look-up, each cell's position can be uniquely identified.
  • An example of such coded color pattern is shown in FIG. 4 .
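  • As an illustration of the table look-up idea described above, the sketch below constructs a colour sequence in which every run of three adjacent cell colours is unique, so that an observed colour window identifies its projector column directly. The palette, window size and backtracking construction are illustrative assumptions; the patent does not specify the exact coding scheme.

```python
# Hypothetical palette and window size; the patent only states that a cell
# together with its neighbouring cells identifies its position.
COLOURS = ["R", "G", "B", "C", "M", "Y"]
WINDOW = 3          # a cell plus its two right-hand neighbours

def make_column_code(n_columns):
    """Backtracking construction of a colour sequence in which every
    WINDOW-long run of colours occurs at most once."""
    code, seen = [], set()

    def extend():
        if len(code) == n_columns:
            return True
        for colour in COLOURS:
            window = tuple(code[-(WINDOW - 1):] + [colour])
            if len(window) < WINDOW or window not in seen:
                code.append(colour)
                if len(window) == WINDOW:
                    seen.add(window)
                if extend():
                    return True
                code.pop()
                seen.discard(window)
        return False

    if not extend():
        raise ValueError("palette too small for the requested pattern width")
    return code

def build_lookup(code):
    """Table mapping an observed colour window to the projector column index."""
    return {tuple(code[i:i + WINDOW]): i for i in range(len(code) - WINDOW + 1)}

if __name__ == "__main__":
    code = make_column_code(64)
    lookup = build_lookup(code)
    observed = tuple(code[10:10 + WINDOW])    # colours seen around one image cell
    print("decoded projector column:", lookup[observed])   # -> 10
```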
  • a method of computing the values of blur diameters from an image has been proposed in [39, 40], which are incorporated herein by reference.
  • the system may comprise a Color Coded Pattern Projection system, a CCD camera, and a mini-platform for housing the components and providing the relative motion in 6DOF between the projector 1 and the camera 3 .
  • the method for automatic system recalibration and uncalibrated 3D reconstruction may be implemented using such a system.
  • this system will provide enhanced automation and performance for the measurement and surface reconstruction of 3D objects.
  • the structured light vision system has 6DOF in its relative pose, i.e. three position parameters and three orientation parameters, between the camera 3 and the projector 1 .
  • the focal lengths of the projector 1 and the camera 3 are assumed to have been obtained in a previous static calibration stage.
  • the optical centers are fixed or can be described by a function of the focal length.
  • the projector 1 generates grid patterns with horizontal and vertical coordinates so that the projector's LCD/DMD can be considered an image of the scene.
  • a_6 = [r_a1, r_a2, r_a3, r_a4, r_a5, r_a6]^T
  • r_ai = [1, x_ci, y_ci, x_pi, y_pi, x_ci·y_ci]^T
  • x_ci is the x value of the i-th point projected in the camera coordinate system.
  • the coordinates of a surface point projected on the camera (sensor) may be given by (X/Z, Y/Z).
  • Z c and Z p are the depth values based on the view of the camera 3 and projector 1 , respectively.
  • f_1, f_2, and f_3 are the three vectors in V corresponding to the least eigenvalues.
  • matrix B is exactly as just described, i.e. of rank 6.
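  • Numerically, the vectors in V associated with the least eigenvalues are commonly obtained from a singular value decomposition (the eigenvalues of B^T B are the squared singular values of B). The sketch below shows only this generic numerical step on a synthetic rank-6 matrix; the assembly of the actual matrix from the camera-projector point correspondences is not reproduced here.

```python
import numpy as np

def least_eigen_vectors(B, k=3):
    """Return the k right-singular vectors of B with the smallest singular
    values (equivalently, the eigenvectors of B^T B with the least
    eigenvalues), as columns f1, f2, f3, ...

    Generic numerical step only; the patent's matrix is built from the
    observed camera / projector point correspondences.
    """
    _, s, Vt = np.linalg.svd(B)          # singular values in descending order
    F = Vt[-k:][::-1].T                  # column 0 <-> smallest singular value
    return F, s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A synthetic 20 x 9 measurement matrix of rank 6, so that three of the
    # nine singular values are numerically zero.
    B = rng.standard_normal((20, 6)) @ rng.standard_normal((6, 9))
    F, s = least_eigen_vectors(B, k=3)
    print("three smallest singular values:", np.round(s[-3:], 12))
    print("residual |B @ f1| =", np.linalg.norm(B @ F[:, 0]))
```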
  • SSE and/or MSE: the sum and/or mean of squared errors
  • H_u, H_m, and H_l are 3×3 matrices, each formed from three rows of H.
  • the last unknown, s, in t may be determined here by a method using a constraint derived from the best-focused location (BFL). This is based on the fact that, for a lens with a specified focal length, an object surface point is perfectly focused only at a particular distance.
  • the operation ⊗ represents two-dimensional convolution. The blur can be used as a cue to find the perfectly focused distance.
  • the scene irradiance caused by a light source is inversely proportional to the square of the distance from the light source.
  • z_0 = v_p·f_p / (v_p − f_p)
  • f_p is the intrinsic focal length of the projector.
  • v_p is the distance from the image plane to the optical center.
  • the effect of blurring can be described via a point spread function (PSF) to account for the diffraction effects of light waves.
  • PSF point spread function
  • a Gaussian model is normally used.
  • I_F(ω) = I_Fi(ω)·H̃(ω)   (18)
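  • A minimal sketch of the relation above, treating blurring as multiplication by the Gaussian optical transfer function in the frequency domain; the 1-D step edge and the blur value are illustrative.

```python
import numpy as np

def gaussian_blur_fft(profile, sigma_blur):
    """Blur a 1-D intensity profile by multiplying with the Gaussian OTF,
    i.e. I_F(w) = I_Fi(w) * H(w) in the frequency domain."""
    freq = np.fft.fftfreq(len(profile))                     # cycles per sample
    otf = np.exp(-2.0 * (np.pi * freq * sigma_blur) ** 2)   # FT of a Gaussian PSF
    return np.fft.ifft(np.fft.fft(profile) * otf).real

if __name__ == "__main__":
    step = np.zeros(256)
    step[128:] = 1.0                                        # ideal step edge (FIG. 5a)
    blurred = gaussian_blur_fft(step, sigma_blur=3.0)
    edge = blurred[100:156]                                 # central transition only
    width = int(np.sum((edge > 0.1) & (edge < 0.9)))
    print("10-90% edge width:", width, "samples (encodes the blur radius)")
```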
  • the time rate of flow of radiant light energy, i.e. the irradiant power or irradiant flux Φ [watt], is also the area S under the blurring curve (or surface, in the case of a two-dimensional analysis) illustrated in FIG. 5 b.
  • S is then determined by summing the intensity function from 0 to x_1.
  • I(x) changes sharply near the origin.
  • the best-focused location can be computed by analyzing the blur information in an image.
  • the best focused locations form a straight line which is the intersection of two planes.
  • a valley line can be found since the blur radius is unsigned.
  • the best-focused location can be determined by extending the above method with some minor modifications.
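  • The sketch below illustrates the two ingredients just described under simplifying assumptions: the thin-lens relation for the perfectly focused distance, and locating the best-focused position as the minimum ("valley") of the unsigned blur radius along a stripe. The synthetic blur profile and the parabola fit stand in for blur radii measured from the image.

```python
import numpy as np

def focused_distance(v_p, f_p):
    """Thin-lens relation quoted above: the perfectly focused object plane
    lies at z0 = v_p * f_p / (v_p - f_p)."""
    return v_p * f_p / (v_p - f_p)

def best_focused_location(positions, blur_radii):
    """Find the 'valley' of the unsigned blur radius along a stripe.

    A parabola is fitted to blur radius versus position so the minimum can be
    located with sub-sample accuracy.  Illustrative only: in the method above
    the blur radii come from analysing the projected pattern in the image.
    """
    a, b, _ = np.polyfit(positions, blur_radii, 2)
    return -b / (2.0 * a)

if __name__ == "__main__":
    # Synthetic blur profile: blur grows on either side of x = 12.5 mm.
    x = np.linspace(0.0, 25.0, 26)
    blur = 0.08 * np.abs(x - 12.5) + 0.02
    print("best-focused location ~", round(best_focused_location(x, blur), 2), "mm")
    print("perfectly focused distance z0 =", round(focused_distance(25.0, 20.0), 1))
```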
  • the method presented here automatically resolves six parameters of a color-encoded structured light system.
  • the 6 DOF relative placement of the active vision system can be automatically recalibrated with neither manual operations nor assistance of a calibration device. This feature is very important for many situations when the vision sensor needs to be reconfigured online during a measurement or reconstruction task.
  • a 2-step method may be used to solve this problem. That is, before the internal parameters are reconfigured, we first take an image to obtain the 6DOF external parameters. Then, after changing the sensor's internal parameters, we take another image to recalibrate them.
  • FIG. 8 illustrates the components of FIG. 1 housed in a robot apparatus.
  • the viewpoint planning is charged with the task of determining the position, orientation and system configuration parameters for each view to be taken. It is assumed for the purposes of this description that we are dealing with an unknown object, i.e. assuming no prior knowledge about the object model. It is also assumed here that the object has a general free-form surface.
  • the approach is preferably to model a 3D object via a series of cross section curves. These cross section curves can be described by a set of parametric equations of B-spline curves. A criterion is proposed to select the optimal model structure for the available data points on the cross section curve.
  • For object reconstruction, two conventional approaches are volume reconstruction and surface reconstruction.
  • the volume based technique is concerned with the manipulation of the volumetric objects stored in a volume raster of voxels.
  • Surface reconstruction may be approached in one of two ways: 1) representing the surface with a mosaic of flat polygon tiles, usually triangles; and 2) representing the surface with a series of curved patches joined with some order of continuity.
  • a preferred reconstruction method for embodiments of the present invention is to model a 3D object via a series of cross section curves.
  • the object 2 is sliced into a number of cross section curves, each of which represents the local geometrical features of the object.
  • These cross section curves may be described by a set of parametric equations.
  • For reconstruction of cross section curves, compared with implicit polynomials [47] and superquadrics, the B-spline has the following main advantages:
  • the amplitude of B_j,4(t) is in the range (0.0, 1.0), and the support region of B_j,4(t) is compact and nonzero for t ∈ [u_j, u_j+4].
  • Φ̂_x = [B^T B]^(−1) B^T x   (2)
  • Φ̂_y = [B^T B]^(−1) B^T y
  • x = [x_1, . . . , x_m]^T
  • y = [y_1, . . . , y_m]^T
  • B is the m × (n+1) matrix of periodic basis function values whose i-th row is [ B̄_0,4(t_i)+B̄_n+1,4(t_i), B̄_1,4(t_i)+B̄_n+2,4(t_i), B̄_2,4(t_i)+B̄_n+3,4(t_i), B̄_3,4(t_i), . . . , B̄_n,4(t_i) ], where t_i is the chord-length parameter value of the i-th data point.
  • the chord length method may preferably be used for the parameterization of the B-spline.
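  • As an illustration of the fitting pipeline just described, the sketch below implements a uniform periodic cubic B-spline basis, chord-length parameterisation and the least-squares solution Φ̂ = (B^T B)^(−1) B^T x for a closed cross section. Uniform knots and the synthetic elliptical cross section are simplifying assumptions, not details taken from the patent; the helper functions defined here are reused by the later sketches.

```python
import numpy as np

def uniform_cubic_bspline(u):
    """Uniform cubic B-spline basis N(u), nonzero on [0, 4)."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    m = (0 <= u) & (u < 1); out[m] = u[m] ** 3 / 6
    m = (1 <= u) & (u < 2); out[m] = (-3 * u[m] ** 3 + 12 * u[m] ** 2 - 12 * u[m] + 4) / 6
    m = (2 <= u) & (u < 3); out[m] = (3 * u[m] ** 3 - 24 * u[m] ** 2 + 60 * u[m] - 44) / 6
    m = (3 <= u) & (u < 4); out[m] = (4 - u[m]) ** 3 / 6
    return out

def chord_parameters(points, n_ctrl):
    """Chord-length parameterisation of a closed contour, mapped to [0, n_ctrl)."""
    pts = np.asarray(points, dtype=float)
    closed = np.vstack([pts, pts[:1]])                    # include the closing chord
    chord = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(chord)[:-1]])
    return t / chord.sum() * n_ctrl

def design_matrix(t, n_ctrl):
    """Periodic basis matrix B with B[i, j] = N_j(t_i) for a closed curve."""
    t = np.asarray(t, dtype=float)[:, None]
    j = np.arange(n_ctrl)[None, :]
    return uniform_cubic_bspline((t - j) % n_ctrl)

def fit_closed_bspline(points, n_ctrl):
    """Least-squares control points, solving (B^T B)^-1 B^T x for x and y at once."""
    B = design_matrix(chord_parameters(points, n_ctrl), n_ctrl)
    ctrl, *_ = np.linalg.lstsq(B, np.asarray(points, dtype=float), rcond=None)
    return ctrl, B

if __name__ == "__main__":
    theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    section = np.c_[np.cos(theta), 0.6 * np.sin(theta)]   # synthetic cross section
    ctrl, B = fit_closed_bspline(section, n_ctrl=8)
    print("RMS fit error:", np.sqrt(np.mean((B @ ctrl - section) ** 2)))
```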
  • For a given set of measurement data, there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error for further data.
  • the complexity of a B-spline model of a surface is related to its control point (parameter) number [43],[48]. If the B-spline model is too complicated, the approximated B-spline surface tends to over-fit noisy measurement data. If the model is too simple, then it is not capable of fitting the measurement data, making the approximation results under-fitted. In general, both over- and under-fitted surfaces have poor generalization capability. Therefore, the problem of finding an appropriate model, referred to as model selection, is important for achieving a high level generalization capability.
  • Model selection has been studied from various standpoints in the field of statistics. Examples include information statistics [49]-[51] Bayesian statistics [52]-[54], and structural risk minimization [55].
  • the Bayesian approach is a preferred model selection method. Based on posterior model probabilities, the Bayesian approach estimates a probability distribution over an ensemble of models. The prediction is accomplished by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized.
  • θ̂_k is the maximum likelihood estimate of θ_k
  • d_k is the number of parameters of model M_k
  • H(θ̂_k) is the Hessian matrix of −log p(r | θ̂_k, M_k) evaluated at θ̂_k. The likelihood p(r | θ̂_k, M_k) of closed B-spline cross section curves can be factored into x and y components as p(r | θ̂_k, M_k) = p(x | θ̂_kx, M_k)·p(y | θ̂_ky, M_k), so that the Hessian takes the block-diagonal form
  • H(θ̂_k) = diag( B^T B / σ̂_kx²(Φ̂_kx),  B^T B / σ̂_ky²(Φ̂_ky) )
  • the first two terms σ̂_kx² and σ̂_ky² measure the prediction accuracy of the B-spline model, which increases with the complexity of the model.
  • the second term decreases and acts as a penalty for using additional parameters to model the data.
  • because σ̂_kx² and σ̂_ky² depend only on the training sample used for model estimation, they are insensitive to under-fitting or over-fitting. In the above equation, only the penalty term prevents the occurrence of over-fitting. In fact, an honest estimate of σ_kx² and σ_ky² should be based on a re-sampling procedure.
  • the available data may be divided into a training sample and a prediction sample.
  • if the model θ̂_k fitted to the training data is valid, then the variance estimated from a prediction sample should also be a valid estimate of the data noise.
  • the Bayesian approach selects the model with the largest posterior probability.
  • the likelihood p(r | θ̂_k, M_k) of a closed B-spline cross section curve may be factored into x and y components as p(r | θ̂_k, M_k) = p(x | θ̂_kx, M_k)·p(y | θ̂_ky, M_k)
  • the likelihood function of the y component may be obtained in the same way.
  • H(θ̂_k) = diag( B^T B / σ̂_kx²(Φ̂_kx),  B^T B / σ̂_ky²(Φ̂_ky) )
  • the first two terms measure the estimation accuracy of the B-spline model.
  • the smaller the variance value σ̂_k², the bigger the value of the first two terms (as the variance is much smaller than one) and therefore the higher the order (i.e. the larger the number of control points) of the model that tends to be selected.
  • the third term in the above equation could penalize over-fitting as it appears directly proportional to the number of control points used. In practice, however, it may be noted that the effect of this penalty term is insignificant compared with that of the first two terms.
  • the conventional BIC criterion is rather insensitive to the occurrence of over-fitting and tends to select more control points in the B-spline model to approximate the data points, which normally results in a model with poor generalization capability.
  • the reason for the occurrence of over-fitting with the conventional BIC criterion lies in the way the variances σ_kx² and σ_ky² are obtained.
  • a reliable estimate of σ_kx² and σ_ky² should be based on re-sampling of the data; in other words, the generalization capability of a B-spline model should be validated using another set of data points rather than the same data used in obtaining the model.
  • the available data may be divided into two sets: a training sample and a prediction sample.
  • the training sample may be used only for model estimation, whereas the prediction sample may be used only for estimating the data noise σ_kx² and σ_ky².
  • the BIC may be evaluated via the following steps:
  • the estimated variance σ̂_k² from the prediction sample should also be a valid estimate of the data noise. It may be seen that the data noise σ_k² estimated from the prediction sample may be more sensitive to the quality of the model than one directly estimated from a training sample, as the variance σ_k² estimated from the prediction sample may also have the capability of detecting the occurrence of over-fitting.
  • the posterior probability p(M_k | r) is evaluated for k = 1, 2, . . .
  • BIC Bayesian information criterion
  • the available data may be divided into two sets: a training sample and a prediction sample.
  • the training sample is used only for model estimation, whereas the prediction sample is used only for estimating data noise.
  • the BIC is evaluated via the following steps:
  • the variances estimated from the prediction sample should also be valid estimates of the data noise. If the variances found from the prediction sample are unexpectedly large, we have reason to believe that the candidate model fits the data badly. The data noise estimated from the prediction sample will thus be more sensitive to the quality of the model than that estimated directly from the training sample, as the variance estimated from the prediction sample also has the capability of detecting the occurrence of over-fitting.
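  • A minimal sketch of the model selection procedure just described, reusing chord_parameters and design_matrix from the B-spline sketch above: the contour points are split into interleaved training and prediction samples, each candidate control-point number is fitted on the training half, and the criterion uses variances estimated on the prediction half. A standard Gaussian-noise BIC form is assumed; the patent's exact constants are not reproduced.

```python
import numpy as np
# Reuses chord_parameters() and design_matrix() from the closed B-spline
# sketch given earlier in this document.

def select_control_point_number(points, candidates=range(5, 17)):
    """Pick the control-point number with the best modified-BIC score.

    The model is fitted on the training half only; the x / y residual
    variances are estimated on the held-out prediction half, which makes the
    criterion sensitive to over-fitting.
    """
    points = np.asarray(points, dtype=float)
    best_k, best_score = None, -np.inf
    for k in candidates:
        t = chord_parameters(points, k)                   # one parameterisation per k
        tr, pr = slice(0, None, 2), slice(1, None, 2)     # interleaved split
        ctrl, *_ = np.linalg.lstsq(design_matrix(t[tr], k), points[tr], rcond=None)
        resid = design_matrix(t[pr], k) @ ctrl - points[pr]
        m = resid.shape[0]
        var_x, var_y = resid[:, 0].var(), resid[:, 1].var()
        score = (-0.5 * m * (np.log(var_x) + np.log(var_y))   # prediction accuracy
                 - 0.5 * (2 * k) * np.log(m))                 # penalty, d_k = 2k
        if score > best_score:
            best_k, best_score = k, score
    return best_k

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
    section = np.c_[np.cos(a), 0.6 * np.sin(a)] + 0.01 * rng.standard_normal((120, 2))
    print("selected control point number:", select_control_point_number(section))
```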
  • the entropy function measures the information about the model, given the available data points.
  • the entropy can be used as the measurement of the uncertainty of the model parameter.
  • the posterior probability p(Φ | r) of the model parameters given the data is approximately proportional to p(Φ | r) ∝ p(r | Φ)·p(Φ) ≈ exp[ −(1/2)·(Φ − Φ̂)^T H_m (Φ − Φ̂) ]·p(Φ), where p(Φ) is the prior probability of the B-spline model parameters.
  • the entropy measures the information about the B-spline model parameters, given the data points (r_1, . . . , r_m). The more information about Φ, the smaller the entropy will be. In this work, we use the entropy as the measurement of the uncertainty of the model parameter Φ.
  • the posterior distribution of the B-spline model parameters given the data may be approximated as p(Φ | r) ∝ p(r | Φ)·p(Φ) ≈ exp[ −(1/2)·(Φ − Φ̂)^T H_m (Φ − Φ̂) ]·p(Φ), where p(Φ) is the prior probability of the B-spline model parameters.
  • the entropy measures the information about the B-spline model parameters, given the data points (r_1, . . . , r_m).
  • a new data point r m+1 may be assumed to have been collected along a contour.
  • the potential information gain is determined by incorporating the new data point r m+1 . If we move the new point r m+1 along the contour, the distribution of the potential information gain along the whole contour may be obtained.
  • starting from the Hessian H(θ̂_k) = diag( B^T B / σ̂_kx²(Φ̂_kx),  B^T B / σ̂_ky²(Φ̂_ky) ), the new data point r_m+1 will incrementally update the Hessian as H_m+1 ← H_m + diag( (1/σ_x²)·B̄_m+1^T B̄_m+1,  (1/σ_y²)·B̄_m+1^T B̄_m+1 ).
  • the resulting potential information gain of the B-spline model will change according to ⁇ E above.
  • the Next Best Viewpoint should be selected as the one that senses those new data points which give the largest possible potential information gain for the B-spline model.
  • the new data points will incrementally update the Hessian matrix.
  • the Next Best Viewpoint should be selected as the one that senses those new data points which give the largest possible potential information gain for the model.
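  • The sketch below illustrates the entropy-based prediction under stated assumptions, reusing design_matrix from the B-spline sketch: the entropy of the approximate Gaussian posterior is −(1/2)·ln det(H) up to a constant, the Hessian is updated by the block term quoted above for a candidate new point, and the parameter location with the largest predicted gain is reported. The stand-in Hessian (built from a densely and a sparsely sampled half of a contour) and the noise levels are illustrative.

```python
import numpy as np
# Reuses design_matrix() from the closed B-spline sketch earlier in this document.

def gaussian_entropy(H):
    """Entropy (up to an additive constant) of a Gaussian posterior whose
    Hessian of -log p is H:  E = -0.5 * ln det(H) + const."""
    _, logdet = np.linalg.slogdet(H)
    return -0.5 * logdet

def information_gain(H_m, B_new, sigma_x, sigma_y):
    """Predicted entropy reduction from one additional contour point.

    B_new is the 1 x k row of basis values at the candidate point's parameter;
    the update follows the block form quoted above,
    H_{m+1} = H_m + diag(B_new^T B_new / sx^2, B_new^T B_new / sy^2).
    """
    blk = B_new.T @ B_new
    zero = np.zeros_like(blk)
    H_new = H_m + np.block([[blk / sigma_x ** 2, zero], [zero, blk / sigma_y ** 2]])
    return gaussian_entropy(H_m) - gaussian_entropy(H_new)

if __name__ == "__main__":
    k, noise = 8, 0.01
    rng = np.random.default_rng(0)
    # Stand-in for an already-fitted cross section: one half of the contour is
    # densely sampled, the other half only sparsely.
    t_seen = np.concatenate([rng.uniform(0.0, 0.5 * k, 35), rng.uniform(0.5 * k, k, 5)])
    Bx = design_matrix(t_seen, k)
    H_xx = Bx.T @ Bx / noise ** 2 + 1e-6 * np.eye(k)    # weak prior for stability
    zero = np.zeros_like(H_xx)
    H_m = np.block([[H_xx, zero], [zero, H_xx]])
    t_cand = np.linspace(0.0, k, 50, endpoint=False)
    gains = [information_gain(H_m, design_matrix(np.array([t]), k), noise, noise)
             for t in t_cand]
    # The gain peaks where data are missing, as described in the text.
    print("most informative parameter location:",
          round(float(t_cand[int(np.argmax(gains))]), 2))
```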
  • the task in the view planning here is to obtain the visibility regions in the viewing space that contain the candidate viewpoints where the missing information about the 3D object can be obtained.
  • the NBV should be the viewpoint that can give maximum information about the object.
  • the view space for each data point is the set of all possible viewpoints that can see it.
  • the view space can be calculated via the following procedure:
  • m_k,j = ‖n_k · v_j‖ if r_k is visible to v_j, and 0 otherwise   (30), where v_j is the direction vector of viewpoint v_j.
  • the terminating condition is defined via the information gain.
  • the information gain will have outstanding peaks where data are missing.
  • the information gain will appear noise-like, indicating that the terminating condition is satisfied.
  • the configuration parameters can include optical settings of the camera and projector as well as the relative position and orientation between the camera and projector.
  • the planning needs to satisfy multiple constraints including visibility, focus, field of view, viewing angle, resolution, overlap, occlusion, and some operational constraints such as kinematic reachability of the sensor pose and robot-environment collision.
  • A complete cycle in the incremental modeling process is illustrated in FIG. 9. As shown in FIG. 9, in a first stage, static calibration and first-view acquisition are carried out. In a second stage, 3D reconstruction via a single view is performed. Next, 3D model registration and fusion is performed, followed by the next-viewpoint decision and the check of the terminating condition. Sensor reconfiguration follows this step and recalibration is performed. The process may then be repeated from the 3D reconstruction stage.
  • FIG. 10 shows a flow diagram of information entropy based viewpoint planning for digitization of a 3D object according to a preferred embodiment.
  • 3D data is acquired from another viewpoint.
  • multiple view range images are registered.
  • a B-spline model is selected and the model parameters of each cross section curve are estimated.
  • the uncertainty of each cross section B-spline curve is analyzed and the information gain of the object is predicted.
  • information gain about the object is mapped into a view space.
  • Candidate viewpoints are then evaluated and the NBV selected. The process may then be repeated.
  • the candidate viewpoints may be represented in a tessellated spherical view space by recursively subdividing each triangular facet of an icosahedron. If we assume that the view space is centered at the object, and its radius is equal to an a priori specified distance from the sensor to the object, each viewpoint may be represented by pan-tilt angles θ ([−180°, 180°]) and φ ([−90°, 90°]), denoted as v(θ, φ).
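  • A small sketch of the viewpoint representation v(θ, φ): a sensor position on a sphere of given radius centred on the object, looking toward the centre. For brevity, the pan-tilt angles are discretised on a regular grid rather than by recursive icosahedron subdivision as described above; the radius and step size are illustrative values.

```python
import numpy as np

def viewpoint_position(theta_deg, phi_deg, radius, center=np.zeros(3)):
    """Sensor position and viewing direction for a viewpoint v(theta, phi).

    theta is the pan angle in [-180, 180] deg and phi the tilt angle in
    [-90, 90] deg; the view space is a sphere of the given radius centred on
    the object, and the sensor always looks at the centre.
    """
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    offset = radius * np.array([np.cos(p) * np.cos(t),
                                np.cos(p) * np.sin(t),
                                np.sin(p)])
    position = center + offset
    direction = (center - position) / radius      # unit vector toward the object
    return position, direction

def tessellate_view_space(step_deg=30.0):
    """Discretise the continuous spherical viewing space into candidate viewpoints.

    For brevity the pan / tilt angles are sampled on a regular grid; the text
    above instead subdivides the triangular facets of an icosahedron, which
    gives a more uniform distribution of candidates.
    """
    thetas = np.arange(-180.0, 180.0, step_deg)
    phis = np.arange(-90.0 + 0.5 * step_deg, 90.0, step_deg)
    return [(th, ph) for th in thetas for ph in phis]

if __name__ == "__main__":
    views = tessellate_view_space(45.0)
    pos, d = viewpoint_position(*views[0], radius=500.0)   # radius is illustrative
    print(len(views), "candidate viewpoints; first at", np.round(pos, 1),
          "looking along", np.round(d, 2))
```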
  • the view space V k may be calculated via the following procedure:
  • the measurement matrix M may then be constructed.
  • the column vector M_Rj of the measurement matrix corresponds to the set R_j of points visible from viewpoint v_j, while the row vector M_k,V corresponds to the view space V_k of the next best point q_k.
  • View space is a set of 3D positions where the sensor (vision system) takes measurements. If we assume that the 3D object is within the field of view and the depth of view of the vision system, and that the optical settings of the vision system are fixed, then, based on these assumptions, the parameters of the vision system to be planned are the viewing positions of the sensor.
  • the candidate viewpoints are represented in a spherical viewing space.
  • the viewing space is usually a continuous spherical surface. To reduce the number of viewpoints used in practice, it is necessary to discretize the surface by some kind of tessellation.
  • the fundamental task in the view planning here is to obtain the visibility regions in the viewing space that contain the candidate viewpoints where the missing information about the 3D object can be obtained without occlusions.
  • the NBV should be the viewpoint that can give maximum information about the object.
  • the view space V_k for each data point r_k (k = 1, 2, . . . ) is the set of all possible viewpoints that can see r_k.
  • the view space V k can be calculated via the following procedure:
  • m_k,j = ‖n_k · v_j‖ if r_k is visible to v_j, and 0 otherwise, where v_j is the direction vector of viewpoint v_j.
  • a global measure of the information gain I(θ, φ) is used as the criterion, summed over all visible surface points seen under this view of the sensor.
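  • A sketch of the selection step under simplifying assumptions: the measurement matrix m_k,j = ‖n_k·v_j‖ for visible points, a per-viewpoint sum of the predicted information gain, and the arg-max over candidate viewpoints. The random normals, the crude visibility test and the gain values are placeholders for the quantities produced by the preceding steps.

```python
import numpy as np

def measurement_matrix(normals, view_dirs, visible):
    """m[k, j] = |n_k . v_j| when point r_k is visible from viewpoint v_j, else 0.

    normals:   (K, 3) unit surface normals at the data points
    view_dirs: (J, 3) unit direction vectors of the candidate viewpoints
    visible:   (K, J) boolean visibility flags (the occlusion test is omitted)
    """
    dots = normals @ view_dirs.T
    return np.where(visible, np.abs(dots), 0.0)

def next_best_view(info_gain, normals, view_dirs, visible):
    """Sum the predicted information gain of every point seen from each
    candidate viewpoint (the global measure I(theta, phi)) and return the
    index of the viewpoint with the maximum total gain."""
    M = measurement_matrix(normals, view_dirs, visible)
    seen = (M > 0).astype(float)                  # visibility indicator per viewpoint
    scores = seen.T @ np.asarray(info_gain, dtype=float)
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, J = 200, 24
    normals = rng.standard_normal((K, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    view_dirs = rng.standard_normal((J, 3))
    view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
    visible = (normals @ view_dirs.T) > 0.3       # crude stand-in visibility test
    info_gain = rng.random(K)                     # predicted gain per data point
    best, scores = next_best_view(info_gain, normals, view_dirs, visible)
    print("next best viewpoint:", best, "total gain:", round(float(scores[best]), 2))
```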
  • one or more preferred embodiments of the present invention provide a viewpoint planning method that incrementally reduces the uncertainty of a closed B-spline curve. Also proposed is an improved BIC criterion for model selection, which accounts well for the acquired data. By representing the object with a series of relatively simple cross section curves, it is possible to define entropy as a measure of uncertainty to predict the information gain for a cross section B-spline model. Based on that, it is possible to establish the View Space Visibility and select the viewpoint with maximum visibility as the Next Best View.
  • One or more preferred embodiments of the invention may have particular advantages in that by using encoded patterns projected over an area on the object surface, high speed 3D imaging may be achieved. Also, automated self-recalibration of the system may be performed when the system's configuration is changed or perturbed. In a further preferred embodiment, uncalibrated 3D reconstruction may be performed. Furthermore, in a preferred embodiment real Euclidean reconstruction of a 3D surface may be achieved.

Abstract

A system and method for measuring and surface reconstruction of a 3D image of an object comprises a projector arranged to project a pattern onto a surface of an object to be imaged; and a processor stage arranged to examine distortion or distortions produced in the pattern by the surface. The processor stage is arranged to convert by, for example, a triangulation process the distortion or distortions produced in the pattern by the surface to a distance representation representative of the shape of the surface. The processor stage is also arranged to reconstruct electronically the surface shape of the object.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a system and method for 3D measurement and surface reconstruction of an object, and in particular to a reconfigurable vision system and method.
  • BACKGROUND OF THE INVENTION
  • In many practical applications, such as reverse engineering, robotic exploration/navigation in cluttered environments, model construction for virtual reality, human body measurements, and advanced product inspection and manipulation by robots, the automatic measurement and reconstruction of 3D shapes with high speed and accuracy is of critical importance. Currently, the devices widely used in industry for obtaining 3D measurements involve the mechanical scanning of a scene, for example in a laser scanning digitizer, which inevitably makes the measurement a slow process. Some advanced active vision systems using structured lighting have been explored and built. However, the existing systems lack the ability to change their settings, to calibrate by themselves and to reconstruct the 3D scene automatically.
  • To reconstruct a complete and accurate 3D model of an unknown object, two fundamental issues must be addressed. The first issue is how to acquire the 3D data for reconstructing the object surface. Currently, a laser range finder/scanner [1] is widely used for 3D surface data acquisition in industry. However, due to the mechanical scanning involved, the acquisition speed is limited. To increase the efficiency in the 3D imaging, pattern projections can be employed [2]. Portable 3D imaging systems based on a similar principle have also been designed recently.
  • The second issue is how to determine the next viewpoint for each view so that all the information about the object surface can be acquired in an optimal way. This is also known as the NBV (Next Best View) problem, which determines the sensor direction (or pose) in the reconstruction process. The problem of viewpoint planning [3] for digitalization of 3D objects can be treated in different ways depending on whether or not the object's geometry is known beforehand [4,5]. For an unknown object, since the number of viewpoints and their viewing directions are unknown or cannot be determined prior to data acquisition, conventional 3D reconstruction processes typically involve an incremental iterative cycle of viewpoint planning, digitizing, registration and view integration, conventionally based on the partial model reconstructed thus far. Based on the partial model reconstructed so far, the NBV algorithm then provides quantitative evaluations of the suitability of the remaining viewpoints. The evaluation for each viewpoint is based on all visible surface elements of the object that can be observed. The viewpoint with the highest visibility (evaluation score) is selected as the NBV.
  • In general, there are two fundamental problems to be solved when determining the Next Best View. The first problem is to determine the areas of the object which need to be sensed next and the second is to determine how to position the sensor to sample those areas. As there is no prior knowledge about the object, it is impossible to obtain a complete description of an object when occlusion occurs. Therefore, it is not generally possible to obtain precisely the invisible portions from either the current viewpoint or the acquired partial description of the object, so only an estimation of the Next Best View may be derived.
  • Various Next Best View algorithms have been proposed to date. For example, Connolly [6] uses an octree to represent object space; the regions that have been scanned are labeled as seen, regions between the sensor and the surface are labeled as empty, and all other regions are labeled as unseen. A set of candidate viewpoints is enumerated at fixed increments around the object. The Next Best View is calculated based on the evaluation of the visibility of each candidate viewpoint. This algorithm is computationally expensive and it does not incorporate the sensor geometry.
  • Maver and Bajcsy [7] presented a solution to the NBV problem for a specific scanning setup consisting of an active optical range scanner and a turntable. In that work, unseen regions of the objects are represented as polygons. Visibility constraints for the sensor to view the unseen regions are computed from the polygon boundaries. However, this solution is limited to a particular sensor configuration.
  • Pito [8] proposes an approach based on an intermediate position space representation of both sensor visibility constraints and unseen portions of the viewing volume. The NBV is determined as the sensor position that maximizes the unseen portion of the object volume. This approach has been demonstrated to achieve automatic viewpoint planning for a range sensor constrained to move on a cylindrical path around the object.
  • Whaite and Ferrie [9] use the superellipsoid model to represent an object and define a shell of uncertainty. The Next Best View is selected at the sensor position where the uncertainty of the current model fitted to the partial data points is the largest. This algorithm enables uncertainty-driven exploration of an object to build a model. However, the superellipsoid cannot accurately represent objects with a complex surface shape. Furthermore, surface visibility constraints were not incorporated in the viewpoint planning process.
  • Reed and Allen [10] propose a target-driven viewpoint planning method. The volume model is used to represent the object by extrusion and intersection operations. The constraints, such as sensor imaging constraints, model occlusion constraints and sensor placement constraints, are also represented as solid modeling volumes and are incorporated into the viewpoint planning. The algorithm involves expensive computation on the solid modeling and intersection operation.
  • Scott [11] considers viewpoint planning as integer programming. Given a rough model of an unknown object, a sequential set of viewpoints is calculated to cover all surface patches of the object subject to a registration constraint. However, the object must be scanned before viewpoint planning to obtain this prior knowledge about the unknown object.
  • In many applications, a vision sensor often needs to move from one place to another and change its configuration for perception of different object features. A dynamic reconfigurable vision sensor is useful in such applications to provide an active view of the features.
  • Active robot vision, in which a vision sensor can move from one place to another for performing a multi-view vision task, is an active research area. A traditional vision sensor with fixed structure is often inadequate for the robot to perceive the object's features in an uncertain environment as the object distance and size are unknown before the robot sees the object. A dynamically reconfigurable sensor may assist the robot in controlling the configuration and gaze at the object surfaces. For example, with a structured light system, the camera needs to see the object surface illuminated by the projector, to perform the 3D measurement and reconstruction task.
  • The system must be calibrated and traditionally, the calibration task is accomplished statically by manual operations. A calibration target/device is conventionally designed with a precision calibration fixture to provide a number of points whose world coordinates are precisely known [12]-[14]. With a planar calibration pattern, the target needs to be placed at several accurately known positions in front of the vision sensor. For dynamically reconfigurable vision systems, the vision system needs to have the ability of self-recalibration without requiring external 3D data provided by a precision calibration device.
  • Self-calibration of vision sensors has been actively researched in the last decade. However, most of the conventionally available methods were developed for calibration of passive vision systems such as stereo vision and depth-from-motion [15]-[22]. Conventionally these systems require dedicated devices for calibrating the intrinsic and extrinsic parameters of the cameras. Due to the special calibration target needed, such a calibration is normally carried out off-line before a task begins. In many practical applications, on-line calibration during the execution of a task is needed. Over the years, efforts have been made in research to achieve efficient on-line calibrations.
  • Maybank and Faugeras [23] suggested the calibration of a camera using image correspondences in a sequence of images from a moving camera. The kinds of constructions that could be achieved from a binocular stereo rig were further addressed in [24]. It was found that a unique projective representation of the scene up to an arbitrary projective transformation could be constructed if five arbitrary correspondences were chosen and an affine representation of the scene up to an arbitrary affine transformation could be constructed if four arbitrary correspondences were adopted.
  • Hartley [25] gave a practical algorithm for Euclidean reconstruction from several views with the same camera, based on Levenberg-Marquardt minimization. A new approach based on stratification was introduced in [26].
  • In this context, much work has been conducted in Euclidean reconstruction up to a transformation. Pollefeys et al [27] proposed a method to obtain a Euclidean reconstruction from images taken with an uncalibrated camera with variable focal lengths. This method is based on an assumption that although the focal length is varied, the principal point of the camera remains unchanged. This assumption limits the range of applications of this method.
  • A similar assumption was also made in the investigations in [28,29]. In practice, when the focal length is changed (e.g. by zooming), the principal point may vary as well. In the work by Heyden and Astrom [30], they proved that it is possible to obtain Euclidean reconstruction up to a scale using an uncalibrated camera with known aspect ratio and skew parameters of the camera. A special case of a camera with Euclidean image plane was used for their study. A crucial step in the algorithm is the initialization which will affect the convergence. How to obtain a suitable initialization was still an issue to solve [31]. Kahl [32] presented an approach to self-calibration and Euclidean reconstruction of a scene, assuming an affine model with zero skew for the camera. Other parameters such as the intrinsic parameters could be unknown or varied. The reconstruction which needed a minimum of three images was an approximation and was up to a scale. Pollefeys et al gave the minimum number of images needed for achieving metric reconstruction, i.e. to restrict the projective ambiguity to a metric one according to the set of constraints available from each view [31].
  • The above-mentioned reconstruction methods are based on passive vision systems. As a result, they suffer from the ambiguity of correspondences between the camera images, which is a difficult problem to solve especially when free-form surfaces [33] are involved in the scene. However, to avoid this problem, active vision may be adopted. Structured light or pattern projection systems have been used for this purpose. To reconstruct precisely a 3D shape with such a system, the active vision system consisting of a projector and a camera needs to be carefully calibrated [34, 35]. The traditional calibration procedure normally involves two separate stages: camera calibration and projector calibration. These individual calibrations are carried out off-line and they have to be repeated each time the setting is changed. As a result, the applications of active vision systems are limited, since the system configuration and parameters must be kept unchanged during the entire measurement process.
  • For active vision systems using structured-light, the existing calibration methods are mostly based on static and manual operations. The available camera self-calibration methods cannot be applied directly to structured-light systems as they need more than two views for the calibration. Recently, there has been some work on self-calibration [36]-[40] of structured-light systems. Fofi et al. [36] investigated the self-calibration of structured-light systems, but the work was based on the assumption that "a square projected onto a planar surface will most generally give a quadrilateral shape in the form of a parallelogram".
  • Jokinen [37] studied a self-calibration method based on multiple views, where the object is moved by steps. Several maps were acquired for the registration and calibration. The limitation of this method is that the object must be placed on a special device so that it can be precisely moved.
  • Using a cube frame, Chu et al. [38] proposed a calibration free approach for recovering unified world coordinates.
  • Chen and Li [39, 40] recently proposed a self-recalibration method for a structured-light system allowing changes in the system configuration in two degrees of freedom.
  • In some applications, such as seabed metric reconstruction with an underwater robot, when the size or distance of the scene changes, the configuration and parameters of the vision system need to be changed to optimize the measurement. In such applications, uncalibrated reconstruction is needed. In this regard, efforts have been made in recent research. Fofi et al [41] studied the Euclidean reconstruction by means of an uncalibrated structured light system with a colour-coded grid pattern. They modeled the pattern projector as a pseudo camera and then the whole system as a two-camera system. Uncalibrated Euclidean reconstruction was performed with varying focus, zoom and aperture of the camera. The parameters of the structured light sensor were computed according to the stratified algorithm [26], [42]. However, it was not clear how many of the parameters of the camera and projector could be self-determined in the uncalibrated reconstruction process.
  • Thus, there is a need for a reconfigurable vision system and method for 3D measurement and reconstruction in which recalibration may be conducted without having to use special calibration apparatus as required by traditional calibration methods.
  • SUMMARY OF THE INVENTION
  • In general terms, the present invention provides a method and system for the measurement and surface reconstruction of a 3D image of an object comprising projecting a pattern onto the surface to be imaged, examining distortion produced in the pattern by the surface, converting for example by a triangulation process the distortions produced in the pattern by the surface to a distance representation representative of the shape of the surface. The surface shape of the 3D image may then be reconstructed, for example electronically such as digitally, for further processing.
  • In a preferred embodiment, the object is firstly sliced into a number of cross section curves, with each cross-section to be reconstructed by a closed B-spline curve. Then, a Bayesian information criterion (BIC) is applied for selecting the control point number of B-spline models. Based on the selected model, entropy is used as the measurement of uncertainty of the B-spline model to predict the information gain for each cross section curve. After obtaining the predicted information gain of all the B-spline models, the information gain of the B-spline models may be mapped into a view space. The viewpoint that contains maximal information gain for the object is then selected as the Next Best View. A 3D surface reconstruction may then be carried out.
  • An advantage of one or more preferred embodiments of the invention is that the 3D information of a scene may be acquired at high speed by taking a single picture of the scene.
  • With this method, a complex 3D shape may be divided into a series of cross section curves, each of which represents the local geometrical feature of the object. These cross section curves may be described by a set of parametric equations. For reconstruction purposes using parametric equations, the most common methods include spline functions (e.g. the B-spline) [43], implicit polynomials [44], [45] and superquadrics (e.g. the superellipsoid) [46]. Compared with implicit polynomials and superquadrics, the B-spline has the following main advantages:
  • 1. Smoothness and continuity, which allows a curve to consist of a concatenation of curve segments, yet be treated as a single unit;
  • 2. Built-in boundedness, a property which is lacking in implicit or explicit polynomial representations whose zero set can shoot to infinity;
  • 3. Parameterized representation, which decouples the x and y coordinates, enabling them to be treated separately;
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred embodiments of the invention will now be described by way of example and with reference to the accompanying drawings, in which:
  • FIG. 1 is a schematic block diagram of an active vision system according to an embodiment of the invention;
  • FIG. 2 is a schematic diagram showing the geometrical relationships between the components of the vision system of the embodiment of FIG. 1;
  • FIG. 3 is a diagram illustrating the illumination projection in the embodiment of FIG. 1;
  • FIG. 4 is a block schematic illustrating the color encoding for identification of coordinates on the projector of the system of FIG. 1;
  • FIG. 5 a illustrates an ideal step illumination curve of the blur area and its irradiant flux in the system of FIG. 1;
  • FIG. 5 b illustrates a graph of illumination against distance showing an out of focus blur area and an irradiated area for the system of FIG. 1;
  • FIG. 6 illustrates a graph of illumination against distance showing the determination of the blur radius for the system of FIG. 1;
  • FIG. 7 illustrates a graph of the variation with distance of the point spread function showing the determination of the best-focused location;
  • FIG. 8 is a schematic diagram of an apparatus incorporating the system of FIG. 1;
  • FIG. 9 is a flow diagram of a view planning strategy according to an embodiment of the invention;
  • FIG. 10 is a flow diagram of information entropy calculation for viewpoint planning according to an embodiment of the invention;
  • FIG. 11 a is a schematic illustration of a view space with Q=16, and
  • FIG. 11 b is a schematic illustration of a viewpoint representation.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
• FIG. 1 shows an active vision system according to a preferred embodiment of the invention. The system comprises an LCD projector 1 adapted to cast a pattern of light onto an object 2 which is then viewed by a camera and processor unit 3. The relative position between the projector 1 and the camera in the camera and processor unit 3 has six degrees of freedom (DOF). When a beam of light is cast from the projector 1 and viewed obliquely by the camera, the distortions in the beam line may be translated into height variations via triangulation, provided the system, including the relative position between the projector 1 and the camera, has been calibrated. The vision system may be self-recalibrated automatically if and when this relative position is changed. The camera and processor unit 3 preferably includes a processor stage, as well as the camera, for processing the observed distortions in the projected pattern caused by the object 2 and associated data and for enabling and carrying out reconstruction. In a further preferred embodiment, the processor stage may be remotely located from the camera and may be connectable thereto to receive the data for processing and carrying out the reconstruction process.
  • FIG. 2 shows the geometrical relationship between the projector 1, the object 2 and the camera of the system of FIG. 1, and, in particular, the pattern projected by the projector 1 onto the object 2 and viewed by the camera 3.
  • For the camera,
    $x_c = P_c w_c$,
    where
• $x_c = [\lambda x_c \ \ \lambda y_c \ \ \lambda]^T$ are the coordinates on the image sensor plane, $\lambda \in \mathbb{R}$ is an uncertain scalar,
• $w_c = [X_c \ Y_c \ Z_c \ 1]^T$ are the 3D coordinates of an object point from the view of the camera (FIG. 2), and
• $P_c$ is the 3×4 perspective matrix
    $$P_c = \begin{bmatrix} v_c & k_{xy} & x_c^0 & 0 \\ 0 & s_{xy} v_c & y_c^0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}_{3 \times 4},$$
    where
• $v_c$ is the distance between the image plane and the camera optical center,
• $s_{xy}$ is the ratio between the horizontal and vertical pixel cell sizes,
• $k_{xy}$ represents the placement perpendicularity of the cell grids,
• and $(x_c^0, y_c^0)$ is the center offset on the camera sensor.
  • Similarly, for the projector,
    $x_p = P_p w_p$,
    where
• $x_p = [\kappa x_p \ \ \kappa y_p \ \ \kappa]^T$ are the coordinates on the projector plane,
• $\kappa \in \mathbb{R}$ is also an uncertain scalar,
• $w_p = [X_p \ Y_p \ Z_p \ 1]^T$ are the 3-D coordinates of the object point based on the view of the projector (see FIG. 2), and
• $P_p$ is the inverse perspective matrix of the projector
    $$P_p = \begin{bmatrix} v_p & 0 & x_p^0 & 0 \\ 0 & v_p & y_p^0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}_{3 \times 4}.$$
• The relationship between the camera coordinate system and the projector coordinate system is
    $$w_p = M w_c = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} w_c,$$
    where $M$ is a 4×4 matrix and $t$ is the translation vector,
    $t = s\,[t_x \ t_y \ t_z]^T.$
• Here $s$ is a scaling factor normalizing $t_x^2 + t_y^2 + t_z^2 = 1$.
    $x_p = P_p w_p = P_p M w_c.$
• Let
    $$H = P_p M = \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix}_{3 \times 4},$$
    where $h_1$, $h_2$, and $h_3$ are 4-dimensional vectors. We have
    $\kappa x_p = h_1 w_c$, $\kappa y_p = h_2 w_c$ and $\kappa = h_3 w_c$.
• So
    $(x_p h_3 - h_1) w_c = 0.$
• Then the following can be derived:
    $$\begin{bmatrix} P_c \\ x_p h_3 - h_1 \end{bmatrix} w_c = \begin{bmatrix} x_c \\ 0 \end{bmatrix}.$$
• Denote $x_c^+ = [x_c^T, 0]^T$ and
    $$Q = \begin{bmatrix} P_c \\ x_p h_3 - h_1 \end{bmatrix} = \begin{bmatrix} q_{11} & q_{12} & q_{13} & 0 \\ 0 & q_{22} & q_{23} & 0 \\ 0 & 0 & 1 & 0 \\ q_{41} & q_{42} & q_{43} & q_{44} \end{bmatrix}.$$
• The 3-dimensional world position of a point on the object surface can then be determined by
    $w_c = Q^{-1} x_c^+.$
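• By way of illustration only, the following Python sketch (using NumPy) carries out this triangulation step; the numerical values of $P_c$, $H$ and the correspondence are hypothetical placeholders rather than values from the patent.

```python
import numpy as np

def triangulate_point(P_c, H, x_c, x_p):
    """Recover the 3D point w_c seen at camera coordinates x_c = (xc, yc)
    and illuminated by the projector coordinate x_p.

    P_c : 3x4 camera perspective matrix
    H   : 3x4 matrix H = P_p @ M (projector matrix times relative pose)
    """
    h1, h2, h3 = H                            # rows of H
    extra_row = x_p * h3 - h1                 # (x_p h3 - h1) . w_c = 0
    Q = np.vstack([P_c, extra_row])           # 4x4 system matrix Q
    x_c_plus = np.array([x_c[0], x_c[1], 1.0, 0.0])
    w = np.linalg.solve(Q, x_c_plus)          # w ~ Q^{-1} x_c^+
    return w[:3] / w[3]                       # dehomogenize to (Xc, Yc, Zc)

# Hypothetical example values, for illustration only
P_c = np.array([[800.0, 0.0, 320.0, 0.0],
                [0.0, 800.0, 240.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
H = np.array([[780.0, 5.0, 300.0, 50.0],
              [3.0, 780.0, 250.0, 10.0],
              [0.0, 0.0, 1.0, 0.2]])
print(triangulate_point(P_c, H, x_c=(350.0, 260.0), x_p=310.0))
```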
• As mentioned above, the relative position of the camera 3 and the projector 1 may be changed dynamically during run-time of the system. As the camera (which is acting as a sensor) is reconfigurable during run-time, it should be automatically recalibrated for 3-D perception tasks. Here, recalibration means that the camera (sensor) has been calibrated before installation in the system, but must be calibrated again whenever the relative configuration changes. It is assumed for present purposes that the intrinsic parameters such as the focal lengths, scale factors and distortion coefficients remain unchanged, whereas the extrinsic parameters describing the positions and orientations between the camera and projector have to be determined during the run-time of the system.
  • System Reconfiguration and Automatic Recalibration
• The whole calibration of the structured light system of FIG. 1 may be divided into two parts. The first part concerns the calibration of the intrinsic parameters, including the focal lengths and optical centers; this is called static calibration and may be performed off-line in a static manner. The second part deals with calibration of the extrinsic parameters, i.e. the relative position of the camera 3 and the projector 1, and is hereinafter referred to as self-recalibration. The static calibration needs to be performed only once. The self-recalibration is thus the more important part and needs to be performed online whenever the system configuration is changed during a measurement task.
• The static calibration yields Pc and Pp, the perspective projection matrices of the camera and the projector respectively relative to a global coordinate frame. The dynamic self-recalibration task requires the determination of the relative position M between the camera 3 and the projector 1. There are 6 unknown parameters: three for the 3-axis rotation and three for the 3-dimensional translation (as shown in FIG. 1).
• For a point on an object surface, it is known that its coordinates on the camera's sensor plane, $x_c = [\lambda x_c \ \ \lambda y_c \ \ \lambda]^T$, and on the projector's source plane, $x_p = [\kappa x_p \ \ \kappa y_p \ \ \kappa]^T$, are related via the following:
    $x_p^T F x_c = 0,$
    where $F$ is a 3×3 essential matrix:
    $$F = sRS = s \begin{bmatrix} F_{11} & F_{12} & F_{13} \\ F_{21} & F_{22} & F_{23} \\ F_{31} & F_{32} & F_{33} \end{bmatrix}.$$
• Here $R$ is a 3-axis rotation matrix and $S$ is the skew-symmetric matrix
    $$S = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix}$$
    based on the translation vector $t$.
• The recalibration task is to determine the 6 independent parameters in $R$ and $t$ (FIG. 2). For each surface point, $x_p^T F x_c = 0$ may be expressed as:
    $a_i^T f = 0.$
• Here
    $f = [F_{11} \ F_{21} \ F_{31} \ F_{12} \ F_{22} \ F_{32} \ F_{13} \ F_{23} \ F_{33}]^T,$
    $a_i = [x_c x_p \ \ x_c y_p \ \ x_c \ \ y_c x_p \ \ y_c y_p \ \ y_c \ \ x_p \ \ y_p \ \ 1]^T,$
    where $(x_c, y_c)$ are the coordinates on the camera's sensor and $(x_p, y_p)$ are the coordinates on the projector's LCD/DMD.
  • The projected patterns can be in black/white (b/w) or in colors. In either case, a coding method is in general needed. For b/w projections, gray codes can be used with the stripe light planes, which allows robust identifications of the stripe index (FIG. 3).
• In a preferred embodiment of the invention, if an illumination pattern with colour-encoded grids is used, a cell's coordinates on the projector 1 can be determined immediately from its own colour together with the colours of its adjacent neighbouring cells when the source pattern is projected. Via a table look-up, each cell's position can be uniquely identified. An example of such a coded colour pattern is shown in FIG. 4. A method of computing the values of blur diameters from an image has been proposed in [39], [40], which are incorporated herein by reference. In a preferred embodiment, the system may comprise a Colour Coded Pattern Projection system, a CCD camera, and a mini-platform for housing the components and providing the relative motion in 6 DOF between the projector 1 and the camera 3. The method for automatic system recalibration and uncalibrated 3D reconstruction according to one or more preferred embodiments of the present invention may be implemented using such a system. With the adaptively adjustable sensor settings, this system provides enhanced automation and performance for the measurement and surface reconstruction of 3D objects.
  • For n points observed, an n×9 matrix A can be obtained as the calibration data:
    A=[a1, a2, . . . , an]T,
    Af=0,
• It is assumed that the structured light vision system has 6 DOF in its relative pose, i.e. three position parameters and three orientation parameters, between the camera 3 and the projector 1. The focal lengths of the projector 1 and the camera 3 are assumed to have been obtained in a previous static calibration stage. The optical centers are fixed or can be described as a function of the focal length. The projector 1 generates grid patterns with horizontal and vertical coordinates so that the projector's LCD/DMD can be considered an image of the scene. The relative position between the camera 3 and the projector 1 may be described by
    $$\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} - R t.$$
  • If there are n (n>5) points observed on a plane and they do not lie on the same line, we have proved that the rank of the calibration matrix A is six, i.e.
    Rank (A)=6.
• Consider the following 6-by-6 matrix, which is a sub-matrix of matrix A:
    $A_6 = [ra_1,\ ra_2,\ ra_3,\ ra_4,\ ra_5,\ ra_6]^T,$
    where $ra_i = [1 \ \ x_{ci} \ \ y_{ci} \ \ x_{pi} \ \ y_{pi} \ \ x_{ci} y_{ci}]^T$ and $x_{ci}$ is the x value of the i-th point projected on the camera coordinate system.
• The matrix $A_6$ can be diagonalized by basic row operations:
    $D(A_6) = \operatorname{diag}(1,\ x_{c,2} - x_{c,1},\ \ldots)$
• Since $x_{c,i} \neq x_{c,j}$, $y_{c,i} \neq y_{c,j}$, $x_{p,i} \neq x_{p,j}$, $y_{p,i} \neq y_{p,j}$, it can be proved that every element in $D(A_6)$ is non-zero if no four points of the sampled data lie on the same line. Therefore $\det(A_6) \neq 0$ and
    $\operatorname{Rank}(A) \geq \operatorname{Rank}(A_6) = 6.$
• On the other hand, based on the projection models of the camera 3 and the projector 1, the coordinates of a surface point projected on the camera (sensor) may be given by (X/Z, Y/Z). For a point, $(x_c, y_c)$ and $(x_p, y_p)$ are related by:
    $$Z_p \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = Z_c R \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} - R t,$$
    where $Z_c$ and $Z_p$ are the depth values based on the view of the camera 3 and the projector 1, respectively.
• For the camera 3 and projector 1, the scene plane may be defined as follows:
    $$Z = \frac{C_3}{1 - C_1 x - C_2 y}.$$
• Let $r_1$, $r_2$, and $r_3$ be the three rows in $R$; then
    $$\frac{C_3^c}{1 - C_1^c x_p - C_2^c y_p} \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \frac{C_3^c}{1 - C_1^c x_c - C_2^c y_c} \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} - \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} t,$$
    which contains three equations and may therefore be considered equivalent to the following system:
    $$\begin{cases} \tau_{11} x_c x_p + \tau_{12} x_c y_p + \tau_{13} x_p y_c + \tau_{14} y_c y_p + \tau_{15} x_c + \tau_{16} x_p + \tau_{17} y_c + \tau_{18} y_p + \tau_{19} = 0 \\ \tau_{21} x_c x_p + \tau_{22} x_c y_p + \tau_{23} x_p y_c + \tau_{24} y_c y_p + \tau_{25} x_c + \tau_{26} x_p + \tau_{27} y_c + \tau_{28} y_p + \tau_{29} = 0 \\ \tau_{31} x_c x_p + \tau_{32} x_c y_p + \tau_{33} x_p y_c + \tau_{34} y_c y_p + \tau_{35} x_c + \tau_{36} x_p + \tau_{37} y_c + \tau_{38} y_p + \tau_{39} = 0 \end{cases}$$
    or
    $\Gamma a = 0,$
    where $\Gamma$ is a 3-by-9 matrix, $a$ is as described above, and $\{\tau_{ij}\}$ are constants.
  • It can be proved that there is no linear relationship among the above three equations, i.e. rank(Γ)=3.
• Considering 9 points as the calibration data, matrix A is 9-by-9 in size. Since it is constrained by $\Gamma a = 0$ with $\operatorname{rank}(\Gamma) = 3$, the maximum rank of A is 6.
  • Therefore the rank of matrix A must be 6.
• The general solution of the equation $a_i^T f = 0$ has the form
    $f = \xi_1 f_1 + \xi_2 f_2 + \xi_3 f_3,$
    where $\xi_1$, $\xi_2$, and $\xi_3$ are real numbers, each $f_i$ is a 9-dimensional vector, and $[f_1, f_2, f_3]$ is the null-basis of A.
• Using singular value decomposition (SVD) of $B = A^T A$, we have
    $B = U D V^T,$
    where $A^T A$ is a 9×9 matrix, $D$ is a diagonal matrix with the singular values in non-decreasing order, and $U$ and $V$ are orthogonal matrices.
• Then, $f_1$, $f_2$, and $f_3$ are the three vectors in $V$ corresponding to the least singular values. Theoretically, if there is no noise, matrix B is exactly as just described, i.e. of rank 6. In such a case, there would be three vanishing singular values in the diagonal matrix D, and the sum and/or mean of squared errors (SSE and/or MSE) would be zero, since the vector f lies exactly in the null-space of B.
• However, in a practical system there may be fewer than 3 vanishing singular values in the matrix D, as the matrix B can be perturbed. Since the data come from real measurements, B may have a rank of 7 or even higher. In such a case it is still possible to take the three column vectors of V corresponding to the least values in D as the basis vectors. This is still the best choice, in the sense that B (with rank B > 6) is effectively replaced by some other matrix C with rank C = 6, where C is the "nearest" matrix of rank 6 to B in terms of the spectral norm and the Frobenius norm.
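• As an illustrative sketch of this step (not the patented implementation), the rows $a_i$ can be assembled as defined above and the three basis vectors extracted with NumPy's SVD; the correspondence data are assumed to be available from the decoded pattern.

```python
import numpy as np

def null_basis_from_correspondences(cam_pts, proj_pts):
    """Build the n x 9 calibration matrix A from correspondences
    (x_c, y_c) <-> (x_p, y_p) and return the three right-singular
    vectors of B = A^T A associated with the smallest singular values."""
    rows = []
    for (xc, yc), (xp, yp) in zip(cam_pts, proj_pts):
        rows.append([xc * xp, xc * yp, xc,
                     yc * xp, yc * yp, yc,
                     xp, yp, 1.0])
    A = np.asarray(rows)                       # n x 9
    B = A.T @ A                                # 9 x 9
    U, D, Vt = np.linalg.svd(B)                # singular values in descending order
    f1, f2, f3 = Vt[-3], Vt[-2], Vt[-1]        # basis for the (approximate) null-space
    return f1, f2, f3
```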
• Define
    $f = H k,$
    where $k = [\xi_1 \ \xi_2 \ \xi_3]^T$ and $H = [f_1 \ f_2 \ f_3] = [H_u^T, H_m^T, H_l^T]^T$. $H_u$, $H_m$, and $H_l$ are 3×3 matrices, each formed by three rows of $H$.
• The above can be written as
    $H_u k = F_{c1}$, $H_m k = F_{c2}$, and $H_l k = F_{c3}$,
    where $F_{c1}$, $F_{c2}$, and $F_{c3}$ are the three columns of $F$. Therefore,
    $$G = F^T F = \begin{bmatrix} k^T H_u^T H_u k & k^T H_u^T H_m k & k^T H_u^T H_l k \\ k^T H_m^T H_u k & k^T H_m^T H_m k & k^T H_m^T H_l k \\ k^T H_l^T H_u k & k^T H_l^T H_m k & k^T H_l^T H_l k \end{bmatrix}.$$
• As $R$ is orthogonal, $F^T F$ can also be expressed as
    $$G = S^T R^T R S = S^T S = \begin{bmatrix} 1 - t_x^2 & -t_x t_y & -t_x t_z \\ -t_x t_y & 1 - t_y^2 & -t_y t_z \\ -t_x t_z & -t_y t_z & 1 - t_z^2 \end{bmatrix}.$$
• The three unknowns of $k = [\xi_1 \ \xi_2 \ \xi_3]^T$ can be determined. The normalized relative position $t_n = [t_x \ t_y \ t_z]^T$ can then be solved:
    $$\begin{cases} t_x = \pm\sqrt{1 - k^T H_u^T H_u k} \\ t_y = \pm\sqrt{1 - k^T H_m^T H_m k} \\ t_z = \pm\sqrt{1 - k^T H_l^T H_l k} \end{cases}$$
• It should be noted that multiple solutions exist. In fact, if $[k \ t]^T$ is a solution of the system, $[\pm k \ \pm t]^T$ must also be solutions. One of these solutions is correct for a real system setup. To find it, the re-projection method can be used.
• When $k$ and $t$ are known, the rotation matrix $R$ can be determined by
    $$R = \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} = \begin{bmatrix} F_{c1} \times t + (F_{c2} \times t) \times (F_{c3} \times t) \\ F_{c2} \times t + (F_{c3} \times t) \times (F_{c1} \times t) \\ F_{c3} \times t + (F_{c1} \times t) \times (F_{c2} \times t) \end{bmatrix},$$
    where "$\times$" denotes the cross product of two vectors.
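• The following sketch illustrates how these two closed-form results might be evaluated numerically, under the assumptions above (unit-norm translation, negligible noise). It implements the formulas as stated in the text; the sign disambiguation by re-projection is omitted, and the normalization of $G$ by its trace is an added assumption.

```python
import numpy as np

def decompose_F(F):
    """Given F ~ s*R*S (S skew-symmetric from a unit translation t),
    recover a normalized translation t_n and the matrix R given by the
    cross-product formula above. Sign choices are left to re-projection."""
    G = F.T @ F                        # ideally s^2 * (I - t t^T)
    G = G / (np.trace(G) / 2.0)        # trace(I - t t^T) = 2 when |t| = 1
    t = np.sqrt(np.clip(1.0 - np.diag(G), 0.0, None))   # |t_i| from diagonal
    # fix relative signs of t components from off-diagonal terms (-t_i t_j)
    if G[0, 1] > 0: t[1] = -t[1]
    if G[0, 2] > 0: t[2] = -t[2]
    Fc1, Fc2, Fc3 = F[:, 0], F[:, 1], F[:, 2]
    c1, c2, c3 = np.cross(Fc1, t), np.cross(Fc2, t), np.cross(Fc3, t)
    R = np.vstack([c1 + np.cross(c2, c3),
                   c2 + np.cross(c3, c1),
                   c3 + np.cross(c1, c2)])
    return t, R
```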
• Among the six unknown parameters, the five in $R$ and $t_n$ have been determined so far, and reconstruction can be performed, but only up to a scaling factor. The last unknown, $s$ in $t$, may be determined by a method using a constraint derived from the best-focused location (BFL). This is based on the fact that, for a lens with a specified focal length, an object surface point is perfectly focused only at a special distance.
• For an imaging system, the mathematical model of the standard linear degradation caused by blurring and additive noise is usually described by
    $$g(i,j) = \sum_{k=1}^{m} \sum_{l=1}^{n} h(i-k,\, j-l)\, f(k,l) + n(i,j), \quad \text{or} \quad g = h \otimes f + n,$$
    where $f$ is the original image, $h$ is the point spread function, $n$ is the additive noise, and $m \times n$ is the image size. The operation "$\otimes$" represents two-dimensional convolution. The blur can be used as a cue to find the perfectly focused distance.
• For the projector in such a system, the most significant blur is the out-of-focus blur. This results from the fact that, for a lens with a specific focal length, the illumination pattern will be blurred on the object surface unless it is projected at the perfectly focused distance. Since the noise $n$ only affects the accuracy of the result, it is not considered in the following deduction. The illumination pattern on the source plane (LCD or DMD) to be projected is described as
    $$I_s(x) = \begin{cases} L_a, & -\tfrac{T}{2} < x - 2nT < \tfrac{T}{2},\ n \in \mathbb{N} \\ 0, & \text{otherwise,} \end{cases}$$
    where $T$ is the stripe width of the source pattern.
• The scene irradiance caused by a light source is inversely proportional to the square of the distance from the light source. On the other hand, the image intensity on the camera sensor is independent of the scene distance and is only proportional to the scene irradiance. Therefore, the image intensity can be described as
    $$I = \frac{C_c C_l}{l^2},$$
    where $C_c$ is the sensing constant of the camera and $C_l$ is the irradiant constant of the light projection. They are related to many factors, such as the diameter of the sensor's aperture, the focal length, and properties of the surface materials.
• Assume that the intensity at the point where the best-focused plane intersects the principal axis of the lens is $I_0$ ($x = 0$, $z = z_0$), where $z_0$ is the best-focused distance. That is,
    $$I_0 = \frac{C_c C_l}{(z_0 + v_p)^2},$$
    where $v_p$ is the focal length, i.e. the distance between the source plane and the optical center of the projector lens.
• Consider a straight line in the scene projected on the X–Z coordinate system:
    $z = c_1 x + c_0.$
• For an arbitrary point, we have $l = \sqrt{x^2 + (z + v_p)^2}$ and thus
    $$I = \frac{(z_0 + v_p)^2 I_0}{l^2}.$$
• In the view of the projector, when the illumination pattern is cast on this line, the intensity distribution becomes nonlinear and is given by
    $$I_i(x) = \frac{(z_0 + v_p)^2}{(z + v_p)^2 + x^2 \left(1 + \tfrac{v_p}{z}\right)^2}\, I_0, \qquad -\tfrac{T}{2} < \tfrac{v_p}{z} x - 2nT < \tfrac{T}{2},\ n \in \mathbb{N}. \tag{8}$$
• Transforming the x-axis to align with the observed line, we have
    $x_l = \sqrt{1 + c_1^2}\; x.$
• The above gives
    $$I_i(x_l) = \frac{(z_0 + v_p)^2}{\left[1 + \left(\tfrac{c_2 x_l}{c_2 x_l + c_0}\right)^2\right]\left[(c_2 + 1) x_l + c_0\right]^2}\, I_0, \quad \text{where } c_2 = \frac{c_1}{\sqrt{1 + c_1^2}}.$$
• The illumination will be blurred unless it is projected on a plane at the perfectly focused distance
    $$z_0 = \frac{v_p f_p}{v_p - f_p},$$
    where $f_p$ is the intrinsic focal length of the projector. For all other points in the scene, the z-displacement of a surface point from the best-focused location is
    $$\Delta z = z - z_0 = z - \frac{v_p f_p}{v_p - f_p},$$
    where $v_p$ is the distance from the image plane to the optical center.
• The corresponding blur radius is proportional to $\Delta z$:
    $$\sigma = \frac{v_p - f_p}{v_p F_{num}} \Delta z,$$
    where $F_{num} = \frac{f_p}{r}$ is the f-number of the lens setting.
  • For out-of-focus blur, the effect of blurring can be described via a point spread function (PSF) to account for the diffraction effect of light wave. A Gaussian model is normally used.
• With our light stripes, the one-dimensional PSF is
    $$h_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}.$$
• The brightness of the illumination from the projector is the convolution of the ideal illumination intensity curve with the PSF blur model:
    $$I(x) = I_i(x) \otimes h_\sigma(x) = \int_{-\infty}^{+\infty} I_i(u)\, h_\sigma(x - u)\, du.$$
• The Fourier transform of the above is
    $$I^F(\omega) = I_i^F(\omega)\, H_\sigma(\omega), \tag{18}$$
    where $H_\sigma(\omega)$ is the Fourier transform of the Gaussian function,
    $$H_\sigma(\omega) = \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}} e^{-j\omega x}\, dx = e^{-\frac{\sigma^2 \omega^2}{2}}.$$
• Without significant loss of accuracy, $I_i(x)$ may be approximated by averaging the intensity on a light stripe to simplify the Fourier transform, $I_i(x) \approx \bar{I}(x)$. If a coordinate system with its origin at the center of the bright stripe is used, this intensity can be written as
    $$\bar{I}(x) = I_0 \left[ \varepsilon\!\left(x + \tfrac{T}{2}\right) - \varepsilon\!\left(x - \tfrac{T}{2}\right) \right],$$
    where $\varepsilon$ is a unit step function.
• The Fourier transform of the above is
    $$I_i^F(\omega) = I_0 T \frac{\sin(\omega T/2)}{\omega T/2} = I_0 T\, S_a\!\left(\tfrac{\omega T}{2}\right).$$
• Since $I(x)$ is measured by the camera, its Fourier transform $I^F(\omega)$ can be calculated. Using integration, we have
    $$\int e^{-\frac{\sigma^2 \omega^2}{2}}\, d\omega = \frac{1}{I_0 T} \int \frac{I^F(\omega)}{S_a(\omega T/2)}\, d\omega.$$
• The left side is found to be
    $$\text{Left} = \frac{\sqrt{2}}{\sigma} \int_{-\infty}^{+\infty} e^{-\left(\frac{\sigma\omega}{\sqrt{2}}\right)^2} d\!\left(\frac{\sigma\omega}{\sqrt{2}}\right) = \frac{\sqrt{2\pi}}{\sigma}.$$
• Therefore the blur radius can be computed by
    $$\sigma = \left[ \frac{1}{\sqrt{2\pi}\, I_0 T} \int \frac{I^F(\omega)}{S_a(\omega T/2)}\, d\omega \right]^{-1}.$$
  • Neglecting the effect of blurring caused by multiple illumination stripes, we have the following theorem to determine the blur radius with low computational cost and high precision.
• Theorem 1. With the projection of a step illumination on the object surface, the blur radius is proportional to the time rate flow of irradiant light energy in the blurred area:
    $$\sigma = \frac{\sqrt{2\pi}\, S}{I_0},$$
    where $I_0$ is the ideal intensity and $S$ is the area size as illustrated in FIG. 5 b.
• This means that the blur radius $\sigma$ [m] is proportional to the area under the blurring curve: $\sigma = \sqrt{2\pi}\, S / I_0$. The time rate flow of radiant light energy, i.e. the irradiant power or irradiant flux $\Phi$ [watt], is also the area $S$ [watt] under the blurring curve (or surface, in the case of two-dimensional analysis) illustrated in FIG. 5 b.
• Therefore, we only need to compute the area size S for every stripe to determine the blur radius. In a simple way, the edge position (x=0) can be detected by a gradient method, and S is then determined by summing the intensity function from 0 to x1. However, even using a sub-pixel method for the edge detection, errors are still considerable since I(x) changes sharply near the origin.
• To solve this problem, we propose an accurate method in the sense of energy minimization. As illustrated in FIG. 6, we have
    $$F_s(x_o) = S_1(x_o) + S_2(x_o) \approx \frac{I_0}{\sqrt{\pi}} \int_{\frac{x_o}{\sqrt{2}\,\sigma}}^{+\infty} e^{-y^2}\, dy + I_0 - \frac{I_0}{\sqrt{\pi}} \int_{-\infty}^{-\frac{x_o}{\sqrt{2}\,\sigma}} e^{-y^2}\, dy.$$
• It can be proved that the derivative of the above function satisfies
    $F_s'(x_o) \geq 0$ for $x_o \geq 0$,
    where equality holds if and only if $x_o = 0$.
• The same situation occurs when $x_o \leq 0$. Therefore, at $x_o = 0$, we have
    $S = \min(F_s)/2.$
  • This means that the same quantity of light energy flows from S2 to S1. This method for computing S is more stable than traditional methods and it yields high accuracy.
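• A possible numerical realization of this energy-balance idea is sketched below for a single blurred stripe edge, assuming a sampled intensity profile with uniform spacing; the function name and test data are illustrative only.

```python
import numpy as np
from math import erf

def blur_radius_from_edge(x, intensity, I0):
    """Estimate the blur radius of one blurred step edge.

    Slide a candidate edge position x_o, accumulate
    F_s(x_o) = S1(x_o) + S2(x_o) (light beyond x_o plus light missing
    before x_o), take S = min(F_s)/2, then apply Theorem 1:
    sigma = sqrt(2*pi) * S / I0.
    """
    dx = x[1] - x[0]
    F = np.empty_like(x)
    for i in range(len(x)):
        S1 = np.sum(intensity[i:]) * dx          # energy that leaked past x_o
        S2 = np.sum(I0 - intensity[:i]) * dx     # energy missing before x_o
        F[i] = S1 + S2
    S = F.min() / 2.0
    return np.sqrt(2.0 * np.pi) * S / I0

# Hypothetical test: a step edge of height I0 blurred with sigma = 0.8
sigma_true, I0 = 0.8, 100.0
x = np.linspace(-10, 10, 2001)
profile = np.array([I0 * 0.5 * (1 - erf(xi / (np.sqrt(2) * sigma_true))) for xi in x])
print(blur_radius_from_edge(x, profile, I0))     # close to 0.8
```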
• Now the best-focused location can be computed by analyzing the blur information in an image. With Theorem 1, by integrating the blurring curve on each stripe edge, the blur radius $\sigma_i$ can be calculated. These blur radii are recorded as a set
    $D = \{(i, \sigma_i) \mid i \in \mathbb{N}\},$
    where $i$ is the stripe index on the projector's source pattern.
• The blur size is proportional to the displacement of a scene point from the BFL, $\sigma = k_f \Delta z$. Since the blur diameters are unsigned, a minimum value $\sigma_{min}$ in the data set D can be found. For a line in the scene, in order to obtain a straight line corresponding to the linearly changing depth in the scene, we separate D into two parts and apply a linear best fit to obtain two straight lines:
    $\sigma_l(x) = k_1 x + k_2$ and $\sigma_r(x) = k_3 x + k_4.$
• Finding the intersection of the two lines gives the best-focused location (as shown in FIG. 7),
    $$x_b = \frac{k_4 - k_2}{k_1 - k_3},$$
    which corresponds to $\Delta z = 0$, or
    $$z(x_b) = \frac{v_p f_p}{v_p - f_p}.$$
    The corresponding coordinates on the image are $(x_b, y_b)$, where $y_b$ is determined according to the scanning line on the image.
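• As a brief illustrative sketch (hypothetical names and data), the best-focused location along one scan line could be obtained from the unsigned blur radii as follows.

```python
import numpy as np

def best_focused_location(x, sigma):
    """Split the unsigned blur radii at their minimum, fit a line to each
    half and return the x coordinate where the two lines intersect."""
    i_min = int(np.argmin(sigma))
    k1, k2 = np.polyfit(x[:i_min + 1], sigma[:i_min + 1], 1)   # sigma_l = k1*x + k2
    k3, k4 = np.polyfit(x[i_min:], sigma[i_min:], 1)           # sigma_r = k3*x + k4
    return (k4 - k2) / (k1 - k3)                               # x_b

# Hypothetical data: blur grows linearly on both sides of x = 2.5
x = np.linspace(0.0, 5.0, 11)
sigma = 0.4 * np.abs(x - 2.5) + 0.05
print(best_focused_location(x, sigma))   # approximately 2.5
```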
  • From the above analysis, there exists a point which represents the best-focused location for a line in the scene. Now consider the blur distribution on a plane in the scene. This happens when we analyze multiple scan lines crossing the light stripes or when the source illumination is a grid pattern.
• For a plane in the scene, $z = c_1 x + c_2 y + c_3$, the blur radius is
    $$\sigma(x,y) = \frac{v_p - f_p}{v_p F_{num}} \left( z - \frac{v_p f_p}{v_p - f_p} \right) = \frac{(v_p - f_p)(c_1 x + c_2 y + c_3) - v_p f_p}{v_p F_{num}}.$$
  • The best focused locations form a straight line which is the intersection of two planes. A valley line can be found since the blur radius is unsigned.
• For a freeform surface in the scene, the best-focused location can be determined by extending the above method with some minor modifications. For each light stripe, we can also compute its blur diameter and obtain a pair $(z_{ri}, \sigma_i)$, where $i \in \mathbb{N}$ is the stripe index and $z_{ri} \in [0, 1]$ is its relative depth in the camera view. Plotting these pairs in a coordinate system with $\sigma$ as the vertical axis and $z_r$ as the horizontal axis, we can again find a valley giving the best-focused distance $z_{rb}$.
• The point with the minimum blur value, i.e. the best-focused location ($\Delta z = 0$), is constrained by
    $$Z_c \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} - s \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} = \frac{v_p f_p}{v_p - f_p} R^{-1} \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.$$
• The scaling factor $s$ can thus be determined:
    $$s = \frac{x_c b_2 - y_c b_1}{y_c t_x - x_c t_y}.$$
  • The procedures for self-recalibration of a structured light vision sensor are given as follows:
      • Step 1: projecting grid encoded patterns onto the scene;
      • Step 2: determining tn and R for 5 unknowns;
      • Step 3: computing the blur distribution and determining the best-focused location;
      • Step 4: determining the scaling factor s; and
      • Step 5: combining the relative matrix for 3-D reconstruction.
  • The method presented here automatically resolves six parameters of a color-encoded structured light system. When the internal parameters of the camera 3 and the projector 1 are known via pre-calibration, the 6 DOF relative placement of the active vision system can be automatically recalibrated with neither manual operations nor assistance of a calibration device. This feature is very important for many situations when the vision sensor needs to be reconfigured online during a measurement or reconstruction task.
• The method itself does not require that all six parameters be external ones. In fact, as long as the number of unknown parameters of the system does not exceed six, the method can still be applied. For example, if the two focal lengths, $v_c$ and $v_p$, are variable during the system reconfiguration and the relative structure has 4 DOF, we may solve them in a similar way, by replacing the matrix F by $F = (P_c^{-1})^T R S P_p^{-1}$ and modifying the decomposition method accordingly.
• When the unknown parameters exceed six, the above-described method may not solve them directly. However, a 2-step method may be used to solve this problem. That is, before the internal parameters are reconfigured, we first take an image to obtain the 6 DOF external parameters. Then, after changing the sensor's internal parameters, we take an image again to recalibrate them.
• Special features of a method according to a preferred embodiment of the invention include:
• The single-image-based recalibration allows measurement or reconstruction to be performed immediately after reconfiguration in the software, without any extra requirements.
      • Metric measurement of the absolute geometry of the 3D shape may be obtained by replacing r(xb, yb) with zrb. This is different from most of the currently available conventional methods where the 3D reconstruction supported is up to a certain transformation.
Automatic Viewpoint Planning
• To obtain a complete 3D model of an object 2 with the vision system as shown in FIG. 1, multiple views may be required, e.g. via robot vision as shown in FIG. 8, which illustrates the components of FIG. 1 housed in a robot apparatus. Viewpoint planning is charged with the task of determining the position, orientation and system configuration parameters for each view to be taken. It is assumed for the purposes of this description that we are dealing with an unknown object, i.e. assuming no prior knowledge about the object model. It is also assumed here that the object has a general free-form surface. The approach is preferably to model a 3D object via a series of cross section curves. These cross section curves can be described by a set of parametric equations of B-spline curves. A criterion is proposed to select the optimal model structure for the available data points on the cross section curve.
• For object reconstruction, two conventional approaches are volume reconstruction and surface reconstruction. The volume-based technique is concerned with the manipulation of volumetric objects stored in a volume raster of voxels. Surface reconstruction may be approached in one of two ways: 1) representing the surface with a mosaic of flat polygon tiles, usually triangles; or 2) representing the surface with a series of curved patches joined with some order of continuity. However, as mentioned above, a preferred reconstruction method for embodiments of the present invention is to model a 3D object via a series of cross section curves.
  • Model Selection for Reconstruction
  • The object 2 is sliced into a number of cross section curves, each of which represents the local geometrical features of the object. These cross section curves may be described by a set of parametric equations. For reconstruction of cross section curves, compared with implicit polynomial [47] and superquadric, B-spline has the following main advantages:
• 1) Smoothness and continuity, which allows any curve to be composed of a concatenation of curve segments and yet be treated as a single unit;
• 2) Built-in boundedness, a property which is lacking in implicit or explicit polynomial representations, whose zero set can shoot to infinity; and
• 3) Parameterized representation, which decouples the x, y coordinates so that they can be treated separately.
• Closed B-Spline Curve Approximation
• A closed cubic B-spline curve consisting of n+1 curve segments may be defined by
    $$p(t) = \sum_{j=0}^{n+3} B_{j,4}(t) \cdot \Phi_j \tag{1}$$
    where $p(t) = [x(t), y(t)]$ is a point on the B-spline curve with location parameter $t$. In this section we use the chord length method for parameterization. In (1), $B_{j,4}(t)$ is the j-th normalized cubic B-spline basis function, which is defined over the uniform knot vector
    $[u_{-3}, u_{-2}, \ldots, u_{n+4}] = [-3, -2, \ldots, n+4].$
• In addition, the amplitude of $B_{j,4}(t)$ is in the range (0.0, 1.0), and the support region of $B_{j,4}(t)$ is compact and nonzero for $t \in [u_j, u_{j+4}]$. The $(\Phi_j)_{j=0}^{n+3}$ are cyclical control points satisfying the following conditions:
    $\Phi_{n+1} = \Phi_0, \quad \Phi_{n+2} = \Phi_1, \quad \Phi_{n+3} = \Phi_2.$
• By factorization of the B-spline model, the parameters of the B-spline model can be represented as:
    $\Phi = [\Phi_x^T \ \Phi_y^T]^T = [\Phi_{x0}, \ldots, \Phi_{xn}, \Phi_{y0}, \ldots, \Phi_{yn}]^T.$
• For a set of m data points $r = (r_i)_{i=1}^m = ([x_i, y_i])_{i=1}^m$, let $d^2$ be the sum of the squared residual errors between the data points and their corresponding points on the B-spline curve, i.e.
    $$d^2 = \sum_{i=1}^{m} \| r_i - p(t_i) \|^2 = \sum_{i=1}^{m} \Big[ x_i - \sum_{j=0}^{n+3} B_{j,4}(t_i) \cdot \Phi_{xj} \Big]^2 + \sum_{i=1}^{m} \Big[ y_i - \sum_{j=0}^{n+3} B_{j,4}(t_i) \cdot \Phi_{yj} \Big]^2.$$
• From the cyclical condition of the control points, $\Phi_{n+1} = \Phi_0$, $\Phi_{n+2} = \Phi_1$, $\Phi_{n+3} = \Phi_2$, there are only n+1 control points to be estimated. The LS estimate of the n+1 control points may be obtained from the data points by minimizing $d^2$ above with respect to $\Phi = [\Phi_x^T, \Phi_y^T]^T = [\Phi_{x0}, \ldots, \Phi_{xn}, \Phi_{y0}, \ldots, \Phi_{yn}]^T$.
• The following estimate of $\Phi$ may then be obtained by factorization of the B-spline:
    $$\Phi_x = [B^T B]^{-1} B^T x, \qquad \Phi_y = [B^T B]^{-1} B^T y, \tag{2}$$
    where $x = [x_1, \ldots, x_m]^T$, $y = [y_1, \ldots, y_m]^T$,
    $$B = \begin{bmatrix} \bar{B}_{0,4}^1 + \bar{B}_{n+1,4}^1 & \bar{B}_{1,4}^1 + \bar{B}_{n+2,4}^1 & \bar{B}_{2,4}^1 + \bar{B}_{n+3,4}^1 & \cdots & \bar{B}_{n,4}^1 \\ \bar{B}_{0,4}^2 + \bar{B}_{n+1,4}^2 & \bar{B}_{1,4}^2 + \bar{B}_{n+2,4}^2 & \bar{B}_{2,4}^2 + \bar{B}_{n+3,4}^2 & \cdots & \bar{B}_{n,4}^2 \\ \vdots & \vdots & \vdots & & \vdots \\ \bar{B}_{0,4}^m + \bar{B}_{n+1,4}^m & \bar{B}_{1,4}^m + \bar{B}_{n+2,4}^m & \bar{B}_{2,4}^m + \bar{B}_{n+3,4}^m & \cdots & \bar{B}_{n,4}^m \end{bmatrix}$$
    and $\bar{B}_{j,4}^i = B_{j,4}(t_i)$.
• The chord length method may preferably be used for the parameterization of the B-spline. The chord length L of a curve may be calculated as
    $$L = \sum_{i=2}^{m+1} \| r_i - r_{i-1} \|,$$
    where $r_{m+1} = r_1$ for a closed curve. The $t_i$ associated with point $q_i$ is given by
    $$t_i = t_{i-1} + \frac{\| r_i - r_{i-1} \|}{L} \cdot t_{max},$$
    where $t_1 = 0$ and $t_{max} = n + 1$.
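• The following Python sketch illustrates the closed cubic B-spline least-squares fit described above, with the uniform basis evaluated by the Cox–de Boor recursion and the cyclical control points handled by wrapping the design-matrix columns. The helper names and the test contour are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def bspline_basis(j, k, t, knots):
    """Cox-de Boor recursion: order-k (degree k-1) basis function that
    starts at knots[j], evaluated at parameter t."""
    if k == 1:
        return 1.0 if knots[j] <= t < knots[j + 1] else 0.0
    left = 0.0 if knots[j + k - 1] == knots[j] else \
        (t - knots[j]) / (knots[j + k - 1] - knots[j]) * bspline_basis(j, k - 1, t, knots)
    right = 0.0 if knots[j + k] == knots[j + 1] else \
        (knots[j + k] - t) / (knots[j + k] - knots[j + 1]) * bspline_basis(j + 1, k - 1, t, knots)
    return left + right

def fit_closed_bspline(points, n):
    """LS fit of a closed cubic B-spline with n+1 control points to an
    m x 2 array of cross-section points; returns the (n+1) x 2 controls."""
    points = np.asarray(points, dtype=float)
    m = len(points)
    # chord-length parameterization on [0, n+1)
    d = np.linalg.norm(np.diff(np.vstack([points, points[:1]]), axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d[:-1])]) / d.sum() * (n + 1)
    # uniform knots -3, -2, ..., n+4
    knots = np.arange(-3, n + 5, dtype=float)
    # design matrix with wrapped (cyclical) control points
    B = np.zeros((m, n + 1))
    for i, ti in enumerate(t):
        for j in range(n + 4):
            B[i, j % (n + 1)] += bspline_basis(j, 4, ti, knots)
    # separate LS solutions for the x and y coordinates
    Phi_x, *_ = np.linalg.lstsq(B, points[:, 0], rcond=None)
    Phi_y, *_ = np.linalg.lstsq(B, points[:, 1], rcond=None)
    return np.column_stack([Phi_x, Phi_y])

# Hypothetical test: noisy samples of a circle, fitted with 8 control points
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.01 * np.random.randn(60, 2)
print(fit_closed_bspline(pts, 7).shape)   # (8, 2)
```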
    Model Selection with Improved BIC Criterion
  • For a given set of measurement data, there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error for further data. The complexity of a B-spline model of a surface is related to its control point (parameter) number [43],[48]. If the B-spline model is too complicated, the approximated B-spline surface tends to over-fit noisy measurement data. If the model is too simple, then it is not capable of fitting the measurement data, making the approximation results under-fitted. In general, both over- and under-fitted surfaces have poor generalization capability. Therefore, the problem of finding an appropriate model, referred to as model selection, is important for achieving a high level generalization capability.
  • Model selection has been studied from various standpoints in the field of statistics. Examples include information statistics [49]-[51] Bayesian statistics [52]-[54], and structural risk minimization [55]. The Bayesian approach is a preferred model selection method. Based on posterior model probabilities, the Bayesian approach estimates a probability distribution over an ensemble of models. The prediction is accomplished by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized.
• For a given set of models {Mk, k=1,2, . . . } and data r, there exists a model of optimal structure corresponding to the smallest generalization error for further data, and the Bayesian approach may be used to select the model with the largest (maximum) posterior probability given the data acquired so far.
• In a first preferred method, the model M may be selected as:
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \{ p(r \mid M_k) \},$$
    where the posterior probability of model $M_k$ may be approximated by
    $$p(r \mid M_k) = \int_{\Phi_k} p(r \mid \Phi_k, M_k)\, p(\Phi_k \mid M_k)\, d\Phi_k \approx (2\pi)^{d_k/2} \left| H(\hat{\Phi}_k) \right|^{-1/2} p(r \mid \hat{\Phi}_k, M_k)\, p(\hat{\Phi}_k \mid M_k).$$
• Neglecting the term $p(\hat{\Phi}_k \mid M_k)$, the selection based on the posterior probability of model $M_k$ becomes [11]:
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \left\{ \log p(r \mid \hat{\Phi}_k, M_k) - \frac{1}{2} \log \left| H(\hat{\Phi}_k) \right| \right\},$$
    where $\hat{\Phi}_k$ is the maximum likelihood estimate of $\Phi_k$, $d_k$ is the number of parameters of model $M_k$, and $H(\hat{\Phi}_k)$ is the Hessian matrix of $-\log p(r \mid \Phi_k, M_k)$ evaluated at $\hat{\Phi}_k$.
• The likelihood function $p(r \mid \hat{\Phi}_k, M_k)$ of closed B-spline cross section curves can be factored into x and y components as
    $$p(r \mid \hat{\Phi}_k, M_k) = p(x \mid \hat{\Phi}_{kx}, M_k) \cdot p(y \mid \hat{\Phi}_{ky}, M_k),$$
    where $\hat{\Phi}_{kx}$ and $\hat{\Phi}_{ky}$ can be calculated by
    $$\hat{\Phi}_{kx} = [B^T B]^{-1} B^T x, \qquad \hat{\Phi}_{ky} = [B^T B]^{-1} B^T y.$$
• Consider, for example, the x component. Assuming the residual error sequence to be zero mean and white Gaussian with variance $\sigma_{kx}^2(\hat{\Phi}_{kx})$, we have the following likelihood function:
    $$p(x \mid \hat{\Phi}_{kx}, M_k) = \left( \frac{1}{2\pi \sigma_{kx}^2(\hat{\Phi}_{kx})} \right)^{m/2} \exp\left\{ -\frac{1}{2 \sigma_{kx}^2(\hat{\Phi}_{kx})} \sum_{k=0}^{m-1} \left[ x_k - B_k \hat{\Phi}_{kx} \right]^2 \right\},$$
    and $\sigma_{kx}^2(\hat{\Phi}_{kx}, M_k)$ is estimated by
    $$\hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) = \frac{1}{m} \sum_{k=0}^{m-1} \left[ x_k - B_k \hat{\Phi}_{kx} \right]^2.$$
• In a similar way, the likelihood function of the y component can also be obtained. The corresponding Hessian matrix $\hat{H}_k$ of $-\log p(r \mid \Phi_k, M_k)$ evaluated at $\hat{\Phi}_k$ may be denoted by:
    $$H(\hat{\Phi}_k) = \begin{bmatrix} \dfrac{B^T B}{\hat{\sigma}_{kx}^2(\hat{\Phi}_{kx})} & 0 \\ 0 & \dfrac{B^T B}{\hat{\sigma}_{ky}^2(\hat{\Phi}_{ky})} \end{bmatrix}.$$
• By approximating $\frac{1}{2} \log |H(\hat{\Phi}_k)|$ by its asymptotic expected value $\frac{1}{2}(d_{kx} + d_{ky}) \log(m)$, we can obtain the BIC criterion for B-spline model selection as follows:
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \left\{ -\frac{m}{2} \log \hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) - \frac{m}{2} \log \hat{\sigma}_{ky}^2(\hat{\Phi}_{ky}) - \frac{1}{2}(d_{kx} + d_{ky}) \log(m) \right\},$$
    where $d_{kx}$ and $d_{ky}$ are the numbers of control points in the x and y directions respectively, and m is the number of data points.
• In the above equation, the first two terms, involving $\hat{\sigma}_{kx}^2$ and $\hat{\sigma}_{ky}^2$, measure the prediction accuracy of the B-spline model, which increases with the complexity of the model.
• In contrast, the penalty term decreases and penalizes the use of additional parameters to model the data. However, since $\hat{\sigma}_{kx}^2$ and $\hat{\sigma}_{ky}^2$ depend only on the training sample used for model estimation, they are insensitive to under-fitting or over-fitting. In the above equation, only the penalty term prevents the occurrence of over-fitting. In fact, an honest estimate of $\sigma_{kx}^2$ and $\sigma_{ky}^2$ should be based on a re-sampling procedure. Here, the available data may be divided into a training sample and a prediction sample. The training sample is used only for model estimation, whereas the prediction sample is used only for estimating the prediction data noise $\hat{\sigma}_{kx}^2$ and $\hat{\sigma}_{ky}^2$. That is, the training sample is used to estimate the model parameter $\hat{\Phi}_k$ via $\Phi_x = [B^T B]^{-1} B^T x$, $\Phi_y = [B^T B]^{-1} B^T y$, while the prediction sample is used to estimate the data noise $\sigma_k^2$ via $\hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) = \frac{1}{m} \sum_{k=0}^{m-1} [x_k - B_k \hat{\Phi}_{kx}]^2$.
    In fact, if the model $\hat{\Phi}_k$ fitted to the training data is valid, then the estimated variance from the prediction sample should also be a valid estimate of the data noise.
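• A compact sketch of this improved criterion is given below. It assumes two hypothetical helpers: fit_fn, a least-squares fit such as the one sketched earlier, and design_fn, which builds the wrapped design matrix B of the prediction sample for a model with n+1 control points.

```python
import numpy as np

def improved_bic_select(points, n_candidates, fit_fn, design_fn):
    """Select the number of control points of a closed B-spline using the
    BIC, with the model estimated on a training half and the noise
    variances estimated on a held-out prediction half."""
    train, pred = points[0::2], points[1::2]      # simple re-sampling split
    best_n, best_score = None, -np.inf
    for n in n_candidates:
        Phi = fit_fn(train, n)                    # (n+1) x 2 control points
        B = design_fn(pred, n)                    # design matrix of prediction half
        m = len(pred)
        res = pred - B @ Phi                      # residuals on held-out points
        var_x, var_y = np.mean(res[:, 0] ** 2), np.mean(res[:, 1] ** 2)
        d = n + 1                                 # control points per coordinate
        score = -0.5 * m * (np.log(var_x) + np.log(var_y)) \
                - 0.5 * (2 * d) * np.log(m)
        if score > best_score:
            best_n, best_score = n, score
    return best_n
```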
• In another preferred embodiment, for a given set of models $\{M_k, k=1,2,\ldots,k_{max}\}$ and data r, the Bayesian approach selects the model with the largest posterior probability. The posterior probability of model $M_k$ may be denoted by:
    $$p(M_k \mid r) = \frac{p(r \mid M_k)\, p(M_k)}{\sum_{L=1}^{k_{max}} p(r \mid M_L)\, p(M_L)},$$
    where $p(r \mid M_k)$ is the integrated likelihood of model $M_k$ and $p(M_k)$ is the prior probability of model $M_k$. To find the model with the largest posterior probability, evaluate
    $p(M_k \mid r)$ for $k = 1, 2, \ldots, k_{max}$
    and select the model that has the maximum $p(M_k \mid r)$, that is
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \{ p(M_k \mid r) \} = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \left\{ \frac{p(r \mid M_k)\, p(M_k)}{\sum_{L=1}^{k_{max}} p(r \mid M_L)\, p(M_L)} \right\}.$$
• Here, we assume that the models have the same likelihood a priori, so that $p(M_k) = 1/k_{max}$ ($k = 1, \ldots, k_{max}$). Therefore, the model selection will not be affected by $p(M_k)$. This is also the case with $\sum_{L=1}^{k_{max}} p(r \mid M_L) p(M_L)$, since it is not a function of $M_k$. Consequently, the factors $p(M_k)$ and $\sum_{L=1}^{k_{max}} p(r \mid M_L) p(M_L)$ may be ignored in computing the model criteria.
• The selection criterion then becomes
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \{ p(r \mid M_k) \}.$$
• To calculate the posterior probability of model $M_k$, we need to evaluate the marginal density of the data for each model, $p(r \mid M_k)$, which requires the multidimensional integration
    $$p(r \mid M_k) = \int_{\Phi_k} p(r \mid \Phi_k, M_k)\, p(\Phi_k \mid M_k)\, d\Phi_k,$$
    where $\Phi_k$ is the parameter vector for model $M_k$, $p(r \mid \Phi_k, M_k)$ is the likelihood, and $p(\Phi_k \mid M_k)$ is the prior distribution for model $M_k$.
• In practice, calculating the multidimensional integration is very difficult, especially if a closed-form analytical solution is sought. Research in this area has resulted in many approximation methods. Laplace's approximation of the integration appears to be a simple one and has become a standard method for calculating the integration of multi-variable Gaussians [53]. This gives:
    $$p(r \mid M_k) = \int_{\Phi_k} p(r \mid \Phi_k, M_k)\, p(\Phi_k \mid M_k)\, d\Phi_k \approx (2\pi)^{d_k/2} \left| H(\hat{\Phi}_k) \right|^{-1/2} p(r \mid \hat{\Phi}_k, M_k)\, p(\hat{\Phi}_k \mid M_k),$$
    where $\hat{\Phi}_k$ is the maximum likelihood estimate of $\Phi_k$, $d_k$ denotes the number of parameters (control points for the B-spline model) in model $M_k$, and $H(\hat{\Phi}_k)$ is the Hessian matrix of $-\log p(r \mid \Phi_k, M_k)$ evaluated at $\hat{\Phi}_k$:
    $$H(\hat{\Phi}_k) = -\left. \frac{\partial^2 \log p(r \mid \Phi_k, M_k)}{\partial \Phi_k\, \partial \Phi_k^T} \right|_{\Phi_k = \hat{\Phi}_k}.$$
• This approximation is particularly good when the likelihood function is highly peaked around $\hat{\Phi}_k$, which is usually the case when the number of data samples is large. Neglecting the term $p(\hat{\Phi}_k \mid M_k)$ and taking logarithms, the model selection based on the posterior probability of model $M_k$ becomes:
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \left\{ \log p(r \mid \hat{\Phi}_k, M_k) - \frac{1}{2} \log \left| H(\hat{\Phi}_k) \right| \right\}.$$
• The likelihood function $p(r \mid \hat{\Phi}_k, M_k)$ of a closed B-spline cross section curve may be factored into x and y components as
    $$p(r \mid \hat{\Phi}_k, M_k) = p(x \mid \hat{\Phi}_{kx}, M_k) \cdot p(y \mid \hat{\Phi}_{ky}, M_k),$$
    where $\hat{\Phi}_{kx}$ and $\hat{\Phi}_{ky}$ may be calculated by
    $$\hat{\Phi}_{kx} = [B^T B]^{-1} B^T x, \qquad \hat{\Phi}_{ky} = [B^T B]^{-1} B^T y.$$
• Consider the x component. Assuming that the residual error sequence is zero mean and white Gaussian with variance $\sigma_{kx}^2(\hat{\Phi}_{kx})$, the likelihood function may be denoted as
    $$p(x \mid \hat{\Phi}_{kx}, M_k) = \left( \frac{1}{2\pi \sigma_{kx}^2(\hat{\Phi}_{kx})} \right)^{m/2} \exp\left\{ -\frac{1}{2 \sigma_{kx}^2(\hat{\Phi}_{kx})} \sum_{k=0}^{m-1} \left[ x_k - B_k \hat{\Phi}_{kx} \right]^2 \right\},$$
    with $\sigma_{kx}^2(\hat{\Phi}_{kx}, M_k)$ estimated by
    $$\hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) = \frac{1}{m} \sum_{k=0}^{m-1} \left[ x_k - B_k \hat{\Phi}_{kx} \right]^2.$$
• Similarly, the likelihood function of the y component may also be obtained. The corresponding Hessian matrix $\hat{H}_k$ of $-\log p(r \mid \Phi_k, M_k)$ evaluated at $\hat{\Phi}_k$ is
    $$H(\hat{\Phi}_k) = \begin{bmatrix} \dfrac{B^T B}{\hat{\sigma}_{kx}^2(\hat{\Phi}_{kx})} & 0 \\ 0 & \dfrac{B^T B}{\hat{\sigma}_{ky}^2(\hat{\Phi}_{ky})} \end{bmatrix}.$$
• Approximating $\frac{1}{2} \log |H(\hat{\Phi}_k)|$ by its asymptotic expected value $\frac{1}{2}(d_{kx} + d_{ky}) \log(m)$, the Bayesian information criterion (BIC) for selecting the structure of a B-spline curve is
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \left\{ -\frac{m}{2} \log \hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) - \frac{m}{2} \log \hat{\sigma}_{ky}^2(\hat{\Phi}_{ky}) - \frac{1}{2}(d_{kx} + d_{ky}) \log(m) \right\},$$
    where $d_{kx}$ and $d_{ky}$ are the numbers of control points in the x and y directions respectively, and m is the number of data points.
• In the conventional BIC criterion as shown in the above equation, the first two terms measure the estimation accuracy of the B-spline model. In general, the variance $\hat{\sigma}_k^2$ estimated via
    $$\hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) = \frac{1}{m} \sum_{k=0}^{m-1} \left[ x_k - B_k \hat{\Phi}_{kx} \right]^2$$
    tends to decrease as the number of control points increases. The smaller the variance $\hat{\sigma}_k^2$, the bigger the value of the first two terms (as the variance is much smaller than one) and therefore the higher the order (i.e. the more control points) of the model selected by the above criterion.
  • However, if too many control points are used, the B-spline model will over-fit noisy data points. An over-fitted B-spline model will have poor generalization capability. Model selection thus should achieve a proper tradeoff between the approximation accuracy and the number of control points of the B-spline model. With a conventional BIC criterion, the same data set is used for estimating both the control points of the B-spline model and the variances. Thus the first two terms in the above equation cannot detect the occurrence of over fitting in the B-spline model selected.
  • In theory, the third term in the above equation could penalize over-fitting as it appears directly proportional to the number of control points used. In practice, however, it may be noted that the effect of this penalty term is insignificant compared with that of the first two terms. As a result, the conventional BIC criterion is rather insensitive to the occurrence of over-fitting and tends to select more control points in the B-spline model to approximate the data point, which normally results in a model with poor generalization capability.
  • The reason for the occurrence of over-fitting in conventional BIC criterion lies in the way the variances σkx 2 and σky 2 are obtained. A reliable estimate of σkx 2 and σky 2 should be based on re-sampling of the data, in other words, the generalization capability of a B-spline model should be validated using another set of data points rather than the same data used in obtaining the model.
  • To achieve this, the available data may be divided into two sets: a training sample and a prediction sample. The training sample may be used only for model estimation, whereas the prediction sample may be used only for estimating data noise σkx 2 and σky 2.
  • For a candidate B-spline model Mk with dkx and dky control points in the x and y directions, the BIC may be evaluated via the following steps:
• 1) Estimate the model parameter $\hat{\Phi}_k$ using the training sample by $\hat{\Phi}_{kx} = [B^T B]^{-1} B^T x$, $\hat{\Phi}_{ky} = [B^T B]^{-1} B^T y$.
• 2) Estimate the data noise $\sigma_k^2$ using the prediction sample by $\hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) = \frac{1}{m} \sum_{k=0}^{m-1} [x_k - B_k \hat{\Phi}_{kx}]^2$.
• If the model $\hat{\Phi}_k$ fitted to the training data is valid, then the estimated variance $\hat{\sigma}_k^2$ from the prediction sample should also be a valid estimate of the data noise. It may be seen that the data noise $\sigma_k^2$ estimated from the prediction sample may be more sensitive to the quality of the model than one directly estimated from the training sample, as the variance estimated from the prediction sample also has the capability of detecting the occurrence of over-fitting.
• Thus, in one or more preferred embodiments, a Bayesian-based approach may be adopted as the model selection method. Based on the posterior model probabilities, the Bayesian-based approach estimates a probability distribution over an ensemble of models. The prediction is accomplished by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized. Given a set of models $\{M_k, k=1,2,\ldots,k_{max}\}$ and data r, the Bayesian approach selects the model with the largest posterior probability. To find this model, we evaluate $p(M_k \mid r)$ for $k = 1,2,\ldots,k_{max}$ and select the model that has the maximum $p(M_k \mid r)$, that is
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \{ p(M_k \mid r) \} = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \left\{ \frac{p(r \mid M_k)\, p(M_k)}{\sum_{L=1}^{k_{max}} p(r \mid M_L)\, p(M_L)} \right\}.$$
• Assuming that the models have the same likelihood a priori, so that $p(M_k) = 1/k_{max}$ ($k = 1, \ldots, k_{max}$), the model selection will not be affected by $p(M_k)$. This is also the case with $\sum_{L=1}^{k_{max}} p(r \mid M_L) p(M_L)$, since it is not a function of $M_k$. Consequently, we have
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \{ p(r \mid M_k) \}.$$
• Using Laplace's approximation for calculating the integration of multi-variable Gaussians, we can obtain the Bayesian information criterion (BIC) for selecting the structure of the B-spline curve:
    $$M = \arg\max_{M_k,\, k=1,\ldots,k_{max}} \left\{ -\frac{m}{2} \log \hat{\sigma}_{kx}^2(\hat{\Phi}_{kx}) - \frac{m}{2} \log \hat{\sigma}_{ky}^2(\hat{\Phi}_{ky}) - \frac{1}{2}(d_{kx} + d_{ky}) \log(m) \right\},$$
    where $d_{kx}$ and $d_{ky}$ are the numbers of control points in the x and y directions respectively, and m is the number of data points.
  • Here we divide the available data into two sets: a training sample and a prediction sample. The training sample is used only for model estimation, whereas the prediction sample is used only for estimating data noise. For a candidate B-spline model with its control points, the BIC is evaluated via the following steps:
  • 1) Estimate the model parameter using the training sample;
  • 2) Estimate the data noise using the prediction sample.
  • If the model fitted to the training data is valid, then the estimated variances from the prediction sample should also be a valid estimate of the data noise. If the variances found from the prediction sample are unexpectedly large, we have reasons to believe that the candidate model fits the data badly. It is seen that the data noise estimated from the prediction sample will thus be more sensitive to the quality of the model than the one directly estimated from training sample, as the variance estimated from the prediction sample also has the capability of detecting the occurrence of over-fitting.
  • We further define an entropy function which measures the information about the model, given the available data points. The entropy can be used as the measurement of the uncertainty of the model parameter.
  • Uncertainty Analysis
  • In this section, we will analyze the uncertainty of the B-spline model for guiding data selection so that new data points will maximize information on the B-spline model's parameter Φ. Here Φk is replaced by Φ to simplify the descriptions and to show that we may deal with the selected “best” B-spline model with dkx and dky control points.
  • To obtain the approximate B-spline model, we will predict the distribution of the information gain about the model's parameter Φ along each cross section curve. A measure of the information gain will be obtained whose expected value will be maximal when the new measurement data are acquired. The measurement is based on Shannon's entropy whose properties make it a sensible information measure here. We will describe the information entropy of the B-spline model and how to use it to achieve maximal information gain about the parameters of the B-spline model Φ.
  • Information Entropy of a B-Spline Model
• In a first preferred embodiment, given $\Phi$, the data points $r = (r_i)_{i=1}^m$ are assumed to be statistically independent, with Gaussian noise of zero mean and variance $\sigma^2$; the joint probability of $r = (r_i)_{i=1}^m$ may be denoted by
    $$p(r \mid \Phi) = \frac{1}{(2\pi\sigma^2)^{m/2}} \exp\left[ -\frac{1}{2\sigma^2} (r - B\Phi)^T (r - B\Phi) \right].$$
• The above equation has an asymptotic approximation representation defined by [27]
    $$p(r \mid \Phi) \approx p(r \mid \hat{\Phi}) \exp\left[ -\tfrac{1}{2} (\Phi - \hat{\Phi})^T H_m (\Phi - \hat{\Phi}) \right],$$
    where $\hat{\Phi}$ is the maximum likelihood estimate of $\Phi$ given the data points and $H_m$ is the Hessian matrix of $-\log p(r \mid \Phi)$ evaluated at $\hat{\Phi}$ given the data points $r = (r_i)_{i=1}^m$. The posterior distribution $p(\Phi \mid r)$ for the given data is approximately proportional to
    $$p(\Phi \mid r) \propto p(r \mid \hat{\Phi}) \exp\left[ -\tfrac{1}{2} (\Phi - \hat{\Phi})^T H_m (\Phi - \hat{\Phi}) \right] p(\Phi),$$
    where $p(\Phi)$ is the prior probability of the B-spline model parameters.
• If the prior has a Gaussian distribution with mean $\hat{\Phi}$ and covariance $H_m^{-1}$, we have
    $$p(\Phi \mid r) \propto \exp\left[ -\tfrac{1}{2} (\Phi - \hat{\Phi})^T H_m (\Phi - \hat{\Phi}) \right].$$
• From Shannon's information entropy, the conditional entropy of $p(\Phi \mid r)$ is defined by
    $$E_m(\Phi) = -\int p(\Phi \mid r) \cdot \log p(\Phi \mid r)\, d\Phi.$$
• If $p(\Phi \mid r)$ obeys a Gaussian distribution, the corresponding entropy is [28]
    $$E_m = \Delta + \tfrac{1}{2} \log\left( \det H_m^{-1} \right),$$
    where $\Delta$ is a constant.
• The entropy measures the information about the B-spline model parameters, given the data points $(r_1, \ldots, r_m)$. The more information about $\Phi$, the smaller the entropy will be. In this work, we use the entropy as the measurement of the uncertainty of the model parameter $\Phi$.
• Thus, to minimize $E_m$, we will make $\det H_m^{-1}$ as small as possible.
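• As a small illustration of this measure (a sketch under the Gaussian assumptions above, using the block Hessian given earlier, with hypothetical inputs), the entropy up to its additive constant can be computed from the design matrix and the two noise variances:

```python
import numpy as np

def bspline_entropy(B, var_x, var_y):
    """Entropy (up to an additive constant) of the B-spline parameter
    posterior: E_m = const + 0.5 * log det(H_m^{-1}), with the block
    Hessian H_m = diag(B^T B / var_x, B^T B / var_y)."""
    n_ctrl = B.shape[1]
    sign, logdet_BtB = np.linalg.slogdet(B.T @ B)
    # log det H_m = 2 * log det(B^T B) - n_ctrl * (log var_x + log var_y)
    log_det_H = 2.0 * logdet_BtB - n_ctrl * (np.log(var_x) + np.log(var_y))
    return -0.5 * log_det_H        # 0.5 * log det(H_m^{-1}), constant dropped
```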
• In a further preferred embodiment, for parameter $\Phi$, the joint probability of $r = (r_i)_{i=1}^m$ has the asymptotic approximation representation
    $$p(r \mid \Phi) \approx p(r \mid \hat{\Phi}) \exp\left[ -\tfrac{1}{2} (\Phi - \hat{\Phi})^T H_m (\Phi - \hat{\Phi}) \right],$$
    where $H_m$ is the Hessian matrix given the points $r = (r_i)_{i=1}^m$.
• Therefore, the posterior distribution for the given data may be approximately given as
    $$p(\Phi \mid r) \propto p(r \mid \hat{\Phi}) \exp\left[ -\tfrac{1}{2} (\Phi - \hat{\Phi})^T H_m (\Phi - \hat{\Phi}) \right] p(\Phi),$$
    where $p(\Phi)$ is the prior probability of the B-spline model parameters. If we assume that the prior probability over the B-spline model parameters is initialized as a uniform distribution over the interval in which they lie, we have
    $$p(\Phi \mid r) \propto \exp\left[ -\tfrac{1}{2} (\Phi - \hat{\Phi})^T H_m (\Phi - \hat{\Phi}) \right].$$
• It is easy to confirm that if $p(\Phi \mid r)$ obeys a Gaussian distribution, the corresponding entropy is [12]
    $$E_m = \Delta + \tfrac{1}{2} \log\left( \det H_m^{-1} \right),$$
    where $\Delta$ is a constant.
• The entropy measures the information about the B-spline model parameters, given the data points $(r_1, \ldots, r_m)$.
  • Thus, in a preferred embodiment, we select the entropy as the measurement of uncertainty of the model parameter Φ.
  • Information Gain
  • In order to predict the distribution of the information gain, a new data point rm+1 may be assumed to have been collected along a contour. The potential information gain is determined by incorporating the new data point rm+1. If we move the new point rm+1 along the contour, the distribution of the potential information gain along the whole contour may be obtained.
• To derive the relationship between the information gain and the new data point $r_{m+1}$, we first assume that the new data point has been collected. Let $p(\Phi \mid r_1, \ldots, r_m, r_{m+1})$ be the probability distribution of the model parameter $\Phi$ after the new point $r_{m+1}$ is added. Its corresponding entropy is
    $$E_{m+1} = \Delta + \tfrac{1}{2} \log\left( \det \hat{H}_{m+1}^{-1} \right).$$
    The information gain is then
    $$\Delta E = E_m - E_{m+1} = \tfrac{1}{2} \log \frac{\det H_m^{-1}}{\det H_{m+1}^{-1}}.$$
• From the block form of the Hessian $H(\hat{\Phi}_k)$ given above, the new data point $r_{m+1}$ will incrementally update the Hessian matrix as follows:
    $$H_{m+1} \approx H_m + \begin{bmatrix} \dfrac{1}{\sigma_x^2} \bar{B}_{m+1}^T \bar{B}_{m+1} & 0 \\ 0 & \dfrac{1}{\sigma_y^2} \bar{B}_{m+1}^T \bar{B}_{m+1} \end{bmatrix},$$
    where $\hat{\sigma}_{m+1}^2 \approx \hat{\sigma}_m^2$ and $\bar{B}_{m+1}$ is defined by
    $$\bar{B}_{m+1} = \left[ \bar{B}_{0,4}^{m+1} + \bar{B}_{n+1,4}^{m+1},\ \bar{B}_{1,4}^{m+1} + \bar{B}_{n+2,4}^{m+1},\ \ldots,\ \bar{B}_{n,4}^{m+1} \right].$$
• The determinant of $H_{m+1}$,
    $$\det H_{m+1} \approx \det\left[ I + \begin{bmatrix} \dfrac{1}{\hat{\sigma}_x^2} \bar{B}_{m+1}^T \bar{B}_{m+1} & 0 \\ 0 & \dfrac{1}{\hat{\sigma}_y^2} \bar{B}_{m+1}^T \bar{B}_{m+1} \end{bmatrix} H_m^{-1} \right] \cdot \det H_m,$$
    can be simplified to
    $$\det H_{m+1} \approx \left( 1 + \bar{B}_{m+1} [B^T B]^{-1} \bar{B}_{m+1}^T \right)^2 \cdot \det H_m.$$
• Since $\det H^{-1} = 1/\det H$, the information gain
    $$\Delta E = E_m - E_{m+1} = \tfrac{1}{2} \log \frac{\det H_m^{-1}}{\det H_{m+1}^{-1}}$$
    can be simplified to
    $$\Delta E = \log\left( 1 + \bar{B}_{m+1} [B^T B]^{-1} \bar{B}_{m+1}^T \right).$$
• Assuming that the new additional data point $r_{m+1}$ travels along the contour, the resulting potential information gain of the B-spline model will change according to $\Delta E$ above. In order to reduce the uncertainty of the model, it may be desirable to place the new data point at a location where the attainable potential information gain is largest. Therefore, after reconstructing the section curve by fitting partial data acquired from previous viewpoints, the Next Best Viewpoint should be selected as the one that senses those new data points which give the largest possible potential information gain for the B-spline model.
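• A minimal sketch of this prediction step is shown below, assuming a design matrix B built from the data acquired so far and a set of candidate basis rows, one per candidate location along the contour; the names are illustrative.

```python
import numpy as np

def predicted_information_gain(B, b_new):
    """Predicted entropy reduction for one new point with (wrapped) basis
    row b_new: Delta E = log(1 + b_new (B^T B)^{-1} b_new^T)."""
    BtB_inv = np.linalg.inv(B.T @ B)
    return float(np.log(1.0 + b_new @ BtB_inv @ b_new))

def gain_along_contour(B, candidate_rows):
    """Evaluate the predicted gain at each candidate location along the
    cross-section contour (one basis row per location)."""
    return np.array([predicted_information_gain(B, b) for b in candidate_rows])
```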
  • Thus, in order to predict the distribution of the information gain, we assume a new data point collected along a contour. The potential information gain is determined by incorporating the new data point. If we move the new point along the contour, the distribution of the potential information gain along the whole contour can be obtained. Now, we will derive the relationship between the information gain and the new data point.
• As mentioned above, the new data points will incrementally update the Hessian matrix. In order to reduce the uncertainty of the model, we would like to place the new data point at a location where the attainable potential information gain is largest. Therefore, after reconstructing the section curve by fitting partial data acquired from previous viewpoints, the Next Best Viewpoint should be selected as the one that senses those new data points which give the largest possible potential information gain for the model.
  • Next Best View
  • The task in the view planning here is to obtain the visibility regions in the viewing space that contain the candidate viewpoints where the missing information about the 3D object can be obtained. The NBV should be the viewpoint that can give maximum information about the object. We need to map the predicted information gain to the view space for viewpoint planning. For a viewpoint, we say that a data point on the object is visible if the angle between its normal and the view direction is smaller than a breakdown angle of the sensor. The view space for each data point is the set of all possible viewpoints that can see it. The view space can be calculated via the following procedure:
  • 1) Calculating the normal vector of a point on the object, using a least square error fitting of a local surface patch in its neighbourhood.
  • 2) Extracting viewpoints from which the point is visible. These viewpoints are denoted as view space.
• After the view space is extracted, we construct a measurement matrix. The components of the measurement matrix are given as
    $$m_{k,j} = \begin{cases} n_k \cdot v_j & \text{if } r_k \text{ is visible to } v_j \\ 0 & \text{otherwise,} \end{cases} \tag{30}$$
    where $n_k$ is the normal of surface point $r_k$ and $v_j$ is the direction vector of viewpoint $v_j$.
  • Then, for each view, we define a global measure of the information gain as the criterion to be summed over all visible surface points seen under this view of the sensor. This measure is defined by I j ( p j ) = k R j m k , j · Δ E k
    where $p_j$ contains the location parameters of the viewpoint and $\Delta E_k$ is the information gain at surface point $r_k$, weighted by $m_{k,j}$.
  • Therefore, the Next Best View $p^*$ is the one that maximizes the information gain function $I(p)$:
    $$p^* = \arg\max_{j} I_j(p_j)$$
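  • As an illustrative sketch (with hypothetical array inputs, not the patented implementation), the selection rule above can be realized by forming the measurement matrix $m_{k,j} = n_k\cdot v_j$ for visible point/viewpoint pairs, summing the weighted information gains per viewpoint, and taking the viewpoint with the largest sum:

```python
import numpy as np

def next_best_view(normals, visible, view_dirs, delta_E):
    """Pick the viewpoint index maximizing I_j = sum_{k in R_j} m_{k,j} * Delta E_k,
    where m_{k,j} = n_k . v_j if point k is visible from viewpoint j, else 0.

    normals   : (K, 3) unit surface normals n_k
    visible   : (K, J) boolean visibility of point k from viewpoint j
    view_dirs : (J, 3) unit direction vectors v_j of the candidate viewpoints
    delta_E   : (K,)   predicted information gain at each surface point
    """
    M = normals @ view_dirs.T             # n_k . v_j for every pair (k, j)
    M = np.where(visible, M, 0.0)         # zero out the invisible pairs
    I = delta_E @ M                       # I_j for every candidate viewpoint
    return int(np.argmax(I)), I
```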
  • At the new viewpoint, another set of data is acquired, registered, and integrated with the previous partial model. This process is repeated until all data are acquired to build a complete model of the 3D surface. The terminating condition is defined via the information gain. When there are missing data, the information gain will have outstanding peaks where data are missing. When all data have been obtained, there will be no obvious peaks; rather, the information gain will appear noise-like, indicating that the terminating condition is satisfied.
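  • As a sketch only (the text does not prescribe a specific test), one simple way to implement this terminating condition is to stop when the maximum information gain over the view space no longer stands out from its noise-like background:

```python
import numpy as np

def should_terminate(I, peak_factor=3.0):
    """Return True when the information gain distribution I over the view
    space shows no outstanding peak, i.e. its maximum is not significantly
    above the background level (peak_factor is a hypothetical heuristic)."""
    I = np.asarray(I, dtype=float)
    background = np.median(I)
    spread = np.std(I) + 1e-12            # avoid division by zero
    return (I.max() - background) / spread < peak_factor
```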
  • In planning the viewpoint, we can also specify the vision system's configuration parameters. The configuration parameters can include optical settings of the camera and projector as well as the relative position and orientation between the camera and projector. The planning needs to satisfy multiple constraints including visibility, focus, field of view, viewing angle, resolution, overlap, occlusion, and some operational constraints such as kinematic reachability of the sensor pose and robot-environment collision. A complete cycle in the incremental modeling process is illustrated in FIG. 9. As shown in FIG. 9, in a first stage static calibration and first view acquisition is carried out. In a second stage, 3D reconstruction via a single view is performed. Next, 3D model registration and fusion is performed followed by the determination of a next viewpoint decision and terminating condition. Sensor reconfiguration follows this step and recalibration is performed. The process may then be repeated from the 3D reconstruction stage.
  • FIG. 10 shows a flow diagram of information entropy based viewpoint planning for digitization of a 3D object according to a preferred embodiment. In a first stage, 3D data is acquired from another viewpoint. Next, multiple view range images are registered. In the next stage, a B-spline model is selected and the model parameters of each cross section curve are estimated. Following this, the uncertainty of each cross section B-spline curve is analyzed and the information gain of the object is predicted. Next, information gain about the object is mapped into a view space. Candidate viewpoints are then evaluated and the NBV selected. The process may then be repeated.
  • In a preferred embodiment, the candidate viewpoints may be represented in a tessellated spherical view space by subdividing recursively each triangular facet of an icosahedron. If we assume that the view space is centered at the object, and its radius is equal to an a priori specified distance from the sensor to the object, each viewpoint may be represented by pan-tilt angles φ([−180°, 180°]) and θ([−90°, 90°]), denoted as v(θ,φ).
  • A data point on the object may be considered visible from a viewpoint v(θ,φ) if the angle between its normal and the view direction is smaller than a breakdown angle α for the range sensor being used. The view space Vk for each point rk (k=1,2, . . . ) to be sensed by the range sensor is the set of all possible directions from which rk can be accessed. The view space Vk may be calculated via the following procedure:
  • 1) Calculating the normal vector nk of a point rk (k=1,2, . . . ) on the object, using a least square error fitting of a 3×3 local surface patch in its neighborhood.
  • 2) Extracting viewpoints from which rk is visible. These viewpoints are denoted as view space Vk.
  • After the view space Vk, (k=1,2, . . . ), has been extracted, the measurement matrix M may be constructed. The column vector of M corresponding to viewpoint vj relates to the set Rj of points visible from vj, while the row vector corresponding to point rk relates to its view space Vk. The components $m_{k,j}$ of the l-by-w measurement matrix may be defined as follows:
    $$m_{k,j} = \begin{cases} n_k \cdot v_j & \text{if } r_k \text{ is visible to } v_j \\ 0 & \text{otherwise} \end{cases}$$
    where vj is the direction vector of viewpoint vj.
  • Then, for each view v(θ,φ), the View Space visibility may be defined, which measures the global information gain I(θ,φ) by
    $$I_j(\theta_j, \phi_j) = \sum_{k \in R_j} m_{k,j}\cdot \Delta E_k$$
    where $\Delta E_k$ is the information gain at surface point $r_k$, weighted by $m_{k,j}$.
  • Therefore, the Next Best View (θ*,φ*) may be considered to be the one that maximizes the information gain function I(θ,φ):
    $$(\theta^*, \phi^*) = \arg\max_{\theta_j,\,\phi_j} I_j(\theta_j, \phi_j)$$
    View Space Representation
  • View space is a set of 3D positions where the sensor (vision system) takes measurements. If we assume that the 3D object is within the field of view and the depth of view of the vision system, and that the optical settings of the vision system are fixed, then the parameters of the vision system to be planned are the viewing positions of the sensor. As in the embodiment described above, in this embodiment the candidate viewpoints are represented in a spherical viewing space. The viewing space is usually a continuous spherical surface. To reduce the number of viewpoints used in practice, it is necessary to discretize the surface by some kind of tessellation.
  • In general, there are two methods for tessellating a view sphere, namely latitude-longitude based methods and icosahedron based methods. For a latitude-longitude based tessellation, the distribution of viewpoints varies considerably from the poles to the equator. For this reason, uniformly segmented geodesic tessellation is widely used [29, 30, 31]. This method tessellates the sphere by subdividing recursively each triangular facet of the icosahedron. Using the geodesic dome construction technique, the constructed dome contains 20×Q² triangles and 10×Q²+2 vertices, where Q is the frequency of the geodesic division. The vertices of the triangles represent the candidate viewpoints.
  • By way of example, a rhombus-shaped array data structure may be used [30]. For example, we may calculate the view space with Q=16 as shown in FIG. 11(a). In addition, if we assume that the view space is centered around the object, and its radius is equal to an a priori specified distance from the sensor to the object, as shown in FIG. 11(b), then, since the optical axis of the sensor passes through the center of the object, the viewpoint may be represented by pan-tilt angles φ([−180°, 180°]) and θ([−90°, 90°]).
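  • The following sketch assumes recursive midpoint subdivision of the icosahedron (so the geodesic frequency is Q = 2^levels, giving 10×Q²+2 candidate viewpoints) together with the pan-tilt parameterization described above; the function names are illustrative only and not part of the original disclosure.

```python
import numpy as np

def icosahedron():
    """Unit-sphere vertices and triangular faces of an icosahedron."""
    t = (1.0 + 5 ** 0.5) / 2.0
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [np.array(v, dtype=float) / np.linalg.norm(v) for v in verts]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts, faces

def tessellated_view_space(levels=2, radius=1.0):
    """Candidate viewpoints from recursive subdivision of the icosahedron.
    After `levels` subdivisions the frequency is Q = 2**levels and the dome
    has 10*Q**2 + 2 vertices, each returned with its pan-tilt angles."""
    verts, faces = icosahedron()
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))   # push new vertex onto the sphere
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    for _ in range(levels):
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces

    pts = radius * np.array(verts)                         # viewpoints on the view sphere
    pan = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))     # phi in [-180, 180] degrees
    tilt = np.degrees(np.arcsin(pts[:, 2] / radius))       # theta in [-90, 90] degrees
    return pts, pan, tilt
```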
  • According to the representation of the viewing space, the fundamental task in the view planning here is to obtain the visibility regions in the viewing space that contain the candidate viewpoints where the missing information about the 3D object can be obtained without occlusions. The NBV should be the viewpoint that can give maximum information about the object.
  • With the above view space representation, we can now map the predicted information gain to the view space for viewpoint planning. For a viewpoint v(θ,φ), we say one data point on the object is visible if the angle between its normal and the view direction is smaller than a breakdown angle α of the sensor. The view space Vk for each data point rk (k=1,2, . . . ) is the set of all possible viewpoints that can see rk. The view space Vk can be calculated via the following procedure:
  • 1) Calculating the normal vector nk of a point rk (k=1,2, . . . ) on the object, using a least square error fitting of a 3×3 local surface patch in its neighborhood (a sketch of this fit follows this two-step procedure).
  • 2) Extracting viewpoints from which rk is visible. These viewpoints are denoted as view space Vk.
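  • As a sketch of step 1 above, one common realization of the least-squares fit (assumed here: a total least-squares plane fit via the SVD, not necessarily the fit used in the original disclosure) takes the normal as the direction of least variance of the 3×3 neighborhood points:

```python
import numpy as np

def patch_normal(patch_points):
    """Estimate the surface normal at the centre of a 3x3 neighborhood of
    range points by least-squares plane fitting (total least squares).

    patch_points : (9, 3) array of the 3D points in the local patch
    """
    pts = np.asarray(patch_points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # The right singular vector for the smallest singular value spans the
    # direction of least variance of the patch, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n = vt[-1]
    return n / np.linalg.norm(n)
```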
  • After the view space Vk, (k=1,2, . . . ), is extracted, we construct an l-by-w measurement matrix M, whose components $m_{k,j}$ may be given as
    $$m_{k,j} = \begin{cases} n_k \cdot v_j & \text{if } r_k \text{ is visible to } v_j \\ 0 & \text{otherwise} \end{cases}$$
    where vj is the direction vector of viewpoint vj.
  • Then, for each view v(θ,φ), we define a global measure of the information gain I(θ,φ) as the criterion to be summed over all visible surface points seen under this view of the sensor. I(θ,φ) is defined by
    $$I_j(\theta_j, \phi_j) = \sum_{k \in R_j} m_{k,j}\cdot \Delta E_k$$
    where $\Delta E_k$ is the information gain at surface point $r_k$, weighted by $m_{k,j}$.
  • Therefore, the Next Best View (θ*,φ*) is the one that maximizes the information gain function I(θ,φ):
    $$(\theta^*, \phi^*) = \arg\max_{\theta_j,\,\phi_j} I_j(\theta_j, \phi_j)$$
  • In summary, one or more preferred embodiments of the present invention provide a viewpoint planning method that incrementally reduces the uncertainty of a closed B-spline curve. Also proposed is an improved BIC criterion for model selection, which accounts well for the acquired data. By representing the object with a series of relatively simple cross section curves, it is possible to define entropy as a measurement of uncertainty to predict the information gain for a cross section B-spline model. Based on that, it is possible to establish View Space Visibility and select the viewpoint with maximum visibility as the Next Best View.
  • One or more embodiments of the present invention may find particular application in the following fields but application of the invention is not to be considered limited to the following:
      • in reverse engineering, to obtain a digitized 3D data/model of a physical product;
      • human body measurements for the apparel industry or for tailor made clothing design;
      • advanced object recognition, product inspection and manipulation;
      • environment model construction for virtual reality;
      • as a 3D sensor for robotic exploration/navigation in cluttered environments.
  • One or more preferred embodiments of the invention may have particular advantages in that by using encoded patterns projected over an area on the object surface, high speed 3D imaging may be achieved. Also, automated self-recalibration of the system may be performed when the system's configuration is changed or perturbed. In a further preferred embodiment, uncalibrated 3D reconstruction may be performed. Furthermore, in a preferred embodiment real Euclidean reconstruction of a 3D surface may be achieved.
  • It will be appreciated that the scope of the present invention is not restricted to the described embodiments. For example, whilst the embodiments have been described in terms of four sensors and four variable gain control components, a different number of such components may be used. Numerous other modifications, changes, variations, substitutions and equivalents will therefore occur to those skilled in the art without departing from the spirit and scope of the present invention.
  • The results of a series of experiments conducted in respect of a number of preferred embodiments according to the present invention are set out in the attached Schedule 1, the contents of which are incorporated herein in total. Furthermore, details of the application of a number of preferred embodiments of the present invention to uncalibrated Euclidean 3D reconstruction using an active vision system according to an embodiment of the present invention are set out in Schedule 2, the contents of which are incorporated herein in total.
  • REFERENCES
  • The contents of the following documents which have been referred to throughout the specification are hereby incorporated herein by reference:
    • [1] Y. F. Li and S. Chen, Automatic Recalibration of an Active Structured Light Vision System, IEEE Transactions on Robotics and Automation. 19(2): 259-268, April 2003.
    • [2] R. Pito, A Solution to the Next Best View Problem for Automated Surface Acquisition, IEEE Trans. Pattern Analysis and Machine Intelligence, 21(10):1016-1030, October 1999.
    • [3] C. I. Connolly, The Determinant of Next Best Views, Proc. IEEE Intl. Conf. on Robotics and Automation, pp. 432-435, 1985.
    • [4] J. Maver and R. Bajcsy, Occlusions as a Guide for Planning the Next View, IEEE Trans. Pattern Analysis and Machine Intelligence, 15(2): 417-433, February 1993.
    • [5] P. Whaite and F. P. Ferrie, Autonomous Exploration: Driven by Uncertainty, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(3):193-205, March 1997.
    • [6] C. I. Connolly, The Determinant of Next Best Views, Proc. IEEE Intl. Conf. on Robotics and Automation, pp. 432-435, 1985.
    • [7] J. Maver and R. Bajcsy, Occlusions as a Guide for Planning the Next View, IEEE Trans. Pattern Analysis and Machine Intelligence, 15(2): 417-433, February 1993.
    • [8] R. Pito, A Solution to the Next Best View Problem for Automated Surface Acquisition, IEEE Trans. Pattern Analysis and Machine Intelligence, 21(10):1016-1030, October 1999.
    • [9] P. Whaite and F. P. Ferrie, Autonomous Exploration: Driven by Uncertainty, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(3):193-205, March 1997.
    • [10] M. K. Reed, P. K. Allen, 3D Modeling from Range Imagery: An Incremental Method with a Planning Component, Image and Vision Computing, 17(2):99-111, 1999.
    • [11] W. Scott, G. Roth and J. Rivest, View Planning with a Registration Constraint, IEEE Int. Conf. Recent Advances in 3D Digital Imaging and Modeling, pages 127-134, 2001.
    • [12] A. M. McIvor, “Nonlinear Calibration of a Laser Stripe Profiler”, Optical Engineering, vol. 41, no. 1, January 2002, pp. 205-212.
    • [13] D. Q. Huynh, “Calibration of a Structured Light System: A Projective Approach”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1997, pp. 225-230.
    • [14] R. J. Valkenburg and A. M. McIvor, “Accurate 3D measurement using a structured light system”, Image and Vision Computing, vol. 16, no. 2, February 1998, pp. 99-110.
    • [15] A. Zomet, L. Wolf and A. Shashua, “Omni-rig: linear self-recalibration of a rig with varying internal and external parameters”, Proc. Eighth IEEE Int. Conf. on Computer Vision, vol. 1, 2001, pp. 135-141.
    • [16] J. Dias, A. de Almeida, H. Araújo and J. Batista, “Camera Recalibration with Hand-Eye Robotic System”, IECON 91, Kobe, Japan, October 1991.
    • [17] C. T. Huang and O. R. Mitchell, “Dynamic camera calibration”, Proc. Int. Symposium on Computer Vision, 1995, pp. 169-174.
    • [18] S. B. Kang, “Catadioptric self-calibration”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 1, 2000, pp. 201-207.
    • [19] Y. Seo and K. S. Hong, “Theory and practice on the self-calibration of a rotating and zooming camera from two views”, IEE proc. on Vision, Image and Signal Processing, vol. 148, no. 3, June 2001, pp. 166-172.
    • [20] Y. Ma, R. Vidal, J. Kosecka and S. Sastry, “Kruppa's Equations Revisited: its Degeneracy, Renormalization and Relations to Chirality”, Proc. of European Conf. on Computer Vision, Trinity College Dublin, Ireland, 2000.
    • [21] A. Bartoli, P. Sturm and R. Horaud, “Structure and Motion from Two Uncalibrated Views Using Points on Planes”, Proc. of the third Int. Conf. on 3D Digital Imaging and Modeling, Quebec City, Canada, pp. 83-90, June 2001.
    • [22] A. Fusiello, “Uncalibrated Euclidean reconstruction: A review”, Image and Vision Computing, 18(6-7), May 2000, pp. 555-563.
    • [23] S. J. Maybank and O. D. Faugeras, A theory of self-calibration of a moving camera, International Journal of Computer Vision, Vol. 8, No. 2, pp. 123-151, November 1992.
    • [24] O. Faugeras, What can be seen in three dimensions with an uncalibrated stereo rig? Computer Vision—ECCV'92, Lecture Notes in Computer Science, Proc. of the Second European Conference on Computer Vision, Santa Margherita Ligure, Italy, pp. 563-578, May 1992.
    • [25] R. I. Hartley, Euclidean reconstruction from uncalibrated views, Applications of Invariance in Computer Vision, Lecture Notes in Computer Science, 852, Springer, Berlin, pp. 237-256, 1993.
    • [26] O. Faugeras, Stratification of three-dimensional vision: projective, affine, and metric representations, Journal of the Optical Society of America A, Vol. 12, No. 3, pp. 465-484, March 1994.
    • [27] M. Pollefeys, L. Van Gool and M. Proesmans, Euclidean 3D reconstruction from image sequences with variable focal lengths, Proc. European Conference on Computer Vision, Cambridge, UK, Vol. 1, pp. 31-42, 1996.
    • [28] Y. Seo and K. S. Hong, About the self-calibration of a rotating and zooming camera: Theory and practice, Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, Vol. 1, pp. 183-189, September 1999.
    • [29] H. Kim and K. S. Hong, A practical self-calibration method of rotating and zooming cameras, Proceedings 15th International Conference on Pattern Recognition, Barcelona, Spain, Vol. 1, pp. 354-357, September 2000.
    • [30] A. Heyden and K. Astrom, Euclidean reconstruction from image sequences with varying and unknown focal length and principal point, Proc. of IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, pp. 438-443, June 1997.
    • [31] M. Pollefeys, R. Koch and L. V. Gool, Self-calibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters, International Journal of Computer Vision, Vol. 32, No. 1, pp. 7-25, August 1999.
    • [32] F. Kahl and A. Heyden, Robust self-calibration and Euclidean reconstruction via affine approximation, Proceedings Fourteenth International Conference on Pattern Recognition, Brisbane, Australia, Vol. 1, pp. 56-58, August 1998.
    • [33] Y. F. Li and Z. Liu, Method for Determining the Probing Points for Efficient Measurement and Reconstruction of Freeform Surfaces, Measurement Science and Technology, Vol. 14, No. 8, August 2003.
    • [34] Y. F. Li and S. Chen, Automatic Recalibration of an Active Structured Light Vision System, IEEE Transactions on Robotics and Automation, Vol. 19, No. 2, pp. 259-268, April 2003.
    • [35] S. Chen and Y. F. Li, Dynamically Reconfigurable Visual Sensing for 3D Perception, Proc. IEEE International Conference on Robotics and Automation, Taipei, Taiwan, September 2003.
    • [36] D. Fofi, J. Salvi and E. Mouaddib, “Uncalibrated Vision based on Structured Light”, IEEE Int. Conf. on Robotics and Automation, Seoul, Korea, May 2001.
    • [37] O. Jokinen, “Self-calibration of a light striping system by matching multiple 3-D profile maps”, Proc. Second Int. Conf. on 3-D Digital Imaging and Modeling, Ottawa, 1999, pp. 180-190.
    • [38] C. W. Chu, S. Hwang and S. K. Jung, “Calibration-free Approach to 3D Reconstruction Using Light Stripe Projections on a Cube Frame”, Proc. IEEE 3rd Int. Conf. on 3D Digital Imaging and Modeling, Quebec City, Canada, June 2001, pp. 13-19.
    • [39] S. Y. Chen and Y. F. Li, “Self Recalibration of a Structured Light Vision System from a Single View”, Proc. 2002 IEEE Int. Conf. on Robotics and Automation, Washington D.C., May 2002, pp. 2539-2544.
    • [40] Y. F. Li and S. Y. Chen, “Automatic Recalibration of an Active Vision System Using a Single View”, IEEE Trans. on Robotics and Automation, vol. 19, no. 2, April 2003.
    • [41] D. Fofi, E. M. Mouaddib and J. Salvi, How to self-calibrate a structured light sensor, Proc. 9th International Symposium on Intelligent Robotic System, Toulouse, France, July 2001.
    • [42] A. Fusiello, Uncalibrated Euclidean reconstruction: a review, Image and Vision Computing, Vol. 18, No. 6-7, pp. 555-563, May 2000.
    • [43] S. Fernand and Y. Wang, “Part 1: Modeling Image Curves Using Invariant 3D Object Curve Models: A Path to 3D Recognition and Shape Estimation from Image Contours”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 16, No. 1, pp. 1-12, 1994.
    • [44] D. Keren, D. B. Cooper and J. Subrahmonia, “Describing Complicated Objects by Implicit Polynomials”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 16, No. 2, pp. 38-53, 1994.
    • [45] G. Taubin, “Estimation of Planar Curves, Surfaces and Nonplanar Space Curves Defined by Implicit Equations, with Application to Edge and Range Image Segmentation”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 13, No. 11, pp. 1115-1138, 1991.
    • [46] P. Whaite and P. P. Ferrie, “Autonomous Exploration: Driven by Uncertainty”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 19, No. 3, pp. 193-205, 1997.
    • [47] D. Keren, D. B. Cooper and J. Subrahmonia, Describing Complicated Objects by Implicit Polynomials, IEEE Trans. Pattern Analysis and Machine Intelligence, 16(2):38-53, February 1994.
    • [48] Z. Yan, B. Yang, and C. Menq, “Uncertainty Analysis and Variation Reduction of Three Dimensional Coordinate Metrology. Part 1: Geometric Error Decomposition,” International Journal of Machine Tools and Manufacture, Vol. 39, No. 8, pp. 1199-1217, 1999.
    • [49] H. Akaike, “A New Look at the Statistical Model Identification,” IEEE Trans. Automatic Control, Vol. 19, No. 6, pp. 716-726, 1974.
    • [50] S. Konishi and G. Kitagawa, “Generalized Information Criterion in Model Selection”, Biometrika, Vol. 83, pp. 875-890, 1996.
    • [51] M. Sugiyama and H. Ogawa, “Subspace Information Criterion for Model Selection,” Neural Computation, Vol. 13, No. 8, pp. 1863-1889, 2001.
    • [52] G. Schwarz, “Estimating the Dimension of a Model”, Annals of Statistics, Vol. 6, pp. 461-464, 1978.
    • [53] P. Torr, “Bayesian Model Estimation and Selection for Epipolar Geometry and Generic Manifold Fitting,” International Journal of Computer Vision, Vol. 50, No. 1, pp. 35-61, 2002.
    • [54] P. M. Djuric, “Asymptotic MAP Criteria for Model Selection”, IEEE Trans. Signal Processing, Vol. 46, No. 10, pp. 2726-2734, 1998.
    • [55] V. Cherkassky, X. Shao, F. Mulier, and V. Vapnik, “Model Complexity Control for Regression Using VC Generalization Bounds,” IEEE Trans. Neural Networks, Vol. 10, No. 5, pp. 1075-1089, 1999.

Claims (44)

1. A method for measuring and surface reconstruction of a 3D image of an object comprising:
projecting a pattern onto a surface of an object to be imaged;
examining in a processor stage distortion or distortions produced in said pattern by said surface;
converting in said processor stage said distortion or distortions produced in said pattern by said surface to a distance representation representative of the shape of the surface; and
reconstructing electronically said surface shape of said object.
2. A method according to claim 1, wherein the step of projecting a pattern comprises projecting a pattern of rectangles onto a surface of an object to be imaged.
3. A method according to claim 1, wherein the step of projecting a pattern comprises projecting a striped pattern onto a surface of an object to be imaged.
4. A method according to claim 1, wherein the step of projecting a pattern comprises projecting a pattern of squares onto a surface of an object to be imaged.
5. A method according to claim 1, wherein the step of projecting a pattern comprises projecting a pattern using an LCD projector.
6. A method according to claim 1, wherein the step of projecting a pattern comprises projecting a colour-coded array pattern onto a surface of an object to be imaged.
7. A method according to claim 1, further comprising viewing using a camera said pattern projected onto said surface and passing one or more signals from said camera representative of said pattern to said processing stage.
8. A method according to claim 7, wherein the step of viewing using a camera comprises viewing using a CCD camera.
9. A method according to claim 7, wherein said step of projecting comprises projecting using a projector, said method further comprising arranging said camera and said projector to have 6 degrees of freedom relative to each other.
10. A method according to claim 9, wherein said step of arranging comprises arranging said camera and said projector to have 3 linear degrees of freedom and 3 rotational degrees of freedom relative to each other.
11. A method according to claim 1, wherein said step of projecting comprises projecting using a projector, the method further comprising calibrating said projector prior to projecting said pattern.
12. A method according to claim 9, further comprising automatically reconfiguring one or more settings of said degrees of freedom if said one or more settings are varied during operation.
13. A method according to claim 12, wherein said step of reconfiguring comprises taking a single image of said surface for reconfiguring one or more external parameters of said camera and/or said projector.
14. A method according to claim 13, wherein said step of reconfiguring comprises taking a further image of said surface for reconfiguring one or more internal parameters of said camera and/or said projector.
15. A method according to claim 1, further comprising viewing said surface obliquely to monitor distortion or distortions in said pattern.
16. A method according to claim 1, wherein said step of reconstructing comprises reconstructing said surface from a single image.
17. A method according to claim 1, wherein said step of reconstructing comprises reconstructing said surface from two or more images taken from different positions if one or more portions of said image are obscured in a first image taken.
18. A method according to claim 1, wherein said step of examining comprises:
slicing in said processor stage said pattern as distorted by said surface into a number of cross section curves;
reconstructing one or more of said cross-section curves by a closed B-spline curve technique;
selecting a control point number of B-spline models from said one or more curves;
determining using entropy techniques representation of uncertainty in said selected B-spline models to predict the information gain for each cross section curve;
mapping said information gain of said B-spline models into a view space; and
selecting as the Next Best View a view point in said view space containing maximum information gain for said object.
19. A method according to claim 18, wherein a Bayesian information criterion (BIC) is applied for selecting the control point number of B-spline models from said one or more curves.
20. A method according to claim 18, further comprising terminating said method when said entire surface of said object has been examined and it has been determined that there is no further information to be gained from said surface.
21. A method according to claim 1, further comprising taking metric readings from said reconstructed surface shape.
22. A method according to claim 1, wherein said step of converting said distortion or distortions comprises converting using a triangulation process.
23. A system for measuring and surface reconstruction of a 3D image of an object comprising:
a projector arranged to project a pattern onto a surface of an object to be imaged;
a processor stage arranged to examine distortion or distortions produced in said pattern by said surface;
said processor stage further being arranged to convert said distortion or distortions produced in said pattern by said surface to a distance representation representative of the shape of the surface; and
said processor stage being arranged to reconstruct electronically said surface shape of said object.
24. A system according to claim 23, wherein said pattern comprises an array of rectangles.
25. A system according to claim 23, wherein said pattern comprises an array of stripes.
26. A system according to claim 23, wherein said pattern comprises an array of squares.
27. A system according to claim 23, wherein said projector comprises an LCD projector.
28. A system according to claim 23, wherein said pattern comprises a colour-coded array pattern.
29. A system according to claim 23, further comprising a camera arranged to view said pattern projected onto said surface; said camera being arranged to pass one or more signals representative of said pattern to said processor.
30. A system according to claim 29, wherein said camera comprises a CCD camera.
31. A system according to claim 29, wherein said projector and said camera are arranged to have 6 degrees of freedom relative to each other.
32. A system according to claim 31, wherein said projector and said camera are arranged to have 3 linear degrees of freedom and 3 rotational degrees of freedom relative to each other.
33. A system according to claim 23, wherein said projector is calibrated prior to projecting said pattern.
34. A system according to claim 29, wherein said processor is arranged to automatically reconfigure one or more settings of said degrees of freedom if said one or more settings are varied during operation.
35. A system according to claim 34, wherein said processor is arranged to reconfigure said one or more settings by taking a single image of said surface for reconfiguring one or more external parameters of said camera and/or said projector.
36. A system according to claim 35, wherein said processor is arranged to reconfigure said one or more settings by taking a further image of said surface for reconfiguring one or more internal parameters of said camera and/or said projector.
37. A system according to claim 29, wherein said camera is arranged to view said surface obliquely to monitor distortion or distortions in said pattern.
38. A system according to claim 23, wherein said processor is arranged to reconstruct said surface from a single image.
39. A system according to claim 23, wherein said processor is arranged to reconstruct said surface from two or more images taken from different positions if one or more portions of said image are obscured in a first image taken.
40. A system according to claim 23, wherein said processor is arranged to:
slice in said processor stage said pattern as distorted by said surface into a number of cross section curves;
reconstruct one or more of said cross-section curves by a closed B-spline curve technique;
select a control point number of B-spline models from said one or more curves;
determine using entropy techniques representation of uncertainty in said selected B-spline models to predict the information gain for each cross section curve;
map said information gain of said B-spline models into a view space; and
select as the Next Best View a view point in said view space containing maximum information gain for said object.
41. A system according to claim 40, wherein said processor is arranged to apply a Bayesian information criterion (BIC) for selecting the control point number of B-spline models from said one or more curves.
42. A system according to claim 40, wherein said processor is arranged to terminate one or more processing steps when said entire surface of said object has been examined and it has been determined that there is no further information to be gained from said surface.
43. A system according to claim 23, wherein said processor is arranged to convert said distortion or distortions using a triangulation process.
44. An active vision system comprising the system according to claim 23.
US10/891,632 2004-07-15 2004-07-15 System and method for 3D measurement and surface reconstruction Abandoned US20060017720A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/891,632 US20060017720A1 (en) 2004-07-15 2004-07-15 System and method for 3D measurement and surface reconstruction
US12/269,124 US8213707B2 (en) 2004-07-15 2008-11-12 System and method for 3D measurement and surface reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/891,632 US20060017720A1 (en) 2004-07-15 2004-07-15 System and method for 3D measurement and surface reconstruction

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/269,124 Continuation US8213707B2 (en) 2004-07-15 2008-11-12 System and method for 3D measurement and surface reconstruction

Publications (1)

Publication Number Publication Date
US20060017720A1 true US20060017720A1 (en) 2006-01-26

Family

ID=35656644

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/891,632 Abandoned US20060017720A1 (en) 2004-07-15 2004-07-15 System and method for 3D measurement and surface reconstruction
US12/269,124 Expired - Fee Related US8213707B2 (en) 2004-07-15 2008-11-12 System and method for 3D measurement and surface reconstruction

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/269,124 Expired - Fee Related US8213707B2 (en) 2004-07-15 2008-11-12 System and method for 3D measurement and surface reconstruction

Country Status (1)

Country Link
US (2) US20060017720A1 (en)

Cited By (108)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070132763A1 (en) * 2005-12-08 2007-06-14 Electronics And Telecommunications Research Institute Method for creating 3-D curved suface by using corresponding curves in a plurality of images
US20070253617A1 (en) * 2006-04-27 2007-11-01 Mako Surgical Corp. Contour triangulation system and method
US20080144973A1 (en) * 2006-12-13 2008-06-19 Hailin Jin Rendering images under cylindrical projections
WO2008104082A1 (en) * 2007-03-01 2008-09-04 Titan Medical Inc. Methods, systems and devices for threedimensional input, and control methods and systems based thereon
US20090140926A1 (en) * 2007-12-04 2009-06-04 Elden Douglas Traster System and method for localization utilizing dynamically deployable beacons
US20090184961A1 (en) * 2005-12-16 2009-07-23 Ihi Corporation Three-dimensional shape data recording/display method and device, and three-dimensional shape measuring method and device
WO2009112895A1 (en) * 2008-03-10 2009-09-17 Timothy Webster Position sensing of a piston in a hydraulic cylinder using a photo image sensor
US20090284593A1 (en) * 2008-05-16 2009-11-19 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
US20090287450A1 (en) * 2008-05-16 2009-11-19 Lockheed Martin Corporation Vision system for scan planning of ultrasonic inspection
US20090287427A1 (en) * 2008-05-16 2009-11-19 Lockheed Martin Corporation Vision system and method for mapping of ultrasonic data into cad space
US20100034429A1 (en) * 2008-05-23 2010-02-11 Drouin Marc-Antoine Deconvolution-based structured light system with geometrically plausible regularization
US20100114374A1 (en) * 2008-11-03 2010-05-06 Samsung Electronics Co., Ltd. Apparatus and method for extracting feature information of object and apparatus and method for creating feature map
WO2010072912A1 (en) 2008-12-22 2010-07-01 Noomeo Device for three-dimensional scanning with dense reconstruction
US20110170767A1 (en) * 2007-09-28 2011-07-14 Noomeo Three-dimensional (3d) imaging method
US8116558B2 (en) 2005-12-16 2012-02-14 Ihi Corporation Three-dimensional shape data position matching method and device
US8121399B2 (en) 2005-12-16 2012-02-21 Ihi Corporation Self-position identifying method and device, and three-dimensional shape measuring method and device
CN102436676A (en) * 2011-09-27 2012-05-02 夏东 Three-dimensional reestablishing method for intelligent video monitoring
US20130155417A1 (en) * 2010-08-19 2013-06-20 Canon Kabushiki Kaisha Three-dimensional measurement apparatus, method for three-dimensional measurement, and computer program
US8533967B2 (en) 2010-01-20 2013-09-17 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US8537374B2 (en) 2010-01-20 2013-09-17 Faro Technologies, Inc. Coordinate measuring machine having an illuminated probe end and method of operation
CN103389048A (en) * 2012-05-10 2013-11-13 康耐视公司 Laser profiling attachment for a vision system camera
US8601702B2 (en) 2010-01-20 2013-12-10 Faro Technologies, Inc. Display for coordinate measuring machine
WO2013184340A1 (en) * 2012-06-07 2013-12-12 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US8607536B2 (en) 2011-01-14 2013-12-17 Faro Technologies, Inc. Case for a device
US8615893B2 (en) 2010-01-20 2013-12-31 Faro Technologies, Inc. Portable articulated arm coordinate measuring machine having integrated software controls
WO2013155379A3 (en) * 2012-04-12 2014-01-03 Smart Picture Technologies Inc. Orthographic image capture system
US8630314B2 (en) 2010-01-11 2014-01-14 Faro Technologies, Inc. Method and apparatus for synchronizing measurements taken by multiple metrology devices
US8638446B2 (en) 2010-01-20 2014-01-28 Faro Technologies, Inc. Laser scanner or laser tracker having a projector
US8677643B2 (en) 2010-01-20 2014-03-25 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
CN103810700A (en) * 2014-01-14 2014-05-21 燕山大学 Method for determining next optimal observation orientation by occlusion information based on depth image
US8744763B2 (en) 2011-11-17 2014-06-03 Honeywell International Inc. Using structured light to update inertial navigation systems
US8773526B2 (en) 2010-12-17 2014-07-08 Mitutoyo Corporation Edge detection using structured illumination
US8832954B2 (en) 2010-01-20 2014-09-16 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US20140277731A1 (en) * 2013-03-18 2014-09-18 Kabushiki Kaisha Yaskawa Denki Robot picking system, control device, and method of manufacturing a workpiece
US8875409B2 (en) 2010-01-20 2014-11-04 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
ITPI20130041A1 (en) * 2013-05-14 2014-11-15 Benedetto Allotta METHOD OF DETERMINING THE ORIENTATION OF A SUBMERSIBLE SURFACE AND EQUIPMENT THAT ACTIVATE THIS METHOD
US8898919B2 (en) 2010-01-20 2014-12-02 Faro Technologies, Inc. Coordinate measurement machine with distance meter used to establish frame of reference
CN104240214A (en) * 2012-03-13 2014-12-24 湖南领创智能科技有限公司 Depth camera rapid calibration method for three-dimensional reconstruction
WO2015026636A1 (en) * 2013-08-21 2015-02-26 Faro Technologies, Inc. Real-time inspection guidance of triangulation scanner
US8970693B1 (en) * 2011-12-15 2015-03-03 Rawles Llc Surface modeling with structured light
US8997362B2 (en) 2012-07-17 2015-04-07 Faro Technologies, Inc. Portable articulated arm coordinate measuring machine with optical communications bus
US9074883B2 (en) 2009-03-25 2015-07-07 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9113023B2 (en) 2009-11-20 2015-08-18 Faro Technologies, Inc. Three-dimensional scanner with spectroscopic energy detector
US9163922B2 (en) 2010-01-20 2015-10-20 Faro Technologies, Inc. Coordinate measurement machine with distance meter and camera to determine dimensions within camera images
US9168654B2 (en) 2010-11-16 2015-10-27 Faro Technologies, Inc. Coordinate measuring machines with dual layer arm
JP2015195576A (en) * 2014-03-25 2015-11-05 パナソニックIpマネジメント株式会社 Imaging method of multi-viewpoint image and image display method
US9185364B1 (en) * 2014-11-20 2015-11-10 Robert Odierna Sub-surface marine light unit with variable wavelength light emission and an integrated camera
US9210288B2 (en) 2009-11-20 2015-12-08 Faro Technologies, Inc. Three-dimensional scanner with dichroic beam splitters to capture a variety of signals
CN105160700A (en) * 2015-06-18 2015-12-16 上海工程技术大学 Cross section curve reconstruction method for three-dimensional model reconstruction
US20150369593A1 (en) * 2014-06-19 2015-12-24 Kari MYLLYKOSKI Orthographic image capture system
US9329271B2 (en) 2010-05-10 2016-05-03 Faro Technologies, Inc. Method for optically scanning and measuring an environment
US9372265B2 (en) 2012-10-05 2016-06-21 Faro Technologies, Inc. Intermediate two-dimensional scanning with a three-dimensional scanner to speed registration
US9417316B2 (en) 2009-11-20 2016-08-16 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9417056B2 (en) 2012-01-25 2016-08-16 Faro Technologies, Inc. Device for optically scanning and measuring an environment
JP2016197127A (en) * 2016-08-02 2016-11-24 キヤノン株式会社 Measurement device, control method of measurement device, and program
US9513107B2 (en) 2012-10-05 2016-12-06 Faro Technologies, Inc. Registration calculation between three-dimensional (3D) scans based on two-dimensional (2D) scan data from a 3D scanner
US20160370171A1 (en) * 2011-04-15 2016-12-22 Faro Technologies, Inc. Diagnosing multipath interference and eliminating multipath interference in 3d scanners using projection patterns
US9529083B2 (en) 2009-11-20 2016-12-27 Faro Technologies, Inc. Three-dimensional scanner with enhanced spectroscopic energy detector
CN106323241A (en) * 2016-06-12 2017-01-11 广东警官学院 Method for measuring three-dimensional information of person or object through monitoring video or vehicle-mounted camera
US9551575B2 (en) 2009-03-25 2017-01-24 Faro Technologies, Inc. Laser scanner having a multi-color light source and real-time color receiver
US9607239B2 (en) 2010-01-20 2017-03-28 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US9628775B2 (en) 2010-01-20 2017-04-18 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
WO2017095580A1 (en) * 2015-12-02 2017-06-08 Qualcomm Incorporated Active camera movement determination for object position and extent in three-dimensional space
WO2017151669A1 (en) 2016-02-29 2017-09-08 Aquifi, Inc. System and method for assisted 3d scanning
US20170278221A1 (en) * 2016-03-22 2017-09-28 Samsung Electronics Co., Ltd. Method and apparatus of image representation and processing for dynamic vision sensor
WO2017174791A1 (en) * 2016-04-08 2017-10-12 Carl Zeiss Ag Device and method for measuring a surface topography, and calibration method
US9846940B1 (en) * 2016-08-15 2017-12-19 Canon U.S.A., Inc. Spectrally encoded endoscopic image process
US9879976B2 (en) 2010-01-20 2018-01-30 Faro Technologies, Inc. Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features
US10068344B2 (en) 2014-03-05 2018-09-04 Smart Picture Technologies Inc. Method and system for 3D capture based on structure from motion with simplified pose detection
US10067231B2 (en) 2012-10-05 2018-09-04 Faro Technologies, Inc. Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner
US10074191B1 (en) 2015-07-05 2018-09-11 Cognex Corporation System and method for determination of object volume with multiple three-dimensional sensors
US10083522B2 (en) 2015-06-19 2018-09-25 Smart Picture Technologies, Inc. Image based measurement system
US10115035B2 (en) * 2015-01-08 2018-10-30 Sungkyunkwan University Foundation For Corporation Collaboration Vision system and analytical method for planar surface segmentation
US10119805B2 (en) 2011-04-15 2018-11-06 Faro Technologies, Inc. Three-dimensional coordinate scanner and method of operation
WO2018217911A1 (en) * 2017-05-24 2018-11-29 Augustyn + Company Method, system, and apparatus for rapidly measuaring incident solar irradiance on multiple planes of differing angular orientations
US10175037B2 (en) 2015-12-27 2019-01-08 Faro Technologies, Inc. 3-D measuring device with battery pack
US10209059B2 (en) 2010-04-21 2019-02-19 Faro Technologies, Inc. Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker
US10222607B2 (en) 2016-12-14 2019-03-05 Canon U.S.A., Inc. Three-dimensional endoscope
US10302413B2 (en) 2011-04-15 2019-05-28 Faro Technologies, Inc. Six degree-of-freedom laser tracker that cooperates with a remote sensor
US10304254B2 (en) 2017-08-08 2019-05-28 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
CN109922345A (en) * 2013-04-08 2019-06-21 杜比国际公司 To the LUT method encoded and the method and corresponding equipment that are decoded
US10360693B2 (en) * 2017-03-01 2019-07-23 Cognex Corporation High speed structured light system
US20190304175A1 (en) * 2018-03-30 2019-10-03 Konica Minolta Laboratory U.S.A., Inc. Three-dimensional modeling scanner
US20190339369A1 (en) * 2018-05-04 2019-11-07 Microsoft Technology Licensing, Llc Field Calibration of a Structured Light Range-Sensor
US20200061769A1 (en) * 2017-11-07 2020-02-27 Dalian University Of Technology Monocular vision six-dimensional measurement method for high-dynamic large-range arbitrary contouring error of cnc machine tool
US10613228B2 (en) 2017-09-08 2020-04-07 Microsoft Techology Licensing, Llc Time-of-flight augmented structured light range-sensor
CN111311742A (en) * 2020-03-27 2020-06-19 北京百度网讯科技有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device and electronic equipment
US10724853B2 (en) 2017-10-06 2020-07-28 Advanced Scanners, Inc. Generation of one or more edges of luminosity to form three-dimensional models of objects
US10728519B2 (en) 2004-06-17 2020-07-28 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10794732B2 (en) 2018-11-08 2020-10-06 Canon U.S.A., Inc. Apparatus, system and method for correcting nonuniform rotational distortion in an image comprising at least two stationary light transmitted fibers with predetermined position relative to an axis of rotation of at least one rotating fiber
US20210041236A1 (en) * 2018-04-27 2021-02-11 China Agricultural University Method and system for calibration of structural parameters and construction of affine coordinate system of vision measurement system
US10952827B2 (en) 2014-08-15 2021-03-23 Align Technology, Inc. Calibration of an intraoral scanner
US10991112B2 (en) * 2018-01-24 2021-04-27 Qualcomm Incorporated Multiple scale processing for received structured light
US11010877B2 (en) 2017-01-27 2021-05-18 Canon U.S.A., Inc. Apparatus, system and method for dynamic in-line spectrum compensation of an image
US20210209340A1 (en) * 2019-09-03 2021-07-08 Zhejiang University Methods for obtaining normal vector, geometry and material of three-dimensional objects based on neural network
US11138757B2 (en) 2019-05-10 2021-10-05 Smart Picture Technologies, Inc. Methods and systems for measuring and modeling spaces using markerless photo-based augmented reality process
US20210349030A1 (en) * 2018-10-19 2021-11-11 Renishaw Plc Spectroscopic apparatus and methods
US11216005B1 (en) * 2020-10-06 2022-01-04 Accenture Global Solutions Limited Generating a point cloud capture plan
CN114167866A (en) * 2021-12-02 2022-03-11 桂林电子科技大学 Intelligent logistics robot and control method
US20220092804A1 (en) * 2019-02-22 2022-03-24 Prophesee Three-dimensional imaging and sensing using a dynamic vision sensor and pattern projection
US20220092814A1 (en) * 2019-01-09 2022-03-24 Trinamix Gmbh Detector for determining a position of at least one object
WO2022124232A1 (en) * 2020-12-10 2022-06-16 ファナック株式会社 Image processing system and image processing method
US11576736B2 (en) 2007-03-01 2023-02-14 Titan Medical Inc. Hand controller for robotic surgery system
US11629835B2 (en) * 2019-07-31 2023-04-18 Toyota Jidosha Kabushiki Kaisha Auto-calibration of vehicle sensors
CN116051696A (en) * 2023-01-10 2023-05-02 之江实验室 Reconstruction method and device of human body implicit model capable of being re-illuminated
US11688087B1 (en) * 2022-08-26 2023-06-27 Illuscio, Inc. Systems and methods for using hyperspectral data to produce a unified three-dimensional scan that incorporates depth
US11704537B2 (en) 2017-04-28 2023-07-18 Microsoft Technology Licensing, Llc Octree-based convolutional neural network
CN117146710A (en) * 2023-10-30 2023-12-01 中国科学院自动化研究所 Dynamic projection three-dimensional reconstruction system and method based on active vision

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2618102A2 (en) 2006-11-21 2013-07-24 Mantisvision Ltd. 3d geometric modeling and 3d video content creation
US8090194B2 (en) 2006-11-21 2012-01-03 Mantis Vision Ltd. 3D geometric modeling and motion capture using both single and dual imaging
US8564502B2 (en) * 2009-04-02 2013-10-22 GM Global Technology Operations LLC Distortion and perspective correction of vector projection display
US8547374B1 (en) * 2009-07-24 2013-10-01 Lockheed Martin Corporation Detection and reconstruction of 3D objects with passive imaging sensors
US9366772B2 (en) 2009-11-05 2016-06-14 Exxonmobil Upstream Research Company Method for creating a hierarchically layered earth model
CN101986347B (en) * 2010-10-28 2012-12-12 浙江工业大学 Method for reconstructing stereoscopic vision sequence
US8941651B2 (en) * 2011-09-08 2015-01-27 Honeywell International Inc. Object alignment from a 2-dimensional image
US8638989B2 (en) * 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US9070019B2 (en) 2012-01-17 2015-06-30 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US11493998B2 (en) 2012-01-17 2022-11-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9501152B2 (en) 2013-01-15 2016-11-22 Leap Motion, Inc. Free-space user interface and control using virtual constructs
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US8662676B1 (en) * 2012-03-14 2014-03-04 Rawles Llc Automatic projector calibration
US9207070B2 (en) 2012-05-24 2015-12-08 Qualcomm Incorporated Transmission of affine-invariant spatial mask for active depth sensing
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US9626015B2 (en) 2013-01-08 2017-04-18 Leap Motion, Inc. Power consumption in motion-capture systems with audio and optical signals
US10042510B2 (en) 2013-01-15 2018-08-07 Leap Motion, Inc. Dynamic user interactions for display control and measuring degree of completeness of user gestures
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
JP6037901B2 (en) * 2013-03-11 2016-12-07 日立マクセル株式会社 Operation detection device, operation detection method, and display control data generation method
US9702977B2 (en) 2013-03-15 2017-07-11 Leap Motion, Inc. Determining positional information of an object in space
US10620709B2 (en) 2013-04-05 2020-04-14 Ultrahaptics IP Two Limited Customized gesture interpretation
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US9747696B2 (en) 2013-05-17 2017-08-29 Leap Motion, Inc. Systems and methods for providing normalized parameters of motions of objects in three-dimensional space
US10281987B1 (en) 2013-08-09 2019-05-07 Leap Motion, Inc. Systems and methods of free-space gestural interaction
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US9632572B2 (en) 2013-10-03 2017-04-25 Leap Motion, Inc. Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
CN103530907B (en) * 2013-10-21 2017-02-01 深圳市易尚展示股份有限公司 Complicated three-dimensional model drawing method based on images
US9996638B1 (en) 2013-10-31 2018-06-12 Leap Motion, Inc. Predictive information for free space gesture control and communication
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
DE202014103729U1 (en) 2014-08-08 2014-09-09 Leap Motion, Inc. Augmented reality with motion detection
US9773302B2 (en) * 2015-10-08 2017-09-26 Hewlett-Packard Development Company, L.P. Three-dimensional object model tagging
US9996944B2 (en) 2016-07-06 2018-06-12 Qualcomm Incorporated Systems and methods for mapping an environment
US11875012B2 (en) 2018-05-25 2024-01-16 Ultrahaptics IP Two Limited Throwable interface for augmented reality and virtual reality environments
CN109754428B (en) * 2018-11-26 2022-04-26 西北工业大学 Method for measuring underwater binocular vision positioning error
CN112084826A (en) 2019-06-14 2020-12-15 北京三星通信技术研究有限公司 Image processing method, image processing apparatus, and monitoring system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5831621A (en) * 1996-10-21 1998-11-03 The Trustees Of The University Of Pennyslvania Positional space solution to the next best view problem

Cited By (188)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10750152B2 (en) 2004-06-17 2020-08-18 Align Technology, Inc. Method and apparatus for structure imaging a three-dimensional structure
US10812773B2 (en) 2004-06-17 2020-10-20 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10944953B2 (en) 2004-06-17 2021-03-09 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10728519B2 (en) 2004-06-17 2020-07-28 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10750151B2 (en) 2004-06-17 2020-08-18 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10924720B2 (en) 2004-06-17 2021-02-16 Align Technology, Inc. Systems and methods for determining surface topology and associated color of an intraoral structure
US10764557B2 (en) 2004-06-17 2020-09-01 Align Technology, Inc. Method and apparatus for imaging a three-dimensional structure
US7812839B2 (en) * 2005-12-08 2010-10-12 Electronics And Telecommunications Research Institute Method for creating 3-D curved suface by using corresponding curves in a plurality of images
US20070132763A1 (en) * 2005-12-08 2007-06-14 Electronics And Telecommunications Research Institute Method for creating 3-D curved suface by using corresponding curves in a plurality of images
US20090184961A1 (en) * 2005-12-16 2009-07-23 Ihi Corporation Three-dimensional shape data recording/display method and device, and three-dimensional shape measuring method and device
US8300048B2 (en) * 2005-12-16 2012-10-30 Ihi Corporation Three-dimensional shape data recording/display method and device, and three-dimensional shape measuring method and device
US8121399B2 (en) 2005-12-16 2012-02-21 Ihi Corporation Self-position identifying method and device, and three-dimensional shape measuring method and device
US8116558B2 (en) 2005-12-16 2012-02-14 Ihi Corporation Three-dimensional shape data position matching method and device
US20070253617A1 (en) * 2006-04-27 2007-11-01 Mako Surgical Corp. Contour triangulation system and method
US7623702B2 (en) * 2006-04-27 2009-11-24 Mako Surgical Corp. Contour triangulation system and method
US7822292B2 (en) * 2006-12-13 2010-10-26 Adobe Systems Incorporated Rendering images under cylindrical projections
US20110058753A1 (en) * 2006-12-13 2011-03-10 Adobe Systems Incorporated Rendering images under cylindrical projections
US8023772B2 (en) 2006-12-13 2011-09-20 Adobe System Incorporated Rendering images under cylindrical projections
US20080144973A1 (en) * 2006-12-13 2008-06-19 Hailin Jin Rendering images under cylindrical projections
WO2008104082A1 (en) * 2007-03-01 2008-09-04 Titan Medical Inc. Methods, systems and devices for threedimensional input, and control methods and systems based thereon
US10695139B2 (en) 2007-03-01 2020-06-30 Titan Medical Inc. Robotic system display system for displaying auxiliary information
US11576736B2 (en) 2007-03-01 2023-02-14 Titan Medical Inc. Hand controller for robotic surgery system
US8792688B2 (en) * 2007-03-01 2014-07-29 Titan Medical Inc. Methods, systems and devices for three dimensional input and control methods and systems based thereon
US10357319B2 (en) 2007-03-01 2019-07-23 Titan Medical Inc. Robotic system display method for displaying auxiliary information
US11806101B2 (en) 2007-03-01 2023-11-07 Titan Medical Inc. Hand controller for robotic surgery system
US9421068B2 (en) * 2007-03-01 2016-08-23 Titan Medical Inc. Methods, systems and devices for three dimensional input and control methods and systems based thereon
US20100036393A1 (en) * 2007-03-01 2010-02-11 Titan Medical Inc. Methods, systems and devices for threedimensional input, and control methods and systems based thereon
US20110170767A1 (en) * 2007-09-28 2011-07-14 Noomeo Three-dimensional (3d) imaging method
US8483477B2 (en) 2007-09-28 2013-07-09 Noomeo Method of constructing a digital image of a three-dimensional (3D) surface using a mask
US20090140926A1 (en) * 2007-12-04 2009-06-04 Elden Douglas Traster System and method for localization utilizing dynamically deployable beacons
WO2009112895A1 (en) * 2008-03-10 2009-09-17 Timothy Webster Position sensing of a piston in a hydraulic cylinder using a photo image sensor
KR101489030B1 (en) * 2008-05-16 2015-02-02 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
JP2011521231A (en) * 2008-05-16 2011-07-21 Lockheed Martin Corporation Accurate image acquisition on structured light systems for optical measurement of shape and position
TWI464365B (en) * 2008-05-16 2014-12-11 Lockheed Corp Method of providing a three-dimensional representation of an article and apparatus for providing a three-dimensional representation of an object
US8220335B2 (en) * 2008-05-16 2012-07-17 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
US20090287427A1 (en) * 2008-05-16 2009-11-19 Lockheed Martin Corporation Vision system and method for mapping of ultrasonic data into cad space
AU2009246265B2 (en) * 2008-05-16 2014-04-03 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
US20090287450A1 (en) * 2008-05-16 2009-11-19 Lockheed Martin Corporation Vision system for scan planning of ultrasonic inspection
WO2009140461A1 (en) 2008-05-16 2009-11-19 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
CN102084214A (en) * 2008-05-16 2011-06-01 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
US20090284593A1 (en) * 2008-05-16 2009-11-19 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
US20100034429A1 (en) * 2008-05-23 2010-02-11 Drouin Marc-Antoine Deconvolution-based structured light system with geometrically plausible regularization
US8411995B2 (en) * 2008-05-23 2013-04-02 National Research Council Of Canada Deconvolution-based structured light system with geometrically plausible regularization
JP2010107495A (en) * 2008-11-03 2010-05-13 Samsung Electronics Co Ltd Apparatus and method for extracting characteristic information of object and apparatus and method for producing characteristic map using the same
US20100114374A1 (en) * 2008-11-03 2010-05-06 Samsung Electronics Co., Ltd. Apparatus and method for extracting feature information of object and apparatus and method for creating feature map
US8352075B2 (en) * 2008-11-03 2013-01-08 Samsung Electronics Co., Ltd. Apparatus and method for extracting feature information of object and apparatus and method for creating feature map
WO2010072912A1 (en) 2008-12-22 2010-07-01 Noomeo Device for three-dimensional scanning with dense reconstruction
US9074883B2 (en) 2009-03-25 2015-07-07 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9551575B2 (en) 2009-03-25 2017-01-24 Faro Technologies, Inc. Laser scanner having a multi-color light source and real-time color receiver
US9113023B2 (en) 2009-11-20 2015-08-18 Faro Technologies, Inc. Three-dimensional scanner with spectroscopic energy detector
US9210288B2 (en) 2009-11-20 2015-12-08 Faro Technologies, Inc. Three-dimensional scanner with dichroic beam splitters to capture a variety of signals
US9529083B2 (en) 2009-11-20 2016-12-27 Faro Technologies, Inc. Three-dimensional scanner with enhanced spectroscopic energy detector
US9417316B2 (en) 2009-11-20 2016-08-16 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US8630314B2 (en) 2010-01-11 2014-01-14 Faro Technologies, Inc. Method and apparatus for synchronizing measurements taken by multiple metrology devices
US8677643B2 (en) 2010-01-20 2014-03-25 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US9628775B2 (en) 2010-01-20 2017-04-18 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US10281259B2 (en) 2010-01-20 2019-05-07 Faro Technologies, Inc. Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features
US8898919B2 (en) 2010-01-20 2014-12-02 Faro Technologies, Inc. Coordinate measurement machine with distance meter used to establish frame of reference
US8533967B2 (en) 2010-01-20 2013-09-17 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US9163922B2 (en) 2010-01-20 2015-10-20 Faro Technologies, Inc. Coordinate measurement machine with distance meter and camera to determine dimensions within camera images
US8942940B2 (en) 2010-01-20 2015-01-27 Faro Technologies, Inc. Portable articulated arm coordinate measuring machine and integrated electronic data processing system
US8832954B2 (en) 2010-01-20 2014-09-16 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US10060722B2 (en) 2010-01-20 2018-08-28 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US9879976B2 (en) 2010-01-20 2018-01-30 Faro Technologies, Inc. Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features
US8763266B2 (en) 2010-01-20 2014-07-01 Faro Technologies, Inc. Coordinate measurement device
US8875409B2 (en) 2010-01-20 2014-11-04 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US9607239B2 (en) 2010-01-20 2017-03-28 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US8683709B2 (en) 2010-01-20 2014-04-01 Faro Technologies, Inc. Portable articulated arm coordinate measuring machine with multi-bus arm technology
US8638446B2 (en) 2010-01-20 2014-01-28 Faro Technologies, Inc. Laser scanner or laser tracker having a projector
US9009000B2 (en) 2010-01-20 2015-04-14 Faro Technologies, Inc. Method for evaluating mounting stability of articulated arm coordinate measurement machine using inclinometers
US8615893B2 (en) 2010-01-20 2013-12-31 Faro Technologies, Inc. Portable articulated arm coordinate measuring machine having integrated software controls
US8601702B2 (en) 2010-01-20 2013-12-10 Faro Technologies, Inc. Display for coordinate measuring machine
US8537374B2 (en) 2010-01-20 2013-09-17 Faro Technologies, Inc. Coordinate measuring machine having an illuminated probe end and method of operation
US10209059B2 (en) 2010-04-21 2019-02-19 Faro Technologies, Inc. Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker
US10480929B2 (en) 2010-04-21 2019-11-19 Faro Technologies, Inc. Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker
US9684078B2 (en) 2010-05-10 2017-06-20 Faro Technologies, Inc. Method for optically scanning and measuring an environment
US9329271B2 (en) 2010-05-10 2016-05-03 Faro Technologies, Inc. Method for optically scanning and measuring an environment
US20130155417A1 (en) * 2010-08-19 2013-06-20 Canon Kabushiki Kaisha Three-dimensional measurement apparatus, method for three-dimensional measurement, and computer program
US8964189B2 (en) * 2010-08-19 2015-02-24 Canon Kabushiki Kaisha Three-dimensional measurement apparatus, method for three-dimensional measurement, and computer program
US9168654B2 (en) 2010-11-16 2015-10-27 Faro Technologies, Inc. Coordinate measuring machines with dual layer arm
US8773526B2 (en) 2010-12-17 2014-07-08 Mitutoyo Corporation Edge detection using structured illumination
US8607536B2 (en) 2011-01-14 2013-12-17 Faro Technologies, Inc. Case for a device
US20160370171A1 (en) * 2011-04-15 2016-12-22 Faro Technologies, Inc. Diagnosing multipath interference and eliminating multipath interference in 3d scanners using projection patterns
US10267619B2 (en) 2011-04-15 2019-04-23 Faro Technologies, Inc. Three-dimensional coordinate scanner and method of operation
US10119805B2 (en) 2011-04-15 2018-11-06 Faro Technologies, Inc. Three-dimensional coordinate scanner and method of operation
US10302413B2 (en) 2011-04-15 2019-05-28 Faro Technologies, Inc. Six degree-of-freedom laser tracker that cooperates with a remote sensor
US10578423B2 (en) * 2011-04-15 2020-03-03 Faro Technologies, Inc. Diagnosing multipath interference and eliminating multipath interference in 3D scanners using projection patterns
CN102436676A (en) * 2011-09-27 2012-05-02 Xia Dong Three-dimensional reconstruction method for intelligent video surveillance
US8744763B2 (en) 2011-11-17 2014-06-03 Honeywell International Inc. Using structured light to update inertial navigation systems
US8970693B1 (en) * 2011-12-15 2015-03-03 Rawles Llc Surface modeling with structured light
US9417056B2 (en) 2012-01-25 2016-08-16 Faro Technologies, Inc. Device for optically scanning and measuring an environment
CN104240214A (en) * 2012-03-13 2014-12-24 湖南领创智能科技有限公司 Depth camera rapid calibration method for three-dimensional reconstruction
WO2013155379A3 (en) * 2012-04-12 2014-01-03 Smart Picture Technologies Inc. Orthographic image capture system
CN103389048A (en) * 2012-05-10 2013-11-13 康耐视公司 Laser profiling attachment for a vision system camera
US8675208B2 (en) * 2012-05-10 2014-03-18 Cognex Corporation Laser profiling attachment for a vision system camera
CN104380033A (en) * 2012-06-07 2015-02-25 法罗技术股份有限公司 Coordinate measurement machines with removable accessories
GB2517621A (en) * 2012-06-07 2015-02-25 Faro Tech Inc Coordinate measurement machines with removable accessories
WO2013184340A1 (en) * 2012-06-07 2013-12-12 Faro Technologies, Inc. Coordinate measurement machines with removable accessories
US8997362B2 (en) 2012-07-17 2015-04-07 Faro Technologies, Inc. Portable articulated arm coordinate measuring machine with optical communications bus
US9739886B2 (en) 2012-10-05 2017-08-22 Faro Technologies, Inc. Using a two-dimensional scanner to speed registration of three-dimensional scan data
US11035955B2 (en) 2012-10-05 2021-06-15 Faro Technologies, Inc. Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner
US11112501B2 (en) 2012-10-05 2021-09-07 Faro Technologies, Inc. Using a two-dimensional scanner to speed registration of three-dimensional scan data
US9618620B2 (en) 2012-10-05 2017-04-11 Faro Technologies, Inc. Using depth-camera images to speed registration of three-dimensional scans
US9746559B2 (en) 2012-10-05 2017-08-29 Faro Technologies, Inc. Using two-dimensional camera images to speed registration of three-dimensional scans
US10203413B2 (en) 2012-10-05 2019-02-12 Faro Technologies, Inc. Using a two-dimensional scanner to speed registration of three-dimensional scan data
US11815600B2 (en) 2012-10-05 2023-11-14 Faro Technologies, Inc. Using a two-dimensional scanner to speed registration of three-dimensional scan data
US10067231B2 (en) 2012-10-05 2018-09-04 Faro Technologies, Inc. Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner
US9513107B2 (en) 2012-10-05 2016-12-06 Faro Technologies, Inc. Registration calculation between three-dimensional (3D) scans based on two-dimensional (2D) scan data from a 3D scanner
US10739458B2 (en) 2012-10-05 2020-08-11 Faro Technologies, Inc. Using two-dimensional camera images to speed registration of three-dimensional scans
US9372265B2 (en) 2012-10-05 2016-06-21 Faro Technologies, Inc. Intermediate two-dimensional scanning with a three-dimensional scanner to speed registration
US20140277731A1 (en) * 2013-03-18 2014-09-18 Kabushiki Kaisha Yaskawa Denki Robot picking system, control device, and method of manufacturing a workpiece
US9149932B2 (en) * 2013-03-18 2015-10-06 Kabushiki Kaisha Yaskawa Denki Robot picking system, control device, and method of manufacturing a workpiece
CN109922345A (en) * 2013-04-08 2019-06-21 Method for encoding a LUT and method for decoding a LUT, and corresponding devices
WO2014184748A1 (en) 2013-05-14 2014-11-20 Universita' Degli Studi Di Firenze Method for determining the orientation of a submerged surface and apparatus that carries out this method
ITPI20130041A1 (en) * 2013-05-14 2014-11-15 Method for determining the orientation of a submerged surface and apparatus that carries out this method
WO2015026636A1 (en) * 2013-08-21 2015-02-26 Faro Technologies, Inc. Real-time inspection guidance of triangulation scanner
US20150054946A1 (en) * 2013-08-21 2015-02-26 Faro Technologies, Inc. Real-time inspection guidance of triangulation scanner
US10812694B2 (en) * 2013-08-21 2020-10-20 Faro Technologies, Inc. Real-time inspection guidance of triangulation scanner
CN103810700A (en) * 2014-01-14 2014-05-21 Yanshan University Method for determining next optimal observation orientation by occlusion information based on depth image
US10068344B2 (en) 2014-03-05 2018-09-04 Smart Picture Technologies Inc. Method and system for 3D capture based on structure from motion with simplified pose detection
JP2015195576A (en) * 2014-03-25 2015-11-05 Panasonic IP Management Co., Ltd. Imaging method of multi-viewpoint image and image display method
US20150369593A1 (en) * 2014-06-19 2015-12-24 Kari MYLLYKOSKI Orthographic image capture system
US10952827B2 (en) 2014-08-15 2021-03-23 Align Technology, Inc. Calibration of an intraoral scanner
US9185364B1 (en) * 2014-11-20 2015-11-10 Robert Odierna Sub-surface marine light unit with variable wavelength light emission and an integrated camera
US10115035B2 (en) * 2015-01-08 2018-10-30 Sungkyunkwan University Foundation For Corporation Collaboration Vision system and analytical method for planar surface segmentation
CN105160700A (en) * 2015-06-18 2015-12-16 Shanghai University of Engineering Science Cross section curve reconstruction method for three-dimensional model reconstruction
US10083522B2 (en) 2015-06-19 2018-09-25 Smart Picture Technologies, Inc. Image based measurement system
US10074191B1 (en) 2015-07-05 2018-09-11 Cognex Corporation System and method for determination of object volume with multiple three-dimensional sensors
CN108367436A (en) * 2015-12-02 2018-08-03 Active camera movement determination for object position and extent in three-dimensional space
US10268188B2 (en) 2015-12-02 2019-04-23 Qualcomm Incorporated Active camera movement determination for object position and extent in three-dimensional space
WO2017095580A1 (en) * 2015-12-02 2017-06-08 Qualcomm Incorporated Active camera movement determination for object position and extent in three-dimensional space
US10175037B2 (en) 2015-12-27 2019-01-08 Faro Technologies, Inc. 3-D measuring device with battery pack
EP3422955A4 (en) * 2016-02-29 2019-08-07 Aquifi, Inc. System and method for assisted 3d scanning
CN113532326A (en) * 2016-02-29 2021-10-22 派克赛斯有限责任公司 System and method for assisted 3D scanning
CN109069132A (en) * 2016-02-29 2018-12-21 System and method for assisted 3D scanning
WO2017151669A1 (en) 2016-02-29 2017-09-08 Aquifi, Inc. System and method for assisted 3d scanning
US9934557B2 (en) * 2016-03-22 2018-04-03 Samsung Electronics Co., Ltd Method and apparatus of image representation and processing for dynamic vision sensor
US20170278221A1 (en) * 2016-03-22 2017-09-28 Samsung Electronics Co., Ltd. Method and apparatus of image representation and processing for dynamic vision sensor
US10935372B2 (en) 2016-04-08 2021-03-02 Carl Zeiss Ag Device and method for measuring a surface topography, and calibration method
WO2017174791A1 (en) * 2016-04-08 2017-10-12 Carl Zeiss Ag Device and method for measuring a surface topography, and calibration method
EP3462129A1 (en) * 2016-04-08 2019-04-03 Carl Zeiss AG Device and method for measuring a surface topography and calibration method
CN109416245A (en) * 2016-04-08 2019-03-01 Device and method for measuring a surface topography, and calibration method
CN111595269A (en) * 2016-04-08 2020-08-28 卡尔蔡司股份公司 Apparatus and method for measuring surface topography and calibration method
CN106323241A (en) * 2016-06-12 2017-01-11 广东警官学院 Method for measuring three-dimensional information of person or object through monitoring video or vehicle-mounted camera
JP2016197127A (en) * 2016-08-02 2016-11-24 Canon Inc. Measurement device, control method of measurement device, and program
US9846940B1 (en) * 2016-08-15 2017-12-19 Canon U.S.A., Inc. Spectrally encoded endoscopic image process
US10222607B2 (en) 2016-12-14 2019-03-05 Canon U.S.A., Inc. Three-dimensional endoscope
US11010877B2 (en) 2017-01-27 2021-05-18 Canon U.S.A., Inc. Apparatus, system and method for dynamic in-line spectrum compensation of an image
US10803622B2 (en) * 2017-03-01 2020-10-13 Cognex Corporation High speed structured light system
US10360693B2 (en) * 2017-03-01 2019-07-23 Cognex Corporation High speed structured light system
US11704537B2 (en) 2017-04-28 2023-07-18 Microsoft Technology Licensing, Llc Octree-based convolutional neural network
WO2018217911A1 (en) * 2017-05-24 2018-11-29 Augustyn + Company Method, system, and apparatus for rapidly measuring incident solar irradiance on multiple planes of differing angular orientations
US10281552B2 (en) 2017-05-24 2019-05-07 Augustyn + Company Method, system, and apparatus for rapidly measuring incident solar irradiance on multiple planes of differing angular orientations
US11164387B2 (en) 2017-08-08 2021-11-02 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US11682177B2 (en) 2017-08-08 2023-06-20 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US10304254B2 (en) 2017-08-08 2019-05-28 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US10679424B2 (en) 2017-08-08 2020-06-09 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US10613228B2 (en) 2017-09-08 2020-04-07 Microsoft Technology Licensing, Llc Time-of-flight augmented structured light range-sensor
US11852461B2 (en) 2017-10-06 2023-12-26 Visie Inc. Generation of one or more edges of luminosity to form three-dimensional models of objects
US10724853B2 (en) 2017-10-06 2020-07-28 Advanced Scanners, Inc. Generation of one or more edges of luminosity to form three-dimensional models of objects
US10890439B2 (en) 2017-10-06 2021-01-12 Advanced Scanners, Inc. Generation of one or more edges of luminosity to form three-dimensional models of objects
US20200061769A1 (en) * 2017-11-07 2020-02-27 Dalian University Of Technology Monocular vision six-dimensional measurement method for high-dynamic large-range arbitrary contouring error of cnc machine tool
US11014211B2 (en) * 2017-11-07 2021-05-25 Dalian University Of Technology Monocular vision six-dimensional measurement method for high-dynamic large-range arbitrary contouring error of CNC machine tool
US10991112B2 (en) * 2018-01-24 2021-04-27 Qualcomm Incorporated Multiple scale processing for received structured light
US20190304175A1 (en) * 2018-03-30 2019-10-03 Konica Minolta Laboratory U.S.A., Inc. Three-dimensional modeling scanner
US10650584B2 (en) * 2018-03-30 2020-05-12 Konica Minolta Laboratory U.S.A., Inc. Three-dimensional modeling scanner
US20210041236A1 (en) * 2018-04-27 2021-02-11 China Agricultural University Method and system for calibration of structural parameters and construction of affine coordinate system of vision measurement system
US20190339369A1 (en) * 2018-05-04 2019-11-07 Microsoft Technology Licensing, Llc Field Calibration of a Structured Light Range-Sensor
US10663567B2 (en) * 2018-05-04 2020-05-26 Microsoft Technology Licensing, Llc Field calibration of a structured light range-sensor
US20210349030A1 (en) * 2018-10-19 2021-11-11 Renishaw Plc Spectroscopic apparatus and methods
US11927536B2 (en) * 2018-10-19 2024-03-12 Renishaw Plc Spectroscopic apparatus and methods
US10794732B2 (en) 2018-11-08 2020-10-06 Canon U.S.A., Inc. Apparatus, system and method for correcting nonuniform rotational distortion in an image comprising at least two stationary light transmitted fibers with predetermined position relative to an axis of rotation of at least one rotating fiber
US11922657B2 (en) * 2019-01-09 2024-03-05 Trinamix Gmbh Detector for determining a position of at least one object
US20220092814A1 (en) * 2019-01-09 2022-03-24 Trinamix Gmbh Detector for determining a position of at least one object
US20220092804A1 (en) * 2019-02-22 2022-03-24 Prophesee Three-dimensional imaging and sensing using a dynamic vision sensor and pattern projection
US11527009B2 (en) 2019-05-10 2022-12-13 Smart Picture Technologies, Inc. Methods and systems for measuring and modeling spaces using markerless photo-based augmented reality process
US11138757B2 (en) 2019-05-10 2021-10-05 Smart Picture Technologies, Inc. Methods and systems for measuring and modeling spaces using markerless photo-based augmented reality process
US11629835B2 (en) * 2019-07-31 2023-04-18 Toyota Jidosha Kabushiki Kaisha Auto-calibration of vehicle sensors
US20210209340A1 (en) * 2019-09-03 2021-07-08 Zhejiang University Methods for obtaining normal vector, geometry and material of three-dimensional objects based on neural network
US11748618B2 (en) * 2019-09-03 2023-09-05 Zhejiang University Methods for obtaining normal vector, geometry and material of three-dimensional objects based on neural network
CN111311742A (en) * 2020-03-27 2020-06-19 Beijing Baidu Netcom Science and Technology Co., Ltd. Three-dimensional reconstruction method, three-dimensional reconstruction device and electronic equipment
US11216005B1 (en) * 2020-10-06 2022-01-04 Accenture Global Solutions Limited Generating a point cloud capture plan
WO2022124232A1 (en) * 2020-12-10 2022-06-16 Fanuc Corporation Image processing system and image processing method
CN114167866A (en) * 2021-12-02 2022-03-11 Guilin University of Electronic Technology Intelligent logistics robot and control method
US11688087B1 (en) * 2022-08-26 2023-06-27 Illuscio, Inc. Systems and methods for using hyperspectral data to produce a unified three-dimensional scan that incorporates depth
WO2024044077A1 (en) * 2022-08-26 2024-02-29 Illuscio, Inc. Systems and methods for using hyperspectral data to produce a unified three-dimensional scan that incorporates depth
CN116051696A (en) * 2023-01-10 2023-05-02 Zhejiang Lab Method and device for reconstructing a relightable implicit human body model
CN117146710A (en) * 2023-10-30 2023-12-01 Institute of Automation, Chinese Academy of Sciences Dynamic projection three-dimensional reconstruction system and method based on active vision

Also Published As

Publication number Publication date
US20090102840A1 (en) 2009-04-23
US8213707B2 (en) 2012-07-03

Similar Documents

Publication Publication Date Title
US8213707B2 (en) System and method for 3D measurement and surface reconstruction
US8032327B2 (en) Auto-referenced sensing method for three-dimensional scanning
US8284240B2 (en) System for adaptive three-dimensional scanning of surface characteristics
US7769205B2 (en) Fast three dimensional recovery method and apparatus
US7098435B2 (en) Method and apparatus for scanning three-dimensional objects
JP4245963B2 (en) Method and system for calibrating multiple cameras using a calibration object
EP2751521B1 (en) Method and system for alignment of a pattern on a spatial coded slide image
Tardif et al. Self-calibration of a general radially symmetric distortion model
CN112161619B (en) Pose detection method, three-dimensional scanning path planning method and detection system
WO2000027131A2 (en) Improved methods and apparatus for 3-d imaging
CN111060006A (en) Viewpoint planning method based on three-dimensional model
Siddique et al. 3d object localization using 2d estimates for computer vision applications
Mauthner et al. Region matching for omnidirectional images using virtual camera planes
Morel et al. Calibration of catadioptric sensors by polarization imaging
Beschi et al. Stereo camera system calibration: the need of two sets of parameters
Lei et al. Unwrapping and stereo rectification for omnidirectional images
Masuda et al. Simultaneous determination of registration and deformation parameters among 3D range images
Castanheiro et al. Modeling Hyperhemispherical Points and Calibrating a Dual-Fish-Eye System for Close-Range Applications
Banerjee et al. A low-cost portable 3d laser scanning system with aptness from acquisition to visualization
Alboul et al. A system for reconstruction from point clouds in 3D: Simplification and mesh representation
Singh et al. Accurate 3D terrain modeling by range data fusion from two heterogeneous range scanners
Park et al. Specularity elimination in range sensing for accurate 3D modeling of specular objects
Li et al. Uncertainty-driven viewpoint planning for 3D object measurements
Kim et al. An improved ICP algorithm based on the sensor projection for automatic 3D registration
Boutteau et al. An omnidirectional stereoscopic system for mobile robot navigation

Legal Events

Date Code Title Description
AS Assignment

Owner name: CITY UNIVERSITY OF HONG KONG, HONG KONG

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, YOU FU;REEL/FRAME:015315/0964

Effective date: 20041025

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION