WO1996012160A1 - Method and apparatus for three-dimensional digitising of object surfaces - Google Patents

Method and apparatus for three-dimensional digitising of object surfaces

Info

Publication number
WO1996012160A1
Authority
WO
WIPO (PCT)
Prior art keywords
detector
phase
fringe
period
radiation
Application number
PCT/GB1995/002431
Other languages
French (fr)
Inventor
John Humphrey Moore
Original Assignee
John Humphrey Moore
Application filed by John Humphrey Moore filed Critical John Humphrey Moore
Priority to EP95934704A priority Critical patent/EP0786072A1/en
Priority to AU37022/95A priority patent/AU3702295A/en
Publication of WO1996012160A1 publication Critical patent/WO1996012160A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2531Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object using several gratings, projected with variable angle of incidence on the object, and one detection device

Definitions

  • Non-contact 3-D digitisers are devices that, remotely and without touching an object's surface, are able to sample the analogue shape of that surface and to determine 3-D co-ordinates on the surface relative to an origin.
  • Optical non-contact digitisers use structured lighting, whereby 3-D co-ordinates on an object surface are determined by triangulation between an adjacent detector and projector.
  • the well-known shadow and projection moire techniques use similar periodic patterns in the fields of view of both the projector and the detector, so that the detected image is a moire fringe from whose contours it is possible to determine object surface height values.
  • the fringe contours have an ambiguous order which may be resolved by considering the phase of the moire fringe, rather than its intensity.
  • Those techniques which instead analyse the features of the detected pattern, or a stereo pair, or a series of such patterns, do not consider the phase of the detected pattern.
  • phase-shifting or phase-stepping techniques use synchronous detection to determine the phase at each image point from a series of phase-shifted fringe patterns, obtained, for example, by translating one of the patterns in its own plane by equal amounts.
  • a disadvantage of these phase-shifting or phase-stepping techniques is the expensive hardware required to effect the necessary translation.
  • One of the objects of preferred embodiments of the present invention is to retain the advantages of spatial phase measurement techniques (i.e. the use of simple apparatus with no moving parts) but to overcome its disadvantages by using a technique presented here called virtual phase-stepping.
  • virtual phase-stepping uses apparatus similar to that used in Fourier analysis, that is, a projector which casts for example a line pattern, of square or sinusoidal intensity profile, onto the surface of an object, which pattern is viewed by an area sensor.
  • the detected image is operated upon by a procedure which is equivalent to, for example, adding a minimum of three phase-shifted patterns to the detected image to generate three phase-shifted images of moire fringe patterns of the contour type, which moire patterns may be used to determine the phase at each image point using synchronous detection.
  • the phase image, howsoever determined, contains 2π discontinuities, which have to be resolved to determine the relative order of the fringes.
  • phase-unwrapping techniques are known which determine relative fringe orders with respect to a reference point in the phase image, but most such approaches will propagate phase errors throughout the phase image and all will be unable to assign relative fringe orders to regions of the phase image not contiguous with the reference point.
  • the 3-D fringe geometry is such that fringes of constant order resemble skewed paraboloids.
  • a second important object of preferred embodiments of this invention is to simplify the calibration of fringe projection devices by using phase values and relative fringe orders for both the calibration and the subsequent 3-D reconstruction. This avoids the need for a mathematical model of the optical apparatus, which, however rigorously developed, may be incomplete.
  • in so-called direct fringe calibration presented here, a 3-D surface of known shape, preferably a flat surface perpendicular to the optical axis of the sensor, is moved in known steps through a measurement volume. At each successive position of the surface, a relative fringe order is determined for every point in the phase image, for example by phase-unwrapping, and the relative fringe order, the phase, and the known 3-D co-ordinates of the surface are stored such that they may later be retrieved.
  • the 3-D co-ordinates of the corresponding point on the object surface can be found by correlating the phase value and the fringe order with those stored.
  • a third object of preferred embodiments of this invention is to provide a means referred to here as multiple-fringe indexing, whereby using a single sensor and two or more fringe projectors, a relative fringe number may be assigned independently to any point in the phase image, without knowing the relative position and orientation of the detector and any projectors, and without phase-unwrapping.
  • Multiple-fringe indexing requires that two or more fringe patterns be projected onto the same object surface in view of the sensor, for example by using multiple grating projectors, but in any case contrived so that the different fringe patterns are instances of different 3-D fringe geometries.
  • provided that the fringe geometries have a common reference, for example the relative fringe numbers may be equal at a specific point in the measurement volume, or the fringe geometries may be referenced to 3-D co-ordinates with a common origin, for example by using the aforementioned direct fringe calibration technique, then, of the plurality of possible fringe orders which could be correct for a point in the phase image of each fringe geometry, the order which, when the candidates are correlated together, gives in each case most nearly the same value relative to the common origin is selected as the most likely to be correct.
  • the reliability of this technique can be improved by simultaneously considering a group of image points.
  • the statistically most common fringe order in the group is taken as the correct order, and this order may be used to determine the correct order of any rogue point whose order considered in isolation is ambiguous or incorrect.
  • Multiple-fringe indexing is a significant advance for fringe analysis since it permits a relative fringe number to be determined without phase-unwrapping, avoiding the propagation of phase errors and permitting the reconstruction of discontinuous surfaces.
  • a further benefit is the automatic detection of poor data, i.e. where none of the possible fringe orders correlate sufficiently with respect to a common origin. This facility can save the hours of manual editing that are often necessary after non-contact 3-D data acquisition, once the surface has been reconstructed.
  • a further filter can be provided so that only those points are reconstructed which are within a tolerance which can be set by the user.
  • additional benefits are obtained when one fringe pattern has a coarse period with respect to another, and especially when the period of the coarse fringe is greater than or equal to the depth of field of the measurement volume, since in these cases the coarse fringe pattern may be used for indexing and the fine pattern used to reconstruct the surface at a relatively higher accuracy.
  • the technique will also work with two patterns of similar period and although in this case it is less robust, it has the advantage that one fringe pattern may be substantially horizontal with respect to the sensor image plane, the other substantially vertical, so that the phase in any region may be determined according to the fringe geometry which, according to noise in the image or steep surface slope, is likely to contain the most accurate phase values.
  • This technique may be extended by using three or more fringe projectors, for example with increasingly smaller fringe period, or a mixture of coarse and fine fringes of different orientations.
  • Spatial phase-measurement techniques can be implemented with very simple apparatus, such as a 35 mm slide projector with a square-wave grating and a standard video camera, but this is unlikely to be adequate for accurate measurements.
  • In order to achieve the desired accuracy and flexibility it is often necessary to use a variety of slides and mounts to hold the projection and observation componentry, all mounted on an optical table, but such apparatus is expensive and bulky, and time-consuming to set-up for different configurations.
  • a further problem with such apparatus is maintaining a constant relationship between the image planes of the projectors and the sensor without a constant temperature environment.
  • Another object of preferred embodiments of this invention is to provide a non-contact 3-D vision probe which greatly simplifies the apparatus required for spatial phase-measurement by providing a flexible and compact unit which can be variously mounted, containing one or more fringe projectors and an area sensor, the fields of view of which may be varied by, for example, using different standard lenses, and the stand-off in front of the camera can be modified by, for example, converging or diverging the fringe projectors such that their optical axes cross the optical axis of the sensor at different distances in front of it. Furthermore, particular care is taken to maintain the important relationship between the image planes of the sensor and projectors.
  • the probe may, for example, contain just one fringe projector in which case relative fringe numbers can be determined, for example, by reference to a mark or local modulation introduced to the projected fringe.
  • the aforementioned phase-unwrapping techniques are used to determine relative fringe numbers, and consequently, this technique is limited to simple quasi-cylindrical objects.
  • the probe may instead contain two or more fringe projectors, suitable for the aforementioned multiple-fringe indexing technique and variants thereof, which techniques may be extended further to perform video-rate data acquisition by having all the projectors on at once, where each phase pattern is extracted from an acquired monochrome image by using, for example, Fourier techniques, or from an acquired colour image by using, for example, the red, green and blue components of the image.
  • the probe may be further extended whereby it or the object to be measured is supported by a manipulator with one or many rotational or linear axes of movement so as to present multiple views of the object surface to the sensor.
  • the multiple views may be acquired so as to obtain 3-D co-ordinates over substantially the whole object surface.
  • a closing procedure can be employed to automatically digitise the whole object surface.
  • a further extension is to employ motorised zoom lenses and motorised projector alignment, so as to reconstruct 3-D co-ordinates at different levels of magnification.
  • the probe could switch between various pre-determined modes for which it has been calibrated, or alternatively, the changes in fringe geometry at different zoom settings can be calibrated explicitly and interpolated appropriately for any arbitrary zoom setting.
  • a non-contact 3-D digitiser comprising :-
  • At least one projector having a source of electromagnetic radiation and means for collecting, modulating and focusing the radiation to cast a regular pattern of dark and light stripes onto the surface of a remote subject; a detector incorporating a focusing device and a plurality of detector elements sensitive to radiation projected by the projector and so disposed as to view a subject irradiated by such projected radiation;
  • said means for determining the phase of radiation detects the phase by the steps of:
  • the integer sampling period N is approximately the same as the period of the detected radiation.
  • a digitiser as above may further comprise a moveable target at which a subject to be measured may be disposed.
  • a digitiser as above may further comprise means for calibrating a volume by the following steps:
  • a digitiser as above may include means for subsequently reconstructing an unknown object surface placed within the volume described by the motion of the target surface by the following steps: determining the phase of detected radiation;
  • a digitiser as above may comprise at least two said projectors, each with a different 3-D fringe geometry.
  • said geometries are cross-indexed to determine the depth co-ordinates of a point on an unknown surface by the following steps:
  • the projectors irradiate in sequence the same part of an unknown surface which is in sight of the detector so that respective series of moire fringes apparent on the surface are different in orientation and/or period;
  • phase of the detected radiation at any detector element is determined for each fringe geometry, and used to index the stored phase values at that detector element such that a series of possible object depth co-ordinates are determined for each fringe geometry, where the correct depth co-ordinate in each case is that which minimises the difference between them.
  • phase values from adjacent detector elements are grouped together.
  • a digitiser as above may comprise at least three said projectors, each with a different 3-D fringe geometry, wherein each fringe projector has a successively smaller period, and/or different fringe projectors have significantly different orientation.
  • phase values determined during calibration are sampled from an origin to determine the period of the detected radiation at each detector element and target position;
  • the corresponding object depth co-ordinate is determined such that for a unique integer period, a series of depth co-ordinates may be fitted to an appropriate mathematical function, the parameters of which are stored, along with the fringe integer, such that they may later be retrieved;
  • the phase and period of the detected radiation are used as an index to the stored values to determine the depth co-ordinates of a point on an unknown surface corresponding to a detector element.
  • the target is a substantially flat target coplanar to the detector, and the target may be moved in equal steps perpendicular to the detector.
  • an omni-directional mounting is provided for the detector, and the detector may be established coplanar to the target by the following steps:
  • a digitiser as above may have adjacent detector and projector(s) combined into a flexible compact unit such that the or each projector may be adjusted so that the optical axes of the projector and detector coincide at different distances from the detector, and wherein the magnification of the focusing device and the fringe orientation can be varied.
  • a substantially fixed relationship is maintained between the image planes of the detector and the or each projector throughout a wide temperature range; and/or a substantially linear relationship is maintained between the image planes of the detector and the or each projector throughout a wide temperature range, which relationship is compensated for by calibration; and/or
  • the detector is cooled.
  • means are provided to carry out a semi- or fully-automatic calibration technique; wherein the detector may be positioned coplanar to a substantially flat reference target; wherein the detector focusing device may be adjusted to give optimum magnification, focus and image contrast by viewing the target; wherein the orientation of the or each projector may be established by viewing the target and its magnification, focus and image contrast optimised; and wherein the magnification, fringe orientation and period of the or each projector may be established according to a desired calibrated volume size and stand-off.
  • a digitiser as above may comprise either a single probe unit comprising detector and projector(s) and a multi-axis manipulator to hold the subject or the probe; or multiple probe units disposed around the subject, whereby different views of the subject surface may be presented to the detector(s).
  • a digitiser as above may include means for reconstructing a sample surface semi- or fully- automatically by the following steps:
  • each focusing device has variable magnification, varying this parameter to digitise gross or fine surface detail.
  • Figure 1 shows a schematic layout of one example of apparatus used for virtual phase-stepping
  • Figure 2 shows a schematic layout of one example of apparatus suitable for direct fringe calibration
  • Figure 3 shows a schematic layout of one example of apparatus used for multiple-fringe indexing
  • Figure 4 shows a schematic layout of one example of a compact and flexible 3-D vision probe
  • Figure 5 shows a schematic layout of one example of apparatus used for fringe projection.
  • Appendix A gives the mathematical equations required for a particular example of virtual phase-stepping.
  • one example of a non-contact 3-D digitiser comprises a fringe projector 1 which projects a 3-D fringe pattern 2 onto a remote object surface 3. Adjacent to the projector 1 is positioned an area array sensor and lens 4 such that the surface of the object 3 within the field of view of the sensor 5 is illuminated by the projected fringe 2.
  • the fringe projector 1 is oriented so that the projected fringes 2 are approximately perpendicular to the plane of the sensor 4 and fringe projector 1 and are approximately parallel to the sensing elements within the sensor 4.
  • the image of the fringes on the object surface 3 is acquired by the sensor 4 and stored in an image processor 6 such that it may later be retrieved.
  • the image may be shown on a cathode ray tube (CRT) 7 or other display means connected to the image processor 6, which CRT may also show a user-interface whereby the user can interact with the image processor 6 using a mouse, keyboard, light-pen, or other input device 8 to select different operations or processes.
  • the phase of the fringe pattern acquired using, for example, the apparatus in Figure 1 may be determined using the aforementioned virtual phase-stepping method.
  • the virtual fringe comprises a series of regularly spaced parallel black bars, where the fringe period is similar to the period of the projected fringes 2, but in any case is an integer greater than or equal to three pels (picture elements).
  • the spaces between the black bars are only one pel wide, in which case the data under the bars may be reconstructed using linear interpolation, but higher order interpolation schemes could also be employed.
  • a minimum of three phase-shifted fringe contours are generated as above by repeatedly phase-stepping the virtual grating by one pel perpendicular to the acquired fringe pattern, from which the phase value at any pel position may be determined using synchronous detection.
  • a second example is as explained above, but where the virtual fringe pattern has a sinusoidal profile which is combined with the acquired fringe pattern with a mathematical operation, for example add, subtract, multiply, or divide, to generate a moire fringe of the contour type.
  • in this case it is also necessary to apply a filter, for example a one-dimensional (1-D) low-pass filter, to the result, before using synchronous detection.
  • This approach has the advantage of using only conventional image processing techniques but is less accurate than the first example given above.
  • a third example is similar to the two previous examples, but relies upon the observation that contour-type moire fringes may be generated directly from the acquired fringe image by simply re-ordering the intensity data perpendicular to the pattern in a repetitive manner with a period equivalent to that of the acquired fringe. This is the fastest technique to process, but is the least accurate.
  • the acquired fringe pattern may be analysed by the image processor, one line of data perpendicular to the fringes at a time, and the relevant processes, such as from the examples above, combined into a single function, such as that given in Appendix A, which returns a phase value for each pel using the technique given by the first example above.
  • Many other virtual fringe profiles, and many other known temporal phase-measurement techniques, may also be used as the basis for virtual phase-stepping.
  • the phase image as determined above, or otherwise, is the basis for the following 3-D reconstruction, but the apparatus used, for example the apparatus in Figure 1, must be calibrated to relate fringe numbers to distances in front of the camera.
  • Figure 2 shows apparatus identical to that of Figure 1 but with the addition of a motion table 9 and controller 11.
  • the motion table 9 is arranged to hold the object, and to have a linear axis of motion 10 preferably collinear with the optical axis of the sensor 4.
  • the table 9 is moved repeatedly by known amounts along its axis of motion 10 by means of an indexer, controller, driver, or other electronic device 11 under the control of the image processor 6.
  • an object 3 with known shape is placed on the table 9, which table 9 is moved in preferably equal steps either towards or away from the sensor.
  • the known object is preferably a flat surface positioned coplanar with the sensor image plane.
  • a fringe pattern of the object surface as illuminated by the fringe projector 1 is acquired by the sensor 4 and stored in the image processor 6, from which fringe pattern a phase image may be determined using virtual phase-stepping as above, or by some other means, which phase image is stored in the image processor 6 such that it may later be retrieved.
  • phase images determined by the apparatus in Figure 2 are unwrapped to give relative fringe numbers using one of the aforementioned phase-unwrapping techniques.
  • This process is applied to each phase image, and also to the series of phase images, this latter either by ensuring that the steps by which the table 9 is moved are less than half the separation of adjacent fringes in that same direction, or by providing the projected fringe pattern 2 with a mark or local intensity modulation whose centre can be identified in the acquired fringe pattern and which can be removed from that pattern before calculating the phase values.
  • the relative fringe number between two points so found on different phase images is given by the distance between them perpendicular to the fringes, divided by the period of the virtual fringe pattern.
  • a second technique is to determine those sub-pixel positions in the phase image which represent relative fringe orders, and save them to a store, along with the 3-D co-ordinates as before.
  • a third method is to represent the fringe geometry mathematically, using for example a quadratic, and find the parameters of the quadratic by least-squares fitting of the relative fringe orders found as above.
  • a fourth and preferred method is to perform quadratic least-squares fitting of the series of relative fringe numbers at any common pel location.
  • an unknown object surface placed within the measurement volume described by the shape and motion of the surface 3 may be digitised in 3-D by acquiring an image with the sensor 4 of the fringe 2 projected onto the object surface, which acquired fringe pattern is processed using virtual phase-stepping or some other phase-measurement technique to calculate a phase image, which phase image is unwrapped to determine relative fringe numbers, and which relative fringe numbers are indexed with those relative fringe numbers stored previously, for example by adding a mark to the projected fringe as before, to determine 3-D co-ordinates at specific pel locations in the phase image.
  • the indexing process depends upon the shape of the known object surface, its orientation with respect to the sensor axis, and the structure of the CLUT, but in any case it is possible to recover the z co-ordinate of a point on an unknown surface, or its distance along the sensor optical axis from an origin, given the relative fringe number at a corresponding pel location.
  • a flat surface is positioned at 3 parallel to the sensor image plane, which flat surface is matte black except for a white rectangle of known size, preferably positioned such that its edges are parallel to the rows and columns of the sensor image plane, and it nearly fills the sensor 4 field of view 5.
  • the flat surface is moved in known, preferably equal, steps by the motion table 9 along its principal axis 10, and at each position of the table 9 an image is acquired of the white rectangle, whose edges are located in the image to sub-pixel resolution and least-squares fitted to their known dimensions to determine a series of horizontal and vertical scale factors at varying distances from the sensor.
  • These scale factors are preferably best-fitted to an equation for a line, in which case it is only necessary to store six parameter values to represent both horizontal and vertical scale factors at any depth.
  • the calibration of x and y co-ordinates as above can be combined into a single operation with the aforementioned calibration of z co-ordinates.
  • the two calibration processes may be distinct, in which case a more complex target, for example a grid pattern, may be used to calibrate the x and y co-ordinates across the field of view, as well as through the depth of field, by determining the parameters affecting radial lens distortion.
  • the image plane of the sensor 4 may be set-up coplanar to a flat surface positioned at 3.
  • the flat surface is perpendicular to the table axis of motion 10 and the sensor is mounted on, for example, a tilt-and-rotation stage such that it may be rotated around all three of its principal axes.
  • the flat surface is further provided with a pattern of dark and light stripes, for example of square-wave intensity profile, which lines are preferably perpendicular to the table axis of motion 10.
  • the sensor 4 is approximately positioned with its optical axis perpendicular to the flat surface, and an image is acquired by the sensor 4 of the lines on the flat surface, which image is stored in the image processor 6 such that it may later be retrieved. If the period of the lines or fringes on the flat surface is approximately equivalent to two pels in the acquired image then the fringes will interfere with the resolution of the sensor, generating a moire fringe pattern of more or less parallel and equispaced lines, depending upon the orientation of the sensor relative to the flat surface. Where the fringe period is approximately equivalent to three or more pels in the acquired image, then alternatively a phase image may be calculated from the acquired fringe pattern, for example by using virtual phase-stepping.
  • the sensor may be iteratively rotated around its three axes until the phase or moire fringe distribution is everywhere even and parallel, in which case the sensor image plane and the flat surface are coplanar.
  • An alternative but preferred technique is to use a chequered pattern, where adjacent rectangular tiles contain only vertical or horizontal lines, which pattern is converted to phase values using, for example, virtual phase-stepping respectively applied horizontally or vertically.
  • These techniques may be further extended to operate automatically, whereby, iteratively, an image is acquired by the sensor 4 of the line pattern, from which is determined a moire fringe or a phase image as above, after which the sensor 4 is moved to minimise asymmetries in the moire fringe or phase image distribution.
  • A specific example of a technique for non-contact 3-D digitising of discontinuous surfaces, previously referred to as multiple-fringe indexing, is now explained by reference to Figure 3, which apparatus is similar to that of Figure 2 but with the addition of a second fringe projector 12, placed near the sensor 4, which projects a second fringe pattern 13 onto the object surface positioned at 3, the relationship between said fringe projectors being unimportant so long as each results in a different 3-D fringe geometry, provided, for example, by the fringe projectors having different fringe rotation, period or magnification.
  • the two projectors 1, 12 are controlled by the image processor 6 via a controller or other device 14 such that the object surface positioned at 3 is illuminated by one or other projector, or alternatively both projectors are off.
  • the system is preferably calibrated using the aforementioned direct fringe calibration technique, modified to now store two sets of relative fringe numbers in the CLUT, one for each of the projected fringe geometries, where both are referenced to the same 3-D co-ordinate system.
  • fringe patterns are acquired by the sensor 4 of the object surface illuminated by fringe projectors 1,12, which fringe patterns are used to calculate phase images using, for example, virtual phase-stepping, which phase images are correlated with the CLUT at any pel location such that the correct fringe orders of each phase image are those which give most nearly the same z co-ordinate.
  • Figure 4 shows a schematic layout of the probe, in which two fringe projectors 1, 12 are symmetrically disposed either side of a sensor 4 so that the optical axes are coplanar.
  • Standard lenses 15, 16 are interchangeable with other standard lenses of different magnification, and the fringe projectors 1, 12 are held by pivots 17 within a member or members 18 which also holds the sensor 4, such that the projectors 1, 12 may be rotated around the pivot to intersect the optical axis of the sensor 4 at different distances in front of the sensor 4 and may afterwards be fixed firmly in place.
  • each fringe projector may comprise, for example, a point source 21, a spherical reflector 22 and a condenser 23 whereby illumination is focused through a transmission grating of parallel dark and light stripes 24 into the aperture of the objective lens 25.
  • the grating 24 may be moved along the principal optical axis, in order that it may be focused for different lenses 5, and also rotated around the principal optical axis, in order to control and adjust the fringe period.
  • Figure 4 shows two projectors 1, 12 positioned either side of the sensor, but the probe may instead contain one fringe projector, or three or more projectors, and any projector may be disposed above or below the sensor.
  • the 3-D vision probe is provided with substantially the apparatus shown in Figure 3, where the two fringe projectors 1, 12 and sensor 4 of Figure 3 are replaced by the probe shown in Figure 4, by which means the aforementioned virtual phase-stepping, direct fringe calibration, multiple-fringe indexing, or other techniques, are effected.
  • the lenses 15,16 are motorised zoom lenses, and the rotation of the projector or projectors around the pivots 17 is similarly motorised, all under control of an image processor such as the item 6 in Figure 3. With appropriate control processes this provides a flexible and compact 3-D vision probe which can accommodate itself to changing or different circumstances.
  • a virtual grating of integer pixel pitch M is combined with a deformed fringe pattern to generate a second moire fringe pattern, and by then shifting the virtual grating perpendicular to the grating lines, further phase-shifted second moire fringe patterns are generated, which can be fitted to a sine wave to find the phase values.
  • phase-maps are generated, which provide three separate phase values for each pixel, each phase-shifted by 1/3 of a period; from these values, Fourier sine and cosine series can be formed.
  • the interpolation and series summing can be accomplished simultaneously with the following general equation:
  • n is the pixel number and m is the grating period.

Abstract

An optical device and method for three-dimensional digitising comprising a detector (4) and two or more projected patterns (2, 13) of different period, whereby the phase distribution of each pattern imaged by the detector (4) is determined by phase-stepping a second, virtual, pattern across the detected image, the relationship between phase values and three-dimensional object co-ordinates is determined by repeatedly moving a known target through a prescribed volume, which relationship is stored, and three-dimensional co-ordinates of the surface of an object (3) placed within the prescribed volume are subsequently determined by correlating the detected and stored phase in all the projected patterns.

Description

METHOD AND APPARATUS FOR THREE-DIMENSIONAL
DIGITISING OF OBJECT SURFACES
This invention relates to a method and apparatus for non-contact 3-D (three-dimensional) digitising, or data acquisition, of object surfaces. Non-contact 3-D digitisers are devices that, remotely and without touching an object's surface, are able to sample the analogue shape of that surface and to determine 3-D co-ordinates on the surface relative to an origin. Optical non-contact digitisers use structured lighting, whereby 3-D co-ordinates on an object surface are determined by triangulation between an adjacent detector and projector.
There are many structured lighting techniques in the prior art which project a spot or stripe onto the object surface, but many data acquisition cycles are necessary to reconstruct a dense matrix of 3-D co-ordinates. There are by contrast many known techniques where the object is illuminated by multiple stripes or by some other pattern filling the view of the detector, permitting the simultaneous acquisition of many 3-D co-ordinates, but introducing the problem of how to index each stripe. This is the field of the present invention.
The well-known shadow and projection moire techniques use similar periodic patterns in the fields of view of both the projector and the detector, so that the detected image is a moire fringe from whose contours it is possible to determine object surface height values. The fringe contours have an ambiguous order which may be resolved by considering the phase of the moire fringe, rather than its intensity. Those techniques which instead analyse the features of the detected pattern, or a stereo pair, or a series of such patterns, do not consider the phase of the detected pattern.
Known so-called temporal phase-measurement techniques use synchronous detection to determine the phase at each image point from a series of phase-shifted fringe patterns, obtained, for example, by translating one of the patterns in its own plane by equal amounts. A disadvantage of these so-called phase-shifting or phase-stepping techniques is the expensive hardware required to effect the necessary translation.
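As an illustration of the synchronous-detection step referred to above (not text from the patent), the N-step calculation can be written in a few lines; the array handling below and the assumption of equal phase steps of 2π/N are my own.

```python
import numpy as np

def synchronous_detection(frames):
    """Wrapped phase from N fringe images, frame k phase-shifted by 2*pi*k/N.

    frames: sequence of N two-dimensional intensity arrays.
    Returns the phase at every pixel, wrapped to (-pi, pi].
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    k = np.arange(n).reshape(n, 1, 1)
    s = np.sum(frames * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(frames * np.cos(2 * np.pi * k / n), axis=0)
    return np.arctan2(-s, c)
```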
Known so-called spatial phase-measurement techniques, for example, Takeda et al. Applied Optics, Vol. 22 No. 24 (1983) pp 3977-3982, K.H.Womack Optical Engineering, Vol. 23 (1984) pp 391-395, and German Application P 40 14 019.9, have the advantage of requiring just one moire fringe pattern but they are complicated to use and slow to process on conventional computing hardware.
One of the objects of preferred embodiments of the present invention is to retain the advantages of spatial phase measurement techniques (i.e. the use of simple apparatus with no moving parts) but to overcome its disadvantages by using a technique presented here called virtual phase-stepping.
Preferably, virtual phase-stepping uses apparatus similar to that used in Fourier analysis, that is, a projector which casts for example a line pattern, of square or sinusoidal intensity profile, onto the surface of an object, which pattern is viewed by an area sensor. The detected image is operated upon by a procedure which is equivalent to, for example, adding a minimum of three phase-shifted patterns to the detected image to generate three phase-shifted images of moire fringe patterns of the contour type, which moire patterns may be used to determine the phase at each image point using synchronous detection.
The phase image, howsoever determined, contains 2π discontinuities, which have to be resolved to determine the relative order of the fringes. Many so-called phase-unwrapping techniques are known which determine relative fringe orders with respect to a reference point in the phase image, but most such approaches will propagate phase errors throughout the phase image and all will be unable to assign relative fringe orders to regions of the phase image not contiguous with the reference point.
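A minimal one-dimensional illustration of the unwrapping step follows (an illustrative sketch, not the patent's method): 2π jumps between neighbouring samples are removed relative to the first sample, and any local phase error propagates to all later samples, which is exactly the weakness noted above.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Unwrap a 1-D wrapped phase profile relative to its first sample.

    Jumps larger than pi between neighbours are treated as 2*pi
    discontinuities; an error at one sample propagates to all later ones.
    """
    wrapped = np.asarray(wrapped, dtype=float)
    corrections = -2 * np.pi * np.round(np.diff(wrapped) / (2 * np.pi))
    unwrapped = wrapped.copy()
    unwrapped[1:] += np.cumsum(corrections)
    return unwrapped
```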
Howsoever relative fringe orders are determined, means must be provided whereby object surface depths, or distances in front of the sensor, may be determined given a relative fringe order and fractional phase value.
For devices employing standard divergent optics the 3-D fringe geometry is such that fringes of constant order resemble skewed paraboloids.
Conventional calibration techniques use a mathematical model of the apparatus and 3-D fringe geometry, as given, for example, in M. Idesawa et al. Applied Optics Vol. 16 No. 8 (1977) pp. 2151-2162, and the unknown geometric parameters of this model are determined using a known target and so-called bundle adjustment, which is common in stereo-photogrammetry. Alternatively, using an object surface of known shape, the best-fit parameters may be determined using the downhill simplex method given, for example, in Press et al. Numerical Recipes in Pascal, Cambridge University Press (1990). These techniques do not refer to the phase image and in practice are limited by how accurately the mathematical model actually represents the apparatus. When using poor lenses, for example, there may be different degrees of accuracy over the lens field of view.
A second important object of preferred embodiments of this invention is to simplify the calibration of fringe projection devices by using phase values and relative fringe orders for both the calibration and the subsequent 3-D reconstruction. This avoids the need for a mathematical model of the optical apparatus, which, however rigorously developed, may be incomplete. In so-called direct fringe calibration presented here, a 3-D surface of known shape, preferably a flat surface perpendicular to the optical axis of the sensor, is moved in known steps through a measurement volume. At each successive position of the surface, a relative fringe order is determined for every point in the phase image, for example by phase-unwrapping, and the relative fringe order, the phase, and the known 3-D co-ordinates of the surface are stored such that they may later be retrieved. When an unknown object is subsequently placed within the measurement volume, and the relative fringe order is calculated at any point in the phase image, the 3-D co-ordinates of the corresponding point on the object surface can be found by correlating the phase value and the fringe order with those stored.
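A minimal sketch of how such a per-pixel calibration might be stored and used is given below. It assumes that an unwrapped phase image (relative fringe order plus fractional phase) is available at each known target position and that, at a given pixel, the unwrapped phase varies monotonically with depth; the class and method names are illustrative and do not come from the patent.

```python
import numpy as np

class DirectFringeCalibration:
    """Per-pixel table relating unwrapped phase to depth (z) co-ordinates.

    Built by recording the unwrapped phase at every detector element while a
    flat target is stepped through known depths; depth is later recovered for
    an unknown surface by interpolating in this table at each pixel.
    """

    def __init__(self):
        self.z_positions = []   # known target depths
        self.phase_stack = []   # unwrapped phase image at each depth

    def add_target_position(self, z, unwrapped_phase_image):
        self.z_positions.append(float(z))
        self.phase_stack.append(np.asarray(unwrapped_phase_image, dtype=float))

    def reconstruct_depth(self, unwrapped_phase_image):
        """Interpolate a depth value at every pixel of an unknown surface."""
        phases = np.stack(self.phase_stack)        # (n_positions, rows, cols)
        z = np.asarray(self.z_positions)
        target = np.asarray(unwrapped_phase_image, dtype=float)
        depth = np.full(target.shape, np.nan)
        for i in range(target.shape[0]):
            for j in range(target.shape[1]):
                column = phases[:, i, j]           # phase versus depth here
                order = np.argsort(column)
                depth[i, j] = np.interp(target[i, j], column[order], z[order])
        return depth
```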
The aforementioned fringe indexing problem is common to all full-field structured lighting techniques and is the subject of several recent Patents, for example, US 5,175,601 and US 5,307,151, which both have the disadvantage of requiring two sensors; and US 5,003,187 and US 5,339,254, which both use one sensor and several grating patterns of different period, but the gratings are parallel to each other and fringe orders are resolved by reference to the intensity of the projected illumination. A third object of preferred embodiments of this invention is to provide a means referred to here as multiple-fringe indexing, whereby using a single sensor and two or more fringe projectors, a relative fringe number may be assigned independently to any point in the phase image, without knowing the relative position and orientation of the detector and any projectors, and without phase-unwrapping.
Multiple-fringe indexing requires that two or more fringe patterns be projected onto the same object surface in view of the sensor, for example by using multiple grating projectors, but in any case contrived so that the different fringe patterns are instances of different 3-D fringe geometries. Provided that the fringe geometries have a common reference, for example the relative fringe numbers may be equal at a specific point in the measurement volume, or the fringe geometries may be referenced to 3-D co-ordinates with a common origin, for example by using the aforementioned direct fringe calibration technique, then, of the plurality of possible fringe orders which could be correct for a point in the phase image of each fringe geometry, the order which, when the candidates are correlated together, gives in each case most nearly the same value relative to the common origin is selected as the most likely to be correct.
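The order-selection step at a single image point might be sketched as follows. The callable `depth_from_order`, which maps a (geometry, fringe order, fractional phase) triple to a candidate depth, stands in for whatever calibrated relationship is used (for example a stored table such as the one sketched above); it and the other names are assumptions for illustration.

```python
import itertools
import numpy as np

def index_fringe_orders(phases, depth_from_order, candidate_orders):
    """Choose a fringe order for each projected geometry at one image point.

    phases           : fractional (wrapped) phase for each fringe geometry
    depth_from_order : depth_from_order(geometry, order, phase) -> candidate z
    candidate_orders : sequence of integer orders to test for each geometry

    Returns (orders, depth) for the combination of orders whose candidate
    depths agree most closely, together with the mean of those depths.
    """
    best_orders, best_depth, best_spread = None, None, np.inf
    for orders in itertools.product(candidate_orders, repeat=len(phases)):
        depths = [depth_from_order(g, order, phase)
                  for g, (order, phase) in enumerate(zip(orders, phases))]
        spread = max(depths) - min(depths)
        if spread < best_spread:
            best_orders = orders
            best_depth = float(np.mean(depths))
            best_spread = spread
    return best_orders, best_depth
```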
The reliability of this technique can be improved by simultaneously considering a group of image points. The statistically most common fringe order in the group is taken as the correct order, and this order may be used to determine the correct order of any rogue point whose order, considered in isolation, is ambiguous or incorrect. Multiple-fringe indexing is a significant advance for fringe analysis since it permits a relative fringe number to be determined without phase-unwrapping, avoiding the propagation of phase errors and permitting the reconstruction of discontinuous surfaces. A further benefit is the automatic detection of poor data, i.e. where none of the possible fringe orders correlate sufficiently with respect to a common origin. This facility can save the hours of manual editing that are often necessary after non-contact 3-D data acquisition, once the surface has been reconstructed. A further filter can be provided so that only those points are reconstructed which are within a tolerance which can be set by the user. These techniques facilitate the measurement of problem surfaces, for example those which are not matte white, are reflective, or are textured.
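One possible reading of the group-voting and tolerance filter in code form is sketched below: each pixel's independently chosen fringe order is replaced by the most common order in a small neighbourhood, and pixels whose candidate depths disagree by more than a user-set tolerance are rejected. This is an illustrative interpretation, not code from the patent.

```python
import numpy as np

def vote_fringe_orders(order_map, spread_map, tolerance, window=3):
    """Neighbourhood vote on fringe orders plus a simple validity filter.

    order_map  : 2-D array of fringe orders chosen independently per pixel
    spread_map : 2-D array of depth disagreement between fringe geometries
    tolerance  : user-set limit; pixels above it are marked invalid
    Returns (voted_orders, valid_mask).
    """
    order_map = np.asarray(order_map)
    voted = order_map.copy()
    half = window // 2
    rows, cols = order_map.shape
    for i in range(rows):
        for j in range(cols):
            block = order_map[max(0, i - half):i + half + 1,
                              max(0, j - half):j + half + 1]
            values, counts = np.unique(block, return_counts=True)
            voted[i, j] = values[np.argmax(counts)]   # modal order nearby
    valid = np.asarray(spread_map) <= tolerance
    return voted, valid
```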
Additional benefits of multiple-fringe indexing are obtained when one fringe pattern has a coarse period with respect to another, and especially when the period of the coarse fringe is greater than or equal to the depth of field of the measurement volume, since in these cases the coarse fringe pattern may be used for indexing and the fine pattern used to reconstruct the surface at a relatively higher accuracy. The technique will also work with two patterns of similar period and, although in this case it is less robust, it has the advantage that one fringe pattern may be substantially horizontal with respect to the sensor image plane, the other substantially vertical, so that the phase in any region may be determined according to the fringe geometry which, according to noise in the image or steep surface slope, is likely to contain the most accurate phase values. This technique may be extended by using three or more fringe projectors, for example with increasingly smaller fringe period, or a mixture of coarse and fine fringes of different orientations.
Spatial phase-measurement techniques can be implemented with very simple apparatus, such as a 35 mm slide projector with a square-wave grating and a standard video camera, but this is unlikely to be adequate for accurate measurements. In order to achieve the desired accuracy and flexibility it is often necessary to use a variety of slides and mounts to hold the projection and observation componentry, all mounted on an optical table, but such apparatus is expensive and bulky, and time-consuming to set up for different configurations. A further problem with such apparatus is maintaining a constant relationship between the image planes of the projectors and the sensor without a constant temperature environment.
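Returning to the coarse-plus-fine combination of fringe patterns described above, one possible per-pixel scheme is sketched below. It assumes, purely for illustration, that phase maps linearly onto depth: the coarse phase spans the whole depth of field in a single period, so it is unambiguous, and one fine fringe period corresponds to a fixed depth interval.

```python
import numpy as np

def coarse_fine_depth(phase_coarse, phase_fine, z_min, z_max, fine_period_z):
    """Depth from one coarse and one fine wrapped phase value, both in [0, 2*pi).

    The coarse phase alone gives an unambiguous but low-accuracy depth; it is
    used to pick the fine pattern's integer fringe order, and the fine phase
    then refines the depth.
    """
    # Unambiguous depth estimate from the coarse pattern.
    z_coarse = z_min + (phase_coarse / (2 * np.pi)) * (z_max - z_min)
    # Integer fringe order of the fine pattern consistent with that estimate.
    order = np.round((z_coarse - z_min) / fine_period_z
                     - phase_fine / (2 * np.pi))
    # Refined depth from the fine phase.
    return z_min + (order + phase_fine / (2 * np.pi)) * fine_period_z
```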
Another object of preferred embodiments of this invention is to provide a non-contact 3-D vision probe which greatly simplifies the apparatus required for spatial phase-measurement by providing a flexible and compact unit which can be variously mounted, containing one or more fringe projectors and an area sensor, the fields of view of which may be varied by, for example, using different standard lenses, and the stand-off in front of the camera can be modified by, for example, converging or diverging the fringe projectors such that their optical axes cross the optical axis of the sensor at different distances in front of it. Furthermore, particular care is taken to maintain the important relationship between the image planes of the sensor and projectors.
The probe may, for example, contain just one fringe projector in which case relative fringe numbers can be determined, for example, by reference to a mark or local modulation introduced to the projected fringe. In this case the aforementioned phase-unwrapping techniques are used to determine relative fringe numbers, and consequently, this technique is limited to simple quasi-cylindrical objects. The probe may instead contain two or more fringe projectors, suitable for the aforementioned multiple-fringe indexing technique and variants thereof, which techniques may be extended further to perform video-rate data acquisition by having all the projectors on at once, where each phase pattern is extracted from an acquired monochrome image by using, for example, Fourier techniques, or from an acquired colour image by using, for example, the red, green and blue components of the image.
The probe may be further extended whereby it or the object to be measured is supported by a manipulator with one or many rotational or linear axes of movement so as to present multiple views of the object surface to the sensor. In this embodiment the multiple views may be acquired so as to obtain 3-D co-ordinates over substantially the whole object surface. When used in conjunction with multiple-fringe indexing, where only valid data is reconstructed in each sensor view, a closing procedure can be employed to automatically digitise the whole object surface. A further extension is to employ motorised zoom lenses and motorised projector alignment, so as to reconstruct 3-D co-ordinates at different levels of magnification. The probe could switch between various pre-determined modes for which it has been calibrated, or alternatively, the changes in fringe geometry at different zoom settings can be calibrated explicitly and interpolated appropriately for any arbitrary zoom setting.
According to one aspect of the present invention, there is provided a non-contact 3-D digitiser comprising :-
at least one projector having a source of electromagnetic radiation and means for collecting, modulating and focusing the radiation to cast a regular pattern of dark and light stripes onto the surface of a remote subject; a detector incorporating a focusing device and a plurality of detector elements sensitive to radiation projected by the projector and so disposed as to view a subject irradiated by such projected radiation;
means for determining the phase of radiation detected by the detector;
means for determining the period of radiation detected by the detector; and
means for determining the position of a point on the subject surface, in a 3-D co-ordinate system, from the phase and period of the radiation detected at a corresponding one of the detector elements.
Preferably, said means for determining the phase of radiation detects the phase by the steps of:
establishing an integer sampling period N;
sampling the detected radiation every Nth data entry from an origin and interpolating between the sampled values such that the resultant waveform is essentially sinusoidal; and
repeating this N-1 times, each time beginning one entry further from the origin, such that at any detector element the resultant series has a sinusoidal waveform which can be fitted to a sine wave to determine a phase value. Preferably, the integer sampling period N is approximately the same as the period of the detected radiation.
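The sampling, interpolation and sine-fitting steps just listed can be condensed for a single image row as follows; the array handling and the use of synchronous detection for the final sine fit are illustrative choices of mine, not wording from the claims.

```python
import numpy as np

def virtual_phase_step_line(intensity, n):
    """Virtual phase-stepping along one line of detected fringe data.

    intensity : 1-D array of intensities sampled perpendicular to the fringes
    n         : integer sampling period, approximately equal to the detected
                fringe period in pixels (n >= 3)

    For each offset k = 0..n-1 the line is sampled every n-th pixel starting
    at entry k and linearly interpolated back to full resolution; at every
    pixel the n resulting values form an essentially sinusoidal series, which
    is fitted to a sine by synchronous detection to give a wrapped phase.
    """
    intensity = np.asarray(intensity, dtype=float)
    x = np.arange(intensity.size)
    series = []
    for k in range(n):
        samples = x[k::n]
        series.append(np.interp(x, samples, intensity[samples]))
    series = np.stack(series)                    # shape (n, line length)
    k = np.arange(n).reshape(n, 1)
    s = np.sum(series * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(series * np.cos(2 * np.pi * k / n), axis=0)
    return np.arctan2(-s, c)                     # wrapped phase per pixel
```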
A digitiser as above may further comprise a moveable target at which a subject to be measured may be disposed.
A digitiser as above may further comprise means for calibrating a volume by the following steps:
positioning a target of known surface shape in sight of the detector and projector(s), and determining the phase of any detected radiation;
repeatedly moving the target, each time a known distance and direction, but within sight of the detector and projector(s), and such that the phase determined at any detector element varies by less than one half-period between each successive target position;
storing the phase at each element of the detector, at each position of the target, along with the known object depth co-ordinate of the corresponding point on the target, such that they may later be retrieved; and
determining the period at each detector element by sampling the stored phase from an origin.
A digitiser as above may include means for subsequently reconstructing an unknown object surface placed within the volume described by the motion of the target surface by the following steps: determining the phase of detected radiation;
determining the period of the detected radiation by sampling the phase from the same origin as that used for calibration;
using the period of the detected radiation to index the stored values to determine the depth co-ordinate of any point on the unknown surface corresponding to a detector element.
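Continuing the illustrative DirectFringeCalibration sketch given earlier, the calibration and reconstruction steps just listed might be exercised as follows; the synthetic linear phase-to-depth relationship, the 2.5 mm step size and the image size are all invented purely for the demonstration.

```python
import numpy as np

# Synthetic per-pixel phase sensitivity (radians of unwrapped phase per mm).
rng = np.random.default_rng(0)
sensitivity = 0.4 * (1.0 + 0.05 * rng.random((8, 8)))

# Calibration: step a flat target through 20 known depths, storing the phase.
calibration = DirectFringeCalibration()
for step in range(20):
    z = step * 2.5                               # known 2.5 mm steps
    calibration.add_target_position(z, sensitivity * z)

# Reconstruction: recover the depth of a simulated flat object at 17.3 mm.
depth_map = calibration.reconstruct_depth(sensitivity * 17.3)
print(np.allclose(depth_map, 17.3))              # -> True
```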
A digitiser as above may comprise at least two said projectors, each with a different 3-D fringe geometry.
Preferably, said geometries are cross-indexed to determine the depth co-ordinates of a point on an unknown surface by the following steps:
the projectors irradiate in sequence the same part of an unknown surface which is in sight of the detector so that respective series of moire fringes apparent on the surface are different in orientation and/or period;
in a calibration step, the relationship between phase, period and depth co-ordinates are determined for each 3-D fringe geometry;
in a reconstruction step, the phase of the detected radiation at any detector element is determined for each fringe geometry, and used to index the stored phase values at that detector element such that a series of possible object depth co-ordinates are determined for each fringe geometry, where the correct depth co-ordinate in each case is that which minimises the difference between them. Preferably, phase values from adjacent detector elements are grouped together.
A digitiser as above may comprise at least three said projectors, each with a different 3-D fringe geometry, wherein each fringe projector has a successively smaller period, and/or different fringe projectors have significantly different orientation.
Preferably:
the phase values determined during calibration are sampled from an origin to determine the period of the detected radiation at each detector element and target position;
where the period is exactly an integer, the corresponding object depth co-ordinate is determined such that for a unique integer period, a series of depth co-ordinates may be fitted to an appropriate mathematical function, the parameters of which are stored, along with the fringe integer, such that they may later be retrieved;
or the series of depth co-ordinates at any detector element are fitted to an appropriate mathematical function, the parameters of which are stored such that they may later be retrieved; and
when reconstructing an unknown surface, the phase and period of the detected radiation are used as an index to the stored values to determine the depth co-ordinates of a point on an unknown surface corresponding to a detector element. Preferably, the target is a substantially flat target coplanar to the detector, and the target may be moved in equal steps perpendicular to the detector.
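One concrete choice for the "appropriate mathematical function" mentioned above is the quadratic least-squares fit described earlier in the text: at each detector element the known target depths are fitted against the relative fringe numbers observed there, so only three coefficients per pixel need be stored. The sketch below uses NumPy's polynomial routines and is illustrative only.

```python
import numpy as np

def fit_depth_vs_fringe(fringe_numbers, depths):
    """Quadratic least-squares fit of depth against relative fringe number at
    one detector element; returns the three coefficients to be stored."""
    return np.polyfit(np.asarray(fringe_numbers, dtype=float),
                      np.asarray(depths, dtype=float), deg=2)

def depth_from_fringe(coefficients, fringe_number):
    """Recover a depth co-ordinate from a (possibly fractional) relative
    fringe number using the stored quadratic coefficients."""
    return np.polyval(coefficients, fringe_number)
```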
Preferably, an omni-directional mounting is provided for the detector, and the detector may be established coplanar to the target by the following steps:
applying a periodic waveform to the target which is in sight of the detector;
determining the phase of detected radiation;
adjusting the detector orientation and re-determining the phase; and
repeating until the phase is equally distributed across the detector at which time the reference plane and detector are coplanar.
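One plausible numerical reading of "the phase is equally distributed across the detector" is that the local phase gradient is the same everywhere; a crude asymmetry measure along those lines is sketched below, and the adjustment loop then simply reorients the detector so as to drive it towards zero. This is my interpretation of the step, not a formula taken from the patent.

```python
import numpy as np

def phase_uniformity_error(unwrapped_phase):
    """Spread of the local phase gradient over the detector.

    A value near zero indicates that the fringe period is the same everywhere
    in the image, suggesting the detector image plane and target are coplanar.
    """
    grad_rows, grad_cols = np.gradient(np.asarray(unwrapped_phase, dtype=float))
    return float(np.std(grad_rows) + np.std(grad_cols))
```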
A digitiser as above may have adjacent detector and projector(s) combined into a flexible compact unit such that the or each projector may be adjusted so that the optical axes of the projector and detector coincide at different distances from the detector, and wherein the magnification of the focusing device and the fringe orientation can be varied.
Preferably, a substantially fixed relationship is maintained between the image planes of the detector and the or each projector throughout a wide temperature range; and/or a substantially linear relationship is maintained between the image planes of the detector and the or each projector throughout a wide temperature range, which relationship is compensated for by calibration; and/or
the detector is cooled.
Preferably, means are provided to carry out a semi- or fully-automatic calibration technique; wherein the detector may be positioned coplanar to a substantially flat reference target; wherein the detector focusing device may be adjusted to give optimum magnification, focus and image contrast by viewing the target; wherein the orientation of the or each projector may be established by viewing the target and its magnification, focus and image contrast optimised; and wherein the magnification, fringe orientation and period of the or each projector may be established according to a desired calibrated volume size and stand-off.
A digitiser as above may comprise either a single probe unit comprising detector and projector(s) and a multi-axis manipulator to hold the subject or the probe; or multiple probe units disposed around the subject, whereby different views of the subject surface may be presented to the detector(s).
A digitiser as above may include means for reconstructing a sample surface semi- or fully- automatically by the following steps:
placing the sample in sight of the or each probe and reconstructing the visible surface at those detector elements where the phase is considered to be reliable; moving the sample or probe(s) a known distance along or around any axis of movement, and reconstructing the visible surface at those detector elements where the phase is considered to be reliable;
fully traversing the sample surface by initial relatively coarse movements, in order to determine an approximate centre of the sample from 3-D co-ordinates thus far reconstructed;
subsequently moving the sample or probe(s), recalculating the sample centre and using this origin as part of a spherical shape closing process, which determines how best the probe(s) or sample may be moved in order to acquire missing data;
repeating this process until full closure is achieved according to predetermined criteria; and
optionally, where the or each focusing device has variable magnification, varying this parameter to digitise gross or fine surface detail.
According to another aspect of the present invention, there is provided a method of digitising an object surface by means of a digitiser according to any of the preceding aspects of the invention, the method including the steps of:
casting a regular pattern of dark and light stripes onto the surface of a remote subject by the or each said projector; viewing a subject irradiated by such projected radiation by the or each said detector;
determining the phase of radiation detected by the detector;
determining the period of any radiation detected by the detector; and
determining the position of a point on the subject surface, in a 3-D co-ordinate system, from the phase and period of the radiation detected at a corresponding one of the detector elements.
For a better understanding of the invention, and to show how embodiments of the same may be carried into effect, reference will now be made, by way of example, to the accompanying diagrammatic drawings, in which:
Figure 1 shows a schematic layout of one example of apparatus used for virtual phase-stepping;
Figure 2 shows a schematic layout of one example of apparatus suitable for direct fringe calibration;
Figure 3 shows a schematic layout of one example of apparatus used for multiple-fringe indexing;
Figure 4 shows a schematic layout of one example of a compact and flexible 3-D vision probe; and
Figure 5 shows a schematic layout of one example of apparatus used for fringe projection.
Appendix A gives the mathematical equations required for a particular example of virtual phase-stepping.
With reference to Figure 1, one example of a non-contact 3-D digitiser comprises a fringe projector 1 which projects a 3-D fringe pattern 2 onto a remote object surface 3. Adjacent to the projector 1 is positioned an area array sensor and lens 4 such that the surface of the object 3 within the field of view 5 of the sensor is illuminated by the projected fringe 2. The fringe projector 1 is oriented so that the projected fringes 2 are approximately perpendicular to the plane of the sensor 4 and fringe projector 1 and are approximately parallel to the sensing elements within the sensor 4. The image of the fringes on the object surface 3 is acquired by the sensor 4 and stored in an image processor 6 such that it may later be retrieved. The image may be shown on a cathode ray tube (CRT) 7 or other display means connected to the image processor 6, which CRT may also show a user interface whereby the user can interact with the image processor 6 using a mouse, keyboard, light-pen, or other input device 8 to select different operations or processes.
Examples are given below of how the phase of the fringe pattern acquired using, for example, the apparatus in Figure 1, may be determined using the aforementioned virtual phase-stepping method.
In one example the virtual fringe comprises a series of regularly spaced parallel black bars, where the fringe period is similar to the period of the projected fringes 2, but in any case is an integer greater than or equal to three pels (picture elements). By interfering the acquired and virtual fringe patterns, a moire fringe of the contour type will be apparent, although partially obscured by the black bars. Preferably, the spaces between the black bars are only one pel wide, in which case the data under the bars may be reconstructed using linear interpolation, but higher order interpolation schemes could also be employed. A minimum of three phase-shifted fringe contours are generated as above by repeatedly phase-stepping the virtual grating by one pel perpendicular to the acquired fringe pattern, from which the phase value at any pel position may be determined using synchronous detection.
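By way of illustration only, the following Python/numpy sketch follows the steps of this first example on a whole fringe image; it assumes the fringes run vertically so that each image row crosses them, and the function and variable names are illustrative rather than taken from the patent.

    import numpy as np

    def virtual_phase_step(fringe_image, m=3):
        # Virtual grating of integer period m pels: black bars with one-pel gaps.
        # Shifting the gaps one pel at a time gives m phase-stepped moire contours,
        # which are then fitted to a sine wave by synchronous detection.
        rows, cols = fringe_image.shape
        x = np.arange(cols)
        contours = []
        for step in range(m):
            keep = (x % m) == step                       # pels visible between the black bars
            contour = np.empty(fringe_image.shape, dtype=float)
            for r in range(rows):
                # reconstruct the data hidden under the bars by linear interpolation
                contour[r] = np.interp(x, x[keep], fringe_image[r, keep])
            contours.append(contour)
        stack = np.stack(contours)                       # shape (m, rows, cols)
        k = np.arange(m).reshape(-1, 1, 1)
        numerator = np.sum(stack * np.sin(2 * np.pi * k / m), axis=0)
        denominator = np.sum(stack * np.cos(2 * np.pi * k / m), axis=0)
        return np.arctan2(numerator, denominator)        # wrapped phase at every pel

With m equal to 3 this generates the minimum of three phase-shifted contours described above.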
A second example is as explained above, but where the virtual fringe pattern has a sinusoidal profile which is combined with the acquired fringe pattern with a mathematical operation, for example add, subtract, multiply, or divide, to generate a moire fringe of the contour type. In this case it will also be necessary to apply a filter, for example a one-dimensional (1-D) low-pass filter, to the result, before using synchronous detection. This approach has the advantage of using only conventional image processing techniques but is less accurate than the first example given above.
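A corresponding sketch for this second route, again illustrative only and assuming vertical fringes whose period in pels is close to the chosen value m, multiplies the acquired pattern by phase-shifted sinusoidal virtual fringes and low-pass filters each product before synchronous detection; it returns the wrapped phase up to a constant offset.

    import numpy as np

    def phase_by_sinusoidal_virtual_fringe(fringe_image, m=8):
        cols = fringe_image.shape[1]
        x = np.arange(cols)
        kernel = np.ones(m) / m                           # simple 1-D low-pass (moving average)
        numerator = np.zeros(fringe_image.shape)
        denominator = np.zeros(fringe_image.shape)
        for step in range(3):                             # three virtual phase steps of 2*pi/3
            delta = 2 * np.pi * step / 3
            virtual = np.sin(2 * np.pi * x / m + delta)   # sinusoidal virtual fringe
            product = fringe_image * virtual              # combine acquired and virtual patterns
            moire = np.apply_along_axis(
                lambda row: np.convolve(row, kernel, mode='same'), 1, product)
            numerator += moire * np.sin(delta)            # synchronous detection
            denominator += moire * np.cos(delta)
        return np.arctan2(numerator, denominator)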
A third example is similar to the two previous examples, but relies upon the observation that contour type moire fringes may be generated directly from the acquired fringe image by simply re-ordering the intensity data perpendicular to the pattern in a repetitive manner with a period equivalent to that of the acquired fringe. This is the fastest technique to process, but is the least accurate.
In practice it is not necessary to actually generate a virtual fringe pattern; instead the acquired fringe pattern may be analysed by the image processor, one line of data perpendicular to the fringes at a time, and the relevant processes, such as from the examples above, combined into a single function, such as that given in Appendix A, which returns a phase value for each pel using the technique given by the first example above. Many other virtual fringe profiles, and many other known temporal phase-measurement techniques, may also be used as the basis for virtual phase-stepping.
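The closed form given in Appendix A is not repeated here, but a per-line function in the same spirit (an illustrative sketch, not the patented equation) can fold the interpolation and the series summation together as follows, assuming the line of data runs perpendicular to the fringes:

    import numpy as np

    def phase_line(line, m=3):
        # For each virtual-grating offset s, keep every m-th pel, interpolate the rest,
        # and accumulate the Fourier sine and cosine sums of Appendix A.
        line = np.asarray(line, dtype=float)
        x = np.arange(line.size)
        numerator = np.zeros(line.size)
        denominator = np.zeros(line.size)
        for s in range(m):
            kept = (x % m) == s
            interpolated = np.interp(x, x[kept], line[kept])
            numerator += interpolated * np.sin(2 * np.pi * s / m)
            denominator += interpolated * np.cos(2 * np.pi * s / m)
        return np.arctan2(numerator, denominator)         # one phase value per pel

Applying phase_line to every row of the acquired image gives the same result as the image-wide sketch above.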
The phase image as determined above, or otherwise, is the basis for the following 3-D reconstruction, but the apparatus used, for example the apparatus in Figure 1 , must be calibrated to relate fringe numbers to distances in front of the camera.
An example of a calibration technique referred to earlier as direct fringe calibration is now explained by reference to Figure 2 which shows apparatus identical to Figure 1 but with the addition of a motion table 9 and controller 11. The motion table 9 is arranged to hold the object, and to have a linear axis of motion 10 preferably collinear with the optical axis of the sensor 4. The table 9 is moved repeatedly by known amounts along its axis of motion 10 by means of an indexer, controller, driver, or other electronic device 11 under the control of the image processor 6.
With reference to Figure 2, an object 3 with known shape is placed on the table 9, which table 9 is moved in preferably equal steps either towards or away from the sensor. The known object is preferably a flat surface positioned coplanar with the sensor image plane. At each position of the linear table 9, a fringe pattern of the object surface as illuminated by the fringe projector 1 is acquired by the sensor 4 and stored in the image processor 6, from which fringe pattern a phase image may be determined using virtual phase-stepping as above, or by some other means, which phase image is stored in the image processor 6 such that it may later be retrieved.
The phase images determined by the apparatus in Figure 2 are unwrapped to give relative fringe numbers using one of the aforementioned phase-unwrapping techniques. This process is applied to each phase image, and also to the series of phase images, this latter either by ensuring that the steps by which the table 9 is moved are less than half the separation of adjacent fringes in that same direction, or by providing the projected fringe pattern 2 with a mark or local intensity modulation whose centre can be identified in the acquired fringe pattern and which can be removed from that pattern before calculating the phase values. In this latter case, the relative fringe numbers of two points so found on different phase images are given by the distance between them perpendicular to the fringes, divided by the period of the virtual fringe pattern. By these means or otherwise, the relative fringe numbers of all the phase images are calculated and stored, along with the known 3-D co-ordinates of corresponding points on the object surface, in a CLUT (calibration look-up table), such that they may later be retrieved.
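For example, the per-line unwrapping and the mark-based indexing between phase images might be sketched, illustratively and using numpy's standard unwrap, as:

    import numpy as np

    def relative_fringe_numbers(wrapped_phase_line):
        # Remove the 2*pi jumps along a line perpendicular to the fringes and express
        # the result as a continuous relative fringe number (one fringe per 2*pi).
        return np.unwrap(wrapped_phase_line) / (2 * np.pi)

    def fringe_offset_between_images(x_mark_a, x_mark_b, virtual_period_pels):
        # Relative fringe-number offset between two phase images, from the positions of
        # the identified mark: distance perpendicular to the fringes divided by the period.
        return (x_mark_b - x_mark_a) / virtual_period_pels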
Several schemes may be employed to reduce the considerable size of the CLUT. For example, when the 3-D fringe geometry is parallel to the rows or columns of the phase image, then the fringes are identical along each perpendicular line. This approach will be inadequate when conventional inexpensive lenses are used. A second technique is to determine those sub-pixel positions in the phase image which represent relative fringe orders, and save them to a store, along with the 3-D co-ordinates as before. A third method is to represent the fringe geometry mathematically, using for example a quadratic, and find the parameters of the quadratic by least-squares fitting of the relative fringe orders found as above. A fourth and preferred method is to perform quadratic least-squares fitting of the series of relative fringe numbers at any common pel location.
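One way to realise the preferred fourth method is sketched below, under the assumption that at each pel the depth is fitted as a quadratic function of the relative fringe number; the patent leaves open which variable is fitted, so this choice is illustrative.

    import numpy as np

    def fit_clut_quadratics(fringe_numbers, z_positions):
        # fringe_numbers: (n_positions, rows, cols) relative fringe numbers from calibration
        # z_positions:    (n_positions,) known target depths
        n_pos, rows, cols = fringe_numbers.shape
        series = fringe_numbers.reshape(n_pos, -1)
        coeffs = np.empty((3, rows * cols))
        for p in range(rows * cols):
            coeffs[:, p] = np.polyfit(series[:, p], z_positions, 2)   # quadratic per pel
        return coeffs.reshape(3, rows, cols)                          # three coefficient images

Storing three coefficient images in place of the full table of fringe numbers and depths provides the reduction in CLUT size referred to above.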
Howsoever the CLUT has been developed, an unknown object surface placed within the measurement volume described by the shape and motion of the surface 3, may be digitised in 3-D by acquiring an image with the sensor 4 of the fringe 2 projected onto the object surface, which acquired fringe pattern is processed using virtual phase-stepping or some other phase-measurement technique to calculate a phase image, which phase image is unwrapped to determine relative fringe numbers, and which relative fringe numbers are indexed with those relative fringe numbers stored previously, for example by adding a mark to the projected fringe as before, to determine 3-D co-ordinates at specific pel locations in the phase image.
The details of this indexing process depend upon the shape of the known object surface, its orientation with respect to the sensor axis, and the structure of the CLUT, but in any case it is possible to recover the z co-ordinate of a point on an unknown surface, or distance along the sensor optical axis from an origin, given the relative fringe number at a corresponding pel location.
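With the quadratic form of the CLUT sketched above, the indexing step reduces to evaluating the stored per-pel polynomial at the measured relative fringe number; again this is an illustration of one possible CLUT structure, not the only one.

    import numpy as np

    def reconstruct_z(fringe_number_image, coeffs):
        # coeffs: the three per-pel coefficient images produced during calibration
        a, b, c = coeffs
        return a * fringe_number_image ** 2 + b * fringe_number_image + c   # z at each pel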
A specific example is now given for the calibration of x and y co-ordinates with reference to Figure 2, whereby a flat surface is positioned at 3 parallel to the sensor image plane, which flat surface is matte black except for a white rectangle of known size, preferably positioned such that its edges are parallel to the rows and columns of the sensor image plane and it nearly fills the field of view 5 of the sensor 4. The flat surface is moved in known, preferably equal, steps by the motion table 9 along its principal axis 10, and at each position of the table 9 an image is acquired of the white rectangle, whose edges are located in the image to sub-pixel resolution and least-squares fitted to their known dimensions to determine a series of horizontal and vertical scale factors at varying distances from the sensor. These scale factors are preferably best-fitted to an equation for a line, in which case it is only necessary to store six parameter values to represent both horizontal and vertical scale factors at any depth.
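An illustrative fit of these scale factors is sketched below; it assumes the rectangle's measured width and height in pels have already been found at each depth, and it stores two line parameters per axis rather than the six values mentioned above, so it is a simplification rather than the patented scheme.

    import numpy as np

    def fit_scale_factors(z_positions, widths_pels, heights_pels, rect_width, rect_height):
        # scale factors (object units per pel) measured at each known depth of the flat surface
        sx = rect_width / np.asarray(widths_pels, dtype=float)
        sy = rect_height / np.asarray(heights_pels, dtype=float)
        # best-fit each set of scale factors to a straight line in depth z
        line_x = np.polyfit(z_positions, sx, 1)       # slope and intercept, horizontal scale
        line_y = np.polyfit(z_positions, sy, 1)       # slope and intercept, vertical scale
        return line_x, line_y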
The calibration of x and y co-ordinates as above can be combined into a single operation with the aforementioned calibration of z co-ordinates. Alternatively the two calibration processes may be distinct, in which case a more complex target, for example a grid pattern, may be used to calibrate the x and y co-ordinates across the field of view, as well as through the depth of field, by determining the parameters affecting radial lens distortion.
A specific example is now given, with reference to Figure 2, whereby the image plane of the sensor 4 may be set up coplanar to a flat surface positioned at 3. In this example, the flat surface is perpendicular to the table axis of motion 10 and the sensor is mounted on, for example, a tilt-and-rotation stage such that it may be rotated around all three of its principal axes. The flat surface is further provided with a pattern of dark and light stripes, for example of square-wave intensity profile, which lines are preferably perpendicular to the table axis of motion 10. The sensor 4 is approximately positioned with its optical axis perpendicular to the flat surface, and an image is acquired by the sensor 4 of the lines on the flat surface, which image is stored in the image processor 6 such that it may later be retrieved. If the period of the lines or fringes on the flat surface is approximately equivalent to two pels in the acquired image then the fringes will interfere with the resolution of the sensor, generating a moire fringe pattern of more or less parallel and equispaced lines, depending upon the orientation of the sensor relative to the flat surface. Where the fringe period is approximately equivalent to three or more pels in the acquired image, then alternatively a phase image may be calculated from the acquired fringe pattern, for example by using virtual phase-stepping. In either case, the sensor may be iteratively rotated around its three axes until the phase or moire fringe distribution is everywhere even and parallel, in which case the sensor image plane and the flat surface are coplanar. An alternative but preferred technique is to use a chequered pattern, where adjacent rectangular tiles contain only vertical or horizontal lines, which pattern is converted to phase values using, for example, virtual phase-stepping respectively applied horizontally or vertically. These techniques may be further extended to operate automatically, whereby, iteratively, an image is acquired by the sensor 4 of the line pattern, from which is determined a moire fringe or a phase image as above, after which the sensor 4 is moved to minimise asymmetries in the moire fringe or phase image distribution.
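As an illustration of the automatic variant, one possible asymmetry measure (an assumption, since the patent does not prescribe a particular metric) is the spread of the local phase gradient over the image, which falls towards zero when the phase distribution is everywhere even and parallel:

    import numpy as np

    def coplanarity_error(phase_image):
        # Unwrap perpendicular to the stripes, then measure how uniform the local
        # fringe spacing is; the sensor is iteratively rotated to minimise this value.
        unwrapped = np.unwrap(phase_image, axis=1)
        gradient = np.diff(unwrapped, axis=1)
        return float(np.std(gradient))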
A specific example of a technique for non-contact 3-D digitising of discontinuous surfaces, previously referred to as multiple-fringe indexing, is now explained by reference to Figure 3, which apparatus is similar to that of Figure 2 but with the addition of a second fringe projector 12, placed near the sensor 4, which projects a second fringe pattern 13 onto the object surface positioned at 3, the relationship between said fringe projectors being unimportant so long as each results in a different 3-D fringe geometry, provided, for example, by the fringe projectors having different fringe rotation, period or magnification.
With reference to Figure 3, the two projectors 1, 12 are controlled by the image processor 6 via a controller or other device 14 such that the object surface positioned at 3 is illuminated by one or other projector, or alternatively both projectors are off. The system is preferably calibrated using the aforementioned direct fringe calibration technique, modified to now store two sets of relative fringe numbers in the CLUT, one for each of the projected fringe geometries, where both are referenced to the same 3-D co-ordinate system.
When an unknown object surface is placed within the measurement volume, again with reference to Figure 3, two fringe patterns are acquired by the sensor 4 of the object surface illuminated by fringe projectors 1,12, which fringe patterns are used to calculate phase images using, for example, virtual phase-stepping, which phase images are correlated with the CLUT at any pel location such that the correct fringe orders of each phase image are those which give most nearly the same z co-ordinate.
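The correlation at a single pel might be sketched as follows; this is illustrative only, with z_candidates_1 and z_candidates_2 standing for the depths that the CLUT associates with each possible fringe order of the two projectors at that pel.

    import numpy as np

    def correlate_fringe_orders(z_candidates_1, z_candidates_2, z_threshold):
        # Try every pair of fringe orders and keep the pair whose depths agree best.
        diff = np.abs(z_candidates_1[:, None] - z_candidates_2[None, :])
        k1, k2 = np.unravel_index(np.argmin(diff), diff.shape)
        if diff[k1, k2] > z_threshold:
            return None                                   # no agreement: mark the pel unreliable
        return k1, k2, 0.5 * (z_candidates_1[k1] + z_candidates_2[k2])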
When no correlation gives a similar z co-ordinate to within a given threshold, or when the phase at any particular pel is different from its neighbours by more than a second given threshold, that pel is marked as unreliable and is either isolated from further processing or is assigned a correct fringe order or phase value by considering the fringe order or phase value of a group of adjacent pels.
An example of apparatus for non-contact 3-D digitising, referred to as a 3-D vision probe, is presented here by reference to Figure 4, which shows a schematic layout of the probe in which two fringe projectors 1, 12 are symmetrically disposed either side of a sensor 4 so that the optical axes are coplanar. Standard lenses 15, 16 are interchangeable with other standard lenses of different magnification, and the fringe projectors 1, 12 are held by pivots 17 within a member or members 18 which also holds the sensor 4, such that the projectors 1, 12 may be rotated around the pivots to intersect the optical axis of the sensor 4 at different distances in front of the sensor 4 and may afterwards be fixed firmly in place.
With reference to Figure 5, each fringe projector may comprise, for example, a point source 21, a spherical reflector 22 and a condenser 23 whereby illumination is focused through a transmission grating of parallel dark and light stripes 24 into the aperture of the objective lens 25. In this case, it is possible to vary the diameter of light focused into the aperture to accommodate different lenses with different aperture diameters, provided, for example, by moving the condenser 23 along its optical axis. It is also of benefit if the grating 24 may be moved along the principal optical axis, in order that it may be focused for different lenses 25, and also rotated around the principal optical axis, in order to control and adjust the fringe period.
Figure 4 shows two projectors 1, 12 positioned either side of the sensor 4 such that all the optical axes are co-planar, but both projectors may be on the same side of the sensor, the probe may instead contain one fringe projector, or three or more projectors, and any projector may be disposed above or below the sensor.
With reference to Figure 4, the 3-D vision probe is provided with substantially the apparatus shown in Figure 3, where the two fringe projectors 1, 12 and sensor 4 of Figure 3 are replaced by the probe shown in Figure 4, by which means the aforementioned virtual phase-stepping, direct fringe calibration, multiple-fringe indexing, or other techniques, are effected.
With reference to Figure 4 it may be that the lenses 15,16 are motorised zoom lenses, and the rotation of the projector or projectors around the pivots 17 is similarly motorised, all under control of an image processor such as the item 6 in Figure 3. With appropriate control processes this provides a flexible and compact 3-D vision probe which can accommodate itself to changing or different circumstances.
The above described embodiments of the invention may be used with visible light. Alternative embodiments of the invention may be used with electromagnetic radiation of a wavelength less or greater than the spectrum of visible light.
The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings) and/or all of the steps of any method so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same or a similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstracts and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.
APPENDIX A
Virtual Phase Stepping Using Interpolated Virtual Fringes
In virtual phase-stepping a virtual grating of integer pixels pitch M is combined with a deformed fringe pattern to generate a second moire fringe pattern, and by then shifting the virtual grating perpendicular to the grating lines, further phase-shifted second moire fringe patterns are generated, which can be fitted to a sine wave to find the phase values.
When the virtual grating period is three pixels, then three different phase-maps are generated, providing three separate phase values for each pixel, each phase-shifted by 1/3 of a period. From these values, Fourier sine and cosine series can be formed:
I0(n)·sin(2π/3 · 0) + I1(n)·sin(2π/3 · 1) + I2(n)·sin(2π/3 · 2)
I0(n)·cos(2π/3 · 0) + I1(n)·cos(2π/3 · 1) + I2(n)·cos(2π/3 · 2)
Then the phase value at any pixel is given by the arctan of the ratio of the sine and cosine series, thus:
Φ(n) = arctan[ (I0(n)·sin(0) + I1(n)·sin(2π/3) + I2(n)·sin(4π/3)) / (I0(n)·cos(0) + I1(n)·cos(2π/3) + I2(n)·cos(4π/3)) ]
In an image processor (e.g. the image processor 6 of Figure 3), the interpolation and series summing can be accomplished simultaneously with the following general equation:
Φ(n) = arctan(...), where the full right-hand side, which folds the interpolation weights into the sine and cosine series sums, is given as an equation image in the original document.
Where n is the pixel number and m is the grating period.
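By way of illustration only, a general form consistent with the three-pixel case above (an assumption offered in place of the original equation image, not a reproduction of it) is:

Φ(n) = arctan[ Σ(k=0..m-1) Îk(n)·sin(2πk/m) / Σ(k=0..m-1) Îk(n)·cos(2πk/m) ]

where Îk(n) denotes the intensity at pel n obtained by linear interpolation of the samples visible through the virtual grating at offset k.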

Claims

1. A non-contact 3-D digitiser comprising:-
at least one projector having a source of electromagnetic radiation and means for collecting, modulating and focusing the radiation to cast a regular pattern of dark and light stripes onto the surface of a remote subject;
a detector incoφorating a focusing device and a plurality of detector elements sensitive to radiation projected by the projector and so disposed as to view a subject irradiated by such projected radiation;
means for determining the phase of radiation detected by the detector;
means for determining the period of radiation detected by the detector; and
means for determining the position of a point on the subject surface, in a 3-D co-ordinate system, from the phase and period of the radiation detected at a corresponding one of the detector elements.
2. A digitiser according to claim 1 , wherein said means for determining the phase of radiation detects the phase by the steps of:
establishing an integer sampling period N; sampling the detected radiation every Nth data entry from an origin and interpolating between the sampled values such that the resultant waveform is essentially sinusoidal; and
repeating this N-1 times, each time beginning one entry further from the origin, such that at any detector element the resultant series has a sinusoidal waveform which can be fitted to a sine wave to determine a phase value.
3. A digitiser according to claim 1 or 2, wherein the integer sampling period N is approximately the same as the period of the detected radiation.
4. A digitiser according to claim 1 , 2 or 3, further comprising a moveable target at which a subject to be measured may be disposed.
5. A digitiser according to claim 4, further comprising means for calibrating a volume by the following steps:
positioning a target of known surface shape in sight of the detector and projector(s), and determining the phase of any detected radiation;
repeatedly moving the target, each time a known distance and direction, but within sight of the detector and projector(s), and such that the phase determined at any detector element varies by less than one half-period between each successive target position; storing the phase at each element of the detector, at each position of the target, along with the known object depth co-ordinate of the corresponding point on the target, such that they may later be retrieved; and
determining the period at each detector element by sampling the stored phase from an origin.
6. A digitiser according to claim 5, including means for subsequently reconstructing an unknown object surface placed within the volume described by the motion of the target surface by the following steps:
determining the phase of detected radiation;
determining the period of the detected radiation by sampling the phase from the same origin as that used for calibration;
using the period of the detected radiation to index the stored values to determine the depth co-ordinate of any point on the unknown surface corresponding to a detector element.
7. A digitiser according to any of the preceding claims, comprising at least two said projectors, each with a different 3-D fringe geometry.
8. A digitiser according to claim 7, wherein said geometries are cross-indexed to determine the depth co-ordinates of a point on an unknown surface by the following steps: the projectors irradiate in sequence the same part of an unknown surface which is in sight of the detector so that respective series of moire fringes apparent on the surface are different in orientation and/or period;
in a calibration step, the relationship between phase, period and depth co-ordinates are determined for each 3-D fringe geometry;
in a reconstruction step, the phase of the detected radiation at any detector element is determined for each fringe geometry, and used to index the stored phase values at that detector element such that a series of possible object depth co-ordinates are determined for each fringe geometry, where the correct depth co-ordinate in each case is that which minimises the difference between them.
9. A digitiser according to claim 8, wherein phase values from adjacent detector elements are grouped together.
10. A digitiser according to any of the preceding claims, with at least three said projectors, each with a different 3-D fringe geometry, wherein each fringe projector has a successively smaller period, and/or different fringe projectors have significantly different orientation.
11. A digitiser according to claim 6 or any of claims 7 to 10 as appendant thereto, wherein:
the phase values determined during calibration are sampled from an origin to determine the period of the detected radiation at each detector element and target position; where the period is exactly an integer, the corresponding object depth co-ordinate is determined such that for a unique integer period, a series of depth co-ordinates may be fitted to an appropriate mathematical function, the parameters of which are stored, along with the fringe integer, such that they may later be retrieved;
or the series of depth co-ordinates at any detector element are fitted to an appropriate mathematical function, the parameters of which are stored such that they may later be retrieved; and
when reconstructing an unknown surface, the phase and period of the detected radiation are used as an index to the stored values to determine the depth co-ordinates of a point on an unknown surface corresponding to a detector element.
12. A digitiser according to claim 3 or any of claims 4 to 11 as appendant thereto, wherein the target is a substantially flat target coplanar to the detector, and the target may be moved in equal steps perpendicular to the detector.
13. A digitiser according to claim 12, wherein an omni-directional mounting is provided for any detector, and the detector may be established coplanar to the target by the following steps:
applying a periodic waveform to the target which is in sight of the detector;
determining the phase of detected radiation; adjusting the detector orientation and re-determining the phase; and
repeating until the phase is equally distributed across the detector at which time the reference plane and detector are coplanar.
14. A digitiser according to any of the preceding claims, with adjacent detector and projector(s) combined into a flexible compact unit such that the or each projector may be adjusted so that the optical axes of the projector and detector coincide at different distances from the detector, and wherein the magnification of the focusing device and the fringe orientation can be varied.
15. A digitiser according to claim 14, wherein a substantially fixed relationship is maintained between the image planes of the detector and the or each projector throughout a wide temperature range; and/or
a substantially linear relationship is maintained between the image planes of the detector and the or each projector throughout a wide temperature range, which relationship is compensated for by calibration; and/or
the detector is cooled.
16. A digitiser according to any of the preceding claims, wherein means are provided to carry out a semi- or fully-automatic calibration technique; wherein the detector may be positioned coplanar to a substantially flat reference target; wherein the detector focusing device may be adjusted to give optimum magnification, focus and image contrast by viewing the target; wherein the orientation of the or each projector may be established by viewing the target and its magnification, focus and image contrast optimised; and wherein the magnification, fringe orientation and period of the or each projector may be established according to a desired calibrated volume size and stand-off.
17. A digitiser according to any of the preceding claims, comprising either a single probe unit comprising detector and projector(s) and a multi-axis manipulator to hold the subject or the probe; or multiple probe units disposed around the subject, whereby different views of the subject surface may be presented to the detector(s).
18. A digitiser according to claim 17, including means for reconstructing a sample surface semi- or fully-automatically by the following steps:
placing the sample in sight of the or each probe and reconstructing the visible surface at those detector elements where the phase is considered to be reliable;
moving the sample or probe(s) a known distance along or around any axis of movement, and reconstructing the visible surface at those detector elements where the phase is considered to be reliable;
fully traversing the sample surface by initial relatively coarse movements, in order to determine an approximate centre of the sample from 3-D co-ordinates thus far reconstructed;
subsequently moving the sample or probe(s), recalculating the sample centre and using this origin as part of a spherical shape closing process, which determines how best the probe(s) or sample may be moved in order to acquire missing data;
repeating this process until full closure is achieved according to predetermined criteria; and
optionally, where the or each focusing device has variable magnification, varying this parameter to digitise gross or fine surface detail.
19. A method of digitising an object surface by means of a digitiser according to any of the preceding claims, the method including the steps of:
casting a regular pattern of dark and light stripes onto the surface of a remote subject by the or each said projector;
viewing a subject irradiated by such projected radiation by the or each said detector;
determining the phase of radiation detected by the detector;
determining the period of radiation detected by the detector; and
determining the position of a point on the subject surface, in a 3-D co-ordinate system, from the phase and period of the radiation detected at a corresponding one of the detector elements.
PCT/GB1995/002431 1994-10-13 1995-10-13 Method and apparatus for three-dimensional digitising of object surfaces WO1996012160A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP95934704A EP0786072A1 (en) 1994-10-13 1995-10-13 Method and apparatus for three-dimensional digitising of object surfaces
AU37022/95A AU3702295A (en) 1994-10-13 1995-10-13 Method and apparatus for three-dimensional digitising of object surfaces

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9420638A GB9420638D0 (en) 1994-10-13 1994-10-13 Three-dimensional digitiser
GB9420638.0 1994-10-13

Publications (1)

Publication Number Publication Date
WO1996012160A1 true WO1996012160A1 (en) 1996-04-25

Family

ID=10762771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1995/002431 WO1996012160A1 (en) 1994-10-13 1995-10-13 Method and apparatus for three-dimensional digitising of object surfaces

Country Status (4)

Country Link
EP (1) EP0786072A1 (en)
AU (1) AU3702295A (en)
GB (1) GB9420638D0 (en)
WO (1) WO1996012160A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2204397A (en) * 1987-04-30 1988-11-09 Eastman Kodak Co Digital moire profilometry
DE4011407A1 (en) * 1990-04-09 1991-10-10 Steinbichler Hans Quantitative absolute measurer for three=dimensional coordinates - contains projector of test pattern, sensor and displacement device for surface evaluation of test object
DE4011406A1 (en) * 1990-04-09 1992-03-05 Steinbichler Hans Three=dimensional coordinate measurer for sample - uses projector with grating to project strip pattern for quantitative absolute measuring by means of moire technique
DE4119744A1 (en) * 1991-06-15 1992-12-17 Zeiss Carl Fa Evaluating phase angles of periodic brightness pattern, esp. for topography - involves recording several pasteurising camera and evaluating phases at individual image points for at least three patterns in two step method
DE4136428A1 (en) * 1991-11-05 1993-05-06 Henning Dr. 7440 Nuertingen De Wolf Phase-corrected Moire pattern generation with electronic grating - applying e.g. iterative least-squares fit to achieve phase constancy of fringes in real=time processing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
UEDA T: "Real time contour line generation for measuring 3-D shapes", REVIEW OF THE ELECTRICAL COMMUNICATION LABORATORIES, SEPT.-OCT. 1979, JAPAN, vol. 27, no. 9-10, ISSN 0029-067X, pages 876 - 885 *
XIAN-YU SU ET AL: "PHASE-STEPPING GRATING PROFILOMETRY: UTILIZATION OF INTENSITY MODULATION ANALYSIS IN COMPLEX OBJECTS EVALUATION", OPTICS COMMUNICATIONS, vol. 98, no. 1 / 02 / 03, 15 April 1993 (1993-04-15), pages 141 - 150, XP000349117 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999028704A1 (en) * 1997-12-02 1999-06-10 Universita' Degli Studi Di Brescia Process for the measurement of three dimensional (3d) profiles by means of structured light projection
EP0947802A2 (en) * 1998-04-04 1999-10-06 Joh. & Ernst Link GmbH & Co. KG Measurement arrangement for test pieces dimensions detection,preferably hollow bodies,in particular for bores in workpieces,as well as measurement method for such dimensions
EP0947802A3 (en) * 1998-04-04 2000-08-16 Joh. & Ernst Link GmbH & Co. KG Measurement arrangement for test pieces dimensions detection,preferably hollow bodies,in particular for bores in workpieces,as well as measurement method for such dimensions
US6100990A (en) * 1999-06-14 2000-08-08 Ford Motor Company Method and apparatus for determining reflective optical quality using gray-scale patterns
EP1065498A2 (en) * 1999-06-14 2001-01-03 Ford Motor Company Method and apparatus for determining optical quality
EP1065498A3 (en) * 1999-06-14 2001-03-21 Ford Motor Company Method and apparatus for determining optical quality
US6208412B1 (en) 1999-06-14 2001-03-27 Visteon Global Technologies, Inc. Method and apparatus for determining optical quality
US7595892B2 (en) 2005-04-06 2009-09-29 Dimensional Photonics International, Inc. Multiple channel interferometric surface contour measurement system
WO2006107955A1 (en) 2005-04-06 2006-10-12 Dimensional Photonics International, Inc. Multiple channel interferometric surface contour measurement system
WO2007061632A2 (en) * 2005-11-09 2007-05-31 Geometric Informatics, Inc. Method and apparatus for absolute-coordinate three-dimensional surface imaging
WO2007061632A3 (en) * 2005-11-09 2007-08-02 Geometric Informatics Inc Method and apparatus for absolute-coordinate three-dimensional surface imaging
US7929751B2 (en) 2005-11-09 2011-04-19 Gi, Llc Method and apparatus for absolute-coordinate three-dimensional surface imaging
EP2175233A1 (en) * 2008-10-13 2010-04-14 Koh Young Technology Inc. Apparatus and method for measuring three-dimensional shape by using multi-wavelength patterns
US8325350B2 (en) 2008-10-13 2012-12-04 Koh Young Technology Inc. Apparatus and method for measuring three-dimensional shape by using multi-wavelength
CN102589475A (en) * 2009-05-27 2012-07-18 株式会社高永科技 Three dimensional shape measurement method
US8878929B2 (en) 2009-05-27 2014-11-04 Koh Young Technology Inc. Three dimensional shape measurement apparatus and method
CN102538681A (en) * 2010-11-19 2012-07-04 株式会社高永科技 Method of inspecting a substrate
US8730464B2 (en) 2010-11-19 2014-05-20 Koh Young Technology Inc. Method of inspecting a substrate
DE102011086467B4 (en) 2010-11-19 2018-03-29 Koh Young Technology Inc. METHOD FOR INVESTIGATING A SUBSTRATE
DE102013104733B4 (en) * 2012-05-10 2015-04-09 Cognex Corp. LASER MEASUREMENT UNIT FOR A VISUAL SYSTEM CAMERA
WO2021095596A1 (en) * 2019-11-14 2021-05-20 株式会社 安永 Three-dimensional measuring device and three-dimensional measuring method
CN111174731A (en) * 2020-02-24 2020-05-19 五邑大学 Color segmentation based double-stripe projection phase unwrapping method and device
CN113377865A (en) * 2021-05-25 2021-09-10 成都飞机工业(集团)有限责任公司 Signal synchronization method of airplane large-range surface quality detection system

Also Published As

Publication number Publication date
AU3702295A (en) 1996-05-06
GB9420638D0 (en) 1994-11-30
EP0786072A1 (en) 1997-07-30

Similar Documents

Publication Publication Date Title
EP2183544B1 (en) Non-contact measurement apparatus and method
Lenz et al. Techniques for calibration of the scale factor and image center for high accuracy 3D machine vision metrology
Reid et al. Absolute and comparative measurements of three-dimensional shape by phase measuring moiré topography
Carrihill et al. Experiments with the intensity ratio depth sensor
Heikkila Geometric camera calibration using circular control points
US5307151A (en) Method and apparatus for three-dimensional optical measurement of object surfaces
US7136170B2 (en) Method and device for determining the spatial co-ordinates of an object
JP5757950B2 (en) Non-contact object measurement
Davis et al. A laser range scanner designed for minimum calibration complexity
US20100046005A1 (en) Electrostatice chuck with anti-reflective coating, measuring method and use of said chuck
JP2001012925A (en) Three-dimensional shape measurement method and device and record medium
CA2253085A1 (en) Methods and system for measuring three dimensional spatial coordinates and for external camera calibration necessary for that measurement
EP0786072A1 (en) Method and apparatus for three-dimensional digitising of object surfaces
WO1998005157A2 (en) High accuracy calibration for 3d scanning and measuring systems
US4878247A (en) Method for the photogrammetrical pick up of an object with the aid of at least one opto-electric solid-state surface sensor
KR101566129B1 (en) Moire Technique- based Measurement of the 3-Dimension Profile of a Specimen and its Implementation with Line-Scan Camera
CN112985258A (en) Calibration method and measurement method of three-dimensional measurement system
Jovanović et al. Accuracy assessment of structured-light based industrial optical scanner
CN107835931B (en) Method for monitoring linear dimension of three-dimensional entity
Sinnreich et al. Optical 3D tube measurement system for quality control in industry
Kowarschik et al. Adaptive optical 3D measurement with structured light
Kujawinska et al. Automatic fringe pattern analysis for holographic measurement of transient event
Ariyaeeinia Calibration of an active stereoscopic imaging system
Chang et al. Electronic fringe projection for profiling large surfaces
Amir et al. Three-dimensional line-scan intensity ratio sensing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TT UA UG US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE MW SD SZ UG AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1995934704

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1995934704

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWW Wipo information: withdrawn in national office

Ref document number: 1995934704

Country of ref document: EP