WO1998018117A2 - Machine vision calibration targets and methods of determining their location and orientation in an image - Google Patents

Machine vision calibration targets and methods of determining their location and orientation in an image

Info

Publication number
WO1998018117A2
WO1998018117A2 (PCT/US1997/018268)
Authority
WO
WIPO (PCT)
Prior art keywords
image, edge, target, edges, tool
Application number
PCT/US1997/018268
Other languages
French (fr)
Other versions
WO1998018117A3 (en)
Inventor
David J. Michael
Aaron S. Wallack
Original Assignee
Cognex Corporation
Application filed by Cognex Corporation filed Critical Cognex Corporation
Priority to JP51943098A priority Critical patent/JP2001524228A/en
Priority to EP97910863A priority patent/EP0883857A2/en
Publication of WO1998018117A2 publication Critical patent/WO1998018117A2/en
Publication of WO1998018117A3 publication Critical patent/WO1998018117A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the invention pertains to machine vision and, more particularly, to calibration targets and methods for determining their location and orientation in an image.
  • Machine vision refers to the automated analysis of an image to determine characteristics of objects and other features shown in the image. It is often employed in automated manufacturing lines, where images of components are analyzed to determine placement and alignment prior to assembly. Machine vision is also used for quality assurance. For example, in the pharmaceutical and food packing industries, images of packages are analyzed to insure that product labels, lot numbers, "freshness" dates, and the like, are properly positioned and legible.
  • an object whose image is to be analyzed may include a calibration target.
  • the target facilitates determining the orientation and position of the object with respect to other features in the image. It also facilitates correlating coordinate positions in the image with those in the "real world," e.g., coordinate positions of a motion stage or conveyor belt on which the object is placed.
  • a calibration target can also be used to facilitate determining the position and orientation of the camera with respect to the real world, as well as to facilitate determining the camera and lens parameters such as pixel size and lens distortion.
  • the prior art suggests the use of arrays of dots, bulls-eyes of concentric circles, and parallel stripes as calibration targets. Many of these targets have characteristics that make it difficult to find their centers and orientations. This typically results from lack of clarity when the targets and, particularly, their borders are imaged. It also results from discrepancies in conventional machine vision techniques used to analyze such images.
  • the edges of a cross-shaped target may be imprecisely defined in an image, leading a machine vision analysis system to wrongly interpret the location of those edges and, hence, to misjudge the mark's center by a fraction of a pixel or more.
  • a localized defect in a camera lens may cause a circular calibration mark to appear as an oval, thereby, causing the system to misjudge the image's true aspect ratio.
  • An object of this invention is to provide improved calibration targets and methods for machine vision analysis thereof.
  • a related object is to provide calibration targets and analysis methods reliable at a wide range of magnifications.
  • a further object is to provide such methods as can be readily implemented on conventional digital data processors or other conventional machine vision analysis equipment.
  • Yet still another object of the invention is to provide such methods that can rapidly analyze images of calibration targets without undue consumption of resources.
  • the foregoing objects are among those attained by the invention, which provides in one aspect a machine vision method for analysis of a calibration target of the type having two or more regions, each having a different "imageable characteristic" (e.g., a different color, contrast, or brightness) from its neighboring region(s).
  • Each region has at least two edges, referred to as "adjoining edges," that are linear and that are directed toward and, optionally, meet at a reference point (e.g., the center of the target or some other location of interest).
  • the method includes generating an image of the target, identifying in the image features corresponding to the adjoining edges, and determining the orientation and/or position of the target from those edges.
  • the invention provides a method as described above for analyzing a target of the type that includes four regions, where the adjoining edges of each region are perpendicular to one another, and in which each region in the target has a different imageable characteristic from its edge-wise neighbor.
  • the edges of those regions can meet, for example, at the center of the target, as in the case of a four-square checkerboard.
  • the invention provides a method as described above for determining an orientation of the target as a function of the angle of the edges identified in the image and for determining the location of the reference point as an intersection of lines fitted to those edges.
  • the invention provides a method of determining the orientation of a target in an image by applying a Sobel edge tool to the image to generate a Sobel angle image, and by generating an angle histogram from that angle image.
  • the orientation is determined by applying a Hough line tool to the image and determining the predominant angle of the edges identified by that tool.
  • one aspect of the invention calls for locating the adjoining edges by applying a caliper vision tool to the image, beginning at an approximate location of the reference point. That approximate location of the reference point can itself be determined by applying a Hough line vision tool to the image in order to find lines approximating the adjoining edges and by determining an intersection of those lines. Alternatively, the approximate location of the reference point can be determined by performing a binary or grey scale correlation to find where a template representing the edges most closely matches the image.
  • the approximate location of the reference point is determined by applying a projection vision tool to the image along each of the axes with which the adjoining edges align.
  • a first difference operator vision tool and a peak detector vision tool are applied to the output of the projection tool (i.e., to the projection) in order to find the approximate location of the edges.
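  • The following is a minimal sketch (in Python with NumPy, which the patent itself does not use) of the projection / first-difference / peak-detection strategy just described. It assumes a grayscale image whose adjoining edges are roughly axis-aligned and whose regions have distinct mean grey levels; the function and parameter names are illustrative and not part of any commercial tool.

```python
import numpy as np

def approximate_reference_point(image):
    """Estimate the reference point by projecting the image onto each axis,
    applying a first-difference operator to each projection, and taking the
    peak of the absolute difference as the approximate edge position.

    Rough stand-in for the projection / first-difference / peak-detector
    vision tools named above. Assumes the adjoining edges are roughly aligned
    with the image axes and that neighboring regions have distinct mean grey
    levels (for a perfectly balanced black/white checkerboard a plain sum
    projection is flat, so an edge-magnitude image would be projected instead).
    """
    img = image.astype(float)
    col_projection = img.sum(axis=0)            # one value per column
    row_projection = img.sum(axis=1)            # one value per row
    col_diff = np.abs(np.diff(col_projection))  # first difference along x
    row_diff = np.abs(np.diff(row_projection))  # first difference along y
    x_edge = int(np.argmax(col_diff))           # peak = strongest step
    y_edge = int(np.argmax(row_diff))
    return x_edge, y_edge

# Synthetic target with four regions of distinct grey levels meeting at (64, 64).
target = np.zeros((128, 128))
target[:64, 64:] = 255
target[64:, :64] = 200
target[64:, 64:] = 50
print(approximate_reference_point(target))     # -> (63, 63), adjacent to the true center
```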
  • the invention has wide application in industry and research applications. It facilitates the calibration of images by permitting accurate determination of target location and orientation, regardless of magnification.
  • an object bearing a target can be imaged by multiple cameras during the assembly process, with accurate determinations of location and orientation made from each such image.
  • Figures 1A - 1C depict calibration targets according to the invention;
  • Figure 1D depicts the effect of rotation on the target depicted in Figure 1B;
  • Figure 2 depicts an object according to the invention incorporating a calibration target of the type depicted in Figure 1B;
  • Figure 3 depicts a machine vision system according to the invention for determining the reference point and orientation of a calibration target
  • Figures 4 and 5 depict a method according to the invention for interpreting an image of a calibration target to determine a reference point and orientation thereof;
  • Figure 6 illustrates the magnification invariance of a target according to the invention.
  • Figures 1A - 1C depict calibration targets according to the invention.
  • Figure 1A depicts a target 10 according to the invention having three regions 12, 14, 16.
  • Each region is bounded by at least two linear edges that are oriented toward a reference location or reference point 18 on the target.
  • region 12 is bounded by edges 20, 24;
  • region 14 is bounded by edges 20, 22; and
  • region 16 is bounded by edges 22, 24.
  • the edges are shared by adjoining regions and, hence, are referred to below as “adjoining edges.”
  • region 12 shares edge 20 with region 14; region 14 shares edge 22 with region 16; and region 16 shares edge 24 with region 12.
  • the reference point 18 is at the center of target 10, though those skilled in the art will appreciate that the reference point can be positioned elsewhere.
  • an "imageable characteristic” is a characteristic of a region as imaged by a machine vision system (e.g., of the type shown in Figure 3) and, particularly, as imaged by an image capture device used by such a system.
  • region 12 has the characteristic of being colored black; region 14, white; and region 16, gray.
  • imageable characteristics useful with conventional machine vision systems, which typically utilize image capture devices operational in the visual spectrum, include contrast, brightness, and stippling.
  • an imageable characteristic is temperature.
  • an imageable characteristic is emitted radiation intensity or frequency.
  • edges 20, 22, 24 comprise straight linear segments. Those edges are implicitly defined as the borders between regions that, themselves, have different imageable characteristics. Thus, for example, edge 20 is a straight linear segment defined by the border between black region 12 and white region 14. Likewise, edge 24 is defined by the border between black region 12 and gray region 16. Further, edge 22 is defined by the border between white region 14 and grey region 16.
  • Figure 1B depicts a calibration target 30 according to the invention having four rectangular (and, more particularly, square) regions 32, 34, 36, 38.
  • each region is bounded by at least two linear edges that are oriented toward a reference point 40 at the center of the target.
  • region 32 is bounded by edges 42, 44; region 34 is bounded by edges 42, 46; and so forth.
  • edges 42, 44 are oriented toward a reference point 40 at the center of the target.
  • these edges are shared by adjoining regions.
  • region 32 shares edge 42 with region 34, and so forth.
  • Each region in target 30 has a different imageable characteristic from its edge-wise neighbor. Hence, regions 32 and 36 are white, while their edge-wise adjoining neighbors 34, 38 are black.
  • Figure 1C depicts a calibration target 50 according to the invention having five regions 52, 54, 56, 58, 60, each having two linear edges directed toward a reference point 62.
  • the adjoining regions are of differing contrast, thereby, defining edges at their common borders, as illustrated.
  • although the edges separating the regions 52 - 60 of target 50 are directed toward the reference point 62, they do not meet at that location.
  • no marker or other imageable element is provided at reference point 62.
  • Figure 2 depicts an object according to the invention for use in machine vision imaging, detection, and/or manipulation having a calibration target according to the invention coupled thereto.
  • the object is an integrated circuit chip 70 having coupled to the casing thereof a calibration target 72 of the type shown in Figure 1B.
  • targets according to the invention can likewise be coupled to the object 70.
  • the targets can be coupled to the object by any known means. For example, they can be molded onto, etched into, or printed on the surface of the object.
  • decals embodying the targets can be glued, screwed or otherwise affixed to the object
  • calibration plates incorporating the targets can be placed on the object and held in place by friction
  • the object can be any other object to which a target can be coupled, such as a printed circuit board, electrical component, mechanical part, container, bottle, automotive part, paper goods, etc.
  • Figure 3 depicts a machine vision system 80 according to the invention for determining the reference point and orientation of an object 82 having coupled thereto a calibration target 84 according to the invention and, particularly, a four-region target of the type shown in Figure IB.
  • the system 80 includes an image capture device 86 that generates an image of a scene including object 82.
  • while the device may be responsive to the visual spectrum, e.g., a conventional video camera or scanner, it may also be responsive to emissions (or reflections) in other spectra, e.g., infrared, gamma-ray, etc.
  • Digital image data (or pixels) generated by the capturing device 86 represent, in the conventional manner, the image intensity (e.g., contrast, color, brightness) of each point in the field of view of the capturing device.
  • That digital image data is transmitted from capturing device 86 via a communications path 88 to an image analysis system 90.
  • This can be a conventional digital data processor, or a vision processing system of the type commercially available from the assignee hereof, Cognex Corporation, as programmed in accord with the teachings hereof to determine the reference point and orientation of a target image.
  • the image analysis system 90 may have one or more central processing units 92, main memory 94, input-output system 96, and disk drive (or other mass storage device) 98, all of the conventional type.
  • the system 90 and, more particularly, central processing unit 92, is configured by programming instructions according to teachings hereof for operation as illustrated in Figure 4 and described below.
  • Those skilled in the art will appreciate that, in addition to implementation on a programmable digital data processor, the methods and apparatus taught herein can be implemented in special purpose hardware.
  • Referring to Figure 4, there is shown a machine methodology according to the invention for interpreting an image of a target 84 to determine its reference point and orientation.
  • the discussion that follows is particularly directed to identifying a four-region target of the type shown in Figure 1B.
  • Those skilled in the art will appreciate that these teachings can be readily applied to finding targets according to the invention, as well as to other targets having detectable linear edges that are oriented toward a reference location or reference point on the target, e.g., a prior art cross-shaped target.
  • linear edges are referred to as "adjoining edges," regardless of whether they are from calibration targets according to the invention or from prior art calibration targets.
  • an image of the target 84 (or of the target 84 and object 82) is generated, e.g., using image capture device 86, and input for machine vision analysis as discussed below.
  • the image can be generated real time, retrieved from a storage device (such as storage device 98), or received from any other source.
  • the method estimates the orientation of the target in the image using any of many alternative strategies. For example, as shown in step 102, the method determines the orientation by applying a conventional Hough line vision tool that finds the angle of edges discernable in the image. In instances where the target occupies the entire image, those lines will necessarily correspond to the adjoining edges. Where, on the other hand, the target occupies only a portion of the image, extraneous edges (e.g., from other targets) may be evident in the output of that tool. Although those extraneous edges can generally be ignored, in instances where they skew the results, the image can be windowed so that the Hough vision tool is applied only to that portion that contains the target. Once the angles of the lines have been determined by the Hough line tool, the orientation of the image is determined from the predominant ones of those angles. Alternatively, the angle of the image can be determined by taking a histogram of the angles.
  • the Hough vision tool used in step 102 may be of the conventional type known and commercially available for finding the angle of lines in an image.
  • a preferred such tool is the Cognex Line Finder, commercially available from the Assignee hereof, Cognex Corporation.
  • An alternative to using a Hough vision tool is shown in step 106.
  • the illustrated method determines the orientation of the target by applying a Sobel edge tool to the image to find the adjoining edges. Particularly, that tool generates a Sobel angle image that reveals the direction of edges in the image.
  • where the target occupies the entire image, the adjoining edges will be the only ones discerned by the Sobel edge tool.
  • where the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
  • the Sobel edge tool may be of the conventional type known and commercially available for finding lines in an image.
  • a preferred such tool is the Cognex Edge Detection tool, commercially available from the Assignee hereof, Cognex Corporation.
  • the orientation of the target in the image is determined by generating a histogram of the edge angle information; see step 108. From that histogram, the target orientation can be determined by taking a one-dimensional correlation of that histogram with respect to a template histogram of a target oriented at 0°. Where a Sobel magnitude image is generated, in addition to the Sobel angle image, such a histogram can be generated by counting the number of edges greater than a threshold length at each orientation.
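  • As an illustration of the angle-histogram approach, the following sketch (Python/NumPy, not the Cognex tools) builds an edge-angle histogram from plain image gradients, keeps only pixels above an arbitrary magnitude threshold, and estimates orientation as the circular shift that best correlates the histogram with a template histogram of a 0° target. All names, bin counts, and thresholds are illustrative assumptions.

```python
import numpy as np

def angle_histogram(image, bins=90, magnitude_threshold=10.0):
    """Histogram of edge directions, counting only pixels whose edge
    magnitude exceeds a threshold (a stand-in for counting edges greater
    than a threshold at each orientation). Plain image gradients are used
    here in place of a Sobel angle/magnitude image."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0
    strong = magnitude > magnitude_threshold
    hist, _ = np.histogram(angle[strong], bins=bins, range=(0.0, 360.0))
    return hist

def estimate_orientation(image, template_hist, bins=90):
    """Target orientation as the circular shift that best aligns the image's
    angle histogram with a template histogram of the target at 0 degrees."""
    hist = angle_histogram(image, bins=bins)
    scores = [np.dot(np.roll(template_hist, shift), hist) for shift in range(bins)]
    return int(np.argmax(scores)) * (360.0 / bins)

# Usage sketch:
# template_hist = angle_histogram(image_of_target_at_zero_degrees)
# theta = estimate_orientation(captured_image, template_hist)
```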
  • the method contemplates obtaining the angle of orientation of the target from the user (or operator). To this end, the user may enter angle orientation information via a keyboard or other input device coupled with digital data processor 90.
  • the method determines the location, i.e., coordinates, of the target reference point in the image.
  • the method can apply a Hough vision tool, as described above, to find the angle of lines discernable in the image.
  • a conventional Hough vision tool determines, in addition to the angle of lines in an image, the distance of each line, e.g., from a central pixel. As above, where the target occupies the entire image, those lines will be the only ones discerned by the tool. Where, on the other hand, the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
  • the Hough vision tool used in step 104 may be of the conventional type known and commercially available for finding the angle and position of lines in an image.
  • a preferred such tool is the Cognex Line Finder, commercially available from the Assignee hereof, Cognex Corporation.
  • steps 102 and 110 can be combined, such that a single application of the Hough vision tool provides sufficient information from which to determine both the orientation of the target in the image and its reference point.
  • the method can apply a projection vision tool to the image in order to find the position of the lines discernable in the image; see, step 112.
  • the projection tool which maps the two-dimensional image of the target into a one-dimensional image, is applied along the axes defined by the edges in the image.
  • the location of the edges can be discerned by finding the peaks in the first derivatives of each of those projections.
  • where the target occupies the entire image, those lines will be the only lines discernable in the image.
  • where the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
  • the projection vision tool used in step 112 may be of the conventional type known and commercially available for mapping a two-dimensional image of the target into a one-dimensional image.
  • a preferred such tool is that provided with the Cognex Caliper tool commercially available from the Assignee hereof, Cognex Corporation.
  • in step 114, the method uses the information generated in steps 110 and 112 to compute the location of the reference point, particularly, as the intersection of the lines found in those steps.
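  • A minimal sketch of computing the reference point as an intersection of lines, assuming each line is reported (as by a Hough-style line finder) in normal form x*cos(theta) + y*sin(theta) = rho; this is an illustration, not the patented implementation, and the function names are invented for the example.

```python
import numpy as np

def intersect_lines(thetas_deg, rhos):
    """Least-squares intersection of lines given in normal form
    x*cos(theta) + y*sin(theta) = rho (angle plus signed distance from the
    image origin or a central pixel, as reported by a Hough-style line finder).

    With exactly two perpendicular lines this reduces to the ordinary
    intersection; with more lines it returns the point minimizing the sum of
    squared residuals over all of them."""
    thetas = np.radians(np.asarray(thetas_deg, dtype=float))
    A = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # one row per line
    b = np.asarray(rhos, dtype=float)
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point   # (x, y) of the estimated reference point

# Example: a vertical line 64 pixels from the origin and a horizontal line
# 64 pixels from the origin intersect at the reference point (64, 64).
print(intersect_lines([0.0, 90.0], [64.0, 64.0]))   # -> [64. 64.]
```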
  • the method can determine the location of the reference point by performing a binary or grey scale correlation on the image; see step 116.
  • the method uses, as a template, a pattern matching the expected arrangement of the sought-after edges, to wit, a cross-shaped pattern in the case of a target of the type shown in Figure 1B.
  • the use of correlation vision tools for this purpose is well known in the art.
  • the template for such an operation is preferably generated artificially, although it can be generated from prior images of similar targets.
  • the method can apply a grey-scale image registration using the sum of absolute differences metric between the image and a template; see step 118.
  • the method uses, as a template, a pattern matching the expected arrangement of the sought-after edges, to wit, a cross-shaped pattern in the case of a target of the type shown in Figure 1B.
  • the template for such an operation is preferably generated artificially, although it can be generated from prior images of similar targets.
  • a preferred grey-scale image registration tool is disclosed in United States Patent No. 5,548,326, the teachings of which are incorporated herein by reference.
  • steps 110 - 118 can be used to determine the approximate location of the reference point of the target in the image
  • the method utilizes optional steps 120 and 122 to refine that estimate. These two steps are invoked one or more times (if at all) in order to make that refinement.
  • in step 120, the method applies a conventional caliper vision tool to find points in the image that define the adjoining edges of the regions.
  • in step 120, the method applies calipers at fifty points along each edge (though those skilled in the art will appreciate that other numbers of points can be used), beginning with points closest to the estimate of the reference point, as discerned in steps 110 - 118.
  • the calipers are preferably applied a small distance away from the actual estimate of the reference point to avoid skewing the analysis due to possible misprinting of the target at that point, a missing pattern at that point (e.g., Figure 1C), or a too-high spatial frequency at that point (e.g., Figure 1B).
  • the method then fits a line to the points found along each edge by the caliper tool, preferably, using a conventional least squares technique.
  • the method computes a refined location of the reference point as the intersection of the lines identified in step 120.
  • the reference point is computed using conventional least squares techniques.
  • the method utilizes the same number of points on either side of (and closest to) the reference point for purposes of fitting each line. This minimizes the bias otherwise introduced by a conventional edge detection technique in finding edges that are defined only by dark-to-light (or light-to-dark) transitions.
  • Calibration targets of the type shown in Figure 1B are advantageously processed by a method according to the invention insofar as they further minimize bias otherwise introduced by conventional edge detection techniques.
  • bias is reduced by the fact that "opposing" adjoining edges (i.e., edges that oppose one another across the reference point) define straight linear segments that change polarity across the reference point. That is, those segments are defined by regions that transition, preferably equally in magnitude, from light-to-dark on one side of the reference point, and from dark-to-light on the other side. This is true for all "symmetric" calibration targets according to the invention, i.e., targets in which opposing edges define straight linear segments that are of opposite polarity on either side of the reference point.
  • the method specifically applies the caliper tool at points along the lines found in the previous iteration. Preferably, these points are at every pixel in the image that lies along the line, though those skilled in the art will appreciate that fewer points can be used.
  • the calipers can be applied orthogonal to the lines, though, in the preferred embodiment, they are applied along the grid defined by the image pixels. The caliper range decreases with each subsequent invocation of step 120.
  • the method continues applying the calipers, beginning at the estimated center point until one of the four following situations occurs: no edge is found by the caliper applied at the sample point; more than one edge is found by the caliper and the highest scoring edge is less than twice the score of the second highest scoring edge (this 2X comes from the CONFUSION_THRESHOLD); the distance between a computed edge point and the nominal line (computed from the previous invocation of step 120) is larger than a threshold (which threshold decreases with each subsequent invocation of step 120); or, the caliper extends outside of the image.
  • the method then fits a line to the points found along each edge by the caliper tool, preferably, using a conventional least squares technique.
  • the method computes a refined location of the reference point as the intersection of the lines identified in step 120.
  • the reference point is computed using conventional least squares techniques.
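  • The line-fitting and intersection steps might be sketched as follows (Python/NumPy, illustrative only): a total least-squares fit to the caliper points along each edge, followed by intersection of the two fitted lines to give the refined reference point. A total least-squares fit is used here for simplicity; the patent only specifies a conventional least squares technique.

```python
import numpy as np

def fit_line(points):
    """Total least-squares line fit: returns a point on the line (the centroid)
    and a unit direction vector. points is an (N, 2) array of edge points
    found by a caliper-style search along one adjoining edge."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                 # principal direction of the point cloud
    return centroid, direction

def intersect(p1, d1, p2, d2):
    """Intersection of two lines given in point/direction form:
    p1 + t*d1 == p2 + s*d2, solved for t and s."""
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

# Example: noisy edge points along two perpendicular edges meeting near (64, 64).
rng = np.random.default_rng(0)
xs = np.arange(10, 60, dtype=float)
edge_a = np.column_stack([xs, 64.0 + 0.1 * rng.standard_normal(xs.size)])   # roughly horizontal
edge_b = np.column_stack([64.0 + 0.1 * rng.standard_normal(xs.size), xs])   # roughly vertical
refined = intersect(*fit_line(edge_a), *fit_line(edge_b))
print(refined)    # close to [64, 64]
```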
  • in step 150, the method calls for generating an image of a target 84 and, particularly, of a target according to the invention having two or more regions, each region being defined by at least two linear edges that are directed toward a reference point, and at least one of the regions having a different imageable characteristic from an adjacent region.
  • Step 150 can be effected in the manner described in connection with step 100 of Figure 4, or equivalents thereof.
  • the method analyzes the image to generate an estimate of an orientation of the target in the image.
  • Step 152 can be effected in the manner described in connection with steps 102 - 108 of Figure 4, or equivalents thereof.
  • in step 154, the method analyzes the image to generate an estimate of the location of the target's reference point.
  • Step 154 can be effected in the manner described in connection with steps 110 - 118 of Figure 4, or equivalents thereof.
  • in step 156, the method analyzes the image to refine its estimates of the location of the reference point in the image and the orientation of the target in the image.
  • Step 156 can be effected in the manner described in connection with steps 120 - 122 of Figure 4, or equivalents thereof.
  • Calibration targets and methods for analysis according to the invention are advantageous over prior art targets and methods insofar as they are magnification invariant.
  • methods according to the invention ensure reliance on features (to wit, regions) that retain the same imageable appearance regardless of magnification. This is in contrast to prior art targets and methods, which rely on individual lines (or dots) to define calibrating features. As noted above, the imaging appearances of such lines and dots change with varying degrees of magnification. Even the prior art methods that analyze checkerboard targets rely on analysis of corners, which are not magnification invariant.
  • The magnification invariance of targets and methods according to the present invention is illustrated in Figures 6A - 6C.
  • Figure 6A shows an imaging setup wherein camera 200 images a target 202 (on object 204) from a height x.
  • An image generated by camera 200 is displayed on monitor 206 of workstation 208.
  • Figure 6B shows an identical imaging setup, except insofar as a camera (of identical magnification) images a target (of identical size) from a greater height, x'.
  • Figure 6C shows an identical imaging setup, except insofar as a camera (again, of identical magnification) images a target (again, of identical size) from a still greater height, x".
  • Still another advantage of calibration targets and methods according to the invention is that they permit angular orientation to be determined throughout a full 360° range.
  • the relative positions of the regions can be used to determine the overall orientation of the target, i.e., whether it is rotated 0°, 90°, 180°, or 270°. This information can be combined with a determination of relative orientation made by analysis of the adjoining edges as discussed above to determine the precise position of the target.
  • This chapter describes the Edge Detection tool, which includes edge detection and peak detection.
  • Edge detection takes an input image and produces two output images: an image of the edge magnitude of each input pixel and an image of the edge angle of each input pixel.
  • the information produced by edge detection can be used to locate objects within an image.
  • Edge pixel information can be used to detect the rotation of an object.
  • Peak detection is useful in any application in which knowledge of the local maximum values in a two-dimensional space is useful. Peak detection takes an input image and produces an output image containing only those pixels in the input image with higher values than neighboring pixels.
  • a typical input image to peak detection is an edge magnitude image; the output image contains only the highest magnitude edge pixels.
  • the edge detection function can optionally use peak detection to postprocess the edge detection results so that only the strongest edges remain
  • An Overview of Edge Detection describes the goals of edge detection, defines an edge pixel, and describes how the Edge Detection tool finds edge pixels. This section also explains the two properties of a pixel that are calculated by the Edge Detection tool, edge magnitude and angle, and ends with a sample application.
  • This section also contains a discussion of the compression tables that affect the amount of memory used by the edge detector.
  • Edge Detection Enumerations and Data Structures describes the data structures and enumerations that the edge detection functions use. Types and data structures that support peak detection are discussed in Peak Detection Enumeration Types and Data Structures below.
  • Edge Detection Functions describes the functions that implement edge detection
  • Peak Detection describes the interface to peak detection.
  • Peak Detection Enumeration Types and Data Structures describes the peak detection enumerations and data structures.
  • Peak Detection Functions describes the peak detection functions
  • Sobel operators Two 3x3 operators that the Edge Detection tool uses to locate edge pixels in an image.
  • horizontal edge component Value returned by the horizontal Sobel operator when it is applied to a pixel in an image.
  • the Edge Detection tool defines the angle of a pixel as arctan(v/h), where v is the vertical edge component and h is the horizontal edge component of the pixel.
  • peak pixel Pixel with a value greater than or equal to some or all of its neighboring pixels' values.
  • peak detection Finding some or all peak pixels in an image.
  • symmetric peak Peak whose pixel value is greater than or equal to the values of each of its neighbors.
  • asymmetric peak Peak whose pixel value is greater than the values of its left and lower neighbors and greater than or equal to the values of its right and upper neighbors.
  • plateau A region of contiguous pixels of the same value.
  • the Edge Detection tool finds edge pixels in an image. This overview of edge detection contains the following descriptions:
  • the Edge Detection tool locates edges and determines each edge's angle and magnitude.
  • Figure 66: A figure with an edge.
  • an edge can be further refined as the directed border between grey areas of an image, where direction is defined as a vector normal to the edge.
  • the triangle in Figure 67 has three directional edges, represented by the three vectors. The angle of the edge is the counterclockwise angle of the vector normal to the edge, with respect to the horizontal axis of the image. See the section The Angle Image for a discussion of edge angle.
  • Along with an angle, an edge has a magnitude, which reflects the amount of the difference between grey levels on either side of the edge. As the difference in grey levels increases, the magnitude increases. See the section The Magnitude Image for a discussion of edge magnitude.
  • the triangle in Figure 68 has the same three edges as the triangle in Figure 67; however, these edges are of lower magnitude. This lower magnitude is illustrated by the shortened length of the three direction vectors.
  • Edges are actually located pixel by pixel. Figure 69 contains a magnified view of a portion of an edge in an image. Each cell in the figure represents the grey level of a single pixel. Notice that on this "microscopic" level there are many edges between pixels in which the grey levels on either side of the edge differ by a few percentage points.
  • the Edge Detection tool locates edges in an image by identifying each pixel that has a different value from one of its neighboring pixels. It calculates the angle and magnitude of each edge pixel. It also provides a means of classifying edges, so that low-magnitude edges are not reported.
  • a pixel is an edge pixel if it has a different value from at least one of its eight neighboring pixels and is not on the border of an image.
  • the four shaded pixels are edge pixels
  • Figure 71 contains a grid representing grey levels in an image with the highest magnitude edge pixels shaded. Notice that border pixels (pixels along the edge of the image) are not edge pixels.
  • the Edge Detection tool finds edge pixels using the Sobel 3x3 neighborhood operators. During edge detection the Sobel operators are applied to each pixel in an input image. This operation produces two values for each pixel: one value represents the vertical edge component for the pixel, the other value represents the horizontal edge component for the pixel. These two values are then used to compute the edge magnitude and edge angle of the pixel.
  • Sobel edge detection uses two 3x3 neighborhood operators to locate edge pixels in an image.
  • the horizontal Sobel operator detects the horizontal (x) edge component for a pixel.
  • the vertical Sobel operator detects the vertical (y) edge component for a pixel.
  • Figure 72 shows the Sobel operators.
  • the Sobel operator works only on pixels with eight neighboring pixels. When the Sobel operator is applied to a pixel on the border of an image, the result is defined to be 0 because there are not enough neighboring pixels to calculate an edge value.
  • the horizontal and vertical Sobel operators are 3x3 linear operators. These operators compute a value for a pixel (x, y) in the following way:
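  • The figures showing the operator kernels are not reproduced in this text; the following sketch (illustrative Python/NumPy) applies the standard 3x3 Sobel kernels, which match the description above although the commercial tool's exact sign conventions are not shown here, and leaves border pixels at 0 as described.

```python
import numpy as np

# Standard 3x3 Sobel operators (the tool's exact sign convention is assumed).
SOBEL_HORIZONTAL = np.array([[-1, 0, 1],
                             [-2, 0, 2],
                             [-1, 0, 1]], dtype=float)
SOBEL_VERTICAL = np.array([[-1, -2, -1],
                           [ 0,  0,  0],
                           [ 1,  2,  1]], dtype=float)

def sobel_components(image):
    """Horizontal and vertical edge components for every pixel.
    Border pixels are defined to be 0, as described above, because they do
    not have eight neighbors."""
    img = image.astype(float)
    h = np.zeros_like(img)
    v = np.zeros_like(img)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            h[y, x] = np.sum(SOBEL_HORIZONTAL * window)
            v[y, x] = np.sum(SOBEL_VERTICAL * window)
    return h, v

# Small example image: a vertical step edge.
image = np.array([[10, 10, 10, 200, 200],
                  [10, 10, 10, 200, 200],
                  [10, 10, 10, 200, 200],
                  [10, 10, 10, 200, 200]], dtype=float)
h, v = sobel_components(image)
print(h)   # large positive values along the step; zeros on the border
print(v)   # near zero: there is no vertical (y) change in this image
```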
  • Figure 74 shows all the vertical and horizontal edge component values for a small image.
  • the upper grid is the image; the values represent the grey levels of each pixel.
  • the grid to the lower left of the image contains horizontal edge component values for the image.
  • each cell in the grid contains the result of applying the horizontal Sobel operator to the corresponding input image pixel.
  • the grid to the lower right contains the vertical edge components for the image. It is computed using the vertical Sobel operator.
  • the horizontal and vertical edge component values computed with the Sobel operator are not returned by the edge detection functions. They are used to compute the edge magnitude and edge angle of each pixel in the image.
  • Edge magnitudes are stored in an output image that shows the edge magnitude of each corresponding pixel in the input image.
  • Edge angles are stored in another output image that shows the edge angle of each pixel. You can choose which output images to create: magnitude, angle, or both.
  • the edge magnitude for a given pixel is a function of the horizontal and vertical edge components of the pixel. If x is the horizontal edge component value and y is the vertical edge component value for some pixel, then the edge magnitude M for that pixel is defined as follows: M = sqrt(x² + y²).
  • Edge Detection tool uses the formula described above to compute edge magnitude and then scales the magnitude upward by approximately 16%.
  • the size and depth of the magnitude image are the same as those of the input image. Since all border pixels have horizontal and vertical edge component values of 0, the border of the magnitude image contains all zeros. You can display an edge magnitude image; higher magnitude edges appear brighter.
  • edge detection can return the number of edge pixels in the input image and the sum of all edge magnitude values. By dividing the sum of all edge magnitude values by the size of the image, you can compute the average edge magnitude of an image. If you have two images of the same scene, you can determine which image is in sharper focus by comparing average edge magnitudes of the two images; the image with the higher average magnitude is in sharper focus.
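  • A small sketch of the magnitude computation and the focus comparison just described (illustrative Python/NumPy; the roughly 16% scale factor applied by the tool is treated here as an assumed value of 1.16):

```python
import numpy as np

def edge_magnitude(h, v, scale=1.16):
    """Edge magnitude image from the horizontal (h) and vertical (v) edge
    component images: M = sqrt(h^2 + v^2), scaled up by roughly 16% as
    described above (the exact scale factor is an assumption here)."""
    return scale * np.hypot(h, v)

def average_edge_magnitude(magnitude):
    """Sum of all edge magnitude values divided by the image size."""
    return magnitude.sum() / magnitude.size

# Of two magnitude images of the same scene, the one with the higher average
# edge magnitude corresponds to the more sharply focused image.
h = np.array([[0.0, 300.0], [0.0, 150.0]])
v = np.array([[0.0, 400.0], [0.0,   0.0]])
mag = edge_magnitude(h, v)
print(mag)                          # [[0, 580], [0, 174]]
print(average_edge_magnitude(mag))  # 188.5
```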
  • Figure 76 contains an image, the image's horizontal and vertical edge components computed with Sobel operators, and the output image containing the edge magnitude values for each pixel.
  • the edge detection functions produce an image containing the angle of each edge pixel. If x is the horizontal edge component value and y is the vertical edge component value, the edge angle is the counterclockwise angle between the horizontal axis and the magnitude vector.
  • Figure 77 shows the geometric relationship between the edge angle, the edge magnitude, and the two edge components.
  • M is the magnitude vector and θ is the edge angle.
  • Edge angles are stored in the angle image in binary form. They are represented as fixed-point numbers with the decimal point to the left of the most significant bit and are interpreted as fractions of 360°. You can choose the maximum number of bits available to express an angle; this value must be less than or equal to 8. If n is the maximum number of bits used to express angle, and x is the binary representation of an angle, you can convert x to degree notation by using the following formula: angle(x) = (x / 2^n) × 360°.
  • With a 6-bit representation, angles can be expressed to the nearest 5.625°; with a 7-bit representation, an angle can be expressed to the nearest 2.8°; with an 8-bit representation, an angle can be expressed to the nearest 1.4°.
  • the edge angle is computed using the arc tangent function. If x is the horizontal edge component and y is the vertical edge component for a pixel, the edge angle for that pixel is calculated as shown in Table 3.
  • Figure 78 contains six rows. Each row contains (from left to right) an image with the central pixel shaded, the vertical and horizontal edge component values of the shaded pixel, the image with a vector superimposed showing the edge angle of the central pixel, and the formula used to compute the edge angle. Notice that the equations defining the edge angle value are designed so that the vector always points to the brighter side of the edge.
  • an edge angle n that is greater than 180° maps to edge angle (n - 180°). For instance, 300° maps to 120°, and 270° maps to 90°.
  • the binary representation of an angle in the range 0° to 180° is similar to the representation of an angle in the range 0° to 360°. If n is the maximum number of bits used to express angle and x is the binary representation of an angle in the range 0° to 180°, you can convert x to degree notation by applying the following formula: angle(x) = (x / 2^n) × 180°.
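  • The two conversion formulas can be captured in a single helper (illustrative Python; it reproduces the resolutions quoted above, e.g. 360°/2⁶ = 5.625°):

```python
def binary_angle_to_degrees(x, n_bits, full_range=360.0):
    """Convert an n-bit fixed-point angle value x (interpreted as a fraction
    of the full range) to degrees. Use full_range=360.0 for the standard
    angle image and 180.0 for the reduced 0-180 degree representation."""
    return (x / float(1 << n_bits)) * full_range

# With 6 bits, each step is 360/64 = 5.625 degrees; with 7 bits, about 2.8;
# with 8 bits, about 1.4 -- matching the resolutions quoted above.
print(binary_angle_to_degrees(32, 6))         # -> 180.0
print(binary_angle_to_degrees(32, 8, 180.0))  # -> 22.5
```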
  • the edge detection function uses it to preprocess your input image before it calculates the horizontal and vertical edge components.
  • Pixel Mapping For a description of pixel mapping see Chapter 3, Pixel Mapping.
  • although edge angles are defined for all pixels in an image, they are significant only at the pixels with the greatest edge magnitudes. There are many edges between pixels where the grey levels on either side of the edge differ by only a few percent. These insignificant edge pixels, due to frame grabber noise and random fluctuations in pixel values, can clutter your edge magnitude image and make it difficult to evaluate the data in that image. Also, edge magnitude values usually increase gradually in a "smear" as they approach an edge and then decrease as the edge is crossed. You might want to sharpen the magnitude image by filtering out some edge pixels that are near, but not at, an edge.
  • the Edge Detection tool supplies three postprocessing methods. Two of these methods, setup and run-time thresholding, set the lowest edge magnitude pixels to 0.
  • the third method, peak detection, takes an edge magnitude image and processes it so that, for any small pixel neighborhood, only the highest edge magnitude pixels remain and the other pixels are set to 0. See the section Peak Detection on an Edge Magnitude Image for a complete discussion of this method.
  • edge angles corresponding to zero edge magnitude pixels are also set to 0 unless you have specified run-time thresholding but not peak detection. In this case, edge angles corresponding to zero edge magnitude pixels can be nonzero, but should be considered undefined.
  • n represents a percentage of the highest possible edge magnitude value.
  • any magnitude values in the first n percent of all possible values are forced to 0. For example, if you provide 6 bits to express magnitude values and specify that the lowest 5 percent of magnitude values are forced to 0, then any magnitude below 3 is forced to 0 since 5 percent of 63 (truncated) is 3.
  • Peak detection is a method of filtering an image so that only peak pixels remain and other pixels are set to 0. Peak pixels have a value greater than or equal to some or all of their neighboring pixels' values.
  • the edge detection function does peak detection postprocessing after it has done setup and run-time thresholding. There will be a 2-pixel-wide border of zeros in a peak-detected magnitude image.
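  • A simple sketch of symmetric peak detection on a magnitude image (illustrative Python/NumPy; the commercial tool's parameterization and its asymmetric-peak option are not reproduced here):

```python
import numpy as np

def symmetric_peaks(magnitude):
    """Keep only symmetric peaks: pixels whose value is greater than or equal
    to the values of all eight neighbors. Non-peak pixels are set to 0, and a
    border of zeros is left around the result (the commercial tool is
    described as leaving a 2-pixel-wide border of zeros)."""
    mag = magnitude.astype(float)
    out = np.zeros_like(mag)
    for y in range(1, mag.shape[0] - 1):
        for x in range(1, mag.shape[1] - 1):
            neighborhood = mag[y - 1:y + 2, x - 1:x + 2]
            if mag[y, x] >= neighborhood.max():
                out[y, x] = mag[y, x]
    return out
```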
  • Peak Detection on an Edge Magnitude Image discusses, in detail, the benefits of using peak detection on an edge magnitude image. This section also describes the proper way to parameterize peak detection so that it filters edge magnitude pixels correctly. However, you do not need a detailed knowledge of peak detection to use it with edge detection. Suggested peak detection parameters are supplied in the data structure reference for the edge detection parameters.
  • Figure 80 contains an image, its edge magnitude image, and the image that results from peak detection.
  • the edge magnitude image contains a smeared edge, which is sharpened by peak detection
  • the angle histogram supplies a useful "signature" of an object's shape. You can use this signature to gauge similarity among objects, to detect rotations of an object, and to detect flaws in an object.
  • Figure 81 contains four shapes; each shape is paired with its angle histogram. Note that the square and the plus sign have identical angle histograms.
  • the angle histogram of the circle is flat because there is no dominant angle along the edge of a circle.
  • Figure 82 demonstrates the effect of rotation on the angle histogram of a plus sign. The histogram shifts as the plus sign rotates.
  • the Caliper Tool is a tool for locating edges and edge pairs in an image.
  • the edge of an object in an image is a change in grey value from dark to light or light to dark. This change may span several pixels.
  • the Caliper Tool provides methods for ignoring edges that are caused by noise or that are not of interest for a particular application.
  • the Caliper Tool is modeled on the mechanical engineer's caliper, a precision device for measuring distances. You specify the separation between the caliper "jaws," and the Caliper Tool searches an area you specify for edge pairs separated by that distance. You can also search for individual edges when you know their approximate location in an image. This is similar to using a caliper to measure depth.
  • a typical use for the Caliper Tool is part inspection on an assembly line. For example, in integrated circuit lead inspection, a part is moved in front of a camera to an approximately known location (the expected location). You use the Caliper Tool to determine the exact location of the left edge of the part by searching for a single light to dark edge at the expected location (see Figure 103). The edge that is closest to the expected location is the left edge of the part.
  • the Caliper Tool can be used to measure lead width and lead spacing of an integrated circuit.
  • the Caliper Tool uses projection to map a two-dimensional window of an image (the caliper window) into a one-dimensional image. Projection collapses an image by summing the pixels in the direction of the projection, which tends to amplify edges in that direction.
  • Figure 105 shows an image, a caliper window, the one-dimensional image that results from projection, and a graph of the pixel grey values in the one-dimensional image.
  • Pixel mapping can be used to filter an image, attenuate or amplify a range of grey values, and otherwise map pixel values. See Chapter 3, Pixel Mapping, in the Image Processing manual for a complete description of pixel mapping.
  • an edge filter is applied to the one-dimensional image to further enhance edge information and to smooth the image by eliminating minor grey value changes between neighboring pixels that are most likely caused by noise.
  • the edge filter produces a filtered image.
  • Figure 106 shows the one-dimensional image from Figure 105, an image generated by the edge filter, and a graph of that image.
  • Edge detection is a method of evaluating peaks in the filtered image and ignoring edges that are not of interest, such as edges representing noise in the image.
  • Edge detection applies geometric constraints that you specify to each edge or edge pair found in the image.
  • Applying geometric constraints provides a way to limit the number of edges to evaluate by assigning a score to each edge or edge pair based on such factors as the expected location of the edge or edge pair, the distance between edges in an edge pair, and the minimum grey level difference between pixels on each side of the edge.
  • the Caliper Tool returns as many results as you have specified in the cclp_params data structure and computes the geometric average of the score of each geometric constraint that you specify. For edges or edge pairs whose score equals or exceeds the accept threshold, the Caliper Tool returns information such as the score and location of the edge or edge pair and the measured distance between edges in an edge pair.
  • the controlling variables of an edge or edge pair search define a caliper and include such information as the dimensions of the search window and the edge detection scoring parameters
  • the type of the data structure that defines a caliper is edp.callpar.
  • the caliper window is the portion of an image in which the Caliper Tool searches for edges. It is defined by the search length and projection length, which are the width and height, respectively, of the window in an image that will be projected into a one-dimensional image.
  • the caliper window is illustrated in Figure 107.
  • the pixel in the center of the caliper window is called the application point. This is the point in the run-time image at which you want to locate edges.
  • edge information is needed at more than one angle in an image.
  • the caliper window can be oriented at any angle to locate edges in an image. Angles increase in the clockwise direction from the horizontal axis.
  • Figure 108 shows how changing the caliper window angle affects the projection and edge search directions at 90° rotations.
  • the Caliper Tool runs faster for 90° rotations than it does for arbitrary rotations.
  • Figure 109 shows a -15° caliper window with window rotation disabled.
  • the Caliper Tool creates a "skewed" window and projects it along the angle of skew into a one- dimensional image.
  • the pixels in the source image are summed along the angle of skew as shown in Figure 110.
  • Each pixel in the two-dimensional image contributes to one pixel in the one-dimensional image.
  • the skewed window is transformed into a rectangular window before projection.
  • the transformation is performed by clp_transform(), which uses several neighboring pixels from the two-dimensional image to calculate the pixel value for the one-dimensional image. See Chapter 2, Basic Image Processing, in the Image Processing manual for a complete description of the function clp_transform().
  • Figure 111 shows a -15° caliper window with window rotation enabled.
  • the Caliper Tool rotates the two-dimensional image and projects it into a one-dimensional image.
  • whether to use skewed or rotated projection depends on the angle of the edge or edge pair of interest in the image and the remaining content of the image. If the rotation angle is close to 90° or if a skewed image will contain enough of the edge or edge pair of interest to generate strong edges, you can use skewed projection to speed program execution. Figure 112 shows an image, a caliper window, and the one-dimensional image that results from projection.
  • the skewed window contains enough edge information, so skewed projection may be used.
  • Figure 113a shows a caliper window where the edges of interest are at 15° and window rotation is disabled. A window large enough to enclose these edges would also include much of the dark area in this image, which in some cases would result in a one-dimensional image where the edges are obscured.
  • Figure 113b shows a caliper window at -15° in the same image with window rotation enabled. Only the edges of interest are included in this window.
  • an edge filter is run on the resulting one-dimensional image
  • the edge filter accentuates edges in the image and produces a filtered image
  • the peaks in this image indicate strong edges
  • Edge filter size is the number of pixels to each side of the edge to consider in the evaluation
  • Edge filter leniency is the number of pixels at an edge to ignore between light and dark sides of the edge (leniency is described in further detail below).
  • Figure 114 illustrates an edge filter. The edge filter is positioned over the leftmost pixel in the one-dimensional image where it will entirely fit in that image. Pixel values to the left of the leniency region are summed and subtracted from the sum of the pixel values to the right of the leniency region; in this example, (0 + 5) - (0 + 0) yields 5 for the edge filter image.
  • Figure 114: The edge filter is positioned in the one-dimensional image.
  • the edge filter is then applied at each successive pixel in the one-dimensional image until the edge filter no longer fits entirely in the image. At each pixel position, the pixel values to the left of the leniency region are summed and subtracted from the sum of the pixel values to the right of the leniency region.
  • Figure 116 shows a graph of the one-dimensional image and the edge filter image from Figure 115. Peak position is calculated to subpixel accuracy.
  • Edge filter size specifies the number of pixels on either side of an edge to consider in an edge evaluation. Size should usually be greater than 1 because noise in an image causes grey level differences between neighboring pixels.
  • Figure 117 shows a graph of a one-dimensional image of a light band on a dark background, and a graph of an edge filter image where a size of 1 was used. Because there is noise in the image, many more peaks exist than are of interest.
  • Figure 118 shows an image similar to the one in Figure 117 where a size of four was used. The only two peaks in the image are the peaks of interest. Size is typically between 2 and 5 inclusive.
  • Edge filter leniency specifies a region of pixels to ignore at an edge in an image. Due to slight changes in edge or edge pair position or orientation, a pixel on the edge of a feature may not always have the same grey value. By setting a leniency zone between the light and dark areas of a feature, the effects of slight changes in feature location on filter results are minimized.
  • Figure 119 shows a graph of a one-dimensional image with two edges that span two pixels each, and graphs of the output of two edge filters.
  • the first edge filter has a leniency of 0
  • the second edge filter has a leniency of 2.
  • the edge filter with a leniency of 2 produces higher peaks.
  • Two is a typical value for leniency.
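  • Putting size and leniency together, the edge filter described above can be sketched as follows (illustrative Python/NumPy; the real tool also computes peak position to subpixel accuracy, which is omitted here):

```python
import numpy as np

def edge_filter(projection, size=3, leniency=2):
    """1-D edge filter as described above: at each position, sum `size`
    pixels to the left of a `leniency`-pixel gap and subtract that sum from
    the sum of `size` pixels to the right of the gap. The filter is applied
    at every position where it fits entirely inside the image."""
    p = np.asarray(projection, dtype=float)
    window = 2 * size + leniency
    out = []
    for i in range(len(p) - window + 1):
        left = p[i:i + size].sum()
        right = p[i + size + leniency:i + window].sum()
        out.append(right - left)
    return np.array(out)

# A one-dimensional image of a light band on a dark background: the filtered
# image has positive peaks at the dark-to-light edge and negative peaks at
# the light-to-dark edge.
profile = np.array([0, 0, 0, 0, 0, 100, 100, 100, 100, 0, 0, 0, 0, 0])
print(edge_filter(profile, size=2, leniency=2))
```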
  • Point Registration Tool Overview provides information about the capabilities and intended use of the Point Registration tool.
  • How the Point Registration Tool Works provides a general description of the operation of the Point Registration tool.
  • Point Registration Tool describes some of the techniques that you will use to implement an application using the Point Registration tool.
  • Point Registration Tool Data Structures and Functions provides a detailed description of the data structures and functions that you will use to implement your application.
  • point registration A search technique designed to determine the exact point at which two images of the same scene are precisely aligned.
  • global minimum The lowest value in a function or signal.
  • local minimum A low value in a function or signal.
  • Positions within an image may be specified in terms of whole pixel positions, in which case the position refers to the upper-left corner of the pixel, or in terms of fractional pixel positions, in which case the position may be anywhere within a pixel. Positions specified in terms of fractional pixel positions are referred to as having subpixel accuracy.
  • the Cognex Point Registration tool performs fast, accurate point registration using two images that you supply.
  • Point registration is the process of determining the precise offset between two images of the same scene.
  • You use the Point Registration tool to perform point registration by providing a model image and a target image, along with a starting point within the target image.
  • the model image should be a reference image of the feature you are attempting to align.
  • the target image should be larger than the model image, and it should contain an instance of the model image within it.
  • the Point Registration tool will determine the precise location of the model image within the target image.
  • the Point Registration tool will return the location of this match, called the registration point, with subpixel accuracy.
  • Figure 160 illustrates an example of how the Point Registration tool performs point registration.
  • the Point Registration tool is optimized to locate precisely the model image within the target image, even if the target image contains image defects such as reflections, or if the model image is partially obscured by other features in the target image.
  • Figure 161 illustrates an example of point registration where the target image is partially obscured.
  • the Point Registration tool finds the location of the model image within the target image. You operate the Point Registration tool by supplying the model and target images, along with the location within the target image where you expect the origin of the model image to be. The Point Registration tool will determine, with subpixel accuracy, where the origin of the model image lies within the target image.
  • the Point Registration tool works by computing a score indicating the degree of similarity between the model image and a particular portion of the target image that is the same size as the model image. This score can range from 0 to 255, with a score of 0 indicating that the model image and the target image are perfectly similar and a score of 255 indicating that the model image and the target image are perfectly dissimilar.
  • the tool computes this score for locations in the immediate neighborhood surrounding the starting point
  • the tool will find the location within this neighborhood of the target image that produces the local minimum in the value of this score.
  • the Point Registration tool determines the location of the model image within the target image with subpixel accuracy. For typical images, the Point Registration tool can achieve accurate registration to within 0.25 pixels.
  • the Point Registration tool seeks a local minimum; if the starting point you specify is more than a few pixels from the actual registration point, the tool may not return the correct registration point.
  • the exact amount of variance that the Point Registration tool can tolerate will vary depending on the images. The variance may be as small as 3 to 5 pixels for some images or as large as 30 pixels for others.
  • the Point Registration tool will return, in addition to the location of the origin of the model image within the target image, a score indicating the degree of similarity between the model image and the target image.
  • the score returned by the tool will be from 0 to 255, with a score of 0 indicating that the model image and the target image are perfectly similar and a score of 255 indicating that the model image and the target image are perfectly dissimilar.
  • if the point registration receives a high score, the actual precision of the point registration location may be somewhat lower than if it receives a low score.
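A minimal sketch of this style of scoring and neighborhood search, assuming a mean-absolute-difference score normalized to the 0-255 range and a caller-chosen search radius; the function names, the radius, and the assumption that the search window stays inside the target image are illustrative, not the tool's actual implementation:

    #include <stdlib.h>

    /* Dissimilarity score in the range 0..255: mean absolute grey-level
       difference between the model and the same-sized window of the target
       whose upper-left corner is at (ox, oy).  The caller must ensure the
       window lies entirely inside the target image. */
    static int window_score(const unsigned char *target, int target_width,
                            const unsigned char *model, int mw, int mh,
                            int ox, int oy)
    {
        long sum = 0;
        for (int y = 0; y < mh; y++)
            for (int x = 0; x < mw; x++)
                sum += labs((long)target[(oy + y) * target_width + (ox + x)] -
                            (long)model[y * mw + x]);
        return (int)(sum / ((long)mw * mh));
    }

    /* Search the immediate neighborhood of the starting point for the
       whole-pixel offset with the lowest (best) score. */
    static void neighborhood_search(const unsigned char *target, int target_width,
                                    const unsigned char *model, int mw, int mh,
                                    int start_x, int start_y, int radius,
                                    int *best_x, int *best_y, int *best_score)
    {
        *best_score = 256;
        for (int oy = start_y - radius; oy <= start_y + radius; oy++)
            for (int ox = start_x - radius; ox <= start_x + radius; ox++) {
                int s = window_score(target, target_width, model, mw, mh, ox, oy);
                if (s < *best_score) { *best_score = s; *best_x = ox; *best_y = oy; }
            }
    }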
  • Point Registration tool When you use the Point Registration tool, you supply the tool with two images and a starting location within the target image. The tool will confine its point registration search to a small area around the starting location that you specify.
  • the Point Registration tool can also perform point registration exhaustively, that is, by computing the score for every possible location of the model image within the target image. This procedure, called exhaustive point registration, is extremely slow. It can be helpful, however, in debugging applications where the Point Registration tool does not appear to be working correctly.
  • Figure 162 illustrates an example of specifying a masking rectangle.
  • In this example, the right side of the model is often obscured in the target image.
  • by specifying a rectangle that covers the left side of the model image, you can cause the Point Registration tool to consider only the pixels in that part of the model image. This will tend to increase the accuracy of the point registration.
  • Figure 163 illustrates an example where the center of the model image is often obscured in the target image.
  • any pixels that are contained in more than one masking rectangle will be counted toward the score multiple times, once for each rectangle in which they are contained.
  • the Point Registration tool may not work well in cases where the model image and the target image have different intensity values. You can perform limited image processing as part of the point registration by supplying a pixel mapping table.
  • Every pixel in the target image will be mapped to the value in the pixel mapping table at the location within the pixel mapping table given by the pixel's value. For example, if a pixel in the target image had a value of 200, it would be mapped to the value of the 200th element within the pixel mapping table.
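A minimal sketch of this kind of pixel remapping, assuming 8-bit images and a 256-entry table; the function names and the offset example are illustrative only:

    /* Remap every 8-bit pixel of the target image through a 256-entry
       lookup table, as the pixel mapping table described above does. */
    static void apply_pixel_map(unsigned char *target, long n_pixels,
                                const unsigned char map[256])
    {
        for (long i = 0; i < n_pixels; i++)
            target[i] = map[target[i]];
    }

    /* Example table: add a fixed grey-level offset, clamped to 0..255,
       to compensate for an assumed global intensity difference. */
    static void build_offset_map(unsigned char map[256], int offset)
    {
        for (int v = 0; v < 256; v++) {
            int m = v + offset;
            map[v] = (unsigned char)(m < 0 ? 0 : (m > 255 ? 255 : m));
        }
    }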
  • the Point Registration tool is designed to align target images with a wide variety of image defects, but in order for the tool to work, the model image needs to be as free from defects as possible.
  • Point Registration tool When using the Point Registration tool, you must specify a starting location in the target image. The tool will perform its point registration search starting at the point that you specify. Because the tool only seeks a local minimum for the score value, if you specify a starting location that varies greatly from the actual registration point, the point registration operation will fail.
  • creg_params contains parameters that determine how the point registration search will be conducted.
  • a structure of this type is supplied as an argument to creg_point_register().
  • optional_map is an optional pixel map. If optional_map is non-NULL, the Point Registration tool will replace each pixel in the target image with the value contained in the element of optional_map corresponding to the value of the pixel in the target image. If you supply a value for optional_map, you must supply an image of type FAST8.
  • flags specifies the type of point registration that the Point Registration tool will perform. flags is constructed by ORing together any of the following values:
  • CREG_TYPE1 is reserved for future use; you should always include CREG_TYPE1 in the value of flags.
  • CREG_TYPE2 is reserved for future use; you should never include CREG_TYPE2 in the value of flags.
  • the tool may return a location that does not represent the best match within the image. If you specify CREG_EXHAUSTIVE, the registration will return the best registration match for the entire image. An exhaustive point registration will be extremely slow.
  • creg_results creg_point_register() returns a pointer to a creg_results structure.
  • creg_point_register() fills in the structure with information about the results of a point registration operation.
  • x and y are the x-coordinate and y-coordinate, respectively, of the location within the target image at which the model image was found. x and y give the position with subpixel accuracy.
  • score is a measure of the similarity between the pixel values in the model image and the target image at the nearest whole pixel alignment position. score will be between 0 and 255, with lower values indicating a greater degree of similarity between the model image and the target image.
  • on_edge_x is set to a nonzero value if, along the x-axis, one edge of the model image area is at the edge of the target image. If on_edge_x is nonzero, the accuracy of the position information may be slightly reduced from what it would otherwise be.
  • on_edge_y is set to a nonzero value if, along the y-axis, one edge of the model image area is at the edge of the target image. If on_edge_y is nonzero, the accuracy of the position information may be slightly reduced from what it would otherwise be.
  • creg_point_register() performs point registration using the model image and target image that you supply.
  • the point registration will be controlled by the parameters in the creg_params structure supplied to the function. creg_point_register() returns a pointer to a creg_results structure that describes the result of the point registration.
  • target points to the image to use as the target image for this point registration. target must be at least one pixel larger in both the x-dimension and the y-dimension than model.
  • model points to the image to use as the model image for this point registration. model must be at least one pixel smaller in both the x-dimension and the y-dimension than target.
  • creg_point_register() allocates the creg_results structure from the heap using the default allocator. You can free this structure by calling free(). creg_point_register() throws CGEN_ERR_BADARG if
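A hedged usage sketch assembled from the structures and functions described above; the image type (cip_buffer), the header name, the argument order, and the starting-location field names are assumptions, while flags, optional_map, x, y, score, on_edge_x, and on_edge_y follow the descriptions given:

    #include <stdlib.h>
    #include "creg.h"   /* assumed header for the point registration API */

    void register_example(cip_buffer *model, cip_buffer *target,
                          int start_x, int start_y)
    {
        creg_params params;
        params.flags        = CREG_TYPE1;  /* always included, per the description */
        params.optional_map = NULL;        /* no pixel remapping */
        params.start_x      = start_x;     /* assumed field names for the */
        params.start_y      = start_y;     /* starting location           */

        /* Argument order is assumed; target must be larger than model. */
        creg_results *res = creg_point_register(target, model, &params);

        /* res->x, res->y: subpixel location of the model origin in the target.
           res->score: 0 (perfectly similar) .. 255 (perfectly dissimilar). */
        if (!res->on_edge_x && !res->on_edge_y && res->score < 64) {
            /* accept the registration result */
        }
        free(res);   /* the result structure is allocated from the heap */
    }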
  • the Line Finder computes the locations of lines in an image. For each line that it finds, it returns an angle θ and a signed distance d from the central pixel of the image. Given θ and d, you can compute all points (x, y) in the image that satisfy the equation of the line (as illustrated in the section The Transformation from Cartesian Space to Hough Space on page 204). The algorithm that the Line Finder uses, along with the angle/distance classification of lines, is explained in How the Line Finder Works on page 166.
  • the Line Finder provides the location of the point along each line that is nearest to the center of the image, along with the angle of the line.
  • the Line Finder also estimates the length and density of each line that it locates, using statistical methods, and returns the position of the center of mass of the line.
  • Figure 78 illustrates the two modes of the Line Finder.
  • the image in Figure 78a is shown in Figure 78b after the Line Finder has been invoked in quick mode. The presence of the four lines is indicated.
  • Each line is drawn to the screen at a constant length, but no attempt is made to estimate its actual length
  • Figure 78c shows the result when using the Line Finder in normal mode: the length of each line segment is estimated.
  • the Line Finder finds lines regardless of grey scale variations in the image, rotation of the scene, or changes in scale.
  • a typical application for the Line Finder is locating fiducial marks in a series of images in which grey levels, scale, and rotation are unpredictable from one image to the next.
  • Figure 79a and Figure 79b illustrate two images, each containing the same fiducial mark: a square within a square. However, the sizes of the marks and their rotations differ between the two images. Also, the mark in Figure 79a is a dark square within
  • the Line Finder demo code includes post-processing of this sort. It contains functions that locate squares of any size and rotation, using the Line Finder. This code deduces the presence of squares based on the Line Finder's results, eliminating extraneous lines. See Chapter 1, Cognex API Introduction, in the Development Environment 1 manual for the location and name of the Contour Finding demo code on your system.
  • the Hough Line Transform describes the algorithm that the Line Finder uses to find lines in an image.
  • the Line Finder uses a Cartesian coordinate system whose origin is the center of the image. Most of the examples in the book are drawn in this coordinate system, although a few use image coordinates. These two systems are pictured in Figure 80. Notice that the x-axes of the two systems are the same, but the y-axes are opposite. Note also that positive angles in Cartesian coordinates are measured counterclockwise from the x-axis; positive angles in image buffer coordinates are measured clockwise from the x-axis.
  • Figure 80 Image buffer coordinates and Cartesian coordinates
  • the Line Finder uses the central pixel of the image as a reference point. The image buffer coordinates of this pixel (xc, yc) are computed with integer division, using the following formulas:
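Under the conventional definition, which is assumed here, the central pixel is obtained by halving the image dimensions with integer division:

    /* Assumed reconstruction: central pixel in image buffer coordinates. */
    int xc = image_width / 2;    /* integer division */
    int yc = image_height / 2;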
  • the Line Finder defines a line (in Cartesian coordinates) by its angle from the x-axis and by the signed distance from the center of the image (the central pixel) to the line. These two parameters uniquely describe any line in Cartesian space.
  • Figure 81a contains an edge that lies along a line ( Figure 81b)
  • the line is defined by its angle from the horizontal (θ) and by the shortest distance from the center of the image to the line (Figure 81c). Note that the distance vector is perpendicular to the line.
  • the distance is negative if the angle of the distance vector (in Cartesian coordinates) is greater than or equal to 0° and less than 180°; the distance is positive if the distance vector has an angle greater than or equal to 180° and less than 360°.
  • Figure 82a shows an image with two features. Each feature is bordered on one side by a line. Although the lines are different, they have the same angle θ and the same absolute distance d from the center (Figure 82b).
  • each line is ensured a unique definition. In this example, the two lines are defined as (θ, d) and (θ, -d).
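A short sketch of how a point can be tested against a line expressed in this (θ, d) form; the choice of θ+90° as the normal direction is one plausible reading of the sign convention above, not a confirmed detail:

    #include <math.h>

    /* Test whether a point (x, y), in Cartesian coordinates centered on the
       image, lies within tol pixels of the line with angle theta_deg and
       signed distance d; the normal direction theta + 90 degrees is assumed. */
    static int on_line(double x, double y, double theta_deg, double d, double tol)
    {
        double phi = (theta_deg + 90.0) * M_PI / 180.0;
        return fabs(x * cos(phi) + y * sin(phi) - d) <= tol;
    }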
  • Hough space is a two-dimensional space in which the Line Finder records the lines that it finds in an image.
  • a point in Hough space represents a line: one axis represents angle, the other represents distance
  • Hough space is implemented in the Line Finder as an array; a simple example is shown in Figure 83.
  • the Hough space in this example can record eight angles and nine distances and therefore 72 lines.
  • the Line Finder lets you control the size of Hough space. You specify the range of distance in pixels, and, in the edge detection data structure that you pass to the Line Finder, you supply the number of angles that the Line Finder can compute.
  • the Hough space in Figure 84 contains a single line. It has an angle of 135° and is a distance of 3 units from the center of the image.
  • the Line Finder uses the Hough Line Transform to locate lines in an image. This method is outlined and illustrated in this section. To understand this discussion, you should be familiar with edge detection and peak detection, as described in Chapter 4, Edge Detection Tool, in the Image Processing manual.
  • the Hough Line Transform takes advantage of the following rule: if an edge pixel's angle θ and location (x0, y0) are known, then the line on which the pixel lies is also known; it is the line with angle θ+90° that contains the pixel (x0, y0). This is illustrated in Figure 85.
  • Figure 85 Four edges in an image (a) edge angles at four edge pixels (b), and line angles at those pixels (c)
  • the Hough Line Transform is implemented as follows.
  • the Line Finder uses edge detection to create an edge magnitude image and an edge angle image from the input image
  • the Line Finder creates and clears a Hough space
  • the Line Finder searches Hough space for maximum values, using peak detection.
  • the highest value in Hough space represents the strongest line in the image, the second highest represents the second strongest line, and so forth.
  • each line is printed out and, optionally, each line is drawn in an output image.
  • Figure 86a is an image passed to the Line Finder.
  • Figure 86b is a stylized edge magnitude image created from the input image.
  • Each edge pixel is processed, as described above. For example, because the outlined edge pixel in Figure 87 belongs to the line (90, -2), the bin in Hough space that represents that line is incremented.
  • Figure 88 shows the final Hough space along with the input image.
  • Figure 89 shows the peak-detected Hough space, with each of the peaks outlined.
  • Figure 89 also shows the input image with the lines, represented by the peaks in Hough space, drawn into the input image.
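A compact sketch of the accumulation step behind this walkthrough, assuming an 8-bit edge angle image, a simple magnitude threshold, and illustrative angle and distance quantizations; it is not the Line Finder's actual code:

    #include <math.h>
    #include <string.h>

    #define N_ANGLES 180      /* illustrative angle quantization            */
    #define N_DISTS  201      /* illustrative distance range, -100..+100 px */

    /* For every sufficiently strong edge pixel, quantize the line it lies on
       (angle = edge angle + 90 degrees, distance = projection of the pixel,
       in centered Cartesian coordinates, onto the line normal) and increment
       the corresponding Hough bin. */
    static void hough_accumulate(const unsigned char *mag, const unsigned char *ang,
                                 int w, int h, int mag_thresh,
                                 unsigned int hough[N_ANGLES][N_DISTS])
    {
        int xc = w / 2, yc = h / 2;
        memset(hough, 0, sizeof(unsigned int) * N_ANGLES * N_DISTS);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                if (mag[y * w + x] <= mag_thresh)
                    continue;
                double edge_deg = ang[y * w + x] * 360.0 / 256.0;  /* 8-bit angle */
                double line_deg = fmod(edge_deg + 90.0, 180.0);
                double normal   = (line_deg + 90.0) * M_PI / 180.0;
                double d = (x - xc) * cos(normal) + (yc - y) * sin(normal);
                int ai = (int)(line_deg * N_ANGLES / 180.0);
                int di = (int)lround(d) + N_DISTS / 2;
                if (ai >= 0 && ai < N_ANGLES && di >= 0 && di < N_DISTS)
                    hough[ai][di]++;
            }
        }
    }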
  • This file includes a function for localizing a cross:
  • This involves constructing a synthetic model of a rotated
  • the computed edge point is an outlier (the distance between the edge point and the nominal line is larger than threshold)
  • the caliper may be off by up to 5
  • TIGHT_CALIPER_WIDTH should be larger than the minimum width so that
  • xcent = xsum / n_points
  • * purpose is to determine whether we need to use a vertical or
  • compute_subpixel_position() will either perform a vertical or horizontal subpixel edge estimation. Subpixel
  • edge positions are computed in the same manner as the boundary tracker (using neighboring pixels and differences), where we quadratically interpolate the edge position from at least three differences.
  • d_max = max(d0, max(d1, max(d2, max(d3, max(d4, max(d5, d6))))));
  • d_min = min(d0, min(d1, min(d2, min(d3, min(d4, min(d5, d6))))));
  • pos_x ap_x*l. - cz_parf ⁇ c ⁇ , d5 , do 55536 -2 return 1;
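A hedged sketch of the quadratic (parabolic) interpolation these fragments describe: a parabola is fitted through three neighboring difference values and the offset of its vertex refines the whole-pixel edge position (the function and variable names are illustrative):

    /* Fit a parabola through three consecutive difference values, where
       d_peak is the extremum, and return the subpixel offset (about -0.5
       to +0.5) of the parabola's vertex relative to the peak position. */
    static double parabolic_offset(double d_prev, double d_peak, double d_next)
    {
        double denom = d_prev - 2.0 * d_peak + d_next;
        if (denom == 0.0)
            return 0.0;               /* degenerate: no refinement possible */
        return 0.5 * (d_prev - d_next) / denom;
    }

    /* Subpixel edge position = whole-pixel peak index + parabolic offset,
       e.g. peak_index + parabolic_offset(d4, d5, d6). */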
  • step_size = number of pels in the larger direction (and a fractional number of pels in the other direction).
  • cip_transform throws CGEN_ERR_BADARG, CIP_ERR_PELADDR,

Abstract

A machine vision method analyzes a calibration target (10) of the type having two or more regions (12, 14, 16), each having an 'imageable characteristic' (14, 16), e.g., a different color, contrast, or brightness, from its neighboring region(s). Each region has at least two edges - referred to as 'adjoining edges' (20, 22, 24) - that are linear and that are directed toward and, optionally meet at, a reference point (18) (e.g., the center of the target or some other location of interest). The method includes generating an image of the target (100), identifying in the image features (102, 104, 106) corresponding to the adjoining edges (20, 22, 24), fitting lines to those edges (110, 112, 114, 116, 118), and determining the orientation and/or position of the target from those lines (120, 122).

Description

MACHINE VISION CALIBRATION TARGETS AND METHODS
OF DETERMINING THEIR LOCATION AND ORIENTATION IN AN IMAGE
Reservation of Copyright
The disclosure of this patent document contains material which is subject to copyright protection. The owner thereof has no objection to facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Background of the Invention
The invention pertains to machine vision and, more particularly, to calibration targets and methods for determining their location and orientation in an image.
Machine vision refers to the automated analysis of an image to determine characteristics of objects and other features shown in the image. It is often employed in automated manufacturing lines, where images of components are analyzed to determine placement and alignment prior to assembly. Machine vision is also used for quality assurance. For example, in the pharmaceutical and food packing industries, images of packages are analyzed to insure that product labels, lot numbers, "freshness" dates, and the like, are properly positioned and legible.
In many machine vision applications, it is essential that an object whose image is to be analyzed include a calibration target. Often a cross-shaped symbol, the target facilitates determining the orientation and position of the object with respect to other features in the image. It also facilitates correlating coordinate positions in the image with those in the "real world," e.g., coordinate positions of a motion stage or conveyor belt on which the object is placed. A calibration target can also be used to facilitate determining the position and orientation of the camera with respect to the real world, as well as to facilitate determining the camera and lens parameters such as pixel size and lens distortion.
In addition to cross-shaped marks, the prior art suggests the use of arrays of dots, bulls-eyes of concentric circles, and parallel stripes as calibration targets. Many of these targets have characteristics that make difficult finding their centers and orientations. This typically results from lack of clarity when the targets and, particularly, their borders are imaged. It also results from discrepancies in conventional machine vision techniques used to analyze such images. For example, the edges of a cross-shaped target may be imprecisely defined in an image, leading a machine vision analysis system to wrongly interpret the location of those edges and, hence, to misjudge the mark's center by a fraction of a pixel or more. By way of further example, a localized defect in a camera lens may cause a circular calibration mark to appear as an oval, thereby, causing the system to misjudge the image's true aspect ratio.
In addition to the foregoing, many of the prior art calibration targets are useful only at a limited range of magnifications. Parallel stripes, for example, do not provide sufficient calibration information unless many of them appear in an image. To accommodate this, a machine vision system must utilize lower magnification. However, as the magnification decreases, so does the ability of the machine vision equipment to distinguish between individual stripes. Similar drawbacks limit the usefulness of the other prior art calibration targets for use in all but a narrow range of magnifications.
Though the art suggests the use of checkerboard patterns as alignment marks, the manner in which images of those marks are analyzed by conventional machine systems also limits their utility to a limited range of magnifications. Particularly, prior art systems obtain alignment information from checkerboard marks by identifying and checking their corners, e.g., the eight black (or white) corners in a black-and-white image. By relying on corners, the systems necessitate that images show entire checkerboards, yet, with sufficient resolution to insure accurate detection and analysis. An object of this invention is to provide improved calibration targets and methods for machine vision analysis thereof.
A related object is to provide calibration targets and analysis methods reliable at a wide range of magnifications.
A further object is to provide such methods as can be readily implemented on conventional digital data processors or other conventional machine vision analysis equipment. Yet still another object of the invention is to provide such methods that can rapidly analyze images of calibration targets without undue consumption of resources.
Summary of the Invention
The foregoing objects are among those attained by the invention, which provides in one aspect a machine vision method for analysis of a calibration target of the type having two or more regions, each having a different "imageable characteristic" (e.g., a different color, contrast, or brightness) from its neighboring region(s). Each region has at least two edges ~ referred to as "adjoining edges" - that are linear and that are directed toward and, optionally meet at, a reference point (e.g., the center of the target or some other location of interest). The method includes generating an image of the target, identifying in the image features corresponding to the adjoining edges, and determining the orientation and/or position of the target from those edges.
In another aspect, the invention provides a method as described above for analyzing a target of the type that includes four regions, where the adjoining edges of each region are perpendicular to one another, and in which each region in the target has a different imageable characteristic from its edge- wise neighbor. The edges of those regions can meet, for example, at the center of the target, as in the case of a four-square checkerboard.
In yet another aspect, the invention provides a method as described above for determining an orientation of the target as a function of the angle of the edges identified in the image and for determining the location of the reference point as an intersection of lines fitted to those edges. In regard to the former, the invention provides a method of determining the orientation of a target in an image by applying a Sobel edge tool to the image to generate a Sobel angle image, and by generating an angle histogram from that angle image. In an alternate embodiment, the orientation is determined by applying a Hough line tool to the image and determining the predominant angle of the edges identified by that tool.
In regard to the location of the reference point, one aspect of the invention calls for locating the adjoining edges by applying a caliper vision tool to the image, beginning at an approximate location of the reference point. That approximate location of the reference point can itself be determined by applying a Hough line vision tool to the image in order to find lines approximating the adjoining edges and by determining an intersection of those lines. Alternatively, the approximate location of the reference point can be determined by performing a binary or grey scale correlation to find where a template representing the edges most closely matches the image.
In another alternate embodiment, the approximate location of the reference point is determined by applying a projection vision tool to the image along each of the axes with which the adjoining edges align. A first difference operator vision tool and a peak detector vision tool are applied to the output of the projection tool (i.e., to the projection) in order to find the approximate location of the edges.
The invention has wide application in industry and research applications. It facilitates the calibration of images by permitting accurate determination of target location and orientation, regardless of magnification. Thus, for example, an object bearing a target can be imaged by multiple cameras during the assembly process, with accurate determinations of location and orientation made from each such image.
These and other aspects of the invention are evident in the drawings and in the description that follows.
Brief Description of the Drawings
A more complete understanding of the invention may be attained by reference to the drawings, in which:
Figures 1A - 1C depict calibration targets according to the invention;
Figure 1D depicts the effect of rotation on the target depicted in Figure 1B;
Figure 2 depicts an object according to the invention incorporating a calibration target of the type depicted in Figure 1B;
Figure 3 depicts a machine vision system according to the invention for determining the reference point and orientation of a calibration target;
Figures 4 and 5 depict a method according to the invention for interpreting an image of a calibration target to determine a reference point and orientation thereof; and
Figure 6 illustrates the magnification invariance of a target according to the invention.
Detailed Description of the Illustrated Embodiment
Figures 1A - 1C depict calibration targets according to the invention. Referring to Figure 1A, there is shown a target 10 according to the invention having three regions 12, 14, 16. Each region is bounded by at least two linear edges that are oriented toward a reference location or reference point 18 on the target. Thus, for example, region 12 is bounded by edges 20, 24; region 14 is bounded by edges 20, 22; and region 16 is bounded by edges 22, 24. As evident in the drawings, the edges are shared by adjoining regions and, hence, are referred to below as "adjoining edges." Thus, region 12 shares edge 20 with region 14; region 14 shares edge 22 with region 16; and region 16 shares edge 24 with region 12. In the illustration, the reference point 18 is at the center of target 10, though, those skilled in the art will appreciate that the reference point can be positioned elsewhere.
Each of the regions has a different imageable characteristic from its neighboring regions. As used herein, an "imageable characteristic" is a characteristic of a region as imaged by a machine vision system (e.g., of the type shown in Figure 3) and, particularly, as imaged by an image capture device used by such a system. For example, in the illustration, region 12 has the characteristic of being colored black; region 14, white; and region 16, gray. In addition to color, imageable characteristics useful with conventional machine vision systems — which typically utilize image capture devices operational in visual spectrum ~ include contrast, brightness, and stippling.
Those skilled in the art will appreciate that any other characteristics by which a region may be identified and distinguished in an image are suitable for practice of the invention. Thus, for example, for a machine vision system that utilizes a temperature- sensitive (or infrared) image capture device, an imageable characteristic is temperature. By way of further example, for a machine vision system that utilizes a nuclear decay radiation-sensitive image capture device, an imageable characteristic is emitted radiation intensity or frequency.
As shown in the illustration, the adjoining edges 20, 22, 24 comprise straight linear segments. Those edges are implicitly defined as the borders between regions that, themselves, have different imageable characteristics. Thus, for example, edge 20 is a straight linear segment defined by the border between black region 12 and white region 14. Likewise, edge 24 is defined by the border between black region 12 and gray region 16. Further, edge 22 is defined by the border between white region 14 and grey region 16.
Figure IB depicts a calibration target 30 according to the invention having four rectangular (and, more particularly, square) regions 32, 34, 36, 38. As above, each region is bounded by at least two linear edges that are oriented toward a reference point 40 at the center of the target. Thus, for example, region 32 is bounded by edges 42, 44; region 34 is bounded by edges 42, 46; and so forth. As above, these edges are shared by adjoining regions. Thus, region 32 shares edge 42 with region 34, and so forth. Each region in target 30 has a different imageable characteristic from its edge-wise neighbor. Hence, regions 32 and 36 are white, while their edge-wise adjoining neighbors 34, 38 are black.
Figure 1C depicts a calibration target 50 according to the invention having five regions 52, 54, 56, 58, 60, each having two linear edges directed toward a reference point 62. The adjoining regions are of differing contrast, thereby, defining edges at their common borders, as illustrated. Although the edges separating the regions 52 - 60 of target 50 are directed toward the reference point 62, they do not meet at that location. As evident in Figure 1C, no marker or other imageable element is provided at reference point 62.
Those skilled in the art will appreciate that, in addition to the calibration targets shown in Figures 1A - 1C, targets with still more regions (or as few as two regions) and shapes, otherwise in accord with the teachings hereof, fall within the scope of the invention. Moreover, it will be appreciated that targets may be of any size and that their regions need not be of uniform size. Still further, it will be appreciated that the outer borders of the targets need not be linear and may, indeed, take on any shape. Figure 2 depicts an object according to the invention for use in machine vision imaging, detection, and/or manipulation having a calibration target according to the invention coupled thereto. In the illustration, the object is an integrated circuit chip 70 having coupled to the casing thereof a calibration target 72 of the type shown in Figure 1B. Other targets according to the invention, of course, can likewise be coupled to the object 70. The targets can be coupled to the object by any known means. For example, they can be molded onto, etched into, or printed on the surface of the object. By way of further example, decals embodying the targets can be glued, screwed or otherwise affixed to the object. Moreover, by way of still further example, calibration plates incorporating the targets can be placed on the object and held in place by friction. In addition to integrated circuit chips, the object can include any other objects to which a target can be coupled, such as printed circuit boards, electrical components, mechanical parts, containers, bottles, automotive parts, paper goods, etc.
Figure 3 depicts a machine vision system 80 according to the invention for determining the reference point and orientation of an object 82 having coupled thereto a calibration target 84 according to the invention and, particularly, a four-region target of the type shown in Figure IB. The system 80 includes an image capture device 86 that generates an image of a scene including object 82. Although the device may be responsive to the visual spectrum, e.g., a conventional video camera or scanner, it may also be responsive to emissions (or reflections) in other spectra, e.g., infrared, gamma-ray, etc. Digital image data (or pixels) generated by the capturing device 86 represent, in the conventional manner, the image intensity (e.g., contrast, color, brightness) of each point in the field of view of the capturing device.
That digital image data is transmitted from capturing device 86 via a communications path 88 to an image analysis system 90. This can be a conventional digital data processor, or a vision processing system of the type commercially available from the assignee hereof, Cognex Corporation, as programmed in accord with the teachings hereof to determine the reference point and orientation of a target image. The image analysis system 90 may have one or more central processing units 92, main memory 94, input-output system 96, and disk drive (or other mass storage device) 98, all of the conventional type.
The system 90 and, more particularly, central processing unit 92, is configured by programming instructions according to teachings hereof for operation as illustrated in Figure 4 and described below. Those skilled in the art will appreciate that, in addition to implementation on a programmable digital data processor, the methods and apparatus taught herein can be implemented in special purpose hardware.
Referring to Figure 4, there is shown a machine methodology according to the invention for interpreting an image of a target 84 to determine its reference point and orientation. The discussion that follows is particularly directed to identifying a four-region target of the type shown in Figure 1B. Those skilled in the art will appreciate that these teachings can be readily applied to finding targets according to the invention, as well as to other targets having detectable linear edges that are oriented toward a reference location or reference point on the target, e.g., a prior art cross-shaped target. For convenience, in the discussion that follows, such linear edges are referred to as "adjoining edges," regardless of whether they are from calibration targets according to the invention or from prior art calibration targets.
In step 100, an image of the target 84 (or of the target 84 and object 82) is generated, e.g., using image capture device 86, and input for machine vision analysis as discussed below. The image can be generated real time, retrieved from a storage device (such as storage device 98), or received from any other source.
In steps 102 - 108, the method estimates the orientation of the target in the image using any of many alternative strategies. For example, as shown in step 102, the method determines the orientation by applying a conventional Hough line vision tool that finds the angle of edges discernable in the image. In instances where the target occupies the entire image, those lines will necessarily correspond to the adjoining edges. Where, on the other hand, the target occupies only a portion of the image, extraneous edges (e.g., from other targets) may be evident in the output of that tool. Although those extraneous edges can generally be ignored, in instances where they skew the results, the image can be windowed so that the Hough vision tool is only applied to that portion that contains the target. Once the angles of the lines have been determined by the Hough line tool, the orientation of the image is determined from the predominant ones of those angles. Alternatively, the angle of the image can be determined by taking a histogram of the angles.
The Hough vision tool used in step 102 may be of the conventional type known and commercially available for finding the angle of lines in an image. A preferred such tool is the Cognex Line Finder, commercially available from the Assignee hereof, Cognex Corporation.
An alternative to using a Hough vision tool is shown in step 106. There, the illustrated method determines the orientation of the target by applying a Sobel edge tool to the image to find the adjoining edges. Particularly, that tool generates a Sobel angle image that reveals the direction of edges in the image. As above, where the target occupies the entire image, the adjoining edges will be the only ones discerned by the Sobel edge tool image. Where, on the other hand, the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
The Sobel edge tool may be of the conventional type known and commercially available for finding lines in an image. A preferred such tool is the Cognex Edge Detection tool, commercially available from the Assignee hereof, Cognex Corporation.
Once the Sobel angle image is generated, in step 106, the orientation of the target in the image is determined by generating a histogram of the edge angle information; see, step 108. From that histogram, the target orientation can be determined by taking a one-dimensional correlation of that histogram with respect to a template histogram of a target oriented at 0°. Where a Sobel magnitude image is generated, in addition to the Sobel angle image, such a histogram can be generated by counting the number of edges greater than a threshold length at each orientation. As a still further alternative to applying a Hough vision tool or Sobel edge tool, the method contemplates obtaining the angle of orientation of the target from the user (or operator). To this end, the user may enter angle orientation information via a keyboard or other input device coupled with digital data processor 90.
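A minimal sketch of this histogram-and-correlation approach, assuming an 8-bit Sobel angle image, a magnitude threshold in place of an edge-length threshold, and an illustrative number of bins:

    #include <string.h>

    #define N_BINS 64   /* illustrative angular resolution */

    /* Histogram of Sobel edge angles, counting only pixels whose edge
       magnitude exceeds a threshold so that weak edges do not contribute. */
    static void angle_histogram(const unsigned char *mag, const unsigned char *ang,
                                long n_pixels, int mag_thresh, long hist[N_BINS])
    {
        memset(hist, 0, N_BINS * sizeof(long));
        for (long i = 0; i < n_pixels; i++)
            if (mag[i] > mag_thresh)
                hist[ang[i] * N_BINS / 256]++;
    }

    /* One-dimensional circular correlation of the measured histogram against
       a template histogram of a target oriented at 0 degrees; the shift with
       the highest correlation gives the target's rotation. */
    static int best_rotation_bin(const long hist[N_BINS], const long templ[N_BINS])
    {
        int best_shift = 0;
        long best = -1;
        for (int s = 0; s < N_BINS; s++) {
            long corr = 0;
            for (int b = 0; b < N_BINS; b++)
                corr += hist[b] * templ[(b + s) % N_BINS];
            if (corr > best) { best = corr; best_shift = s; }
        }
        return best_shift;   /* rotation is roughly best_shift * 360 / N_BINS degrees */
    }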
In steps 110 - 118, the method determines the location, i.e., coordinates, of the target reference point in the image. Particularly, in step 110, the method can apply a Hough vision tool, as described above, to find the angle of lines discernable in the image. A conventional Hough vision tool determines, in addition to the angle of lines in an image, the distance of each line, e.g., from a central pixel. As above, where the target occupies the entire image, those lines will be the only ones discernable by the Sobel edge tool image. Where, on the other hand, the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
As above, the Hough vision tool used in step 104 may be of the conventional type known and commercially available for finding the angle and position of lines in image. Once again, a preferred such tool is the Cognex Line Finder, commercially available from the Assignee hereof, Cognex Corporation. Those skilled in the art will, of course, appreciate that steps 102 and 110 can be combined, such that a single application of the Hough vision tool provides sufficient information from which to determine both the orientation of the target in the image and its reference point.
As an alternative to using a Hough vision tool, the method can apply a projection vision tool to the image in order to find the position of the lines discernable in the image; see, step 112. The projection tool, which maps the two-dimensional image of the target into a one-dimensional image, is applied along the axes defined by the edges in the image. As those skilled in the art will appreciate, the location of the edges can be discerned by finding the peaks in the first derivatives of each of those projections. As above, where the target occupies the entire image, those lines will be the only lines discernable by the Sobel edge tool image. Where, on the other hand, the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out. The projection vision tool used in step 112 may be of the conventional type known and commercially available for mapping a two-dimensional image of the target into a one-dimensional image. A preferred such tool is that provided with the Cognex Caliper tool commercially available from the Assignee hereof, Cognex Corporation.
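A small sketch of the projection-and-first-difference idea for one axis, assuming an axis-aligned, 8-bit image; it is not the Cognex Caliper tool, only an illustration of the principle:

    #include <stdlib.h>

    /* Project an 8-bit image onto the x-axis (sum each column), then return
       the column at which the first difference of the projection is largest
       in absolute value; for an axis-aligned target this approximates the
       position of a vertical adjoining edge. */
    static int strongest_vertical_edge(const unsigned char *img, int w, int h)
    {
        long *proj = calloc((size_t)w, sizeof(long));
        if (proj == NULL)
            return -1;

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                proj[x] += img[y * w + x];

        int best_x = 1;
        long best_diff = 0;
        for (int x = 1; x < w; x++) {
            long d = labs(proj[x] - proj[x - 1]);
            if (d > best_diff) { best_diff = d; best_x = x; }
        }
        free(proj);
        return best_x;
    }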
In step 114, the method uses the information generated in steps 110 and 112 to compute the location of the reference point, particularly, as the intersection of the lines found in those steps 110 and 112.
As an alternative to using the Hough vision tool and the projection tool, the method can determine the location of the reference point by performing a binary or grey scale correlation on the image; see step 116. To this end, the method uses, as a template, a pattern matching the expected arrangement of the sought-after edges, to wit, a cross-shaped pattern in the case of a target of the type shown in Figure 1B. The use of correlation vision tools for this purpose is well known in the art. The template for such an operation is preferably generated artificially, although it can be generated from prior images of similar targets.
As still another alternate to the Hough vision tool and the projection tool, the method can apply a grey-scale image registration using the sum of absolute differences metric between the image and a template; see step 118. To this end, the method uses, as a template, a pattern matching the expected arrangement of the sought-after edges, to wit, a cross-shaped pattern in the case of a target of the type shown in Figure 1B. The template for such an operation is preferably generated artificially, although it can be generated from prior images of similar targets. A preferred grey-scale image registration tool is disclosed in United States Patent No. 5,548,326, the teachings of which are incorporated herein by reference.
Although steps 110 - 118 can be used to determine the approximate location of the reference point of the target in the image, the method utilizes optional steps 120 and 122 to refine that estimate. These two steps are invoked one or more times (if at all) in order to make that refinement. In step 120, the method applies a conventional caliper vision tool to find points in the image that define the adjoining edges of the regions. On the first invocation of step 120, the method applies calipers along fifty points along each edge (though those skilled in the art will appreciate that other numbers of points can be used), beginning with points closest to the estimate of the reference point, as discerned in steps 110 - 118. The calipers are preferably applied a small distance away from the actual estimate of the reference point to avoid skewing the analysis due to possible misprinting of the target at that point, a missing pattern at that point (e.g., Figure 1C), or a too-high spatial frequency at that point (e.g., Figure 1B). In step 120, the method then fits a line to the points found along each edge by the caliper tool, preferably, using a conventional least squares technique.
In step 122, the method computes a refined location of the reference point as the intersection of the lines identified in step 120. Although conventional geometric calculations can be performed for this purpose, preferably, the reference point is computed using conventional least squares techniques. Preferably, when examining the image of a symmetric calibration target of the type shown in Figure 1B, the method utilizes the same number of points on either side of (and closest to) the reference point for purposes of fitting each line. This minimizes the bias otherwise introduced by a conventional edge detection technique in finding edges that are defined only by dark-to-light (or light-to-dark) transitions.
Calibration targets of the type shown in Figure 1B are advantageously processed by a method according to the invention insofar as they further minimize bias otherwise introduced by conventional edge detection techniques. In this regard, it will be appreciated that such bias is reduced by the fact that "opposing" adjoining edges (i.e., edges that oppose one another across the reference point) define straight linear segments that change polarity across the reference point. That is, those segments are defined by regions that transition ~ preferably, equally in magnitude ~ from light-to-dark on one side of the reference point, and from dark-to-light on the other side. This is true for all "symmetric" calibration targets according to the invention, i.e., targets in which opposing edges define straight linear segments that are opposite polarity on either side of the reference point.
On the second and subsequent invocations of step 120, the method specifically applies the caliper tool at points along the lines found in the previous iteration. Preferably, these points are at every pixel in the image that lies along the line, though, those skilled in the art will appreciate that fewer points can be used. The calipers can be applied orthogonal to the lines, though, in the preferred embodiment, they are applied along the grid defined by the image pixels. The caliper range decreases with each subsequent invocation of step 120. The method continues applying the calipers, beginning at the estimated center point until one of the four following situations occurs: no edge is found by the caliper applied at the sample point; more than one edge is found by the caliper and the highest scoring edge is less than twice the score of the second highest scoring edge (this 2X comes from the CONFUSION_THRESHOLD); the distance between a computed edge point and the nominal line (computed from the previous invocation of step 120) is larger than a threshold (which threshold decreases with each subsequent invocation of step 120); or, the caliper extends outside of the image. As above, in step 120, the method then fits a line to the points found along each edge by the caliper tool, preferably, using a conventional least squares technique.
In the second and subsequent invocations of step 122, the method computes a refined location of the reference point as the intersection of the lines identified in step 120. As above, although conventional geometric calculations can be performed for this purpose, preferably, the reference point is computed using conventional least squares techniques.
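A brief sketch of the two numerical steps relied on here, fitting a line to the caliper points by least squares and intersecting two fitted lines; the implicit-line parameterization and the total-least-squares fit are choices made for this sketch, not necessarily those of the illustrated embodiment:

    #include <math.h>

    typedef struct { double a, b, c; } line_t;   /* a*x + b*y = c, with a^2 + b^2 = 1 */

    /* Least-squares (total least squares) fit of a line to n points: the line
       passes through the centroid, oriented along the direction of greatest
       scatter of the points. */
    static line_t fit_line(const double *x, const double *y, int n)
    {
        double mx = 0.0, my = 0.0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n;  my /= n;

        double sxx = 0.0, sxy = 0.0, syy = 0.0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - mx, dy = y[i] - my;
            sxx += dx * dx;  sxy += dx * dy;  syy += dy * dy;
        }
        double theta = 0.5 * atan2(2.0 * sxy, sxx - syy);   /* line direction */
        line_t L = { -sin(theta), cos(theta), 0.0 };         /* unit normal    */
        L.c = L.a * mx + L.b * my;
        return L;
    }

    /* Intersection of two fitted lines; returns 0 if they are nearly parallel. */
    static int intersect(line_t p, line_t q, double *x, double *y)
    {
        double det = p.a * q.b - q.a * p.b;
        if (fabs(det) < 1e-12)
            return 0;
        *x = (p.c * q.b - q.c * p.b) / det;
        *y = (p.a * q.c - q.a * p.c) / det;
        return 1;
    }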
Referring to Figure 5, there is also shown a machine methodology according to the invention for interpreting an image of a target 84 to determine its reference point and orientation. Particularly, in step 150, the method calls for generating an image of a target 84 and, particularly, of a target according to the invention having two or more regions, each region being defined by at least two linear edges that are directed toward a reference point, and having at least one of the regions having a different imageable characteristic from an adjacent region. Step 150 can be effected in the manner described in connection with step 100 of Figure 4, or equivalents thereof. In step 152, the method analyzes the image to generate an estimate of an orientation of the target in the image. Step 152 can be effected in the manner described in connection with steps 102 - 108 of Figure 4, or equivalents thereof.
In step 154, the method analyzes the image to generate an estimate of a location of the target's reference point. Step 154 can be effected in the manner described in connection with steps 110 - 118 of Figure 4, or equivalents thereof.
In step 156, the method analyzes the image to refine its estimates of the location of the reference point in the image and the orientation of the target in the image. Step 156 can be effected in the manner described in connection with steps 120 - 122 of Figure 4, or equivalents thereof.
Calibration targets and methods for analysis according to the invention are advantageous over prior art targets and methods insofar as they are magnification invariant. By analyzing the adjoining edges of targets, methods according to the invention insure reliance on features (to wit, regions) that retain the same imageable appearance regardless of magnification. This is in contrast to prior art targets and methods, which rely on individual lines (or dots) to define calibrating features. As noted above, the imaging appearances of such lines and dots change with varying degrees of magnification. Even the prior art methods that analyze checkerboard targets rely on analysis of corners, which are not magnification invariant.
The magnification invariance of targets and methods according to the present invention is illustrated in Figures 6A - 6C. Referring to Figure 6A, there is shown an imaging setup wherein camera 200 images a target 202 (on object 204) from a height x. An image generated by camera 200 is displayed on monitor 206 of workstation 208. Referring to Figure 6B, there is shown an identical imaging setup, except insofar as a camera (of identical magnification) images a target (of identical size) from a greater height, x'. Likewise, Figure 6C shows an identical imaging setup, except insofar as a camera (again, of identical magnification) images a target (again, of identical size) from a still greater height, x". Comparing monitor images depicted in Figures 6A - 6C, it is seen that the images of the targets, generated by the cameras and displayed on the monitors, are identical in appearance regardless of the relative heights (x, x', and x") of the camera. This magnification invariance results from the fact that the target has the same appearance regardless of height (or equivalently, magnification), within specified limits thereof. Those skilled in the art will appreciate that those limits are greater than the analogous limits for prior art targets (e.g., cross hairs, parallel stripes, etc.).
Still another advantage of calibration targets and methods according to the invention is that they permit angular orientation to be determined throughout a full 360° range. With reference to Figure 1D, for example, the relative positions of the regions can be used to determine the overall orientation of the target, i.e., whether it is rotated 0°, 90°, 180°, or 270°. This information can be combined with a determination of relative orientation made by analysis of the adjoining edges as discussed above to determine the precise position of the target.
APPENDIX I
Patent Application for
MACHINE VISION CALIBRATION TARGETS AND METHODS OF DETERMINING THEIR LOCATION AND ORIENTATION
Vision Tool Description
THE FOLLOWING APPENDIX IS NOT BELIEVED TO BE NECESSARY FOR ENABLEMENT OR BEST MODE DISCLOSURE OF THE INVENTION DISCLOSED AND
CLAIMED IN THE ACCOMPANYING APPLICATION.
Edge Detection Tool
This chapter describes the Edge Detection tool, which includes edge detection and peak detection.
Edge detection takes an input image and produces two output images: an image of the edge magnitude of each input pixel and an image of the edge angle of each input pixel. When combined with other Cognex software tools, the information produced by edge detection can be used to locate objects within an image. Edge pixel information can be used to detect the rotation of an object or defects such as scratches, cracks, or particles. You can also gauge how sharply an image is focused with edge pixel information.
Peak detection is useful in any application in which knowledge of the local maximum values in a two-dimensional space is useful. Peak detection takes an input image and produces an output image containing only those pixels in the input image with higher values than neighboring pixels. A typical input image to peak detection is an edge magnitude image; the output image contains only the highest magnitude edge pixels. The edge detection function can optionally use peak detection to postprocess the edge detection results so that only the strongest edges remain.
This chapter has nine sections as follows:
An Overview of Edge Detection describes the goals of edge detection, defines an edge pixel, and describes how the Edge Detection tool finds edge pixels. This section also explains the two properties of a pixel that are calculated by the Edge Detection tool, edge magnitude and angle, and ends with a sample application.
Using Edge Detection describes, in general terms, the interface to the Edge Detection tool. This section contains a discussion of the compression tables that affect the amount of memory used by the edge detector.
Edge Detection Enumerations and Data Structures describes the data structures and enumerations that the edge detection functions use. Types and data structures that support peak detection are discussed in Peak Detection Enumeration Types and Data Structures below.
Edge Detection Functions describes the functions that implement edge detection
Maximizing the Performance of VC2 Vision Coprocessor lists the edge detection parameter settings that you should avoid if you want to maximize the performance of the VC2 vision coprocessor during edge detection.
Overview of Peak Detection defines peak pixel and gives examples of some applications of peak detection.
Using Peak Detection describes the interface to peak detection.
Peak Detection Enumeration Types and Data Structures describes the peak detection enumerations and data structures.
Peak Detection Functions describes the peak detection functions
Some Useful Definitions
edge pixel: Pixel that has a different value from one or more of its eight neighboring pixels.
edge detector: Operator that finds edge pixels in an image.
compression tables: Two tables that are created during edge detection initialization, the edge compression table and the magnitude compression table. These tables are used by the Edge Detection tool to map data into a smaller range of values.
Sobel operators: Two 3x3 operators that the Edge Detection tool uses to locate edge pixels in an image.
horizontal edge component: Value returned by the horizontal Sobel operator when it is applied to a pixel in an image.
vertical edge component: Value returned by the vertical Sobel operator when it is applied to a pixel in an image.
edge magnitude: Value that increases as the difference in grey levels between neighboring pixels increases.
edge angle: Orientation of an edge, with respect to the x-axis of the image, at an edge pixel. The Edge Detection tool defines the angle of a pixel as arctan(v/h), where v is the vertical edge component and h is the horizontal edge component of the pixel.
peak pixel: Pixel with a value greater than or equal to some or all of its neighboring pixels' values.
peak detection: Finding some or all peak pixels in an image.
8-point neighborhood: Pixels that are horizontally, vertically, and diagonally adjacent to a pixel.
4-point neighborhood: Pixels that are horizontally and vertically adjacent to a pixel.
2-point neighborhood: Neighbors of a pixel along a single axis. Peak detection supports a horizontal and a vertical 2-point neighborhood operator.
symmetric peak: Peak whose pixel value is greater than or equal to the values of each of its neighbors.
asymmetric peak: Peak whose pixel value is greater than the values of its left and lower neighbors and greater than or equal to the values of its right and upper neighbors.
all-axes peak: Pixel that satisfies the peak definition (symmetric or asymmetric) along all axes in its neighborhood.
single-axis peak: Pixel that satisfies the peak definition (symmetric or asymmetric) along at least one axis in its neighborhood.
plateau: In peak detection, a region of contiguous pixels of the same value.
An Overview of Edge Detection
The Edge Detection tool finds edge pixels in an image. This overview of edge detection contains the following descriptions:
• Definition of the problem of edge detection
• Overview of the tasks that the Edge Detection tool performs
• Definition of edge pixel
• Description of the Sobel operator, which is used by the Edge Detection tool to locate edge pixels
• Definitions of the angle and magnitude of an edge pixel, and descriptions of the two images produced by the Edge Detection tool: the magnitude image and the angle image
• Description of edge detection preprocessing
• Description of edge detection postprocessing
• Description of a sample application that uses both the magnitude and the angle images to create an angle "signature" of a shape
Edge Detection
The Edge Detection tool locates edges and determines each edge's angle and magnitude.
An edge occurs wherever adjacent areas of an image have different grey values. Using this definition, the triangle in Figure 66 has a single edge.
Figure 66. A figure with an edge
The definition of edge can be further refined as the directed border between grey areas of an image, where direction is defined as a vector normal to the edge. Using this definition, the triangle in Figure 67 has three directional edges, represented by the three vectors. The angle of the edge is the counterclockwise angle of the vector normal to the edge, with respect to the horizontal axis of the image. See the section Angle Image on page 181 for a discussion of edge angle.
Figure 67. Vectors indicating the edge angles of a triangle
Along with an angle, an edge has a magnitude, which reflects the amount of the difference between grey levels on either side of the edge. As the difference in grey levels increases, the magnitude increases. See the section The Magnitude Image on page 178 for a discussion of edge magnitude.
The triangle in Figure 68 has the same three edges as the triangle in Figure 67; however, these edges are of lower magnitude. This lower magnitude is illustrated by the shortened length of the three direction vectors.
Figure 68 A wangle with lowmagnlluόβ edges Edge Detection Tool 4
Edges are actually located pixel by pixel Figure 69 contains a magnified view of a poruon of an edge in an Image. Each cell in the figure represents the grey level of a single pixel Notice that on this "microscopic" level there are many edges between pixels in which the grey levels on either side of the edge differ by a few percentage points
Figure imgf000027_0001
Figure 60 Grey levels in a magnified portion of an edge
The Edge Detection tool locates edges in an image by identifying each pixel that has a different value from one of its neighboring pixels. It calculates the angle ana magnitude of each edge pixel It also provides a means of classifying edges, so that low-magnitude edges are not reported
Edge Detection Tool
Edge Pixels
A pixel is an edge pixel if it has a different value from at least one of its eight neighboring pixels and is not on the border of an image. In Figure 70, the four shaded pixels are edge pixels.
Figure 70. The shaded pixels are edge pixels
Figure 71 contains a grid representing grey levels in an image, with the highest magnitude edge pixels shaded. Notice that border pixels (pixels along the edge of the image) are not edge pixels.
Figure 71. Highest magnitude edge pixels in an image
The Edge Detection Operators
The Edge Detection tool finds edge pixels using the Sobel 3x3 neighborhood operators. During edge detection, the Sobel operators are applied to each pixel in an input image. This operation produces two values for each pixel: one value represents the vertical edge component for the pixel, the other value represents the horizontal edge component for the pixel. These two values are then used to compute the edge magnitude and edge angle of the pixel.
Sobel edge detection uses two 3x3 neighborhood operators to locate edge pixels in an image. The horizontal Sobel operator detects the horizontal (x) edge component for a pixel. The vertical Sobel operator detects the vertical (y) edge component for a pixel. Figure 72 shows the Sobel operators.
Figure 72. Sobel operators (horizontal operator and vertical operator)
The Sobel operator works only on pixels with eight neighboring pixels. When the Sobel operator is applied to a pixel on the border of an image, the result is defined to be 0 because there are not enough neighboring pixels to calculate an edge value.
The horizontal and vertical Sobel operators are 3x3 linear operators. These operators compute a value for a pixel (x,y) in the following way:
• The grey level of the pixel (x,y) is multiplied by the value in the center of the 3x3 linear operator.
• The grey level of the neighboring pixel (x-1,y-1) is multiplied by the first value in the top row of the 3x3 linear operator.
• The grey level of the neighboring pixel (x,y-1) is multiplied by the second value in the top row of the 3x3 linear operator.
• This continues until the product of each pair of values is calculated.
• These products are then summed to produce the output. For the Sobel operator, the value is interpreted as the horizontal or vertical edge component of the pixel.
If the result of applying at least one of the Sobel operators to a pixel is nonzero, that pixel is an edge pixel.
Figure 73. Applying the Sobel operator (the Sobel operator and the image pixels)
In Figure 73, the horizontal Sobel operator is applied to the shaded pixel in the image. The resultant edge pixel value is 6, calculated by multiplying each of the nine operator values by the corresponding image pixel value and summing the nine products.
Note: The range of pixel values in Figure 73 and in most examples in this document is much smaller than the typical range of pixel values in an image. This is done to simplify the examples.
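The same computation can be expressed directly in code. The sketch below is illustrative only and is not part of the Edge Detection tool's interface; the kernel values shown are the conventional Sobel coefficients (assumed to match Figure 72), and the helper name sobel_at is hypothetical.

/* Conventional 3x3 Sobel kernels (assumed values; the tool's exact
   sign conventions are shown in Figure 72). */
static const int sobel_h[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
static const int sobel_v[3][3] = { { 1, 2, 1}, { 0, 0, 0}, {-1, -2, -1} };

/* Apply one 3x3 operator to the pixel at (x, y) of a row-major image.
   The caller must ensure the pixel has all eight neighbors; for border
   pixels the tool defines the result to be 0. */
static int sobel_at(const unsigned char *img, int width,
                    int x, int y, const int op[3][3])
{
    int sum = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            sum += op[dy + 1][dx + 1] * img[(y + dy) * width + (x + dx)];
    return sum;
}

Applying sobel_at with the horizontal kernel and then the vertical kernel to each non-border pixel yields the two edge component values described above.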
Figure 74 shows all the vertical and horizontal edge component values for a small image. The upper grid is the image; the values represent the grey levels of each pixel. The grid to the lower left of the image contains horizontal edge component values for the image. Each cell in the grid contains the result of applying the Sobel horizontal operator to the corresponding input image pixel. The grid to the lower right contains the vertical edge components for the image. It is computed using the Sobel vertical operator.
Figure 74. Horizontal and vertical edge component values computed with the Sobel operators
The horizontal and vertical edge component values computed with the Sobel operator are not returned by the edge detection functions. They are used to compute the edge magnitude and edge angle of each pixel in the image.
The Output Images
Edge magnitudes are stored in an output image that shows the edge magnitude of each corresponding pixel in the input image. Edge angles are stored in another output image that shows the edge angle of each pixel. You can choose which output images to create: magnitude, angle, or both.
These two images are described in this section.
The Magnitude Image
The edge magnitude for a given pixel is a function of the horizontal and vertical edge components of the pixel. If x is the horizontal edge component value and y is the vertical edge component value for some pixel, then the edge magnitude M for that pixel is defined as follows:
M = sqrt(x^2 + y^2)
This formula suggests a geometric interpretation of the data: if x is the horizontal edge component and y is the vertical edge component, the magnitude is a vector from the origin of a two-dimensional coordinate system to the point (x,y). Figure 75 shows the geometric relationship between the edge magnitude and the two edge components.
Figure 75. Geometric representation of edge magnitude
Note: For implementation-specific reasons, the Edge Detection tool uses the formula described above to compute edge magnitude and then scales the magnitude upward by approximately 16%.
You can choose the number of bits that can be used to express magnitude. Because magnitude values computed with vertical and horizontal edge component values can exceed this number of bits, magnitude values are compressed to an integer value that can be expressed in the number of bits you have specified. You can control how magnitude values are compressed: you can choose a method supplied by the Edge Detection tool, such as a logarithmic or linear map, or you can supply a magnitude compression map of your own. For descriptions of the maps supplied by the Edge Detection tool and for information on using your own map, see the section Two Compression Tables on page 192.
The size and depth of the magnitude image are the same as those of the input image. Since all border pixels have horizontal and vertical edge pixel values of 0, the border of the magnitude image contains all zeros. You can display an edge magnitude image; higher magnitude edges are brighter.
Optionally, edge detection can return the number of edge pixels in the input image and the sum of all edge magnitude values. By dividing the sum of all edge magnitude values by the size of the image, you can compute the average edge magnitude of an image. If you have two images of the same scene, you can determine which image is in sharper focus by comparing the average edge magnitudes of the two images; the image with the higher average magnitude is in sharper focus.
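For illustration only (not the tool's API), the magnitude computation and the focus comparison described above could be sketched as follows; the implementation-specific 16% scaling and any magnitude compression are omitted.

#include <math.h>

/* Unscaled Sobel edge magnitude from the horizontal (h) and vertical (v)
   edge components of a pixel: M = sqrt(h*h + v*v). */
static double edge_magnitude(int h, int v)
{
    return sqrt((double)h * h + (double)v * v);
}

/* Average edge magnitude of an image, given the sum of all edge
   magnitude values and the image dimensions.  Of two images of the
   same scene, the one with the higher average is in sharper focus. */
static double average_magnitude(double magnitude_sum, int width, int height)
{
    return magnitude_sum / ((double)width * height);
}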
Edge Detection Tool
Figure 76 contains an image, the image's horizontal and vertical edge components computed with the Sobel operators, and the output image containing the edge magnitude values for each pixel. The edge pixels with the highest magnitude, representing the actual edge, are shaded.
Figure 76. Edge magnitude values computed with the Sobel operators (grey levels, horizontal and vertical edge component values, and Sobel edge magnitude values)
The Angle Image
The edge detection functions produce an image containing the angle of each edge pixel. If x is the horizontal edge component value and y is the vertical edge component value, the edge angle is the counterclockwise angle between the horizontal axis and the magnitude vector.
Figure 77 shows the geometric relationship between the edge angle, the edge magnitude, and the two edge components. In this figure, M is the magnitude vector and θ is the edge angle.
Figure 77. Geometric interpretation of edge angle
Edge angles are stored in the angle image in binary form. They are represented as fixed-point numbers with the decimal point to the left of the most significant bit and are interpreted as fractions of 360°. You can choose the maximum number of bits available to express an angle; this value must be less than or equal to 8. If n is the maximum number of bits used to express angle, and x is the binary representation of an angle, you can convert x to degree notation by using the following formula:
angle(x) = (360 * x) / 2^n
Note that with a 6-bit representation, angles can be expressed to the nearest 5.625°; with a 7-bit representation, an angle can be expressed to the nearest 2.8°; with an 8-bit representation, an angle can be expressed to the nearest 1.4°.
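A minimal sketch of that conversion (illustrative only; this is not a function supplied by the Edge Detection tool):

/* Convert an n-bit binary angle value x, interpreted as a fraction of
   360 degrees, to degrees: angle = 360 * x / 2^n. */
static double binary_angle_to_degrees(unsigned int x, unsigned int n)
{
    return 360.0 * (double)x / (double)(1u << n);
}

For example, with n = 8, a stored value of 64 converts to 90°.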
The edge angle is computed using the arc tangent function. If x is the horizontal edge component and y is the vertical edge component for a pixel, the edge angle for that pixel is calculated as shown in Table 3.
Table 3. Formulae for computing edge angles
The computation is designed so that an edge angle θ is in the following range:
0° ≤ θ < 360°
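Because Table 3 is not reproduced here, the following hedged sketch uses the C library's quadrant-aware arc tangent to produce an edge angle in the same 0° to 360° range; the tool's per-quadrant formulas are assumed to reduce to this.

#include <math.h>

/* Edge angle in degrees, in the range [0, 360), from the vertical (v)
   and horizontal (h) edge components of a pixel. */
static double edge_angle_degrees(double v, double h)
{
    const double pi = 3.14159265358979323846;
    double deg = atan2(v, h) * 180.0 / pi;   /* (-180, 180] */
    if (deg < 0.0)
        deg += 360.0;                        /* [0, 360) */
    return deg;
}

For example, v = -4 and h = 0 gives 270°, and v = h = 1 gives 45°.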
Figure 78 contains six rows. Each row contains (from left to right) an image with the central pixel shaded, the vertical and horizontal edge component values of the shaded pixel, the image with a vector superimposed showing the edge angle of the central pixel, and the formula used to compute the edge angle. Notice that the equations defining the edge angle value are designed so that the vector always points to the brighter side of the edge.
Figure 78. Computing the angle of various edges. The rows include, for example:
• Vertical edge = -4, horizontal edge = 0: angle = arctan(-4/0) = 270°
• Vertical edge = 0, horizontal edge = 4: angle = arctan(0/4) = 0°
• Vertical edge = 1, horizontal edge = 1: angle = arctan(1/1) = 45°
• Vertical edge = 4, horizontal edge = 0: angle = arctan(4/0) = 90°
The remaining rows show edges at 225° and 135°.
By setting a flag in the edge detection parameters, you can specify that you want angles computed in the range 0° to 180° instead of the default range, 0° to 360°. When you select the smaller range, an edge angle n that is greater than 180° maps to edge angle (n - 180°). For instance, 300° maps to 120°, and 270° maps to 90°.
Use the reduced angle range when you want the same edge angle results regardless of the polarity of the image. As shown in Figure 79, when the range of angles is restricted to the range 0° to 180°, the same edge angles are computed for a black object on a white background as for a white object on a black background. When the range of angles is 0° to 360° (also Figure 79), this is not the case: reversing the polarity of the image reverses the edge pixel angle.
Figure 79. Effect of reversed polarity on edge angles for the two ranges
The binary representation of an angle in the range 0° to 180° is similar to the representation of an angle in the range 0° to 360°. If n is the maximum number of bits used to express angle and x is the binary representation of an angle in the range 0° to 180°, you can convert x to degree notation by applying the following formula:
angle(x) = (180 * x) / 2^n
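As a sketch only, the reduced-range mapping described above can be written as:

/* Map an edge angle in degrees from the full range [0, 360) to the
   reduced range, assuming angles of 180 degrees or more map to
   (angle - 180), as in the examples above. */
static double reduce_angle_range(double angle_deg)
{
    return (angle_deg >= 180.0) ? angle_deg - 180.0 : angle_deg;
}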
Edge Detection Preprocessing
If you supply a pixel map, the edge detection function uses it to preprocess your input image before it calculates the horizontal and vertical edge components. For a description of pixel mapping, see Chapter 3, Pixel Mapping.
Edge Detection Postprocessing
Although edge angles are defined for all pixels in an image, they are significant only at the pixels with the greatest edge magnitudes. There are many edges between pixels where the grey levels on either side of the edge differ by only a few percent. These insignificant edge pixels, due to frame grabber noise and random fluctuations in pixel values, can clutter your edge magnitude image and make it difficult to evaluate the data in that image. Also, edge magnitude values usually increase gradually in a "smear" as they approach an edge and then decrease as the edge is crossed. You might want to sharpen the magnitude image by filtering out some edge pixels that are near, but not at, an edge.
The Edge Detection tool supplies three postprocessing methods. Two of these methods, setup and run-time thresholding, set the lowest edge magnitude pixels to 0. The third method, peak detection, takes an edge magnitude image and processes it so that, for any small pixel neighborhood, only the highest edge magnitude pixels remain and the other pixels are set to 0. See the section Peak Detection on an Edge Magnitude Image on page 229 for a complete discussion of this method.
After postprocessing, edge angles corresponding to zero edge magnitude pixels are also set to 0 unless you have specified run-time thresholding but not peak detection. In this case, edge angles corresponding to zero edge magnitude pixels can be nonzero, but should be considered undefined.
Setup Thresholding
You can supply a setup time rejection percentage. This is a value n that represents a percentage of the highest possible edge magnitude value. With this method, any magnitude values in the first n percent of all possible values are forced to 0. For example, if you provide 6 bits to express magnitude values and specify that the lowest 5 percent of magnitude values are forced to 0, then any magnitude below 3 is forced to 0 since 5 percent of 63 (truncated) is 3.
Run-Time Thresholding
You can supply a run-time rejection percentage. This is a value n that represents a percentage of the highest actual edge magnitude value in a magnitude image. With this method, magnitude values in the first n percent of the magnitude values in a specific image are forced to 0. For example, if the highest magnitude value in a particular image is 51, and if you specify that the lowest 5 percent of the run-time magnitude values are forced to 0, then any magnitude below 2 is forced to 0 since 5 percent of 51 (truncated) is 2.
Using a setup rejection percentage is faster than using a run-time rejection percentage, since the values that are forced to 0 are computed once, before any magnitude images are created. However, the run-time method is more flexible and robust. If you are producing magnitude images of the same scene but under varying lighting conditions, you should use a run-time rejection percentage, since the proportion of magnitude values forced to 0 remains stable as the range of magnitude values changes (due to varying illumination).
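The two threshold computations can be illustrated as follows (a sketch under the assumptions above, not the tool's implementation):

/* Setup thresholding: a percentage of the highest POSSIBLE magnitude
   for the chosen bit depth (for example, 63 for 6 bits), truncated. */
static int setup_threshold(int magnitude_bits, int reject_percent)
{
    int max_possible = (1 << magnitude_bits) - 1;
    return (max_possible * reject_percent) / 100;
}

/* Run-time thresholding: a percentage of the highest ACTUAL magnitude
   found in a particular magnitude image, truncated. */
static int runtime_threshold(int max_actual_magnitude, int reject_percent)
{
    return (max_actual_magnitude * reject_percent) / 100;
}

Magnitudes below the returned threshold are forced to 0; these functions reproduce the worked examples above (3 for the 6-bit setup case, 2 for the run-time case with a maximum of 51).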
Postprocessing with Peak Detection on a Magnitude Image
Peak detection is a method of filtering an image so that only peak pixels remain and other pixels are set to 0. Peak pixels have a value greater than or equal to some or all of their neighboring pixels' values.
You can set a flag in the edge detection parameters structure that instructs the edge detection function to run peak detection on the magnitude image during postprocessing. All nonpeak pixels are filtered out of the magnitude image and the edge angle image before they are returned. When you specify this, you must supply peak detection parameters in the edge detection parameter structure.
The edge detection function does peak detection postprocessing after it has done setup and run-time thresholding. There will be a 2-pixel-wide border of zeros in a peak-detected magnitude image.
The section Peak Detection on an Edge Magnitude Image on page 229 discusses, in detail, the benefits of using peak detection on an edge magnitude image. This section also describes the proper way to parameterize peak detection so that it filters edge magnitude pixels correctly. However, you do not need a detailed knowledge of peak detection to use it with edge detection. Suggested peak detection parameters are supplied in the section Data Structure cip_edge_params on page 199.
Figure 80 contains an image, its edge magnitude image, and the image that results from peak detection. The edge magnitude image contains a smeared edge, which is sharpened by peak detection.
Figure 80. Peak detecting an edge magnitude image to determine the actual edge (grey levels, edge magnitude pixels, and peak edge pixels)
Sample Application: Angle Histogram
You can use the magnitude and angle images of an object to create an angle histogram of that object. This is a table that assigns a value n to each edge angle θ if there are n edge pixels (above a given magnitude threshold) whose edge angle is θ.
The angle histogram supplies a useful "signature" of an object's shape. You can use this signature to gauge similarity among objects, to detect rotations of an object, and to detect flaws in an object.
Once you have created edge magnitude and edge angle images for an object, you can create and display an angle histogram as follows:
void angle_histogram(res, threshold)
cip_edge_results *res;    /* results structure pointer */
int threshold;            /* significant edge threshold */
{
    int i, j, mag;
    int hist[256];

    cu_clear(hist, sizeof(hist));
    for (i = 0; i < res->mag_img_p->width; i++)
        for (j = 0; j < res->mag_img_p->height; j++) {
            mag = res->mag_img_p->get(res->mag_img_p, i, j);
            if (mag > threshold)
                hist[res->ang_img_p->get(res->ang_img_p, i, j)]++;
        }
    /* Display the histogram; the first argument to cgr_histogram() is
       reproduced only approximately from the original listing. */
    cgr_histogram(caq_it_ga, hist, 65);
    caq_display();
    return;
}
Figure 81 contains four shapes; each shape is paired with its angle histogram. Note that the square and the plus sign have identical angle histograms. The angle histogram of the circle is flat because there is no dominant angle along the edge of a circle.
Figure 81. Assorted shapes with their angle histograms (each histogram plots number of pixels against angle, from 0° to 360°)
Figure 82 demonstrates the effect of rotation on the angle histograms of a plus sign. The histogram shifts as the plus sign rotates.
Figure 82. Effect of rotation on the angle histogram (number of pixels plotted against angle, from 0° to 360°, before and after rotation)
The Caliper Tool
This section defines the Caliper Tool and describes in general terms how it works. The concepts introduced in this section are described in detail in later sections.
The Purpose of the Caliper Tool
The Caliper Tool is a tool for locating edges and edge pairs in an image. The edge of an object in an image is a change in grey value from dark to light or light to dark. This change may span several pixels. The Caliper Tool provides methods for ignoring edges that are caused by noise or that are not of interest for a particular application.
The Caliper Tool is modeled on the mechanical engineer's caliper, a precision device for measuring distances. You specify the separation between the caliper "jaws," and the Caliper Tool searches an area you specify for edge pairs separated by that distance. You can also search for individual edges when you know their approximate location in an image. This is similar to using a caliper to measure depth.
A typical use for the Caliper Tool is part inspection on an assembly line. For example, in integrated circuit lead inspection, a part is moved in front of a camera to an approximately known location (the expected location). You use the Caliper Tool to determine the exact location of the left edge of the part by searching for a single light to dark edge at the expected location (see Figure 103). The edge that is closest to the expected location is the left edge of the part.
Figure 103. The Caliper Tool is used to locate the left edge of the part.
Once you find the left edge of the part, you can predict where the leads should lie within the image to specify a window in which to look for leads (see Figure 104). You can then use the Caliper Tool to search for edge pairs consisting of a dark to light transition followed by a light to dark transition, and separated by the expected lead width. With information calculated by the Caliper Tool, you can compare measured lead width and location to expected lead width and location to make an accept/reject decision about the part.
Figure 104. The Caliper Tool can be used to measure lead width and lead spacing on an integrated circuit.
How the Caliper Tool Works
The Caliper Tool uses projection to map a two-dimensional window of an image (the caliper window) into a one-dimensional image. Projection collapses an image by summing the pixels in the direction of the projection, which tends to amplify edges in that direction. Figure 105 shows an image, a caliper window, the one-dimensional image that results from projection, and a graph of the pixel grey values in the one-dimensional image.
Figure 105. Overview of the Caliper Tool (the image, the caliper window, and the one-dimensional projection with its first and second edges)
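A minimal sketch of projection for an axis-aligned caliper window follows; it is illustrative only, and the function and its parameters are not part of the Caliper Tool's interface.

/* Project an axis-aligned caliper window into a one-dimensional image
   by summing pixels along the projection direction (down each column).
   img is a row-major image of width img_width; the window's top-left
   corner is (win_x, win_y), its width is search_len, and its height is
   proj_len.  out must hold search_len entries. */
static void project_window(const unsigned char *img, int img_width,
                           int win_x, int win_y,
                           int search_len, int proj_len, long *out)
{
    for (int i = 0; i < search_len; i++) {
        long sum = 0;
        for (int j = 0; j < proj_len; j++)
            sum += img[(win_y + j) * img_width + (win_x + i)];
        out[i] = sum;
    }
}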
You may specify that the Caliper Tool perform pixel mapping before projection. Pixel mapping can be used to filter an image, attenuate or amplify a range of grey values, and otherwise map pixel values. See Chapter 3, Pixel Mapping, in the Image Processing manual for a complete description of pixel mapping.
After projection, an edge filter is applied to the one-dimensional image to further enhance edge information and to smooth the image by eliminating minor grey value changes between neighboring pixels that are most likely caused by noise. The edge filter produces a first derivative image (the most rapid changes in grey value in the projected image result in the highest peaks in the filtered image). Figure 106 shows the one-dimensional image from Figure 105, an image generated by the edge filter, and a graph of that image.
Figure 106. An edge filter is applied to the one-dimensional image that results from projection (first and second edges marked)
After the filter operation, edge detection is performed on the filtered image. Edge detection is a method of evaluating peaks in the filtered image and ignoring edges that are not of interest, such as edges representing noise in the image. Edge detection applies geometric constraints that you specify to each edge or edge pair found in the image. Applying geometric constraints provides a way to limit the number of edges to evaluate by assigning a score to each edge or edge pair based on such factors as the expected location of the edge or edge pair, the distance between edges in an edge pair, and the minimum grey level difference between pixels on each side of the edge.
The Caliper Tool returns as many results as you have specified in the cclp_params data structure and computes the geometric average of the score of each geometric constraint that you specify. For edges or edge pairs whose score equals or exceeds the accept threshold, the Caliper Tool returns information such as the score and location of the edge or edge pair and the measured distance between edges in an edge pair.
The controlling variables of an edge or edge pair search define a caliper and include such information as the dimensions of the search window and the edge detection scoring parameters. The type of the data structure that defines a caliper is cclp_caliper.
The Caliper Window and Projection
This section describes how you specify the caliper window, which is the subset of an image in which to locate edges or edge pairs. It then describes projection, which is the mapping of the caliper window into a one-dimensional image. Finally, it describes the edge filter, which enhances edges in the one-dimensional image.
Caliper Window
The caliper window is the portion of an image in which the Caliper Tool searches for edges. It is defined by the search length and projection length, which are the width and height, respectively, of the window in an image that will be projected into a one-dimensional image. The caliper window is illustrated in Figure 107.
Figure 107. The Caliper Tool window (search length, projection length, application point, and edge detection direction)
The pixel in the center of the caliper window is called the application point. This is the point in the run-time image at which you want to locate edges.
Caliper Window Rotation
For many applications, edge information is needed at more than one angle in an image. For example, to inspect a rectangular part, you may want to measure the dimensions of the part by projecting the image first at 0° to measure length, and then at 90° to measure width. The caliper window can be oriented at any angle to locate edges in an image. Angles increase in the clockwise direction from the horizontal axis.
Figure 108 shows how changing the caliper window angle affects the projection and edge search directions at 90° rotations. The Caliper Tool runs faster for 90° rotations than it does for arbitrary rotations.
Figure 108. The caliper window at 90° rotations
Skewed Window Projection
Figure 109 shows a -15° caliper window with window rotation disabled. In this case, the Caliper Tool creates a "skewed" window and projects it along the angle of skew into a one-dimensional image.
Figure 109. Skewed window projection
When window rotation is disabled, you specify how the skewed window will be projected: with or without interpolation. Interpolation increases accuracy at the expense of execution speed.
When you use a skewed window without interpolation for projection, the pixels in the source image are summed along the angle of skew, as shown in Figure 110. Each pixel in the two-dimensional image contributes to one pixel in the one-dimensional image. In Figure 110, those pixels containing the number 1 contribute to the destination pixel containing the number 1, and so on.
Figure 110. Skewed window projection without interpolation (the image, the skewed caliper window, and the one-dimensional image)
When you use a skewed window with interpolation for projection, the skewed window is transformed into a rectangular window before projection. The transformation is performed by cip_transform(), which uses several neighboring pixels from the two-dimensional image to calculate the pixel value for the one-dimensional image. See Chapter 2, Basic Image Processing, in the Image Processing manual for a complete description of the function cip_transform().
Rotated Window Projection
Figure 111 shows a -15° caliper window with window rotation enabled. In this case, the Caliper Tool rotates the two-dimensional image and projects it into a one-dimensional image.
Figure 111. Rotated window projection
Choosing the Projection Type
The choice of skewed or rotated projection depends on the angle of the edge or edge pair of interest in the image and the remaining content of the image. If the rotation angle is close to 90° or if a skewed image will contain enough of the edge or edge pair of interest to generate strong edges, you can use skewed projection to speed program execution. Figure 112 shows an image, a caliper window, and the one-dimensional image that results from projection. The skewed window contains enough edge information, so skewed projection may be used.
Figure 112. Skewed projection may be used
Figure 113a shows a caliper window where the edges of interest are at 15° and window rotation is disabled. A window large enough to enclose these edges would also include much of the dark area in this image, which in some cases would result in a one-dimensional image where the edges are obscured.
Figure 113b shows a caliper window at -15° in the same image with window rotation enabled. Only the edges of interest are included in this window.
Figure 113. Skewed projection (a) and rotated projection (b), with grey level intensity plots
Caliper Tool Optimization
When setting up the variables for a caliper, you specify the following types of optimization:
• Space optimization: The Caliper Tool is optimized for minimum memory use at the expense of execution speed.
• Speed optimization: The Caliper Tool will use a number of internal techniques to improve execution speed at the expense of increased memory use.
• Default optimization: The Caliper Tool will weigh space and speed equally, using slightly more space than if you specify space optimization and slightly slower execution speed than if you specify speed optimization.
The Edge Filter
After projection, an edge filter is run on the resulting one-dimensional image. The edge filter accentuates edges in the image and produces a filtered image. The peaks in this image indicate strong edges.
The two parameters of the edge filter are its size and leniency. Edge filter size is the number of pixels to each side of the edge to consider in the evaluation. Edge filter leniency is the number of pixels at an edge to ignore between the light and dark sides of the edge (leniency is described in further detail below). Figure 114 illustrates an edge filter. The edge filter is positioned over the leftmost pixel in the one-dimensional image where it will entirely fit in that image. Pixel values to the left of the leniency region are summed and subtracted from the sum of the pixel values to the right of the leniency region; in this example, the result is 5 for the first pixel of the edge filter image.
Figure 114. The edge filter is positioned in the one-dimensional image (the 1-D image and the edge filter image)
As shown in Figure 115, the edge filter is then applied at each successive pixel in the one-dimensional image until the edge filter no longer fits entirely in the image. At each pixel position, the pixel values to the left of the leniency region are summed and subtracted from the sum of the pixel values to the right of the leniency region, and the result is written to the edge filter image. The resulting image will be 2 * size + leniency - 1 smaller than the one-dimensional image.
Figure 115. The edge filter is applied at each successive pixel until the edge filter no longer fits entirely in the image
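The filter operation described above can be sketched as follows; this is illustrative only, and the output indexing is an assumption consistent with the size reduction just described.

/* Apply the caliper edge filter to a one-dimensional projected image.
   At each position, the sum of `size` pixels to the left of the
   leniency region is subtracted from the sum of `size` pixels to its
   right.  The output holds in_len - (2 * size + leniency - 1) values. */
static void edge_filter(const long *in, int in_len,
                        int size, int leniency, long *out)
{
    int span = 2 * size + leniency;     /* pixels covered by the filter */
    for (int i = 0; i + span <= in_len; i++) {
        long left = 0, right = 0;
        for (int k = 0; k < size; k++) {
            left  += in[i + k];
            right += in[i + size + leniency + k];
        }
        out[i] = right - left;
    }
}

With a size of 4 and a leniency of 2 (typical values suggested in the next two sections), the edge filter image is 9 pixels shorter than the one-dimensional image.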
Figure 116 shows a graph of the one-dimensional image and the edge filter image from Figure 115. Peak position is calculated to subpixel accuracy.
Figure 116. Edge filter image graph vs. one-dimensional image graph
The graph in Figure 116 represents an edge filter applied to an ideal image; few applications would result in as ideal a graph. All images contain some degree of noise, and a typical edge in an image can be several pixels wide. The next two sections provide guidelines for optimizing the edge filter by choosing appropriate values for size and leniency.
Choosing Edge Filter Size
Edge filter size specifies the number of pixels on either side of an edge to consider in an edge evaluation. Size should usually be greater than 1 because noise in an image causes grey level differences between neighboring pixels. Figure 117 shows a graph of a one-dimensional image of a light band on a dark background, and a graph of an edge filter image where a size of 1 was used. Because there is noise in the image, many more peaks exist than are of interest.
Figure 117. An edge filter size of 1 is typically too small (many edge filter peaks result from noise).
Increasing size tends to smooth the edge filter image because summing several pixels on either side of an edge tends to reduce the effects of noise. Figure 118 shows an image similar to the one in Figure 117 where a size of four was used. The only two peaks in the image are the peaks of interest. Size is typically between 2 and 5, inclusive.
Figure 118. Using four for edge filter size reduces the effects of noise on the edge filter
Choosing Edge Filter Leniency
Edge filter leniency specifies a region of pixels to ignore at an edge in an image. Due to slight changes in edge or edge pair position or orientation, a pixel on the edge of a feature may not always have the same grey value. By setting a leniency zone between the light and dark areas of a feature, the effects of slight changes in feature location on filter results are minimized.
Figure 119 shows a graph of a one-dimensional image with two edges that span two pixels each, and graphs of the output of two edge filters. The first edge filter has a leniency of 0; the second edge filter has a leniency of 2. The edge filter with a leniency of 2 produces higher peaks. Two is a typical value for leniency.
Figure 119. The effect of leniency on an edge filter image (the 1-D image with edge filter outputs for leniency = 0 and leniency = 2)
Point Registration Tool
This chapter describes the Cognex Point Registration tool. It contains seven sections.
The first two sections, this one and Some Useful Definitions, provide an overview of the chapter and define some terms that you will encounter as you read.
Point Registration Tool Overview provides information about the capabilities and intended use of the Point Registration tool.
Haw the Point Registration Tool Works provides a general description of the operation of the Point Registration tool.
Comparing the Point Registration Tool with Search compares the Point Registration tool with the Cognex Search tool.
Using the Point Registration Tool describes some of the techniques that you will use to implement an application using the Point Registration tool.
Point Registration Tool Data Structures and Functions provides a detailed description of the data structures and functions that you will use to implement your application.
Some Useful Definitions
point registration  A search technique designed to determine the exact point at which two images of the same scene are precisely aligned.
global minimum  The lowest value in a function or signal.
local minima  A low value in a function or signal.
subpixel accuracy  Positions within an image may be specified in terms of whole pixel positions, in which case the position refers to the upper-left corner of the pixel, or in terms of fractional pixel positions, in which case the position may be anywhere within a pixel. Positions specified in terms of fractional pixel positions are referred to as having subpixel accuracy.
Point Registration Tool Overview
The Cognex Point Registration tool performs fast, accurate point registration using two images that you supply.
Point Registration
Point registration is the process of determining the precise offset between two images of the same scene.
You use the Point Registration tool to perform point registration by providing a model image and a target image, along with a starting point within the target image. The model image should be a reference image of the feature you are attempting to align. The target image should be larger than the model image, and it should contain an instance of the model image within it.
The Point Registration tool will determine the precise location of the model image within the target image. The Point Registration tool will return the location of this match, called the registration point, with subpixel accuracy.
Figure 160 illustrates an example of how the Point Registration tool performs point registration.
Figure 160. Point registration (the model image, the target image, and the location of the model image within the target image)
The Point Registration tool is optimized to locate precisely the model image within the target image, even if the target image contains image defects such as reflections, or if the model image is partially obscured by other features in the target image. Figure 161 illustrates an example of point registration where the target image is partially obscured.
Figure 161. Point registration with partially obscured target image
How the Point Registration Tool Works
The Point Registration tool finds the location of the model image within the target image. You operate the Point Registration tool by supplying the model and target images, along with the location within the target image where you expect the origin of the model image to be. The Point Registration tool will determine, with subpixel accuracy, where the origin of the model image lies within the target image.
The Point Registration tool works by computing a score indicating the degree of similarity between the model image and a particular portion of the target image that is the same size as the model image. This score can range from 0 to 255, with a score of 0 indicating that the model image and the target image are perfectly similar and a score of 255 indicating that the model image and the target image are perfectly dissimilar. The tool computes this score for locations in the immediate neighborhood surrounding the starting point. The tool will find the location within this neighborhood of the target image that produces the local minima in the value of this score.
By adding an interpolation step, the Point Registration tool then determines the location of the model image within the target image with subpixel accuracy. For typical images, the Point Registration tool can achieve accurate registration to within 0.25 pixels.
Because of the way the Point Registration tool seeks the local minima, if the starting point you specify is more than a few pixels from the actual registration point, the tool may not return the correct registration point. The exact amount of variance that the Point Registration tool can tolerate will vary depending on the images. The variance may be as small as 3 to 5 pixels for some images or as large as 30 pixels for others.
Point Registration Score
Each time it is invoked, the Point Registration tool will return, in addition to the location of the origin of the model image within the target image, a score indicating the degree of similarity between the model image and the target image. The score returned by the tool will be from 0 to 255, with a score of 0 indicating that the model image and the target image are perfectly similar and a score of 255 indicating that the model image and the target image are perfectly dissimilar.
If your point registration receives a high score, the actual precision of the point registration location may be somewhat lower than if the point registration receives a low score.
Exhaustive Point Registration
When you use the Point Registration tool, you supply the tool with two images and a starting location within the model image. The tool will confine its point registration search to a small area around the starting location that you specify.
The Point Registration tool can also perform point registration exhaustively, that is, by computing the score for every possible location of the model image within the target image. This procedure, called exhaustive point registration, is extremely slow. It can be helpful, however, in debugging applications where the Point Registration tool does not appear to be working correctly.
Masked Point Registration
You can limit the areas of the model image and target image that the Point Registration tool uses to perform the point registration by supplying a series of rectangles to the tool. If you supply these rectangles, the tool will compute the score based only on those pixels contained within the rectangles that you specify.
Figure 162 illustrates an example of specifying a masking rectangle. In this example, the right side of the model is often obscured in the target image. By specifying a rectangle that covers the left side of the model image, you can cause the Point Registration tool to consider only the pixels in that part of the model image. This will tend to increase the accuracy of the point registration.
Figure 162. Masked point registration
You can also specify several masking rectangles. Figure 163 illustrates an example where the center of the model image is often obscured in the target image. By specifying a series of rectangles, you can eliminate from consideration the part of the image that is most frequently obscured.
Figure 163. Masked point registration using multiple rectangles
If you specify multiple masking rectangles and the rectangles overlap, any pixels that are contained in more than one masking rectangle will be counted toward the score multiple times, once for each rectangle in which they are contained.
In all cases, the Point Registration tool will return the location of the origin of the model image within the target image.
Image Normalization
The Point Registration tool may not work well in cases where the model image and the target image have different intensity values. You can perform limited image processing as part of the point registration by supplying a pixel mapping table.
If you supply a pixel mapping table, every pixel in the target image will be mapped to the value in the pixel mapping table at the location within the pixel mapping table given by the pixel's value. For example, if a pixel in the target image had a value of 200, it would be mapped to the value of the 200th element within the pixel mapping table.
If you have specified a masking rectangle, only those portions of the target image that lie within the masking rectangle or rectangles will be mapped through the pixel mapping table.
For more information on pixel mapping, see Chapter 3, Pixel Mapping, in the Image Processing manual.
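As a sketch only (not the Point Registration tool's API), mapping an 8-bit target image through a 256-entry pixel mapping table looks like this:

/* Replace each pixel of an 8-bit image with the table entry selected
   by the pixel's value, as described above. */
static void map_pixels(unsigned char *img, int num_pixels,
                       const unsigned char map[256])
{
    for (int i = 0; i < num_pixels; i++)
        img[i] = map[img[i]];
}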
Comparing the Point Registration Tool with Search
This section compares the Point Registration tool with the Search tool. For more detailed information on the Search tool, see Chapter 1, Searching.
The Point Registration tool has the following advantages over the Search tool:
• It is less sensitive to images that contain areas of pixels that have widely different values than the same pixels in the model image (i.e., blotches or occlusions).
• It requires no training step.
The Point Registration tool has the following disadvantages when compared with the Search tool:
• It is capable of fine alignment only.
• It does not work with scaled or rotated target images.
• It does not work well with images with brightness changes.
• It will not find or rank multiple registration points.
Using the Point Registration Tool
This section describes how to use the Point Registration tool.
Obtaining Model and Target Images
You acquire the images that you use as model and target images using the techniques described in Chapter 1, 3000/4000 Acquisition and Display, in the 3000/4000 Image Acquisition and I/O manual and Chapter 1, 5000 Acquisition and Display, in the 5000 Image Acquisition and I/O manual.
You should acquire all images that you plan to use with the Point Registration tool, especially images of scenes with low contrast, using a bit depth of at least 8. For fastest performance, you should acquire images as FAST8 images.
When you acquire the model image, you should take care to ensure that the image is as ideal a model as possible of the images that you will be using as target images. The Point Registration tool is designed to align target images with a wide variety of image defects, but in order for the tool to work, the model image needs to be as free from defects as possible.
Specifying a Starting Offset
When using the Point Registration tool, you must specify a starting location in the model image. The tool will perform its point registration search starting at the point that you specify. Because the tool only seeks the local minima for the score value, if you specify a starting location that varies greatly from the actual registration point, the point registration operation will fail.
You can use other vision tools such as the Search tool or the Inspection tool to locate the model feature within the target image, or you can rely on operator input to specify the coarse location of the feature.
Point Registration Tool Data Structures and Functions
This section contains descriptions of the Point Registration tool data structures and functions. All Point Registration tool data structure and function names begin with the prefix creg_.
Structure creg_params
creg_params contains parameters that determine how the point registration search will be conducted. A structure of this type is supplied as an argument to creg_point_register().
#include <register.h>
typedef struct {
    c_Int32 start_x;
    c_Int32 start_y;
    c_Int32 rect_count;
    char *optional_map;
    c_Int32 flags;
} creg_params;
• start_x and start_y are the x- and y-coordinates of the starting position for the point registration search.
• rect_count is the number of rectangles in the rects argument to creg_point_register().
• optional_map is an optional pixel map. If optional_map is non-NULL, the Point Registration tool will replace each pixel in the target image with the value contained in the element of optional_map corresponding to the value of the pixel in the target image. If you supply a value for optional_map, you must supply an image of type FAST8.
You can use optional_map to compensate for image-to-image brightness variations.
• flags specifies the type of point registration that the Point Registration tool will perform. flags is constructed by ORing together any of the following values:
• CREG_TYPE1 is reserved for future use; you should always include CREG_TYPE1 in the value of flags.
• CREG_TYPE2 is reserved for future use; you should never include CREG_TYPE2 in the value of flags.
• CREG_NORMAL is used to specify a point registration based on finding the local minima of the score value.
• CREG_EXHAUSTIVE is used to specify a point registration based on finding the global minimum of the score value.
If you specify CREG_NORMAL and if start_x and start_y are more than a few pixels from the actual registration point, the tool may return a location that does not represent the best match within the image. If you specify CREG_EXHAUSTIVE, the registration will return the best registration match for the entire image. An exhaustive point registration will be extremely slow.
Structure creg_results
creg_point_register() returns a pointer to a creg_results structure. creg_point_register() fills in the structure with information about the results of a point registration operation.
#include <register.h>
typedef struct {
    float x, y;
    c_Int32 score, time;
    char on_edge_x, on_edge_y;
} creg_results;
• x and y are the x-coordinate and y-coordinate, respectively, of the location within the model image at which the target image was found. x and y give the position with subpixel accuracy.
• score is a measure of the similarity between the pixel values in the model image and the target image at the nearest whole pixel alignment position. score will be between 0 and 255, with lower values indicating a greater degree of similarity between the model image and the target image.
• time is the total amount of time that the tool spent on this point registration, in milliseconds.
• on_edge_x is set to a nonzero value if, along the x-axis, one edge of the model image area is at the edge of the target image. If on_edge_x is nonzero, the accuracy of the position information may be slightly reduced from what it would otherwise be.
• on_edge_y is set to a nonzero value if, along the y-axis, one edge of the model image area is at the edge of the target image. If on_edge_y is nonzero, the accuracy of the position information may be slightly reduced from what it would otherwise be.
Function creg_point_register()
creg_point_register() performs point registration using the model image and target image that you supply. The point registration will be controlled by the parameters in the creg_params structure supplied to the function. creg_point_register() returns a pointer to a creg_results structure that describes the result of the point registration.
#include <register.h>
creg_results *creg_point_register(const cip_buffer *target,
                                  const cip_buffer *model,
                                  const cia_rect *rects,
                                  const creg_params *params,
                                  creg_results *results);
• target points to the image to use as the target image for this point registration. target must be at least one pixel larger in both the x-dimension and the y-dimension than model.
• model points to the image to use as the model image for this point registration. model must be at least one pixel smaller in both the x-dimension and the y-dimension than target.
• rects points to an array of cia_rect structures. If rects is non-NULL, the Point Registration tool will limit the comparison of pixel values to just those pixels that lie within the rectangles contained in rects. If the rectangles in rects overlap, a pixel will be included in the comparison once for each rectangle that contains it.
• params points to a creg_params structure that defines the parameters to use for this point registration.
• results points to a creg_results structure. creg_point_register() will place the results of this point registration in results and will return a pointer to results.
If you supply NULL for results, creg_point_register() will allocate a creg_results structure from the heap using the default allocator. You can free this structure by calling free(). creg_point_register() throws CGEN_ERR_BADARG if:
• rects is NULL and the rect_count field of params is not 0
• model is larger than or the same size as target
• the start_x field in params is less than 0 or greater than the width of target minus the width of model
• the start_y field in params is less than 0 or greater than the height of target minus the height of model
• model is NULL
• params is NULL
• target is NULL
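The following fragment sketches a typical call using the structures and function described above. The acquisition helpers get_target_image() and get_model_image() are hypothetical placeholders, and the starting coordinates are arbitrary example values.

#include <stddef.h>
#include <register.h>

extern cip_buffer *get_target_image(void);   /* hypothetical */
extern cip_buffer *get_model_image(void);    /* hypothetical */

void locate_feature(void)
{
    cip_buffer *target = get_target_image();
    cip_buffer *model  = get_model_image();
    creg_params params;
    creg_results results;

    params.start_x      = 100;   /* expected x of the model origin */
    params.start_y      = 80;    /* expected y of the model origin */
    params.rect_count   = 0;     /* no masking rectangles */
    params.optional_map = NULL;  /* no pixel mapping */
    params.flags        = CREG_TYPE1 | CREG_NORMAL;

    creg_point_register(target, model, NULL, &params, &results);

    /* results.x and results.y now hold the registration point with
       subpixel accuracy; a lower results.score indicates a better match. */
}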
Line Finder
An Overview of the Line Finder
The Line Finder computes the locations of lines in an image. For each line that it finds, it returns an angle θ and a signed distance d from the central pixel of the image. Given θ and d, you can compute all points (x,y) in the image that satisfy the equation of the line (as illustrated in the section The Transformation from Cartesian Space to Hough Space on page 204). The algorithm that the Line Finder uses, along with the angle/distance classification of lines, is explained in How the Line Finder Works on page 166.
You can use the Line Finder in either of two modes, quick or normal. In quick mode, the Line Finder provides the location of the point along each line that is nearest to the center of the image, along with the angle of the line. In normal mode, the Line Finder also estimates the length and density of each line that it locates, using statistical methods, and returns the position of the center of mass of the line.
Figure 78 illustrates the two modes of the Line Finder. The image in Figure 78a is shown in Figure 78b after the Line Finder has been invoked in quick mode. The presence of the four lines is indicated. Each line is drawn to the screen at a constant length, but no attempt is made to estimate its actual length. Figure 78c shows the result when using the Line Finder in normal mode: the length of each line segment is estimated.
For a detailed explanation of the data returned by the Line Finder in each mode, see the section Results in Quick Mode and Normal Mode.
Figure 78. An image (a), lines found in quick mode (b), and line segments found in normal mode (c)
The Line Finder finds lines regardless of grey scale variations in the image, rotation of the scene, or changes in scale. A typical application for the Line Finder is locating fiducial marks in a series of images in which grey levels, scale, and rotation are unpredictable from one image to the next. Figure 79a and Figure 79b illustrate two images, each containing the same fiducial mark: a square within a square. However, the sizes of the marks and their rotations differ between the two images. Also, the mark in Figure 79a is a dark square within a lighter square against an even lighter background, but the mark in Figure 79b is a light square within a darker square against a darker background. As shown in Figure 79c and Figure 79d, the Line Finder finds the outlines of the fiducial marks despite these variations.
Figure 79. The Line Finder is immune to grey scale, size, and rotation changes
Since the Line Finder is an intermediate-level tool, your application will probably require that you do further processing, using the results of the Line Finder as input. You will probably not be searching for lines in your image, but rather for a feature whose presence is implied by the locations of lines. For example, in Figure 79 the Line Finder does not return the locations and dimensions of the fiducial marks, only the locations of the lines that bound the nested squares. Computing the precise locations and sizes of the fiducial marks requires post-processing based on the Line Finder's results.
The Line Finder demo code includes post-processing of this sort. It contains functions that locate squares of any size and rotation, using the Line Finder. This code deduces the presence of squares based on the Line Finder's results, eliminating extraneous lines. See Chapter 1, Cognex API Introduction, in the Development Environment 1 manual for the location and name of the Contour Finding demo code on your system.
How the Line Finder Works
This chapter explains how the Line Finder finds lines in an image. This discussion is divided into the following subsections:
• Defining a Line by Angle and Distance describes the definition of a line used by the Line Finder.
• Hough Space introduces the two-dimensional space in which the Line Finder records lines.
• The Hough Line Transform describes the algorithm that the Line Finder uses to find lines in an image.
The Line Finder uses a Cartesian coordinate system whose origin is the center of the image. Most of the examples in the book are drawn in this coordinate system, although a few use image coordinates. These two systems are pictured in Figure 80. Notice that the x-axes of the two systems are the same, but the y-axes are opposite. Note also that positive angles in Cartesian coordinates are measured counterclockwise from the x-axis; positive angles in image buffer coordinates are measured clockwise from the x-axis.
Figure 80. Image buffer coordinates and Cartesian coordinates
The Line Finder uses the central pixel of the image as a reference point. The image buffer coordinates of this pixel (xc, yc) are computed with integer division, using the following formulas:
xc = W/2
yc = H/2
where W is the width of the image and H is the height of the image, in pixels. In the Line Finder, the Cartesian coordinates of the central point in the image are (0,0).
Defining a Line by Angle and Distance
The Line Finder defines a line (in Cartesian coordinates) by its angle from the x-axis and by the signed distance from the center of the image (the central pixel) to the line. These two parameters uniquely describe any line in Cartesian space.
The feature in Figure 81a contains an edge that lies along a line (Figure 81b). The line is defined by its angle from the horizontal (θ) and by the shortest distance from the center of the image to the line (Figure 81c). Note that the distance vector is perpendicular to the line.
Figure 81 Angle and distance define a line
In the definition of a line by angle and distance, the distance is negative if the angle of the distance vector (in Cartesian coordinates) is greater than or equal to 0° and less than 180°; the distance is positive if the distance vector has an angle greater than or equal to 180° and less than 360°.
Distance is signed to avoid the ambiguity illustrated in Figure 82. Figure 82a shows an image with two features. Each feature is bordered on one side by a line. Although the lines are different, they have the same angle θ and the same absolute distance d from the center (Figure 82b).
Figure 82 The angle of the distance vector determines the sign of the distance
By designating a sign to the distance vector based on its angle (δ in Figure 82c, δ+180° in Figure 82d), each line is ensured a unique definition. In this example, the two lines are defined as (θ, d) and (θ, -d).
Once you know the distance and the angle of a line, you can compute all points (x0, y0), in Cartesian coordinates, that lie on the line. Given a distance d and an angle θ, a line is all points (x0, y0) that satisfy the following equation:

d = x0 sinθ - y0 cosθ

The derivation of this formula is supplied in the section The Transformation from Cartesian Space to Hough Space on page 204. As an example of its application, if θ is 45° and d is 0, a line is all points in Cartesian space that satisfy the following equalities:

0 = x sin(45°) - y cos(45°)

x = y
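The following sketch applies this equation directly. It is an illustration only, not the Line Finder's code; the function name point_on_line and its tolerance parameter are assumptions introduced here.

#include <math.h>
#include <stdio.h>

/* Illustrative sketch (not from the tool): test whether the Cartesian point
 * (x0, y0) lies on the line described by angle theta (radians) and signed
 * distance d, using the relation d = x0*sin(theta) - y0*cos(theta). */
static int point_on_line(double x0, double y0, double theta, double d, double tol)
{
    return fabs(x0 * sin(theta) - y0 * cos(theta) - d) <= tol;
}

int main(void)
{
    const double pi = acos(-1.0);
    double theta = 45.0 * pi / 180.0;   /* theta = 45 degrees, d = 0: the line x = y */

    printf("(3,3) on line: %d\n", point_on_line(3.0, 3.0, theta, 0.0, 1e-9));  /* prints 1 */
    printf("(3,4) on line: %d\n", point_on_line(3.0, 4.0, theta, 0.0, 1e-9));  /* prints 0 */
    return 0;
}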
Hough Space
Hough space is a two-dimensional space in which the Line Finder records the lines that it finds in an image. A point in Hough space represents a line: one axis represents angle, the other represents distance.
Hough space is implemented in the Line Finder as an array; a simple example is shown in Figure 83. The Hough space in this example can record eight angles and nine distances, and therefore 72 lines.
The Line Finder lets you control the size of Hough space. You specify the range of distance in pixels, and, in the edge detection data structure that you pass to the Line Finder, you supply the number of angles that the Line Finder can compute.
Figure 83 Hough space
The Hough space in Figure 84 contains a single line. It has an angle of 135° and is a distance of 3 units from the center of the image.
Figure 84 Hough space containing a single line
The Hough Line Transform
The Line Finder uses the Hough Line Transform to locate lines in an image. This method is outlined and illustrated in this section. To understand this discussion, you should be familiar with edge detection and peak detection, as described in Chapter 4, Edge Detection Tool, in the Image Processing manual.
The Hough Line Transform, as implemented by this tool, takes advantage of the following rule: if an edge pixel's angle φ and location (x0, y0) are known, then the line on which the pixel lies is also known; it is the line with angle φ+90° that contains the pixel (x0, y0). This is illustrated in Figure 85.
Figure 85. Four edges in an image (a), edge angles at four edge pixels (b), and line angles at those pixels (c)
The Hough Line Transform is implemented as follows:
1. Using edge detection, the Line Finder creates an edge magnitude image and an edge angle image from the input image.
2. The Line Finder creates and clears a Hough space.
3. For each edge pixel in the input image, the Line Finder does the following:
• Using the edge angle φ and location (x0, y0) of the pixel, the Line Finder calculates the line's distance from the center, d, and its angle, θ (using the formula θ = φ + 90°).
• The Line Finder increments the bin in Hough space representing the line (θ, d); a sketch of this accumulation appears after this list.
4. Once all edge pixels in the image have been examined, the Line Finder searches Hough space for maximum values, using peak detection. The highest value in Hough space represents the strongest line in the image, the second highest represents the second strongest line, and so forth.
5. The location of each line is printed out and, optionally, each line is drawn in an output image.
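The accumulation performed in steps 1 through 4 can be sketched in C as follows. This is a minimal illustration only, not the Line Finder's implementation; the array layout, bin counts, and names (hough_accumulate, N_ANGLE_BINS, N_DIST_BINS, mag, ang) are assumptions introduced here.

#include <math.h>
#include <string.h>

#define N_ANGLE_BINS 8    /* assumed quantization: eight 45-degree angle bins */
#define N_DIST_BINS  64   /* assumed signed-distance range, centered on zero */

/* Illustrative sketch of Hough line accumulation (not the tool's code).
 * mag and ang are edge magnitude and edge angle images (angle in radians,
 * Cartesian convention); w and h are the image dimensions in pixels. */
static void hough_accumulate(const float *mag, const float *ang,
                             int w, int h, float mag_thresh,
                             int hough[N_ANGLE_BINS][N_DIST_BINS])
{
    const double pi = acos(-1.0);
    int xc = w / 2, yc = h / 2;                 /* central pixel (reference point) */

    memset(hough, 0, sizeof(int) * N_ANGLE_BINS * N_DIST_BINS);

    for (int by = 0; by < h; by++) {
        for (int bx = 0; bx < w; bx++) {
            if (mag[by * w + bx] < mag_thresh)
                continue;                       /* not an edge pixel */

            /* Cartesian coordinates of the edge pixel */
            double x0 = bx - xc, y0 = yc - by;

            /* the line's angle is the edge angle plus 90 degrees */
            double theta = fmod(ang[by * w + bx] + pi / 2.0 + 2.0 * pi, 2.0 * pi);

            /* signed distance: d = x0*sin(theta) - y0*cos(theta) */
            double d = x0 * sin(theta) - y0 * cos(theta);

            int ti = (int)(theta / (2.0 * pi) * N_ANGLE_BINS) % N_ANGLE_BINS;
            int di = (int)floor(d + 0.5) + N_DIST_BINS / 2;
            if (di >= 0 && di < N_DIST_BINS)
                hough[ti][di]++;                /* vote for the line (theta, d) */
        }
    }
}

Searching the filled accumulator for maximum values (step 4) is then a separate pass over the array.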
Figure 86 through Figure 89 illustrate this method.
Figure 86a is an image passed to the Line Finder. Figure 86b is a stylized edge magnitude image created from the input image.
Figure 86. An input image (a) and the edge-detected input image (b)
Each edge pixel is processed, as described above. For example, because the outlined edge pixel in Figure 87 belongs to the line (90, -2), the bin in Hough space that represents that line is incremented.
Figure 87. The bin in Hough space for the edge pixel's line is incremented.
Figure 88 shows the final Hough space along with the input image. Figure 89 shows the peak-detected Hough space, with each of the peaks outlined. Figure 89 also shows the input image with the lines represented by the peaks in Hough space drawn into the input image.
Figure 88. The final Hough space and the input image
Figure 89. The peak-detected Hough space and the input image, with lines added
Notice that the Hough space contains many extraneous lines. If, for instance, the vision problem is to find the triangle in the image, the lower-scoring lines that border the blob need to be eliminated. Almost all of the Line Finder applications require post-processing of this sort. In this example, simply locating the three highest peaks in Hough space eliminates the unwanted lines (Figure 90). In most applications, the required post-processing is more complex.
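As an illustration of this simplest form of post-processing, the sketch below selects the k highest peaks from a Hough accumulator like the one sketched earlier. It is not the Line Finder's code; the names and bin sizes are assumptions carried over from that earlier sketch.

#include <string.h>

#define N_ANGLE_BINS 8
#define N_DIST_BINS  64

/* Illustrative post-processing sketch (not the tool's code): return the bin
 * indices of the k strongest peaks in the Hough accumulator, ignoring all
 * lower-scoring bins. */
static int top_k_peaks(const int hough[N_ANGLE_BINS][N_DIST_BINS], int k,
                       int peak_angle_bin[], int peak_dist_bin[])
{
    char used[N_ANGLE_BINS][N_DIST_BINS];
    int found = 0;

    memset(used, 0, sizeof(used));
    while (found < k) {
        int best = 0, bt = -1, bd = -1;
        for (int t = 0; t < N_ANGLE_BINS; t++)
            for (int d = 0; d < N_DIST_BINS; d++)
                if (!used[t][d] && hough[t][d] > best) {
                    best = hough[t][d];
                    bt = t;
                    bd = d;
                }
        if (bt < 0)
            break;                    /* no non-empty bins remain */
        used[bt][bd] = 1;
        peak_angle_bin[found] = bt;   /* bin indices of this peak */
        peak_dist_bin[found] = bd;
        found++;
    }
    return found;                     /* number of peaks actually selected */
}

For the triangle example, calling top_k_peaks with k set to 3 keeps only the three lines that bound the triangle.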
Figure 90. The output image after post-processing
APPENDIX II
Patent Application for
MACHINE VISION CALIBRATION TARGETS AND METHODS OF DETERMINING THEIR LOCATION AND ORIENTATION
Software Listings
THE FOLLOWING APPENDIX IS NOT BELIEVED TO BE NECESSARY FOR ENABLEMENT OR BEST MODE DISCLOSURE OF THE INVENTION DISCLOSED AND
CLAIMED IN THE ACCOMPANYING APPLICATION.
/*
 * Copyright (c) 1996 by Cognex Corporation, Natick, MA USA
 * All rights reserved. This material contains unpublished,
 * copyrighted work, which includes confidential and proprietary
 * information of Cognex.
 *
 * cwa_find_target.c, Aaron Wallack - 13 Apr 1996 Created
 * Time-stamp: <96/07/30 09:32:47 awallack> last modified
 * $Revision: /main/none$ */

#include <prealign.h>
#include <math.h>
#include <stdio.h>
#include <cct.h>
#include <cip.h>
#include <ctm.h>
#include <caq/caq_def.h>
#include <cwa_priv.h>
#include <search/ctr_def.h>
#include <search/cse_def.h>
#include <clp/cclp.h>
#include <clp/cclp_err.h>
#include <chp.h>
/* This file includes a function for localizing a crosshair
 * (square) target.
 *
 * The algorithm is intended to handle the cases when the target is
 * imperfect (occluded or broken).
 * ###################*####### (occlusion) broken #############
 *
 * These routines are intended to be used as a subroutine for
 * determining the cameras' field of views (with respect to an absolute
 * coordinate system defined by a motion stage).
 *
 * The target localization routine (cwa_find_target) takes as input:
 * img    - a cip_buffer containing an image of the target
 * calib  - a calibration structure intended to characterize the
 *          non-rectangularity of the pixels
 * angle  - the orientation of the crosshair in terms of image
 *          coordinates (we require this
 *          orientation estimate to be accurate to 3 degrees, 0.05 radians)
 * result - pointer to a cwa_line structure where the result will be
 *          returned
 * Requirements:
 * Image quality:
 * The image of the crosshair must be visible and perfect
 * (without occlusion or breaks) throughout a 100 x 100 pixel square
 * surrounding the crosshair center.
 * Consequently, the center of the crosshair must be at least
 * 50 pixels away from the borders of the cip_buffer.
 * Orientation estimate accuracy:
 * The given angle estimate (given in terms of image coordinates) must be
 * accurate to within 3 degrees (0.05 radians).
 * This algorithm tolerates occlusion and breaks in the target
 * so long as these imperfections occur at least 50
 * pixels away from the crosshair center.
 *
 * Major Consideration:
 * One interesting thing to note about crosshair calibration, and the reason
 * that we use four distinct rays, is that the separation between the opposite
 * rays depends upon the lighting and aperture of the lens. Furthermore, we
 * cannot assume that the separation between these edges will remain constant
 * from one setup to another.
 * What we really want is the position of the +
 * Since we do not know what the offset/separation is, it turns out that the
 * repeatability of the entire fit (localization of the crosshair) is bounded
 * by the accuracy of the smallest of these four lines. It doesn't make any
 * sense to include 10000 edge points from one ray if we only have 50 edge
 * points from another (the localization error will still be on the order of
 * 0.1/sqrt(50) pels, assuming random i.i.d. error and subpixel localization
 * accuracy of 0.1 pixel).
 * We want to incorporate the same number of edge points from opposite rays
 * because we always want to fit the lines to the same exact points for
 * highest accuracy and repeatability. When the calibration target is
 * occluded by a particular object (such as the borders that can be seen in
 * the database), then if we use the exact same number of pixels on the
 * opposite unoccluded ray as the number of pixels on the occluded ray, then
 * we will always be looking at the same pixels on the unoccluded ray.
 * Basically, since we do not know which edge points correspond to
 * imperfections in the target and which edge points correspond to
 * valid points on the target, we use a boot-strapping algorithm to
 * compute regions where edge points are VALID.
 * Assuming that the target is perfect within the GUARANTEED GOOD
 * REGION (center +/- 50 pixels), we can compute four GOOD RAYS
 * characterizing the four rays emanating from the crosshair center.
 * Then, we can use these GOOD RAYS to define the VALID REGIONS (the
 * set of points within a TIGHT THRESHOLD of the GOOD RAYS). Finally,
 * we improve the estimates of the four rays by going back and
 * enumerating edge points along the GOOD RAYS until we encounter an
 * edge point which falls outside the VALID REGIONS.
 *
 * Definitions
 *
 * GUARANTEED GOOD REGION
 * region surrounding the origin where we insist that the calibration target
 * must be perfect
 * GOOD RAY
 * A ray fit to points enumerated within the guaranteed good region
 * VALID REGION
 * Region surrounding a good ray where we trust that the edge points are due to
 * the crosshair and not from occlusion or breaks
 * OUTLIER POINTS
 * Points which we cannot trust because they do not fall within the valid
 * region
 *
 * CENTER OF CROSSHAIR TARGET
 *   V          outlier
 *   V          edge points
 *
 * #*### ##     < good ray >
 * #######      VALID REGION
 *              guaranteed good region
 * We are adopting a better-safe-than-sorry strategy for handling
 * occlusion (as soon as we see a "bad" point, we lose faith in all of
 * the subsequent points). Although this approach is suboptimal, the
 * database images seem to contain nearly perfect points emanating
 * from the origin so that it should work well in practice.
 *
 * Algorithm:
 * 1) Use absolute normalized correlation search to approximately
 * locate the crosshair center.
 * This involves constructing a synthetic model of a rotated
 * crosshair center (see gross_locate_crosshair_center_in_image())
 * 2) Given this initial position estimate and the user-supplied orientation
 * estimate, improve the positional and orientational estimates by localizing
 * four good rays emanating from the crosshair center. This involves
 * enumerating edge points within the GUARANTEED GOOD REGION (50 pixels)
 * along the supplied orientation as well as
 * theta+PI/2, theta+PI, theta+3*PI/2. These computed four GOOD RAYS are used
 * in step 3 to check whether edge points are outliers. In other words, these
 * four good rays define VALID REGIONS separating good edge points (points we
 * believe correspond to the crosshair) from bad edge points (points we
 * believe are due to occlusion or breaks). We should also point out that we
 * fix the orientations of the four rays by "averaging" these orientations,
 * i.e., we incorporate the constraint that the four rays correspond to
 * orthogonal physical lines. (see
 * compute_center_position_and_angle_assuming_four_guaranteed_good_rays())
 * 3) Go back and enumerate edge points along the four rays until we find an
 * edge point which does not satisfy the VALID REGION constraints. We also
 * throw out some of the edge points in order to realize equal numbers of
 * edge points from opposite rays. (see
 * compute_center_position_and_angle_along_guaranteed_good_rays())
 * NOTES
 * When we enumerate edge points along rays emanating from the origin,
 * we start out a small distance away from the origin so that we don't
 * get confused by two different edges (this buffer zone is
 * characterized by the constant CIRCULAR_BUFFER_ZONE_AROUND_CENTER,
 * which is 10 pixels)
 *
 * The central function in this algorithm is
 * enumerate_points_and_fit_line_along_ray() which enumerates edge points
 * along a specified path. enumerate_points_and_fit_line_along_ray() applies
 * calipers at sample points along the path (the edge estimates can be
 * further improved using parabolic subpixel interpolation). Along a specific
 * path, enumerate_points_and_fit_line_along_ray() always makes integral unit
 * steps in either the x or y direction. This is so that we can apply
 * axis-aligned calipers along every row or column when the step size is 1.
 * enumerate_points_and_fit_line_along_ray() enumerates edge points
 * until it either encounters a bad edge point, or finishes sampling
 * the edge segment. At this point, it ceases enumerating edge points
 * and fits a line to the points. There are four ways an edge point
 * can be "bad". If enumerate_points_and_fit_line_along_ray()
 * encounters any of these four situations, it returns 0 after fitting
 * the line; if it successfully enumerated all of the edge points
 * along its specified course, it returns 1.
 * 1. no edge is found by the caliper applied at the sample point
 * 2. more than one edge is found by the caliper and the highest scoring
 *    edge is less than twice the score of the second highest scoring
 *    edge (this 2X comes from the CONFUSION_THRESHOLD)
 * 3. the computed edge point is an outlier (the distance between the
 *    edge point and the nominal line is larger than threshold)
 * 4. the caliper extends outside of the cip_buffer
 *
 * When we call enumerate_points_and_fit_line_along_ray() in step 1, we
 * specify a ray from a point near the crosshair center to a point 50
 * pixels away from the crosshair center. We do look at the return value
 * to check that the enumerate_points_and_fit_line_along_ray() did not
 * encounter any problems.
 * When we call enumerate_points_and_fit_line_along_ray() in step 2, we
 * specify a ray from a point close to the crosshair center along a
 * good ray to a point 1000 pixels away (guaranteed to be outside the
 * cip_buffer). Thereby, even if we do not see any "bad" points, we
 * will undoubtedly leave the cip_buffer at some point.
 * The reason we pass a calibration object to cwa_find_target is that we
 * recompute the orientations of the four rays by incorporating the knowledge
 * that they all come from orthogonal lines. If the pixels are non-square
 * (i.e., by virtue of the fact that the CCD elements are non-square), then
 * we need to account for this when we "average" the four orientations. We
 * "average" the four orientations by transforming the image orientations
 * into physical coordinates (and then truly averaging the physical angles)
 * and then transforming the average angle back into image coordinates. We
 * try to use the term "phys" to denote physical coordinates and "img" to
 * denote image coordinates (such as phys_orient and img_orient).
 *
 * We use different sized calipers for steps 2 & 3 because we make
 * different assumptions about the relative accuracies of our caliper
 * application points. We use the terms LOOSE and TIGHT to refer to the
 * calipers; LOOSE for step 2 where we have little faith in the
 * application points, and are thereby forced to use broad calipers, and
 * TIGHT for step 3, where we have much more faith in the application
 * points, and can thereby use relatively tight calipers.
 * We expect the initial search position to be accurate to 2 pels, and
 * the orientation estimate to be accurate to 0.05 radians. At a caliper
 * application point 50 pels away, the caliper may be off by up to 5
 * pels from the correct edge position. Just to be safe, we use a
 * caliper with projection length 24 (12 on each side) to enumerate edge
 * points in step 2.
 * After improving the position and orientation estimates in step 2, we
 * basically assume that we've got the right position to +/- 0.2 pels
 * and +/- 0.01 radians. Thereby, we can use these lines to threshold
 * outliers because we would expect errors of only 2.2 pels at 200
 * pixels away. This is pretty tight because we use a threshold of 3.0
 * pels to determine outliers
 * (actually, the 0.2 pels, 0.01 radians could significantly impact the
 * performance because the first 50 pels and the orientation estimate
 * end up predominantly affecting which edge points are included in the
 * final line estimates).
 * Main subroutines -
 * gross_locate_crosshair_center_in_image()
 * construct a synthetic model of the crosshair and use search to
 * estimate the crosshair position
 * fit_line_to_points()
 * fit a line (x,y,t) to an array of points
 * compute_subpixel_position()
 * use parabolic interpolation at the neighborhood of the application point
 * to compute the subpixel edge position
 * enumerate_points_and_fit_line_along_ray() enumerate edge points along a
 * line segment from start_point to end_point and possibly store the found
 * edge points in an array. Furthermore, a nominal line and a threshold are
 * passed in as arguments, and when the found edge points deviate from this
 * nominal line by more than the threshold, enumeration is stopped and a line
 * is fit to the enumerated edge positions. Enumeration is also stopped when
 * two edges are found (confusion), no edges are found (occlusion), or when
 * the caliper is applied out of bounds (out of bounds)
 * average_four_image_orientations()
 * average the orientations of the four rays to compute the best
 * composite orientation estimate. This function transforms the image
 * orientations into the physical domain in order to exploit the
 * constraint that the physical lines are orthogonal
 * compute_center_position_and_angle_assuming_four_guaranteed_good_rays()
 * improve estimate of the crosshair position and orientation by
 * searching four regions of guaranteed good length (50 pixels)
 * emanating from the crosshair center. Lines are fit to the edge
 * points along the four regions, and then the orientations of these
 * lines are "averaged". Two orthogonal lines are computed by
 * averaging pairs of the four rays. Finally, the position estimate is
 * computed by intersecting the two orthogonal lines
 * compute_center_position_and_angle_along_guaranteed_good_rays() generates the
 * final estimate of crosshair position and orientation by enumerating points
 * along four rays (using four good rays as thresholding lines). We use
 * tighter calipers (than we used in step 2) because we trust the positions
 * of the good rays */

#define N_EDGES 10
#define GUARANTEED_GOOD_SIZE 50
#define GROSS_TARGET_SIZE GUARANTEED_GOOD_SIZE
#define NUM_SAMPLE_POINTS 1000
#define LOOSE_THRESHOLD 12
#define TIGHT_THRESHOLD 3
#define LOOSE_CALIPER_WIDTH 24
/* ((int) 2*(LOOSE_THRESHOLD+GUARANTEED_GOOD_SIZE*sin(.05)) + 2 (kernel) */
#define TIGHT_CALIPER_WIDTH 14
/* ((int) 3*TIGHT_THRESHOLD) + 2 (kernel) */
/*
 * TIGHT_CALIPER_WIDTH should be larger than the minimum width so that
 * we check whether there is a sharp edge just outside the
 * TIGHT_THRESHOLD range, because if there is a sharp edge just
 * outside the TIGHT_THRESHOLD range, then we want to declare
 * confusion and stop enumerating points */
#define LOOSE_CALIPER_PROJECTION_LENGTH 5
#define TIGHT_CALIPER_PROJECTION_LENGTH 5
#define MAX_NUM_POINTS 1000
#define EPSILON_THRESHOLD 0.05
#define CIRCULAR_BUFFER_ZONE_AROUND_CENTER (8 + LOOSE_CALIPER_PROJECTION_LENGTH)
#define STEP_SIZE_USED_TO_ENUMERATE_POINTS_IN_GOOD_REGION 3
#define STEP_SIZE_USED_TO_ENUMERATE_ALL_POINTS 1
#define MAX_LENGTH_DIAGONAL_ACROSS_CIP_BUFFER 1000
#define CONFUSION_THRESHOLD 3
/* number of pixels to ignore when we run into an outlier */
#define OUTLIER_BUFFER 7

extern int cd_showas(int, char*);
extern double cwa_point_distance();
char* cy_prealign_cwa_find_target() { return __FILE__; }
static int cwa_find_target_debug_flag=0;
int set_cwa_find_target_debug(int x)
( mt old_f lag=cwa_f ιnd_target_debug_f lag ; cwa_f md_target_debug_f lag=x ; return old_flag; } typedef enum caiιb_orιent
{'
CALIB_HORIZ=0,
CALI3_VERT } calιb_oπent ,
/* A training parameters record for training rotatec crossnair ~c static ctr_params cwa_cal b_ccrr_tp =
{ 2, /* make a binary mcdel 2 grey Levels
0, /* no leniency
0, /* den ~ measure angle 5, /* default bias
1, /* left tail at 1% 1, /* rignt tail at 1%
50, /* thrεsncld, not useα for grey models
4, 4, /* mcdel resolution 0, /* don't tra for reaαer
0, /* no angle training
0, /* ditto
0.0, /* no standard deviation tnres olcing for statistics
128, /* care threshold for mask training *
0 /* NO flags set
/* (do net bother to initialized reserved fields
}; static csejpararas cwa_caiιb_sp = {cse_absolutε , cse_abs_bmary , NO, NO, 1, 153, 1000} ,
/* caliper used to find more accurate position estimates of edge * positions */ static cclp_constraιnt cwa_f d_target_gcs [ 1 ] =
(
( CCL?_CONTRAST , 3 , 1000 , 0 , 100 , 3 , ,
\ ; /* static */ cclp_calιper clp_fιnd_target_ιnιtιal_search = { LOOSE_CALIPER_WIDTH, /* Search Length * ' LOOΞE_CALIPER_PROJECTION_LΞNGTH, / * Projection Lengt
2, /* Edge Filter Size
0, /* Edge Filter Leniency
10., / Expected Size */
EDGEJTHRESHOLD , /* Contrast threshold
CCLP_DONT_CARE, /* El Polarity
CCLP_NO_PAIRS , /* E2 Polarity */
CC P_DEFAULT, /* Optimization
8, /* Bits per pixel *
TRUE, / Window Rotation */
TRUE, / Interpolate */
1, /* Number of contramts cwa_f d_target_gcs , /* Constraints NULL, /* Pixel Map */
0, 0, 0
/* static *, cclp_calιper clp_fmd_target_ e_searon = ■ TIGHT_CALI?ER_WIDTH, /* Searon Lengt * TIGHT_CALI?ER_PRCJECTICN_LENGTH, , * Projectic 2, /* Edge Filter Size
0, /* Edge Filter Leniency 10., /* Expected Size * EDGEJTHRESHOL , /* Contrast thresnciα CCLP_DONT_CARE , /* El Polarity CCLP_NO_PAIRS , /* E2 Polarity CCL?_DEFAULT, ,* Optimization
8, /* Bits per pixel *
TRUE, /* Window Rotation *
TRUE, /* Interpolate *
1 , / * Number of contramts cwa_f d_target_gcs , , * Constraints
NULL, /* Pixel Map
0, 0, 0
/* static */ cclp_calιper clp_fmd_target_compute_subpιxel = f TIGHT_CALIPER_WIDTH, /* Search Length */ 1,/* Projection Length */
2, /* Edge Filter Size *,
0, /* Edge Filter Leniency *
10., /* Expected Size */
EDGEJTHRESHOLD , /* Contrast thresnold CCLP_DONT_CARE , /* El Polarity *
CC P_NO_PAIRS , /* E2 Polarity CCL?_DEFAULT, /* Optimization 8, /* Bits per pixel *
FALSE, /* Window Rotation */ FALSE, /* Interpolate
1, /* Number of contramts * cwa_f ind_target_gcs, /* Constraints *
NULL, /* Pixel Map
0, 0, 0
};
/* make an image of a rotated center of a crosshair, this image is
* used to train a model to search for z e crosshair in an image
* img - cip_buffer where rotated crosshair will be stored
* orιentatιon_in_tenths - orientation of rotated crosshair m tenths
* of degrees
* returns img */ cip_buf f er *make_cross_haιr_center_ιmage (cιp_buffer *img, t orιentatιon_m_tenths)
{ c p_buffer win, *xhair=NULL, cct_sιgnal sig,-
NO_REGISTΞR >,xhair ' ; if ( ! img) cct_erro (CGΞN_ΞRR_3ADARG' , if (sig=cct_cacch;0) } goto dene;
/* allocate a temporary cιp_buffer ■ hair which will contain a
* non-rotated crosshairs, and tnen use tnis temporary cιp_buffer
* as a source fcr cιp_rotate centered
* / xhair = cip create (img- >wιdth* , img- heιght* , 3 , cιp_set (xhair , 0) ; cip_wmdow (xhair, &w , 3,0, xhair- >wιdth.2 , xhair- >heιght.2 : , cιp_set (&wm, 255) ; cιp_wιndow (xhair, -win, xhair- > width,' , xhair- >heιght 2 , xhair->width/ , hair- >heιght/2) ; cip_set (&wm, 255) ; cip_rotate_centered (xhair , img, orier.taticn_in_ter.ths) ; done : if (xhair) cip_delete (xhair) , xhaιr=NULL, if (sig) cct_throw(sιg) ; return img; }
/* construct a synthetic image and use search to find a gross estimate
* of the center position of tne crosshair center
* we expect that the synthetic model will be accurate te * approximately 1 pixel, and therefore, this search routine snould
* localize the center to approximately 1 pixel
* img - image containing a rotated crosshair
* orientation in_tenths - orientation of rotated crosshair in tenths
* of degrees
* res - pointer to cwa_point where result will be stored
* gross_locate_crosshair_center_in_ιmage (! returns int signifying
* whether or not it found a crosshair
*/ static int gross_locate_crosshair_center_ιn_image (cip_buffer *img, int orientation_in_tenths , cva_poir.t *res)
{ cip__buffer *target=NULL; cse_model *mdl=NULL; cct_signal sig; cse_results cse_res; int ans ,-
NO_REGISTER; target) ; NO_REGISTER(mdl, ; if (sιg=ccc_catch (0) ! goto done; target = cιp_create (GROSS JTARGΞTJS IDE , RCSS_TARGET_SIZΞ, 3 ; make_cross_haιr_center_ιmage .target, orientat on_m_tenths; ; dl = ctr_tram_model (target , 3. 3 3, 3, 3 3, iowa_calιb_ccrr_tp 0' ; cse_area_search '. img, mdl, icwa_cal b_sp . icse_res ', ; if (cwa_ ιnd_target_debug_ lag i 3x1. printf("%d Vd %d %d\n" , cse res.x,cse res . , cse res . score, cse_res . found; ; res->x = 1. * (GRCΞS_TARGET_SIZE/2 + CIA_RNDlδ (cse_res . x . n; ; res->y = 1. * (GRCSS_TARGET_SIZΞ,'2 ~ CIA_RND16 '. cse_res . y . n; ; ; done : if (target) cip_delete (target) , target = NULL; if (mdl) ctr_deiete_mcdel (mdl) , mdi=NULL; if (sig) cct_throw(sig) ; return cse_res . found; }
/* compute distance from a point to the closest pcint on a line */ static double distance_from_pomt_to_line (cwa_pomt *pl , cwa_lme *ll!
{ cwa_line 12; cwa_point p2 ;
12.x = pl->x; y = pl->y, 12. t = ll->t + PI/2, cwa_lmes_to_poιnt ( ll , &12 , &p2 ) , return (sqrt ( (pl->x-p2.x) * (pi- >x-ρ2 x) (pl->y-p2 y)*(pl->y-p2 y) ) ) ,
/* Fit a line to an array of points */ static double f ιt_lιne_to_po ts
(cwa pαmt *pts, int n_poιnts, cwa_lme *lιne)
{ mt , double xsum=0 0, ysum=0 0, xcent , ycent , double a, b, c, theta, xp, yp, cwa_pomt *ptr, if (n_poιnts < 2) cct_error (CWA_ERR_NO_INTERSECT) for (j=0, ptr = pts , j<n_pcmts -- per- f (cwa_f d_target_decug_f lag i 3x_ prιntf("%f tf »r.",ptr->.< ptr-> xsum + = ptr->x, ysu += ptr->y,
xcent = xsum / n_poιnts ycent = ysum / n_pomts, for ( =0, ptr = pts, a=b=c=3 ;; <n_pc_r.cs j — ptr- xp = ptr->x - xcent,
YP = ptr->y - ycent, a += xp * xp, b += 2 * xp * yp c += yp * yp, } theta = atan2(b, a-c) ' 2 0, lme->x = xcent; lιne->y = ycent, lme-st = theta, if (cwa_fmd_target_debug_flag _ 0x1) { pr tf ("fιt_lιne_to_poιnts returned %f", (a*cos (theta+PI/2) *cos (theta+PI, 2 + b*cos (theta+PI/2) *sιn (theta+PI/2) - c*sm(theta+PI/2) *sm (theta+PI '2) 'n_po ts) cd_showas ( (int) l ne, "cwa_i e") ,
} return ( (a*cos (theta+PI/2) *cos ιcheta~?I/2 - b*cos (theta+PI '2) *sm (theta+PI 2) -- c*s (theta+PI '2^ *sιn' theta+PI 2) n_pomtS; static int prιnt_lιst_of_pomts (cwajpoiπt *val , t num_vals )
{ mt 1 ; for ( i = 0 ; i < num_vals , ι+- ) prxntf ("*f %f\n" , val [i] .x, val [i] y) ; }
/* calιb_orιent_from_angle ( ) returns CALIB_VERT or CALI3_H0RIZ
* depending upon the orientation (measured in degrees) of a line I
* purpose is to determine whether we need to use a vertical or
* horizontal quadratic subpixei edge interpolation */ static calιb_orιent calιb_oπent_frem_angle 'double angle n_degreesι
( double angle_ιn_rad, angle_m_rad = angle_ιn_degrees * PI 180 , if (fabs (cos !angie_m_rad J fabs (sin ιangie_m_rad, return CALI3_VERT, return CALI3 HCRIZ,
computes subpixei edge positions giver, an image ana a r.e_gncorncoα (ap_x, ap_y) to search for an edge Depending upon tne or entatιcn_ιn_degrees , ccmpute_su_opιxel_pcsιt cn ' ^ ΛHII eitnεr perform a vertical or orizcntal sucpixel edge estimation Sucpixe. edge positions are computed in tne same manner as tne ccundary tracker (using " neignccring pixels, and " differences wr.ere we quadratically interpolate tne edge position from at least t ree differences .
* We compute a seven first differences ust to ce safe There is no
* guarantee tr.at the computed caliper position corresponds to tne maximum
* 1st difference (because we're using calipers with filter size 2
* mg - cιp_buffer containing image of crosshair target
* ap_x,ap_y - application point wnere we sample image grey values
* actually, we sample grey values at the application
* point and +/- 2 from the application point (maybe we
* should sample at +/- 4, wno knows
* orιentatιon_m_degrees - orientation of caliper whicn nas been applied
* to find this application point. Since we
* only want to call subpixei interpolation
* at the right place, we suggest using
* αn-axis calipers
* pos_x, pos_y - pointers to doubles where
* compute_sunpιxei_posιtιon > ! .vill store result
*/ mt compute_suopιxei_posιtιon (cιp_buffer *ιmg, t ap_x , t ap_y, double orιentatιon_ιn_degrees , double *pos_x, douυle *pos_y) int pO , pl,p2 , p3 , p4 , p5 , p6 , p"7 , t dO,dl, d2,d3 , d4 , d5 , d6 , d_max, d_mm, calιb_orιent direction; if (cwa_f md_target_debug_f lag i 3x1, pr tf ( "compute_subpιxel_pos tιon called w tn id %α %f Vx %χ ap_x, ap_y , orιentatιon_m_degrees , pcs_ , pcs_y , if (ap_x < 4 I j ap_x > ιmg->wιdth-4 || ap_y < 4 I I ap_y > img- >heιght-4) { return 0 , }
/* should we sample horizontally or vertically * direction = calιb_or ent_frem_angie ι 1orιentat cn_ιn_
: (direction == CALI3 n CR: p0=* (img- >rat [ap_y-4 ~ mg - >x_c _ rse -ar pl=* ( mg- >rat ap_y-3 -i- img- >x_of rse -a; p2=* (img- >rat [ap_y-2 *ιmc- >x cffset-a: p3 = * (img- >rat ap_y- 1] + mg- .x_offset-ap p4=* ( img- >rat [ap_y] -img- >x_cffse +ap_x p5=* (img- >rat [ap_y+l i-img- >x_effset-ap p6=* ( img- >rat [ap_y+2 ] +ιmg- >x_offse -ap p7=* ( img- >rat [ap /+3 ] +ιmg- >x_offset+ap_ d0=pl-p0 dl=p2-pl d2=p3-p2 d3=p4-p3 d4=p5-p4 d5=p6-p5
d_max = max ( do , max { dl , max ( d2 , max v d3 , max i d4 , max ; d5 , dβ '. d_mιn = mm ( do , m ( dl , mm ( d2 , mm , d3 , mm \ d4 , mm ( dS , d6 '. if (-d_mιn > d_ma ) d0=-d0 dl=-dl; d2=-d2 d3=-d3 d4=-d4, d5=-d5, d6=-d6 d_max = - d_m ; }
*pos_x = ap_x*l. + 0.5,
/* If we're at a maximum retur (d_max > EDGΞ_THRESHOLD) \ if (dl == d_max __ d3 ' = d_max) {
*pos_y = ap_y + cz parf t (do , l , ) 65536 -2 return 1 , } else if (d2 == d_max && d4 = d_max) (
*pos_y = ap_y + cz_parfιc dl , d , d31 '65536 -1 return 1, } else if (d3 == d_max .. d5 ' = d_max) <
*pos_y = ap_y + cz_parfi (d2 , d3 , d4 ! 65536 , return 1, } else if (d4 == d_max &_ d6 ! = d_max) {
*pos_y = ap_y + cz_parfit (d3 , d4 , dS) /65S36 +1., return 1 ; } else if (d5 == d_max) (
*pos_y = ap_y + cz_parfit (d4 , d5 , d6 , /6S536 + , return 1 ;
:e not at a maxi scmet ir.g rεtu: 3, else ( p0=* (img- >rat [ap_yj -img- >x orrrsseett-ap X -4 pl=* (img- >rat [ap_y] ÷img- >x f se: ap x- 3 p2=* (img- >rat [ap_y] -img- >x_ or f set ap x-2 p3 = * ( img- >rat [ap_y! - mg- >x_ of f set ap x- 1 p4 = * (img- >rat [ap_y] -img- ;<_ fse: ap_ X , p5=* (img- >rat [ap_y] -img- >x_ of f set ap :<-! p6=* ( img- >rat [ap_y] +ιmg- of fset ap_ x—2 ^ p7 = * (img- >rat [ap_y] - mg- >;<_ of f set ap x-3 d0=pl-p0, dl=p2-pl, d2=p3-p2; d3=p4-p3 ; d4=p5-p4 , d5=p6-p5; d6=p7-p6 , d_max = max ( do , max ( dl , max ( d2 , max ', d3 , max t d4 , max ( d5 , d6 ) ) ) ) ) ) d_mιn = mm (do ,mm (dl , m ιd2 , mm (d3 , mm (d4 , mm ids , d6 * ) ' ' ' if ( - d_mm > d_max j ( d0=-d0; dl=-dl; d2=-d2, d3=-d3, d4=-d4, d5=-d5; d6=-d6; d_max = - d_mι ;
}
/* If we're at a maximum, compute the subpixei position and * return 1 * /
*pos_y = ap_y* 1 . + 0 . 5 , f ( d_max > EDGEJTHRESHOLD ) { if (dl == d_max .. d3 ! = d_max) {
*pos_x = ap_x*l. + cz_parfit (do , dl, 2) /6S536. -2. , return 1;
} else if (d2 == d_max i_ d4 != d_max) {
*pos_x = ap_x*l. + cz_parfit (dl, d2, d3) /S5536. -1. ; return 1 ,-
} else if (d3 == d_max &_ d5 != d_max! {
*pos_x = ap_x*l. + cz_parfi (d2 , d3 , d4 > /65S36 , return 1 ;
} else if (d4 == d_max .. d6 1= d_ma ■ <'
*pos_κ = ap_x*l + oz_par t 'd3 , , c5 65535 -1 return 1 ,
1
; else if 'd5 == d_max) ■
»pos_x = ap_x*l. - cz_parfιc α , d5 , do 55536 -2 return 1;
}
/* If we're not at a maximum, sc etning's wrong, and return 0
*/ return 0 ;
/* enumerate_poir.ts_ar.d_fιt_lme_along_ray ( ) enumerates edge pcmts
* along a ray from a start_pcιnt to an end_pcιnc by using calipers
* (and possibly subpixei interpolation; to compute edge positions
* enumerate_poιnts_and_fιt_ime_aiong_ray ( continues enumerating
* points until one of the following conditions occurs :
* 0. we reach the end_po t
* 1. no edge is found by the caliper
* 2. more than one edge is found by the caliper and the Highest
* scoring edge is less than twice the score of the second hignest
* scoring edge (this 2X comes from the CONFUΞIONJTHRESHOLD)
* 3. the computed edge point is an outlier (the distance between the
* edge point and the nominal line is larger than threshold)
* 4. we run off the cip_buffer (screen)
* After it has enumerated the edge points, it then calls
* fιt_pomts_to_iιne to compute tne optimal line estimate
* img - cιp_buffer containing a crosshair target start_poιnt , end_po t - define region along wr. cn to ap l calipers nomιnal_lme , threshold - used to determine wnetner an edge point is an outlier (and consequently end the enumeration of edge points) . The outlier test involves checking whether t e distance from the point to the step_sιze - frequency of edge point sampling. Since ^e want to evenly sample the lines, we move step_sιze r.umcer of pels n the larger direction (and a fractional number of pels in the other direction) . This approach provides even sampling assuming we are doing on-axes calipering or parabolic subpixei interpolation search_calιper - caliper used to search for edge points computed_line - cwa_lme where result will be returned enumerated_poιnts - optional array of cwa_pomts where found edge points would be stored num_po ts - pointer to an t where the number of found points would be stored do_subpixei_mterp - flag characterizing wnet er we «ιa.-.t to perform subDixel ntemolat on after acc.v r.. t e calmer
static mt enumerate_poιnts_and_fιt_lιne_along_ray (cιp_buffer *ιmg, cwa_pcmt *start_pomt , cwa_pcmt *enc_pc nt cwa_ime *nommai_iιne , double tnresncld, dcuole step_s za cclp_caiiper *search_caiιper, cwa_lιnε *ccmputed_lme , cwa_pemt enumerated nomts, t *num_poιr.ts , int do_suboιxel_ιntero
{ mt i , j , k ; double maxAbsCosS , cwa_pomt ap_pomt, *enum_pc nt, vac , nt ap_x, ap_y; ccip_resulcs res_h N_ΞDGEΞ*2 - ] , cclp_params cpp = { ΞDGE_THRΞ3HCLD*13 , N_ΞDGΞ3, 3 double cclp_angle; int num_samples , tmp , double caliper_vert ; cct_signal sig; cwa_lme samplιng_lme ; if ( cwa_f ιnd_target_debug_f lag & Oxl ) ( printf ( "enumerated_poιnts_and_f ιt_lme_aicng_ray called witn n" printf ( "img %x start_poιnt Vf %f end_poιnt %f %f \n" , img, start_pomt - >x , start_poιnt - >y , end_po t - >x , end_pcmt - >y) printf ( "nommal_lme f %f t f threshold f \n" , nominal line- >x , ncmιnal_ime- >v , nominal _1 me - > , threshold) ;
}
/* Compute the sampling line * / s mpling_lme . x = start_pomt - >x; sampling_lme . y = start_pomt - >y ; samplmg_line . t = atan2 (end_pemt - >y- start_poιnt - >y , end_pomt - >x- start_pcmt - >κ: ; '* Compute the step which s integral x or / ccαrc nates */ if (fabs (cos (samplιng_lιne . t/ ; > fabs is >samplιr.g_ me t maxAbsCosS = fabs ices (samci else maxAbsCosS = fabs (sin (samplιng_ime . t) ) , vec.x = (cos (samplmg_iιne . t) /maxAbsCosS ' * step_s ze, vec.y = (sin (samplmg_line . t) /maxAbsCosSm) * step_sιze,
/* Compute the number of samples which we will be making */ num_samples = maxfabs ( ( t) (start_pomt- >x - end_pcmt- =>:<) abs ((int) (start_pomt- >y - end_pomt->yl step_sιze,- if ( ! enumerated_pcιnts; enumerated_pcmcs =
(cwa_pcιnt * ) chp_alloca ιr.'.,_sa,-cles*sι:ec: cwa_pc r.t if ( ! num_poιnts , num_pomts = itmp ; cclp_angle = sampiιng_iιne . t*i30 'PI-9C , if (do_subpιxei_mterpι { if (calιb_orιent_f rom_angle cclp_angle == CALI3JΞRT' cclp_angle = 0, else ccip_angie = 90; } if (cwa_f ιnd_target_debug_f lag _ 0x1 printf ( "ccip_angie %f \n" , ccip_angle ,
/* Inner Loop: Enumerate edge points */ for (i = 0, ap_pomt.x = start_po t - >x, ap_pcmt.y = start_poιnt- >y, enum_pomt = enumerated_poιnts , *num_poιnts = 0 ; i < nurrjsamples ; ι++, ap_poιnt.x += vec x, ap_poιnt y » = vec.y er.uji_po r.t- (*num_pomts) ++) { ap_x = (int) (ap_poιnt.x + 0.5) , ap_y = (mt) tap_poιnt.y + 0.5) , if (cwa_f d_target_debug_flag _ Oxl, printf ("ap_x Vd ap_y %d\n" , ap_x, ap_y res h[0] . found = res h[l] . found =
/* Cip_transform throws CGEN_ERR_BADARG , CIP_ERR_PELAD3R,
* CCLP_ERR_FIT when caliper region extends past tne ccur.άs of
* the cip_buffer. We want to break off the search at this
* point anyways, so we catch the error and jump to fitting a
* line to the points */ if (sig = cct_catch(CGEN_ΞRR_BADARG! ) goto fit_line; if (sig = cct_catch(CIP_ERR_PELADDR) ) goto fit_line; if (sig = cct_catch(CCLP_ERR_FIT! ) goto fit_lme; cclp_apply
(search_calιper , ιmg,ap_x, ap_y cclp_ang e, ic p res_n cct_end(CCL?_ΞRR_?Iτ: ; cct_end(CIP_ΞRR_?ELAEDR; ; cct_end!CGΞN_ΞRR_3ADARG; ,
/* check if edge is not found * if (! res_h[0] . found) goto fιt_lιne;
/* check if mere than one edge s found CONFUSION * if (res_ [1] . found i_ resjn [0] . score < CONFt;3ICN_THRΞSHC D*res_h '!] . score: { if (cwa_f d_target_debug_f lag _ Oxl { cd_showas i res_h, "cclp_resuits" ) ; cd_showas (&res_h [1] , "ccio_results" , ;
} goto fit_lιne;
} if (cwa_find_target_debug_flag _ 0x1) { printf ("ap_x %d ap_y %d angle f pos %f\n", ap_x, ap_y, cclp_angie, res_h [0] .position) ; if (fabs (res_h[0] .position) > l.){ cd_showas (res_h, "cclp_results" ) , - cd_showas (£cres_h [1] , "cclp_results" ; ;
} }
/* compute the subpixei point measured by the caliper * enum_poιnt- >x = ap_x + 0.5 + res_h[0] . pαsition'cos ιccip_angie*?I.130) ; enum_point- >y = ap_y - C 5 f res_h[0] pos tιcn*s n cc_p_angie*?I 123 f (do_subpixel_interp) { if ( compute_3ubp xel_posιt on
(img, (mt) enum_pomt - >x, tint, enum_pomt - >y cclp_angle , &enum_pomt - > , &enum_pomt - >y goto fιt_lιne; if (cwa_f md_target_debug_f lag & 3x1) printf ( "compute subpixei position returned ϊf %f n", enum_po nt- >x, enum_poιnt- >y) ; tfendi
/* check if the point is within the ACCEPTABLE REGION * (within distance threshold of the nominal line
(dιstance_frsm Dcir.t er.urr pcir.t thres old)
.me tit a :z ana re urr 3 - f *τe any cad points, return 1 ctnerwise
f ιt_lme_to_poιnts
(enumerated pcmts, *num_pcmts ccmputec l e
{ if (x > 0) return (mt! ιx+0 5> , else return (mt) (x-3 5) ,
/* average two orientations but taxe into account that orientations * mod PI are equivalent
*/ static double average_two_orιentatιons dcuole orιent_l double orient
{ double diff.res. mt modDiff; if (cwa_f d_target_debug_f lag & 0x1) prmtfC'avg %f %f " , orιent_l , orιent_2 ) d ff = or ent_2 - orιent_l, modDiff = round_numJDer (dif f ,'PI , res= v (orient 1-αnent 2 -modDiff *PI , 21 , i cwa_f md_target_decug_f 1;
Figure imgf000109_0001
-turn res;
/* compute angle in alternative coordinates given cy tne tap
*/ double transform_angle_accordmg_to_calιbratιon_τ,ap (prιv_transf orm *map, double ιmg_orιentatιcn;
{ double realΞin, realCos ; cz_transformPo t (map , cos v ιmg_orιentatιcn) , sm ( ιmg_crιentatιen)
&realCos , irealS ) ; return (atan2 (realSm, realCos) ) ,
}
/* average the orientations of the four rays to compute tne aest
* composite orientation estimate Given tne orientations of the fc,
* image rays (the rays should be orienteα n multiples of PI, 2 out
* with each other' , compute tne "average' orientation First, tne
* orientations are al gnec wit mg_crιent 3 ; sc f ιτg_erιen . : '
* cnaracterized a ncrizcntal line, t en all of tne angles may ce
* smfted by PI 2 to become ncrizcntal tne .ve average tne result:
* orientations .
...__> orientations m tie t>r.vsιea_ coma:
* constraint r thh;at tne pnysιca_ .ir.es are ortncgcna. */ static double average_f cur_ιrage_orιentat ons (cwa calib *calιb, double * irrα orient
douDie pr.ys_onent .4 , , avg_pnys_crιent , avg_ιmg_orιent ,
* First, transform angles from image ccorciar.oes to pnysicai
* coordinates */ for (l = 0; i < 4; l**) phys_or ent [i] = trans f orm_angle_ac cor dmg_to_ca librae ιon_rnap ( ( (pπv_cwa_calιb *' caiib; - >map . ιmg_απent [ ] ' ; if (cwa_f d_target_debug_f lag & 0x1. for (i = I; i < 4; i÷- prmtf ( "%f \n" , phys_oπent ' i", ' ,
/* Shift orthogonal angles by ?l,'2 m order to line up with the
* first orientation */
Figure imgf000109_0002
f (fabs (sir. !phys_orιent J. -phys_or ent [0] >sqrt 3.5 , phys orient [i] -= PI,'2; / * average the orientations * avg_phys_orιent = average_two_or lent at ions
( average_two_orιentatιcns phys_onent [ 0 ] , phys_or ent ] , average_two_orιentatιons ;phys_orιent [2 i , phys_orιent [ 3 > ,
/ * compute the orientation image coordinates corresponding to * the averaged orientation
*/ avg_ιmg_orιent = trans form_angle_according_to_calibration_map
( ( (prιv_cwa_calib * ) calιb) - >ιnv_map , avg_phys_oπent ) , if (cwa_find_target_debug_flag _ 0x1) printf ("phys %f img %f\n" , avg_phys_oπen , avg_ mg_cr ent ) ■ return avg_img_oπent ,
/* compute the image orientation for wr.ier. tne pnysical orientation
* is orthogonal to anotr.er image orientation
*/ static double ιmage_ncrmal_to_ιmage_angle (cwa_calιb *caiιb, double mc_cπent { return
( ransfcrm_angle_acccrdmg_te_calιbratιor._map ( ', (prιv_cwa_calιb * calm -J nv_map, ( transfcrm_angie_accerdmg_to_cal bra ιon_map ( ( (prιv_cwa_calιb * calib ->map, ιmg_orιent, - PI, 2 ' ' ;
compute_center_pos tιon_and_angle_assumιng_four_guaranteed_good_rays ; improve estimate of tne crosshair position and orientation by searching four regions of guaranteed good length (50 pixels) e inating from the crosshair center. Lines are fit to the edge points along the four regions, and then the orientations of these lines are "averaged" . Two orthogonal lines are computed by averaging pairs of the four rays Finally, the position estimate is computed by intersecting the two orthogonal lines img - cιp_buffer containing an image of a crosshair target calib - cwa_calib object characterizing pixel non-rectanguiarity estiraated_ctr_ιmg - estimated position (given in image coordinates) of crosshair target (computed in step estimated_orientat on_phys - user- supplied orientation estimate
(given physical coordinates) guaranteed good_distance - length ( pels) of guarateed good region extending outward from the origin computed_center - structure where computed position w ll be stored * computed oπentatιen_phys - double wnere computed orientation m
* physical coordinates; w ll oe stored
* good_rays - array of four cwa_lιne structures where gocd rays
* (lines fit to sampled edge points tne gecd region,
* will be stored */ static int compuce_center_posιtιcn_and_angle_assumιng_four_guaranteed_gcod rays (cip_buffer *ιmg, cwa_calιb *calιb, c ajooint *estιmated_ctr_ιmg, double estιmated_orιentatιon_phys , double num_guaranteed_good_poιnts , cwa_poιnt *computed_center , double *computed_orιentatιon_phys , cwa_lιne *good_rays)
( double angle [4] ; cwajjomt start_pomt [4] , end_pomt [4] , cwa_lme computed_lme [4] , ncmmai_Ime; double ιmg_angle[4] , angie_norιz_lme , angle [0] = estimated_cner.catior._pr.ys , angle [1] = estιmated_orιentatιcn_pnys'?I 2, angle [2] = estιmated_orιentatιon_phys-PI , angle [3] = estιmated_crιentatιon_pnys-?I *1 5,
/* We define the 50 pel GUARANTEED GOCD REGION m terms of a
* VALID REGION so that we oar. use
* the enumerate_pomts_and_f ιt_lme_aiong_ray function
* If we're sampling along a line and we want to stop sampling
* after 50 pels, then we can use a nominal line which is ncrmai
* to and passes cr.rougr. the start point, and a thresnel of 53
*/ nommal_lιne . x = estιmated_ctr_ mg- >x, nommal_lιne . y = estιmated_otr_ιmg- > ;
I* The passed orientation is given pnysical coordinates * for (1 = 0; i < 4, I**) angle [i] = transf orm_angie_accerαmg_to_calιcratιor._map ( ( (pπv_cwa_calιb *t calib; - > v_map, angle r.ι] ) , for (i = 0; i < , !•>-+) {
/* The sampling region begins at the point a small distance
* away from the crosshair center
* (CIRCULAR_3UFFER_ZCNE_AR0UND_CENTER) to be exact, and
* extends out to guaranteed_good_dιstance from the center
*/ start_poιnt [i] .x = estιmated_ctr_ιrng- >x +
CIRCULAR_3UFFΞR_Z0NE_AR0UND_CENTER*cos (angle [i! ' , start_pomt [i] .y = estιmated_ccr_ιmg->y +
CIRCULAR_BUFFER_ZONE_AROUND_CΞNTΞR*sι (angle [l] ) , end_poιnt [i] .x = estιmated_ctr_ιmg- x + num_guaranteed_gcod_poιnts*cos (angle [i] , , end_poιnt [i] .y = estιmated_ctr_ιmg- y + num_guaranteed_gccd_points*sin (angle [ ] nomιnal_lme . t = angle [i] +PI/2 , enumerate_pomts_and_f ιt_lιne_along_ray
(img, _start_pomt [i] , _end_poιnt [i] , _ncmmal_lιne, num_guaranteed_good_pomts ,
STEP_SIZE_USED_T0_ENUMERATΞ_POINTS_IN_3COD_RE0ICN*2. ,
&clp_f ind_target_initial_search,
&computed_line [i] , NULL, NULL, 0) ; ιmg_angle[i] = computed_line [i] . t ; cu_copy (&computed_line [i] , &good_rays [i] , sizeof !cwa_lme! ) ; }
/* average the angles */ angle_horiz_line = average_four_ιmage_orιentatιons (calib, ιmg_angie , if (cwa_f ιnd_target_decug_f lag _ 3x1; printf ("average %f f f Vf %f'n", img_angie 0] , ιmg_anglε 1 , mg_angle [ 2. ' ιτg_ar.gle [ 2 ! . angle_hor z_ime) ,•
/* average the fit rays * computed_l e [ 0 ] x = (computed_lme [0] .x - ccmputeα _me .2 , computed_i e [0] y = (cαmputed_lιne [0] . y 1- computed line y ' computed_lme [1] x = (computed_l e [lj computed_lme [1] y = (computed_iιne i; y - computed line .3. y ■ ccmputed_lme [0] t = angie_hcrιz_lιne ; ccmputed_l e [1] t - image normal to imaσe anσ ca_m anc
'* compute the mtersectec point », cwa_iιnes_to_pcmt ( _ccmputed_lme [ 3 ] , _computed_iιne [1] , computed_center.1 ;
/* f x the orientations of the good rays to correspond to the
* "average" orientation
*/ good_rays [0] . t = good_rays[2] .t = angie_horιz_lιne ;
*computed_orientation_phys = transform_angle_according_to_calιbratιon_map
( ( (priv_cwa_calib *) calib) - >map , angie_horιz_lιnei ; gαod_rays[l] .t = good_rays[3] . t = transform_angle_according_to_calιbratιcn_map
( ( (priv_cwa_calib *) calib) ->ιnv_map, *computed_orιentation_phys+PI/2) ;
/* Compute the two intersections between a circle and a line if the
* circle and line intersect. If the circle and line do not
* intersection, signal CWA_ERR_NC_INTERSECTION */ static mt mtersect_l ne_and_c rcie cwa_l ιne * e :«a orir.t *pt double radius , cwa_pomt *res 1 cwajpQ .it res_21
{ double angle_to_lme , dιstancε_to_iιne , angle_to_pcmt_cn line angle_to_lme = line- >t + PI/2 , if ( ( cos (angle_to_lme ) * ( line- >x-pt - >x) + sm (angle_to_lme ) * ( line- >y-pt - >y) ) < 0 ) angle_to_lme += PI ; dιstance_to_lιne = dιstance_f rom_poιnt_to_lιne (p , line ; if (dιstance_to_lιne ( radius - EPSILON_THRESHOLD) cct_errαr (CWA_ERR_NO_INTERSECT) , angle_to_pomt_on_lιne = acos (dιstance_tc_i ne/ rad us , res_l->x = pt->x + radιus*cos (angle_to_lme+ar.gle_to_pcιnt_αn_iιne) , res_l->y = pt->y + radius*sir. ,angie_te_l neι-ang_e_to_pcιnt_on_l ne res_2->x = pt->x + radius*cos langie_co_lιne-angle_to_poιnt_or._l ne res_2->y = pt->y * radius*sm λangle_to_iιne-angle_ o_pcιnt_on_l ne
/* compute_center_posιtιon_ar.d_angle_alor.g_guaranteed_gooc_r3ys
* generate final estimate of crossnair position and orientation oy
* enumerating pcmts along four rays (using four good rays as
* thresholding lines) We use tignter calipers because we trust tne
* positions of the good rays
* img - cιp_buffer containing an image of a crosshair target
* calib - cwa_calιb object characterizing pixel r.on-reccar.gularicy
* estιmated_ctr_ιmg - estimated position (given in image
* coordinates) of crossnair target (computed in
* step 2)
* good_rays - array of four cwa_lme structures where good rays
* (lines fit to sampled edge points in the good region)
* will be stored
* tιght_threshold - threshold used for determining outlier points
* with respect to the good rays
* computed center - structure where computed position will oe stored
* computed or entatιon_phys - double where computed orientation (in
* physical coordinates) will ce stored */ static int compute_center_posιtιon_and_angle_alαng_guaranteed_good_rays (cιp_buffer *ιmg, cwa_calιb *calιc, cwa_pomt *εstιmated_ctr_ιmg, cwa_lιne *good_rays , double tιght_threshold, cwa_poιnt
*computed_center, double *computed_orιentatιcn_phys , double *feature score) t l, double angle [4] , cwajpoint start_pomt [4] , end_pomt 14 ] cwa_lme computed_lme [4] , double ιmg_angle[4] , angle_horιz_lme , int num_enumerated_pomts [4] , cwa_poιnt possιble_start_pomts [2] , enumerated_po ts ; ;MAX_NUM_ POINTS] ; double avg_squared_error, if (cwa_f d_target_debug_f lag _ Oxl) printf ( "compute_center_posιtιon_and_angle_alcng_guaranteed_good_rays\n" ^ or (i = 0 ; l < 4 , ι++) { double dιstance_to_center , radius , orιentatιon_away_from_center
/* For each ray, we need to determine wnat tne sampling
* line should be
* We do this by intersecting the ray itr. a circle
* centered at the crosshair center
* Since there are two intersection points ,e r.eeα to
* determine whicn one s tne real start of tne sampling
* line
* It turns out tr.at we oar. simply picx tne _ntarsect cn
* point which is closer to the center of tne ray */ dιstance_to_center = dιstance_f rom_poιnt_to_l e estιmated_ctr_ιmg, &good_rays [i radius = sqr (CIR(T_ΛR_3 FFΞR_Z0NE_AΛ0 NE_CΞNTΞR* CIRC'j_\R_3UFFΞR_:-CNΞ_ARCUND_CΞ.]TΞR- d stance_to_center*dιstance_tc_center mtersect_iιne_and_cιrcie ' _gooc_rays [_; , estιmated_ctr_ιmg, radius, ipossibie_start_poir.es [0] , Stpossicle_start_pcints [l] , if (cwa_poιnt_dιstance
(&possιble_start_pomts [0] , (cwa_poιnt * igcod_rays [i] < cwa_pomt_d stance
(_possιble_start_pomts [1] , (cwajpoint *) &good_rays [i] 1 ) cu_copy (_poss ble_start_pαmts [0] , istart_poi.it [i] , sizeof (cwa_pomt, ) , else cu copy (__)θssιble_start_poιnts [l] , istartjooi.t [i] , sizeof (cwa_po .t, ) ,
/* Now, we have to determine which way we should travel along
* the ray (good_ray[ι! . or good_ray[ι] t+PI
* We determine which way by comparing the start point with
* the center of the good ray */ orientation away_from center = good_rays[ι] t, if ( (cos (orιentat on_away_f rom_center) *
(good_rays [1] . x-estιmated_ctr_ιmg- ?x, - sin (orιentatιon_away_f rom_center ! * (good_rays [i] . y-estιmated_ctr_ιmg- >y < 3, oπentatιon_away_f rom_center ■>•= PI; end_pomt [i] .x = start_poιnt [i] .x +
MAX_LENGTH_D I AGONAL_ACROS S _C I ?_BUFFE * cos (orιentatιon_away_f rom_center) ; end_pomt[i] .y = start_poιnt [i] .y +
MAX_LENGTH_DIAGONAL_ACROSS_CIP_BUFFER* sin (oπentation_away_f rom_center ) ,- if (cwa_f ind_target_debug_flag _ Oxl) { printf ( "center f %f good_ray %f %f %f distance est mated_ctr_ιmg- >x, estιmated_ctr_ιmg- >γ good_rays [ i ] . , gocd_rays [ ] y , gccd_rays dιstance_to_center) ; pπ.itf "%f %f %f Vf n" , s tart_pc t [ i ', x, start_pomt i l y end oci.it ill κ , end ccm i l v ,
enumera t e_pα mt s_and_f it _!me_along_r ay
(img, _start_pcmt [i] , ier,d_pcιnt [ ] , _gccd_rays '.-'. , tιght_threshold,
STΞP_SIZE_USΞD_TO_ENUMERATΞ_ALL_?CINTS, &clp_f md_carget_f me_search, /*_clp_f ιnd_target_ccmpute_subpιxel , * &computed_lιre [i] , &enumerated_pcιnts - '0] , -cnum_enumerated_po nts r. " , 1, . ιmg_angle[ι; = ccmrjucεd_Ime ii] . t, }
/* For each pair of opposite rays, use the closest points sc that
* we use the same number of points on botn sides of tne
* crosshair center
* Since we don't know why the last edge point became invalid, we
* skip the neighboring 2 points just to be on the safe side */ num_enumerated_poιnts [0] = num_enumerated_poιnts [2] = mιn(num_enumerated_poιnts [0] , num_enumerated_poιncs [2] ' -CUTLIΞR_3UFFΞR; num_enumerated_pomts [1] = num_enumerated_pcir.es [3 = mm (num_enumerated_poιnts [1] , num_enumerated_po ts { 3 ] -CUTLIER_3UFFΞR ;
/* Re-fit lines te these points * avg_squared_error = 0 ; for (i = 0; i < ,- i+-) { double fit error; fιt_error = f ιt_lιne_to_poιnts
(&enumerated_poιnts [i] [0] , num_enumerated_poιn &computed_lιne [1] ) , avg_squared_error += fιt_error'4; ιmg_angle[ι] = computed_ime [i] . t,
}
*f eature_scαre = cz_map_average_squared_error_co_max_lOOO (avg_squared_error)
/* Average the four rays by averaging the positions and * orientations */ computed_lme [0] .x (computed_i e [0] . x ccmputec . e t_J .:<; computed_lιne [0] .y (computed_lme [0] . y computed . e [2] y> computed_l e [ 1 ] . x (computed_ime [1] . x computed . e [2] .x, computed line [1] .y (computed line '1] y ccmcuteα angle_hoπz_lme = average_f our_ιmage_orιentatιcns calm, ιmg_ang_e computed_lme [0] t = angle_hoπz_lme ; computed lme[l] t = image normal_tc image angle oa
cwa lmes_to_poιnt ( -tccmputed_ime [0] , -ccmpuced_iιr.e ] , ecmp.
*computed_or entatιon_phys = trans f crm_ar.gle_accordir.g_te_calibraticr._map
( ( (pπv_cwa_caiιb ** calib > - >map , angie_horιz_iιne ,
\
/* cwa_fmd_target ' ) is the top level function for localizing
* the crosshair target. It returns 0 if it cannot find tne crossnair.
* 1) use search to improve position estimate of crossnair center
* 2) compute four good rays eminat g from the origin by sampling
* edge points within the guaranteed good region along the user
* specified orientation
* 3) Enumerate subpixei edge points along the four rays emulating
* from the the origin and use tne found good rays to identify outliers
* cwa_f d_target tries to localize a crosshair target within an
* image and returns 1 if it was successful and 0 otherwise
* img - cip_buffer containing crosshair target
* calib - characterizing field of v ew
* angle - initial orientation estimate of crosshair target
* result - pointer to cwa_lme structure where result will be stored */ int cwa_fιnd_target_mtemal
(cιp_buffer *ιmg, cwa_calιb *calιb, double angle, cwa_lιne *result, cwa status *status) cwa_po nt gross_pos_estιmate, improved_pos__est mate double ιmproved_phys_orιentatιon_estιmate cwa_lιne good_rays [4] , t i , ans , cct_sιgnal sig; ctm_tιmer time;
Figure imgf000117_0001
cwa_l e tmp; cwa_calιb *calιb_ιdentιty=NULL,
NO_REGISTER(calιb_ιdentιty) ; cu_clear (status, sizeof (cwa_status) ) ,
/* CWA_ERR_NO_INTERSECTION is used to signify when we cannot find
* any edge points for an expected horizontal or vertical line */ if iSig = cct_catcn,0' gcco done,
,' * expand tne stack to make room for tne 4323 icicl am 3 *
* allocate
cu_expand_stack ! IS ! , f ( : result) result = itmp, if (limg) cct_errcr ,CGEN_ΞRR_3ADARG
/* If no calibration object s passeα, τ.aκe one so cr.at t.erf
* only one path tr.rcugn tne cone
*/ if (! calib) calib = calιb_ιdentιty = cwa_caiιb_make 3 ,0 , 0. , 1. , 1. , 0) ,
/* 1) use search */ ctm_begm ( itime) ; ans = gross_locate_crosshaιr_cencer_m_ιmage
( mg, (int) (10. * angle*130 'PI ' , igrcss_pcs_estιmate , if Cans) cct_error (CWA_ERR_NO_INTERSECT^
{ cιp_buffer win; double x_edge, y_edge;
Figure imgf000117_0002
double angle_offse , cct signal sig;
int i, double left [4] ={-2. , -2 ,2. ,2.}, double top [4] ={-2. , 2 ,2., -2.;, cιp_wmdow(ιmg, _wm, gross_pos_estιmate . x- 25 , gross_pcs_escιma e '/- 35 , "C for (l = 0 ; 1 < 4 ; 1++) { if ( cz_cebt_start_retum_success_val
( iw n, 3 , 3 , left [i] , op [ ] , s_<_edge , iy_edge , iedge val , _dιr , &angle_of f set ) ) brea ;
} status- >contrast_score=cz_map_edge_val_to_max_1000 (edσe val'
    t0 = ctm_read(&time);
    if (cwa_find_target_debug_flag & 0x1) {
        printf("gross pos: ");
        cd_showas((int) &gross_pos_estimate, "cwa_point");
    }

    /* 2) improve the position and orientation estimate and compute
     * the four good rays */
    compute_center_position_and_angle_assuming_four_quadrants
        (img, calib, &gross_pos_estimate,
         transform_angle_according_to_calibration_map
             (((priv_cwa_calib *) calib)->map, angle),
         (double) GUARANTEED_GOOD_SIZE,
         &improved_pos_estimate, &improved_phys_orientation_estimate,
         good_rays);

    t1 = ctm_read(&time);
    if (cwa_find_target_debug_flag & 0x1) {
        printf("improved_pos_estimate: ");
        cd_showas((int) &improved_pos_estimate, "cwa_point");
    }

    /* 3) Enumerate edge points along those four good rays and use
     * those lines to compute the final position and orientation estimates */
    compute_center_position_and_angle_along_guaranteed_good_rays
        (img, calib, &improved_pos_estimate, good_rays, WEIGHT_THRESHOLD,
         (cwa_point *) result, &(result->t), &status->feature_score);

    t2 = ctm_read(&time);
    cz_transformPoint
        (((priv_cwa_calib *) calib)->map, result->x, result->y,
         &result->x, &result->y);
    if (cwa_find_target_debug_flag & 0x1) {
        printf("result: ");
        cd_showas((int) result, "cwa_point");
    }

    if (cwa_find_target_debug_flag & 0x4) {
        printf("Times:\n");
        printf("  gross locate: %d\n", t0);
        printf("  fine locate: %d\n", t1 - t0);
        printf("  finer locate: %d\n", t2 - t1);
    }

done:
    cu_expand_stack(-16);
    if (calib_identity) cwa_calib_delete(calib_identity);
    if (sig) cct_throw(sig);

    /* Make returned angle agree with given angle argument */
    for (i = -8; i <= 8; i++)
        if ((i != 0) &&
            (fabs(result->t + i * PI - angle) < fabs(result->t - angle)))
            result->t += i * PI;
    return 1;
}

int cwa_find_target
    (cip_buffer *img, cwa_calib *calib, double angle, cwa_line *result,
     cwa_status *status)
{
    cwa_status tmp_status;
    cct_signal sig;

    if (!status) status = &tmp_status;

    cct_catch(CGEN_ERR_BADARG);
    cct_catch(CWA_ERR_BAD_LIGHT);
    cct_catch(CWA_ERR_TRACKER);
    if (sig = cct_catch(0)) {
        if (sig == CGEN_ERR_BADARG) {
            status->error_flags |= CWA_ERR_BADARG;
            cct_end(sig);
            cct_error(sig);
        }
        status->error_flags |= CWA_ERR_FAILED_TO_FIND_TARGET;
        return status->found = 0;
    }
    cwa_find_target_internal(img, calib, angle, result, status);
    return status->found = 1;
}
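The listing above reduces the crosshair to two fitted lines, one per stroke, and intersects them to obtain the center (the call to cwa_lines_to_point). As a purely illustrative aside, and not the routine above, the intersection of two lines given in point-plus-angle form can be computed as follows; the demo_* type and function names are hypothetical and are not the cwa_* structures of the listing:

#include <math.h>
#include <stdio.h>

/* Illustrative types only -- not the cwa_line / cwa_point structures above. */
typedef struct { double x, y, t; } demo_line;   /* point (x, y) plus angle t in radians */
typedef struct { double x, y; } demo_point;

/* Intersect two lines, each given as a point it passes through and an
 * orientation.  Returns 0 on success, -1 if the lines are nearly parallel. */
static int demo_lines_to_point(const demo_line *a, const demo_line *b,
                               demo_point *out)
{
    double ca = cos(a->t), sa = sin(a->t);
    double cb = cos(b->t), sb = sin(b->t);
    double det = cb * sa - ca * sb;             /* cross product of the two directions */
    double dx, dy, s;

    if (fabs(det) < 1e-12)
        return -1;
    /* Solve a + s*(ca,sa) = b + u*(cb,sb) for s by Cramer's rule. */
    dx = b->x - a->x;
    dy = b->y - a->y;
    s = (cb * dy - sb * dx) / det;
    out->x = a->x + s * ca;
    out->y = a->y + s * sa;
    return 0;
}

int main(void)
{
    const double pi = acos(-1.0);
    demo_line horiz = { 0.0, 1.0, 0.0 };        /* horizontal line through (0, 1) */
    demo_line vert  = { 2.0, 0.0, pi / 2.0 };   /* vertical line through (2, 0)   */
    demo_point c;

    if (demo_lines_to_point(&horiz, &vert, &c) == 0)
        printf("center estimate: (%g, %g)\n", c.x, c.y);   /* expect (2, 1) */
    return 0;
}

Because the two strokes of a crosshair target are nominally perpendicular, the determinant is well conditioned and the near-parallel guard is rarely triggered.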
Described herein are calibration targets and methods for determining the location and orientation thereof meeting the objects set forth above. It will be appreciated that the embodiments shown in the drawings and discussed above are merely illustrative of the invention, and that other embodiments incorporating changes therein fall within the scope of the invention, of which we claim:

Claims

1. A machine vision method of determining a location of a reference point in an image of a calibration target, the method comprising the steps of
A. generating an image of a target that includes
i. two or more regions, each region being defined by at least two linear edges that are directed toward a reference point;
ii. at least one of the regions having a different imageable characteristic from an adjacent region;
B. identifying edges in the image corresponding to lines in the calibration target;
C. determining a location where lines fitted to those edges intersect.
2. A method according to claim 1, wherein step (B) comprises the step of identifying edges in the image by applying an edge detection vision tool to the image.
3. A method according to claim 1, wherein step (B) comprises identifying edges in the image by applying a caliper vision tool to the image.
4. A method according to claim 1, wherein step (B) comprises identifying edges in the image by applying a first difference operator vision tool to the image, and by applying a peak detector vision tool to an output of the first difference operator vision tool.
5. A method according to claim 1, wherein step (B) comprises identifying edges in the image by applying a caliper tool to the image beginning at an approximate location of the reference point.
6. A method according to claim 5, comprising the step of determining an approximate location of the reference point by applying a Hough line vision tool to the image and finding an intersection of lines identified thereby.
7. A method according to claim 5, comprising the step of determining an approximate location of the reference point by applying a correlation vision tool to the image using a template substantially approximating an expected pattern of edges in the image.
8. A method according to claim 5, comprising the step of determining an approximate location of the reference point by applying a projection vision tool to the image along one or more axes with which the edges are expected to be aligned.
9. A method according to claim 1, comprising the steps of
identifying edges in the image corresponding to lines in the calibration target; and
determining an orientation of the target based on the angle of those identified edges.
10. A method according to claim 1, wherein
step (B) includes the step of applying a Hough line vision tool to identify edges in the image corresponding to lines in the calibration target; and
the method further including the step of determining an orientation of the target based on the angle of those identified edges.
11. A method according to claim 1, comprising
A. applying a Sobel edge tool to the image to generate at least a Sobel angle image; and
B. taking an angle histogram of the Sobel angle image to determine an orientation of the target.
12. A machine vision method of determining an orientation of a calibration target in an image, the method comprising the steps of
A. generating an image of a target that includes
i. two or more regions, each region being defined by at least two linear edges that are directed toward a position;
ii. at least one of the regions having a different imageable characteristic from an adjacent region;
B. identifying edges in the image corresponding to lines in the calibration target;
C. determining an orientation of the target based on the angle of those identified edges.
13. A method according to claim 12, wherein
step (B) includes applying a Hough line vision tool to the image to identify therein edges in the image corresponding to lines in the calibration target; and
step (C) includes determining an orientation of the target based on the angle of those identified edges.
14. A method according to claim 12, wherein
step (B) includes applying a Sobel edge tool to generate at least a Sobel angle image; and step (C) includes taking an angle histogram of the Sobel angle image to determine an orientation of the target.
15. A method according to any of claims 1 and 12, wherein step (A) includes the step of generating an image of a target in which each region has any of a different color, contrast, brightness and stippling from regions adjacent thereto.
16. A method according to claim 15, for use with a machine vision system having image capture means for capturing an image of the target, wherein each region has a different characteristic in such an image than the regions adjacent thereto.
17. A method according to any of claims 1 and 12, wherein step (A) includes the step of generating an image of a target for which the reference point is at a center thereof.
18. A method according to claim 17, wherein step (A) includes the step of generating an image of a target for which the linear edges substantially meet at the reference point.
19. A method according to any of claims 1 and 12, wherein step (A) includes the step of generating an image of a target having four imageable regions.
20. A method according to claim 19, wherein step (A) includes the step of generating an image of a four-region target in which the at least two linear edges of each of the regions are perpendicular to one another.
21. A machine vision method of determining a location and orientation of a calibration target in an image thereof, the method comprising the steps of
A. generating an image of a target that includes i. two or more regions, each region being defined by at least two linear edges that are directed toward a reference point;
ii. at least one of the regions having a different imageable characteristic from an adjacent region;
B. generating, based on analysis of the image, an estimate of an orientation of the target therein;
C. generating, based on analysis of the image, an estimate of a location of the reference point therein;
D. refining, based on analysis of the image, estimates of at least one of the location of the reference point in the image and the orientation of the target in the image.
22. A method according to claim 21, wherein step (B) includes one or more of the following steps:
i. applying a Hough line vision tool to the image to find an angle of edges in the image;
ii. applying a Sobel edge tool to the image to generate a Sobel angle image, generating a histogram of the angles represented in that Sobel angle image, and determining a one-dimensional correlation thereof; and
iii. inputting an orientation from the user.
23. A method according to claim 22, wherein step (C) includes one or more of the following steps: i. applying a Hough vision tool to the image to find an approximate location of edges therein, and finding an intersection of lines defined by those edges;
ii. applying a projection vision tool to the image to find an approximate location of edges therein, and finding an intersection of lines defined by those edges;
iii. applying a correlation vision tool to the image to find approximate coordinates of the reference point; and
iv. determining a sum of absolute differences to find approximate coordinates of the reference point.
24. A method according to claim 23, wherein step (D) includes the step of invoking the following steps in sequence, at least one time:
i. applying a caliper vision tool along each of the expected edges in the image, where those edges are defined by a current estimate of the reference point location and a current estimate of the target orientation; and
ii. fitting lines to edge points determined by the caliper vision tool.
25. A method according to claim 24, wherein step (D)(i) includes the step of applying the caliper vision tool along each of the expected edges until any of the following conditions is met: no edge is found by the caliper vision tool at a sample point; more than one edge is found by the caliper vision tool at a sample point; a distance between a sample edge point and a nominal line is larger than a threshold; or, the caliper vision tool window for the sample edge point extends outside of the image.
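Claims 11, 14 and 22 estimate the target's orientation by applying a Sobel edge tool and taking an angle histogram of the resulting Sobel angle image. The following is only a minimal sketch of that idea, not the claimed implementation: it assumes a small grayscale buffer, an arbitrary gradient-magnitude threshold, and a hypothetical dominant_orientation() helper, and it folds gradient angles modulo 90 degrees so that the two perpendicular edge families of a crosshair vote for the same bin:

#include <math.h>
#include <stdio.h>

#define W 8
#define H 8
#define NBINS 90   /* one bin per degree, folded modulo 90 degrees */

/* Estimate the dominant edge orientation of a small grayscale image by
 * histogramming Sobel gradient angles folded modulo 90 degrees. */
static double dominant_orientation(unsigned char img[H][W])
{
    const double pi = acos(-1.0);
    int hist[NBINS] = {0};
    int x, y, b, best = 0;

    for (y = 1; y < H - 1; y++) {
        for (x = 1; x < W - 1; x++) {
            /* 3x3 Sobel responses */
            int gx = -img[y-1][x-1] + img[y-1][x+1]
                     - 2*img[y][x-1] + 2*img[y][x+1]
                     - img[y+1][x-1] + img[y+1][x+1];
            int gy = -img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1]
                     + img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1];
            double ang;
            int bin;

            if (gx * gx + gy * gy < 100)   /* arbitrary threshold: skip weak gradients */
                continue;
            ang = atan2((double) gy, (double) gx) * 180.0 / pi;
            bin = (((int) floor(ang + 0.5)) % 90 + 90) % 90;
            hist[bin % NBINS]++;
        }
    }
    for (b = 1; b < NBINS; b++)
        if (hist[b] > hist[best]) best = b;
    return (double) best;                  /* degrees, modulo 90 */
}

int main(void)
{
    unsigned char img[H][W];
    int x, y;

    /* Synthetic test pattern: a step edge along the anti-diagonal. */
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            img[y][x] = (unsigned char) ((x + y < 8) ? 255 : 0);

    printf("dominant orientation: %.0f degrees (mod 90)\n",
           dominant_orientation(img));     /* expect about 45 */
    return 0;
}

Folding modulo 90 degrees exploits the fourfold symmetry of the target; the histogram peak therefore gives the orientation only up to that symmetry, which is one reason the code earlier in this document reconciles the returned angle with the caller's initial estimate.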
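Claims 1 and 24 fit lines to identified edge points (for example, points returned by a caliper vision tool) before intersecting or refining them. A self-contained sketch of one such fit, a principal-axis (total least squares) fit that returns the line in point-plus-angle form, is given below; the demo_* names are illustrative only and are not part of the claimed method:

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } demo_pt;

/* Fit a line, in point-plus-angle form, to a set of edge points using a
 * principal-axis (total least squares) fit: the line passes through the
 * centroid of the points, oriented along the direction of greatest scatter. */
static void demo_fit_line(const demo_pt *p, int n,
                          double *cx, double *cy, double *angle)
{
    double sx = 0., sy = 0., sxx = 0., syy = 0., sxy = 0.;
    int i;

    for (i = 0; i < n; i++) { sx += p[i].x; sy += p[i].y; }
    *cx = sx / n;
    *cy = sy / n;
    for (i = 0; i < n; i++) {
        double dx = p[i].x - *cx, dy = p[i].y - *cy;
        sxx += dx * dx;
        syy += dy * dy;
        sxy += dx * dy;
    }
    *angle = 0.5 * atan2(2.0 * sxy, sxx - syy);   /* radians */
}

int main(void)
{
    /* Edge points scattered around the line y = x + 1 */
    demo_pt edge[4] = { {0.0, 1.0}, {1.0, 2.1}, {2.0, 2.9}, {3.0, 4.0} };
    const double pi = acos(-1.0);
    double cx, cy, ang;

    demo_fit_line(edge, 4, &cx, &cy, &ang);
    printf("line through (%.2f, %.2f) at %.1f degrees\n",
           cx, cy, ang * 180.0 / pi);             /* roughly 45 degrees */
    return 0;
}

A total least squares fit is chosen here because edge points scatter perpendicular to the edge, so minimizing perpendicular rather than vertical distance keeps the fit well behaved for near-vertical edges.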
PCT/US1997/018268 1996-10-07 1997-10-07 Machine vision calibration targets and methods of determining their location and orientation in an image WO1998018117A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP51943098A JP2001524228A (en) 1996-10-07 1997-10-07 Machine vision calibration target and method for determining position and orientation of target in image
EP97910863A EP0883857A2 (en) 1996-10-07 1997-10-07 Machine vision calibration targets and methods of determining their location and orientation in an image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/726,521 US6137893A (en) 1996-10-07 1996-10-07 Machine vision calibration targets and methods of determining their location and orientation in an image
US08/726,521 1996-10-07

Publications (2)

Publication Number Publication Date
WO1998018117A2 true WO1998018117A2 (en) 1998-04-30
WO1998018117A3 WO1998018117A3 (en) 1998-07-02

Family

ID=24918945

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/018268 WO1998018117A2 (en) 1996-10-07 1997-10-07 Machine vision calibration targets and methods of determining their location and orientation in an image

Country Status (4)

Country Link
US (1) US6137893A (en)
EP (1) EP0883857A2 (en)
JP (1) JP2001524228A (en)
WO (1) WO1998018117A2 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19921778A1 (en) * 1999-03-24 2000-10-05 Anitra Medienprojekte Gmbh Process, support for samples and reading device for two-dimensional position determination on surfaces and the associated triggering of program processes
WO2001033504A1 (en) * 1999-10-29 2001-05-10 Cognex Corporation Method and apparatus for locating objects using universal alignment targets
WO2001039124A2 (en) * 1999-11-23 2001-05-31 Canon Kabushiki Kaisha Image processing apparatus
US6384907B1 (en) 1999-07-08 2002-05-07 Bae Systems Plc Optical target and apparatus and method for automatic identification thereof
US6903726B1 (en) 1999-03-24 2005-06-07 Anjowiggins Papiers Couches Method and system for determining positions on a document
US6912490B2 (en) 2000-10-27 2005-06-28 Canon Kabushiki Kaisha Image processing apparatus
US6975326B2 (en) 2001-11-05 2005-12-13 Canon Europa N.V. Image processing apparatus
US7043055B1 (en) 1999-10-29 2006-05-09 Cognex Corporation Method and apparatus for locating objects using universal alignment targets
US7079679B2 (en) 2000-09-27 2006-07-18 Canon Kabushiki Kaisha Image processing apparatus
US7120289B2 (en) 2000-10-27 2006-10-10 Canon Kabushiki Kaisha Image generation method and apparatus
US7492476B1 (en) 1999-11-23 2009-02-17 Canon Kabushiki Kaisha Image processing apparatus
US7561164B2 (en) 2002-02-28 2009-07-14 Canon Europa N.V. Texture map editing
US7620234B2 (en) 2000-10-06 2009-11-17 Canon Kabushiki Kaisha Image processing apparatus and method for generating a three-dimensional model of an object from a collection of images of the object recorded at different viewpoints and segmented using semi-automatic segmentation techniques
WO2010044939A1 (en) * 2008-10-17 2010-04-22 Honda Motor Co., Ltd. Structure and motion with stereo using lines
US7773773B2 (en) 2006-10-18 2010-08-10 Ut-Battelle, Llc Method and system for determining a volume of an object from two-dimensional images
CN102288606A (en) * 2011-05-06 2011-12-21 山东农业大学 Pollen viability measuring method based on machine vision
EP2437495A1 (en) * 2009-05-27 2012-04-04 Aisin Seiki Kabushiki Kaisha Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus
US8855406B2 (en) 2010-09-10 2014-10-07 Honda Motor Co., Ltd. Egomotion using assorted features
TWI507678B (en) * 2014-04-09 2015-11-11 Inventec Energy Corp Device and method for identifying an object
WO2023141903A1 (en) * 2022-01-27 2023-08-03 Cognex Vision Inspection System (Shanghai) Co., Ltd. Easy line finder based on dynamic time warping method

Families Citing this family (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7016539B1 (en) 1998-07-13 2006-03-21 Cognex Corporation Method for fast, robust, multi-dimensional pattern recognition
US6650779B2 (en) * 1999-03-26 2003-11-18 Georgia Tech Research Corp. Method and apparatus for analyzing an image to detect and identify patterns
US6658164B1 (en) * 1999-08-09 2003-12-02 Cross Match Technologies, Inc. Calibration and correction in a fingerprint scanner
US6813376B1 (en) * 1999-10-29 2004-11-02 Rudolph Technologies, Inc. System and method for detecting defects on a structure-bearing surface using optical inspection
US6812933B1 (en) * 1999-10-29 2004-11-02 Cognex Technology And Investment Method for rendering algebraically defined two-dimensional shapes by computing pixel intensity using an edge model and signed distance to the nearest boundary
US6744913B1 (en) * 2000-04-18 2004-06-01 Semiconductor Technology & Instruments, Inc. System and method for locating image features
US6816625B2 (en) 2000-08-16 2004-11-09 Lewis Jr Clarence A Distortion free image capture system and method
US6728582B1 (en) 2000-12-15 2004-04-27 Cognex Corporation System and method for determining the position of an object in three dimensions using a machine vision system with two cameras
US6751338B1 (en) 2000-12-15 2004-06-15 Cognex Corporation System and method of using range image data with machine vision tools
US6771808B1 (en) 2000-12-15 2004-08-03 Cognex Corporation System and method for registering patterns transformed in six degrees of freedom using machine vision
US6681151B1 (en) 2000-12-15 2004-01-20 Cognex Technology And Investment Corporation System and method for servoing robots based upon workpieces with fiducial marks using machine vision
US6751361B1 (en) * 2000-12-22 2004-06-15 Cognex Corporation Method and apparatus for performing fixturing in a machine vision system
US6997387B1 (en) * 2001-03-28 2006-02-14 The Code Corporation Apparatus and method for calibration of projected target point within an image
US7003161B2 (en) * 2001-11-16 2006-02-21 Mitutoyo Corporation Systems and methods for boundary detection in images
US6798515B1 (en) 2001-11-29 2004-09-28 Cognex Technology And Investment Corporation Method for calculating a scale relationship for an imaging system
US7085432B2 (en) * 2002-06-10 2006-08-01 Lockheed Martin Corporation Edge detection using Hough transformation
US8218000B2 (en) * 2002-09-13 2012-07-10 Leica Instruments (Singapore) Pte. Ltd. Method and system for size calibration
DE10242628B4 (en) * 2002-09-13 2004-08-12 Leica Microsystems (Schweiz) Ag Size calibration method and system
US7957554B1 (en) * 2002-12-31 2011-06-07 Cognex Technology And Investment Corporation Method and apparatus for human interface to a machine vision system
US8081820B2 (en) 2003-07-22 2011-12-20 Cognex Technology And Investment Corporation Method for partitioning a pattern into optimized sub-patterns
US7190834B2 (en) 2003-07-22 2007-03-13 Cognex Technology And Investment Corporation Methods for finding and characterizing a deformed pattern in an image
JP3714350B2 (en) * 2004-01-27 2005-11-09 セイコーエプソン株式会社 Human candidate region extraction method, human candidate region extraction system, and human candidate region extraction program in image
JP4428067B2 (en) * 2004-01-28 2010-03-10 ソニー株式会社 Image collation apparatus, program, and image collation method
US20050175217A1 (en) * 2004-02-05 2005-08-11 Mueller Louis F. Using target images to determine a location of a stage
US7522306B2 (en) * 2004-02-11 2009-04-21 Hewlett-Packard Development Company, L.P. Method and apparatus for generating a calibration target on a medium
US20050197860A1 (en) * 2004-02-23 2005-09-08 Rademr, Inc. Data management system
JP2005267457A (en) * 2004-03-19 2005-09-29 Casio Comput Co Ltd Image processing device, imaging apparatus, image processing method and program
US7075097B2 (en) * 2004-03-25 2006-07-11 Mitutoyo Corporation Optical path array and angular filter for translation and orientation sensing
US7551765B2 (en) * 2004-06-14 2009-06-23 Delphi Technologies, Inc. Electronic component detection system
DE602005004332T2 (en) 2004-06-17 2009-01-08 Cadent Ltd. Method for providing data related to the oral cavity
US7522163B2 (en) * 2004-08-28 2009-04-21 David Holmes Method and apparatus for determining offsets of a part from a digital image
US8437502B1 (en) 2004-09-25 2013-05-07 Cognex Technology And Investment Corporation General pose refinement and tracking tool
WO2006065563A2 (en) * 2004-12-14 2006-06-22 Sky-Trax Incorporated Method and apparatus for determining position and rotational orientation of an object
US7606437B2 (en) * 2005-01-11 2009-10-20 Eastman Kodak Company Image processing based on ambient air attributes
US8111904B2 (en) 2005-10-07 2012-02-07 Cognex Technology And Investment Corp. Methods and apparatus for practical 3D vision system
US7724942B2 (en) * 2005-10-31 2010-05-25 Mitutoyo Corporation Optical aberration correction for machine vision inspection systems
US8311311B2 (en) 2005-10-31 2012-11-13 Mitutoyo Corporation Optical aberration correction for machine vision inspection systems
US7965887B2 (en) * 2005-12-01 2011-06-21 Cognex Technology And Investment Corp. Method of pattern location using color image data
US7570800B2 (en) * 2005-12-14 2009-08-04 Kla-Tencor Technologies Corp. Methods and systems for binning defects detected on a specimen
US7732768B1 (en) * 2006-03-02 2010-06-08 Thermoteknix Systems Ltd. Image alignment and trend analysis features for an infrared imaging system
US8162584B2 (en) 2006-08-23 2012-04-24 Cognex Corporation Method and apparatus for semiconductor wafer alignment
US8126260B2 (en) * 2007-05-29 2012-02-28 Cognex Corporation System and method for locating a three-dimensional object using machine vision
WO2010016379A1 (en) * 2008-08-05 2010-02-11 アイシン精機株式会社 Target position identifying apparatus
JP5339124B2 (en) * 2008-09-30 2013-11-13 アイシン精機株式会社 Car camera calibration system
EP3975138A1 (en) * 2008-10-06 2022-03-30 Mobileye Vision Technologies Ltd. Bundling of driver assistance systems
US9533418B2 (en) 2009-05-29 2017-01-03 Cognex Corporation Methods and apparatus for practical 3D vision system
CN101794373B (en) * 2009-12-30 2012-07-04 上海维宏电子科技股份有限公司 Application method of rotating and sub-pixel matching algorithm to machine vision system
JP5371015B2 (en) * 2010-04-08 2013-12-18 独立行政法人産業技術総合研究所 Cross mark detection apparatus and method, and program
US9393694B2 (en) 2010-05-14 2016-07-19 Cognex Corporation System and method for robust calibration between a machine vision system and a robot
US20120327214A1 (en) * 2011-06-21 2012-12-27 HNJ Solutions, Inc. System and method for image calibration
US9349033B2 (en) * 2011-09-21 2016-05-24 The United States of America, as represented by the Secretary of Commerce, The National Institute of Standards and Technology Standard calibration target for contactless fingerprint scanners
FR2986326B1 (en) * 2012-01-27 2014-03-14 Msc & Sgcc OPTICAL METHOD FOR INSPECTING TRANSPARENT OR TRANSLUCENT ARTICLES TO ASSIGN AN OPTICAL REFERENCE ADJUSTMENT TO THE VISION SYSTEM
US9946947B2 (en) * 2012-10-31 2018-04-17 Cognex Corporation System and method for finding saddle point-like structures in an image and determining information from the same
US9189702B2 (en) 2012-12-31 2015-11-17 Cognex Corporation Imaging system for determining multi-view alignment
US9440313B2 (en) 2013-03-12 2016-09-13 Serenity Data Security, Llc Hard drive data destroying device
US9679224B2 (en) 2013-06-28 2017-06-13 Cognex Corporation Semi-supervised method for training multiple pattern recognition and registration tool models
US9833962B2 (en) 2014-02-26 2017-12-05 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for controlling manufacturing processes
US9286695B2 (en) 2014-03-13 2016-03-15 Bendix Commercial Vehicle Systems Llc Systems and methods for tracking points within an encasement
US9675430B2 (en) 2014-08-15 2017-06-13 Align Technology, Inc. Confocal imaging apparatus with curved focal surface
US9596459B2 (en) * 2014-09-05 2017-03-14 Intel Corporation Multi-target camera calibration
EP3054265B1 (en) * 2015-02-04 2022-04-20 Hexagon Technology Center GmbH Coordinate measuring machine
WO2017004573A1 (en) 2015-07-02 2017-01-05 Serenity Data Services, Inc. Product verification for hard drive data destroying device
US11167384B2 (en) 2015-07-02 2021-11-09 Serenity Data Security, Llc Hard drive non-destructive dismantling system
WO2017004575A1 (en) 2015-07-02 2017-01-05 Serenity Data Services, Inc. Hard drive dismantling system
US10937168B2 (en) 2015-11-02 2021-03-02 Cognex Corporation System and method for finding and classifying lines in an image with a vision system
US10152780B2 (en) 2015-11-02 2018-12-11 Cognex Corporation System and method for finding lines in an image with a vision system
CN105528789B (en) * 2015-12-08 2018-09-18 深圳市恒科通机器人有限公司 Robot visual orientation method and device, vision calibration method and device
EP3467428A4 (en) * 2016-05-30 2019-05-08 Sony Corporation Information processing device, information processing method, program, and image capturing system
US20180240250A1 (en) * 2017-02-21 2018-08-23 Wipro Limited System and method for assisted pose estimation
US11135449B2 (en) 2017-05-04 2021-10-05 Intraop Medical Corporation Machine vision alignment and positioning system for electron beam treatment systems
US10762405B2 (en) 2017-10-26 2020-09-01 Datalogic Ip Tech S.R.L. System and method for extracting bitstream data in two-dimensional optical codes
US10650584B2 (en) 2018-03-30 2020-05-12 Konica Minolta Laboratory U.S.A., Inc. Three-dimensional modeling scanner
US11291507B2 (en) 2018-07-16 2022-04-05 Mako Surgical Corp. System and method for image based registration and calibration
US10735665B2 (en) * 2018-10-30 2020-08-04 Dell Products, Lp Method and system for head mounted display infrared emitter brightness optimization based on image saturation
CN111428731B (en) * 2019-04-04 2023-09-26 深圳市联合视觉创新科技有限公司 Multi-category identification positioning method, device and equipment based on machine vision
CN110909668B (en) * 2019-11-20 2021-02-19 广州极飞科技有限公司 Target detection method and device, computer readable storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5027419A (en) * 1989-03-31 1991-06-25 Atomic Energy Of Canada Limited Optical images by quadrupole convolution
US5113565A (en) * 1990-07-06 1992-05-19 International Business Machines Corp. Apparatus and method for inspection and alignment of semiconductor chips and conductive lead frames
US5179419A (en) * 1991-11-22 1993-01-12 At&T Bell Laboratories Methods of detecting, classifying and quantifying defects in optical fiber end faces
US5371690A (en) * 1992-01-17 1994-12-06 Cognex Corporation Method and apparatus for inspection of surface mounted devices
US5553859A (en) * 1995-03-22 1996-09-10 Lazer-Tron Corporation Arcade game for sensing and validating objects

Family Cites Families (138)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3816722A (en) * 1970-09-29 1974-06-11 Nippon Electric Co Computer for calculating the similarity between patterns and pattern recognition system comprising the similarity computer
JPS5425782B2 (en) * 1973-03-28 1979-08-30
US3967100A (en) * 1973-11-12 1976-06-29 Naonobu Shimomura Digital function generator utilizing cascade accumulation
US3968475A (en) * 1974-11-11 1976-07-06 Sperry Rand Corporation Digital processor for extracting data from a binary image
JPS5177047A (en) * 1974-12-27 1976-07-03 Naonobu Shimomura
US4011403A (en) * 1976-03-30 1977-03-08 Northwestern University Fiber optic laser illuminators
CH611017A5 (en) * 1976-05-05 1979-05-15 Zumbach Electronic Ag
US4183013A (en) * 1976-11-29 1980-01-08 Coulter Electronics, Inc. System for extracting shape features from an image
JPS5369063A (en) * 1976-12-01 1978-06-20 Hitachi Ltd Detector of position alignment patterns
US4385322A (en) * 1978-09-01 1983-05-24 View Engineering, Inc. Pattern recognition apparatus and method
US4200861A (en) * 1978-09-01 1980-04-29 View Engineering, Inc. Pattern recognition apparatus and method
JPS5579590A (en) * 1978-12-13 1980-06-16 Hitachi Ltd Video data processor
US4300164A (en) * 1980-03-21 1981-11-10 View Engineering, Inc. Adaptive video processor
JPS57102017A (en) * 1980-12-17 1982-06-24 Hitachi Ltd Pattern detector
US4441124A (en) * 1981-11-05 1984-04-03 Western Electric Company, Inc. Technique for inspecting semiconductor wafers for particulate contamination
DE3267548D1 (en) * 1982-05-28 1986-01-02 Ibm Deutschland Process and device for an automatic optical inspection
US4534813A (en) * 1982-07-26 1985-08-13 Mcdonnell Douglas Corporation Compound curve-flat pattern process
US4736437A (en) * 1982-11-22 1988-04-05 View Engineering, Inc. High speed pattern recognizer
US4577344A (en) * 1983-01-17 1986-03-18 Automatix Incorporated Vision system
US4783829A (en) * 1983-02-23 1988-11-08 Hitachi, Ltd. Pattern recognition apparatus
GB8311813D0 (en) * 1983-04-29 1983-06-02 West G A W Coding and storing raster scan images
US4581762A (en) * 1984-01-19 1986-04-08 Itran Corporation Vision inspection system
US4606065A (en) * 1984-02-09 1986-08-12 Imaging Technology Incorporated Image processing-system
US4541116A (en) * 1984-02-27 1985-09-10 Environmental Research Institute Of Mi Neighborhood image processing stage for implementing filtering operations
US4860374A (en) * 1984-04-19 1989-08-22 Nikon Corporation Apparatus for detecting position of reference pattern
US4688088A (en) * 1984-04-20 1987-08-18 Canon Kabushiki Kaisha Position detecting device and method
EP0163885A1 (en) * 1984-05-11 1985-12-11 Siemens Aktiengesellschaft Segmentation device
JPS6134583A (en) * 1984-07-26 1986-02-18 シャープ株式会社 Lighting apparatus
US4953224A (en) * 1984-09-27 1990-08-28 Hitachi, Ltd. Pattern defects detection method and apparatus
JPS6180222A (en) * 1984-09-28 1986-04-23 Asahi Glass Co Ltd Method and apparatus for adjusting spectacles
WO1986003866A1 (en) * 1984-12-14 1986-07-03 Sten Hugo Nils Ahlbom Image processing device
US4876728A (en) * 1985-06-04 1989-10-24 Adept Technology, Inc. Vision system for distinguishing touching parts
US4831580A (en) * 1985-07-12 1989-05-16 Nippon Electric Industry Co., Ltd. Program generator
US4617619A (en) * 1985-10-02 1986-10-14 American Sterilizer Company Reflector for multiple source lighting fixture
US4742551A (en) * 1985-10-07 1988-05-03 Fairchild Camera & Instrument Corporation Multistatistics gatherer
US4706168A (en) * 1985-11-15 1987-11-10 View Engineering, Inc. Systems and methods for illuminating objects for vision systems
US4860375A (en) * 1986-03-10 1989-08-22 Environmental Research Inst. Of Michigan High speed cellular processing system
US4728195A (en) * 1986-03-19 1988-03-01 Cognex Corporation Method for imaging printed circuit board component leads
GB8608431D0 (en) * 1986-04-07 1986-05-14 Crosfield Electronics Ltd Monitoring digital image processing equipment
US4783828A (en) * 1986-06-02 1988-11-08 Honeywell Inc. Two-dimensional object recognition using chain codes, histogram normalization and trellis algorithm
US4771469A (en) * 1986-06-30 1988-09-13 Honeywell Inc. Means and method of representing an object shape by hierarchical boundary decomposition
US4783826A (en) * 1986-08-18 1988-11-08 The Gerber Scientific Company, Inc. Pattern inspection system
US4724330A (en) * 1986-09-24 1988-02-09 Xerox Corporation Self aligning raster input scanner
US4955062A (en) * 1986-12-10 1990-09-04 Canon Kabushiki Kaisha Pattern detecting method and apparatus
US4972359A (en) * 1987-04-03 1990-11-20 Cognex Corporation Digital image processing system
US4764870A (en) * 1987-04-09 1988-08-16 R.A.P.I.D., Inc. System and method for remote presentation of diagnostic image information
US4982438A (en) * 1987-06-02 1991-01-01 Hitachi, Ltd. Apparatus and method for recognizing three-dimensional shape of object
CA1318977C (en) * 1987-07-22 1993-06-08 Kazuhito Hori Image recognition system
JPH07120385B2 (en) * 1987-07-24 1995-12-20 シャープ株式会社 Optical reading method
JP2630605B2 (en) * 1987-07-29 1997-07-16 三菱電機株式会社 Curved surface creation method
US4903218A (en) * 1987-08-13 1990-02-20 Digital Equipment Corporation Console emulation for a graphics workstation
US5119435A (en) * 1987-09-21 1992-06-02 Kulicke And Soffa Industries, Inc. Pattern recognition apparatus and method
US4907169A (en) * 1987-09-30 1990-03-06 International Technical Associates Adaptive tracking vision and guidance system
US5081656A (en) * 1987-10-30 1992-01-14 Four Pi Systems Corporation Automated laminography system for inspection of electronics
US5287449A (en) * 1987-11-06 1994-02-15 Hitachi, Ltd. Automatic program generation method with a visual data structure display
JPH01160158A (en) * 1987-12-17 1989-06-23 Murata Mach Ltd Mechanical control system at remote location
JP2622573B2 (en) * 1988-01-27 1997-06-18 キヤノン株式会社 Mark detection apparatus and method
JP2739130B2 (en) * 1988-05-12 1998-04-08 株式会社鷹山 Image processing method
JP2541631B2 (en) * 1988-07-26 1996-10-09 ファナック株式会社 CNC remote diagnosis method
US5046190A (en) * 1988-09-06 1991-09-03 Allen-Bradley Company, Inc. Pipeline image processor
US4975972A (en) * 1988-10-18 1990-12-04 At&T Bell Laboratories Method and apparatus for surface inspection
US5054096A (en) * 1988-10-24 1991-10-01 Empire Blue Cross/Blue Shield Method and apparatus for converting documents into electronic data for transaction processing
US4876457A (en) * 1988-10-31 1989-10-24 American Telephone And Telegraph Company Method and apparatus for differentiating a planar textured surface from a surrounding background
US4932065A (en) * 1988-11-16 1990-06-05 Ncr Corporation Universal character segmentation scheme for multifont OCR images
NL8803112A (en) * 1988-12-19 1990-07-16 Elbicon Nv METHOD AND APPARATUS FOR SORTING A FLOW OF ARTICLES DEPENDING ON OPTICAL PROPERTIES OF THE ARTICLES.
IL89484A (en) * 1989-03-03 1992-08-18 Nct Ltd Numerical Control Tech System for automatic finishing of machined parts
EP0385009A1 (en) 1989-03-03 1990-09-05 Hewlett-Packard Limited Apparatus and method for use in image processing
US5081689A (en) * 1989-03-27 1992-01-14 Hughes Aircraft Company Apparatus and method for extracting edges and lines
US5153925A (en) * 1989-04-27 1992-10-06 Canon Kabushiki Kaisha Image processing apparatus
US5060276A (en) * 1989-05-31 1991-10-22 At&T Bell Laboratories Technique for object orientation detection using a feed-forward neural network
US5253309A (en) * 1989-06-23 1993-10-12 Harmonic Lightwaves, Inc. Optical distribution of analog and digital signals using optical modulators with complementary outputs
DE3923449A1 (en) * 1989-07-15 1991-01-24 Philips Patentverwaltung METHOD FOR DETERMINING EDGES IN IMAGES
US5432525A (en) 1989-07-26 1995-07-11 Hitachi, Ltd. Multimedia telemeeting terminal device, terminal device system and manipulation method thereof
US5063608A (en) * 1989-11-03 1991-11-05 Datacube Inc. Adaptive zonal coder
JP3092809B2 (en) * 1989-12-21 2000-09-25 株式会社日立製作所 Inspection method and inspection apparatus having automatic creation function of inspection program data
US5164994A (en) * 1989-12-21 1992-11-17 Hughes Aircraft Company Solder joint locator
JPH03210679A (en) * 1990-01-12 1991-09-13 Hiyuutec:Kk Method and device for pattern matching
US5271068A (en) * 1990-03-15 1993-12-14 Sharp Kabushiki Kaisha Character recognition device which divides a single character region into subregions to obtain a character code
US5151951A (en) * 1990-03-15 1992-09-29 Sharp Kabushiki Kaisha Character recognition device which divides a single character region into subregions to obtain a character code
US5495424A (en) 1990-04-18 1996-02-27 Matsushita Electric Industrial Co., Ltd. Method and apparatus for inspecting solder portions
US4959898A (en) * 1990-05-22 1990-10-02 Emhart Industries, Inc. Surface mount machine with lead coplanarity verifier
JP2690603B2 (en) 1990-05-30 1997-12-10 ファナック株式会社 Vision sensor calibration method
US5243607A (en) * 1990-06-25 1993-09-07 The Johns Hopkins University Method and apparatus for fault tolerance
JPH0475183A (en) * 1990-07-17 1992-03-10 Mitsubishi Electric Corp Correlativity detector for image
JP2865827B2 (en) 1990-08-13 1999-03-08 株式会社日立製作所 Data storage method in conference system
US5206820A (en) * 1990-08-31 1993-04-27 At&T Bell Laboratories Metrology system for analyzing panel misregistration in a panel manufacturing process and providing appropriate information for adjusting panel manufacturing processes
JP2591292B2 (en) * 1990-09-05 1997-03-19 日本電気株式会社 Image processing device and automatic optical inspection device using it
US5388252A (en) 1990-09-07 1995-02-07 Eastman Kodak Company System for transparent monitoring of processors in a network with display of screen images at a remote station for diagnosis by technical support personnel
US5115309A (en) * 1990-09-10 1992-05-19 At&T Bell Laboratories Method and apparatus for dynamic channel bandwidth allocation among multiple parallel video coders
US5168269A (en) * 1990-11-08 1992-12-01 Norton-Lambert Corp. Mouse driven remote communication system
US5327156A (en) 1990-11-09 1994-07-05 Fuji Photo Film Co., Ltd. Apparatus for processing signals representative of a computer graphics image and a real image including storing processed signals back into internal memory
US5086478A (en) * 1990-12-27 1992-02-04 International Business Machines Corporation Finding fiducials on printed circuit boards to sub pixel accuracy
US5091968A (en) * 1990-12-28 1992-02-25 Ncr Corporation Optical character recognition system and method
US5133022A (en) * 1991-02-06 1992-07-21 Recognition Equipment Incorporated Normalizing correlator for video processing
TW198107B (en) 1991-02-28 1993-01-11 Ibm
JP3175175B2 (en) * 1991-03-01 2001-06-11 ミノルタ株式会社 Focus detection device
US5143436A (en) * 1991-03-06 1992-09-01 The United States Of America As Represented By The United States Department Of Energy Ringlight for use in high radiation
US5265173A (en) * 1991-03-20 1993-11-23 Hughes Aircraft Company Rectilinear object image matcher
CA2072198A1 (en) 1991-06-24 1992-12-25 Scott C. Farrand Remote console emulator for computer system manager
JP2700965B2 (en) 1991-07-04 1998-01-21 ファナック株式会社 Automatic calibration method
DE4222804A1 (en) 1991-07-10 1993-04-01 Raytheon Co Automatic visual tester for electrical and electronic components - performs video scans of different surfaces with unequal intensities of illumination by annular and halogen lamps
US5477138A (en) 1991-07-23 1995-12-19 Vlsi Technology, Inc. Apparatus and method for testing the calibration of a variety of electronic package lead inspection systems
DE69222102T2 (en) 1991-08-02 1998-03-26 Grass Valley Group Operator interface for video editing system for the display and interactive control of video material
US5297238A (en) * 1991-08-30 1994-03-22 Cimetrix Incorporated Robot end-effector terminal control frame (TCF) calibration method and device
US5475766A (en) 1991-09-05 1995-12-12 Kabushiki Kaisha Toshiba Pattern inspection apparatus with corner rounding of reference pattern data
JPH0568243A (en) * 1991-09-09 1993-03-19 Hitachi Ltd Variable length coding controlling system
FR2683340A1 (en) 1991-11-05 1993-05-07 Sgs Thomson Microelectronics CIRCUIT ELEVATEUR IN THE SQUARE OF BINARY NUMBERS.
US5315388A (en) * 1991-11-19 1994-05-24 General Instrument Corporation Multiple serial access memory for use in feedback systems such as motion compensated television
US5159281A (en) * 1991-11-20 1992-10-27 Nsi Partners Digital demodulator using numerical processor to evaluate period measurements
US5145432A (en) * 1991-11-27 1992-09-08 Zenith Electronics Corporation Optical interprogation system for use in constructing flat tension shadow mask CRTS
US5299269A (en) * 1991-12-20 1994-03-29 Eastman Kodak Company Character segmentation using an associative memory for optical character recognition
US5216503A (en) * 1991-12-24 1993-06-01 General Instrument Corporation Statistical multiplexer for a multichannel image compression system
JP3073599B2 (en) 1992-04-22 2000-08-07 本田技研工業株式会社 Image edge detection device
US5594859A (en) 1992-06-03 1997-01-14 Digital Equipment Corporation Graphical user interface for video teleconferencing
GB2270581A (en) 1992-09-15 1994-03-16 Ibm Computer workstation
US5367667A (en) 1992-09-25 1994-11-22 Compaq Computer Corporation System for performing remote computer system diagnostic tests
US5367439A (en) 1992-12-24 1994-11-22 Cognex Corporation System for frontal illumination
US5583956A (en) 1993-01-12 1996-12-10 The Board Of Trustees Of The Leland Stanford Junior University Estimation of skew angle in text image
US5608872A (en) 1993-03-19 1997-03-04 Ncr Corporation System for allowing all remote computers to perform annotation on an image and replicating the annotated image on the respective displays of other comuters
US5481712A (en) 1993-04-06 1996-01-02 Cognex Corporation Method and apparatus for interactively generating a computer program for machine vision analysis of an object
JPH06325181A (en) 1993-05-17 1994-11-25 Mitsubishi Electric Corp Pattern recognizing method
US5455933A (en) 1993-07-14 1995-10-03 Dell Usa, L.P. Circuit and method for remote diagnosis of personal computers
EP0638801B1 (en) 1993-08-12 1998-12-23 International Business Machines Corporation Method of inspecting the array of balls of an integrated circuit module
US5532739A (en) 1993-10-06 1996-07-02 Cognex Corporation Automated optical inspection apparatus
US5640199A (en) 1993-10-06 1997-06-17 Cognex Corporation Automated optical inspection apparatus
CA2113752C (en) 1994-01-19 1999-03-02 Stephen Michael Rooks Inspection system for cross-sectional imaging
US5519840A (en) 1994-01-24 1996-05-21 At&T Corp. Method for implementing approximate data structures using operations on machine words
US5583954A (en) 1994-03-01 1996-12-10 Cognex Corporation Methods and apparatus for fast correlation
US5526050A (en) 1994-03-31 1996-06-11 Cognex Corporation Methods and apparatus for concurrently acquiring video data from multiple video data sources
US5550763A (en) 1994-05-02 1996-08-27 Michael; David J. Using cone shaped search models to locate ball bonds on wire bonded devices
US5613013A (en) 1994-05-13 1997-03-18 Reticula Corporation Glass patterns in image alignment and analysis
US5557410A (en) 1994-05-26 1996-09-17 Lockheed Missiles & Space Company, Inc. Method of calibrating a three-dimensional optical measurement system
US5495537A (en) 1994-06-01 1996-02-27 Cognex Corporation Methods and apparatus for machine vision template matching of images predominantly having generally diagonal and elongate features
US5602937A (en) 1994-06-01 1997-02-11 Cognex Corporation Methods and apparatus for machine vision high accuracy searching
US5640200A (en) 1994-08-31 1997-06-17 Cognex Corporation Golden template comparison using efficient image registration
US5574668A (en) 1995-02-22 1996-11-12 Beaty; Elwin M. Apparatus and method for measuring ball grid arrays
US5566877A (en) 1995-05-01 1996-10-22 Motorola Inc. Method for inspecting a semiconductor device
US5846318A (en) 1997-07-17 1998-12-08 Memc Electric Materials, Inc. Method and system for controlling growth of a silicon crystal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5027419A (en) * 1989-03-31 1991-06-25 Atomic Energy Of Canada Limited Optical images by quadrupole convolution
US5113565A (en) * 1990-07-06 1992-05-19 International Business Machines Corp. Apparatus and method for inspection and alignment of semiconductor chips and conductive lead frames
US5179419A (en) * 1991-11-22 1993-01-12 At&T Bell Laboratories Methods of detecting, classifying and quantifying defects in optical fiber end faces
US5371690A (en) * 1992-01-17 1994-12-06 Cognex Corporation Method and apparatus for inspection of surface mounted devices
US5553859A (en) * 1995-03-22 1996-09-10 Lazer-Tron Corporation Arcade game for sensing and validating objects

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6903726B1 (en) 1999-03-24 2005-06-07 Anjowiggins Papiers Couches Method and system for determining positions on a document
DE19921778A1 (en) * 1999-03-24 2000-10-05 Anitra Medienprojekte Gmbh Process, support for samples and reading device for two-dimensional position determination on surfaces and the associated triggering of program processes
US6384907B1 (en) 1999-07-08 2002-05-07 Bae Systems Plc Optical target and apparatus and method for automatic identification thereof
WO2001033504A1 (en) * 1999-10-29 2001-05-10 Cognex Corporation Method and apparatus for locating objects using universal alignment targets
US7043055B1 (en) 1999-10-29 2006-05-09 Cognex Corporation Method and apparatus for locating objects using universal alignment targets
US6671049B1 (en) 1999-10-29 2003-12-30 Cognex Corporation Article of manufacture bearing a universal alignment target
US7492476B1 (en) 1999-11-23 2009-02-17 Canon Kabushiki Kaisha Image processing apparatus
WO2001039124A2 (en) * 1999-11-23 2001-05-31 Canon Kabushiki Kaisha Image processing apparatus
WO2001039124A3 (en) * 1999-11-23 2002-05-10 Canon Kk Image processing apparatus
US7079679B2 (en) 2000-09-27 2006-07-18 Canon Kabushiki Kaisha Image processing apparatus
US7620234B2 (en) 2000-10-06 2009-11-17 Canon Kabushiki Kaisha Image processing apparatus and method for generating a three-dimensional model of an object from a collection of images of the object recorded at different viewpoints and segmented using semi-automatic segmentation techniques
US7120289B2 (en) 2000-10-27 2006-10-10 Canon Kabushiki Kaisha Image generation method and apparatus
US7545384B2 (en) 2000-10-27 2009-06-09 Canon Kabushiki Kaisha Image generation method and apparatus
US6912490B2 (en) 2000-10-27 2005-06-28 Canon Kabushiki Kaisha Image processing apparatus
US6975326B2 (en) 2001-11-05 2005-12-13 Canon Europa N.V. Image processing apparatus
US7561164B2 (en) 2002-02-28 2009-07-14 Canon Europa N.V. Texture map editing
US7773773B2 (en) 2006-10-18 2010-08-10 Ut-Battelle, Llc Method and system for determining a volume of an object from two-dimensional images
US8401241B2 (en) 2008-10-17 2013-03-19 Honda Motor Co., Ltd. Structure and motion with stereo using lines
WO2010044939A1 (en) * 2008-10-17 2010-04-22 Honda Motor Co., Ltd. Structure and motion with stereo using lines
EP2437495A4 (en) * 2009-05-27 2013-06-12 Aisin Seiki Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus
EP2437495A1 (en) * 2009-05-27 2012-04-04 Aisin Seiki Kabushiki Kaisha Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus
US8605156B2 (en) 2009-05-27 2013-12-10 Aisin Seiki Kabushiki Kaisha Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus
US8855406B2 (en) 2010-09-10 2014-10-07 Honda Motor Co., Ltd. Egomotion using assorted features
CN102288606A (en) * 2011-05-06 2011-12-21 山东农业大学 Pollen viability measuring method based on machine vision
TWI507678B (en) * 2014-04-09 2015-11-11 Inventec Energy Corp Device and method for identifying an object
WO2023141903A1 (en) * 2022-01-27 2023-08-03 Cognex Vision Inspection System (Shanghai) Co., Ltd. Easy line finder based on dynamic time warping method

Also Published As

Publication number Publication date
EP0883857A2 (en) 1998-12-16
JP2001524228A (en) 2001-11-27
WO1998018117A3 (en) 1998-07-02
US6137893A (en) 2000-10-24

Similar Documents

Publication Publication Date Title
EP0883857A2 (en) Machine vision calibration targets and methods of determining their location and orientation in an image
US6501554B1 (en) 3D scanner and method for measuring heights and angles of manufactured parts
JP2835274B2 (en) Image recognition device
US9704232B2 (en) Stereo vision measurement system and method
US10475179B1 (en) Compensating for reference misalignment during inspection of parts
US7804586B2 (en) Method and system for image processing for profiling with uncoded structured light
EP2322899A1 (en) Specimen roughness detecting method, and apparatus for the method
US7171036B1 (en) Method and apparatus for automatic measurement of pad geometry and inspection thereof
CN106017313B (en) Edge detection deviation correction value calculation method, edge detection deviation correction method and device
US6898333B1 (en) Methods and apparatus for determining the orientation of an object in an image
Li et al. Stereo vision based automated solder ball height and substrate coplanarity inspection
US6813377B1 (en) Methods and apparatuses for generating a model of an object from an image of the object
CN116148277B (en) Three-dimensional detection method, device and equipment for defects of transparent body and storage medium
Frobin et al. Automatic Measurement of body surfaces using rasterstereograph
US20040151363A1 (en) Choice of reference markings for enabling fast estimating of the position of an imaging device
CN115375610A (en) Detection method and device, detection equipment and storage medium
Shiau et al. Study of a measurement algorithm and the measurement loss in machine vision metrology
JPH0933227A (en) Discrimination method for three-dimensional shape
CN114037705B (en) Metal fracture fatigue source detection method and system based on moire lines
JPS63229311A (en) Detection of cross-sectional shape
Kainz et al. Estimation of camera intrinsic matrix parameters and its utilization in the extraction of dimensional units
Li Locally Adaptive Stereo Vision Based 3D Visual Reconstruction
JP2004318488A (en) Product inspection method and product inspection device
Leon Determination Of The Coverage Of Shot-peened Surfaces.
JPH0334801B2 (en)

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CA JP KR SG

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

ENP Entry into the national phase

Ref country code: JP

Ref document number: 1998 519430

Kind code of ref document: A

Format of ref document f/p: F

AK Designated states

Kind code of ref document: A3

Designated state(s): CA JP KR SG

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase

Ref document number: 1997910863

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 1997910863

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1997910863

Country of ref document: EP