WO1998018117A2 - Machine vision calibration targets and methods of determining their location and orientation in an image - Google Patents
- Publication number: WO1998018117A2 (PCT/US1997/018268)
- Authority: WIPO (PCT)
- Prior art keywords: image, edge, target, edges, tool
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Definitions
- the invention pertains to machine vision and, more particularly, to calibration targets and methods for determining their location and orientation in an image.
- Machine vision refers to the automated analysis of an image to determine characteristics of objects and other features shown in the image. It is often employed in automated manufacturing lines, where images of components are analyzed to determine placement and alignment prior to assembly. Machine vision is also used for quality assurance. For example, in the pharmaceutical and food packing industries, images of packages are analyzed to ensure that product labels, lot numbers, "freshness" dates, and the like, are properly positioned and legible.
- an object whose image is to be analyzed may include a calibration target.
- the target facilitates determining the orientation and position of the object with respect to other features in the image. It also facilitates correlating coordinate positions in the image with those in the "real world," e.g., coordinate positions of a motion stage or conveyor belt on which the object is placed.
- a calibration target can also be used to facilitate determining the position and orientation of the camera with respect to the real world, as well as to facilitate determining the camera and lens parameters such as pixel size and lens distortion.
- the prior art suggests the use of arrays of dots, bulls-eyes of concentric circles, and parallel stripes as calibration targets. Many of these targets have characteristics that make it difficult to find their centers and orientations. This typically results from lack of clarity when the targets and, particularly, their borders are imaged. It also results from discrepancies in conventional machine vision techniques used to analyze such images.
- the edges of a cross-shaped target may be imprecisely defined in an image, leading a machine vision analysis system to wrongly interpret the location of those edges and, hence, to misjudge the mark's center by a fraction of a pixel or more.
- a localized defect in a camera lens may cause a circular calibration mark to appear as an oval, thereby causing the system to misjudge the image's true aspect ratio.
- An object of this invention is to provide improved calibration targets and methods for machine vision analysis thereof.
- a related object is to provide calibration targets and analysis methods reliable at a wide range of magnifications.
- a further object is to provide such methods as can be readily implemented on conventional digital data processors or other conventional machine vision analysis equipment.
- Yet still another object of the invention is to provide such methods that can rapidly analyze images of calibration targets without undue consumption of resources.
- the foregoing objects are among those attained by the invention, which provides in one aspect a machine vision method for analysis of a calibration target of the type having two or more regions, each having a different "imageable characteristic" (e.g., a different color, contrast, or brightness) from its neighboring region(s).
- Each region has at least two edges - referred to as "adjoining edges" - that are linear and that are directed toward and, optionally, meet at, a reference point (e.g., the center of the target or some other location of interest).
- the method includes generating an image of the target, identifying in the image features corresponding to the adjoining edges, and determining the orientation and/or position of the target from those edges.
- the invention provides a method as described above for analyzing a target of the type that includes four regions, where the adjoining edges of each region are perpendicular to one another, and in which each region in the target has a different imageable characteristic from its edge-wise neighbor.
- the edges of those regions can meet, for example, at the center of the target, as in the case of a four-square checkerboard.
- the invention provides a method as described above for determining an orientation of the target as a function of the angle of the edges identified in the image and for determining the location of the reference point as an intersection of lines fitted to those edges.
- the invention provides a method of determining the orientation of a target in an image by applying a Sobel edge tool to the image to generate a Sobel angle image, and by generating an angle histogram from that angle image.
- the orientation is determined by applying a Hough line tool to the image and determining the predominant angle of the edges identified by that tool.
- one aspect of the invention calls for locating the adjoining edges by applying a caliper vision tool to the image, beginning at an approximate location of the reference point. That approximate location of the reference point can itself be determined by applying a Hough line vision tool to the image in order to find lines approximating the adjoining edges and by determining an intersection of those lines. Alternatively, the approximate location of the reference point can be determined by performing a binary or grey scale correlation to find where a template representing the edges most closely matches the image.
- the approximate location of the reference point is determined by applying a projection vision tool to the image along each of the axes with which the adjoining edges align.
- a first difference operator vision tool and a peak detector vision tool are applied to the output of the projection tool (i.e., to the projection) in order to find the approximate location of the edges.
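The projection, first-difference, and peak-detection sequence can be sketched in plain Python. This is a minimal illustration under assumptions, not the commercial tools named in the text: the 8x8 image, its four grey levels, and the function names are all invented for the example, and the quadrant contrasts are chosen so that neither projection cancels out (a checkerboard of equal black/white areas would project to a flat profile).

```python
def project(image, axis):
    """Map the 2-D image to a 1-D profile by summing pixel values:
    axis 0 sums each column (profile over x), axis 1 each row."""
    if axis == 0:
        return [sum(col) for col in zip(*image)]
    return [sum(row) for row in image]

def first_difference(profile):
    """Discrete first derivative of a 1-D profile."""
    return [b - a for a, b in zip(profile, profile[1:])]

def peak_index(diff):
    """Index of the largest absolute change -- the approximate edge."""
    return max(range(len(diff)), key=lambda i: abs(diff[i]))

# Invented 8x8 target: four square regions with grey levels 1, 9, 5, 7.
img = [[1] * 4 + [9] * 4 for _ in range(4)] + \
      [[5] * 4 + [7] * 4 for _ in range(4)]

# Each projection steps at an adjoining edge; the two peak positions
# together approximate the reference point at the target's center.
x_edge = peak_index(first_difference(project(img, 0)))  # between cols 3 and 4
y_edge = peak_index(first_difference(project(img, 1)))  # between rows 3 and 4
```

Here both peaks land at index 3, placing the approximate reference point between the fourth and fifth columns and rows.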
- the invention has wide application in industry and research applications. It facilitates the calibration of images by permitting accurate determination of target location and orientation, regardless of magnification.
- an object bearing a target can be imaged by multiple cameras during the assembly process, with accurate determinations of location and orientation made from each such image.
- Figures 1A - 1C depict calibration targets according to the invention;
- Figure 1D depicts the effect of rotation on the target depicted in Figure 1B;
- Figure 2 depicts an object according to the invention incorporating a calibration target of the type depicted in Figure 1B;
- Figure 3 depicts a machine vision system according to the invention for determining the reference point and orientation of a calibration target;
- Figures 4 and 5 depict a method according to the invention for interpreting an image of a calibration target to determine a reference point and orientation thereof;
- Figure 6 illustrates the magnification invariance of a target according to the invention.
- Figures 1A - 1C depict calibration targets according to the invention.
- Figure 1A depicts a target 10 according to the invention having three regions 12, 14, 16.
- Each region is bounded by at least two linear edges that are oriented toward a reference location or reference point 18 on the target.
- region 12 is bounded by edges 20, 24;
- region 14 is bounded by edges 20, 22; and
- region 16 is bounded by edges 22, 24.
- the edges are shared by adjoining regions and, hence, are referred to below as “adjoining edges.”
- region 12 shares edge 20 with region 14; region 14 shares edge 22 with region 16; and region 16 shares edge 24 with region 12.
- the reference point 18 is at the center of target 10, though, those skilled in the art will appreciate that the reference point can be positioned elsewhere.
- an "imageable characteristic” is a characteristic of a region as imaged by a machine vision system (e.g., of the type shown in Figure 3) and, particularly, as imaged by an image capture device used by such a system.
- region 12 has the characteristic of being colored black; region 14, white; and region 16, gray.
- imageable characteristics useful with conventional machine vision systems - which typically utilize image capture devices operational in the visual spectrum - include contrast, brightness, and stippling.
- an imageable characteristic is temperature.
- an imageable characteristic is emitted radiation intensity or frequency.
- edges 20, 22, 24 comprise straight linear segments. Those edges are implicitly defined as the borders between regions that, themselves, have different imageable characteristics. Thus, for example, edge 20 is a straight linear segment defined by the border between black region 12 and white region 14. Likewise, edge 24 is defined by the border between black region 12 and gray region 16. Further, edge 22 is defined by the border between white region 14 and grey region 16.
- Figure 1B depicts a calibration target 30 according to the invention having four rectangular (and, more particularly, square) regions 32, 34, 36, 38.
- each region is bounded by at least two linear edges that are oriented toward a reference point 40 at the center of the target.
- region 32 is bounded by edges 42, 44; region 34 is bounded by edges 42, 46; and so forth.
- these edges are shared by adjoining regions.
- region 32 shares edge 42 with region 34, and so forth.
- Each region in target 30 has a different imageable characteristic from its edge-wise neighbor. Hence, regions 32 and 36 are white, while their edge-wise adjoining neighbors 34, 38 are black.
- Figure 1C depicts a calibration target 50 according to the invention having five regions 52, 54, 56, 58, 60, each having two linear edges directed toward a reference point 62.
- the adjoining regions are of differing contrast, thereby, defining edges at their common borders, as illustrated.
- although the edges separating the regions 52 - 60 of target 50 are directed toward the reference point 62, they do not meet at that location.
- no marker or other imageable element is provided at reference point 62.
- Figure 2 depicts an object according to the invention for use in machine vision imaging, detection, and/or manipulation, having a calibration target according to the invention coupled thereto.
- the object is an integrated circuit chip 70 having coupled to the casing thereof a calibration target 72 of the type shown in Figure 1B.
- targets according to the invention can likewise be coupled to the object 70.
- the targets can be coupled to the object by any known means. For example, they can be molded onto, etched into, or printed on the surface of the object.
- decals embodying the targets can be glued, screwed or otherwise affixed to the object
- calibration plates incorporating the targets can be placed on the object and held in place by friction
- the object can include any other objects to which a target can be coupled, such as printed circuit boards, electrical components, mechanical parts, containers, bottles, automotive parts, paper goods, etc
- Figure 3 depicts a machine vision system 80 according to the invention for determining the reference point and orientation of an object 82 having coupled thereto a calibration target 84 according to the invention and, particularly, a four-region target of the type shown in Figure 1B.
- the system 80 includes an image capture device 86 that generates an image of a scene including object 82.
- the device may be responsive to the visual spectrum, e.g., a conventional video camera or scanner, it may also be responsive to emissions (or reflections) in other spectra, e.g., infrared, gamma-ray, etc.
- Digital image data (or pixels) generated by the capturing device 86 represent, in the conventional manner, the image intensity (e.g., contrast, color, brightness) of each point in the field of view of the capturing device.
- That digital image data is transmitted from capturing device 86 via a communications path 88 to an image analysis system 90.
- This can be a conventional digital data processor, or a vision processing system of the type commercially available from the assignee hereof, Cognex Corporation, as programmed in accord with the teachings hereof to determine the reference point and orientation of a target image.
- the image analysis system 90 may have one or more central processing units 92, main memory 94, input-output system 96, and disk drive (or other mass storage device) 98, all of the conventional type.
- the system 90 and, more particularly, central processing unit 92, is configured by programming instructions according to teachings hereof for operation as illustrated in Figure 4 and described below.
- Those skilled in the art will appreciate that, in addition to implementation on a programmable digital data processor, the methods and apparatus taught herein can be implemented in special purpose hardware.
- In Figure 4 there is shown a machine methodology according to the invention for interpreting an image of a target 84 to determine its reference point and orientation.
- the discussion that follows is particularly directed to identifying a four-region target of the type shown in Figure 1B.
- Those skilled in the art will appreciate that these teachings can be readily applied to finding targets according to the invention, as well as to other targets having detectable linear edges that are oriented toward a reference location or reference point on the target, e.g., a prior art cross-shaped target.
- linear edges are referred to as "adjoining edges," regardless of whether they are from calibration targets according to the invention or from prior art calibration targets.
- an image of the target 84 (or of the target 84 and object 82) is generated, e.g., using image capture device 86, and input for machine vision analysis as discussed below.
- the image can be generated real time, retrieved from a storage device (such as storage device 98), or received from any other source.
- the method estimates the orientation of the target in the image using any of many alternative strategies. For example, as shown in step 102, the method determines the orientation by applying a conventional Hough line vision tool that finds the angle of edges discernable in the image. In instances where the target occupies the entire image, those lines will necessarily correspond to the adjoining edges. Where, on the other hand, the target occupies only a portion of the image, extraneous edges (e.g., from other targets) may be evident in the output of that tool. Although those extraneous edges can generally be ignored, in instances where they skew the results, the image can be windowed so that the Hough vision tool is only applied to that portion that contains the target. Once the angles of the lines have been determined by the Hough line tool, the orientation of the image is determined from the predominant ones of those angles. Alternatively, the angle of the image can be determined by taking a histogram of the angles.
- the Hough vision tool used in step 102 may be of the conventional type known and commercially available for finding the angle of lines in an image.
- a preferred such tool is the Cognex Line Finder, commercially available from the Assignee hereof, Cognex Corporation.
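As an illustration of the Hough idea (not the Cognex Line Finder itself), the following minimal sketch votes edge points into (angle, distance) bins and reports the angle whose bin collects the most collinear points. The point set is invented, and bin quantization can smear votes across adjacent angle bins, so the answer is accurate only to within a degree or so.

```python
import math

def hough_predominant_angle(points, n_theta=180):
    """Vote each point into (theta, rho) bins using the normal form
    rho = x*cos(theta) + y*sin(theta); return the winning angle in degrees."""
    acc = {}
    for theta_i in range(n_theta):
        theta = math.pi * theta_i / n_theta
        c, s = math.cos(theta), math.sin(theta)
        for x, y in points:
            key = (theta_i, round(x * c + y * s))
            acc[key] = acc.get(key, 0) + 1
    best_theta_i, _ = max(acc, key=lambda k: acc[k])
    return 180.0 * best_theta_i / n_theta

# Invented edge points lying on the horizontal line y = 5; the line's
# normal points along +y, so the predominant angle is about 90 degrees.
pts = [(x, 5) for x in range(20)]
angle = hough_predominant_angle(pts)
```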
- An alternative to using a Hough vision tool is shown in step 106.
- the illustrated method determines the orientation of the target by applying a Sobel edge tool to the image to find the adjoining edges. Particularly, that tool generates a Sobel angle image that reveals the direction of edges in the image.
- where the target occupies the entire image, the adjoining edges will be the only ones discerned by the Sobel edge tool.
- where the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
- the Sobel edge tool may be of the conventional type known and commercially available for finding lines in an image.
- a preferred such tool is the Cognex Edge Detection tool, commercially available from the Assignee hereof, Cognex Corporation.
- the orientation of the target in the image is determined by generating a histogram of the edge angle information; see step 108. From that histogram, the target orientation can be determined by taking a one-dimensional correlation of that histogram with respect to a template histogram of a target oriented at 0°. Where a Sobel magnitude image is generated, in addition to the Sobel angle image, such a histogram can be generated by counting the number of edges greater than a threshold length at each orientation.
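A sketch of the angle-histogram step, assuming Sobel angle and magnitude values are already in hand (the values below are invented). For a four-region square target the adjoining edges are perpendicular, so this simplified version folds angles modulo 90° and takes the strongest bin, rather than correlating against a 0° template histogram as the text describes.

```python
def orientation_from_angles(angles, magnitudes, threshold, bins=90):
    """Histogram edge angles (folded modulo 90 degrees) for pixels whose
    Sobel magnitude exceeds a threshold; return the predominant angle."""
    hist = [0] * bins
    for a, m in zip(angles, magnitudes):
        if m > threshold:
            hist[(int(a) % 90) * bins // 90] += 1
    return max(range(bins), key=lambda b: hist[b])

# Invented Sobel output for a target rotated 15 degrees: strong edges
# at 15 and 105 degrees plus a few weak noise pixels.
angles = [15] * 40 + [105] * 40 + [37, 62, 81]
mags = [200] * 80 + [10, 10, 10]
estimate = orientation_from_angles(angles, mags, threshold=50)  # -> 15
```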
- the method contemplates obtaining the angle of orientation of the target from the user (or operator). To this end, the user may enter angle orientation information via a keyboard or other input device coupled with digital data processor 90.
- the method determines the location, i.e., coordinates, of the target reference point in the image.
- the method can apply a Hough vision tool, as described above, to find the angle of lines discernable in the image.
- a conventional Hough vision tool determines, in addition to the angle of lines in an image, the distance of each line, e.g., from a central pixel. As above, where the target occupies the entire image, those lines will be the only ones discernable in the image. Where, on the other hand, the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
- the Hough vision tool used in step 104 may be of the conventional type known and commercially available for finding the angle and position of lines in an image.
- a preferred such tool is the Cognex Line Finder, commercially available from the Assignee hereof, Cognex Corporation.
- steps 102 and 110 can be combined, such that a single application of the Hough vision tool provides sufficient information from which to determine both the orientation of the target in the image and its reference point.
- the method can apply a projection vision tool to the image in order to find the position of the lines discernable in the image; see, step 112.
- the projection tool, which maps the two-dimensional image of the target into a one-dimensional image, is applied along the axes defined by the edges in the image.
- the location of the edges can be discerned by finding the peaks in the first derivatives of each of those projections.
- where the target occupies the entire image, those lines will be the only lines discernable in the image.
- where the target occupies only a portion of the image, any extraneous edges can be ignored or windowed out.
- the projection vision tool used in step 112 may be of the conventional type known and commercially available for mapping a two-dimensional image of the target into a one-dimensional image.
- a preferred such tool is that provided with the Cognex Caliper tool commercially available from the Assignee hereof, Cognex Corporation.
- step 114 the method uses the information generated in steps 110 and 112 to compute the location of the reference point, particularly, as the intersection of the lines found in those steps 110 and 112.
- the method can determine the location of the reference point by performing a binary or grey scale correlation on the image; see step 116.
- the method uses, as a template, a pattern matching the expected arrangement of the sought-after edges, to wit, a cross-shaped pattern in the case of a target of the type shown in Figure 1B.
- the use of correlation vision tools for this purpose is well known in the art.
- the template for such an operation is preferably generated artificially, although it can be generated from prior images of similar targets.
- the method can apply a grey-scale image registration using the sum of absolute differences metric between the image and a template; see step 118.
- the method uses, as a template, a pattern matching the expected arrangement of the sought-after edges, to wit, a cross-shaped pattern in the case of a target of the type shown in Figure 1B.
- the template for such an operation is preferably generated artificially, although it can be generated from prior images of similar targets.
- a preferred grey-scale image registration tool is disclosed in United States Patent No. 5,548,326, the teachings of which are incorporated herein by reference.
- steps 110 - 118 can be used to determine the approximate location of the reference point of the target in the image
- the method utilizes optional steps 120 and 122 to refine that estimate. These two steps are invoked one or more times (if at all) in order to make that refinement.
- step 120 the method applies a conventional caliper vision tool to find points in the image that define the adjoining edges of the regions.
- step 120 the method applies calipers at fifty points along each edge (though those skilled in the art will appreciate that other numbers of points can be used), beginning with points closest to the estimate of the reference point, as discerned in steps 110 - 118.
- the calipers are preferably applied a small distance away from the actual estimate of the reference point to avoid skewing the analysis due to possible misprinting of the target at that point, a missing pattern at that point (e.g., Figure 1C), or a too-high spatial frequency at that point (e.g., Figure 1B).
- the method then fits a line to the points found along each edge by the caliper tool, preferably, using a conventional least squares technique.
- the method computes a refined location of the reference point as the intersection of the lines identified in step 120.
- the reference point is computed using conventional least squares techniques.
- the method utilizes the same number of points on either side of (and closest to) the reference point for purposes of fitting each line. This minimizes the bias otherwise introduced by a conventional edge detection technique in finding edges that are defined only by dark-to-light (or light-to-dark) transitions.
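The fit-and-intersect arithmetic of steps 120 and 122 can be sketched as follows. The caliper points are invented, and the least-squares fit assumes neither edge is near vertical; a real implementation would switch to fitting x as a function of y in that case.

```python
def fit_line(points):
    """Least-squares fit of y = m*x + b to a list of (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def intersect(line1, line2):
    """Intersection of two lines given in slope/intercept form."""
    (m1, b1), (m2, b2) = line1, line2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Invented caliper points along two adjoining edges crossing at (0, 1):
edge1 = [(x, 0.5 * x + 1.0) for x in range(-5, 6)]
edge2 = [(x, -2.0 * x + 1.0) for x in range(-5, 6)]
reference = intersect(fit_line(edge1), fit_line(edge2))  # -> (0.0, 1.0)
```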
- Calibration targets of the type shown in Figure 1B are advantageously processed by a method according to the invention insofar as they further minimize bias otherwise introduced by conventional edge detection techniques.
- bias is reduced by the fact that "opposing" adjoining edges (i.e., edges that oppose one another across the reference point) define straight linear segments that change polarity across the reference point. That is, those segments are defined by regions that transition - preferably, equally in magnitude - from light-to-dark on one side of the reference point, and from dark-to-light on the other side. This is true for all "symmetric" calibration targets according to the invention, i.e., targets in which opposing edges define straight linear segments that are of opposite polarity on either side of the reference point.
- the method specifically applies the caliper tool at points along the lines found in the previous iteration. Preferably, these points are at every pixel in the image that lies along the line, though those skilled in the art will appreciate that fewer points can be used.
- the calipers can be applied orthogonal to the lines, though, in the preferred embodiment, they are applied along the grid defined by the image pixels. The caliper range decreases with each subsequent invocation of step 120.
- the method continues applying the calipers, beginning at the estimated center point until one of the four following situations occurs: no edge is found by the caliper applied at the sample point; more than one edge is found by the caliper and the highest scoring edge is less than twice the score of the second highest scoring edge (this 2X comes from the CONFUSION_THRESHOLD); the distance between a computed edge point and the nominal line (computed from the previous invocation of step 120) is larger than a threshold (which threshold decreases with each subsequent invocation of step 120); or, the caliper extends outside of the image.
- the method then fits a line to the points found along each edge by the caliper tool, preferably, using a conventional least squares technique.
- the method computes a refined location of the reference point as the intersection of the lines identified in step 120.
- the reference point is computed using conventional least squares techniques.
- step 150 the method calls for generating an image of a target 84 and, particularly, of a target according to the invention having two or more regions, each region being defined by at least two linear edges that are directed toward a reference point, and having at least one of the regions having a different imageable characteristic from an adjacent region.
- Step 150 can be effected in the manner described in connection with step 100 of Figure 4, or equivalents thereof.
- in step 152, the method analyzes the image to generate an estimate of the orientation of the target in the image.
- Step 152 can be effected in the manner described in connection with steps 102 - 108 of Figure 4, or equivalents thereof.
- step 154 the method analyzes the image to generate an estimate of the location of the target's reference point.
- Step 154 can be effected in the manner described in connection with steps 110 - 118 of Figure 4, or equivalents thereof.
- step 156 the method analyzes the image to refine its estimates of the location of the reference point in the image and the orientation of the target in the image.
- Step 156 can be effected in the manner described in connection with steps 120 - 122 of Figure 4, or equivalents thereof.
- Calibration targets and methods for analysis according to the invention are advantageous over prior art targets and methods insofar as they are magnification invariant.
- methods according to the invention ensure reliance on features (to wit, regions) that retain the same imageable appearance regardless of magnification. This is in contrast to prior art targets and methods, which rely on individual lines (or dots) to define calibrating features. As noted above, the imaging appearances of such lines and dots change with varying degrees of magnification. Even the prior art methods that analyze checkerboard targets rely on analysis of corners, which are not magnification invariant.
- The magnification invariance of targets and methods according to the present invention is illustrated in Figures 6A - 6C.
- In Figure 6A there is shown an imaging setup wherein camera 200 images a target 202 (on object 204) from a height x.
- An image generated by camera 200 is displayed on monitor 206 of workstation 208.
- In Figure 6B there is shown an identical imaging setup, except insofar as a camera (of identical magnification) images a target (of identical size) from a greater height, x'.
- Figure 6C shows an identical imaging setup, except insofar as a camera (again, of identical magnification) images a target (again, of identical size) from a still greater height, x".
- Still another advantage of calibration targets and methods according to the invention is that they permit angular orientation to be determined throughout a full 360° range.
- the relative positions of the regions can be used to determine the overall orientation of the target, i.e., whether it is rotated 0°, 90°, 180°, or 270°. This information can be combined with a determination of relative orientation made by analysis of the adjoining edges as discussed above to determine the precise position of the target.
- This chapter describes the Edge Detection tool, which includes edge detection and peak detection.
- Edge detection takes an input image and produces two output images: an image of the edge magnitude of each input pixel and an image of the edge angle of each input pixel.
- the information produced by edge detection can be used to locate objects within an image.
- Edge pixel information can be used to detect the rotation of an object.
- Peak detection is useful in any application in which knowledge of the local maximum values in a two-dimensional space is useful. Peak detection takes an input image and produces an output image containing only those pixels in the input image with higher values than neighboring pixels.
- a typical input image to peak detection is an edge magnitude image; the output image contains only the highest magnitude edge pixels.
- the edge detection function can optionally use peak detection to postprocess the edge detection results so that only the strongest edges remain
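A minimal sketch of peak detection on an edge-magnitude image. For simplicity it keeps only pixels strictly greater than all eight neighbors (a slightly stricter rule than the symmetric-peak definition, which allows equality) and, like the tool, treats border pixels as non-peaks. The image is invented.

```python
def detect_peaks(image):
    """Zero every pixel except those strictly greater than all eight
    neighbors; border pixels are always zeroed."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = image[y][x]
            if all(v > image[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0)):
                out[y][x] = v
    return out

# Invented edge-magnitude image with one dominant peak at (2, 2):
mag = [[0, 0, 0, 0, 0],
       [0, 3, 4, 3, 0],
       [0, 4, 9, 4, 0],
       [0, 3, 4, 3, 0],
       [0, 0, 0, 0, 0]]
peaks = detect_peaks(mag)  # only the 9 at row 2, column 2 survives
```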
- Edge Detection An Overview of Edge Detection describes the goals of edge detection, defines an edge pixel, and describes how the Edge Detection tool finds edge pixels. This section also explains the two properties of a pixel that are calculated by the Edge Detection tool, edge magnitude and angle, and ends with a sample application.
- This section contains a discussion of the compression tables that affect the amount of memory used by the edge detector.
- Edge Detection Enumerations and Data Structures describes the data structures and enumerations that the edge detection functions use. Types and data structures that support peak detection are discussed in Peak Detection Enumeration Types and Data Structures below.
- Edge Detection Functions describes the functions that implement edge detection
- Peak Detection describes the interface to peak detection.
- Peak Detection Enumeration Types and Data Structures describes the peak detection enumerations and data structures.
- Peak Detection Functions describes the peak detection functions
- Sobel operators: Two 3x3 operators that the Edge Detection tool uses to locate edge pixels in an image.
- horizontal edge component: Value returned by the horizontal Sobel operator when it is applied to a pixel in an image.
- angle: The Edge Detection tool defines the angle of a pixel as arctan(v/h), where v is the vertical edge component and h is the horizontal edge component of the pixel.
- peak pixel: Pixel with a value greater than or equal to some or all of its neighboring pixels' values.
- peak detection: Finding some or all peak pixels in an image.
- symmetric peak: Peak whose pixel value is greater than or equal to the values of each of its neighbors.
- asymmetric peak: Peak whose pixel value is greater than the values of its left and lower neighbors and greater than or equal to the values of its right and upper neighbors.
- plateau: A region of contiguous pixels of the same value.
- the Edge Detection tool finds edge pixels in an image. This overview of edge detection contains the following descriptions:
- the Edge Detection tool locates edges and determines each edge's angle and magnitude.
- Figure 66: A figure with an edge.
- an edge can be further refined as the directed border between grey areas of an image, where direction is defined as a vector normal to the edge.
- the triangle in Figure 67 has three directional edges, represented by the three vectors. The angle of the edge is the counterclockwise angle of the vector normal to the edge, with respect to the horizontal axis of the image. See the section Angle Image on page 181 for a discussion of edge angle.
- Along with an angle, an edge has a magnitude, which reflects the amount of the difference between grey levels on either side of the edge. As the difference in grey levels increases, the magnitude increases. See the section The Magnitude Image on page 178 for a discussion of edge magnitude.
- the triangle in Figure 68 has the same three edges as the triangle in Figure 67; however, these edges are of lower magnitude. This lower magnitude is illustrated by the shortened length of the three direction vectors.
- Edges are actually located pixel by pixel. Figure 69 contains a magnified view of a portion of an edge in an image. Each cell in the figure represents the grey level of a single pixel. Notice that on this "microscopic" level there are many edges between pixels in which the grey levels on either side of the edge differ by a few percentage points.
- the Edge Detection tool locates edges in an image by identifying each pixel that has a different value from one of its neighboring pixels. It calculates the angle and magnitude of each edge pixel. It also provides a means of classifying edges, so that low-magnitude edges are not reported.
- a pixel is an edge pixel if it has a different value from at least one of its eight neighboring pixels and is not on the border of an image.
- the four shaded pixels are edge pixels
- Figure 71 contains a grid representing grey levels in an image, with the highest magnitude edge pixels shaded. Notice that border pixels (pixels along the edge of the image) are not edge pixels.
- the Edge Detection tool finds edge pixels using the Sobel 3x3 neighborhood operators. During edge detection the Sobel operators are applied to each pixel in an input image. This operation produces two values for each pixel: one value represents the vertical edge component for the pixel, the other represents the horizontal edge component for the pixel. These two values are then used to compute the edge magnitude and edge angle of the pixel.
- Sobel edge detection uses two 3x3 neighborhood operators to locate edge pixels in an image.
- the horizontal Sobel operator detects the horizontal (x) edge component for a pixel.
- the vertical Sobel operator detects the vertical (y) edge component for a pixel.
- Figure 72 shows the Sobel operators.
- the Sobel operator works only on pixels with eight neighboring pixels. When the Sobel operator is applied to a pixel on the border of an image, the result is defined to be 0 because there are not enough neighboring pixels to calculate an edge value.
- the horizontal and vertical Sobel operators are 3x3 linear operators. These operators compute a value for a pixel (x, y) in the following way:
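The computation above can be sketched as follows. The manual gives the exact coefficients only in Figure 72; the kernels below are the conventional Sobel coefficients and the helper name `sobel_component` is hypothetical, not part of the Cognex API:

```python
# Conventional 3x3 Sobel kernels (assumed; the manual's Figure 72 shows
# the tool's exact coefficients).
SOBEL_H = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]    # horizontal (x) edge component
SOBEL_V = [[ 1,  2,  1],
           [ 0,  0,  0],
           [-1, -2, -1]]  # vertical (y) edge component

def sobel_component(img, x, y, kernel):
    """Apply a 3x3 operator at pixel (x, y) of a row-major grey image.
    Border pixels lack a full neighborhood, so the result is defined to be 0."""
    h, w = len(img), len(img[0])
    if x == 0 or y == 0 or x == w - 1 or y == h - 1:
        return 0
    return sum(kernel[j][i] * img[y - 1 + j][x - 1 + i]
               for j in range(3) for i in range(3))
```

A vertical step edge yields a strong horizontal component and a zero vertical component, which matches the component definitions given above.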
- Figure 74 shows all the vertical and horizontal edge component values for a small image
- the upper grid is the image; the values represent the grey levels of each pixel
- the grid to the lower left of the image contains horizontal edge component values for the image.
- each cell in the grid contains the result of applying the horizontal Sobel operator to the corresponding input image pixel.
- the grid to the lower right contains the vertical edge components for the image. It is computed using the vertical Sobel operator.
- the horizontal and vertical edge component values computed with the Sobel operators are not returned by the edge detection functions. They are used to compute the edge magnitude and edge angle of each pixel in the image.
- Edge magnitudes are stored in an output image that shows the edge magnitude of each corresponding pixel in the input image.
- Edge angles are stored in another output image that shows the edge angle of each pixel. You can choose which output images to create: magnitude, angle, or both.
- the edge magnitude for a given pixel is a function of the horizontal and vertical edge components of the pixel. If x is the horizontal edge component value and y is the vertical edge component value for some pixel, then the edge magnitude M for that pixel is defined as follows: M = √(x² + y²).
- the Edge Detection tool uses the formula described above to compute edge magnitude and then scales the magnitude upward by approximately 16%.
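The magnitude computation can be sketched as follows; the 1.16 factor is an assumption, since the text says only "approximately 16%":

```python
import math

def edge_magnitude(h, v, scale=1.16):
    """Edge magnitude M = sqrt(h^2 + v^2) from the horizontal and vertical
    edge components, scaled upward by ~16% (scale factor assumed)."""
    return math.sqrt(h * h + v * v) * scale
```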
- the size and depth of the magnitude image are the same as those of the input image. Since all border pixels have horizontal and vertical edge component values of 0, the border of the magnitude image contains all zeros. When you display an edge magnitude image, higher magnitude edges are brighter.
- edge detection can return the number of edge pixels in the input image and the sum of all edge magnitude values. By dividing the sum of all edge magnitude values by the size of the image, you can compute the average edge magnitude of an image. If you have two images of the same scene, you can determine which image is in sharper focus by comparing the average edge magnitudes of the two images; the image with the higher average magnitude is in sharper focus.
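The focus comparison described above can be sketched as follows (the function names are illustrative, not part of the Cognex API):

```python
def average_edge_magnitude(magnitudes, width, height):
    """Average edge magnitude: sum of all edge magnitude values divided
    by the size of the image."""
    return sum(magnitudes) / (width * height)

def sharper(mags_a, mags_b, width, height):
    """Return 'a' or 'b' for whichever image of the same scene has the
    higher average edge magnitude, i.e. is in sharper focus."""
    a = average_edge_magnitude(mags_a, width, height)
    b = average_edge_magnitude(mags_b, width, height)
    return 'a' if a >= b else 'b'
```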
- Figure 76 contains an image, the image's horizontal and vertical edge components computed with the Sobel operators, and the output image containing the edge magnitude values for each pixel.
- the edge detection functions produce an image containing the angle of each edge pixel. If x is the horizontal edge component value and y is the vertical edge component value, the edge angle is the counterclockwise angle between the horizontal axis and the magnitude vector.
- Figure 77 shows the geometric relationship between the edge angle, the edge magnitude, and the two edge components.
- M is the magnitude vector and ⁇ is the edge angle
- Edge angles are stored in the angle image in binary form. They are represented as fixed-point numbers with the decimal point to the left of the most significant bit and are interpreted as fractions of 360°. You can choose the maximum number of bits available to express an angle; this value must be less than or equal to 8. If n is the maximum number of bits used to express angle, and x is the binary representation of an angle, you can convert x to degree notation by using the following formula: angle(x) = x × 360° / 2ⁿ.
- with a 6-bit representation, angles can be expressed to the nearest 5.625°; with a 7-bit representation, an angle can be expressed to the nearest 2.8°; with an 8-bit representation, an angle can be expressed to the nearest 1.4°.
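The fixed-point-to-degrees conversion can be sketched as follows, assuming the formula angle(x) = x · 360°/2ⁿ that the quoted resolutions imply:

```python
def angle_degrees(x, n):
    """Convert an n-bit fixed-point angle x, interpreted as a fraction
    of 360 degrees, to degree notation."""
    return x * 360.0 / (1 << n)
```

With 6 bits the smallest step is 5.625°, with 8 bits it is about 1.4°, matching the resolutions quoted above.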
- the edge angle is computed using the arc tangent function. If x is the horizontal edge component and y is the vertical edge component for a pixel, the edge angle for that pixel is calculated as shown in Table 3.
- Figure 78 contains six rows. Each row contains (from left to right) an image with the central pixel shaded, the vertical and horizontal edge component values of the shaded pixel, the image with a vector superimposed showing the edge angle of the central pixel, and the formula used to compute the edge angle. Notice that the equations defining the edge angle value are designed so that the vector always points to the brighter side of the edge.
- an edge angle n that is greater than 180° maps to edge angle (n − 180°). For instance, 300° maps to 120°, and 270° maps to 90°.
- the binary representation of an angle in the range 0° to 180° is similar to the representation of an angle in the range 0° to 360°. If n is the maximum number of bits used to express angle and x is the binary representation of an angle in the range 0° to 180°, you can convert x to degree notation by applying the following formula: angle(x) = x × 180° / 2ⁿ.
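Both the folding rule and the 0°-180° conversion can be sketched as:

```python
def fold_to_180(angle):
    """Map an edge angle n greater than 180 degrees to n - 180 degrees
    (edge direction ignoring polarity)."""
    return angle - 180.0 if angle > 180.0 else angle

def angle_degrees_180(x, n):
    """Convert an n-bit fixed-point angle, interpreted as a fraction
    of 180 degrees, to degree notation."""
    return x * 180.0 / (1 << n)
```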
- the edge detection function uses the pixel map to preprocess your input image before it calculates the horizontal and vertical edge components.
- Pixel Mapping For a description of pixel mapping see Chapter 3, Pixel Mapping.
- although edge angles are defined for all pixels in an image, they are significant only at the pixels with the greatest edge magnitudes. There are many edges between pixels where the grey levels on either side of the edge differ by only a few percent. These insignificant edge pixels, due to frame grabber noise and random fluctuations in pixel values, can clutter your edge magnitude image and make it difficult to evaluate the data in that image. Also, edge magnitude values usually increase gradually in a "smear" as they approach an edge and then decrease as the edge is crossed. You might want to sharpen the magnitude image by filtering out some edge pixels that are near, but not at, an edge.
- the Edge Detection tool supplies three postprocessing methods. Two of these methods, setup and run-time thresholding, set the lowest edge magnitude pixels to 0.
- the third method, peak detection, takes an edge magnitude image and processes it so that, for any small pixel neighborhood, only the highest edge magnitude pixels remain and the other pixels are set to 0. See the section Peak Detection on an Edge Magnitude Image on page 229 for a complete discussion of this method.
- edge angles corresponding to zero edge magnitude pixels are also set to 0 unless you have specified run-time thresholding but not peak detection. In this case, edge angles corresponding to zero edge magnitude pixels can be nonzero, but should be considered undefined.
- n represents a percentage of the highest possible edge magnitude value.
- any magnitude values in the first n percent of all possible values are forced to 0. For example, if you provide 6 bits to express magnitude values and specify that the lowest 5 percent of magnitude values be forced to 0, then any magnitude below 3 is forced to 0, since 5 percent of 63 (truncated) is 3.
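The thresholding rule can be sketched as follows. The truncation and the strict less-than comparison follow the worked example above ("any magnitude below 3 is forced to 0"); whether the real tool uses `<` or `<=` at the boundary is an assumption:

```python
def magnitude_threshold(bits, percent):
    """The lowest `percent` of the possible magnitude range, truncated.
    With 6 bits and 5 percent, 5% of 63 truncates to 3."""
    max_value = (1 << bits) - 1
    return max_value * percent // 100

def apply_threshold(magnitudes, bits, percent):
    """Force any magnitude value in the first `percent` of all possible
    values to 0; leave the rest unchanged."""
    t = magnitude_threshold(bits, percent)
    return [0 if m < t else m for m in magnitudes]
```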
- Peak detection is a method of filtering an image so that only peak pixels remain and other pixels are set to 0. Peak pixels have a value greater than or equal to some or all of their neighboring pixels' values.
- the edge detection function does peak detection postprocessing after it has done setup and run-time thresholding. There will be a 2-pixel-wide border of zeros in a peak-detected magnitude image.
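A minimal sketch of peak detection using the "symmetric peak" definition from the glossary; the real tool is parameterizable, so this fixed 8-neighbor rule is an assumption:

```python
def symmetric_peaks(img):
    """Keep only symmetric peaks (pixels >= all eight neighbors); set all
    other pixels, including the border, to 0. Sketch of one variant of
    peak detection, not the tool's full parameterized behavior."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y][x]
            neighbors = [img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            if all(v >= nb for nb in neighbors):
                out[y][x] = v
    return out
```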
- Peak Detection on an Edge Magnitude Image on page 229 discusses, in detail, the benefits of using peak detection on an edge magnitude image. This section also describes the proper way to parameterize peak detection so that it filters edge magnitude pixels correctly. However, you do not need a detailed knowledge of peak detection to use it with edge detection. Suggested peak detection parameters are supplied in the section Data Structure cip_edge_params on page 199.
- Figure 80 contains an image, its edge magnitude image, and the image that results from peak detection.
- the edge magnitude image contains a smeared edge, which is sharpened by peak detection
- the angle histogram supplies a useful "signature" of an object's shape. You can use this signature to gauge similarity among objects, to detect rotations of an object, and to detect flaws in an object.
- Figure 81 contains four shapes; each shape is paired with its angle histogram. Note that the square and the plus sign have identical angle histograms.
- the angle histogram of the circle is flat because there is no dominant angle along the edge of a circle.
- Figure 82 demonstrates the effect of rotation on the angle histograms of a plus sign. The histogram shifts as the plus sign rotates.
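An angle histogram of the kind described can be sketched as follows; the binning scheme and the nonzero-magnitude filter are assumptions:

```python
from collections import Counter

def angle_histogram(angle_img, magnitude_img, bins=36):
    """Histogram of edge angles (0-360 degrees), counting only pixels with
    nonzero edge magnitude. A circle yields a flat histogram, and rotating
    an object shifts the histogram, as described in the text."""
    counts = Counter()
    for angles, mags in zip(angle_img, magnitude_img):
        for a, m in zip(angles, mags):
            if m > 0:
                counts[int(a * bins / 360.0) % bins] += 1
    return [counts[b] for b in range(bins)]
```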
- the Caliper Tool is a tool for locating edges and edge pairs in an image.
- the edge of an object in an image is a change in grey value from dark to light or light to dark. This change may span several pixels.
- the Caliper Tool provides methods for ignoring edges that are caused by noise or that are not of interest for a particular application.
- the Caliper Tool is modeled on the mechanical engineer's caliper, a precision device for measuring distances. You specify the separation between the caliper "jaws," and the Caliper Tool searches an area you specify for edge pairs separated by that distance. You can also search for individual edges when you know their approximate location in an image. This is similar to using a caliper to measure depth.
- a typical use for the Caliper Tool is part inspection on an assembly line. For example, in integrated circuit lead inspection, a part is moved in front of a camera to an approximately known location (the expected location). You use the Caliper Tool to determine the exact location of the left edge of the part by searching for a single light-to-dark edge at the expected location (see Figure 103). The edge that is closest to the expected location is the left edge of the part.
- the Caliper Tool can be used to measure lead width and lead spacing on an integrated circuit.
- the Caliper Tool uses projection to map a two-dimensional window of an image (the caliper window) into a one-dimensional image. Projection collapses an image by summing the pixels in the direction of the projection, which tends to amplify edges in that direction.
- Figure 105 shows an image, a caliper window, the one-dimensional image that results from projection, and a graph of the pixel grey values in the one-dimensional image.
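Projection of a caliper window can be sketched as (here the projection direction is down each column of the window):

```python
def project(window):
    """Collapse a 2-D caliper window into a 1-D image by summing the
    pixels along the projection direction (down each column)."""
    return [sum(col) for col in zip(*window)]
```

Summing along an edge reinforces it while averaging out noise, which is why projection tends to amplify edges in the projection direction.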
- Pixel mapping can be used to filter an image, attenuate or amplify a range of grey values, and otherwise map pixel values. See Chapter 3, Pixel Mapping, in the Image Processing manual for a complete description of pixel mapping.
- an edge filter is applied to the one-dimensional image to further enhance edge information and to smooth the image by eliminating minor grey value changes between neighboring pixels that are most likely caused by noise.
- the edge filter produces a filtered one-dimensional image.
- Figure 106 shows the one-dimensional image from Figure 105, an image generated by the edge filter, and a graph of that image.
- Edge detection is a method of evaluating peaks in the filtered image and ignoring edges that are not of interest, such as edges representing noise in the image.
- Edge detection applies geometric constraints that you specify to each edge or edge pair found in the image.
- Applying geometric constraints provides a way to limit the number of edges to evaluate by assigning a score to each edge or edge pair based on such factors as the expected location of the edge or edge pair, the distance between edges in an edge pair, and the minimum grey level difference between pixels on each side of the edge.
- the Caliper Tool returns as many results as you have specified in the cclp_params data structure and computes the geometric average of the scores of each geometric constraint that you specify. For edges or edge pairs whose score equals or exceeds the accept threshold, the Caliper Tool returns information such as the score and location of the edge or edge pair and the measured distance between edges in an edge pair.
- the controlling variables of an edge or edge pair search define a caliper and include such information as the dimensions of the search window and the edge detection scoring parameters
- the type of the data structure that defines a caliper is cclp_caliper.
- the caliper window is the portion of an image in which the Caliper Tool searches for edges. It is defined by the search length and projection length, which are the width and height, respectively, of the window in an image that will be projected into a one-dimensional image.
- the caliper window is illustrated in Figure 107.
- the pixel in the center of the caliper window is called the application point. This is the point in the run-time image at which you want to locate edges.
- edge information is needed at more than one angle in an image.
- the caliper window can be oriented at any angle to locate edges in an image. Angles increase in the clockwise direction from the horizontal axis.
- Figure 108 shows how changing the caliper window angle affects the projection and edge search directions at 90° rotations.
- the Caliper Tool runs faster for 90° rotations than it does for arbitrary rotations.
- Figure 109 shows a -15° caliper window with window rotation disabled.
- the Caliper Tool creates a "skewed" window and projects it along the angle of skew into a one- dimensional image.
- the pixels in the source image are summed along the angle of skew, as shown in Figure 110.
- Each pixel in the two-dimensional image contributes to one pixel in the one-dimensional image.
- the skewed window is transformed into a rectangular window before projection.
- the transformation is performed by clp_transform(), which uses several neighboring pixels from the two-dimensional image to calculate the pixel value for the one-dimensional image. See Chapter 2, Basic Image Processing, in the Image Processing manual for a complete description of the function clp_transform().
- Figure 111 shows a -15° caliper window with window rotation enabled.
- the Caliper Tool rotates the two-dimensional image and projects it into a one-dimensional image.
- the choice between skewed or rotated projection depends on the angle of the edge or edge pair of interest in the image and the remaining content of the image. If the rotation angle is close to 90°, or if a skewed image will contain enough of the edge or edge pair of interest to generate strong edges, you can use skewed projection to speed program execution. Figure 112 shows an image, a caliper window, and the one-dimensional image that results from projection.
- the skewed window contains enough edge information, so skewed projection may be used.
- Figure 113a shows a caliper window where the edges of interest are at -15° and window rotation is disabled. A window large enough to enclose these edges would also include much of the dark area in the image, which in some cases would result in a one-dimensional image where the edges are obscured.
- Figure 113b shows a caliper window at -15° in the same image with window rotation enabled. Only the edges of interest are included in this window.
- an edge filter is run on the resulting one-dimensional image
- the edge filter accentuates edges in the image and produces a filtered image
- the peaks in this image indicate strong edges
- Edge filter size is the number of pixels to each side of the edge to consider in the evaluation
- Edge filter leniency is the number of pixels at an edge to ignore between the light and dark sides of the edge (leniency is described in further detail below).
- Figure 114 illustrates an edge filter. The edge filter is positioned over the leftmost pixel in the one-dimensional image where it will entirely fit in that image. Pixel values to the left of the leniency region are summed and subtracted from the sum of the pixel values to the right of the leniency region; in this example, (0 + 5) − (0 + 0) yields 5 for the edge filter image.
- Figure 114: The edge filter is positioned in the one-dimensional image.
- the edge filter is then applied at each successive pixel in the one-dimensional image until the edge filter no longer fits entirely in the image. At each pixel position, the pixel values to the left of the leniency region are summed and subtracted from the sum of the pixel values to the right of the leniency region.
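The sliding edge filter described above can be sketched as follows. The handling of the window ends follows the text; the sign convention (dark-to-light transitions score positive) is an assumption:

```python
def edge_filter(signal, size, leniency):
    """Slide an edge filter over a 1-D image. At each position where the
    filter fits entirely, sum the `size` pixels left of the leniency
    region and subtract that from the sum of the `size` pixels to its
    right. The filter spans 2*size + leniency pixels."""
    span = 2 * size + leniency
    out = []
    for i in range(len(signal) - span + 1):
        left = sum(signal[i:i + size])
        right = sum(signal[i + size + leniency:i + span])
        out.append(right - left)
    return out
```

With leniency > 0 the pixels straddling the edge are ignored, so a 2-pixel-wide edge still produces a full-height peak, which is the effect described for Figure 119 below.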
- Figure 116 shows a graph of the one-dimensional image and the edge filter image from Figure 115. Peak position is calculated to subpixel accuracy.
- Edge filter size specifies the number of pixels on either side of an edge to consider in an edge evaluation. Size should usually be greater than 1 because noise in an image causes grey level differences between neighboring pixels.
- Figure 117 shows a graph of a one-dimensional image of a light band on a dark background, and a graph of an edge filter image where a size of 1 was used. Because there is noise in the image, many more peaks exist than are of interest.
- Figure 118 shows an image similar to the one in Figure 117 where a size of 4 was used. The only two peaks in the image are the peaks of interest. Size is typically between 2 and 5, inclusive.
- Edge filter leniency specifies a region of pixels to ignore at an edge in an image. Due to slight changes in edge or edge pair position or orientation, a pixel on the edge of a feature may not always have the same grey value. By setting a leniency zone between the light and dark areas of a feature, the effects of slight changes in feature location on filter results are minimized.
- Figure 119 shows a graph of a one-dimensional image with two edges that span two pixels each, and graphs of the output of two edge filters.
- the first edge filter has a leniency of 0
- the second edge filter has a leniency of 2.
- the edge filter with a leniency of 2 produces higher peaks.
- Two is a typical value for leniency.
- Point Registration Tool Overview provides information about the capabilities and intended use of the Point Registration tool.
- How the Point Registration Tool Works provides a general description of the operation of the Point Registration tool.
- Point Registration Tool describes some of the techniques that you will use to implement an application using the Point Registration tool.
- Point Registration Tool Data Structures and Functions provides a detailed description of the data structures and functions that you will use to implement your application.
- point registration A search technique designed to determine the exact point at which two images of the same scene are precisely aligned.
- global minimum The lowest value in a function or signal.
- local minimum A low value in a function or signal.
- Positions within an image may be specified in terms of whole pixel positions, in which case the position refers to the upper-left corner of the pixel, or in terms of fractional pixel positions, in which case the position may be anywhere within a pixel. Positions specified in terms of fractional pixel positions are referred to as having subpixel accuracy.
- the Cognex Point Registration tool performs fast, accurate point registration using two images that you supply.
- Point registration is the process of determining the precise offset between two images of the same scene.
- You use the Point Registration tool to perform point registration by providing a model image and a target image, along with a starting point within the target image.
- the model image should be a reference image of the feature you are attempting to align.
- the target image should be larger than the model image, and it should contain an instance of the model image within it.
- the Point Registration tool will determine the precise location of the model image within the target image.
- the Point Registration tool will return the location of this match, called the registration point, with subpixel accuracy.
- Figure 160 illustrates an example of how the Point Registration tool performs point registration.
- the Point Registration tool is optimized to locate precisely the model image within the target image, even if the target image contains image defects such as reflections, or if the model image is partially obscured by other features in the target image.
- Figure 161 illustrates an example of point registration where the target image is partially obscured.
- the Point Registration tool finds the location of the model image within the target image. You operate the Point Registration tool by supplying the model and target images, along with the location within the target image where you expect the origin of the model image to be. The Point Registration tool will determine, with subpixel accuracy, where the origin of the model image lies within the target image.
- the Point Registration tool works by computing a score indicating the degree of similarity between the model image and a particular portion of the target image that is the same size as the model image. This score can range from 0 to 255, with a score of 0 indicating that the model image and the target image are perfectly similar and a score of 255 indicating that the model image and the target image are perfectly dissimilar.
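The score computation can be sketched as follows. The tool's actual similarity metric is not documented here, so mean absolute grey-level difference, clamped to 0..255, is used purely as a stand-in:

```python
def registration_score(model, target, ox, oy):
    """Dissimilarity score between the model image and the same-size
    window of the target at whole-pixel offset (ox, oy): 0 means
    perfectly similar, 255 perfectly dissimilar. Stand-in metric, not
    the tool's proprietary score."""
    h, w = len(model), len(model[0])
    total = sum(abs(model[y][x] - target[oy + y][ox + x])
                for y in range(h) for x in range(w))
    return min(255, total // (w * h))
```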
- the tool computes this score for locations in the immediate neighborhood surrounding the starting point
- the tool will find the location within this neighborhood of the target image that produces the local minimum in the value of this score.
- the Point Registration tool determines the location of the model image within the target image with subpixel accuracy. For typical images, the Point Registration tool can achieve accurate registration to within 0.25 pixels.
- because the Point Registration tool seeks a local minimum, if the starting point you specify is more than a few pixels from the actual registration point, the tool may not return the correct registration point.
- the exact amount of variance that the Point Registration tool can tolerate will vary depending on the images. The variance may be as small as 3 to 5 pixels for some images or as large as 30 pixels for others.
- the Point Registration tool will return, in addition to the location of the origin of the model image within the target image, a score indicating the degree of similarity between the model image and the target image.
- the score returned by the tool will be from 0 to 255, with a score of 0 indicating that the model image and the target image are perfectly similar and a score of 255 indicating that the model image and the target image are perfectly dissimilar.
- if the point registration receives a high score (indicating a poor match), the actual precision of the registration point may be somewhat lower.
- When you use the Point Registration tool, you supply the tool with two images and a starting location within the target image. The tool will confine its point registration search to a small area around the starting location that you specify.
- the Point Registration tool can also perform point registration exhaustively, that is, by computing the score for every possible location of the model image within the target image. This procedure, called exhaustive point registration, is extremely slow. It can be helpful, however, in debugging applications where the Point Registration tool does not appear to be working correctly.
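Exhaustive registration at whole-pixel offsets can be sketched as follows, again with a stand-in mean-absolute-difference score in place of the tool's undocumented metric:

```python
def exhaustive_register(model, target):
    """Score every possible whole-pixel placement of the model within the
    target and return the (ox, oy) offset with the lowest (most similar)
    score. Extremely slow, but useful for debugging, as described above."""
    mh, mw = len(model), len(model[0])
    th, tw = len(target), len(target[0])

    def score(ox, oy):
        # Stand-in dissimilarity: mean absolute grey-level difference.
        total = sum(abs(model[y][x] - target[oy + y][ox + x])
                    for y in range(mh) for x in range(mw))
        return total / (mw * mh)

    return min(((ox, oy) for oy in range(th - mh + 1)
                for ox in range(tw - mw + 1)),
               key=lambda p: score(p[0], p[1]))
```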
- Figure 162 illustrates an example of specifying a masking rectangle.
- in this example, the right side of the model is often obscured in the target image.
- by specifying a rectangle that covers the left side of the model image, you can cause the Point Registration tool to consider only the pixels in that part of the model image. This will tend to increase the accuracy of the point registration.
- Figure 163 illustrates an example where the center of the model image is often obscured in the target image.
- any pixels that are contained in more than one masking rectangle will be counted toward the score multiple times, once for each rectangle in which they are contained.
- the Point Registration tool may not work well in cases where the model image and the target image have different intensity values You can perform limited image processing as part of the point registration by supplying a pixel mapping table.
- Every pixel in the target image will be mapped to the value in the pixel mapping table at the location within the pixel mapping table given by the pixel's value. For example, if a pixel in the target image had a value of 200, it would be mapped to the value of the 200th element within the pixel mapping table.
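The pixel mapping step can be sketched as:

```python
def map_pixels(image, table):
    """Replace every pixel value v with table[v] (e.g. a 256-entry table
    for 8-bit images); a pixel of value 200 becomes table[200]."""
    return [[table[v] for v in row] for row in image]
```

For example, an inverting table maps each grey level g to 255 - g, which can compensate for a polarity difference between model and target.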
- the Point Registration tool is designed to align target images with a wide variety of image defects, but in order for the tool to work, the model image needs to be as free from defects as possible.
- When using the Point Registration tool, you must specify a starting location in the target image. The tool will perform its point registration search starting at the point that you specify. Because the tool only seeks a local minimum for the score value, if you specify a starting location that varies greatly from the actual registration point, the point registration operation will fail.
- creg_params contains parameters that determine how the point registration search will be conducted.
- a structure of this type is supplied as an argument to creg_point_register().
- optional_map is an optional pixel map. If optional_map is non-NULL, the Point Registration tool will replace each pixel in the target image with the value contained in the element of optional_map corresponding to the value of the pixel in the target image. If you supply a value for optional_map, you must supply an image of type FAST8.
- flags specifies the type of point registration that the Point Registration tool will perform. flags is constructed by ORing together any of the following values:
- CREG_TYPE1 is reserved for future use; you should always include CREG_TYPE1 in the value of flags.
- CREG_TYPE2 is reserved for future use; you should never include CREG_TYPE2 in the value of flags.
- the tool may return a location that does not represent the best match within the image. If you specify CREG_EXHAUSTIVE, the registration will return the best registration match for the entire image. An exhaustive point registration will be extremely slow.
- creg_results: creg_point_register() returns a pointer to a creg_results structure.
- creg_point_register() fills in the structure with information about the results of a point registration operation.
- x and y are the x-coordinate and y-coordinate, respectively, of the location within the target image at which the model image was found. x and y give the position with subpixel accuracy.
- score is a measure of the similarity between the pixel values in the model image and the target image at the nearest whole-pixel alignment position. score will be between 0 and 255, with lower values indicating a greater degree of similarity between the model image and the target image.
- on_edge_x is set to a nonzero value if, along the x-axis, one edge of the model image area is at the edge of the target image. If on_edge_x is nonzero, the accuracy of the position information may be slightly reduced from what it would otherwise be.
- on_edge_y is set to a nonzero value if, along the y-axis, one edge of the model image area is at the edge of the target image. If on_edge_y is nonzero, the accuracy of the position information may be slightly reduced from what it would otherwise be.
- creg_point_register() performs point registration using the model image and target image that you supply.
- the point registration will be controlled by the parameters in the creg_params structure supplied to the function. creg_point_register() returns a pointer to a creg_results structure that describes the result of the point registration.
- target points to the image to use as the target image for this point registration. target must be at least one pixel larger in both the x-dimension and the y-dimension than model.
- model points to the image to use as the model image for this point registration. model must be at least one pixel smaller in both the x-dimension and the y-dimension than target.
- creg_point_register() allocates the creg_results structure from the heap using the default allocator. You can free this structure by calling free(). creg_point_register() throws CGEN_ERR_BADARG if it is passed invalid arguments.
- the Line Finder computes the locations of lines in an image. For each line that it finds, it returns an angle θ and a signed distance d from the central pixel of the image. Given θ and d, you can compute all points (x, y) in the image that satisfy the equation of the line (as illustrated in the section The Transformation from Cartesian Space to Hough Space on page 204). The algorithm that the Line Finder uses, along with the angle/distance classification of lines, is explained in How the Line Finder Works on page 166.
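The angle/distance parameterization can be sketched as follows. The normal-form equation and the sign convention for d are assumptions consistent with the description (origin at the image center, distance vector perpendicular to the line):

```python
import math

def on_line(x, y, theta_deg, d, tol=1e-9):
    """True if the Cartesian point (x, y), with the origin at the image
    center, lies on the line at angle theta_deg from the x-axis and
    signed perpendicular distance d from the center. Assumed normal
    form: -x*sin(theta) + y*cos(theta) = d."""
    t = math.radians(theta_deg)
    return abs(-x * math.sin(t) + y * math.cos(t) - d) <= tol

def point_on_line(theta_deg, d, s):
    """A point on that line: the foot of the perpendicular from the
    center, moved s units along the line's direction."""
    t = math.radians(theta_deg)
    fx, fy = -d * math.sin(t), d * math.cos(t)  # foot of perpendicular
    return fx + s * math.cos(t), fy + s * math.sin(t)
```

Sweeping s generates all points (x, y) satisfying the line equation for a given (θ, d) pair.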
- the Line Finder provides the location of the point along each line that is nearest to the center of the image, along with the angle of the line.
- the Line Finder also estimates the length and density of each line that it locates, using statistical methods, and returns the position of the center of mass of the line.
- Figure 78 illustrates the two modes of the Line Finder.
- the image in Figure 78a is shown in Figure 78b after the Line Finder has been invoked in quick mode. The presence of the four lines is indicated.
- Each line is drawn to the screen at a constant length, but no attempt is made to estimate its actual length
- Figure 78c shows the result when using the Line Finder in normal mode: the length of each line segment is estimated.
- the Line Finder finds lines regardless of grey-scale variations in the image, rotation of the scene, or changes in scale.
- a typical application for the Line Finder is locating fiducial marks in a series of images in which grey levels, scale, and rotation are unpredictable from one image to the next.
- Figure 79a and Figure 79b illustrate two images, each containing the same fiducial mark: a square within a square. However, the sizes of the marks and their rotations differ between the two images. Also, the mark in Figure 79a is a dark square within a light square.
- the Line Finder demo code includes post-processing of this sort. It contains functions that locate squares of any size and rotation, using the Line Finder. This code deduces the presence of squares based on the Line Finder's results, eliminating extraneous lines. See Chapter 1, Cognex API Introduction, in the Development Environment 1 manual for the location and name of the Contour Finding demo code on your system.
- the Hough Line Transform describes the algorithm that the Line Finder uses to find lines in an image.
- the Line Finder uses a Cartesian coordinate system whose origin is the center of the image. Most of the examples in the book are drawn in this coordinate system, although a few use image coordinates. These two systems are pictured in Figure 80. Notice that the x-axes of the two systems are the same, but the y-axes are opposite. Note also that positive angles in Cartesian coordinates are measured counterclockwise from the x-axis; positive angles in image buffer coordinates are measured clockwise from the x-axis.
- Figure 80 Image buffer coordinates and Cartesian coordinates
- the Line Finder uses the central pixel of the image as a reference point. The image buffer coordinates of this pixel (x_c, y_c) are computed with integer division.
- the Line Finder defines a line (in Cartesian coordinates) by its angle from the x-axis and by the signed distance from the center of the image (the central pixel) to the line. These two parameters uniquely describe any line in Cartesian space.
- Figure 81a contains an edge that lies along a line ( Figure 81b)
- the line is defined by its angle from the horizontal (θ) and by the shortest distance from the center of the image to the line (Figure 81c). Note that the distance vector is perpendicular to the line
- the distance is negative if the angle of the distance vector (in Cartesian coordinates) is greater than or equal to 0° and less than 180°; the distance is positive if the distance vector has an angle greater than or equal to 180° and less than 360°.
- Figure 82a shows an image with two features. Each feature is bordered on one side by a line. Although the lines are different, they have the same angle θ and the same absolute distance d from the center (Figure 82b).
- each line is ensured a unique definition. In this example, the two lines are defined as (θ, d) and (θ, -d).
- Hough space is a two-dimensional space in which the Line Finder records the lines that it finds in an image.
- a point in Hough space represents a line: one axis represents angle, the other represents distance
- Hough space is implemented in the Line Finder as an array; a simple example is shown in Figure 83.
- the Hough space in this example can record eight angles and nine distances and therefore 72 lines.
- the Line Finder lets you control the size of Hough space. You specify the range of distance in pixels, and, in the edge detection data structure that you pass to the Line Finder, you supply the number of angles that the Line Finder can compute.
- the Hough space in Figure 84 contains a single line. It has an angle of 135° and is a distance of 3 units from the center of the image.
- the Line Finder uses the Hough Line Transform to locate lines in an image. This method is outlined and illustrated in this section. To understand this discussion, you should be familiar with edge detection and peak detection, as described in Chapter 4, Edge Detection Tool, in the Image Processing manual.
- the Hough Line Transform takes advantage of the following rule. If an edge pixel's angle θ and location (x0, y0) are known, then the line on which the pixel lies is also known: it is the line with angle θ+90° that contains the pixel (x0, y0). This is illustrated in Figure 85.
- Figure 85 Four edges in an image (a) edge angles at four edge pixels (b), and line angles at those pixels (c)
- the Hough Line Transform is implemented as follows.
- the Line Finder uses edge detection to create an edge magnitude image and an edge angle image from the input image
- the Line Finder creates and clears a Hough space
- the Line Finder searches Hough space for maximum values, using peak detection.
- the highest value in Hough space represents the strongest line in the image, the second highest represents the second strongest line, and so forth.
- each line is printed out and, optionally, each line is drawn in an output image.
- Figure 86a is an image passed to the Line Finder
- Figure 86b is a stylized edge magnitude image created from the input image
- Each edge pixel is processed, as described above. For example, because the outlined edge pixel in Figure 87 belongs to the line (90, -2), the bin in Hough space that represents that line is incremented.
- Figure 88 shows the final Hough space along with the input image
- Figure 89 shows the peak-detected Hough space, with each of the peaks outlined
- Figure 89 also shows the input image with the lines, represented by the peaks in Hough space, drawn into the input image
- This file includes a function for localizing a cross:
- This involves constructing a synthetic model of a rotated cross
- the computed edge point is an outlier (the distance between the edge point and the nominal line is larger than threshold)
- the caliper may be off by up to 5
- TIGHT_CALIPER_WIDTH should be larger than the minimum width so that
- xcent = xsum / n_points
- * purpose is to determine whether we need to use a vertical or horizontal
- compute_subpixel_position() will either perform a vertical or horizontal subpixel edge estimation. Subpixel edge positions are computed in the same manner as the boundary tracker (using neighboring pixels and differences, where we quadratically interpolate the edge position from at least three differences).
- d_max = max(d0, max(d1, max(d2, max(d3, max(d4, max(d5, d6))))))
- d_min = min(d0, min(d1, min(d2, min(d3, min(d4, min(d5, d6))))))
- pos_x ap_x*l. - cz_parf ⁇ c ⁇ , d5 , do 55536 -2 return 1;
- step_size = number of pels in the larger direction (and a fractional number of pels in the other direction).
- cip_transform throws CGEN_ERR_BADARG, CIP_ERR_PELADDR,
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP51943098A JP2001524228A (en) | 1996-10-07 | 1997-10-07 | Machine vision calibration target and method for determining position and orientation of target in image |
EP97910863A EP0883857A2 (en) | 1996-10-07 | 1997-10-07 | Machine vision calibration targets and methods of determining their location and orientation in an image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/726,521 US6137893A (en) | 1996-10-07 | 1996-10-07 | Machine vision calibration targets and methods of determining their location and orientation in an image |
US08/726,521 | 1996-10-07 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO1998018117A2 true WO1998018117A2 (en) | 1998-04-30 |
WO1998018117A3 WO1998018117A3 (en) | 1998-07-02 |
Family
ID=24918945
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1997/018268 WO1998018117A2 (en) | 1996-10-07 | 1997-10-07 | Machine vision calibration targets and methods of determining their location and orientation in an image |
Country Status (4)
Country | Link |
---|---|
US (1) | US6137893A (en) |
EP (1) | EP0883857A2 (en) |
JP (1) | JP2001524228A (en) |
WO (1) | WO1998018117A2 (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE19921778A1 (en) * | 1999-03-24 | 2000-10-05 | Anitra Medienprojekte Gmbh | Process, support for samples and reading device for two-dimensional position determination on surfaces and the associated triggering of program processes |
WO2001033504A1 (en) * | 1999-10-29 | 2001-05-10 | Cognex Corporation | Method and apparatus for locating objects using universal alignment targets |
WO2001039124A2 (en) * | 1999-11-23 | 2001-05-31 | Canon Kabushiki Kaisha | Image processing apparatus |
US6384907B1 (en) | 1999-07-08 | 2002-05-07 | Bae Systems Plc | Optical target and apparatus and method for automatic identification thereof |
US6903726B1 (en) | 1999-03-24 | 2005-06-07 | Anjowiggins Papiers Couches | Method and system for determining positions on a document |
US6912490B2 (en) | 2000-10-27 | 2005-06-28 | Canon Kabushiki Kaisha | Image processing apparatus |
US6975326B2 (en) | 2001-11-05 | 2005-12-13 | Canon Europa N.V. | Image processing apparatus |
US7043055B1 (en) | 1999-10-29 | 2006-05-09 | Cognex Corporation | Method and apparatus for locating objects using universal alignment targets |
US7079679B2 (en) | 2000-09-27 | 2006-07-18 | Canon Kabushiki Kaisha | Image processing apparatus |
US7120289B2 (en) | 2000-10-27 | 2006-10-10 | Canon Kabushiki Kaisha | Image generation method and apparatus |
US7492476B1 (en) | 1999-11-23 | 2009-02-17 | Canon Kabushiki Kaisha | Image processing apparatus |
US7561164B2 (en) | 2002-02-28 | 2009-07-14 | Canon Europa N.V. | Texture map editing |
US7620234B2 (en) | 2000-10-06 | 2009-11-17 | Canon Kabushiki Kaisha | Image processing apparatus and method for generating a three-dimensional model of an object from a collection of images of the object recorded at different viewpoints and segmented using semi-automatic segmentation techniques |
WO2010044939A1 (en) * | 2008-10-17 | 2010-04-22 | Honda Motor Co., Ltd. | Structure and motion with stereo using lines |
US7773773B2 (en) | 2006-10-18 | 2010-08-10 | Ut-Battelle, Llc | Method and system for determining a volume of an object from two-dimensional images |
CN102288606A (en) * | 2011-05-06 | 2011-12-21 | 山东农业大学 | Pollen viability measuring method based on machine vision |
EP2437495A1 (en) * | 2009-05-27 | 2012-04-04 | Aisin Seiki Kabushiki Kaisha | Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus |
US8855406B2 (en) | 2010-09-10 | 2014-10-07 | Honda Motor Co., Ltd. | Egomotion using assorted features |
TWI507678B (en) * | 2014-04-09 | 2015-11-11 | Inventec Energy Corp | Device and method for identifying an object |
WO2023141903A1 (en) * | 2022-01-27 | 2023-08-03 | Cognex Vision Inspection System (Shanghai) Co., Ltd. | Easy line finder based on dynamic time warping method |
Families Citing this family (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7016539B1 (en) | 1998-07-13 | 2006-03-21 | Cognex Corporation | Method for fast, robust, multi-dimensional pattern recognition |
US6650779B2 (en) * | 1999-03-26 | 2003-11-18 | Georgia Tech Research Corp. | Method and apparatus for analyzing an image to detect and identify patterns |
US6658164B1 (en) * | 1999-08-09 | 2003-12-02 | Cross Match Technologies, Inc. | Calibration and correction in a fingerprint scanner |
US6813376B1 (en) * | 1999-10-29 | 2004-11-02 | Rudolph Technologies, Inc. | System and method for detecting defects on a structure-bearing surface using optical inspection |
US6812933B1 (en) * | 1999-10-29 | 2004-11-02 | Cognex Technology And Investment | Method for rendering algebraically defined two-dimensional shapes by computing pixel intensity using an edge model and signed distance to the nearest boundary |
US6744913B1 (en) * | 2000-04-18 | 2004-06-01 | Semiconductor Technology & Instruments, Inc. | System and method for locating image features |
US6816625B2 (en) | 2000-08-16 | 2004-11-09 | Lewis Jr Clarence A | Distortion free image capture system and method |
US6728582B1 (en) | 2000-12-15 | 2004-04-27 | Cognex Corporation | System and method for determining the position of an object in three dimensions using a machine vision system with two cameras |
US6751338B1 (en) | 2000-12-15 | 2004-06-15 | Cognex Corporation | System and method of using range image data with machine vision tools |
US6771808B1 (en) | 2000-12-15 | 2004-08-03 | Cognex Corporation | System and method for registering patterns transformed in six degrees of freedom using machine vision |
US6681151B1 (en) | 2000-12-15 | 2004-01-20 | Cognex Technology And Investment Corporation | System and method for servoing robots based upon workpieces with fiducial marks using machine vision |
US6751361B1 (en) * | 2000-12-22 | 2004-06-15 | Cognex Corporation | Method and apparatus for performing fixturing in a machine vision system |
US6997387B1 (en) * | 2001-03-28 | 2006-02-14 | The Code Corporation | Apparatus and method for calibration of projected target point within an image |
US7003161B2 (en) * | 2001-11-16 | 2006-02-21 | Mitutoyo Corporation | Systems and methods for boundary detection in images |
US6798515B1 (en) | 2001-11-29 | 2004-09-28 | Cognex Technology And Investment Corporation | Method for calculating a scale relationship for an imaging system |
US7085432B2 (en) * | 2002-06-10 | 2006-08-01 | Lockheed Martin Corporation | Edge detection using Hough transformation |
US8218000B2 (en) * | 2002-09-13 | 2012-07-10 | Leica Instruments (Singapore) Pte. Ltd. | Method and system for size calibration |
DE10242628B4 (en) * | 2002-09-13 | 2004-08-12 | Leica Microsystems (Schweiz) Ag | Size calibration method and system |
US7957554B1 (en) * | 2002-12-31 | 2011-06-07 | Cognex Technology And Investment Corporation | Method and apparatus for human interface to a machine vision system |
US8081820B2 (en) | 2003-07-22 | 2011-12-20 | Cognex Technology And Investment Corporation | Method for partitioning a pattern into optimized sub-patterns |
US7190834B2 (en) | 2003-07-22 | 2007-03-13 | Cognex Technology And Investment Corporation | Methods for finding and characterizing a deformed pattern in an image |
JP3714350B2 (en) * | 2004-01-27 | 2005-11-09 | セイコーエプソン株式会社 | Human candidate region extraction method, human candidate region extraction system, and human candidate region extraction program in image |
JP4428067B2 (en) * | 2004-01-28 | 2010-03-10 | ソニー株式会社 | Image collation apparatus, program, and image collation method |
US20050175217A1 (en) * | 2004-02-05 | 2005-08-11 | Mueller Louis F. | Using target images to determine a location of a stage |
US7522306B2 (en) * | 2004-02-11 | 2009-04-21 | Hewlett-Packard Development Company, L.P. | Method and apparatus for generating a calibration target on a medium |
US20050197860A1 (en) * | 2004-02-23 | 2005-09-08 | Rademr, Inc. | Data management system |
JP2005267457A (en) * | 2004-03-19 | 2005-09-29 | Casio Comput Co Ltd | Image processing device, imaging apparatus, image processing method and program |
US7075097B2 (en) * | 2004-03-25 | 2006-07-11 | Mitutoyo Corporation | Optical path array and angular filter for translation and orientation sensing |
US7551765B2 (en) * | 2004-06-14 | 2009-06-23 | Delphi Technologies, Inc. | Electronic component detection system |
DE602005004332T2 (en) | 2004-06-17 | 2009-01-08 | Cadent Ltd. | Method for providing data related to the oral cavity |
US7522163B2 (en) * | 2004-08-28 | 2009-04-21 | David Holmes | Method and apparatus for determining offsets of a part from a digital image |
US8437502B1 (en) | 2004-09-25 | 2013-05-07 | Cognex Technology And Investment Corporation | General pose refinement and tracking tool |
WO2006065563A2 (en) * | 2004-12-14 | 2006-06-22 | Sky-Trax Incorporated | Method and apparatus for determining position and rotational orientation of an object |
US7606437B2 (en) * | 2005-01-11 | 2009-10-20 | Eastman Kodak Company | Image processing based on ambient air attributes |
US8111904B2 (en) | 2005-10-07 | 2012-02-07 | Cognex Technology And Investment Corp. | Methods and apparatus for practical 3D vision system |
US7724942B2 (en) * | 2005-10-31 | 2010-05-25 | Mitutoyo Corporation | Optical aberration correction for machine vision inspection systems |
US8311311B2 (en) | 2005-10-31 | 2012-11-13 | Mitutoyo Corporation | Optical aberration correction for machine vision inspection systems |
US7965887B2 (en) * | 2005-12-01 | 2011-06-21 | Cognex Technology And Investment Corp. | Method of pattern location using color image data |
US7570800B2 (en) * | 2005-12-14 | 2009-08-04 | Kla-Tencor Technologies Corp. | Methods and systems for binning defects detected on a specimen |
US7732768B1 (en) * | 2006-03-02 | 2010-06-08 | Thermoteknix Systems Ltd. | Image alignment and trend analysis features for an infrared imaging system |
US8162584B2 (en) | 2006-08-23 | 2012-04-24 | Cognex Corporation | Method and apparatus for semiconductor wafer alignment |
US8126260B2 (en) * | 2007-05-29 | 2012-02-28 | Cognex Corporation | System and method for locating a three-dimensional object using machine vision |
WO2010016379A1 (en) * | 2008-08-05 | 2010-02-11 | アイシン精機株式会社 | Target position identifying apparatus |
JP5339124B2 (en) * | 2008-09-30 | 2013-11-13 | アイシン精機株式会社 | Car camera calibration system |
EP3975138A1 (en) * | 2008-10-06 | 2022-03-30 | Mobileye Vision Technologies Ltd. | Bundling of driver assistance systems |
US9533418B2 (en) | 2009-05-29 | 2017-01-03 | Cognex Corporation | Methods and apparatus for practical 3D vision system |
CN101794373B (en) * | 2009-12-30 | 2012-07-04 | 上海维宏电子科技股份有限公司 | Application method of rotating and sub-pixel matching algorithm to machine vision system |
JP5371015B2 (en) * | 2010-04-08 | 2013-12-18 | 独立行政法人産業技術総合研究所 | Cross mark detection apparatus and method, and program |
US9393694B2 (en) | 2010-05-14 | 2016-07-19 | Cognex Corporation | System and method for robust calibration between a machine vision system and a robot |
US20120327214A1 (en) * | 2011-06-21 | 2012-12-27 | HNJ Solutions, Inc. | System and method for image calibration |
US9349033B2 (en) * | 2011-09-21 | 2016-05-24 | The United States of America, as represented by the Secretary of Commerce, The National Institute of Standards and Technology | Standard calibration target for contactless fingerprint scanners |
FR2986326B1 (en) * | 2012-01-27 | 2014-03-14 | Msc & Sgcc | OPTICAL METHOD FOR INSPECTING TRANSPARENT OR TRANSLUCENT ARTICLES TO ASSIGN AN OPTICAL REFERENCE ADJUSTMENT TO THE VISION SYSTEM |
US9946947B2 (en) * | 2012-10-31 | 2018-04-17 | Cognex Corporation | System and method for finding saddle point-like structures in an image and determining information from the same |
US9189702B2 (en) | 2012-12-31 | 2015-11-17 | Cognex Corporation | Imaging system for determining multi-view alignment |
US9440313B2 (en) | 2013-03-12 | 2016-09-13 | Serenity Data Security, Llc | Hard drive data destroying device |
US9679224B2 (en) | 2013-06-28 | 2017-06-13 | Cognex Corporation | Semi-supervised method for training multiple pattern recognition and registration tool models |
US9833962B2 (en) | 2014-02-26 | 2017-12-05 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for controlling manufacturing processes |
US9286695B2 (en) | 2014-03-13 | 2016-03-15 | Bendix Commercial Vehicle Systems Llc | Systems and methods for tracking points within an encasement |
US9675430B2 (en) | 2014-08-15 | 2017-06-13 | Align Technology, Inc. | Confocal imaging apparatus with curved focal surface |
US9596459B2 (en) * | 2014-09-05 | 2017-03-14 | Intel Corporation | Multi-target camera calibration |
EP3054265B1 (en) * | 2015-02-04 | 2022-04-20 | Hexagon Technology Center GmbH | Coordinate measuring machine |
WO2017004573A1 (en) | 2015-07-02 | 2017-01-05 | Serenity Data Services, Inc. | Product verification for hard drive data destroying device |
US11167384B2 (en) | 2015-07-02 | 2021-11-09 | Serenity Data Security, Llc | Hard drive non-destructive dismantling system |
WO2017004575A1 (en) | 2015-07-02 | 2017-01-05 | Serenity Data Services, Inc. | Hard drive dismantling system |
US10937168B2 (en) | 2015-11-02 | 2021-03-02 | Cognex Corporation | System and method for finding and classifying lines in an image with a vision system |
US10152780B2 (en) | 2015-11-02 | 2018-12-11 | Cognex Corporation | System and method for finding lines in an image with a vision system |
CN105528789B (en) * | 2015-12-08 | 2018-09-18 | 深圳市恒科通机器人有限公司 | Robot visual orientation method and device, vision calibration method and device |
EP3467428A4 (en) * | 2016-05-30 | 2019-05-08 | Sony Corporation | Information processing device, information processing method, program, and image capturing system |
US20180240250A1 (en) * | 2017-02-21 | 2018-08-23 | Wipro Limited | System and method for assisted pose estimation |
US11135449B2 (en) | 2017-05-04 | 2021-10-05 | Intraop Medical Corporation | Machine vision alignment and positioning system for electron beam treatment systems |
US10762405B2 (en) | 2017-10-26 | 2020-09-01 | Datalogic Ip Tech S.R.L. | System and method for extracting bitstream data in two-dimensional optical codes |
US10650584B2 (en) | 2018-03-30 | 2020-05-12 | Konica Minolta Laboratory U.S.A., Inc. | Three-dimensional modeling scanner |
US11291507B2 (en) | 2018-07-16 | 2022-04-05 | Mako Surgical Corp. | System and method for image based registration and calibration |
US10735665B2 (en) * | 2018-10-30 | 2020-08-04 | Dell Products, Lp | Method and system for head mounted display infrared emitter brightness optimization based on image saturation |
CN111428731B (en) * | 2019-04-04 | 2023-09-26 | 深圳市联合视觉创新科技有限公司 | Multi-category identification positioning method, device and equipment based on machine vision |
CN110909668B (en) * | 2019-11-20 | 2021-02-19 | 广州极飞科技有限公司 | Target detection method and device, computer readable storage medium and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5027419A (en) * | 1989-03-31 | 1991-06-25 | Atomic Energy Of Canada Limited | Optical images by quadrupole convolution |
US5113565A (en) * | 1990-07-06 | 1992-05-19 | International Business Machines Corp. | Apparatus and method for inspection and alignment of semiconductor chips and conductive lead frames |
US5179419A (en) * | 1991-11-22 | 1993-01-12 | At&T Bell Laboratories | Methods of detecting, classifying and quantifying defects in optical fiber end faces |
US5371690A (en) * | 1992-01-17 | 1994-12-06 | Cognex Corporation | Method and apparatus for inspection of surface mounted devices |
US5553859A (en) * | 1995-03-22 | 1996-09-10 | Lazer-Tron Corporation | Arcade game for sensing and validating objects |
Family Cites Families (138)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3816722A (en) * | 1970-09-29 | 1974-06-11 | Nippon Electric Co | Computer for calculating the similarity between patterns and pattern recognition system comprising the similarity computer |
JPS5425782B2 (en) * | 1973-03-28 | 1979-08-30 | ||
US3967100A (en) * | 1973-11-12 | 1976-06-29 | Naonobu Shimomura | Digital function generator utilizing cascade accumulation |
US3968475A (en) * | 1974-11-11 | 1976-07-06 | Sperry Rand Corporation | Digital processor for extracting data from a binary image |
JPS5177047A (en) * | 1974-12-27 | 1976-07-03 | Naonobu Shimomura | |
US4011403A (en) * | 1976-03-30 | 1977-03-08 | Northwestern University | Fiber optic laser illuminators |
CH611017A5 (en) * | 1976-05-05 | 1979-05-15 | Zumbach Electronic Ag | |
US4183013A (en) * | 1976-11-29 | 1980-01-08 | Coulter Electronics, Inc. | System for extracting shape features from an image |
JPS5369063A (en) * | 1976-12-01 | 1978-06-20 | Hitachi Ltd | Detector of position alignment patterns |
US4385322A (en) * | 1978-09-01 | 1983-05-24 | View Engineering, Inc. | Pattern recognition apparatus and method |
US4200861A (en) * | 1978-09-01 | 1980-04-29 | View Engineering, Inc. | Pattern recognition apparatus and method |
JPS5579590A (en) * | 1978-12-13 | 1980-06-16 | Hitachi Ltd | Video data processor |
US4300164A (en) * | 1980-03-21 | 1981-11-10 | View Engineering, Inc. | Adaptive video processor |
JPS57102017A (en) * | 1980-12-17 | 1982-06-24 | Hitachi Ltd | Pattern detector |
US4441124A (en) * | 1981-11-05 | 1984-04-03 | Western Electric Company, Inc. | Technique for inspecting semiconductor wafers for particulate contamination |
DE3267548D1 (en) * | 1982-05-28 | 1986-01-02 | Ibm Deutschland | Process and device for an automatic optical inspection |
US4534813A (en) * | 1982-07-26 | 1985-08-13 | Mcdonnell Douglas Corporation | Compound curve-flat pattern process |
US4736437A (en) * | 1982-11-22 | 1988-04-05 | View Engineering, Inc. | High speed pattern recognizer |
US4577344A (en) * | 1983-01-17 | 1986-03-18 | Automatix Incorporated | Vision system |
US4783829A (en) * | 1983-02-23 | 1988-11-08 | Hitachi, Ltd. | Pattern recognition apparatus |
GB8311813D0 (en) * | 1983-04-29 | 1983-06-02 | West G A W | Coding and storing raster scan images |
US4581762A (en) * | 1984-01-19 | 1986-04-08 | Itran Corporation | Vision inspection system |
US4606065A (en) * | 1984-02-09 | 1986-08-12 | Imaging Technology Incorporated | Image processing-system |
US4541116A (en) * | 1984-02-27 | 1985-09-10 | Environmental Research Institute Of Mi | Neighborhood image processing stage for implementing filtering operations |
US4860374A (en) * | 1984-04-19 | 1989-08-22 | Nikon Corporation | Apparatus for detecting position of reference pattern |
US4688088A (en) * | 1984-04-20 | 1987-08-18 | Canon Kabushiki Kaisha | Position detecting device and method |
EP0163885A1 (en) * | 1984-05-11 | 1985-12-11 | Siemens Aktiengesellschaft | Segmentation device |
JPS6134583A (en) * | 1984-07-26 | 1986-02-18 | シャープ株式会社 | Lighting apparatus |
US4953224A (en) * | 1984-09-27 | 1990-08-28 | Hitachi, Ltd. | Pattern defects detection method and apparatus |
JPS6180222A (en) * | 1984-09-28 | 1986-04-23 | Asahi Glass Co Ltd | Method and apparatus for adjusting spectacles |
WO1986003866A1 (en) * | 1984-12-14 | 1986-07-03 | Sten Hugo Nils Ahlbom | Image processing device |
US4876728A (en) * | 1985-06-04 | 1989-10-24 | Adept Technology, Inc. | Vision system for distinguishing touching parts |
US4831580A (en) * | 1985-07-12 | 1989-05-16 | Nippon Electric Industry Co., Ltd. | Program generator |
US4617619A (en) * | 1985-10-02 | 1986-10-14 | American Sterilizer Company | Reflector for multiple source lighting fixture |
US4742551A (en) * | 1985-10-07 | 1988-05-03 | Fairchild Camera & Instrument Corporation | Multistatistics gatherer |
US4706168A (en) * | 1985-11-15 | 1987-11-10 | View Engineering, Inc. | Systems and methods for illuminating objects for vision systems |
US4860375A (en) * | 1986-03-10 | 1989-08-22 | Environmental Research Inst. Of Michigan | High speed cellular processing system |
US4728195A (en) * | 1986-03-19 | 1988-03-01 | Cognex Corporation | Method for imaging printed circuit board component leads |
GB8608431D0 (en) * | 1986-04-07 | 1986-05-14 | Crosfield Electronics Ltd | Monitoring digital image processing equipment |
US4783828A (en) * | 1986-06-02 | 1988-11-08 | Honeywell Inc. | Two-dimensional object recognition using chain codes, histogram normalization and trellis algorithm |
US4771469A (en) * | 1986-06-30 | 1988-09-13 | Honeywell Inc. | Means and method of representing an object shape by hierarchical boundary decomposition |
US4783826A (en) * | 1986-08-18 | 1988-11-08 | The Gerber Scientific Company, Inc. | Pattern inspection system |
US4724330A (en) * | 1986-09-24 | 1988-02-09 | Xerox Corporation | Self aligning raster input scanner |
US4955062A (en) * | 1986-12-10 | 1990-09-04 | Canon Kabushiki Kaisha | Pattern detecting method and apparatus |
US4972359A (en) * | 1987-04-03 | 1990-11-20 | Cognex Corporation | Digital image processing system |
US4764870A (en) * | 1987-04-09 | 1988-08-16 | R.A.P.I.D., Inc. | System and method for remote presentation of diagnostic image information |
US4982438A (en) * | 1987-06-02 | 1991-01-01 | Hitachi, Ltd. | Apparatus and method for recognizing three-dimensional shape of object |
CA1318977C (en) * | 1987-07-22 | 1993-06-08 | Kazuhito Hori | Image recognition system |
JPH07120385B2 (en) * | 1987-07-24 | 1995-12-20 | シャープ株式会社 | Optical reading method |
JP2630605B2 (en) * | 1987-07-29 | 1997-07-16 | 三菱電機株式会社 | Curved surface creation method |
US4903218A (en) * | 1987-08-13 | 1990-02-20 | Digital Equipment Corporation | Console emulation for a graphics workstation |
US5119435A (en) * | 1987-09-21 | 1992-06-02 | Kulicke And Soffa Industries, Inc. | Pattern recognition apparatus and method |
US4907169A (en) * | 1987-09-30 | 1990-03-06 | International Technical Associates | Adaptive tracking vision and guidance system |
US5081656A (en) * | 1987-10-30 | 1992-01-14 | Four Pi Systems Corporation | Automated laminography system for inspection of electronics |
US5287449A (en) * | 1987-11-06 | 1994-02-15 | Hitachi, Ltd. | Automatic program generation method with a visual data structure display |
JPH01160158A (en) * | 1987-12-17 | 1989-06-23 | Murata Mach Ltd | Mechanical control system at remote location |
JP2622573B2 (en) * | 1988-01-27 | 1997-06-18 | キヤノン株式会社 | Mark detection apparatus and method |
JP2739130B2 (en) * | 1988-05-12 | 1998-04-08 | 株式会社鷹山 | Image processing method |
JP2541631B2 (en) * | 1988-07-26 | 1996-10-09 | ファナック株式会社 | CNC remote diagnosis method |
US5046190A (en) * | 1988-09-06 | 1991-09-03 | Allen-Bradley Company, Inc. | Pipeline image processor |
US4975972A (en) * | 1988-10-18 | 1990-12-04 | At&T Bell Laboratories | Method and apparatus for surface inspection |
US5054096A (en) * | 1988-10-24 | 1991-10-01 | Empire Blue Cross/Blue Shield | Method and apparatus for converting documents into electronic data for transaction processing |
US4876457A (en) * | 1988-10-31 | 1989-10-24 | American Telephone And Telegraph Company | Method and apparatus for differentiating a planar textured surface from a surrounding background |
US4932065A (en) * | 1988-11-16 | 1990-06-05 | Ncr Corporation | Universal character segmentation scheme for multifont OCR images |
NL8803112A (en) * | 1988-12-19 | 1990-07-16 | Elbicon Nv | METHOD AND APPARATUS FOR SORTING A FLOW OF ARTICLES DEPENDING ON OPTICAL PROPERTIES OF THE ARTICLES. |
IL89484A (en) * | 1989-03-03 | 1992-08-18 | Nct Ltd Numerical Control Tech | System for automatic finishing of machined parts |
EP0385009A1 (en) | 1989-03-03 | 1990-09-05 | Hewlett-Packard Limited | Apparatus and method for use in image processing |
US5081689A (en) * | 1989-03-27 | 1992-01-14 | Hughes Aircraft Company | Apparatus and method for extracting edges and lines |
US5153925A (en) * | 1989-04-27 | 1992-10-06 | Canon Kabushiki Kaisha | Image processing apparatus |
US5060276A (en) * | 1989-05-31 | 1991-10-22 | At&T Bell Laboratories | Technique for object orientation detection using a feed-forward neural network |
US5253309A (en) * | 1989-06-23 | 1993-10-12 | Harmonic Lightwaves, Inc. | Optical distribution of analog and digital signals using optical modulators with complementary outputs |
DE3923449A1 (en) * | 1989-07-15 | 1991-01-24 | Philips Patentverwaltung | METHOD FOR DETERMINING EDGES IN IMAGES |
US5432525A (en) | 1989-07-26 | 1995-07-11 | Hitachi, Ltd. | Multimedia telemeeting terminal device, terminal device system and manipulation method thereof |
US5063608A (en) * | 1989-11-03 | 1991-11-05 | Datacube Inc. | Adaptive zonal coder |
JP3092809B2 (en) * | 1989-12-21 | 2000-09-25 | 株式会社日立製作所 | Inspection method and inspection apparatus having automatic creation function of inspection program data |
US5164994A (en) * | 1989-12-21 | 1992-11-17 | Hughes Aircraft Company | Solder joint locator |
JPH03210679A (en) * | 1990-01-12 | 1991-09-13 | Hiyuutec:Kk | Method and device for pattern matching |
US5271068A (en) * | 1990-03-15 | 1993-12-14 | Sharp Kabushiki Kaisha | Character recognition device which divides a single character region into subregions to obtain a character code |
US5151951A (en) * | 1990-03-15 | 1992-09-29 | Sharp Kabushiki Kaisha | Character recognition device which divides a single character region into subregions to obtain a character code |
US5495424A (en) | 1990-04-18 | 1996-02-27 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for inspecting solder portions |
US4959898A (en) * | 1990-05-22 | 1990-10-02 | Emhart Industries, Inc. | Surface mount machine with lead coplanarity verifier |
JP2690603B2 (en) | 1990-05-30 | 1997-12-10 | ファナック株式会社 | Vision sensor calibration method |
US5243607A (en) * | 1990-06-25 | 1993-09-07 | The Johns Hopkins University | Method and apparatus for fault tolerance |
JPH0475183A (en) * | 1990-07-17 | 1992-03-10 | Mitsubishi Electric Corp | Correlativity detector for image |
JP2865827B2 (en) | 1990-08-13 | 1999-03-08 | 株式会社日立製作所 | Data storage method in conference system |
US5206820A (en) * | 1990-08-31 | 1993-04-27 | At&T Bell Laboratories | Metrology system for analyzing panel misregistration in a panel manufacturing process and providing appropriate information for adjusting panel manufacturing processes |
JP2591292B2 (en) * | 1990-09-05 | 1997-03-19 | 日本電気株式会社 | Image processing device and automatic optical inspection device using it |
US5388252A (en) | 1990-09-07 | 1995-02-07 | Eastman Kodak Company | System for transparent monitoring of processors in a network with display of screen images at a remote station for diagnosis by technical support personnel |
US5115309A (en) * | 1990-09-10 | 1992-05-19 | At&T Bell Laboratories | Method and apparatus for dynamic channel bandwidth allocation among multiple parallel video coders |
US5168269A (en) * | 1990-11-08 | 1992-12-01 | Norton-Lambert Corp. | Mouse driven remote communication system |
US5327156A (en) | 1990-11-09 | 1994-07-05 | Fuji Photo Film Co., Ltd. | Apparatus for processing signals representative of a computer graphics image and a real image including storing processed signals back into internal memory |
US5086478A (en) * | 1990-12-27 | 1992-02-04 | International Business Machines Corporation | Finding fiducials on printed circuit boards to sub pixel accuracy |
US5091968A (en) * | 1990-12-28 | 1992-02-25 | Ncr Corporation | Optical character recognition system and method |
US5133022A (en) * | 1991-02-06 | 1992-07-21 | Recognition Equipment Incorporated | Normalizing correlator for video processing |
TW198107B (en) | 1991-02-28 | 1993-01-11 | Ibm | |
JP3175175B2 (en) * | 1991-03-01 | 2001-06-11 | ミノルタ株式会社 | Focus detection device |
US5143436A (en) * | 1991-03-06 | 1992-09-01 | The United States Of America As Represented By The United States Department Of Energy | Ringlight for use in high radiation |
US5265173A (en) * | 1991-03-20 | 1993-11-23 | Hughes Aircraft Company | Rectilinear object image matcher |
CA2072198A1 (en) | 1991-06-24 | 1992-12-25 | Scott C. Farrand | Remote console emulator for computer system manager |
JP2700965B2 (en) | 1991-07-04 | 1998-01-21 | ファナック株式会社 | Automatic calibration method |
DE4222804A1 (en) | 1991-07-10 | 1993-04-01 | Raytheon Co | Automatic visual tester for electrical and electronic components - performs video scans of different surfaces with unequal intensities of illumination by annular and halogen lamps |
US5477138A (en) | 1991-07-23 | 1995-12-19 | Vlsi Technology, Inc. | Apparatus and method for testing the calibration of a variety of electronic package lead inspection systems |
DE69222102T2 (en) | 1991-08-02 | 1998-03-26 | Grass Valley Group | Operator interface for video editing system for the display and interactive control of video material |
US5297238A (en) * | 1991-08-30 | 1994-03-22 | Cimetrix Incorporated | Robot end-effector terminal control frame (TCF) calibration method and device |
US5475766A (en) | 1991-09-05 | 1995-12-12 | Kabushiki Kaisha Toshiba | Pattern inspection apparatus with corner rounding of reference pattern data |
JPH0568243A (en) * | 1991-09-09 | 1993-03-19 | Hitachi Ltd | Variable length coding controlling system |
FR2683340A1 (en) | 1991-11-05 | 1993-05-07 | Sgs Thomson Microelectronics | Squaring circuit for binary numbers |
US5315388A (en) * | 1991-11-19 | 1994-05-24 | General Instrument Corporation | Multiple serial access memory for use in feedback systems such as motion compensated television |
US5159281A (en) * | 1991-11-20 | 1992-10-27 | Nsi Partners | Digital demodulator using numerical processor to evaluate period measurements |
US5145432A (en) * | 1991-11-27 | 1992-09-08 | Zenith Electronics Corporation | Optical interprogation system for use in constructing flat tension shadow mask CRTS |
US5299269A (en) * | 1991-12-20 | 1994-03-29 | Eastman Kodak Company | Character segmentation using an associative memory for optical character recognition |
US5216503A (en) * | 1991-12-24 | 1993-06-01 | General Instrument Corporation | Statistical multiplexer for a multichannel image compression system |
JP3073599B2 (en) | 1992-04-22 | 2000-08-07 | 本田技研工業株式会社 | Image edge detection device |
US5594859A (en) | 1992-06-03 | 1997-01-14 | Digital Equipment Corporation | Graphical user interface for video teleconferencing |
GB2270581A (en) | 1992-09-15 | 1994-03-16 | Ibm | Computer workstation |
US5367667A (en) | 1992-09-25 | 1994-11-22 | Compaq Computer Corporation | System for performing remote computer system diagnostic tests |
US5367439A (en) | 1992-12-24 | 1994-11-22 | Cognex Corporation | System for frontal illumination |
US5583956A (en) | 1993-01-12 | 1996-12-10 | The Board Of Trustees Of The Leland Stanford Junior University | Estimation of skew angle in text image |
US5608872A (en) | 1993-03-19 | 1997-03-04 | Ncr Corporation | System for allowing all remote computers to perform annotation on an image and replicating the annotated image on the respective displays of other computers |
US5481712A (en) | 1993-04-06 | 1996-01-02 | Cognex Corporation | Method and apparatus for interactively generating a computer program for machine vision analysis of an object |
JPH06325181A (en) | 1993-05-17 | 1994-11-25 | Mitsubishi Electric Corp | Pattern recognizing method |
US5455933A (en) | 1993-07-14 | 1995-10-03 | Dell Usa, L.P. | Circuit and method for remote diagnosis of personal computers |
EP0638801B1 (en) | 1993-08-12 | 1998-12-23 | International Business Machines Corporation | Method of inspecting the array of balls of an integrated circuit module |
US5532739A (en) | 1993-10-06 | 1996-07-02 | Cognex Corporation | Automated optical inspection apparatus |
US5640199A (en) | 1993-10-06 | 1997-06-17 | Cognex Corporation | Automated optical inspection apparatus |
CA2113752C (en) | 1994-01-19 | 1999-03-02 | Stephen Michael Rooks | Inspection system for cross-sectional imaging |
US5519840A (en) | 1994-01-24 | 1996-05-21 | At&T Corp. | Method for implementing approximate data structures using operations on machine words |
US5583954A (en) | 1994-03-01 | 1996-12-10 | Cognex Corporation | Methods and apparatus for fast correlation |
US5526050A (en) | 1994-03-31 | 1996-06-11 | Cognex Corporation | Methods and apparatus for concurrently acquiring video data from multiple video data sources |
US5550763A (en) | 1994-05-02 | 1996-08-27 | Michael; David J. | Using cone shaped search models to locate ball bonds on wire bonded devices |
US5613013A (en) | 1994-05-13 | 1997-03-18 | Reticula Corporation | Glass patterns in image alignment and analysis |
US5557410A (en) | 1994-05-26 | 1996-09-17 | Lockheed Missiles & Space Company, Inc. | Method of calibrating a three-dimensional optical measurement system |
US5495537A (en) | 1994-06-01 | 1996-02-27 | Cognex Corporation | Methods and apparatus for machine vision template matching of images predominantly having generally diagonal and elongate features |
US5602937A (en) | 1994-06-01 | 1997-02-11 | Cognex Corporation | Methods and apparatus for machine vision high accuracy searching |
US5640200A (en) | 1994-08-31 | 1997-06-17 | Cognex Corporation | Golden template comparison using efficient image registration |
US5574668A (en) | 1995-02-22 | 1996-11-12 | Beaty; Elwin M. | Apparatus and method for measuring ball grid arrays |
US5566877A (en) | 1995-05-01 | 1996-10-22 | Motorola Inc. | Method for inspecting a semiconductor device |
US5846318A (en) | 1997-07-17 | 1998-12-08 | Memc Electric Materials, Inc. | Method and system for controlling growth of a silicon crystal |
1996
- 1996-10-07 US US08/726,521 patent/US6137893A/en not_active Expired - Lifetime

1997
- 1997-10-07 JP JP51943098A patent/JP2001524228A/en active Pending
- 1997-10-07 EP EP97910863A patent/EP0883857A2/en not_active Withdrawn
- 1997-10-07 WO PCT/US1997/018268 patent/WO1998018117A2/en not_active Application Discontinuation
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5027419A (en) * | 1989-03-31 | 1991-06-25 | Atomic Energy Of Canada Limited | Optical images by quadrupole convolution |
US5113565A (en) * | 1990-07-06 | 1992-05-19 | International Business Machines Corp. | Apparatus and method for inspection and alignment of semiconductor chips and conductive lead frames |
US5179419A (en) * | 1991-11-22 | 1993-01-12 | At&T Bell Laboratories | Methods of detecting, classifying and quantifying defects in optical fiber end faces |
US5371690A (en) * | 1992-01-17 | 1994-12-06 | Cognex Corporation | Method and apparatus for inspection of surface mounted devices |
US5553859A (en) * | 1995-03-22 | 1996-09-10 | Lazer-Tron Corporation | Arcade game for sensing and validating objects |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6903726B1 (en) | 1999-03-24 | 2005-06-07 | Anjowiggins Papiers Couches | Method and system for determining positions on a document |
DE19921778A1 (en) * | 1999-03-24 | 2000-10-05 | Anitra Medienprojekte Gmbh | Process, support for samples and reading device for two-dimensional position determination on surfaces and the associated triggering of program processes |
US6384907B1 (en) | 1999-07-08 | 2002-05-07 | Bae Systems Plc | Optical target and apparatus and method for automatic identification thereof |
WO2001033504A1 (en) * | 1999-10-29 | 2001-05-10 | Cognex Corporation | Method and apparatus for locating objects using universal alignment targets |
US7043055B1 (en) | 1999-10-29 | 2006-05-09 | Cognex Corporation | Method and apparatus for locating objects using universal alignment targets |
US6671049B1 (en) | 1999-10-29 | 2003-12-30 | Cognex Corporation | Article of manufacture bearing a universal alignment target |
US7492476B1 (en) | 1999-11-23 | 2009-02-17 | Canon Kabushiki Kaisha | Image processing apparatus |
WO2001039124A2 (en) * | 1999-11-23 | 2001-05-31 | Canon Kabushiki Kaisha | Image processing apparatus |
WO2001039124A3 (en) * | 1999-11-23 | 2002-05-10 | Canon Kk | Image processing apparatus |
US7079679B2 (en) | 2000-09-27 | 2006-07-18 | Canon Kabushiki Kaisha | Image processing apparatus |
US7620234B2 (en) | 2000-10-06 | 2009-11-17 | Canon Kabushiki Kaisha | Image processing apparatus and method for generating a three-dimensional model of an object from a collection of images of the object recorded at different viewpoints and segmented using semi-automatic segmentation techniques |
US7120289B2 (en) | 2000-10-27 | 2006-10-10 | Canon Kabushiki Kaisha | Image generation method and apparatus |
US7545384B2 (en) | 2000-10-27 | 2009-06-09 | Canon Kabushiki Kaisha | Image generation method and apparatus |
US6912490B2 (en) | 2000-10-27 | 2005-06-28 | Canon Kabushiki Kaisha | Image processing apparatus |
US6975326B2 (en) | 2001-11-05 | 2005-12-13 | Canon Europa N.V. | Image processing apparatus |
US7561164B2 (en) | 2002-02-28 | 2009-07-14 | Canon Europa N.V. | Texture map editing |
US7773773B2 (en) | 2006-10-18 | 2010-08-10 | Ut-Battelle, Llc | Method and system for determining a volume of an object from two-dimensional images |
US8401241B2 (en) | 2008-10-17 | 2013-03-19 | Honda Motor Co., Ltd. | Structure and motion with stereo using lines |
WO2010044939A1 (en) * | 2008-10-17 | 2010-04-22 | Honda Motor Co., Ltd. | Structure and motion with stereo using lines |
EP2437495A4 (en) * | 2009-05-27 | 2013-06-12 | Aisin Seiki | Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus |
EP2437495A1 (en) * | 2009-05-27 | 2012-04-04 | Aisin Seiki Kabushiki Kaisha | Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus |
US8605156B2 (en) | 2009-05-27 | 2013-12-10 | Aisin Seiki Kabushiki Kaisha | Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus |
US8855406B2 (en) | 2010-09-10 | 2014-10-07 | Honda Motor Co., Ltd. | Egomotion using assorted features |
CN102288606A (en) * | 2011-05-06 | 2011-12-21 | 山东农业大学 | Pollen viability measuring method based on machine vision |
TWI507678B (en) * | 2014-04-09 | 2015-11-11 | Inventec Energy Corp | Device and method for identifying an object |
WO2023141903A1 (en) * | 2022-01-27 | 2023-08-03 | Cognex Vision Inspection System (Shanghai) Co., Ltd. | Easy line finder based on dynamic time warping method |
Also Published As
Publication number | Publication date |
---|---|
EP0883857A2 (en) | 1998-12-16 |
JP2001524228A (en) | 2001-11-27 |
WO1998018117A3 (en) | 1998-07-02 |
US6137893A (en) | 2000-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0883857A2 (en) | Machine vision calibration targets and methods of determining their location and orientation in an image | |
US6501554B1 (en) | 3D scanner and method for measuring heights and angles of manufactured parts | |
JP2835274B2 (en) | Image recognition device | |
US9704232B2 (en) | Stereo vision measurement system and method | |
US10475179B1 (en) | Compensating for reference misalignment during inspection of parts | |
US7804586B2 (en) | Method and system for image processing for profiling with uncoded structured light | |
EP2322899A1 (en) | Specimen roughness detecting method, and apparatus for the method | |
US7171036B1 (en) | Method and apparatus for automatic measurement of pad geometry and inspection thereof | |
CN106017313B (en) | Edge detection deviation correction value calculation method, edge detection deviation correction method and device | |
US6898333B1 (en) | Methods and apparatus for determining the orientation of an object in an image | |
Li et al. | Stereo vision based automated solder ball height and substrate coplanarity inspection | |
US6813377B1 (en) | Methods and apparatuses for generating a model of an object from an image of the object | |
CN116148277B (en) | Three-dimensional detection method, device and equipment for defects of transparent body and storage medium | |
Frobin et al. | Automatic Measurement of body surfaces using rasterstereograph | |
US20040151363A1 (en) | Choice of reference markings for enabling fast estimating of the position of an imaging device | |
CN115375610A (en) | Detection method and device, detection equipment and storage medium | |
Shiau et al. | Study of a measurement algorithm and the measurement loss in machine vision metrology | |
JPH0933227A (en) | Discrimination method for three-dimensional shape | |
CN114037705B (en) | Metal fracture fatigue source detection method and system based on moire lines | |
JPS63229311A (en) | Detection of cross-sectional shape | |
Kainz et al. | Estimation of camera intrinsic matrix parameters and its utilization in the extraction of dimensional units | |
Li | Locally Adaptive Stereo Vision Based 3D Visual Reconstruction | |
JP2004318488A (en) | Product inspection method and product inspection device | |
Leon | Determination Of The Coverage Of Shot-peened Surfaces. | |
JPH0334801B2 (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A2; Designated state(s): CA JP KR SG |
AL | Designated countries for regional patents | Kind code of ref document: A2; Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
ENP | Entry into the national phase | Ref country code: JP; Ref document number: 1998 519430; Kind code of ref document: A; Format of ref document f/p: F |
AK | Designated states | Kind code of ref document: A3; Designated state(s): CA JP KR SG |
AL | Designated countries for regional patents | Kind code of ref document: A3; Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
WWE | Wipo information: entry into national phase | Ref document number: 1997910863; Country of ref document: EP |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWP | Wipo information: published in national office | Ref document number: 1997910863; Country of ref document: EP |
WWW | Wipo information: withdrawn in national office | Ref document number: 1997910863; Country of ref document: EP |