USRE36656E - Generalized shape autocorrelation for shape acquisition and recognition - Google Patents

Generalized shape autocorrelation for shape acquisition and recognition

Info

Publication number
USRE36656E
Authority
US
United States
Prior art keywords
local
shape
shapes
global
entries
Legal status
Expired - Lifetime
Application number
US08/719,496
Inventor
Andrea Califano
Rakesh Mohan
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US08/719,496
Application granted
Publication of USRE36656E

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features

Definitions

  • a local shape 112 is shown in FIG. 1.
  • the edges of the image along with the locations of the edges are considered and local shapes are computed.
  • the location of the local shapes is recorded.
  • local shape descriptors Ψ1 -Ψn are derived, as indicated at reference numeral 106 of FIG. 1. The procedure for detecting local shapes will be more fully discussed in regard to FIGS. 5 and 6 hereinafter.
  • a tuple index η.sub.Λ and a corresponding tuple entry ζ.sub.Λ are derived for the global shape table 108 of FIG. 1.
  • This procedure is implemented via the FOR LOOP enclosed by blocks 414 and 426, which is analogous to the FOR LOOP enclosed by blocks 314 and 326 in FIG. 3.
  • the index η.sub.Λ of FIG. 4 has more parameters than the index η.sub.Ψ of FIG. 3.
  • index η.sub.Λ is computed and is specified as follows: η.sub.Λ =<s1 /S, s2 /S, α1, α2, ψ1, ψ2, ψ3 > (this index is also illustrated in the consolidated sketch following this list).
  • the index η.sub.Λ comprises three additional parameters, namely, ψ1, ψ2, ψ3, which are essentially the identification of the LSD triplet.
  • the LSD triplets form a triangle with sides s1, s2, s3, as shown in FIG. 2.
  • the two angles α1 and α2 describe the 1st order properties of the local shape with respect to the given triplet.
  • any properties higher than 0th order are not used, because these properties are more noise sensitive and could bias the indices for all possible LSD triangles generated for this feature.
  • the tuple entry ζ.sub.Λ, corresponding to the index η.sub.Λ, is computed as follows: ζ.sub.Λ =<λ, x.sub.T, y.sub.T, ρ, α.sub.T, γ >
  • the tuple <λ, xT, yT, ρ, αT, γ > is inserted into the global shape table 108 at the location indexed by the index η.sub.Λ.
  • after each index η.sub.Λ and corresponding entry ζ.sub.Λ are computed for each LSD triplet, as shown in a block 418, the foregoing tuples are compared with other indices and entries in the global shape table 108 to determine whether they are redundant. First, it is determined whether the table 108 already has an index η.sub.Λ.
  • If so, the parameters λ, xT, yT, ρ, αT of the entry ζ.sub.Λ are compared with the like parameters in the already-existing entry. If these parameters match, then the support mechanism γ (initially set to one) of the matching entry ζ.sub.Λ is accorded a vote, as indicated in a block 420.
  • Otherwise, the entry ζ.sub.Λ is entered at the location addressed by index η.sub.Λ in the table 108, as shown in a block 424.
  • FIG. 5 illustrates a high level Flowchart 500 showing the overall recognition methodology.
  • FIGS. 6-9 comprise low level Flowcharts 600-900 describing in detail respective blocks 506-509 of FIG. 5.
  • edges of an image to be recognized are extracted from a scene. This procedure is well known in the art.
  • a "local feature set” is derived.
  • the set comprises the LSDs Ψ1 -ΨN for local shapes of the image.
  • a local shape is shown as reference numeral 112 in FIG. 1.
  • the local feature set {Ψ1 -ΨN } generated in the previous step is filtered using any conventional filtering technique.
  • the resultant filtered set of LSDs is much smaller than the original set of LSDs Ψ1 -ΨN .
  • the resultant filtered set is illustrated at reference numeral 106 in FIG. 1.
  • the best first technique is utilized to filter the LSDs, which technique is well known in the art.
  • any conventional filtering technique could be utilized, such as the constraint satisfaction technique.
  • the specific implementation of this technique is shown in Flowchart 700, entitled "Filtering Local Shape Descriptors", of FIG. 7. Due to the well known nature and applicability of this technique, no discussion of the Flowchart 700 is undertaken.
  • the model hypothesis set {Λ1 -ΛN } generated in the previous step is filtered using any conventional filtering technique.
  • the resultant filtered set is much smaller than the original set.
  • the best first technique is again utilized for filtering.
  • the specific implementation of this technique is shown in Flowchart 900, entitled “Filtering Object Shape Hypotheses", of FIG. 9. Due to the well known nature and applicability of this technique, no discussion of the Flowchart 900 is undertaken.
  • the model hypotheses which survive the foregoing filtering, if any, are adopted as the shapes which are recognized in the scene. Thus, recognition has occurred.
  • FIG. 6 illustrates an exemplary Flowchart 600 for the extraction of local shapes during the recognition phase.
  • a local feature set is derived in FIG. 6.
  • the set comprises the LSDs Ψ1 -ΨN corresponding with local shapes of the image.
  • a local shape 112 is shown in FIG. 1.
  • N interest points p of an image are extracted from a scene using any of the numerous conventional techniques.
  • the location of each interest point p is identified by orthogonal coordinates (x,y). This procedure is well known in the art.
  • a temporary parameter space Σ.sub.Ψ (a hash table) is generated for use in the succeeding steps of the Flowchart 600.
  • a FOR LOOP enclosed by blocks 604 and 632 commences for each interest point pi of {p}.
  • the parameter space Σ.sub.Ψ is first cleared, as indicated in block 606.
  • a neighborhood W.sub.Ψ on the contour around pi is selected.
  • triplets of interest points will be selected from the neighborhood W.sub.Ψ.
  • the neighborhood W.sub.Ψ prevents the interest points in triplets from being undesirably scattered all over the image.
  • edges are linked into contours using simple "eight neighbors connectivity.”
  • the sampling step is proportional to the length of the longest symmetrical interval around the point pi for which the tangent behavior is monotonic on a coarse scale, and the total tangent variation is less than 2π/3. It is also inversely proportional to the total variation in tangent angle on the symmetric interval if less than 2π/3.
  • the index η.sub.Ψ =<s1 /S, s2 /S, α1, α2 >, described herein in the section entitled, "Local Shape Table Creation", is computed for each triplet.
  • Next, computations are performed to generate a local shape descriptor hypothesis Ψ, as indicated in block 620. Specifically, from the ρ and αT of the tuple entry ζ.sub.Ψ, the possible scale and orientation of the contour are computed. Note that the location of the contour is at pi, and accordingly, the location of the contour need not be computed as with global shapes. Thus, only scale and rotation information need be computed.
  • the parameter space Σ.sub.Ψ is examined to determine whether an LSD hypothesis Ψ already exists in the parameter space Σ.sub.Ψ for the contour at issue. If the LSD hypothesis Ψ exists, then the existing LSD hypothesis Ψ is merely updated by appending points pi, pj, pk to it. If the LSD hypothesis Ψ does not exist, then the entry ζ.sub.Ψ for Ψ having the three points pi, pj, pk is created in the parameter space Σ.sub.Ψ.
  • an LSD hypothesis Ψ will have a high vote count, based upon votes accorded it by the points pi, pj, pk. These points along with the corresponding LSD hypothesis Ψ belong to the particular contour at issue.
  • the LSD hypothesis Ψ is adopted as the local shape descriptor Ψ for pi.
  • the local feature set {Ψ1 -ΨN } is filtered using any conventional filtering technique as discussed previously.
  • a conventional best first technique is utilized and is illustrated at Flowchart 700 of FIG. 7.
  • the Flowchart 700 is self explanatory.
  • the resultant filtered set of LSDs, {Ψ}selected, is much smaller than the original set of LSDs Ψ1 -ΨN .
  • the resultant filtered set is illustrated at reference numeral 106 in FIG. 1.
  • FIG. 8 shows a Flowchart 800 for recognizing an object based upon the local feature set {Ψ}selected, comprising the filtered LSDs Ψ.
  • Flowchart 800 proceeds very similarly with respect to Flowchart 600. Accordingly, similar steps within referenced blocks are denoted with similar reference numerals in the Flowcharts 800 and 600 for easy comparison. However, some differences in methodology do exist.
  • In Flowchart 800, the focus is upon local curve shapes, not upon interest points as in Flowchart 600. Further, triplets in Flowchart 800 are formed from LSDs Ψ, not with interest points as in Flowchart 600. Finally, the global shape table 108 is consulted during the methodology of Flowchart 800, not the local shape table 104 as in Flowchart 600. Note that the index η.sub.Λ corresponding with the global shape table 108 is larger than the index η.sub.Ψ corresponding with the local shape table 104.
  • the N filtered LSDs Ψ corresponding to the image are acquired.
  • a temporary parameter space Σ.sub.Λ (a hash table) is generated for use in the succeeding steps of the Flowchart 800.
  • a FOR LOOP enclosed by blocks 804 and 832 commences for each LSD Ψi of the local feature set {Ψ}selected.
  • the parameter space Σ.sub.Λ is first cleared, as indicated in block 806.
  • triplets of LSDs Ψ are derived. All possible combinations are formed of the LSD Ψi in conjunction with two others Ψj and Ψk from the local feature set {Ψ}selected. These combinations are made in FOR LOOPs 810,828 (Ψj) and 812,826 (Ψk), as shown. At block 814, it is ensured that the same LSD is not selected twice when constructing the triplet combinations.
  • the index η.sub.Λ =<s1 /S, s2 /S, α1, α2, ψi, ψj, ψk >, described herein in the section entitled, "Global Shape Table Creation", is computed for each LSD triplet.
  • the parameter space Σ.sub.Λ is examined to determine whether an object shape hypothesis Λ already exists in the parameter space Σ.sub.Λ. If the object shape hypothesis Λ already exists, then the existing object shape hypothesis Λ is merely updated by appending (Ψi, Ψj, Ψk) to it and incrementing its vote. If the object shape hypothesis Λ does not exist, then the object shape instance Λ having the triplet Ψi, Ψj, Ψk is created in the parameter space Σ.sub.Λ.
  • the object shape hypotheses Λ having votes γ above a preset threshold γ0 are adopted. For each Λ, there exists a unique model label λ, a support γ, and a supporting LSD list {Ψ}supp (see the consolidated sketch following this list).
  • Flowchart 900 is self-explanatory.
  • the resultant filtered set of object shape hypotheses, {Λ}selected, is illustrated at reference numeral 110 in FIG. 1.
  • the {Λ}selected constitute the object shapes recognized and located in the image.
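The bullets above outline the full recognition loop of Flowcharts 800-900. The following minimal Python sketch consolidates that loop under stated assumptions: the global shape table is a dictionary from quantized index tuples to entry lists, and the helpers make_index, registration, and quantize (hypothetical names, not from the patent) stand in for the FIG. 2 geometry and the quantization steps, which the patent leaves to the figures.

```python
from collections import defaultdict
from itertools import combinations

def recognize(lsds, global_table, make_index, registration, quantize, gamma0):
    """lsds: filtered LSDs {Psi}selected; global_table: {index: [entries]};
    make_index(ti, tj, tk) -> <s1/S, s2/S, alpha1, alpha2, psi_i, psi_j, psi_k>;
    registration(entry, triplet) -> pose phi; quantize(tuple) -> table bins."""
    sigma = defaultdict(lambda: {"votes": 0, "support": []})  # Sigma_Lambda
    for triplet in combinations(lsds, 3):                     # LSD triplets
        eta = quantize(make_index(*triplet))                  # index eta_Lambda
        for entry in global_table.get(eta, ()):
            lam = entry[0]                                    # model label lambda
            phi = registration(entry, triplet)                # pose of the model
            hyp = sigma[(lam, quantize(phi))]                 # hypothesis Lambda
            hyp["votes"] += 1                                 # support gamma
            hyp["support"].append(triplet)                    # supporting LSD list
    return {key: h for key, h in sigma.items() if h["votes"] > gamma0}
```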

Abstract

The invention provides automatic acquisition and recognition of complex visual shapes within images. During an acquisition phase, models are derived from interest points acquired from a target shape. The models are stored in and can be retrieved from a lookup table via high dimension indices. When an image is inputted, triplets of interest points in the image are used to compute local shape descriptors, which describe the geometry of local shapes in the image. In turn, triplets of local shape descriptors are used to compute high dimension indices. These indices are used for accessing the lookup table having the models. The models are used for the automatic recognition of target shapes.

Description

This application is a continuation of application Ser. No. 07/705,037, filed May 21, 1991, now abandoned.
TECHNICAL FIELD
The present invention relates generally to the recognition of complex 2D visual shapes, which could be derived from a viewing of 3D objects, and more particularly, to the automatic acquisition and recognition of complex visual shapes which are not easily modelled by conventional analytic techniques.
BACKGROUND ART
Automated systems for recognizing objects are increasingly being used in a large variety of technical fields, for example, biomedicine, cartography, metallurgy, industrial automation, and robotics. Moreover, many businesses are investing huge amounts of money and capital on the research and development of machine vision systems and related data processing systems which can automatically and accurately identify objects, often referred to as "target objects."
Automated object recognition systems are becoming more and more sophisticated as data processing techniques advance in sophistication. Most of the practical recognition systems in the conventional art employ methods involving the derivation of models to be used for recognizing objects in an image scene. An image scene in the context of this document is a two-dimensional (2D) representation, which for instance, could be derived from appearance data retrieved from a three-dimensional (3D) object at different viewpoints.
Furthermore, in most systems, data for deriving the models is inputted manually. However, in a few high-end automated systems, data can be learned via some sort of image capturing device, for example, a camera or scanner.
The model-based techniques have been conceptualized as having two phases: an object acquisition phase and a subsequent object recognition phase. More specifically, models of target objects are initially precompiled and stored during the acquisition phase, independently of the image scene. Then, the occurrences of these objects within an image scene are determined during the recognition phase by comparison of the sampled data to the stored models.
The task of recognizing objects in a scene is often complicated by rotation (vantage point), translation (placement), or scaling (size) of the object in a scene. In addition, the task may further be complicated by the partial concealment, or "occlusion", of a target object possibly caused by overlaps from other objects or some other adverse condition.
Some recognition systems employ "parametric" techniques, or mathematical parameter transforms. In parametric techniques, the spatial representation of an image in orthogonal coordinates is transformed into a representation based upon another coordinate system. Analysis of the image then takes place based upon the latter coordinate system. The methodology of using parametric techniques originated in U.S. Pat. No. 3,069,654 to Hough, involving the study of subatomic particles passing through a viewing field.
Many of the commonly used parametric techniques are interrelated and have evolved over years of experimentation since the Hough patent. For a general discussion in regard to the use of parametric techniques for shape identification, see D. H. Ballard, "Parameter nets: A theory of low level vision," Proceedings of the 7th International Joint Conference on Artificial Intelligence, pp. 1068-1078, August 1981.
Well known conventional parametric techniques include "alignment techniques", "Hough Transform techniques", and "geometric hashing". Although not particularly relevant to the present invention, alignment techniques are discussed in the following articles: D. P. Huttenlocher, S. Ullman, "Three-Dimensional Model Matching from an Unconstrained Viewpoint," Proceedings of the 1st International Conference on Computer Vision, pp. 102-111, London, 1987, and D. P. Huttenlocher, S. Ullman, "Recognizing Solid Objects by Alignment," Proceedings of the DARPA Image Understanding Workshop, vol. II, pp. 1114-1122, Cambridge, Massachusetts, April 1988. In regard to Hough Transform techniques, which also are not particularly relevant to the present invention, see D. H. Ballard, "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recognition, vol. 13(2), pp. 111-122, 1981.
Geometric hashing is an often favored technique and is considered proper background for the present invention. In geometric hashing, models of objects are represented by "interest points." An orthogonal coordinate system is defined based on an ordered pair of interest points, sometimes referred to as the "basis pair." For example, the first and second interest points could be identified respectively as ordered pairs (0,0) and (1,0). Next, all other interest points are represented by their coordinates in the coordinate system.
The foregoing representation allows for comparison of objects which have been rotated, translated, or scaled, to the interest points of the model. Furthermore, the representation permits reliable comparison of the model to occluded objects, because the point coordinates of the occluded object in the sampled scene have a partial overlap with the coordinates of the stored model, provided both the model and scene are represented in a coordinate system derived from the same basis pair. However, occlusion of one or more of the basis points will preclude recognition of the object.
To avoid such a condition, interest points are represented in all possible orthogonal coordinate systems which can be derived from all of the possible basis pairs of interest points. Each coordinate is used to identify an entry to a "hash table." In the hash table, a "record" is stored which comprises the particular basis pair along with an identification of the particular model at issue.
During the object recognition phase, interests points initially are extracted from a scene. An arbitrary ordered pair is selected and used as the first basis pair. The coordinates of all other interests points in the scene are computed utilizing this basis pair. Each computed coordinate is compared to the coordinate entries of the hash table. If a computed coordinate and respective record (model, basis pair) appears in the hash table, then a "vote" is accorded the model and the basis pair as corresponding to the ones in the scene. The votes are accumulated in a "bucket." When a certain record (model basis pair) gets a large number of votes, then the record, and corresponding model, is adopted for further analysis.
Using the record, the edges of the specified model are compared against the edges in the scene. If the edges correspond, then the object is considered matched to the model specified in the adopted record. If the edges do not match, then the current basis pair is discarded and a new basis pair is considered.
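As a concrete illustration of the scheme just described, the following Python sketch builds a hash table over all ordered basis pairs and accumulates votes for (model, basis) records. It is a minimal reading of conventional geometric hashing as summarized above, not the patent's own method, and the quantization step is an assumed parameter.

```python
from collections import defaultdict
from itertools import permutations

def basis_coords(p, b0, b1):
    """Coordinates of p in the frame that maps b0 to (0,0) and b1 to (1,0)."""
    ux, uy = b1[0] - b0[0], b1[1] - b0[1]     # basis x-axis
    d2 = ux * ux + uy * uy                    # squared basis length
    dx, dy = p[0] - b0[0], p[1] - b0[1]
    return ((dx * ux + dy * uy) / d2,         # component along the basis
            (dy * ux - dx * uy) / d2)         # component along its normal

def quantize(c, step=0.1):
    return tuple(round(v / step) for v in c)

def build_table(models):
    """models: {label: [points]} -> {quantized coord: [(label, basis pair)]}."""
    table = defaultdict(list)
    for label, pts in models.items():
        for b0, b1 in permutations(pts, 2):   # every ordered basis pair
            for p in pts:
                if p == b0 or p == b1:
                    continue
                table[quantize(basis_coords(p, b0, b1))].append((label, (b0, b1)))
    return table

def vote(table, scene_pts, b0, b1):
    """Accumulate votes for (model, basis) records for one scene basis pair."""
    buckets = defaultdict(int)
    for p in scene_pts:
        if p == b0 or p == b1:
            continue
        for record in table.get(quantize(basis_coords(p, b0, b1)), ()):
            buckets[record] += 1
    return buckets  # high-vote records proceed to edge verification
```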
For further discussions in regard to geometric hashing, consider: Y. Lamdan, H. J. Wolfson, "Geometric hashing: a general and efficient model-based recognition scheme," Proceedings of the 2nd International Conference on Computer Vision, December 1988; J. Hong, H. J. Wolfson, "An Improved Model-Based Matching Method using Footprints," Proceedings of the International Conference on Pattern Recognition, Rome, Italy, November 1988; Y. Lamdan, J. T. Schwartz, H. J. Wolfson, "On recognition of 3-D objects from 2-D images," Proceedings of the IEEE International Conference on Robotics and Automation, vol. 3, pp. 1407-1413, Philadelphia, April 1988; and, finally, A. Kalvin, E. Schonberg, J. T. Schwartz, M. Sharir, "Two Dimensional Model Based Boundary Matching Using Footprints," The International Journal of Robotics Research, vol. 5(4), pp. 38-55, 1986.
Although to some degree effective, conventional parametric techniques, and specifically, conventional geometric hashing for the identification of complex visual shapes remains burdensome and undesirably time consuming in many circumstances. Moreover, geometric hashing is often unreliable because the performance of geometric hashing degrades significantly with only very limited amounts of clutter or perturbation in the sampled data. Geometric hashing can also be unreliable due to extreme sensitivity to quantization parameters. Finally, geometric hashing has limited index/model selectivity, oftentimes improperly accumulates excessive votes in each vote bucket, and has a limited number of useful buckets available in the hash tables.
DISCLOSURE OF INVENTION
The present invention provides for the acquisition and recognition of complex visual shapes.
During an acquisition phase, interest points are acquired from a target shape. A set of high dimensional indices are then derived from the interest points. Each of the indices has seven dimensions or more. The indices describe collectively a set of interest points along with the related geometry. For example, the indices can encode translation, rotation, and scale information about the shape(s) to be recognized.
During a recognition phase, interest points are observed from an inputted image. Further, it is recognized whether the target shape resides in the image by considering the high dimension indices.
In a specific implementation of the present invention, during acquisition and recognition, triplets of interest points of a target shape are transformed into local shape descriptors, which identify the interest points along with rotation and scale information about the triplet. Then, other triplets of the local shape descriptors are formed and are converted into the high dimension indices, which identify the local shape descriptors and also encode the translation, rotation, and scale information about the shape(s) to be recognized.
FEATURES AND ADVANTAGES OF THE INVENTION
The present invention overcomes the deficiencies of the prior art, as noted above, and further provides for the following additional features and advantages.
Generally, the present invention provides for the time-efficient automatic acquisition and recognition of complex visual shapes, which are difficult to identify using conventional parametric techniques, including conventional geometric hashing. In other words, the present invention enhances both accuracy and speed, which have traditionally been traded off against each other.
The global indices corresponding to a coordinate of an interest point are highly dimensional (7 or more characteristic dimensions). Moreover, these indices are very selective and invariant of Euclidean transformations. The highly dimensional indices overcome problems in conventional geometric hashing, such as limited index/model selectivity, excessive accumulation of votes in each bucket, limited number of useful buckets in the hash tables, and the undesirable extreme sensitivity to quantization parameters.
A model data base is provided which exhibits the properties of robustness (stability with respect to data perturbations), generalization, and recall from partial descriptions.
The methodology of the present invention can be implemented on commercial and inexpensive conventional computers.
The present invention exhibits both supervised and unsupervised learning of perceptual concepts, such as symmetry and the hierarchical organization of models into subparts.
It should be noted the above list of advantages is not exhaustive. Further advantages of the present invention will become apparent to one skilled in the art upon examination of the following drawings and the detailed description. It is intended that any additional advantages be incorporated herein.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention, as defined in the claims, can be better understood with reference to the text and to the following drawings.
FIG. 1 graphically illustrates at a high level the system architecture and methodology of the present invention wherein a local shape table and a global shape table are used to implement a parametric technique;
FIG. 2 shows a set of points from which triplets are formed wherein said points are either interests points from an image or local shape descriptors;
FIG. 3 illustrates a Flowchart for the initialization phase of the present invention wherein the local shape table of FIG. 1 is created;
FIG. 4 shows a Flowchart for the acquisition phase of the present invention wherein the global shape table of FIG. 1 is created;
FIG. 5 illustrates a high level Flowchart for the recognition phase of the present invention wherein target shapes in an image are recognized based upon those shapes that were acquired as models in FIG. 4;
FIG. 6 shows a low level Flowchart for the extraction of local shape descriptors during the recognition phase of the present invention;
FIG. 7 illustrates a low level Flowchart for the filtering of the local shape descriptors of FIG. 6 during the recognition phase;
FIG. 8 shows a low level Flowchart for the recognition of object shapes based upon an analysis of the filtered local shape descriptors of FIG. 7 during the recognition phase; and
FIG. 9 illustrates a low level Flowchart for the filtering of the object shapes of FIG. 8 during the recognition phase.
BEST MODE OF CARRYING OUT THE INVENTION
Table Of Contents
A. Overview
B. Initialization--Local Shape Table Creation
C. Acquisition--Global Shape Table Creation
D. Recognition
1. Local Shape Descriptor Extraction
2. Object Shape Recognition
A. Overview
The present invention presents a parametric technique, i.e., using parameter transforms for the initial acquisition and then subsequent recognition of complex arbitrary shapes in an image. The present invention envisions three phases for operation: (1) a one-time initialization phase, (2) an acquisition phase, and (3) a recognition phase. An overview of the phases is described below with reference to FIG. 1.
During the initialization phase, a local shape table 104 of FIG. 1 is created by inputting local shapes to be recognized during the recognition phase. After creation, the table 104 thereafter serves as a lookup table for local shapes during the recognition phase.
The local shape table 104 is structured to have an index η.sub.Ψ for addressing a set of entries {ζ.sub.Ψ }. Each index η.sub.Ψ and each entry ζ.sub.Ψ are in the form of tuples. A "tuple" refers to a listing of elements or variables having a specific ordering. Tuples are denoted by the brackets "<. . . >" in the document.
The index η.sub.Ψ and entry ζ.sub.Ψ are derived by analyzing a "K-plet", a collection of K interest points, which could exist in an image, along with their associated local geometric properties, such as rotation and scale. In the preferred embodiment, "triplets" (K=3) of interest points are analyzed. During the initialization phase, the interest points are derived from synthetically generated images.
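As an illustration of this organization, the sketch below models the local shape table as a dictionary keyed by quantized index tuples. The entry fields anticipate those defined in section B below; the field types and bin widths are assumptions, since the patent leaves quantization to the implementation.

```python
from collections import defaultdict
from typing import NamedTuple

class LocalEntry(NamedTuple):
    psi: str         # symbolic label of the local shape
    rho: float       # scale normalization, rho = mu / S
    alpha_t: float   # angle recovering the shape orientation
    gamma: int       # support mechanism (vote count)

def quantize_index(values, steps):
    """Map a real-valued index tuple onto discrete table bins."""
    return tuple(round(v / s) for v, s in zip(values, steps))

# index eta_Psi = <s1/S, s2/S, alpha1, alpha2>  ->  set of entries {zeta_Psi}
local_shape_table: dict[tuple, list[LocalEntry]] = defaultdict(list)
```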
During the acquisition phase, target shapes in an image are learned and are used to derive the global shape table 108 of FIG. 1. First, local shapes are detected in the image. These local shapes are then used to acquire the target shape.
Specifically, interest points are initially acquired from target shapes in an inputted image, which target shapes are to be later recognized in any scene during the recognition phase. If the shape to be learned (acquired) is 2D, then as shown in FIG. 1, a single unoccluded image 102 of the target shape is presented to the system for acquisition.
The methodology as discussed in relation to 2D target shapes is equally applicable to 3D target objects. However, in the case of 3D target objects, different views are presented to the system in order to take into account a depth dimension, as is well known in the art. The views correspond to either discrete samplings of the Gaussian viewing sphere or different aspects of the model. For a discussion of modelling 3D objects, see J. Koenderink and A. van Doorn, "The internal representation of solid shape with respect to vision," Biological Cybernetics, vol. 32, pp. 211-216, 1979.
The interest points acquired from the image are analyzed in groups of K-plets. In the preferred embodiment, "triplets" (i.e., K=3) of interest points are analyzed. K-plets are used to derive local shape descriptors (LSD) {Ψ}. Each LSD is associated with a specific interest point (x,y) in the image 102.
Further, the LSDs are filtered in order to choose the best ones by conventional means. In the preferred embodiment, the conventional "best first" technique is utilized. The conventional "constraint satisfaction technique" could also be used to filter the LSDs. For a discussion of the constraint satisfaction technique, see D. H. Ballard, "Parameter nets: A theory of low level vision," Proceedings of the 7th International Joint Conference On Artificial Intelligence, pp. 1068-1078, August 1981; A. Califano, R. M. Bolle, and R. W. Taylor, "Generalized neighborhoods: A new approach to complex feature extraction," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 1989; and R. Mohan, "Constraint satisfaction networks for computer vision," Progress in Neural Networks, O. Omidvar Ed., Ablex, 1991.
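A minimal sketch of one plausible reading of the best first technique follows: hypotheses are visited in order of decreasing support, and each accepted hypothesis suppresses the remaining hypotheses that conflict with it. The support and conflicts callables are hypothetical stand-ins for the vote counts and compatibility tests described in the cited references.

```python
def best_first(hypotheses, support, conflicts):
    """hypotheses: candidate LSDs; support(h) -> vote count;
    conflicts(a, b) -> True if a and b are mutually incompatible."""
    selected = []
    remaining = sorted(hypotheses, key=support, reverse=True)
    while remaining:
        best = remaining.pop(0)            # highest remaining support
        selected.append(best)
        remaining = [h for h in remaining if not conflicts(best, h)]
    return selected
```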
Next, the filtered local shape descriptors are analyzed in groups of K-plets to derive the global shape table 108 of FIG. 1. In the preferred embodiment, "triplets" (K=3) of LSDs are analyzed to compute indices η.sub.Λ and entries ζ.sub.Λ. The global shape table 108 is structured to have indices η.sub.Λ for addressing a set of entries {ζ.sub.Λ }. Each of the indices η.sub.Λ and entries ζ.sub.Λ are in the form of tuples. Each tuple entry ζ.sub.Λ comprises (1) a symbolic label λ, which uniquely identifies the model, (2) a set of geometric parameters β, used for computing the model's position, orientation, and scale in a scene, and (3) a support mechanism γ for keeping track of the votes for the same values of λ, β, and η.sub.Λ. The geometric parameters β are invariant with respect to translation, rotation and scale transformations in subsequent scenes.
During the recognition phase, an image is generated from a scene and inputted into the system. Interest points are derived from the image. Moreover, local shape descriptors are obtained just as they are in the acquisition phase, discussed previously.
By analyzing K-plets of the LSDs, a set of object shape hypotheses Λ are mapped and accumulated in an object shape parameter space Σ.sub.Λ. In the preferred embodiment, "triplets" (K=3) of LSDs are analyzed. Specifically, for each entry addressed by the index η.sub.Λ for a K-plet of LSDs, a new model hypothesis Λ=<λ,φ> is formed. The model hypothesis includes the model label λ and its corresponding registration parameters φ, computed from geometric parameters β, which encode information relating to the geometry of the K-plet. The registration parameters φ uniquely identify the shape, rotation of the shape, and the shape's scale.
An object shape hypothesis defined by the model label λ and registration parameters φ is added to the global shape parameter space Σ.sub.Λ (λ, φ).
Finally, the object shape hypotheses {Λ} are filtered in the parameter space Σ.sub.Λ (λ, φ) to select the best, mutually compatible, object shapes {λ}. The set of filtered object shape hypotheses {λ}selected essentially comprises the shapes recognized and located in the scene.
In a general sense, the present invention is performing "spatial autocorrelation." "Spatial autocorrelation" in the context of this disclosure means that non-local spatial information is being correlated. In other words, groupings of interest points or LSDs, called K-plets which are spread across an image, are correlated to derive local shape descriptors Ψ, or object shapes Λ, which embed global shape information.
It should be noted that spatial autocorrelation to some extent involves the conventional Hough transform technique. However, in the Hough transform technique, only a single interest point is considered at a time, unlike in the present invention where a K-plet is considered. For a further discussion regarding spatial autocorrelation, see A. Califano, "Feature recognition using correlated information contained in multiple neighborhoods," Proceedings of the 7th National Conference on Artificial Intelligence, July 1988, pp. 831-836, and also, A. Califano, R. M. Bolle, and R. W. Taylor, "Generalized neighborhoods: A new approach to complex feature extraction," Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 1989.
B. Initialization--Local Shape Table Creation
During the initialization phase, the local shape table 104 of FIG. 1 is created. The methodology for creating the foregoing table is shown and will be described below in regard to FIG. 3.
The creation of the local shape table need only be performed once, and then the results can be stored in a transportable file (perhaps, for example, on a magnetic medium). The creation of the local shape table 104 can be considered an initialization process. In other words, when new target shapes are added to the system, the local shape table need not be recreated.
As illustrated in FIG. 3, the local shape table is created based upon the local shapes which are sought to be recognized by the system. First, an empty local shape table is created. The local shape table is created in the form of a lookup table, where an index η corresponds with a unique set of entries {ζ}.
In a FOR LOOP enclosed by blocks 306 and 330, data corresponding to each of the curves desired to be recognized are inputted to the table 104. In this document, these curves are called "local shapes" and are accorded a specific local shape descriptor label ψ for identification purposes.
The local shapes could be, for example, lines, circular arcs, elliptic arcs (minima and maxima of curvature), and so forth. Large sets of local shapes can be considered. In this regard see A. P. Blicher, "A shape representation based on geometric topology: Bumps, Gaussian curvature, and the topological zodiac," Proceedings of the 10th International Conference on Artificial Intelligence, pp. 767-770, August 1987.
These local shapes are "synthetically" drawn for the system. "Synthetically" means that the local shapes are automatically generated by the system. In other words, the shapes need not be created or drawn on a display screen or some other visual device.
Once local shapes are synthetically drawn, the manner in which they are quantized, or digitized, depends upon the way the local shapes are scaled or oriented. Accordingly, via a FOR LOOP enclosed by blocks 308 and 328, the local shapes are drawn at a few different scales and orientations (translations and rotations) to enhance the integrity of the ultimate models.
Next, each local shape is synthetically positioned symmetrically on an M*M array, as indicated in a block 310. At a block 312, the center point p1 of the local shape is identified. Two other points p2 and p3 are arbitrarily selected along the local shape, to thereby derive a triplet of interest points. Each triplet forms a specific triangle, having the three points p1, p2, and p3, and three sides s1, s2, and s3. The local shape descriptor labels ψi are associated with each of the points p1.
A FOR LOOP enclosed by blocks 314 and 326 operates upon each triplet of interest points p1, p2, p3 in order to derive the indices and entries in the local shape table 104.
At a block 316, for each triplet, an index η.sub.Ψ and a corresponding entry ζ.sub.Ψ are computed, and ζ.sub.Ψ is entered into the table 104 in the form of a tuple at the location addressed by the index η.sub.Ψ.
Essentially, the index η is an address in the lookup table. The index η.sub.Ψ is generated as follows:

η.sub.Ψ =<s1/S, s2/S, α1, α2>

The two ratios s1/S and s2/S, where S=s1+s2+s3, essentially define the geometry of the triangle, as shown in FIG. 2. As further indicated in FIG. 2, the two angles α1 and α2 describe the 1st order properties of the local shape with respect to the given triplet.
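By way of illustration, the index computation can be sketched in a few lines of Python. This is a minimal sketch, not the patented implementation: the assignment of s1, s2, s3 to particular triangle sides, the reading of α1 and α2 as the contour tangent directions at p2 and p3 measured against the p2-to-p3 chord, and the bin count n_bins are all assumptions adopted here for concreteness.

import math

def local_index(p1, p2, p3, t2, t3, n_bins=32):
    # Quantized index eta_Psi = <s1/S, s2/S, alpha1, alpha2>.
    # t2, t3 are contour tangent angles (radians) at p2 and p3;
    # measuring them against the p2->p3 chord is an assumption.
    s1 = math.dist(p2, p3)   # side opposite p1 (assumed labelling)
    s2 = math.dist(p1, p3)
    s3 = math.dist(p1, p2)
    S = s1 + s2 + s3
    chord = math.atan2(p3[1] - p2[1], p3[0] - p2[0])
    alpha1 = (t2 - chord) % (2.0 * math.pi)
    alpha2 = (t3 - chord) % (2.0 * math.pi)

    def q(v, lo, hi):
        # Quantize an invariant into one of n_bins cells of [lo, hi).
        return min(n_bins - 1, int(n_bins * (v - lo) / (hi - lo)))

    return (q(s1 / S, 0.0, 1.0), q(s2 / S, 0.0, 1.0),
            q(alpha1, 0.0, 2.0 * math.pi), q(alpha2, 0.0, 2.0 * math.pi))

Because the four components are ratios and relative angles, the resulting key is invariant to translation, rotation, and scale, which is what lets it serve as a stable address into the lookup table.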
For the first point p1, any properties higher than 0th order are not used, because these properties are more noise sensitive and could bias the indices for all possible triangles generated for this feature.
The tuple entry ζ, corresponding to the index η, is computed as follows:
ζ.sub.Ψ =<ψ, ρ, α.sub.T, γ >
For each triplet, the tuple <ψ, ρ, αT, γ> is inserted into the local shape table. Here, ψ is the symbolic label for the local shape. As shown in FIG. 2, αT is the angle between the vector t23 and the normal n to the contour at p1, and is required to recover the shape orientation from the triplet. The ratio ρ=μ/S, where μ is a constant, is used for scale normalization, and γ is a support count used during voting.
After an index η.sub.Ψ and corresponding entry ζ.sub.Ψ have been computed for a triplet of interest points, as shown in a block 318, the tuple is compared with the indices and entries already in the local shape table 104 to determine whether it is redundant. First, it is determined whether the table 104 already has entries ζ.sub.Ψ at index η.sub.Ψ.
If entries exist, then the parameters ψ, ρ, αT of the entry ζ.sub.Ψ are compared with the like parameters in the already-existing entries at η.sub.Ψ. If these parameters match, then the support mechanism γ is accorded a vote, as indicated in a block 320.
However, if the table 104 does not have a matching entry at index η.sub.Ψ, then the entry ζ.sub.Ψ is entered at the location addressed by index η.sub.Ψ in the table 104, as shown in a block 324.
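The insert-or-vote logic of blocks 318 through 324 can be sketched as follows. The dictionary-of-lists table layout, the entry field names, and the matching tolerance tol are assumptions made for illustration; the patent only requires that like parameters "match."

from collections import defaultdict

# local_shape_table maps a quantized index eta to a list of entries,
# each carrying the fields psi, rho, alpha_T, and the support count gamma.
local_shape_table = defaultdict(list)

def insert_or_vote(table, eta, psi, rho, alpha_t, tol=1e-3):
    # Block 318: look for an existing entry at index eta whose
    # parameters match those of the candidate entry.
    for entry in table[eta]:
        if (entry["psi"] == psi
                and abs(entry["rho"] - rho) < tol
                and abs(entry["alpha_t"] - alpha_t) < tol):
            entry["gamma"] += 1        # block 320: accord a vote
            return
    # Block 324: no match, so enter the new tuple at index eta.
    table[eta].append({"psi": psi, "rho": rho,
                       "alpha_t": alpha_t, "gamma": 1})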
After the FOR LOOPs 314,326; 308,328; and 306,330 have completed operation, the local shape table is complete.
C. Acquisition--Global Table Creation
During the acquisition phase of the present invention, target shapes are stored in the system for use as models to later recognize such shapes in a scene.
FIG. 4 illustrates the methodology for constructing the global shape table 108 of FIG. 1. The global shape table 108 is created when the images to be learned, or acquired, by the system are inputted. Note that the methodology of FIG. 4 is similar to that of FIG. 3; moreover, similar steps within referenced blocks are denoted with similar reference numerals in FIGS. 3 and 4 for easy comparison.
Initially, as indicated in a block 402 of FIG. 4, an unoccluded image is inputted into the system. The image may be a photograph, an image generated from a camera, or the like. With reference to FIG. 2, the position of the target shape's center of mass (x0, y0) is computed. A scale (e.g., perimeter) μ and an orientation vector d are then assigned. Further, a label λ.sub.Λ is provided to identify the target shape.
At block 404, an empty global shape table is created or an existing global shape table is utilized. Similar to the local shape table, the global shape table is created in the form of a lookup table, where an index η corresponds with a unique set of entries {ζ}.
Next, at a block 405, the local shapes of the image are detected. As an example, a local shape 112 is shown in FIG. 1. Essentially, the edges of the image, along with their locations, are considered and local shapes are computed. Moreover, the locations of the local shapes are recorded. Finally, from the foregoing information, local shape descriptors Ψ1, . . . , ΨN are derived, as indicated at reference numeral 106 of FIG. 1. The procedure for detecting local shapes will be more fully discussed in regard to FIGS. 5 and 6 hereinafter.
For each triplet of local shape descriptors Ψ1, Ψ2, and Ψ3, a tuple index η.sub.Λ and a corresponding tuple entry ζ.sub.Λ are derived for the global shape table 108 of FIG. 1. This procedure is implemented via the FOR LOOP enclosed by blocks 414 and 426, which is analogous to the FOR LOOP enclosed by blocks 314 and 326 in FIG. 3. However, the index η.sub.Λ of FIG. 4 has more parameters than the index η.sub.Ψ of FIG. 3.
In FIG. 4, the index η.sub.Λ is computed and is specified as follows:

η.sub.Λ =<s1/S, s2/S, α1, α2, ψ1, ψ2, ψ3>
In comparison to the index η.sub.Ψ, the index η.sub.Λ comprises three additional parameters, namely ψ1, ψ2, and ψ3, which essentially identify the LSD triplet. Further, as with the triplets of interest points used to derive the local shape descriptors, the LSD triplets form a triangle with sides s1, s2, s3, as shown in FIG. 2. The two ratios s1/S and s2/S, where S=s1+s2+s3, define the geometry of the triangle, and the two angles α1 and α2 describe the 1st order properties of the local shapes with respect to the given triplet.
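Under the same assumptions as the earlier sketch, the global index is simply the local geometric index extended with the three symbolic labels. Reusing the hypothetical local_index function from above:

def global_index(p1, p2, p3, t2, t3, psi1, psi2, psi3, n_bins=32):
    # eta_Lambda = <s1/S, s2/S, alpha1, alpha2, psi1, psi2, psi3>:
    # the triangle geometry of the LSD triplet plus the LSD labels.
    return local_index(p1, p2, p3, t2, t3, n_bins) + (psi1, psi2, psi3)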
Again, for the first LSD Ψ1 at point p1, any properties higher than 0th order are not used, because these properties are more noise sensitive and could bias the indices for all possible LSD triangles generated for this feature.
The tuple entry ζ.sub.Λ, corresponding to the index η.sub.Λ, is computed as follows:
ζ.sub.Λ =<λ, x.sub.T, y.sub.T, ρ, α.sub.T, γ>
For each LSD triplet, the tuple <λ, xT, yT, ρ, αT, γ> is inserted into the global shape table 108 at the location indexed by the index η.sub.Λ.
As shown in FIG. 2, the position (xT, yT) is the center of the shape's area in the new right-hand coordinate system defined by the normalized vector t23 and its corresponding orthonormal counterpart d; the ratio is ρ=μ/S; and αT is the angle between the two vectors t23 and d. It should be noted that the geometric parameters β=((xT, yT), ρ, αT) are also scale and rotation invariant, and are used to recover the position, orientation, and scale of a feature from the corresponding three local shape descriptors {ψi, ψj, ψk }.
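The inverse mapping, from a stored invariant entry back to an absolute pose, can be sketched as below. The patent leaves the frame conventions implicit, so this sketch assumes the triplet frame is anchored at p1, that (xT, yT) are stored in units of the triangle perimeter S, and that the entry is a dictionary with hypothetical field names x_t, y_t, rho, and alpha_t.

import math

def recover_pose(entry, p1, p2, p3):
    # Reconstruct position, scale, and orientation of a hypothesis from
    # the invariant parameters beta = ((x_T, y_T), rho, alpha_T).
    s1 = math.dist(p2, p3)
    S = s1 + math.dist(p1, p3) + math.dist(p1, p2)
    tx, ty = (p3[0] - p2[0]) / s1, (p3[1] - p2[1]) / s1  # unit vector t23
    nx, ny = -ty, tx                                     # orthonormal counterpart
    # Position: map the stored frame coordinates back into the image
    # (origin at p1 is an assumption).
    x0 = p1[0] + S * (entry["x_t"] * tx + entry["y_t"] * nx)
    y0 = p1[1] + S * (entry["x_t"] * ty + entry["y_t"] * ny)
    scale = entry["rho"] * S                       # mu = rho * S
    theta = math.atan2(ty, tx) + entry["alpha_t"]  # orientation of d
    return (x0, y0), scale, theta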
After an index η.sub.Λ and corresponding entry ζ.sub.Λ have been computed for an LSD triplet, as shown in a block 418, the tuple is compared with the indices and entries already in the global shape table 108 to determine whether it is redundant. First, it is determined whether the table 108 already has entries at index η.sub.Λ.
If one exists, then the parameters λ, xT, yT, ρ, αT of the entry ζ.sub.Λ are compared with the like parameters in the already-existing entry. If these parameters match, then the support mechanism γ (initially set to one) of the matching entry ζ.sub.Λ is accorded a vote, as indicated in a block 420.
However, if the table 108 does not have a matching entry at index η.sub.Λ, then the entry ζ.sub.Λ is entered at the location addressed by index η.sub.Λ in the table 108, as shown in a block 424.
After the FOR LOOP 414,426 has completed operation, the global shape table is complete. Accordingly, the system can now recognize shapes in target images.
D. Recognition
The methodology envisioned by the present invention for recognition of target shapes and objects will be discussed in detail hereafter in regard to FIGS. 5-9. FIG. 5 illustrates a high level Flowchart 500 showing the overall recognition methodology. FIGS. 6-9 comprise low level Flowcharts 600-900 describing in detail respective blocks 506-509 of FIG. 5.
With reference to block 502 of FIG. 5, the edges of an image to be recognized are extracted from a scene. This procedure is well known in the art.
At the next block 506, a "local feature set" is derived. The set comprises the LSDs Ψ1, . . . , ΨN for the local shapes of the image. For example, a local shape is shown as reference numeral 112 in FIG. 1. A detailed description of this procedure will be discussed with reference to Flowchart 600, entitled "Local Shape Description Extraction," of FIG. 6, hereinafter.
Next, at the block 507, the local feature set {Ψ1, . . . , ΨN } generated in the previous step is filtered using any conventional filtering technique. The resultant filtered set of LSDs is much smaller than the original set and is illustrated at reference numeral 106 in FIG. 1.
In the preferred embodiment, the best first technique, which is well known in the art, is utilized to filter the LSDs. As mentioned previously, any conventional filtering technique could be utilized, such as the constraint satisfaction technique. The specific implementation is shown in Flowchart 700, entitled "Filtering Local Shape Descriptors," of FIG. 7. Due to the well known nature and applicability of this technique, no discussion of the Flowchart 700 is undertaken.
At the next block 508, a set of model hypotheses Λ1, . . . , ΛN is derived. This procedure is set forth in Flowchart 800, entitled "Object Shape Recognition," of FIG. 8.
The model hypothesis set {Λ1, . . . , ΛN } generated in the previous step is filtered using any conventional filtering technique. The resultant filtered set is much smaller than the original set.
In the preferred embodiment, the best first technique is again utilized for filtering. The specific implementation is shown in Flowchart 900, entitled "Filtering Object Shape Hypotheses," of FIG. 9. Due to the well known nature and applicability of this technique, no discussion of the Flowchart 900 is undertaken.
The model hypotheses which survive the foregoing filtering, if any, are adopted as the shapes which are recognized in the scene. Thus, recognition has occurred.
1. Local Shape Description Extraction
FIG. 6 illustrates an exemplary Flowchart 600 for the extraction of local shapes during the recognition phase. A local feature set is derived in FIG. 6; the set comprises the LSDs Ψ1, . . . , ΨN corresponding with the local shapes of the image. For example, a local shape 112 is shown in FIG. 1.
First, at a block 602, N interest points p of an image are extracted from a scene using any of the numerous conventional techniques; this procedure is well known in the art. The location of each interest point p is identified by orthogonal coordinates (x,y). Moreover, a temporary parameter space Σ.sub.Ψ (a hash table) is generated for use in the subsequent steps of the Flowchart 600.
Next, a FOR LOOP enclosed by blocks 604 and 632 commences for each interest point pi of {p}. In the FOR LOOP 604,632, the parameter space Σ.sub.Ψ is first cleared, as indicated in block 606.
At block 608, a neighborhood W.sub.Ψ on the contour around p1 is selected. As will be discussed further below, triplets of interest points will be selected from the neighborhood W.sub.Ψ. Looked at another way, the neighborhood W.sub.Ψ prevents the interest points in triplets from being undesirably scattered all over the image.
To derive the neighborhood W.sub.Ψ, edges are linked into contours using simple "eight neighbors connectivity." Given a point pi =(xi, yi) on a contour, points pl are symmetrically sampled around pi along the contour, thereby generating a neighborhood set W={pl =(xl, yl), -N≦l≦N}. The sampling step is proportional to the length of the longest symmetrical interval around the point pi for which the tangent behavior is monotonic on a coarse scale and the total tangent variation is less than 2π/3. It is also inversely proportional to the total variation in tangent angle on the symmetric interval, if that variation is less than 2π/3. The underlying assumption is that faster variations in tangent correspond to smaller features, which accordingly require finer sampling for detection. In this way, a quasi-scale-invariant sampling of the contours is achieved.

Next, triplets of interest points are derived. All possible combinations are formed of the point (xi, yi) in conjunction with two other points pj and pk from the set {pl =(xl, yl), -N≦l≦N}. These combinations are made in FOR LOOPs 610,628 (pj) and 612,626 (pk), as shown, and are sketched in code below. At block 614, it is ensured that the same point is not selected twice when constructing the triplet combinations.
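The triplet-formation loops can be sketched as a simple generator. The index-based bookkeeping is an assumption, and the j-before-k ordering plays the role of block 614's duplicate check.

def triplets(i, neighborhood):
    # Pair the centre point p_i with every unordered pair of distinct
    # neighbours p_j, p_k drawn from the sampled set W (blocks 610-628).
    for a, j in enumerate(neighborhood):
        if j == i:
            continue
        for k in neighborhood[a + 1:]:
            if k == i:
                continue
            yield i, j, k

For a neighborhood of 2N+1 sampled points this yields on the order of N-squared triplets per interest point, which is why the later filtering steps matter.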
At block 616, the index η.sub.Ψ =<s1/S, s2/S, α1, α2 >, described above in the section entitled "Initialization--Local Table Creation," is computed for each triplet.
At block 618, the tuple index η.sub.Ψ is now used to index into the local shape table 104 in order to retrieve the corresponding tuple entry ζ.sub.Ψ =<ψ, ρ, αT, γ>.
Next, computations are performed to generate a local shape descriptor hypothesis Ψ, as indicated in block 620. Specifically, from the ρ and αT of the tuple entry ζ.sub.Ψ, the possible scales and orientations of the contour are computed. Note that the location of the contour is at pi; accordingly, the location need not be computed as it is for global shapes. Thus, only scale and rotation information need be computed.
At block 622, the parameter space Σ.sub.Ψ is examined to determine whether an LSD hypothesis Ψ already exists in the parameter space Σ.sub.Ψ for the contour at issue. If the LSD hypothesis Ψ exists, then the existing LSD hypothesis Ψ is merely updated by appending the points pi, pj, pk to it. If the LSD hypothesis Ψ does not exist, then an entry for Ψ having the three points pi, pj, pk is created in the parameter space Σ.sub.Ψ.
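The update of the parameter space in block 622 can be sketched as follows. Keying Σ.sub.Ψ by a quantized (scale, orientation) cell, and the record field names, are assumptions made for illustration.

def update_parameter_space(sigma, key, points):
    # Block 622: if an LSD hypothesis already exists for this contour at
    # the (quantized) pose `key`, append the supporting points and accord
    # a vote; otherwise create a new hypothesis record.
    if key in sigma:
        sigma[key]["points"].extend(points)
        sigma[key]["gamma"] += 1
    else:
        sigma[key] = {"points": list(points), "gamma": 1}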
After completion of FOR LOOPs 618,624; 612,626; and 610,628, one LSD hypothesis Ψ will have a high vote count, based upon the votes accorded it by the points pi, pj, pk. These points, along with the corresponding LSD hypothesis Ψ, belong to the particular contour at issue.
Thus, at the next block 630 the LSD hypothesis Ψ is adopted as the local shape descriptor Ψ for pi. For each point pi, there exists a local shape descriptor Ψ having a label ψ, a support mechanism γ, and a supporting point list P={p}.
After completion of Flowchart 600, a large set of local shape descriptors {Ψ}, with exactly one LSD corresponding to each interest point pi in the image, is available for further mathematical manipulation.
In order to winnow down the number of LSDs Ψ and to enhance their integrity, the local feature set {Ψ1, . . . , ΨN } is filtered using any conventional filtering technique, as discussed previously. In the preferred embodiment, a conventional best first technique is utilized and is illustrated at Flowchart 700 of FIG. 7; the Flowchart 700 is self-explanatory. The resultant filtered set of LSDs, {Ψ}selected, is much smaller than the original set and is illustrated at reference numeral 106 in FIG. 1.
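Although Flowchart 700 itself is conventional, the best first idea can be sketched generically as follows. The support and conflicts callbacks, for example treating two hypotheses as conflicting when they share supporting points, are assumptions about what the flowchart leaves to convention.

def best_first_filter(hypotheses, support, conflicts):
    # Repeatedly commit the highest-support hypothesis and discard
    # every remaining hypothesis that conflicts with it.
    remaining = sorted(hypotheses, key=support, reverse=True)
    selected = []
    while remaining:
        best = remaining.pop(0)
        selected.append(best)
        remaining = [h for h in remaining if not conflicts(best, h)]
    return selected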
2. Object Shape Recognition
FIG. 8 shows a Flowchart 800 for recognizing an object based upon the local feature set {Ψ}selected, comprising the filtered LSDs Ψ. Flowchart 800 proceeds very similarly to Flowchart 600; accordingly, similar steps within referenced blocks are denoted with similar reference numerals in Flowcharts 800 and 600 for easy comparison. However, some differences in methodology do exist.
In Flowchart 800, the focus is upon local curve shapes, not upon interest points as in Flowchart 600. Further, triplets in Flowchart 800 are formed from LSDs Ψ, not from interest points as in Flowchart 600. Finally, the global shape table 108 is consulted during the methodology of Flowchart 800, not the local shape table 104 as in Flowchart 600. Note that the index η.sub.Λ corresponding with the global shape table 108 is larger than the index η.sub.Ψ corresponding with the local shape table 104.
With reference to Flowchart 800, at a block 802, the N filtered LSDs Ψ corresponding to the image are acquired.
Moreover, a temporary parameter space Σ.sub.Λ (a hash table) is generated for use in the subsequent steps of the Flowchart 800.
Next, a FOR LOOP enclosed by blocks 804 and 832 commences for each LSD Ψ of the local feature set {Ψ}selected. In the FOR LOOP 804,832, the parameter space Σ.sub.Λ is first cleared, as indicated in block 806.
Next, triplets of LSDs Ψ are derived. All possible combinations are formed of the LSD Ψi in conjunction with two others, Ψj and Ψk, from the local feature set {Ψ}selected. These combinations are made in FOR LOOPs 810,828 (Ψj) and 812,826 (Ψk), as shown. At block 814, it is ensured that the same LSD is not selected twice when constructing the triplet combinations.
At block 816, the index η.sub.Λ =<s1/S, s2/S, α1, α2, ψi, ψj, ψk >, described above in the section entitled "Acquisition--Global Table Creation," is computed for each LSD triplet.
At block 818, the index η.sub.Λ is now used to index into the global shape table 108 in order to retrieve the corresponding entry ζ.sub.Λ =<λ, xT, yT, ρ, αT, γ>.
Next, computations are performed to generate an object shape hypothesis Λ, as indicated in block 820. Specifically, from the xT, yT, ρ, and αT of the tuple entry ζ.sub.Λ, the possible positions, scales, and orientations of the shape are computed. In other words, scale, rotation, and translation information is computed.
At block 822, the parameter space Σ.sub.Λ is examined to determine whether an object shape hypothesis Λ already exists in the parameter space Σ.sub.Λ. If the object shape hypothesis Λ already exists, then the existing object shape hypothesis Λ is merely updated by appending (Ψi, Ψj, Ψk) to it and incrementing its vote. If the object shape hypothesis Λ does not exist, then the object shape instance Λ having the triplet Ψi, Ψj, Ψk is created in the parameter space Σ.sub.Λ.
After completion of FOR LOOPs 818,824; 812,826; and 810,828, a number of object shape hypotheses {Λ} will have high vote counts, based upon the votes accorded them by the LSD triplets Ψi, Ψj, Ψk.
Thus, at the next block 834, the object shape hypotheses {Λ} having a vote count γ above a preset threshold Γ0 are adopted. For each Λ, there exists a unique model label λ, a support γ, and a supporting LSD list {Ψsupp }.
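Block 834's thresholding is straightforward; a sketch follows, assuming the parameter space is a dictionary of hypothesis records with a hypothetical gamma field, and an arbitrary illustrative value for Γ0.

GAMMA_0 = 10  # illustrative only; the patent just calls it a preset threshold

def adopt_hypotheses(param_space, gamma_0=GAMMA_0):
    # Block 834: adopt the object shape hypotheses whose vote count
    # gamma exceeds the preset threshold Gamma_0.
    return [h for h in param_space.values() if h["gamma"] > gamma_0]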
After completion of Flowchart 800, a set of object shape hypotheses {Λ}, derived from the LSD triplets in the image, is available.
In order to enhance the integrity of the set of object shape hypotheses {Λ}, they are filtered using any conventional filtering technique, as discussed previously. In the preferred embodiment, a conventional best first technique is utilized and is illustrated at Flowchart 900 of FIG. 9; the Flowchart 900 is self-explanatory. The resultant filtered set of hypotheses, {Λ}selected, is illustrated at reference numeral 110 in FIG. 1. The {Λ}selected constitute the object shapes recognized and located in the image.
The foregoing description of the preferred embodiment of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the present invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teachings. The particular embodiments were chosen and described in order to best explain the principles of the present invention and its practical application to those persons skilled in the art and to thereby enable those persons skilled in the art to best utilize the present invention in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the present invention be broadly defined by the claims appended hereto.

Claims (10)

The following is claimed:
1. A computer-implemented method for recognizing objects and for automatic acquisition of models of objects, comprising the steps of:
(1) acquiring a model of an object, comprising,
(a) digitizing said object to generate a digitized image;
(b) detecting local shapes in said digitized image;
(c) grouping three or more .[.noncontinuous.]. .Iadd.noncontiguous .Iaddend.combinations of said local shapes to generate a first index;
(d) generating an entry for each group of local shapes consisting of a name of said digitized image, and information about the translation, rotation, and scale of said digitized image; and
(e) storing said entry in a shape table at said first index; and
(2) recognizing a target object that appears in a physical scene, comprising,
(a) digitizing said target object to generate a digitized target image;
(b) detecting .[.global.]. .Iadd.local .Iaddend.shapes in said digitized target image;
(c) grouping three or more .[.noncontinuous.]. .Iadd.noncontiguous .Iaddend.combinations of said .[.global.]. .Iadd.local .Iaddend.shapes to generate a second index;
(d) accessing said shape table and retrieving entries from said shape table that correspond to said second index;
(e) collecting said retrieved entries from said shape table into a vote table; and
(f) selecting said retrieved entry with a highest vote in order to recognize said target image.
2. The method of claim 1, further comprising the steps of:
creating a local lookup table;
entering local shapes to be recognized by the system into said local lookup table by synthetic generation; and
using said local lookup table to detect local shapes in said digitized image.
3. The method of claim 1, further comprising the steps of:
transforming three or more edge points of said digitized image into local groupings;
analyzing said local groupings in a parameter space;
comparing the geometric characteristics of said local groupings with local shape entries in a local lookup table;
according votes to said local shape entries which correspond with said geometric characteristics; and
adopting local shape descriptors corresponding to local shape entries which have a high number of votes.
4. The method of claim 1, further comprising the steps of:
converting three or more local shapes of said digitized target image into global groupings;
analyzing said global groupings in a parameter space;
comparing the geometric characteristics of said global groupings with global shape entries in said shape table;
according votes to said global shape entries which correspond with said geometric characteristics; and
adopting global shape descriptors corresponding to global shape entries which have a high number of votes.
5. A computer-based system for recognizing objects and for automatic acquisition of models of objects, comprising:
(a) means for acquiring a model of an object, comprising,
(1) means for digitizing said object to generate a digitized image;
(2) means for detecting local shapes in said digitized image;
(3) means for grouping three or more .[.noncontinuous.]. .Iadd.noncontiguous .Iaddend.combinations of said local shapes to generate a first index;
(4) means for generating an entry for each group of local shapes consisting of a name of said digitized image, and information about the translation, rotation, and scale of said digitized image; and
(5) means for storing said entry in said shape table at said first index; and
(b) means for recognizing a target object that appears in a physical scene, comprising,
(1) means for digitizing said target object to generate a digitized target image;
(2) means for detecting local shapes in said digitized target image;
(3) means for grouping three or more .[.noncontinuous.]. .Iadd.noncontiguous .Iaddend.combinations of said .[.local.]. shapes to generate a second index;
(4) means for accessing said shape table and for retrieving entries from said shape table that correspond to said second index;
(5) means for collecting said retrieved entries from said shape table into a vote table and for selecting said retrieved entry with a highest vote in order to recognize said target image.
6. The computer-based system of claim 5, further comprising:
(a) a camera adapted for viewing said target object and producing a signal indicative of said target object; and
(b) a digitizer, connected to said camera, for digitizing said target object represented by said signal and storing said digitized target image in a storage medium.
7. The computer-based system of claim 5, further comprising:
a local shape table;
means for entering local shapes to be recognized into said local shape table by synthetic generation; and
means for using said local shape table to detect local shapes in said digitized image.
8. The computer-based system of claim 5, further comprising:
means for transforming three or more edge points of said digitized image into local groupings;
means for analyzing said local groupings in a parameter space;
means for comparing the geometric characteristics of said local groupings with local shape entries in a local lookup table;
means for according votes to said local shape entries which correspond with said geometric characteristics; and
means for adopting local shape descriptors corresponding to local shape entries which have a high number of votes.
9. The computer-based system of claim 5, further comprising:
means for converting three or more local shapes of said digitized target image into global groupings;
means for analyzing said global groupings in a parameter space;
means for comparing the geometric characteristics of said global groupings with global shape entries in said global shape table;
means for according votes to said global shape entries which correspond with said geometric characteristics; and
means for adopting global shape descriptors corresponding to global shape entries which have a high number of votes.
10. The computer-based system of claim 5, further comprising:
means for generating a recognition signal from said selected retrieved entry, wherein said recognition signal is indicative of a recognized target object.
US08/719,496 1991-05-21 1996-09-25 Generalized shape autocorrelation for shape acquistion and recognition Expired - Lifetime USRE36656E (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/719,496 USRE36656E (en) 1991-05-21 1996-09-25 Generalized shape autocorrelation for shape acquistion and recognition

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US70503791A 1991-05-21 1991-05-21
US16296694A 1994-12-08 1994-12-08
US08/719,496 USRE36656E (en) 1991-05-21 1996-09-25 Generalized shape autocorrelation for shape acquistion and recognition

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US70503791A Continuation 1991-05-21 1991-05-21
US16296694A Reissue 1991-05-21 1994-12-08

Publications (1)

Publication Number Publication Date
USRE36656E true USRE36656E (en) 2000-04-11

Family

ID=26859201

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/719,496 Expired - Lifetime USRE36656E (en) 1991-05-21 1996-09-25 Generalized shape autocorrelation for shape acquistion and recognition

Country Status (1)

Country Link
US (1) USRE36656E (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3069654A (en) * 1960-03-25 1962-12-18 Paul V C Hough Method and means for recognizing complex patterns
US4110736A (en) * 1974-04-24 1978-08-29 Agency Of Industrial Science & Technology Shape recognition system
US4115805A (en) * 1975-05-23 1978-09-19 Bausch & Lomb Incorporated Image analysis indexing apparatus and methods
US4115761A (en) * 1976-02-13 1978-09-19 Hitachi, Ltd. Method and device for recognizing a specific pattern
US4183013A (en) * 1976-11-29 1980-01-08 Coulter Electronics, Inc. System for extracting shape features from an image
US4388610A (en) * 1980-01-28 1983-06-14 Tokyo Shibaura Denki Kabushiki Kaisha Apparatus for reading drawings
US4783829A (en) * 1983-02-23 1988-11-08 Hitachi, Ltd. Pattern recognition apparatus
US4747153A (en) * 1984-03-08 1988-05-24 Japan As Represented By Director General Of Agency Of Industrial Science And Technology Device and method for pattern recognition
US4704694A (en) * 1985-12-16 1987-11-03 Automation Intelligence, Inc. Learned part system
US4845765A (en) * 1986-04-18 1989-07-04 Commissariat A L'energie Atomique Process for the automatic recognition of objects liable to overlap
US4817183A (en) * 1986-06-16 1989-03-28 Sparrow Malcolm K Fingerprint recognition and retrieval system
US4748674A (en) * 1986-10-07 1988-05-31 The Regents Of The University Of Calif. Pattern learning and recognition device
US4933865A (en) * 1986-12-20 1990-06-12 Fujitsu Limited Apparatus for recognition of drawn shapes or view types for automatic drawing input in CAD system
US4969201A (en) * 1987-10-08 1990-11-06 Hitachi Software Engineering Co., Ltd. Method of recognizing a circular arc segment for an image processing apparatus

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Bolle et al., "A Complete and Scalable Architecture for 3D Model-Based Vision", Proceedings, 5th IEEE International Symposium on Intelligent Control 1990, vol. 1, Sep. 5-7, 1990, pp. 212-219.
Califano et al., "Generalized Shape Autocorrelation", AAAI-90 Proceedings, Eighth National Conference on Artificial Intelligence, vol. 2, Jul. 29-Aug. 3, 1990, pp. 1067-1073.
Y. Lamdan and H. J. Wolfson, "Geometric Hashing: A General and Efficient Model-Based Recognition Scheme," Proc. of IEEE Second Int'l Conf. on Computer Vision, Tampa, Florida, Dec. 1988, pp. 238-249.

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6714679B1 (en) * 1998-02-05 2004-03-30 Cognex Corporation Boundary analyzer
US6542638B2 (en) * 2001-02-21 2003-04-01 Shannon Roy Campbell Method for matching spatial patterns
US7068856B2 (en) 2002-09-17 2006-06-27 Lockheed Martin Corporation Method and system for determining and correcting image orientation angle
US20050018905A1 (en) * 2003-07-24 2005-01-27 Tunney William Patrick Method and system for identifying multiple questionnaire pages
US7031520B2 (en) * 2003-07-24 2006-04-18 Sap Ag Method and system for identifying multiple questionnaire pages
US20060126935A1 (en) * 2003-07-24 2006-06-15 Tunney William P Method and system for identifying multiple questionnaire pages
US20050022124A1 (en) * 2003-07-24 2005-01-27 Tunney William Patrick Method and system for recognizing questionnaire data based on shape
US7684621B2 (en) 2003-07-24 2010-03-23 Sap Ag Method and system for identifying multiple questionnaire pages
US7356170B2 (en) 2004-02-12 2008-04-08 Lenovo (Singapore) Pte. Ltd. Fingerprint matching method and system
US20080294288A1 (en) * 2005-12-30 2008-11-27 Irobot Corporation Autonomous Mobile Robot
US20110208357A1 (en) * 2005-12-30 2011-08-25 Yamauchi Brian Autonomous Mobile Robot
US10081308B2 (en) 2011-07-08 2018-09-25 Bendix Commercial Vehicle Systems Llc Image-based vehicle detection and distance measuring method and apparatus
US9495609B2 (en) 2014-04-30 2016-11-15 Bendix Commercial Vehicle Systems Llc System and method for evaluating data

Similar Documents

Publication Publication Date Title
US5351310A (en) Generalized shape autocorrelation for shape acquisition and recognition
Schmid et al. Local grayvalue invariants for image retrieval
Bappy et al. Hybrid lstm and encoder–decoder architecture for detection of image forgeries
US10198858B2 (en) Method for 3D modelling based on structure from motion processing of sparse 2D images
Drost et al. Model globally, match locally: Efficient and robust 3D object recognition
Castellani et al. Sparse points matching by combining 3D mesh saliency with statistical descriptors
Fan Describing and recognizing 3-D objects using surface properties
JP6216508B2 (en) Method for recognition and pose determination of 3D objects in 3D scenes
Guigues et al. Scale-sets image analysis
Wahl et al. Surflet-pair-relation histograms: a statistical 3D-shape representation for rapid classification
Sahar et al. Using aerial imagery and GIS in automated building footprint extraction and shape recognition for earthquake risk assessment of urban inventories
Patterson et al. Object detection from large-scale 3d datasets using bottom-up and top-down descriptors
USRE36656E (en) Generalized shape autocorrelation for shape acquistion and recognition
Yalic et al. Automatic Object Segmentation on RGB-D Data using Surface Normals and Region Similarity.
Giachetti Effective characterization of relief patterns
Li et al. Investigating the bag-of-words method for 3D shape retrieval
Schmid et al. Object recognition using local characterization and semi-local constraints
Shivanandappa et al. Extraction of image resampling using correlation aware convolution neural networks for image tampering detection
Zedan et al. Copy move forgery detection techniques: a comprehensive survey of challenges and future directions
Krig et al. Local Feature Design Concepts, Classification, and Learning
Foresti A real-time hough-based method for segment detection in complex multisensor images
Al-Shuibi et al. Survey on Image Retrieval Based on Rotation, Translation and Scaling Invariant Features
Kim et al. Method for the generation of depth images for view-based shape retrieval of 3D CAD model from partial point cloud
Huang Learning a 3D descriptor for cross-source point cloud registration from synthetic data
Walker Combining geometric invariants with fuzzy clustering for object recognition

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12