CA1197607A - Method and apparatus for visual image processing - Google Patents

Method and apparatus for visual image processing

Info

Publication number
CA1197607A
Authority
CA
Canada
Prior art keywords
data
video
corner point
image processing
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000424990A
Other languages
French (fr)
Inventor
Donald L. Beall
Harold W. Tomlinson, Jr.
William G. Hart, Jr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
General Electric Co
Original Assignee
General Electric Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by General Electric Co
Application granted
Publication of CA1197607A
Expired

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 - Image coding
    • G06T9/20 - Contour coding, e.g. using detection of edges
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469 - Contour-based spatial representations, e.g. vector-coding

Abstract

METHOD AND APPARATUS FOR VISUAL IMAGE PROCESSING

ABSTRACT OF THE DISCLOSURE

Method and apparatus is disclosed for automatically processing visual images electronically so as to permit intelligent machine analysis of the image content. A special distributed logic system architecture facilitates rapid real time image analysis and the production of reaction control signals in a high speed production line environment, for example. Dedicated decision logic is employed to determine in but a single micro-instruction cycle whether a detected corner point of the image should be linked to another already linked chain of such corner points so as to define part of a closed edge contour of the image under examination. This ability to so rapidly classify encoded corner points as either belonging or not belonging to a given set of such corner points which describe a closed edge contour is quite useful in achieving rapid real time image analysis capability. Special dedicated data address indexing circuitry is also employed for most expeditiously retrieving successive address linked data words from a data memory in successive single micro-instruction cycles as required to take full advantage of the high speed dedicated corner point matching circuitry. The dedicated data memory indexing circuitry is provided in addition to the usual program instruction indexing circuitry employed in connection with the instruction register of the associated data processing system.

Description

SPECIFICATION
This application is generally directed to method and apparatus for automatic electronic processing of visual images. More specifically, it is directed to a visual image processing system for digitally processing an electronic video image of an object within a predefined field of view so as to automatically identify closed edge contours of the object(s) within such an image. Typically, predetermined geometric features of such identified closed edge contours are automatically quantified and thus made available to user-generated decision logic programs designed to detect the presence or absence of predetermined desirable and/or undesirable object features. For example, the invention may be used in the environment of a high speed manufacturing line to automatically reject out-of-tolerance or otherwise unacceptable parts travelling on a conveyor belt or the like.
The invention provides a form of automated electronic vision which may be useful in a wide variety of manufacturing or other applications (e.g. to provide a form of robot vision or in optical character recognition or in image facsimile transmission systems, etc.). It may, for example, be employed effectively in the manufacturing processes of fabrication, assembly, packaging, inspection, monitoring, control and others. It is ideally suited to repetitive, monotonous, exacting industrial tasks, even in environments of heat, dirt or noise which would be unpleasant or intolerable for human operators. It is also not subject to the usual fatigue, boredom, performance degradation or error which typically affect human operators.


Electronic visual image processing systems of various types have been proposed and/or successfully demonstrated and used in the prior art. For example, the General Electric Company has earlier marketed a model PN 2303 OPTOMATION (tm) instrument system. This prior system utilizes a charge injection device (CID) solid state camera to convert an optical image within the camera's field of view into a digitized electronic video input signal. The digitized video is "thresholded" to produce a binary-valued digital image which is then digitally analyzed by a single microprocessor-based system (including some dedicated decision hardware) to count the number of "white" or "black"
pixels along given horizontal or vertical lines so as to measure predetermined geometrical features of the image such as length, height, area, relative or absolute location, etc. Simple shape recognition and "self-teaching" capability are also provided by comparing the presently viewed object with a set of similar features calculated from a previous reference image. Electronically defined "windows" within the overall camera field of view permit multiple measurements to be made concurrently on different objects or portions of an object within a given field of view.
This earlier model PN 2303 equipment was also user-programmable by means of control switches which were provided so as to select various performance options and to establish desired measurement values, etc. It is typically used in conjunction with a General
Electric model PN 2120 strobe illumination system so as to "stop"
the motion of a part on a moving conveyor belt as each frame of video data is captured.
While the earlier PN 2303 system has provided a powerful tool for a wide variety of applications including high speed on-line inspection, part sorting or selection, in-process monitoring and control, real-time precision measurements, etc., it has been somewhat limited in the type of applications and measurements that may be performed. It also does not provide for higher level image analysis functions.
Other image processing techniques have been proposed which would perform binary picture analysis using corner point detection and the sorting of corner point data into linked sets describing closed edge contours. Such techniques are described in a paper authored by Mr. C. T. Zahn entitled "A Formal Description For Two Dimensional Patterns", published in the Proceedings of the International Joint Conference on Artificial Intelligence, May 1969, pages 621-628. The corner point encoder of
Zahn employs two encoding masks or windows, one 2 x 3 and the other 3 x 2 pixels, which scan the picture in alternating passes requiring data storage in a frame buffer. Other systems employing 3 x 3 pixel mask logic have also been proposed, as for example in U.S. Patent Nos. 3,889,234 - Makihara et al and 4,087,788 - Johannesson.
Another visual image processing system is presently marketed by Automatix of Burlington, Massachusetts, under the name "Autovision II programmable visual inspection system" as described by Messrs. Reinhold and Vanderbrug in the Fall 1980 issue of Robotics Age in an article entitled "The Autovision System" at pages 22-28.
Still other electronic visual image processing systems are described in U.S. Patent No. 4,118,730 - Lemelson (1978) and in "How Smart Robots are Becoming Smarter" by Paul Kinnucan, published in High Technology, September/October 1981, pages 32-40.
The functional architecture of the present invention provides a unique and highly advantageous allocation of image processing functions to dedicated logic decision hardware where high speed performance is required, to dedicated firmware-implemented decision logic where relatively greater amounts of time are available for processing relatively smaller amounts of data in a real time environment and, finally, to software-
implemented decision logic where user-generated flexibility of operation is the prime consideration and where relatively greater periods of time are available for processing the data that has been so greatly reduced by the earlier dedicated hardware and firmware-implemented decision logic.
In this unique system architecture, initial video data processing functions are performed under the direct control of a first microprocessor while subsequent more complex image data processing functions are controlled by a second independent microprocessor. Both the video processing and image processing subsystems are capable of substantial asynchronous independent operations once set up and initiated by an overall system control implemented with yet a third microprocessor ("main CPU") which is in communication with the other two microprocessors via a common bus conductor system which conveys parallel digital address, data and control signals between the three different microprocessor systems. In addition to coordinating the data processing activities of the video processing and image processing subsystems, the third microprocessor-based system (or "main CPU") may include user-generated software for analyzing, comparing or otherwise making desired decisions as to acceptance/rejection, tolerance, measurement, etc., of objects in the image under inspection based, at least in part, upon predetermined geometric features that have been previously identified and quantized by the video processing and image processing subsystems.
Stated somewhat differently, the architectural approach embedded in the present invention represents an optimized partitioning of hardware versus software and dedicated versus programmable implementation of data processing functions for this type of visual image processing. Dedicated hardware modules are utilized for the high data content, high data rate preprocessing and image processing functions which are relatively application invariant. These modules are controlled by respectively
corresponding microprocessors which also help perform initial selective data reduction and compression at high speed in the data path. In this manner, flexibility is maintained without sacrificing speed. Hardware/firmware modules are also utilized in the data path so as to efficiently perform the necessary corner encoding, feature extraction (i.e. sorting of encoded corner points into linked lists defining closed edge contours, "macro-image" reconstruction and manipulation, etc.), again controlled by the respective independent microprocessors provided in the video processing and image processing subsystems. In this manner, high speed is maintained so as to permit high speed real-time image analysis by successively compressing the amount of data which must be retained to sufficiently describe the image under examination. Once the dedicated hardware/firmware video processing and image processing subsystems have reduced the data to a level which is manageable by the main CPU, it is user-programmed to perform any desired feature analysis, comparison with reference data and/or other subsequent logical operations that may be desired so as to generate appropriate output decisions and/or data. That is, the inherent flexibility of software processing is taken advantage of primarily only in the post-processing system microprocessor where it is most needed without limiting the overall machine processing speed. Special asynchronous buffer data storage is employed between dedicated decision logic modules capable of simultaneous operation. Interrelated control of such modules and buffer storage is coordinated by the main CPU so as to keep all the dedicated logic modules simultaneously working at maximum capability. Special data multiplexing and switching paths also provide for improved data processing flexibility.
Within the overall novel system architecture just described is an especially unique and novel dedicated decision logic (mostly hardware in the exemplary embodiment) for determining whether a given corner point is "matched" to a free end of a previously linked chain of such encoded corner points.
If so, then the corner point under examination is itself linked to the matched chain so as to become one free end of the chain. Or, if matched to both free ends of an existing linked chain, it is linked to both such ends so as to complete the chain. If matched to a free end of two different already linked chains, then it is linked to both such chains which are thereafter "merged" to form a single chain having the usual two free ends until it is completed. If not matched to any end of any existing already linked chain of such encoded corner points, then the corner point under examination itself constitutes the beginning of a new linked chain.
So far as is known, in the past, when such corner point matching has been attempted it has been software implemented in a fashion which requires many instruction execution cycle times to determine whether a corner point is "matched" to a previously existing linked set of corner points. In a practical environment, there may literally be thousands of corner points in a given image and, accordingly, if high part throughput is to be handled in real time so as to permit parts to automatically be ejected from a moving conveyor belt or the like (e.g.
900 parts per minute on each of four parallel lines under inspection by the same system), then a much faster decision must be made as to whether a corner point is "matched" or not to a free end of a preexisting linked chain of corner points.
The dedicated corner point decision logic hardware as described herein is capable of determining whether a corner point is "matched" to the free end of an existing linked chain of such corner points in only a single micro-instruction execution cycle time of the controlling microprocessor of the image processing subsystem. Because a complete match or no match decision can thus be implemented within a single instruction cycle time, it is possible to make hundreds and thousands of such decisions in analyzing a given image within the very short time that may be available between successive images that are to be analyzed in a real time environment.
In brief, the dedicated corner point sorter is concerned with binary (thresholded) picture analysis and the rapid generation of boundaries around items in the picture. The boundaries are formed by connecting the corner points found on the item's edges. The exemplary embodiment of the invention includes a digital hardware device that rapidly determines how certain corner points are to be connected to form the item boundaries.
The picture analysis in the exemplary overall system starts with another hardware device which scans the binary picture for the corner points. Each point is marked by its coordinate and the direction of the boundary change "in" and "out" of the corner. One problem is to correctly sort the corner points into their respective items and then link them to form a continuous corner point chain, i.e., a boundary. If the sorting is done in software by a computer, the boundary generation time can be long compared to video rates (33 ms). To perform the picture analysis at real time video rates, a fast corner point sorter is required.

This invention sorts corner points rapidly, while using a small amount of memory storage. The corner point sorter hardware is used in the exemplary embodiment with
a 32 bit-slice microprocessor and a digital memory. The picture analysis starts with a corner point encoder, which finds the picture's corner points and generates coordinate, direction and processing information for each one. This information is read by the bitslice processor which uniquely connects the corner points and stores the item boundaries as doubly linked lists in the memory. For most corner points, the corner point sorter hardware is used to determine to which boundary the corner point belongs.
The type of processing to be performed on a corner point is typically passed to the processor by a 6 bit code.
Each corner point is processed in a specific way, depending on its "in" and "out" vectors. In general, a corner point will:
1. Start a new boundary.
2/3. Connect immediately to the in/out side of the last point processed.
4/5. Connect to the in/out end of a boundary.
6. Merge two boundaries or close a boundary.
In cases 4/5, only the in or out free ends of the corner point chains are checked for a match, as determined by the corner code. If a corner point merges two chains, then a check must be made on both the in and out free ends of the corner point chains.
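For illustration only, the six processing cases above might be represented in software as follows. This is a hedged sketch: the enumeration names and the C form are assumptions; only the case meanings come from the text (the actual processing code is the 6 bit field produced by the corner point encoder).

typedef enum {
    CP_START_NEW_BOUNDARY = 1, /* 1:   start a new boundary                        */
    CP_CONNECT_LAST_IN    = 2, /* 2:   connect to the "in" side of the last point  */
    CP_CONNECT_LAST_OUT   = 3, /* 3:   connect to the "out" side of the last point */
    CP_CONNECT_CHAIN_IN   = 4, /* 4:   connect to the "in" end of a boundary       */
    CP_CONNECT_CHAIN_OUT  = 5, /* 5:   connect to the "out" end of a boundary      */
    CP_MERGE_OR_CLOSE     = 6  /* 6:   merge two boundaries or close a boundary    */
} CornerProcessing;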
The eight possible vector directions are represented by a 3 bit code. The "in" and "out" vector information as well as the coordinate of the corner point enables the proper sorting of the corner points.
To determine to which corner point chain a corner point belongs, the vector and coordinate information of the end corner points of all unclosed corner point chains must be stored in memory. Each unclosed chain has two ends to it, an end with a free "in" vector, and an end with a free "out" vector. The end point data from each such chain is stored in a linked list called a Sort Link Block (SLB) chain to speed up the sorting process. The coordinate and vector data from each end corner point is stored in the same word in memory along with the index to the next datum in the list. When a corner is received which must be sorted to the correct corner point chain, the processor searches the SLB chain until a match is found between the corner point and a corner point chain end. The match is signalled by the corner point sorter hardware when any of 3 conditions are satisfied:
Case 1:
Corner point in (out) vector = corner point chain out (in) vector, and Xcorner point = Xend point, and the in (out) vector = C10 (where C = "don't care").
Case 2:
Corner point in (out) vector = corner point chain out (in) vector, and Xcorner point - Xend point = Ycorner point - Yend point, and the in (out) vector = C01 (where C = "don't care").
Case 3:
Corner point in (out) vector = corner point chain out (in) vector, and Xcorner point - Xend point = -(Ycorner point - Yend point), and the in (out) vector = C11 (where C = "don't care").
Before the sorting process starts, the processor stores the corner point vector data in the sorter hardware and selects whether the corner point "in" or "out" vector is to be compared by the corner point sorting hardware.

This is determined by the corner point processing code. Then the carry flow between two slices of the microprocessor is interrupted to form two 16 bit slice microprocessors. Next the processor searches the SLB chain by reading the end point coordinate data from the memory and subtracting it from the corner point coordinate data to generate X and Y coordinate difference data. Because the carry flow between two slices has been interrupted, the X data will appear in the upper 16 bits of the microprocessor output, and the Y data will appear in the lower 16 bits. The ΔX and ΔY data along with the end point "in" and "out" vector information are sent to the sorter hardware, which substantially immediately signals the processor if a match was found. Preferably, also taking place in the same micro-instruction is the indexing of the memory address to the next boundary in the list. If this is done, a new boundary is compared every micro-instruction, and the sorting process is very fast.
The amount of hardware necessary to determine the corner point match has been considerably reduced by a unique combination of firmware and hardware. With the addition of a single "OR"
gate to the mainline processor to interrupt the carry flow between the 2nd and 3rd 4-bit slices, the processor may be microprogrammably controlled to perform arithmetic in a first mode as two simultaneous 16 bit calculators or in a second mode as a single 32 bit calculator. The connectivity process utilizes the first mode to generate the ΔX and ΔY data in firmware (prior to being passed to the hardware) while the analysis tasks utilize the second mode.
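A rough software analogue of this dual-mode arithmetic is sketched below. It only illustrates the idea of packing the X coordinate in the upper half and the Y coordinate in the lower half of a 32 bit word and keeping the two halves independent during a subtraction, as if the inter-slice carry were interrupted; it is not the actual bit-slice microcode, and the helper names are assumptions.

#include <stdint.h>

/* Pack an (x, y) coordinate pair into one 32 bit word: x in the upper 16
   bits, y in the lower 16 bits. */
static uint32_t pack_xy(uint16_t x, uint16_t y)
{
    return ((uint32_t)x << 16) | y;
}

/* Subtract the chain end point coordinates from the new corner point
   coordinates; each 16 bit half is computed separately so a borrow in the
   Y half cannot propagate into the X half. */
static uint32_t split_subtract(uint32_t corner_xy, uint32_t end_xy)
{
    uint16_t dx = (uint16_t)((corner_xy >> 16) - (end_xy >> 16));         /* delta X */
    uint16_t dy = (uint16_t)((corner_xy & 0xFFFFu) - (end_xy & 0xFFFFu)); /* delta Y */
    return ((uint32_t)dx << 16) | dy;
}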
The feature extractor in the exemplary embodiment of an overall visual image processor uses this invention to speed up picture analysis by a factor of up to 10 over previously used software techniques.

As previously mentioned, with such greatly enhanced corner point matching ability at hand, it thus becomes desirable to rapidly access the free end corner points of previously linked chains of corner points so that such free ends may be successively tested for match with a new corner point. Since the matching decision can be made in a single micro-instruction cycle time of the controlling microprocessor, the free end corner points of previously linked chains should be retrievable from memory in successive instruction cycle times.
That is, if there are, for example, 100 free end corner points of 50 previously linked but unclosed chains of corner points, to take full operational advantage of the enhanced corner point matching capability earlier described (one decision each instruction cycle time), it is necessary to present 50 of the 100 successive free end corner points, representing either the "in" or "out" free ends, to the corner point matching circuitry in successive instruction cycles until a match is found. Accordingly, in 50 instruction cycle times or less, it can be determined whether the free corner point under examination matches any of the previously linked free end corner points and, if so, exactly which ones are so matched. Thereafter, appropriate action can be taken to link the new corner point with the proper corner point chain.
To permit this rapid sequential retrieval of free end corner points from memory, a unique dedicated hardware data memory address indexing circuit is employed. It provides the ability to immediately transfer predetermined bits from a word read out of the data memory directly to the data memory address register so that, on the very next cycle, the next successive free end corner point may be retrieved for comparison in the corner point match logic (even though it is not stored in the next successive memory location).
Such free end corner points are preferably stored in an address-linked fashion such that part of the free end corner point data read from memory identifies the next (and/or the previous) free end corner point address. As may be recognized from the above description, the data memory contents involve doubly-linked lists. At a first level, the individual lists of previously linked corner points are address linked one to another to form corner chains and, in addition, the free end corner points of these linked chains or lists are themselves address linked to other free end corner points of similar chains already identified in the sorting process to form SLB chains. The special data memory address indexing hardware just described is provided in addition to the typical and usually provided instruction memory address indexing hardware associated with the instruction register of a data processor. In addition, in the preferred embodiment, a multiplexer is employed in the feedback circuitry from the data memory output to the data memory address register so that different permutations and/or combinations of subsets of the output data words may be fed back to the address register and/or so that other more traditional sources of address data may be used to fill the address register.
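The behaviour of such an address-linked traversal may be sketched in software as follows; the field width of the link, the array model of the data memory and the function names are assumptions of this sketch, not details of the dedicated indexing circuitry.

#include <stdint.h>

#define SLB_LINK_MASK 0x0000FFFFu      /* assumed: low 16 bits of each SLB word
                                          hold the address of the next end point */

extern uint32_t corner_memory[];        /* data memory containing the SLB words */

/* Walks the SLB chain starting at 'addr', testing each free end point word
   with 'match'; returns the address of the matching word, or 0 when the end
   of the chain is reached without a match. */
static uint32_t search_slb_chain(uint32_t addr, int (*match)(uint32_t word))
{
    while (addr != 0) {
        uint32_t word = corner_memory[addr];  /* read the free end point word    */
        if (match(word))
            return addr;                      /* match signalled by the sorter   */
        addr = word & SLB_LINK_MASK;          /* address register reloaded from  */
                                              /* bits of the word just read      */
    }
    return 0;
}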
These as well as other objects and advantages of this invention will be more completely understood and appreciated by studying the following detailed description of the presently preferred exemplary embodiment of this invention taken in conjunction with the accompanying drawings, of which:

FIGURE 1 is an overall block diagram of the system architecture employed in the presently preferred exemplary embodiment of this invention;
FIGURE 2 is a graphical depiction of the corner point encoding and sorting process utilizing Freeman chain codes as employed in the exemplary embodiment of this invention shown in FIGURE 1;
FIGURE 3 is a more detailed schematic diagram of the video processing subsystem shown in FIGURE 1;
FIGURE 4 is a flow chart of an exemplary firmware/hardware program used in the video processing subsystem shown in FIGURES 1 and 3;
FIGURE 5 illustrates the manner in which a detected corner point within a 3 x 3 pixel matrix may be identified and encoded, and the resulting exemplary format of a 32 bit digital word generated by the corner point encoder of FIGURE 1;
FIGURE 6 is a diagram depicting four exemplary matched corner point possibilities useful in understanding the exemplary embodiment of this invention;
FIGURE 7 is a more detailed schematic diagram of the corner point encoder shown in FIGURE 1;
FIGURES 8-10 comprise flow charts of the hardwired program implemented in the corner point encoder of FIGURE 7;
FIGURE 11 is a more detailed schematic diagram of the feature extractor/sorter shown in FIGURE 1;
FIGURE 12 is a schematic depiction of the doubly linked organization of data within the corner memory of FIGURE 11 depicting sort link block (SLB) data chains with each SLB also being linked to sorted blob descriptors (SBD) which are, in turn, linked to both free ends of previously linked corner point chains which, when finally closed, will describe a closed edge contour of the object under examination;

FIGURE 13 is a schematic depiction of the organized contents of the corner memory shown in FIGURE 11 after the corner point chains are completely linked or closed with each completely closed chain being linked to its own SBD
which represents the output data from the feature extractor/
sorter that is communicated to the core memory of the main CPU for access by user-generated decision processing;
FIGURES 14 and 15 are more detailed schematic diagrams of the dedicated decision logic hardware employed in the corner point match logic of FIGURE 11;
FIGURES 16-1 through 16-9 comprise a flow chart of the firmware program employed for controlling the microprocessor of the feature extractor/sorter shown in FIGURE
11;
FIGURE 17 is a flow chart of the hardware/software interfaces with the program utilized to control the operations of the main CPU of FIGURE 1;
FIGURES 18-20 depict exemplary interrupt service routines which the main CPU is programmed to perform in interfacing with the video processing and image processing subsystems; and FIGURE 21 is a flow chart of an exemplary user-generated program for the main CPU.
The process or system to be controlled, analyzed or monitored by this invention may take many and varied forms. However, in the exemplary embodiment of FIGURE
1, a typical application is depicted. Here, manufactured parts 100 are rapidly passed by a visual inspection station 104 on a moving conveyor belt 102. If the part is determined to meet acceptable criteria (e.g. predetermined
quantized geometric features) it is permitted to pass uninterrupted on the conveyor belt or to be ejected into an "acceptable" bin or the like. On the other hand, if the part is determined to be defective, it is ejected from the belt into a "reject" bin (or perhaps permitted to continue if the acceptable parts are instead ejected from the belt). Typically, as many as 900 parts per minute may pass inspection station 104 and there may be plural similar conveyor belts and inspection stations requiring similar capabilities for similar or different manufactured parts. The exemplary embodiment of this invention may be capable of simultaneously servicing up to four such independent inspection stations, each of which may pass up to 900 parts per minute.
Typically, as shown in FIGURE 1, the part 100 under inspection will itself interrupt a light beam or otherwise generate a "picture-taking" or "part-in-position" signal on line 106 which will, in turn, trigger a conventional strobe flash light 108 (e.g., the General Electric Model PN 2120 strobe illumination system) and a conventional electronic video camera 110 capable of converting the visual field of view within inspection station 104 to a sequence of digital electronic video signals, each of which represents a corresponding elemental picture element (i.e. pixel) of the field of view. A new frame of such video signals is provided in response to each stimulation via line 106 (provided that such signals do not occur in excess of some predetermined upper limit such as 900 parts per minute).
A suitable conventional camera used with the presently preferred exemplary embodiment of this invention is a solid state CID (charge injection device) camera manufactured by General Electric Company and marketed under Model No.
TN 2500. Such video camera apparatus for capturing and reading out digitized video signals is also described in prior U.S. Patent No. 3,805,062 - Michon et al dated April 16, 1974 and U.S. Patent No. 3,993,897 to Burke et al dated November 23, 1976, both of which are, at present, commonly assigned herewith. As indicated in FIGURE 1, additional cameras 110 (up to three) may be provided in the exemplary system so as to monitor up to four separate vision inspection stations if desired.
As shown in FIGURE 1, the visual image processing system of this invention is divided into three major subsystems (in addition to the conventional camera inputs, the system under control or observation, the video monitor, etc.). In particular, the overall architecture of the system is divided into a video processing subsystem 200, an image processing subsystem 300, and an overall system/
decision processing subsystem 400. These three subsystems are operatively interconnected via a common bus conductor system (e.g. an Intel MULTIBUS system) of address, data and control conductors 500.
Each frame of digitized video data emanating from camera(s) 110 includes a 256 by 256 array of pixels with each pixel being represented by an eight bit digital word representing the detected gray level for a respectively corresponding pixel of the CID camera. To reduce the volume of data involved, each pixel value is immediately thresholded with a controllable digital threshold word (value) so as to result in but a single bit for each pixel. In effect, any pixel having a gray level equal to or greater than the threshold value will be represented by a single "1" valued bit (or alternatively a "0" valued bit) while any pixel having a gray level below the threshold value will be depicted by a single "0" valued bit (or alternatively a "1" valued bit). Such "thresholding" is one of the functions performed by the camera image interface (CII) 201.
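A minimal software sketch of this thresholding step is given below. The frame dimensions follow the 256 by 256 array described above; packing eight binary pixels per byte is an assumption of the sketch rather than a detail of the CII hardware.

#include <stdint.h>

#define FRAME_DIM 256

/* Reduce an 8 bit gray level frame to one bit per pixel by comparison
   against a programmable threshold word; "1" at or above the threshold. */
static void threshold_frame(const uint8_t gray[FRAME_DIM][FRAME_DIM],
                            uint8_t binary[FRAME_DIM][FRAME_DIM / 8],
                            uint8_t threshold)
{
    for (int y = 0; y < FRAME_DIM; y++) {
        for (int x = 0; x < FRAME_DIM; x++) {
            if (gray[y][x] >= threshold)
                binary[y][x / 8] |= (uint8_t)(1u << (x % 8));
            else
                binary[y][x / 8] &= (uint8_t)~(1u << (x % 8));
        }
    }
}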
The resulting 256 by 256 bit array of binary valued video data for a given image "frame" is then steered to a desired (i.e. available) plane of the image plane memory (IPM) 202 where it is temporarily stored (together with data identifying the camera from which it emanated, the frame sequence etc.) awaiting further processing by the image processing subsystem 300. The image control and display (ICD) module 203 includes microprocessor No. 1 (type 8748 in the exemplary embodiment) which has been programmed so as to synchronously control read out of IPM data to a video generator for the video monitor and, in the otherwise available time, to also control the CII
201 and the IPM 202, to accept the "part-in-position"
input on line 106 and to generate in response appropriate output signals to trigger the strobe flash light 108 and the initiation of the readout cycle for the appropriate camera(s) 110.
Because the video processing subsystem 200 includes independent decision processing logic (including some dedicated hardware logic), it is enabled to quickly respond to the presence of a part within the inspection area of a monitored line so as to trigger the strobe flash to capture a frame of video data within the camera of that particular inspection area. As soon as time permits, a readout cycle of the appropriate camera(s) is initiated while substantially simultaneously thresholding the digital output of the camera and steering the resulting thresholded data to a currently available temporary buffer storage area within the image plane memory 202 so that it will be immediately available whenever the image processing subsystem 300 is next available for processing it.
In the exemplary system, all of these video processing subsystem functions are carried out under the general supervision of the main CPU in the overall system/decision processing subsystem 400 via the common bus conductors 500. New frames of video data are captured, thresholded and temporarily stored until needed by the video processing subsystem 200, requiring only a minimum of supervisory control by the system/decision processing subsystem 400.
This division of labor permits the video processing subsystem 200 to perform these vital initial data compression and buffer storage functions for plural independent camera monitoring systems. Other functions are also performed by the video processing subsystem. For example, the video monitor 112 is synchronously fed digital video signals representing the contents of one or more selected planes within the image plane memory.
Alternatively, the monitor 112 may be connected so as to directly receive and display the gray scale video output of any of the cameras 110 as may be desired.
In the exemplary embodiment, the image processing subsystem 300 signals the system/decision processing subsystem 400 whenever it finishes the processing of a frame of data and is thus next available for processing another frame. Thereafter, the main CPU in the system/
decision subsystem 400 signals the video processing subsystem 200 to transmit another frame of thresholded and previously stored digital video data from the image plane memory 202 to the corner point encoder (CPE) 301 of the image processing subsystem 300. The corner point encoder 301 is, in the exemplary embodiment, a free running dedicated hardware device designed to automatically identify and encode corner points within an input frame of binary valued video data as will be explained in detail below.
A simple list of the x, y coordinates of the corner points as well as the IN and OUT vectors associated with each corner point is then stored in a "first-in-first-out" (FIFO) memory 302 which acts as an output buffer for the corner point encoder 301. That is, a dedicated logic decision module is permitted to run freely to identify and encode corner points and to store them in the FIFO buffer memory 302. Accordingly, this function may be performed substantially independently and asynchronously with respect to the other ongoing functions being performed within the overall system.

The list of unsorted but encoded corner points in FIFO memory 302 may then be accessed as required by another dedicated decision logic module, the feature extractor/sorter (FES) 303 (including microprocessor No. 2 which, in the exemplary system, is a type 2901). The feature extractor/sorter identifies and constructs closed linked sets (sometimes referred to as chains, subsets, lists, etc.) of the corner points first identified by the corner point encoder 301 in feature memory 304.
In the exemplary embodiment, many of the desired geometric features to be associated with each thus identified closed edge contour of the object under examination are incrementally calculated during the sorting process.
That is, each time a corner point is added to a linked chain of such points (which will eventually, when closed, represent a closed edge contour), an incremental calculation is made of the perimeter, area, centroid, principal angle, maximum-minimum xy boundaries, etc. By making these calculations incrementally as each new corner point is associated with a linked list of corner points, an overall time saving is often possible when compared with a more traditional approach which waits until the entire closed edge boundary is identified before performing and summing similar incremental calculations. This is so because the incremental calculations can often be performed right at the time a new corner point is linked to a given chain while the image processing subsystem 300 is otherwise performing steps required to initiate the identification of yet another linked corner point. In other words, time that might otherwise be spent only in setting up the decision logic to identify another linked corner point may be simultaneously used to perform incremental calculations of predetermined geometric features that have to be made at some point anyway. The resulting linked lists or chains of corner points identifying closed edge contours of the object under test as well as many quantized geometric features of such contours are thus formed in the feature memory 304. Such data is thereafter transferred via the common bus conductor system 500 to an on-board RAM within the main CPU 403 of the system/decision processing subsystem 400.
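As a hedged illustration of this incremental ("atomic") feature accumulation, the C sketch below adds per-segment contributions to the area, perimeter, coordinate sums and bounding box each time a new corner point is linked onto a chain. The particular formulas (the shoelace area term, the segment length) are standard polygon identities chosen for the sketch, not expressions taken from this specification.

#include <math.h>
#include <stdint.h>

typedef struct {
    long   area2;               /* twice the signed enclosed area (shoelace sum) */
    double perimeter;           /* running boundary length                       */
    long   sum_x, sum_y;        /* running coordinate sums                       */
    int    n_points;            /* number of corner points linked so far         */
    int    min_x, max_x;        /* bounding "window" of the blob                 */
    int    min_y, max_y;
} BlobDescriptor;

/* Called as each new corner point (x, y) is linked after (prev_x, prev_y). */
static void add_corner(BlobDescriptor *b, int prev_x, int prev_y, int x, int y)
{
    b->area2     += (long)prev_x * y - (long)x * prev_y;   /* shoelace term      */
    b->perimeter += hypot((double)(x - prev_x), (double)(y - prev_y));
    b->sum_x     += x;
    b->sum_y     += y;
    b->n_points++;
    if (x < b->min_x) b->min_x = x;
    if (x > b->max_x) b->max_x = x;
    if (y < b->min_y) b->min_y = y;
    if (y > b->max_y) b->max_y = y;
}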
The system/decision processing subsystem 400 is a typical microprocessor-based system having a core memory 401 for storing data that is to be directly accessed by the main CPU and a PROM memory 402 for storing computer programs (e.g. user-generated programs, system and/or executive programs, etc.). Its overall operation is controlled by a main CPU which, in the exemplary embodiment, is an Intel single board computer 86/12A which includes an 8086 microprocessor (the 3rd microprocessor in the exemplary embodiment). The main CPU 403 communicates with a conventional keyboard/display terminal 404 via a conventional RS232 communication link. Standard combination input/
output boards 405 and/or conventional I/O relays 406 are provided for generating suitable drive signals to reject/accept actuators or other conventional components of the system under control or observation. If desired,
a mass storage device (e.g. a cassette recorder) 407 may be provided for storing additional user-generated computer programs or the like. User-generated programs may of course be originally input via the keyboard 404 or through an I/O device such as the recorder 407.
As will be appreciated, once the closed edge contours within a given frame have all been identified (each by a closed linked set of corner points) and/or once the predetermined geometric features calculated by the feature extractor/sorter 303 have been quantized and stored in the RAM memory within 403, user-generated programs may be designed as desired to analyze such closed edge contours, to compare these quantized parameters with previously stored comparison parameters and/or to otherwise devise suitable decision logic for accepting/rejecting or otherwise analyzing the closed contours under examination. In the exemplary system, each closed edge contour is represented by a closed linked set of corner point coordinate data which is itself linked to a collection of data describing predetermined geometric features of the "blob" thus defined (e.g., the number of corner points involved, the maximum and minimum x, y coordinates, the x, y coordinates of the centroid, the area, the perimeter, the principal angle, etc.). Other features of the thus identified and described "blob" may of course be quantized and/or otherwise analyzed by user-generated decision logic as may be desired.
The overall functions being performed in this system are generally depicted at FIGURE 2. Although a frame of digital video data in the exemplary system actually comprises 256 x 256 pixels, the example shown in FIGURE
2 has been simplified to an 8 x 8 array of pixels. The thresholded (e.g., binary valued) digital video data for one frame is depicted at 120 to represent a rectangular part having one corner cut off at a 45° angle and an inner rectangular hole slightly displaced toward the lower left from the center of the part. As will no doubt be understood, the shaded pixels of this 8 x 8 array would be represented by a binary "1" value (or, alternatively, a "0" value) while the unshaded pixels would be represented by a "0" value (or, alternatively, a "1" value).
As shown at 122 in FIGURE 2, the outside closed edge boundary of the object under examination is represented by a sequence of eight "corner points" having "IN" and "OUT" vectors associated therewith which define a counterclockwise-directed closed edge contour when successively linked together in the order of their appearance in a counterclockwise circuit. Similarly, the inside closed edge contour of the "hole" of the object under examination is represented by eight "corner points" each of which has an "IN" and an "OUT" vector associated therewith such that they define a clockwise-directed circuit of this edge contour if these corner points are appropriately linked in a closed set. It should now be appreciated that the useful information content of the image depicted in FIGURE
2 can be simply represented by two linked lists of eight corner points each. That is, if one knows the x, y coordinates of all the corner points and also knows the sequence with which these corner points are to be linked so as to traverse a closed edge contour, then the geometry of this contour and hence of this portion of the object under test is quantitatively determined with the result then being available for user-generated decision testing to see if the object meets predetermined specifications or to "recognize" an object, etc.
The general technique of uniquely defining corner points (having x, y coordinates and IN/OUT vectors) about the edge contours of a binary-valued image, then uniquely linking such encoded corner points into associated lists together defining the image, probably can be implemented using many possible variations of corner encoding and sorting processes. The preferred implementation in accordance with the present invention uses a 3 x 3 pixel encoding mask 124 which is (conceptually) raster scanned from left-to-right, top-to-bottom of the thresholded digital video data frame. That is, the center pixel of this 3 x 3 pixel encoding mask (marked with an X in FIGURE 2) is successively scanned in a raster-like fashion over the thresholded digital video data frame one pixel at a time, from left to right, top to bottom. So as to permit the encoding process to proceed even at the edges of a frame and to also ensure that each edge contour in a given frame is indeed a "closed" edge contour, an outer "0" valued boundary 126 which is one pixel wide may be (conceptually) added about the periphery of a frame before the 3 x 3 encoding mask is passed over the pixels of the actual data frame.
At any given location during this scanning process, there are 512 possible contents of the 3 x 3 pixel encoding mask. In accordance with a predetermined look-up table, some of these 512 combinations may be defined as representing one or more valid corner points. For example, referring briefly to FIGURE 5, one of the 512 possible contents of the 3 x 3 encoding mask is depicted which may be considered to define a corner point A. To permit a "matched" connection with another corner point about a closed edge contour of the object under examination, both "IN" and "OUT" vectors are defined for each corner point in accordance with the IN/OUT vector coding diagram shown in FIGURE 2 (i.e., codes 0-7 to represent successive 45° increments in a clockwise direction starting from an initial horizontal vector "0" directed to the right).
For example, for the exemplary corner point A of FIGURE 5, the IN vector would have a value of 5 and the OUT vector would have a value of 3 in accordance with this IN/OUT vector coding scheme. To ensure unique coding of corner points in this fashion so that they may be later uniquely sorted into sets which define completely closed edge contours, if a corner point is defined at any given pixel position of the 3 x 3 pixel encoding mask 124, it is considered to be located either in position A or in position B as depicted in FIGURE 2. In some cases, "double" corner points may be defined by a given content of the 3 x 3 encoding mask. For example, the first corner point situation encountered in raster scanning of the example shown in FIGURE 2 defines such a double corner point which actually comprises two separate corner points 1a and 1b each having x, y coordinates and IN/OUT vectors as depicted in FIGURE 2.
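For reference, the 0-7 direction coding just described may be tabulated as pixel offsets. The table below is an assumption drawn from that description (code 0 pointing right, each successive code advancing 45° clockwise, with y increasing downward in raster order); it is not copied from the FIGURE 2 diagram.

/* x and y steps for direction codes 0-7 */
static const int DIR_DX[8] = {  1,  1,  0, -1, -1, -1,  0,  1 };
static const int DIR_DY[8] = {  0,  1,  1,  1,  0, -1, -1, -1 };
/* Example: an OUT vector coded 3 points one pixel left and one pixel down. */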
The 3 x 3 pixel encoding mask configuration enables complete encoding of a video data frame on a single pass, thus avoiding the need for frame buffer storage as in multiple-pass encoding schemes. This 3 x 3 configuration also is of advantage in facilitating single pixel noise cancellation. Any time the pixel which is centered in the 3 x 3 window is different from all of its immediately neighboring pixels within the window, such single pixel may be assumed to represent noise and the pixel data may be discarded, thus affording significantly improved noise immunity with only small additional system complexity and cost.
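A simple software sketch of this single pixel noise test follows; the 3 x 3 window representation and the choice to flip (rather than merely flag) an isolated center pixel are assumptions of the sketch.

#include <stdint.h>

/* Returns the (possibly corrected) value of the center pixel of a 3 x 3
   window of binary pixels, win[1][1] being the center. */
static uint8_t despeckle_center(const uint8_t win[3][3])
{
    uint8_t center = win[1][1];
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++) {
            if (r == 1 && c == 1)
                continue;
            if (win[r][c] == center)
                return center;          /* at least one neighbor agrees: keep it */
        }
    return (uint8_t)(center ^ 1u);      /* isolated pixel: treat as noise, flip  */
}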
As will be explained in more detail below, the corner point encoder 301 is especially designed to perform this raster scanning, corner point detection and encoding function on command so as to produce a complete list of encoded corner points (stored in the FIFO memory 302) for each frame of thresholded video data as it is supplied thereto from a selected IPM plane. The exemplary listing of such encoded corner points in FIGURE 2 has been denoted in base 10 numerical notation for explanatory purposes although it will be appreciated that the quantized x, y coordinates and IN/OUT vectors are actually quantized using base 2 numerical notation in the exemplary embodiment. For example, the format of the 32-bit binary word used in the exemplary system FIFO memory 302 for storing successive detected and encoded corner points is shown in FIGURE 5. As may be seen, 18 binary bits are provided for storing the pixel x,y coordinates while six binary bits are provided for storing the OUT/IN vector codes associated with that corner point. Six binary bits are also provided for a "corner code" which may be used to identify particular types of detected corners. For example, if a "double" corner is detected, it may be sufficient to merely store the x,y coordinates and IN/OUT vectors for one of the corner points together with a suitable "corner code" that may subsequently be translated (e.g.
by the FES) into a second or B encoded corner point word having x,y coordinates and IN/OUT vectors which differ by predetermined known factors from the x,y coordinate and IN/OUT vector data actually stored for the first or A
corner point. In this way, two related or "double"
corner points may be stored in the FIFO memory utilizing only a single 32-bit word instead of requiring two 32-bit words so as to separately store each actual corner point.
The two remaining "sign" bits associated with the x,y coordinates of the encoded corner point word shown in FIGURE 5 are not used in the exemplary embodiment.
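One possible packing of such a 32-bit corner point word is sketched below. Only the field widths (18 coordinate bits, 3 + 3 vector bits, a 6 bit corner code and 2 unused sign bits) come from the text and FIGURE 5; the particular bit positions chosen here are assumptions.

#include <stdint.h>

static uint32_t pack_corner_word(uint16_t x, uint16_t y,
                                 uint8_t in_vec, uint8_t out_vec,
                                 uint8_t corner_code)
{
    return ((uint32_t)(x & 0x1FFu))                  /* bits  0-8 : x coordinate (9 bits) */
         | ((uint32_t)(y & 0x1FFu)          <<  9)   /* bits  9-17: y coordinate (9 bits) */
         | ((uint32_t)(in_vec & 0x7u)       << 18)   /* bits 18-20: IN vector             */
         | ((uint32_t)(out_vec & 0x7u)      << 21)   /* bits 21-23: OUT vector            */
         | ((uint32_t)(corner_code & 0x3Fu) << 24);  /* bits 24-29: corner code           */
                                                     /* bits 30-31: unused sign bits      */
}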
Analysis of the encoded corner point list of FIGURE 2 reveals that corner points 1a and 1b, for example, should be "linked" so as to at least partially define a closed outer edge contour. For example, the differences ΔX and ΔY (between x coordinates and between y coordinates) for these corner points are equal (signifying a 45° line would connect these corner points) while the "OUT" vector for corner point 1a equals or "matches" the IN vector of corner point 1b. It is also apparent that corner point 2 should be linked to corner point 1a since it is the next successive corner point encountered in the same line of the raster scanning process (thus insuring that its y coordinate is equal to the y coordinate of corner point 1a) and its OUT vector equals the aligned IN vector of corner point 1a. By performing such decision logic, the encoded corner points in FIGURE 2 may be sorted into a linked corner block (SLB) chain (e.g. see solid and dotted arrows illustrating linkage).
Once such basic connectivity decisions have been made, the IN/OUT vector data may be discarded since a completely closed linked set of corner points individually defined by respective x,y coordinates is sufficient to define a closed edge contour. In the example shown at FIGURE 2, there are two such closed edge contours and they are represented by the two SLB chains depicted in FIGURE 2. In the exemplary embodiment, the x, y coordinates for each corner point are stored together in one 32 bit word together with linking addresses to the previous and to the subsequent corner point data word in the chain so as to form a completely closed linked set of corner point coordinate data.
During the corner point matching process, the IN/OUT
vectors for the free IN and OUT corner points of a partially linked (i.e., not yet closed) set of encoded corner points are also stored in a linked list (SLB chain) so as to permit them to be tested for a "match" against other encoded corner points not yet sorted into a particular corner point chain.
The match determination is made by the corner point match logic hardware. Furthermore, as soon as another corner point is stored into a given corner point chain, advantage is taken of any available data processing time to immediately make incremental calculations of predetermined "atomic" geometric features thus avoiding the necessity to make all such calculations at one time after the entire SLB has been defined. Once all of the encoded corner points have been sorted into their proper chain, sorted blob descriptor (SBD) data is thus quickly available for each chain so as to quantitatively supply certain predetermined geometric features of the thus described blob. In the example shown at FIGURE 2, each SBD may include data identifying the number of individual corner points included within a given blob, the minimum and maximum x, y coordinates of the blob (thus defining a minimum "window" or "box" which contains the blob), the x, y coordinates of the centroid for the blob (assuming a homogeneous blob of uniform thickness), the principal angle, perimeter, etc. These quantized descriptive data are then stored in other memory facilities available to user-generated decision processing logic and represent the useful basic information gleaned by the visual image processing system of this invention for each frame image.
The overall process depicted in FIGURE 2 represents a considerable data reduction process. For example, the gray scale valued digital data actually provided by the camera represents approximately 512,000 bits per frame.
After the initial thresholding process, each frame is represented by only about 64,000 bits. Assuming that the frame contains on the order of 500 corner points, the useful information in the frame may be represented by approximately 16,000 bits per frame of encoded corner point and SLB data. If only one blob is identified in a given frame, then the SBD for that blob may be contained within approximately 1,000 bits per frame. Ultimately, the user-generated decision processing logic will analyze the data contained within the SBD so as to generate perhaps only a single bit per frame (i.e., reject/accept).
The video processing subsystem 200 is shown in more detail at FIGURE 3. The timing for this entire subsystem is provided by timing generator 203-1 which produces system clock signals at an approximately 4.5 MHz rate and also drives the video generator 203-3. In the exemplary embodiment, the timing generator 203-1 is included as a part of the image control and display unit 203 which also includes an 8748-type microprocessor-based data processing system 203-2. This processor is programmed so as to synchronously supply video data to a conventional video generator 203-3 so as to provide one source of video signals for the video monitor 112. The "end active video" and "start active video" signals are supplied by the timing generator 203-1 and/or the video generator 203-3 so as to permit the 8748 microprocessor to synchronize its operations with the need to synchronously supply video data input to the video generator from the IPM. In the data processing time otherwise available, the 8748 is also programmed:
(a) to accept a "picture-taking" or "part in position" interrupt signal on line 106 (also shown in FIGURE 1), (b) to produce mode control signals for the image plane memory, (c) to provide data steering and mode controlling signals for the camera image interface 201 (as well as charge injection inhibition signals to the camera effectively causing the camera CID elements to act as a temporary storage buffer until a desired frame of data may actually be read therefrom into the IPM) and to activate the strobe light 108, and (d) to control interrupts, data transfers, acknowledgements, etc., with the main CPU
403 via the common bus system 500.
For example, the main CPU 403 may issue control information to the ICD 203 by addressing particular addressable control registers 203-4 through a conventional address decoder 203-5 on the address lines of bus 500 while data is transferred to the desired control register via the data lines of bus 500 and conventional controllable tri-state buffers 203-6 (whose control inputs may be provided by the outputs of the address decoder 203-5, as should be appreciated). Similar address decoders 202-1 and 201-1 as well as further addressable control registers 202-2 and 201-2 are respectively provided in the image plane memory 202 and camera image interface 201 so as to similarly permit the reception of data signals from the main CPU 403.
The camera control unit (CCU), a conventional unit supplied with the GE Model TN-2500 camera, provides eight parallel bits per pixel. As shown in FIGURE 3, each camera output is synchronously clocked into a respective 8-parallel bit register (typically 74LS374) 201-3 through 201-6. Four 8-bit digital threshold values T1-T4 are supplied by the main CPU 403 via the addressable registers 201-2. These digital threshold values T1-T4 are supplied to respective comparators (typically 74LS85) 201-7 through 201-10 as shown in FIGURE 3. Each 8-bit gray level valued pixel signal from registers 201-3 through 201-5 is also directly connected to another input port of respective comparators 201-7 through 201-9 as shown in FIGURE 3 such that the outputs of these comparators directly represent binary thresholded digital data emanating from a respective one of three different cameras. That is, whereas the camera input to the CII 201 includes eight bits per pixel in a 256 x 256 array of pixels for each frame of video data, the output from the thresholding comparators includes only a single binary valued bit for each such pixel.
To provide added flexibility, the second input port for comparator 201-10 is not connected directly to the output of register 201-6 (which is, in turn, associated with CCU 4). Rather, the second input of comparator 201-10 is taken from the output of multiplexer 201-11 or, alternatively, as a test word provided by register 201-12 (which constitutes another addressable register which may be filled by the main CPU 403 via the bus system 500). Thus, the second input of comparator 201-10 may be selectively controlled by the main CPU 403 to perform special functions.
For example, the output of a given camera could be simultaneously thresholded against two different threshold values to provide two respective frames of thresholded data which may be individually analyzed, with the resulting feature data being processed so as to provide the desired user-generated image analysis. For example, if the part under inspection comprises two different parts having quite different reflectivities, then images of the parts might be effectively separated by such dual thresholding techniques.
Of course, multiplexer 201-11 might also be controlled so as to normally pass the output of register 201-6 to the second input of comparator 201-10, thus providing one thresholded bit per pixel per camera to the data steering multiplexers 201-13 (typically LS253). By applying suitable steering controls to the multiplexers 201-13, any one of the four incoming single bit lines may be connected to any one of the four outgoing single bit lines. This feature also provides added flexibility to the system. For example, if camera No. 1 is monitoring a process which requires extremely high frame rates while camera No. 2, for example, is monitoring another line which produces only very low frame rates, a frame from camera No. 1 may be steered onto the multiplexer output line normally used for camera No. 2 or other output lines as they are from time-to-time available for this purpose.
A linear or gray-scale mode of operation is also permitted for special purposes (e.g., to drive the video monitor) by controlling tri-state buffers 201-14 to pass a 6-bit gray-scale signal (the two least significant bits of the camera output are dropped) via multiplexer 201-11 from any desired one of the four cameras. Three of the output lines from tri-state buffers 201-14 are shared with the output from tri-state buffers 201-15 at the output of multiplexers 201-13. Accordingly, there are actually eight data bit lines at the output of the CII 201 for transferring video data to the IPM 202 as shown in FIGURE 3.
The synchronization error detector 201-16 shown in FIGURE 3 is preferably also included to compare the CCU clock signals with the ICD system clock signals. If excessive synchronization errors are detected, suitable status indicators and data output registers 201-17 are provided so as to alert a supervising operator as well as the main CPU 403 via the bus system 500.
As previously indicated, the input 8-parallel bit registers are synchronously clocked by system clocks from the ICD so as to simultaneously present their respective output words to the thresholding comparators. The digital threshold words (as well as the proper multiplex selection control signals for the fourth channel) are transmitted to the addressable data and control registers 201-2 by the main CPU via the bus conductors 500. Other shorter term signals (steering control signals, mode controlling signals selecting linear or non-linear mode, etc.) are generated by the ICD 8748 microprocessor. In other words, the longer term data and control signals which change relatively infrequently are supplied by the main CPU 403 but other data and/or control signals which require faster response times are supplied by the distributed logic of the 8748 microprocessor in the ICD 203.
The dynamic image plane memory 202-3 in the exemplary embodiment is not capable of directly handling the approximately 4.5 MHz data rates present on the 8-bit data output channel provided by the CII 201. Accordingly, a 32-bit register 202-4 (typically 74LS374) is connected to be cyclically filled on four successive clock cycles so as to act as buffer storage and effectively reduce the data rate actually presented to the dynamic memory 202-3 to a level that can be handled (e.g., about 1.125 MHz). It will be seen that a similar 32-bit buffer register 202-5 may be filled with overlay frame data from the main CPU 403 via bus system 500 and presented to the input of the dynamic memory 202-3.
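The rate reduction follows simply from packing four successive 8-bit words into one 32-bit word, so that the memory sees one write for every four input clocks (4.5 MHz / 4 = 1.125 MHz). A minimal C model of that packing, with hypothetical names, is sketched below.

    #include <stdint.h>

    /*
     * Illustrative model of the 32-bit buffer register 202-4: four successive
     * 8-bit words arriving at the full clock rate are packed into one 32-bit
     * word, so the dynamic memory need only accept writes at one quarter of
     * that rate.
     */
    typedef struct {
        uint32_t word;   /* accumulating 32-bit value              */
        int      count;  /* how many bytes have been packed so far */
    } pack_buffer_t;

    /* Returns 1 (and delivers the packed word) on every fourth input byte. */
    static int pack_byte(pack_buffer_t *b, uint8_t byte, uint32_t *out)
    {
        b->word |= (uint32_t)byte << (8 * b->count);
        if (++b->count == 4) {
            *out = b->word;
            b->word  = 0;
            b->count = 0;
            return 1;
        }
        return 0;
    }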
In the exemplary embodiment, each frame of thresholded video data comprises only 256 x 256 bits and there are eight such planes of dynamic memory 202-3 provided for temporarily storing such data until it can be processed by the image processing subsystem 300. The dynamic memory is normally addressed in a cyclical fashion by address signals conventionally generated in the ICD timing generator 203-1 via address multiplexer 202-6 (typically MC3242). However, alternatively, addressing of the dynamic image plane memory 202-3 may be effected directly by the main CPU 403 via bus system 500 and tri-state buffers 202-7 or from selected outputs of the CPE of the image processing subsystem via tri-state buffer 202-8. The output of the dynamic memory 202-3 is accumulated in two 32-bit registers 202-9 and 202-10 (typically type 74LS374) from which it is read out at a higher (approximately 4.5 MHz) bit rate over an 8-parallel bit output data channel. The output of register 202-9 provides the video input to the video generator 203-3 previously discussed while the output of register 202-10 provides output to the corner point encoder 301 or, if desired, to the main system/decision processing subsystem 400 via bus conductors 500.
The dedicated decision logic flow diagram for control of the video processing subsystem 200 is shown at FIGURE 4.
Initially, after power is applied, decision loop 204 is entered until the video generator 203-3 is ready to begin an "active video time" (i.e., the raster scanning of the video monitor). At this time, microprocessor 203-2 generates an interrupt signal to the main CPU 403 (such an interrupt is generated only once each frame, approximately each 33 milliseconds). Once the main 8086 microprocessor has been interrupted at task 206, it writes system configuration control signals into addressable registers within the video processing subsystem as previously described.
The main CPU 403 is thus permitted to define which cameras are to be active, which overall modes of operation are to be followed in the various components of the video processing subsystem, etc.
After receiving such initialization and control information from the main CPU at block 206, decision block 208 is entered. Here, a test is made to see whether the system has been configured so as to permit hardware triggering or software triggering of the picture-taking function. If
hardware triggering is indicated, then decision block 210 is entered until a part on one of the monitored lines moves into position so as to generate a "part in position"
or a "take snapshot" signal on line 106 (see FIGURE 1).
Once such a hardware generated trigger signal is detected, then at block 212, the ICD generates appropriate control signals to energize the appropriate strobe lamp and to inhibit further charge injection in the appropriate CID
camera (i.e., in effect to latch a snapshot of the pre-defined field of view for that camera within the CID array of that camera). Thereafter, a test is made at 214 to determine whether there is available time for the ICD
microprocessor 203-2 to perform the necessary computations required for steering and storing the data that is now present in the camera of interest. Typically, such ICD
computation time is begun (and completed) during the vertical retrace time of the video monitor so that the ICD microprocessor 203-2 may properly drive the video generator 203-3. When an available computation time is detected at 214, the ICD microprocessor performs task 216 which determines which cameras now contain data that is to be thresholded and loaded into the IPM. Another test is made at 218 to wait for the onset of the next "active video time" and thus ensure that the ICD microprocessor is not interfering with any required active video time.
When the onset of such a time is detected, then the ICD
microprocessor 203-2 is programmed at 219 to initiate the readout of complete frames of camera data from the selected cameras to available planes of the dynamic memory 202-3 and, simultaneously, to supply the requisite video data input to the video generator 203-3 from the IPM. Preferably each camera is normally assigned to a predetermined IPM plane.
However, if the normally assigned plane is temporarily unavailable, then any other available plane is assigned for temporary buffer storage purposes. After initiating the thresholding and steering of such data to/from the IPM, the ICD also communicates with the main CPU 403 at tasks 221 and 223 to inform it that these particular frames have been "taken" and stored in particular IPM planes while the main CPU, in turn, informs the ICD microprocessor as to which previously stored frames of thresholded data have now been processed by the image processing subsystem 300. These latter planes of the IPM are thus again made available for storage of future frames of video data.
In short, the ICD microprocessor is programmed so as to be informed (via the main CPU 403) as to which cameras are of interest. It also maintains a map of the available IPM space into which future frames of video data can be steered. All desired video frames are written to their respectively steered IPM planes in parallel if there is sufficient IPM space available for storage. Any leftover frames of camera data which are to be stored but which cannot presently be stored due to the unavailability of IPM space are identified and serviced at the next available time. The relative age of each such unserviced video frame is also maintained so that after some predetermined maximum time has elapsed, further charge injection in the CID camera in question is permitted (effectively losing that frame) and, of course, the main CPU 403 is informed of this event by passing appropriate status information during the next communication with the main CPU. The ICD microprocessor also maintains an account of those frames of data still stored in the IPM but now fully processed by the image processing subsystem so that future frames of data to be captured may be steered to these now reusable memory locations. Furthermore, the ICD microprocessor is programmed so as to keep the main CPU 403 fully informed as to which frames of data have been effectively captured during the previous active video time.
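A minimal sketch of the kind of plane map such firmware might keep is given below; the structure, field names and the aging limit are illustrative assumptions rather than details taken from the disclosure.

    #include <stdint.h>

    #define NUM_IPM_PLANES 8
    #define MAX_FRAME_AGE  4   /* hypothetical limit, in frame times */

    /* One bookkeeping entry per IPM plane, of the kind the ICD firmware keeps. */
    typedef struct {
        uint8_t in_use;   /* 1 while the plane holds an as yet unprocessed frame   */
        uint8_t camera;   /* camera whose frame is (or will be) stored here        */
        uint8_t age;      /* frame times a pending, unserviced capture has waited  */
    } ipm_plane_t;

    static ipm_plane_t plane_map[NUM_IPM_PLANES];

    /* Find a free plane, preferring the camera's normally assigned plane. */
    static int allocate_plane(uint8_t camera)
    {
        if (!plane_map[camera].in_use)       /* normally assigned plane free?  */
            return camera;
        for (int p = 0; p < NUM_IPM_PLANES; ++p)
            if (!plane_map[p].in_use)
                return p;                    /* any other available plane      */
        return -1;                           /* no space: frame must wait      */
    }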
A timing and mode control unit 202-11 is also provided in the IPM. It receives the system clock at approximately 4.5 MHz as well as mode control signals from the ICD.
Conventional frequency dividers and logic gates are employed so as to provide reduced frequency read/write or read/
modify/write phase I and phase II signals. These control signals are alternately "on" for approximately 440 nanosecond periods. During a phase I period, the IPM
may be mode controlled either to output data to the corner point encoder or to permit the main CPU to read or write data either from or to the IPM via buffers 202-10 or 202-5. During phase II periods, incoming video frames are steered to available planes of the IPM while the IPM
contents are simultaneously read to the video generator via registers 202-9. If there are no new video frames to be stored in the IPM during a phase II period, the contents of the IPM are nevertheless read to the video generator via buffer 202-9.
Using the exemplary form of corner point chain coding, there are four possible situations that may be encountered in finding "matched" corner points that are to be linked as part of a closed edge contour. These possibilities are shown explicitly in FIGURE 6. For example, case one is a situation where the corner points are vertically aligned.
This situation is indicated when the x coordinates of the corner points are equal, when the IN vector of one corner point is equal to the OUT vector of the other corner point and when these IN/OUT vectors are both vertically oriented. In case two, the differences between the x coordinates and y coordinates for the two corner points are equal, the OUT vector from one corner point equals the IN vector for the second corner point and these same OUT/IN vectors are aligned along a left diagonal.
In case three, the differences between the x coordinates and the y coordinates for the two corner points are equal in magnitude but opposite in sign while the OUT vector of one corner point is equal to the IN vector of the second corner point and these same OUT/IN vectors are aligned along a right diagonal. In the fourth possibility illustrated at FIGURE 6, the y coordinates of the corner points are equal, the OUT vector of one equals the IN
vector of the other and these same OUT/IN vectors are aligned along a horizontal line. As should be appreciated, these same rules apply whether one is traversing an outside boundary in a counterclockwise fashion or an inside boundary in a clockwise fashion.
In the presently preferred exemplary embodiment, case four is a particularly simple case because of the left-to-right raster scanning process used to encode the corner points. In this instance, if two successive corner points are detected along any given horizontal line of the raster scan, and if the OUT vector of the first corner point and the IN vector of the second corner point are equal and aligned horizontally, it is immediately known that these two corner points should be linked. Accordingly, corner point match decisions for case four in the exemplary embodiment can be made substantially immediately. However, decisions with respect to the other three possibilities cannot normally be made immediately in the exemplary embodiment. As will be explained in more detail below, special dedicated logic decision hardware is provided for identifying a match condition corresponding to any of these three remaining cases in a single micro-instruction cycle time. As should be appreciated, different embodiments of the invention may provide such dedicated decision logic hardware for all the possibilities or perhaps fewer possibilities if different raster scanning or other encoding techniques are employed.
Furthermore, if more complex chain schemes are employed, there may be additional corner point match possibilities that must be taken into account.
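For concreteness, the following C fragment restates the four match tests of FIGURE 6. The direction encoding, type names and the choice of which chain-code values constitute the "left" and "right" diagonals are assumptions made only for this illustration.

    /* Chain-code directions for the IN/OUT vectors (assumed encoding). */
    typedef enum { DIR_N, DIR_NE, DIR_E, DIR_SE, DIR_S, DIR_SW, DIR_W, DIR_NW } dir_t;

    typedef struct { int x, y; dir_t in, out; } corner_t;

    /*
     * Test whether corner point b may be linked after corner point a,
     * following the four cases of FIGURE 6: vertical, left diagonal,
     * right diagonal and horizontal alignment of the matching OUT/IN vectors.
     */
    static int corners_match(const corner_t *a, const corner_t *b)
    {
        int dx = a->x - b->x;
        int dy = a->y - b->y;

        if (a->out != b->in)              /* OUT of one must equal IN of the other */
            return 0;

        switch (a->out) {
        case DIR_N: case DIR_S:           /* case one:   vertical alignment    */
            return dx == 0;
        case DIR_NW: case DIR_SE:         /* case two:   left diagonal         */
            return dx == dy;
        case DIR_NE: case DIR_SW:         /* case three: right diagonal        */
            return dx == -dy;
        case DIR_E: case DIR_W:           /* case four:  horizontal alignment  */
            return dy == 0;
        }
        return 0;
    }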
The corner point encoder 301 is shown in more detail in FIGURE 7. Serial thresholded video from the IPM is received here by a multiplexer 301-1 (typically 74LS251) which selects one of the eight IPM planes for video processing.
Video polarity may be controlled at its output. The serial bit stream from 301-1 goes to the first row line buffer 301-2. This serial bit stream includes the 65,536 bits of a 256 x 256 bit video frame starting at the upper left-hand corner of the frame and proceeding in a raster-like fashion from left-to-right, top-to-bottom as previously described. The selection of a particular IPM plane for processing is done by the multiplexer 301-1 under control of the contents of the control register 301-3 which is filled by main CPU 403 via the common bus system 500 and the usual address decoder 301-4 which produces various control signals used within the corner point encoder to control the loading of the control register 301-3, the initiation of a new cycle of operation for a new frame of data, etc.
The line buffers 301-2, 301-5, and 301-6 each comprise three serially-connected D-type flip-flops. Connected serially between the first and second line buffers is an additional 253 pixel buffer line delay 301-7 (typically realized by a type 2125 RAM). Similarly, serially-connected between the second and third line buffers is another 253 pixel buffer line delay 301-8. It should now be appreciated that as the bit serial stream representing each frame of video data is serially passed through this chain of buffer storage, the contents of the 3 x 3 pixel array defined by line buffers 301-2, 301-5, and 301-6 corresponds exactly to the raster scanning of a conceptual 3 x 3 encoder mask 124 over the entire 256 x 256 frame of video data as described in FIGURE 2. Accordingly, on any given cycle of operation, the 9-bit content of this 3 x 3 encoding mask is presented to a PROM look-up table and valid corner detector 301-9 (typically a type 82S181). As previously explained, there are only 512 possible contents of such a 3 x 3 encoding mask and the PROM look-up table is set up so as to produce an output in response to any particular one of those 512 possibilities which represents whether it is a valid corner point and, if so, whether it is an A or B-type corner point, the identity of its IN
vector and of its OUT vector and possibly corner codes indicating whether a double corner point or the like is involved and, if so, what type of double corner point.
To keep track of the x, y coordinates for any detected corner point, x and y address counters 301-10 and 301-11 are provided. Whenever a new frame of data is to be encoded, the main CPU triggers a "start" signal which initializes the x and y address counters. In the simplest case where the entire frame is to be encoded, both address counters may start at 0 with the x address counter incrementing up to a count of 256 before being reset to start over again at which time the y address counter is incremented. When the contents of both counters equal 256, the entire frame will have been analyzed and a "stop"
signal or the like may be generated to signal the end of a complete encoding cycle. In this manner, the x and y address counters are continually updated (at the pixel bit rate on the serial input to the line buffers) so as to always instantaneously represent the x, y coordinates of the center of the 3 x 3 pixel encoding mask. If desired, a window control circuit 301-12 may also be provided so as to effectively limit the encoding process to a portion of the video frame. If so, addressable registers within this circuit would be filled by the main CPU 403 with data representing the beginning and ending x and y coordinates of the window together with suitable logic circuits for inhibiting the encoding process unless the contents of the x and y address counters are within the defined window, etc. Such window controls are believed to be conventional and are not critical to the practice of the claimed invention. Accordingly, they are not described in further detail here.
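The scanning and table look-up just described can be modeled in a few lines of C, as sketched below; the table contents themselves (the 512-entry classification) are not reproduced, and the names used are only illustrative.

    #include <stdint.h>

    #define FRAME_DIM 256

    /*
     * Software model of the 3 x 3 encoding mask: the nine thresholded pixels
     * under the mask form a 9-bit index into a 512-entry look-up table
     * (realized in hardware as the PROM 301-9).  The table, assumed here to
     * be supplied elsewhere, classifies each pattern as "no corner",
     * "A-type corner", "B-type corner", etc.
     */
    extern const uint16_t corner_lut[512];   /* hypothetical PROM contents */

    static void scan_frame(const uint8_t bits[FRAME_DIM][FRAME_DIM])
    {
        for (int y = 1; y < FRAME_DIM - 1; ++y) {
            for (int x = 1; x < FRAME_DIM - 1; ++x) {
                unsigned idx = 0;
                for (int dy = -1; dy <= 1; ++dy)        /* pack the 3 x 3     */
                    for (int dx = -1; dx <= 1; ++dx)    /* window into 9 bits */
                        idx = (idx << 1) | (bits[y + dy][x + dx] & 1);

                uint16_t code = corner_lut[idx];
                (void)code;   /* a real encoder would emit a corner point record here */
            }
        }
    }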
Typically, the ICD 4.5 MHz system clock is used for clocking the address counters and line buffers, etc., of the corner point encoder so as to maintain its operation in synchronism with the data stream being read from the IPM
under control of the same ICD timing circuits. Depending upon how the x and y address counters are initialized, it may be necessary to insert some delay 301-13 between the output of the x and y address counters and the circuitry which is employed for computing and recording the actual x and y coordinates of the corner points to be encoded.
Once proper synchronism is achieved (possibly by including such delay), the x, y address counter contents are presented to parallel adders (typically LS283) at 301-14 to compute the actual coordinates of the corner point (then being detected) to be recorded in the FIFO memory 302. As may be seen in FIGURES 2 and 5, an A-type corner point in the exemplary embodiment falls halfway between the vertical edges of a pixel while a B-type corner point falls halfway between the horizontal edges of a pixel. Accordingly, in this particular embodiment, to obtain an accurate digital representation of the actual x, y coordinates, the contents of the x and y address counters are doubled and added to another digital signal provided by the PROM 301-9 indicating whether an A or B-type of corner has been detected. A
further constant term (e.g., three in the exemplary embodiment) may also be required to ensure that the calculated x, y coordinates actually represent the proper coordinates with respect to the center pixel of the 3 x 3 pixel encoding mask.
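In other words, the recorded coordinate is simply twice the counter value plus a PROM-supplied half-pixel offset plus an alignment constant. A one-line C restatement follows; the particular offset values are left as parameters since they depend on the PROM coding and the delay 301-13.

    #include <stdint.h>

    /*
     * Corner coordinates are carried at twice pixel resolution so that A-type
     * corners (halfway between vertical pixel edges) and B-type corners
     * (halfway between horizontal pixel edges) land on integer values.  The
     * exact offset and alignment constants are supplied by the PROM and the
     * delay 301-13; they are left as parameters here.
     */
    static uint16_t corner_coord(uint16_t counter,      /* x or y address counter  */
                                 uint16_t prom_offset,  /* A/B half-pixel offset   */
                                 uint16_t align_const)  /* e.g., three in the text */
    {
        return (uint16_t)(2u * counter + prom_offset + align_const);
    }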
The output from the PROM 301-9 and from the parallel adders 301-14 now contains all of the data required for encoding a corner point as illustrated in FIGURE 5. Such data is passed through a data formatting multiplexer (typically a type 74LS157) 301-15 before being recorded in the FIFO memory 302. Actual writing to the FIFO memory 302 is done only when a valid corner point is detected by an output from the PROM 301-9 as indicated in FIGURE 7. As also indicated, actual writes to the FIFO
memory 302 may be inhibited unless the x, y address counters are within a defined window. The data formatting multiplexer 301-15 is provided so that a special "start" word and a special "stop" word may also be written at the beginning and ending of the list of encoded corner points for a given frame in the FIFO memory 302. By inserting such start and stop words in the FIFO memory, the feature extractor/sorter 303 can itself determine when it has come to the end of a list of encoded corner points for a given frame. Typically, the start and/or stop words will include data identifying the camera and/or frame from which the encoded corner points were derived. In this way, once final decisions are made as to whether the frame represents an acceptable part, for example, further properly timed control signals may be generated by the main CPU 403 to actuate a reject mechanism or the like at a predetermined point downstream of the inspection station on the conveyor belt.
The hardwired dedicated decision logic of the CPE shown in FIGURE 7 is connected so as to perform the tasks represented in the flow charts of FIGURES 8-10. When initiated with a CPEHW signal from the main CPU 403, the CPE begins the scanning process at 301-16 -- typically beginning at a desired "window" coordinate address. The line buffers are filled with thresholded data at 301-17 so as to define the contents of the 3 x 3 pixel encoding mask at that location and a PROM look-up cycle is performed at 301-18 to see if a valid corner point has been detected at decision point 301-19. If not, the line buffers and address counters are incremented at 301-20 and a test is made to detect the last pixel at 301-21. If the last pixel has not yet been reached, then another pixel of data is serially passed into and along the line buffer and another PROM
look-up cycle is performed to test for the presence of a valid corner point. Whenever a valid corner point is detected, the current x, y coordinates, the IN/OUT vectors and the appropriate corner code are formatted at 301-22. If the FIFO memory is temporarily full, a wait loop is entered at 301-23 until, at the next available opportunity, the encoded corner point is written to the FIFO memory at 301-24. The encoding process is then continued by incrementing the line buffers at 301-20, etc.
Before the encoding process of FIGURE 8 is initiated by the main CPU issuing the CPEHW start signal, the main CPU first issues a CPE TASK to initiate the CPE in accordance with the flow chart of FIGURE 9. Here, at 301-25, the main CPU loads the CPE addressable registers which define the beginning and ending of a desired window and at 301-26, another addressable (frame) register is loaded with the identity of the frame to be encoded and/or of the IPM plane in which that frame is presently resident.
This permits an appropriate "start" word to be written to the FIFO.
When a given encoding process has ended, the CPE
generates a CPE interrupt signal to the main CPU and, at 301-27, writes status data to the main CPU which informs it that a complete frame has now been encoded and that this particular frame in the IPM may now be reused. At approximately the same time, an appropriate "stop" word is written to the FIFO memory as should now be apparent.
The feature extractor/sorter 303 is shown in more detail at FIGURES 11, 14 and 15. The overall architecture is shown in FIGURE 11 and is, for the most part, a conventional parallel bit slice processor architecture. It is divided into three units: the feature arithmetic unit (FAU) 303-1, the feature extractor memory (FEM) 303-2, and the feature instruction unit (FIU) 303-3. The usual communication capabilities are provided with the main CPU
403 via common bus conductor system 500. For example, I/0 1~7~7 35-OE-145 registers 303-4 are provided for holding status data, address bus output and data bus output signals for access by the main CPU 403. Other registers such as a data bus input register may also be included so as to permit the main CPU
403 to write control instructions and the like to the FES.
The heart of the FAU 303-1 is a 32-bit/slice microprocessor 303-5. In the exemplary embodiment, this 32-bit/slice microprocessor is formed by the interconnection of eight 4-bit/slice AM 2901 processors. In the preferred exemplary embodiment, these processors are interconnected as shown in FIGURE 15 so as to permit interruption of the arithmetic "carry" signals between two 16-bit half-words during arithmetic computations for purposes that will be explained below in connection with the special corner point match logic 303-6 shown in more detail at FIGURE 14.
It should be noted that the feature extractor/sorter is independently run with its own 5 MHz clock 303-7. The interrupt vector register 303-8, the literal or temporary data storage register 303-9, the pipeline instruction register 303-10, the microprogram memory 303-11 and the address sequencer 303-12 are all conventionally connected so as to provide the usual data and instruction transfer paths to implement a complete 32-bit/slice data processing system.
In the exemplary embodiment, virtually all of the required control signals for the FAU, FIU and FEM are derived from the pipeline instruction register as indicated in FIGURE 11. The FAU 303-1 is also connected to receive 32-bit parallel words from the FIFO memory 302.
Multiplication capability is available in the bit/slice data processor of the FAU by virtue of a conventional multiplier 303-13. However, in the exemplary embodiment, the multiplier is a 16 x 16 multiplier integrated into the bit/slice configuration in a unique way. In particular, the 2901 bit/slice processor sources the operands, processes them in the arithmetic logic unit (ALU) and writes the results from the ALU to a designated destination all in one micro-instruction cycle. Accordingly, to take maximum advantage of this feature, the multiply input registers should have a programmable destination and the multiplier accumulating register should be programmably sourced.
However, due to a limited number of pins on the multiplier circuit, the inputs and outputs are pin-shared for the least significant half of the resulting product and the y input. Such pin-sharing would seem to rule out the desirable configuration just described; however, it has been discovered that a split cycle arrangement may be provided as shown in FIGURE 11 such that the y input is first loaded into a temporary multiply input register 303-14 when it is sourced. It is later transferred to the multiplier during the first quarter of the actual multiply cycle. It has been found that this operation is possible when the microcycle time is on the order of 200 nanoseconds. By providing the additional multiply input register 303-14, it is possible to achieve the desired programmably sourced and destined multiply operations using a single microinstruction cycle time rather than using two such cycle times. This is especially advantageous where multiply operations are used frequently. Latch 303-15 is also provided in the FEM to permit both the sourcing of the FEM memory 303-21 and the destining of its ALU'd value back to the same FEM memory location, all within a single microcycle.

The feature start register 303-16 is addressed and filled by the main CPU when it is desired to cause the FAU 303-1 to jump to a particular subroutine in its microprogram memory.
After possible modification by counting circuit 303-17 and selection by multiplexer 303-18, the appropriate first instruction of the desired subroutine is withdrawn from feature instruction memory 303-19 and passed via tri-state buffers 303-20 to the address sequencer 303-12 of the FAU. This is believed to be a conventional arrangement for permitting, in effect, the main CPU to vector interrupt the FAU microprocessor.
The corner memory 303-21 in the FEM is used during the ongoing sorting process to accumulate corner point chains, SLB chains and SBD's. Once the entire list of encoded corner points for a given frame has been processed by the FES, the accumulated sorted lists of data are then read through the I/O registers 303-4 and the common bus conductor system 500 to the main CPU subsystem where they are stored in RAM memory for convenient access by user-generated decision logic programs.
As has already been mentioned (and as will be explained in more detail below), the corner point match logic circuit 303-6 is capable of determining whether a corner point match condition exists in a single micro-instruction cycle time. To take full advantage of this capability, the free end corner points already stored into linked lists within the corner memory (SLB chain) 303-21 must be successively presented one after another in successive micro-instruction cycle times as well so that the corner point match logic circuit 303-6 is timely supplied with the data required to make its corner point match decisions. To permit these successive rapid data transfers, an additional multiplexer 303-22 is connected between the output of the corner memory 303-21 and the address counter/register 303-23 whose contents determine the next word to be read out of the corner memory. Including multiplexer 303-24 in this feedback circuit also permits other inputs to the address counter/register 303-23 in accordance with more conventional arrangements.
A data swap multiplexer 303-25 permits the upper and lower halves of data words, for example, to be swapped one for another and transferred from the FEM to the FAU or FIU or, if desired, back through the multiplexer 303-24 to the address register 303-23 of the FEM itself. This provides still further degrees of flexibility that can often be used for saving time. This is especially true, for example, in processing doubly linked sorted lists of data where, typically, one word associated with each data word may have an upper half-word containing the address of the previous data word linked thereto while the lower half-word contains the address of the next subsequent data word linked thereto. In this circumstance, forward progression through the doubly linked lists could be made by passing the lower half of the link word to the address counter while reverse progression through the doubly linked lists could be had by transferring the upper half-word to the address register. The provision of the data swap multiplexer 303-25 permits this type of direction reversal in the linked list processing sequence to be effected immediately if desired.
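A short C model of that link-word convention and of the half-word selection that reverses the traversal direction is given below; the word layout shown is the one just described, but the function name is only illustrative.

    #include <stdint.h>

    /*
     * Each linked data word has an associated link word whose upper half-word
     * holds the address of the previous word in the chain and whose lower
     * half-word holds the address of the next word.  Selecting which half-word
     * is fed back to the address register (the job of the data swap
     * multiplexer) selects the traversal direction.
     */
    static uint16_t next_address(uint32_t link_word, int reverse)
    {
        return reverse ? (uint16_t)(link_word >> 16)      /* previous-word address */
                       : (uint16_t)(link_word & 0xFFFF);  /* next-word address     */
    }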
Thus a variable jump address capability has been provided in the data memory portion of a microprogrammed machine. This capability is programmable within the micro-instruction of the feature extractor/sorter and is a completely independent field from the jump capability of the micro-instruction memory. Both jump capabilities may be used independently or concurrently. The programmed capability of the data memory address is such that the address may be:
(1) loaded from the 2901 output;
(2) incremented;
(3) jump immediate as function of left half of data word;
(4) jump immediate as function of right half of data word;
(5) indexed jump (0-63) as function of left half of data word; and
(6) indexed jump (0-63) as function of right half of data word.
A conventional PROM math function look-up table 303-26 is provided, if desired, for quickly looking up math functions such as sines, logarithms, etc.
The processing performance goal for the overall system is to process simple scenes at a rate of 900 parts/minute (67 millisec/part). The 67 millisec must be shared by relatively slowly executed user-generated application programs and the feature extractor/sorter processing which can therefore only have the smaller portion of this time. The feature extractor/sorter is in general able to do both connectivity (the connecting of the corners to form boundary outlines) and basic feature calculations such as area, perimeter, centroid and enclosing box coordinates of each item, whether a hole or a solid, within the scene in about 20-30 ms. If the simple scene contains up to 10 items (or blobs) and 100 corner points per item, both connectivity and feature calculations must be done in 20 microsec on each corner point. A reasonable subdivision of this would be to allow half the time for calculation of the features. Therefore, 10 microsec maximum is the time allowed for connectivity per corner point. Most of that time will be dedicated to updating the various linked lists, so 2 microsec is allowed to find the match. If there are 10 boundaries in the scene which are in the process of being formed, up to 10 comparisons must be made between the corner point being processed and one end of each partial boundary. This establishes a requirement of one comparison/200 nanosec.

The following is a set of basic desired parameters for the feature extractor/sorter: programmable, 5 MHz operation, dedicated multiply capability and connectivity match determination in a single operation. Further, moment of inertia calculations are desired in order to compute features such as principal axis and centroid. This requires a (pixel address)³ operation which yields 27 bit significance. Since these features are being computed on an incremental basis (as each corner point is received) in order to minimize processing time, even further significance must be carried to avoid error when updating the accumulation after each incremental calculation. Thus, 32 bits is preferred as the processor computational size. The processor, therefore, must be a custom, microprogrammable, 5 MHz machine.
One CPU choice for this requirement is eight AMD four bit slices accompanied by the TRW 1010 sixteen x sixteen multiplier.
The firmware controlled 2901's are configured to compute ΔX and ΔY simultaneously by interrupting the carry flow between the 4 bit slices. The remaining comparisons are done simultaneously by dedicated match logic. The preceding configuration takes care of the arithmetic requirement in order to perform the connectivity match determination in a single micro cycle. An addressing requirement exists in that the corner points at the ends of each unclosed item (partial boundary) must be accessed from corner memory indirectly, i.e., specified as part of the previous fetch from corner memory. Thus, no time is expended in corner memory address determination. The potential match corner points are brought into the 2901 structure sequentially every 200 ns from corner memory to be compared with the corner point being processed.
Considerable interface exists between the various boards of the system over the Intel Multibus. This is especially true between the feature extractor/sorter and the main CPU. All of the above interface is memory mapped which yields virtually limitless capability considering the 20 bit address space of the 8086. Preferably the microprogram memory is RAM and it is both memory mapped writable and readable over the Multibus. Additionally, it is desired to have the feature extractor/sorter have the capability of functioning as a Multibus master so that when the feature computations are complete, it can take over the Multibus and transfer these results to system control memory in a block transfer. Then the main CPU has all feature data required to proceed with the analysis task while the feature extractor/sorter can proceed with the next assigned task. In order for the feature extractor/sorter to act as a bus master, it is not necessary for it to contain an Intel bus arbiter circuit. The requirement is that the feature extractor/sorter have access to the "Bus Priority In" line of the 86/12A.
To facilitate processing, it is desired to be able to take a literal value directly off the micro-instruction which can be used to Index, Mask, etc. This is the function of the Literal register. A latch between the corner memory output and the 2901 external port allows a read from the FEM, an ALU execution and the resultant written back to the same FEM address, all within the same micro cycle. The data swap MUX in conjunction with the latch permits a single instruction half word exchange of a word resident in the FEM.
The micro sequencer of this processor has the normal jump condition address sources of pipeline register, interrupt vector register and feature instruction memory (FIM). The interrupt vector address may be loaded from the 2901 output, and the lower 6 bits may be loaded by the corner code portion of the corner point encoder word transferred to the feature extractor.
The FIM permits macro capability when the start addresses of various feature calculation tasks have been memory mapped written into consecutive locations of the FIM by the system controller.

At the completion of each task the feature counter addressing the FIM is incremented to the next task via microprogram control.
Other desirable features of the processor are run, halt, step control capability via the Multibus, full 32 bit diagnostic monitoring of the FAU output, controllable micro program starts and breakpoint trap capability. Bi-directional interrupt communications are utilized between system controller and the feature extractor. The interrupts to the 8086 go into its normal priority interrupt structure while interrupts to the feature extractor are processed on a polled basis.
The feature extractor/sorter is microcoded to accomplish image analysis in the fastest possible time. The corner point sorting firmware occupies about two-thirds of the system's 1K X
48 writable control store (WCS). The custom designed dedicated corner point match hardware is used to speed up the corner point sorting routine by quickly finding a "match" between a corner point and a partially formed item chain. The routine can service a corner point every 23 microseconds, taking 13 microseconds to sort and link the corner point, and 10 microseconds to compute the atomic features. At that rate, a 1400 corner point scene, for example, can be analyzed in real time.
When other "extended" features are calculated, the corresponding firmware is stripped into the WCS no used by the corner point sorter firmware. Since the sorting routine is large and frequently used, it remains resident in the WCS. Most extended feature calculations average 3.6 microseconas/corner point so that an extended feature takes 360 microseconas for a 100 corner point item, not including the strip time. The 8086 takes 2 ms to strip a 200 micro-ins~ruction routine into the WCS.
The processor has an 8K X 32 RAM corner memory for data and variable storage. 80% of the memory is devoted to corner point storage, with a maximum of 33~8 corner points available in a given picture. 12% of the memory is used to hold the partial atomic feature calculations for each item as the picture model is being built. Presently, a total of 100 items are allowed in a given picture. The rest of the memory holds variables and data necessary for the proper sorting of corner points into their respective item boundaries.
register or an external destination (e.g., memory). The second instruction type is like the first except it allows the modification of the feature extractor memory address tFE.~A). The third type of instruction perrorms an ALU ~O-OR, but allows the loading of the 32 bit literal register with a constant contained in the microinstruction. This type of instruction also allows special functions to be performed, like a ~ultibu~ read or write function. The fourth instruction type allows the mic~opr~gram to jump to the address specified in the microinstruction and also performs an ALU COP The four instruction types allow jump and literal capability without extending the microword beyona 48 bits.
In summary, the architecture of the parallel bit slice data processing system shown in FIGURE 11 is believed to be of substantially conventional design with the following exceptions:
(a) the unique dedicated corner point match logic circuits 303-6 shown in detail in FIGURE 14 and the related special organization of the 32-bit/slice microprocessor 303-5 shown in greater detail at FIGURE 15;
(b) the provision of a multiply input register 303-14 so as to permit desired multiply I/O operations to be performed in a single micro-instruction cycle;
(c) the provision of a feedback circuit from the output of the corner memory 303-21 to its address register 303-23, which feedback circuit may possibly include a multiplexer such as 303-22; and
(d) the provision of the data swap multiplexer 303-25.
The doubly linked data content of the corner memory 303-21 is shown during the sorting process at FIGURE 12 and after the sorting process at FIGURE 13. Starting at the lowest level of the linked organization, the x,y coordinates of already-linked corner point chains (but still not completely closed) are depicted at 600-605. Each such corner point is represented by two 32-bit words. The first 32-bit word includes the x and y coordinates of the corner point while the second immediately succeeding 32-bit word includes the relative address of the previous corner point and the address of the next corner point in the linked chain. The arrows in FIGURE 12 diagrammatically depict such linkages. As shown, there are two partially completed chains of linked corner points. Since the corner points are not yet completely linked to represent a closed edge contour, there are two free end corner points in each such chain. For example, in the first chain in FIGURE 12, there is a free end corner point at 600 and a free end corner point at 602 while in the second chain there is a free end corner point at 603 and at 605. Linked addresses to these free end corner points are maintained in associated sorter blob descriptor (SBD) files with one SBD file being maintained for each corner point chain, either partially completed or closed, as shown in FIGURE 12. As earlier explained, each SBD also includes the current accumulation of incrementally calculated geometric features for the blob associated therewith.
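That two-word-per-corner layout can be written out as a C structure, as below; the ordering of the half-word fields within the link word is assumed for illustration.

    #include <stdint.h>

    /*
     * One corner point as held in the corner memory of FIGURE 12: a
     * coordinate word followed by a link word whose half-words hold the
     * relative addresses of the previous and next corner points in the
     * doubly linked chain.
     */
    typedef struct {
        uint16_t x, y;        /* word 1: corner point coordinates   */
        uint16_t prev_addr;   /* word 2, one half: previous corner  */
        uint16_t next_addr;   /* word 2, other half: next corner    */
    } corner_record_t;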
A linked chain of SLB files is also maintained, one SLB being provided for each partially formed corner point chain, as also shown in FIGURE 12. The SLB contains all the information specifying the open ends of the blob, i.e., the IN and OUT vectors and the x,y coordinates. One word of the SLB is needed to determine whether an unsorted corner belongs to that blob, such determination being made by the corner point match logic.
As soon as a linked corner chain is closed (by finding a corner point that links to both of the free end points), its corresponding SLB is freed for use in initiating another SBD and an associated partially closed new corner chain. The completed SBD and associated closed doubly-linked corner point list is stored as shown in FIGURE 13 for eventual transfer to the main CPU 403.

In earlier explaining the different possible match conditions, it was noted that it is necessary to compute the difference between the x and y coordinates for the corner points being tested. The 32-bit/slice processor is preferably connected as shown in FIGURE 15 so as to permit this calculation to be made simultaneously for both x and y coordinates in one micro-instruction cycle time. Typically, eight 4-bit AM 2901 processors are connected with look ahead and carry chips (AM 2902) so as to perform 32-bit arithmetic in parallel in one micro-instruction cycle time. However, by using gate 700, the ALU of the AM 2901 processors may be caused to simultaneously perform dual 16-bit arithmetic instead of 32-bit parallel arithmetic. Selection between these two modes of operation may be program controlled by a signal such as CFLOWENL shown in FIGURE 15 as one input to NOR gate 700. In the presently preferred embodiment, this signal may be generated by one stage of the pipeline instruction register included in the 2901 microprocessor system as earlier discussed. In effect, the gate 700 permits interruption of the normally propagated carry signals between the lower and upper 16-bit halves of the 32-bit/slice ALU. When this carry path is thus interrupted, the upper and lower 16-bit halves of the ALU perform arithmetic independently of each other.
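A software model of this carry-interrupted subtraction is given below: with the carry path between the two 16-bit halves broken, one ALU pass yields both coordinate differences. Which half-word carries x and which carries y is assumed here for illustration.

    #include <stdint.h>

    /*
     * Model of the carry-interrupted 32-bit subtraction: the x difference is
     * formed in one 16-bit half-word and the y difference in the other, in a
     * single pass, because no carry propagates between the halves.
     */
    static uint32_t dual_sub16(uint32_t a, uint32_t b)
    {
        uint16_t dx = (uint16_t)((a >> 16) - (b >> 16));       /* upper halves */
        uint16_t dy = (uint16_t)((a & 0xFFFF) - (b & 0xFFFF)); /* lower halves */
        return ((uint32_t)dx << 16) | dy;
    }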
It will be recalled that the corner point encoder produces encoded corner points in the 32 bit word format shown in FIGURE 5. In the preferred exemplary embodiment, the corner code data as well as the OUT and IN vector data is initially stripped by the FES from each CPE word to be compared for match conditions.
The OUT and IN vector data is separately stored in register 702 (FIGURE 14) so that it may be accessible for decision logic purposes at a later time while the corner code data may have already been used to generate an additional CPE word -- for example if a double corner point is involved. When the corner code and vector data fields of the 32-bit word shown in FIGURE 5 are thus stripped (i.e., set to 0), the only thing that remains is the pixel x address in the first 16 bits and the pixel y address in the second 16 bits. The vectors for the free end points of the already partially linked chain for which a match is being attempted are retrieved from the SLB and presented to the dedicated logic on six parallel bit lines at 704 in FIGURE 14.
A single bit IN/OUT selection signal (i.e., from the pipeline register) is also presented at 706 depending upon whether a match is being attempted to the free IN vector of a partially linked chain of corner points or to the free OUT vector of that chain. The pixel x and y addresses for the newly-presented corner point and for the free end corner point to be tested for match are presented to the 2901 as it is conditioned to perform dual 16-bit arithmetic as previously discussed. Accordingly, within one micro-instruction cycle time, the difference ΔX between the x coordinates of the corner points involved and the difference ΔY between the y coordinates of the corner points involved will be simultaneously computed and presented as the 9-bit ΔX, ΔY output signals shown in FIGURE 15 and presented as inputs to the dedicated decision logic circuitry shown in FIGURE 14. The ΔX, ΔY signals are presented to parallel bit adders 708 (typically type S283) and parallel bit subtractors 710 (typically type S86) to simultaneously compute the sum and difference, respectively, of the ΔX, ΔY signals. The results of the summation at 708 and subtraction at 710 are logically combined in gates 712 and 714, respectively, to produce a single bit output indicating whether the sum and difference, respectively, are equal to zero. At the same time, gates 716 are wired to detect whether ΔX is equal to 0. As previously explained, because of the left-to-right raster scanning process employed in the exemplary embodiment, it is not necessary to test for ΔY being equal to 0.
Simultaneously, multiplexer 718 and multiplexer 720 have selected the appropriate 3-bit pairs representing the IN vector of the new corner point being tested and the OUT vector of a free end corner point in a partially linked chain or, alternatively, the OUT vector of the new corner point to be tested and the IN
vector of the free end corner point in a partially linked chain. These selected IN/OUT or OUT/IN vector data are compared by gates 722 to detect whether they are equal. If so, they present a gate enabling signal to each of decision logic output gates 724, 726 and 728. Inverters 730, 732 and gates 734, 736 and 738 are connected as shown in FIGURE 14 so as to detect whether the IN/OUT or OUT/IN vectors being compared align vertically, along a left diagonal or along a right diagonal, respectively. The outputs from gates 734-738 and 712-714 are logically combined together with the gate enabling signal previously discussed in decision logic output gates 724, 726 and 728 so as to identify the presence of any of the match possibilities corresponding to cases one, two or three as previously discussed with respect to FIGURE 6. If any of these match possibilities are present, a "match" output is produced by OR gate 740 to indicate that the newly-presented corner point now under test is indeed matched to the free end corner point now being tested for match therewith. This hardware control output is then sensed by the image processing data processor to indicate that the linked chains should be properly updated so as to reflect this linkage. Of course, when one free end of a partially completed chain has been matched to a newly-presented corner point, a similar test must also be made to the other free end of that and other chains to detect whether the newly-presented corner point is the final link in a chain or a bridge between two chains, etc. so as to finally describe a completely closed edge contour. It should be noted that all of the dedicated hardware decision logic shown in FIGURES 14 and 15 is capable of detecting whether a given corner point is matched to a given free end corner point of a previously linked set in just a single micro-instruction cycle time.
An exemplary flow chart for a suitable microprogram to be stored in the microprogram memory 303-11 of the FES for achieving such corner point sorting is depicted in the flow charts of FIGURES 16-1 through 16-9. The overall executive program shown in FIGURE 16-1 maintains a constant lookout for an interrupt from the main CPU at decision block 800. Whenever such an interrupt is detected, a jump is made to the microcode routine at 802 so as to execute the proper feature extractor tasks. When any of these tasks has been completed, the executive routine is again entered to await a further interrupt instruction from the main CPU.
The sorter task shown in FIGURE 16-2 first initializes the corner memory at 804 by relinking all of the old SLB dummy records to ensure that they are ready for use in a sorting routine. Then a wait loop is entered at 806 until the FIFO memory 302 is ready to furnish another encoded corner point word. The next encoded corner point word is then received from the FIFO memory at 808 and a jump is made to a service subroutine based upon the corner code at 810. If the corner code indicates that a double corner point is actually involved, a further encoded corner point record will be automatically created by the FES to represent the second corner point's x, y coordinates and IN/OUT vectors. Unless the double corner point is one where linkage is automatically implied, it will subsequently be treated just as though it were another encoded corner point read from the FIFO memory.
Typically, the corner point processing uses some combination of routines as shown in FIGURE 16. If a match to a partially completed corner chain is necessary, either the "FIND IN MATCH" or the "FIND OUT MATCH" routine is called. The special corner point match logic circuitry is initialized and the existing SLB list is scanned in successive micro-instructions until a match indication is obtained. A record of the matched SLB address is retained before continuing the corner point processing.
For corner points which merge two corner chains or close a corner chain, a check is made to see whether the corner point matches both free ends of a corner chain. If so, this indicates that the corner point under test closes the partially linked blob and transfer is made to a close blob subroutine as shown at FIGURE 16-9. On the other hand, if two matches have been detected and they do not involve the same partially linked blob, then transfer is made to the proper merge routine as shown in FIGURE 16-8 as this indicates that two partially closed linked chains are now to be linked together so as to form one partially linked chain.
A new blob is started as shown in FIGURE 16-3 by creating a new SLB and linking it to the initial corner point of a new chain. At the same time, a new SBD is created and properly linked, initialized, etc. The corner point x,y coordinates are then stored in the proper doubly linked manner as previously described.
Connection to the IN side of an existing partially completed blob is made with the subroutine of FIGURE 16-6. Here, at 838, the proper SLB IN vector and coordinate data is updated. As previously indicated, because of the left-to-right raster scanning process involved in the encoding (and hence in the storage of successively encoded corner points in the FIFO memory), it is relatively easy to determine whether successive corner points on a given line are to be linked. If one of those conditions is detected, entry is made at 840 to the connection subroutine shown in FIGURE 16-6. At 842, the linked pointers in the SBD are updated and at 844, the corner chain itself is updated as are the linkage addresses. At 846, the incremental or "atomic" geometric features are calculated using the new corner point and either the just previous corner point or the next corner point in the partially linked corner point chain, depending on whether the new corner point connects to the "OUT" side or the "IN" side of the chain. From these points, calculations can be made for the blob area, first moment, second moment and perimeter.
The connection subroutine for the "OUT" side of the partially completed blob as shown in FIGURE 16-7 is similar to that for the "IN" side of the blob with the steps 848-856 corresponding respectively to the steps 838-846 except that linkages are made to the "OUT" free end corner of the partially completed blob rather than to the "IN" free end corner.
The merge routine shown in FIGURE 16-8 merges the "OUT" side of one partially completed blob to the "IN" side of a second partially completed blob or vice versa. If the first case is at hand, then the new corner point is connected to the "OUT" side while, if the converse is the case, the new point is connected to the "IN" side of the first blob at 860. In any event, only the first SLB is retained and the second SLB is freed for later use at 862 while the first SLB is updated so as to properly reflect the totality of the now merged and linked chain of corner points into a single chain. The actual corner point data is, of course, also address linked at 864 and new atomic feature calculations are also made. Finally, at 866, the first SBD is updated so as to include total data representing the combination of these two partially linked chains.
When a corner point is found to be linked to both free ends of a given partially completed corner point chain, then the routine at FIGURE 16-9 is entered and the SLB is removed from the active chain at 868 so as to free it for further use in subsequent sorting procedures. The corner point in question is then doubly linked to both free ends of the chain at 870 while the atomic feature calculations are finished at 872 and the SBD address is stored away at 874.
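Taken together, FIGURES 16-2 through 16-9 amount to the per-corner-point dispatch sketched below in C. The helper routines are empty stand-ins for the microcoded subroutines, and all names here are illustrative only.

    #include <stddef.h>

    typedef struct blob blob_t;                      /* a partial boundary (SLB/SBD pair) */
    typedef struct { int x, y; int in, out; } cp_t;  /* one encoded corner point          */

    /* Empty stand-ins for the microcoded service subroutines. */
    static int  find_matches(const cp_t *c, blob_t **a, blob_t **b) { (void)c; *a = *b = NULL; return 0; }
    static void start_new_blob(const cp_t *c)                       { (void)c; }
    static void connect_to_blob(const cp_t *c, blob_t *b)           { (void)c; (void)b; }
    static void close_blob(const cp_t *c, blob_t *b)                { (void)c; (void)b; }
    static void merge_blobs(const cp_t *c, blob_t *a, blob_t *b)    { (void)c; (void)a; (void)b; }

    /* Per-corner-point dispatch mirroring FIGURES 16-2 through 16-9. */
    static void sort_corner_point(const cp_t *c)
    {
        blob_t *first, *second;
        int n = find_matches(c, &first, &second);   /* FIND IN MATCH / FIND OUT MATCH */

        if (n == 0)
            start_new_blob(c);                      /* FIGURE 16-3                    */
        else if (n == 1)
            connect_to_blob(c, first);              /* FIGURE 16-6 or 16-7            */
        else if (first == second)
            close_blob(c, first);                   /* FIGURE 16-9: the chain closes  */
        else
            merge_blobs(c, first, second);          /* FIGURE 16-8: two chains merge  */
    }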
After a given encoded corner point word from the FIFO memory has thus been handled by one of the service subroutines, a test is made at 876 of the sorter task in FIGURE 16-2 to see if the last CPE word coming from a given frame has yet been processed. If not, the entire chain of events just recounted is repeated as shown in FIGURE 16-2. When the last CPE word of a given list corresponding to a given frame of video data has been thus processed, final atomic feature calculations are made at 878 so as to update the SBD to its final form for eventual transfer to the main CPU so that it may be conveniently accessed by the user-generated decision logic.
The overall system control including hardware/software and/or firmware interfaces is generally depicted at FIGURE 17.
As shown, and as previously explained, the raw video input is provided by cameras 110 to the CII 201. The gray-scaled images are thresholded and then temporarily stored in the IPM buffer 202 before being passed to the CPE 301. The CPE 301 further reduces each frame of thresholded video data to a list of encoded corner points which are thereafter stored in a FIFO memory 302. The feature extractor/sorter 303 then sorts the encoded corner points into ordered lists and calculates predetermined geometric features of the resulting closed linked lists (which describe closed edge contours in the image under test). The results of the feature extractor/sorter computations are then stored in the RAM memory 401 of the main CPU 403 in the system/decision processing subsystem 400. User-generated programs 410 are stored in the core memory or in the PROM 405, cassette tape recorder 407, etc. Typically, the user programs may be in a relatively high level language which requires a software-implemented interpreter 412 to create an executable user-generated software library 414. The user is thus permitted to instruct the ICD
under software control at 416 to capture a frame of video data, to display a particular frame of the IPM on the image display 112, etc. Typically, the user will configure the system so as to be triggered by external hardware generated signals on line 106 as previously described.
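The data flow just described -- thresholded frames buffered in the IPM, corner points queued in a FIFO, sorted feature records handed to the decision processor -- is essentially a pipeline of stages decoupled by buffers. The sketch below models only that shape, using queues and threads; the stage functions are toy placeholders for the dedicated CII, CPE and feature extractor/sorter hardware, and every name in it is an assumption.

```python
from queue import Queue
from threading import Thread

def run_stage(work, inbox, outbox):
    """Generic buffered stage: pull one frame-sized unit, process it, pass it on."""
    while True:
        item = inbox.get()
        if item is None:              # shutdown marker propagates down the pipeline
            outbox.put(None)
            return
        outbox.put(work(item))

# Toy placeholders for the hardware stages.
def threshold(frame):    return [[1 if px > 128 else 0 for px in row] for row in frame]
def encode(binary):      return [(0, 0), (0, 2), (2, 2), (2, 0)]     # fixed toy corner list
def sort_features(cps):  return {"corners": cps, "count": len(cps)}

if __name__ == "__main__":
    q_raw, q_bin, q_cp, q_sbd = (Queue() for _ in range(4))
    for work, i, o in [(threshold, q_raw, q_bin), (encode, q_bin, q_cp),
                       (sort_features, q_cp, q_sbd)]:
        Thread(target=run_stage, args=(work, i, o), daemon=True).start()
    q_raw.put([[200, 50, 200], [50, 50, 50], [200, 50, 200]])   # one toy "frame"
    q_raw.put(None)
    print(q_sbd.get())   # feature record for the frame; a final None follows
```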
Hardware interrupts 1, 2 and 3 are shown in FIGURE 17 as dotted lines. For example, as soon as a new frame of video data has been captured in one of the cameras 110, a hardware interrupt is transmitted to the main CPU 403 which causes a jump in its program to an interrupt service routine (ISR) labeled ICD and shown in FIGURE 18. Similarly, whenever the corner point encoder 301 is finished encoding a frame of video data, a hardware interrupt is transmitted to the main CPU 403 to cause the entry of the CPE interrupt service routine shown in FIGURE 19.
Whenever the feature extractor/sorter 303 has finished processing a given frame of data, another hardware interrupt signal is generated and passed to the main CPU 403 which causes entry to the FES interrupt service routine shown in FIGURE 20. Memory mapped commands are generated by the main CPU 403 to generate start signals for the CPE 301 and the feature extractor/sorter 303 as well as to signal which IPM planes may be released for further reuse. The ICD itself generates the camera steering control and bit steering control signals to the CII 201 and IPM
202 as shown in FIGURE 17. As also shown in FIGURE 17, the user-generated software is informed when the feature extractor/sorter 303 has completed each frame of captured video data so that user-generated programs can then access that data within the RAM 401 and perform any desired decision logic thereon to generate accept/reject control signals or the like.
The ICD interrupt service routine shown in FIGURE 18 is actually entered every 33 milliseconds, once for each display frame on the monitor 112, but since a picture is not taken unless a hardware or software command is received to take a picture, a check is made at 900 of the ICD status at the time of the interrupt to see if any new pictures have been taken. If not, control is given back to the user-generated software under execution by the main CPU 403 until the next interrupt service routine occurs. However, if the status of the ICD indicates that a picture has been captured in the IPM memory, then the corner point encoder 301 will be signalled to begin scanning the proper IPM plane. It will, of course, continue to carry out its functions after once being started and to store the results in the FIFO memory. At 904, the feature extractor/sorter 303 is also signalled to start and it will do so based upon the contents of the FIFO memory 302 until an end of frame signal is sensed in that memory.
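A minimal sketch of that periodic check, assuming a callable that reports whether a picture has been captured and which IPM plane holds it, plus start commands for the two dedicated processors (none of these names come from the patent):

```python
def icd_isr(icd_status, start_cpe, start_fes):
    """Hypothetical sketch of the 33 ms ICD service routine.

    icd_status() models the status check at 900; start_cpe(plane) and
    start_fes() model the start commands to the corner point encoder and
    the feature extractor/sorter.  If nothing new was captured, control
    simply returns to the interrupted user program.
    """
    status = icd_status()
    if not status.get("picture_captured"):
        return                           # nothing to do until the next tick
    start_cpe(status["ipm_plane"])       # begin scanning the proper IPM plane
    start_fes()                          # sorter then drains the FIFO to end-of-frame
```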
The CPE interrupt service routine shown in FIGURE 19 causes the main CPU 403 to transmit data to the ICD indicating that the CPE has finished processing a particular plane of the IPM and that, accordingly, this particular plane of the IPM is now available for buffer storage of another frame of thresholded video data.

The FES interrupt service routine shown in FIGURE 20 signals the user decision logic (UDL) that the feature extractor has finished processing a given frame of previously captured data and the location of the resulting SBD in the RAM memory 401 which is now available for access by the user-generated decision logic.
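The other two service routines are smaller still; a sketch under the same assumptions (the callback names are hypothetical):

```python
def cpe_isr(release_ipm_plane, plane_id):
    """Hypothetical CPE completion handler: tell the ICD the plane is free
    so it can buffer another frame of thresholded video."""
    release_ipm_plane(plane_id)

def fes_isr(notify_user_logic, sbd_address):
    """Hypothetical FES completion handler: pass the user decision logic the
    RAM address of the finished SBD for the frame just processed."""
    notify_user_logic(sbd_address)
```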
An exemplary user-generated program for the main CPU 403 is depicted in the flow chart of FIGURE 21. At 950, the overall system is initialized by setting the desired threshold levels in the CII, the desired window in the CPE, etc. At 952, the ICD is instructed to capture a frame of data from a specific camera number or perhaps to do so when it receives an externally generated hardware signal indicating that a part to be inspected is within the field of view. A wait loop is then entered at 954 until that frame of video data has been captured and completely processed by the dedicated logic of the video processing and image processing subsystems. Then, at 956, a decision is made based upon the atomic feature data base (e.g., area, centroid, perimeter, etc.) already present in the SBD for that frame as to whether the part meets predetermined criteria. At 958, the feature extractor may be instructed to perform additional geometric calculations (such as the measurement between specified points, etc.) so that additional decision criteria will be available for processing. Another wait loop may be entered at 960 while the feature extractor performs these additional calculations. Once they become available, a final output decision is calculated by the user-generated decision logic at 962 before looping back to capture another frame of video data, for example. It should be appreciated that there will be a wide variety of user-generated decision logic that may be employed so as to use this system in a specific application.
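The same capture / wait / decide / refine loop can be outlined in a few lines of Python. The sketch below only mirrors the flow chart of FIGURE 21; the icd and fes objects and the two decision callables are assumed wrappers, not interfaces defined by the patent.

```python
import time

def inspect_parts(icd, fes, decide_basic, decide_final, camera=1, max_parts=3):
    """Hypothetical outline of the user-generated program of FIGURE 21."""
    for _ in range(max_parts):
        icd.capture(camera)                         # 952: request a frame
        while not fes.frame_ready():                # 954: wait for the dedicated logic
            time.sleep(0.001)
        sbd = fes.read_sbd()                        # atomic features: area, centroid, ...
        if not decide_basic(sbd):                   # 956: first screening decision
            yield "reject"
            continue
        fes.request_extra_features(sbd)             # 958: e.g. point-to-point distances
        while not fes.extras_ready():               # 960: second wait loop
            time.sleep(0.001)
        yield "accept" if decide_final(fes.read_extras()) else "reject"   # 962
```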
It should now be appreciated that this invention provides method and apparatus for efficiently converting binary pictures into an easily analyzed region based representation by, among other things, employing:
1. A two-dimensional encoding scheme utilizing a 3 x 3 window which generates an output at and only at each "corner" in the picture. (A corner is a location where a region boundary changes direction.) A simplified illustration of such a corner test is sketched after this list.
2. A sorting scheme which sorts the corners efficiently by region and places them in an ordered list representing the closed boundary of the
regions.
3. Region boundary analysis schemes which can be used to efficiently compute mensuration type feature values, shape descriptions, sizes, topological relationships, etc.
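The patented encoder applies a 3 x 3 window in hardware and emits its own encoded corner word format; purely to illustrate what "an output at and only at each corner" means for a spatially quantized binary picture, the sketch below uses the simpler classic 2 x 2 junction test (one or three foreground pixels marks a convex or concave corner, a diagonal pair marks two). It is a stand-in, not the patent's encoding scheme.

```python
def corner_points(img):
    """Locate boundary corners of binary regions with a sliding 2 x 2 test.

    img is a list of equal-length rows of 0/1 values.  Each 2 x 2
    junction containing exactly one or exactly three foreground pixels
    sits at a corner of a rectilinear region boundary; a diagonal pair
    counts as two coincident corners.
    """
    corners = []
    for y in range(len(img) - 1):
        for x in range(len(img[0]) - 1):
            block = (img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
            ones = sum(block)
            if ones in (1, 3):
                corners.append((x, y))                  # one corner at this junction
            elif ones == 2 and block[0] == block[3]:    # diagonal pattern
                corners.extend([(x, y), (x, y)])
    return corners

# A solid 2 x 3 rectangle has exactly four corners:
example = [[0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0]]
assert len(corner_points(example)) == 4
```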
The various components of this method along with the hardware and software required to implement them provide an improved solution to the picture digestion and analysis problem which has a number of unique advantages:
1. The corner point encoding scheme encodes the picture based on the two-dimensional information content. Thus a higher level of data compression is achieved than with one dimensional schemes such as run-length encoding.
2. The encoder is based on passing a single 3 by 3 window once over the picture while Zahn (for example) required alternating passes of 2 x 2 and 3 x 2 windows. This permits a much simpler implementation and encodes the picture while it is being output by the camera in real time, not requiring it to be stored in a frame buffer.
3. The 3 by 3 window encoder can also be utilized to quickly eliminate single pixel noise while encoding the picture.

4. The sorting scheme sorts the corner points as they come from the encoder into ordered lists by region.
The sorter operates quickly, requiring little searching; also the searching can be expedited by use of special hardware.
5. The sorting scheme accounts for the fact that corners come in from two rows simultaneously without buffering corners from the second row until the first has been sorted.
6. The resulting region boundary description is in an optimal format for analysis. The system automatically generates a boundary description based on vertices (corners). Since these vertices are represented as X and Y positions, properties and measurements can be directly computed, as sketched following this list. Other systems often generate intermediate format data which must be converted, or they lose the boundary description altogether.
7. This vertex based boundary description can be easily scaled, shifted, and rotated. In many other techniques, especially those which store pixels, this is almost impossible as a practical matter.
8. The region boundary descriptions can be further simplified by syntactically parsing them to remove extraneous corner points. By utilizing a high order syntactical parser which is aware of the unique features generated by the basic spatial quantization and encoding scheme, the boundary can be substantially simplified without distortion.
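Advantages 6 and 8 can be illustrated directly from a vertex list. The sketch below computes area, centroid and perimeter with the shoelace formula and then drops collinear vertices as the simplest possible "syntactic" clean-up; it shows why a vertex format is convenient, and is not the patent's own atomic-feature or parsing logic.

```python
import math

def polygon_features(vertices):
    """Area, centroid and perimeter straight from an ordered (x, y) corner list."""
    area2 = cx = cy = perim = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0        # shoelace term
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
        perim += math.hypot(x1 - x0, y1 - y0)
    area = area2 / 2.0
    return abs(area), (cx / (6 * area), cy / (6 * area)), perim

def drop_collinear(vertices):
    """Remove any vertex lying on the straight line between its neighbours."""
    kept = []
    n = len(vertices)
    for i in range(n):
        (ax, ay), (bx, by), (nx, ny) = vertices[i - 1], vertices[i], vertices[(i + 1) % n]
        if (bx - ax) * (ny - ay) != (by - ay) * (nx - ax):   # non-zero turn at this vertex
            kept.append(vertices[i])
    return kept

square = [(0, 0), (2, 0), (4, 0), (4, 4), (0, 4)]     # one redundant vertex at (2, 0)
print(polygon_features(drop_collinear(square)))       # area 16.0, centroid (2.0, 2.0), perimeter 16.0
```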

Although only one presently preferred exemplary embodiment of this invention has been described in detail in the foregoing specification, those skilled in the art of designing vision processing systems will recognize that many variations and modifications may be made in the exemplary embodiment while yet retaining many of the novel and advantageous features of its construction and/or operation. Accordingly, all such variations and modifications are intended to be included within the scope of the following appended claims.


Claims (22)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A visual image processing system for digitally processing an electronic video image of an object within a pre-defined field of view so as to automatically identify closed edge contours of the object within such an image, said system comprising:
electronic video camera means for converting a predetermined visual field of view to a sequence of digital electronic video signals each of which represents a corresponding elemental picture element of such a visual field of view and for providing a new frame of such video signals in response to an input electronic command signal;
video processing means connected to said camera means and including an image plane memory means for temporarily storing video digital data produced in response to plural frames of said video signals and a first electronic control means for controlling said image plane memory means to accept said video digital data at a first data rate corresponding to the rate at which it is caused to occur by said camera means and to provide said video digital data at a second data rate, which may be different from said first data rate;
image processing means connected to receive successive frames of said temporarily stored video digital data from the image plane memory means and including: (a) corner point encoding means for identifying all contour corner points included in a given frame of said video digital data and for generating digital data representative thereof, (b) feature extracting sorter means for automatically sorting said corner point digital data into separate closed linked sets, each set being representative of one closed edge contour of an object thus identified in said video image, and (c) a second electronic control means connected to control the operation of said corner point encoding means and said feature extracting sorter means; and third electronic control means connected to coordinate said first and second electronic control means so as to collect successive frames of said video digital data in said image plane memory means and to subsequently supply such collected data to said image processing means when it is next available for processing another frame of similar data.
2. A visual image processing system as in claim 1 wherein each of said first, second and third electronic control means comprise a programmed micro-processor connected to communicate digital electronic signals therebetween via a common bus system of electrical conductors.
3. A visual image processing system as in claim 1 and further comprising a number N of said electronic video camera means and wherein:
said image plane memory means comprises a number M of memory means for storing individual frames of digitized video signals, M being at least equal to N; and said first control means includes means for steering a frame of video signals emanating from any one of said camera means to any then available one of said M memory means and for maintaining accurate identifying digital data associated therewith identifying the camera and/or the relative sequence of the frame's occurrence so that any desired one of previously stored frames of such video data for any given camera may be accurately located and supplied to said image processing means when requested.
4. A visual image processing system as in claim 3 wherein said first control means includes means for steering a frame of video signals emanating from each one of said camera means to a predetermined respectively corresponding one of said memory means unless it is then already still storing a previously acquired frame of video signals not yet supplied to said image processing means.
5. A visual image processing system as in claim 3 wherein said image plane memory means comprises further memory means for storing individual overlay frames of predetermined digitized video signals which may be logically combined with the frames of digitized video signals stored in said image plane memory means.
6. A visual image processing system as in claim 1 or 2 wherein:
said corner point encoding means includes FIFO memory means for storing the encoded corner point digital data generated by it until said feature extracting sorter means is next available for sorting such data.
7. A visual image processing system as in claim 1 or 2 wherein said feature extracting sorter means sequentially sorts individual corner point digital data into respectively associated closed linked sets to thus incrementally define each of the corresponding closed edge contours and further comprising atomic feature calculating means for calculating predetermined incremental geometric features of the closed edge contours then being incrementally defined each time another corner point is sorted into a given closed linked set.
8. A visual image processing system as in claim 1 or 2 wherein said third electronic control means includes user-program input means for adapting the third electronic control means to automatically analyze predetermined geometric features of the identified contours and to generate corresponding accept/reject output signals based upon whether such features are within pre-determined tolerances.
9. A visual image processing system as in claim 4 wherein:
said corner point encoding means includes FIFO memory means for storing the encoded corner point digital data generated by it until said feature extracting sorter means is next available for sorting such data.
10. A visual image processing system as in claim 4 wherein said feature extracting sorter means sequentially sorts individual corner point digital data into respectively associated closed linked sets to thus incrementally define each of the corresponding closed edge contours and further comprising atomic feature calculating means for calculating predetermined incremental geometric features of the closed edge contours then being incrementally defined each time another corner point is sorted into a given closed linked set.
11. A visual image processing system as in claim 4 wherein said third electronic control means includes user-program input means for adapting the third electronic control means to automatically analyze pre-determined geometric features of the identified contours and to generate corresponding accept/reject output signals based upon whether such features are within pre-determined tolerances.
12. A visual image processing system as in claim 9 wherein said feature extracting sorter means sequentially sorts individual corner point digital data into respectively associated closed linked sets to thus incrementally define each of the corresponding closed edge contours and further comprising atomic feature calculating means for calculating predetermined incremental geometric features of the closed edge contours then being incrementally defined each time another corner point is sorted into a given closed linked set.
13. A visual image processing system as in claim 9 wherein said third electronic control means includes user-program input means for adapting the third electronic control means to automatically analyze pre-determined geometric features of the identified contours and to generate corresponding accept/reject output signals based upon whether such features are within pre-determined tolerances.
14. A visual image processing system as in claim 12 wherein said third electronic control means includes user-program input means for adapting the third electronic control means to automatically analyze pre-determined geometric features of the identified contours and to generate corresponding accept/reject output signals based upon whether such features are within pre-determined tolerances.
15. A visual image processing system as in claim 1 wherein said image processing means includes:
random access memory means for storing sorted partially closed linked sets of digital data representing corner points, and data swap multiplex means connected to supply said stored data from the random access memory means while at the same time permitting selective alteration of the format of such data so as to facilitate the rapid automatic sorting of as yet unsorted corner point digital data therewith to form said separate completely closed link sets.
16. A visual image processing system comprising:
a plurality of electronic video cameras, each providing a stream of digital video data representing successive pixel values of a visual image;
plural input data registers, each being connected to synchronously receive and temporarily store at least one pixel of said digital video data from one of said cameras;
plural digital data comparators, each being connected to receive said temporarily stored digital video data from one of said cameras, pixel-by-pixel to compare the relative value of such video data for each pixel with a predetermined digital threshold word and to produce, in response, a single bit binary-valued output for each corresponding pixel of the visual image;
plural image plane memory means for simultaneously storing the single bit binary-valued outputs from each of said digital data comparators;
data steering means connected to receive said single bit binary-valued outputs from each said digital data comparator and to selectively steer it to any desired one of said image plane memory means;
encoding means connected to successively receive data from the image plane memory means representing the pixels of a given visual image and to detect and produce a list of encoded digital data representing the closed edge contour(s) of object(s) within said visual image;
FIFO memory means connected to receive and temporarily store said list of encoded digital data;
sorting means connected to receive said list of encoded digital data from said FIFO memory means and to sort same into plural ordered lists which individually represent the closed edge contour(s) of object(s) within said visual image; and a main CPU data processor connected to coordinate the functions of the aforesaid apparatus and to accept user-generated decision logic for analyzing said closed edge contour(s).
17. A visual image processing method for digitally processing an electronic video image of an object within a pre-defined field of view so as to automatically identify closed edge contours of the object within such an image, said method comprising:
converting a predetermined visual field of view to a sequence of digital electronic video signals each of which represents a corresponding elemental picture element of such a visual field of view and providing a new frame of such video signals in response to an input electronic command signal;
temporarily storing video digital data produced in response to plural frames of said video signals in an image plane memory means and controlling said image plane memory means to accept said video digital data at a first data rate corresponding to the rate at which it is caused to occur by said camera means and to provide said video digital data at a second data rate, which may be different from said first data rate;
identifying all contour corner points included in a given frame of said video digital data and generating digital data representative thereof, automatically sorting said corner point digital data into separate closed linked sets, each set being representative of one closed edge contour of an object thus identified in said video image, and coordinating said storing, identifying and sorting steps so as to collect successive frames of said video digital data in the image plane memory means and to subsequently supply such collected data when it is next possible to process another frame of similar data by performing said identifying and sorting steps.
18. A visual image processing method as in claim 17 further comprising storing individual frames of digitized video signals from a number N of electronic video cameras in a number M
of memory means, M being at least equal to N; and steering a frame of video signals emanating from any one of said cameras to any then available one of said memory means and maintaining accurate identifying digital data associated therewith identifying the camera and/or the relative sequence of the frame's occurrence so that any desired one of previously stored frames of such video data for any given camera may be accurately located and supplied for image processing when requested.
19. A visual image processing method as in claim 18 wherein said steering step includes steering a frame of video signals emanating from each one of said camera means to a predetermined respectively corresponding one of said memory means unless it is then already still storing a previously acquired frame of video signals not yet supplied for image processing.
20. A visual image processing method as in claim 18 wherein said temporarily storing step comprises storing individual overlay frames of predetermined digitized video signals which may be logically combined with the frames of digitized video signals stored in said image plane memory means.
21. A visual image processing method as in claim 17 wherein:
said identifying step includes storing the identified corner point digital data generated by it in a FIFO memory until said automatically sorting step can next be performed.
22. A visual image processing method as in claim 17 wherein said automatically sorting step includes sequentially sorting individual corner point digital data into respectively associated closed linked sets to thus incrementally define each of the corresponding closed edge contours and further comprises calculating predetermined incremental geometric features of the closed edge contours then being incrementally defined each time another corner point is sorted into a given closed linked set.
CA000424990A 1982-03-31 1983-03-31 Method and apparatus for visual image processing Expired CA1197607A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US363,664 1982-03-31
US06/363,664 US4493105A (en) 1982-03-31 1982-03-31 Method and apparatus for visual image processing

Publications (1)

Publication Number Publication Date
CA1197607A true CA1197607A (en) 1985-12-03

Family

ID=23431159

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000424990A Expired CA1197607A (en) 1982-03-31 1983-03-31 Method and apparatus for visual image processing

Country Status (3)

Country Link
US (1) US4493105A (en)
JP (1) JPS5916081A (en)
CA (1) CA1197607A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9064187B2 (en) 2008-10-14 2015-06-23 Sicpa Holding Sa Method and system for item identification

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4628353A (en) * 1984-04-04 1986-12-09 Chesebrough-Pond's Inc. Video measuring system
US4777651A (en) * 1984-06-25 1988-10-11 Tektronix, Inc. Method of pixel to vector conversion in an automatic picture coding system
KR900001696B1 (en) * 1984-11-09 1990-03-19 가부시기가이샤 히다찌세이사꾸쇼 Method for controlling image processing device
GB2175396B (en) * 1985-05-22 1989-06-28 Filler Protection Developments Apparatus for examining objects
US4876728A (en) * 1985-06-04 1989-10-24 Adept Technology, Inc. Vision system for distinguishing touching parts
GB2177871B (en) * 1985-07-09 1989-02-08 Sony Corp Methods of and circuits for video signal processing
DE3787587T2 (en) * 1986-07-17 1994-01-27 Matsushita Electric Ind Co Ltd Shape recognition process.
US5053989A (en) * 1986-08-27 1991-10-01 Minolta Camera Kabushiki Kaisha Digital image processing apparatus having a microprogram controller for reading microinstructions during a vacant period of the image processing circuit
US4817187A (en) * 1987-02-19 1989-03-28 Gtx Corporation Apparatus and method for vectorization of incoming scanned image data
US4752897A (en) * 1987-05-01 1988-06-21 Eastman Kodak Co. System for monitoring and analysis of a continuous process
US4814868A (en) * 1987-10-02 1989-03-21 Quadtek, Inc. Apparatus and method for imaging and counting moving particles
US5042076A (en) * 1988-12-02 1991-08-20 Electrocom Automation, Inc. Programmable optical character recognition
US4972495A (en) * 1988-12-21 1990-11-20 General Electric Company Feature extraction processor
JP3001966B2 (en) * 1990-11-30 2000-01-24 株式会社リコー How to create a dictionary for character recognition
US5319778A (en) * 1991-07-16 1994-06-07 International Business Machines Corporation System for manipulating elements in linked lists sharing one or more common elements using head nodes containing common offsets for pointers of the linked lists
US5371810A (en) * 1991-09-27 1994-12-06 E. I. Du Pont De Nemours And Company Method of determining the interior points of an object in a background
US6058209A (en) * 1991-09-27 2000-05-02 E. I. Du Pont De Nemours And Company Method for resolving redundant identifications of an object
US5337085A (en) * 1992-04-10 1994-08-09 Comsat Corporation Coding technique for high definition television signals
US5318173A (en) * 1992-05-29 1994-06-07 Simco/Ramic Corporation Hole sorting system and method
US5305894A (en) * 1992-05-29 1994-04-26 Simco/Ramic Corporation Center shot sorting system and method
US5590048A (en) * 1992-06-05 1996-12-31 Fujitsu Limited Block exposure pattern data extracting system and method for charged particle beam exposure
JP2710202B2 (en) * 1993-03-24 1998-02-10 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and data processor for bordering closed contour image with convex polygon
US5497314A (en) * 1994-03-07 1996-03-05 Novak; Jeffrey M. Automated apparatus and method for object recognition at checkout counters
US6178262B1 (en) * 1994-03-11 2001-01-23 Cognex Corporation Circle location
JPH0835818A (en) * 1994-07-25 1996-02-06 Omron Corp Image processing apparatus and method
CA2161942A1 (en) * 1994-12-21 1996-06-22 Laiguang Zeng Character smoothing in scanners/printers
US5764808A (en) * 1995-10-26 1998-06-09 Motorola, Inc. Method and device for compact representation of a discrete region contour
JPH09126890A (en) * 1995-11-01 1997-05-16 Toshiba Corp Color inspecting device
GB2324867B (en) * 1995-11-30 2000-03-22 Tokyo Seimitsu Co Ltd Method and apparatus for automatic shape computing for contour shape determining machine
US5752436A (en) * 1996-10-24 1998-05-19 Utz Quality Foods, Inc. Potato peeling apparatus
US5662034A (en) * 1996-03-08 1997-09-02 Utz Quality Foods, Inc. Potato peeling system
US6307588B1 (en) 1997-12-30 2001-10-23 Cognex Corporation Method and apparatus for address expansion in a parallel image processing memory
US6157751A (en) * 1997-12-30 2000-12-05 Cognex Corporation Method and apparatus for interleaving a parallel image processing memory
US5982395A (en) * 1997-12-31 1999-11-09 Cognex Corporation Method and apparatus for parallel addressing of an image processing memory
AUPP702498A0 (en) * 1998-11-09 1998-12-03 Silverbrook Research Pty Ltd Image creation method and apparatus (ART77)
US6876991B1 (en) 1999-11-08 2005-04-05 Collaborative Decision Platforms, Llc. System, method and computer program product for a collaborative decision platform
US6774916B2 (en) * 2000-02-24 2004-08-10 Texas Instruments Incorporated Contour mitigation using parallel blue noise dithering system
JP4375523B2 (en) * 2002-12-20 2009-12-02 富士ゼロックス株式会社 Image processing apparatus, image processing method, image processing program, printed material inspection apparatus, printed material inspection method, printed material inspection program
EP1959391A1 (en) * 2007-02-13 2008-08-20 BrainLAB AG Determination of the three dimensional contour path of an anatomical structure
US8189855B2 (en) 2007-08-31 2012-05-29 Accenture Global Services Limited Planogram extraction based on image processing
US8630924B2 (en) * 2007-08-31 2014-01-14 Accenture Global Services Limited Detection of stock out conditions based on image processing
US8009864B2 (en) 2007-08-31 2011-08-30 Accenture Global Services Limited Determination of inventory conditions based on image processing
US9135491B2 (en) 2007-08-31 2015-09-15 Accenture Global Services Limited Digital point-of-sale analyzer
US7949568B2 (en) * 2007-08-31 2011-05-24 Accenture Global Services Limited Determination of product display parameters based on image processing
US8121415B2 (en) 2008-10-28 2012-02-21 Quality Vision International, Inc. Combining feature boundaries
US9875574B2 (en) 2013-12-17 2018-01-23 General Electric Company Method and device for automatically identifying the deepest point on the surface of an anomaly
US10019812B2 (en) 2011-03-04 2018-07-10 General Electric Company Graphic overlay for measuring dimensions of features using a video inspection device
US9984474B2 (en) 2011-03-04 2018-05-29 General Electric Company Method and device for measuring features on or near an object
US10586341B2 (en) 2011-03-04 2020-03-10 General Electric Company Method and device for measuring features on or near an object
US10157495B2 (en) 2011-03-04 2018-12-18 General Electric Company Method and device for displaying a two-dimensional image of a viewed object simultaneously with an image depicting the three-dimensional geometry of the viewed object
US9412189B2 (en) 2013-05-13 2016-08-09 General Electric Company Method and system for detecting known measurable object features
US9074868B2 (en) * 2013-05-13 2015-07-07 General Electric Company Automated borescope measurement tip accuracy test
US9818039B2 (en) 2013-12-17 2017-11-14 General Electric Company Method and device for automatically identifying a point of interest in a depth measurement on a viewed object
US9842430B2 (en) 2013-12-17 2017-12-12 General Electric Company Method and device for automatically identifying a point of interest on a viewed object
CN112926590B (en) * 2021-03-18 2023-12-01 上海晨兴希姆通电子科技有限公司 Segmentation recognition method and system for characters on cable
CN113781290B (en) * 2021-08-27 2023-01-31 北京工业大学 Vectorization hardware device for FAST corner detection

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4118730A (en) * 1963-03-11 1978-10-03 Lemelson Jerome H Scanning apparatus and method
US3576980A (en) * 1968-03-28 1971-05-04 California Computer Products Automatic corner recognition system
US3706071A (en) * 1970-06-22 1972-12-12 Information Int Inc Binary image processor
US3889234A (en) * 1972-10-06 1975-06-10 Hitachi Ltd Feature extractor of character and figure
US3863218A (en) * 1973-01-26 1975-01-28 Hitachi Ltd Pattern feature detection system
JPS5425782B2 (en) * 1973-03-28 1979-08-30
US3987412A (en) * 1975-01-27 1976-10-19 International Business Machines Corporation Method and apparatus for image data compression utilizing boundary following of the exterior and interior borders of objects
JPS51112236A (en) * 1975-03-28 1976-10-04 Hitachi Ltd Shape position recognizer unit
US4183013A (en) * 1976-11-29 1980-01-08 Coulter Electronics, Inc. System for extracting shape features from an image
US4093941A (en) * 1976-12-09 1978-06-06 Recognition Equipment Incorporated Slope feature detection system
US4087788A (en) * 1977-01-14 1978-05-02 Ncr Canada Ltd - Ncr Canada Ltee Data compression system
US4162482A (en) * 1977-12-07 1979-07-24 Burroughs Corporation Pre-processing and feature extraction system for character recognition
JPS5847064B2 (en) * 1978-07-08 1983-10-20 工業技術院長 Character reading method
US4300122A (en) * 1979-04-02 1981-11-10 Sperry Corporation Apparatus for processing digital data representative of a two-dimensional image
US4288782A (en) * 1979-08-24 1981-09-08 Compression Labs, Inc. High speed character matcher and method
US4400728A (en) * 1981-02-24 1983-08-23 Everett/Charles, Inc. Video process control apparatus

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9064187B2 (en) 2008-10-14 2015-06-23 Sicpa Holding Sa Method and system for item identification

Also Published As

Publication number Publication date
JPS5916081A (en) 1984-01-27
US4493105A (en) 1985-01-08

Similar Documents

Publication Publication Date Title
CA1197607A (en) Method and apparatus for visual image processing
US4490848A (en) Method and apparatus for sorting corner points in a visual image processing system
EP0090395B1 (en) Method and apparatus for visual image processing and for sorting corner points in a visual image processing system
EP0122543B1 (en) Method of image processing
Schlag et al. Implementation of automatic focusing algorithms for a computer vision system with camera control
US7181059B2 (en) Apparatus and methods for the inspection of objects
US4075604A (en) Method and apparatus for real time image recognition
US4742552A (en) Vector image processing system
EP0198481A2 (en) Image processing apparatus
US5222158A (en) Pattern recognition apparatus
CN110712202B (en) Special-shaped component grabbing method, device and system, control device and storage medium
CN111027507A (en) Training data set generation method and device based on video data identification
CN115330824A (en) Box body grabbing method and device and electronic equipment
CN112101060A (en) Two-dimensional code positioning method based on translation invariance and small-area template matching
CN111353577A (en) Optimization method and device of multi-task-based cascade combination model and terminal equipment
US4246570A (en) Optical wand for mechanical character recognition
Wang et al. HFR-video-based machinery surveillance for high-speed periodic operations
CA1216933A (en) Gray scale image processor
Losty et al. Computer vision for industrial applications
CN116475081B (en) Industrial product sorting control method, device and system based on cloud edge cooperation
Ullmann Analysis of 2-D occlusion by subtracting out
Dagli et al. Automated assembly systems
Liu et al. Design of Hardware Acceleration in Edge Computing Device for Bottle Cap High-Speed Inspection
Gräßler et al. Creating Synthetic Training Data for Machine Vision Quality Gates
Lougheed Application of parallel processing for automatic inspection of printed circuits

Legal Events

Date Code Title Description
MKEC Expiry (correction)
MKEX Expiry