PROVIDING INFORMATION OF MOVING OBJECTS
Field of the Invention
The present invention relates to provision of information of moving objects, and in particular, but not exclusively, to provision of information by a machine vision system of objects moved by a conveyor.
Background of the Invention
Handling and/or processing of various objects is commonplace in various fields of industry. The objects may be, for example, any workpieces, tools, goods, pallets or similar articles that are to be handled in industrial or commercial processes. Objects may also need to be moved, e.g. during manufacturing or transporting operations, by an appropriate conveyor or similar apparatus arranged to move the objects. In several applications there also exists a need to pick the moving objects from a conveyor or similar and/or to move the objects to another location and/or to process the objects further e.g. by subjecting the objects to predefined packaging, machining or finishing operations.
During the further processing of the object it may be necessary to know one or more of the characteristics of the object, such as the position, orientation, shape or size of the object, so that it is possible for example to grip and move the object from one location to another. Machine vision systems may be used for providing the required information on the objects to be handled and/or processed. More detailed
examples of the (machine) vision systems are disclosed in international publications Nos. WO 95/00299 and WO 97/17173.
When employing a vision system an object carried by a conveyor may be detected and/or recognised by an imaging apparatus of the vision system. The object may then be subjected to further processing based on information on the object provided by the vision system. The vision system arrangement may even be such that an object and/or predefined characteristic information of an object is detected while the object is moving. After the detection the object may, for example, be picked from the conveyor by appropriate means such as a gripping mechanism of a robot or a manipulator or similar actuator and moved to a desired next location or stage of processing. In other words, the machine vision based systems typically operate such that an object is detected and imaged by means of an imaging apparatus, whereafter the object is recognised and predefined characteristics thereof are determined. After the determination of information that is required for processing the object further, the object may be processed, such as gripped by appropriate gripping means. The further processing may include operations such as machining or packaging that utilise the information received from the imaging of the object.
The imaging apparatus is typically arranged to take a plurality of subsequent picture frames of its imaging area as the objects move, i.e. flow past, the imaging area. Thus the object flow is divided into several slices or frames, each presenting a still image of the object or objects within the imaging area at a given moment. The division of
the object flow into the subsequent still image "slices" may be referred to as slicing. Two subsequent slices usually overlap so that none of the objects on the conveyor will be missed. The interval between the subsequent slices is typically optimised such that all objects will become imaged with a certain reliability while only a minimum number of images is needed.
A conveyor apparatus may operate at a relatively high speed, and thus the interval between the subsequent images should be relatively short. With short imaging intervals it is, however, possible that an object becomes imaged twice, i.e. an object may be found in two subsequent images. In addition, the vision system may detect the same object twice because of the shape and/or size of the object. This may occur e.g. when some of the objects are relatively big and require a relatively wide imaging area and a short interval, while some of the other objects to be processed by the same system are of a relatively small size. If a double detection occurs, the vision system may output information indicating that, instead of one actual object, the conveyor carries two (or even more) objects. The subsequent processing apparatus will then operate as if the conveyor carried two (or even more) objects. For example, a picking device may then accomplish a so called "double-pick" operation. The second pick would, however, be a "ghost" pick as the real object has already been picked.
The reason why a double-picking may occur is that a controller of the object handling and/or processing system may find the same object from two (or more) different images which were taken shortly one after another. For example, as
the conveyor has moved the object between the images, the object may lie on a top or front part of the first image while in the second image the same object may lie on a bottom or rear part of the image. The picking device will then get two search results, based on the first and the second images respectively, and will thus make two pickings. This unnecessarily consumes the resources and working time of the picking device. In addition, while the device is making a "ghost" pick, a following real object may simultaneously pass the picking area, so that the following object will not become picked at all. Therefore it would be advantageous to reduce the number of any useless "ghost" operations of the handling and/or processing apparatus.
Summary of the Invention
It is an aim of the embodiments of the present invention to address one or several of the above problems.
According to one aspect of the present invention, there is provided a method of providing information on moving objects by an imaging apparatus, comprising: determining the speed of the objects based on information provided by the imaging apparatus; generating a plurality of images of the objects; and determining whether an image is the first image of an object based on the determined speed of the objects.
According to another aspect of the present invention there is provided an apparatus for providing information of moving objects, comprising:
an imaging apparatus arranged to generate a plurality of images of the objects moving past an imaging area of the imaging apparatus; and a controller arranged to determine the speed of the objects by processing information that is based on images generated by the imaging apparatus and to determine if an object exists in more than one image of the plurality of images based on the determined speed of the objects.
The speed of the objects may be determined based on information provided by two images taken by the imaging apparatus. The direction of movement of the objects may also be determined based on information provided by the imaging apparatus. The information on which the speed and/or direction determination is based and the plurality of images of the objects may be provided by a common imaging device. The imaging apparatus may comprise at least one camera. If it is determined that the image is not the first image of the object, the object in the image may be ignored, and no further control instructions concerning the object may be generated based on the later image. The determination of whether the image is the first image including the object may comprise computing a theoretic location of the object in an image and verifying whether the theoretic location of the object and the real location of the object in the image match with a predefined accuracy. The accuracy may be adjustable.
In order to make the recognition process faster an object in an image may be recognized based on determining predefined information of the boundary of the object from an image of the object. At least one characteristic of an object may
also be determined based on determining predefined information of the boundary of the object.
The embodiments of the present invention provide several advantages. By providing a controller of a system processing an object with information concerning the manner in which the object moves, it is possible to reduce the risk that a handling/processing apparatus subjects the same object to an operation twice, e.g. to reduce the possibility of an operation where a robot tries to pick an object twice. The same imaging apparatus that is employed for recognising and/or for determining the characteristics of the object may be used for the determination of the parameters relating to the movement of the object.
Brief Description of Drawings
For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 shows one embodiment of the present invention; Figures 2a and 2b disclose two subsequent images of an imaging area of an imaging apparatus; and
Figure 3 is a flowchart illustrating the operation of one embodiment of the present invention.
Description of Preferred Embodiments of the Invention
Reference is made to Figure 1 which shows a schematic presentation of an embodiment of the present invention. The system includes a conveyor 1 for supporting and moving objects 2. The objects are moved at speed v in a direction
from left to right as indicated by the arrow 10. Even though Figure 1 shows a belt conveyor, the skilled person is familiar with many other possible types of conveyors that could be used for conveying objects. These include chain conveyors and conveyors in which the objects are moved below the conveyor structure, e.g. supported by appropriate hangers. Thus it is to be appreciated that the embodiments of the invention can be applied to any type of conveying arrangement adapted to move objects.
The system includes further an imaging apparatus comprising a camera 3. Various possibilities for the imaging apparatus are known, these possibilities including, without restriction, cameras such as CCD (Charge Coupled Device) matrix cameras, progressive scan cameras, CCIR cameras (CCIR is a European standard for machine vision cameras employing a resolution of 768x576 pixels) and RS170 cameras (a North American standard with a resolution of 640x480 pixels), as well as laser and infrared imaging applications. The camera 3 is arranged to image objects 2 on the belt 1 that are within an imaging area 40 between dashed lines 4a and 4b (see also Figs. 2a and 2b).
In Figure 1 a single camera 3 is shown to be disposed above the conveyor 1. However, the position and general arrangement of the imaging apparatus may differ from this, and the embodiments of the invention are applicable in various possible imaging apparatus variations in which the number and positioning of the imaging apparatus and components can be freely chosen. The position of the imaging device 3 is typically chosen such that it is possible to detect desired points of the objects for the provision of a
reliable detection of the objects moving forward on the belt. Thus the camera may also be positioned at the side of the conveyor or even below the conveyor. The vision system may also be provided with more than one camera, e.g. in applications where three-dimensional images of the objects are produced or great accuracy is required.
The exemplifying system of Figure 1 includes further a robot 5 for picking the objects from the conveyor 1. More particularly, the objects are picked by gripping means 6 of the robot 5. It is to be understood that the robot 5 is only a preferred example of a possible subsequent handling and/or processing device. Any suitable actuator device may be used for the further processing of the objects 2 after they have been imaged by the imaging apparatus 3.
A control unit 7 is also shown. The control unit is arranged to process information received from the imaging apparatus 3 via connection 8 and to control the operation of the robot 5 via connection 9. The controller unit includes the required data processing capability, and may be based on microprocessor technology. For example, the controller may be based on a Pentium™ processor, even though a less or more powerful processor may also be employed depending on the requirements of the system and the objects to be handled. Depending on the application, the controller 7 may be provided with appropriate memory devices, drives, display means, a keyboard, a mouse or other pointing device and any adapters and interfaces that may be required. In addition, appropriate imaging software is typically required. The controller may also be provided with a network card to enable installations requiring communication over a data
network. The controller may be adapted for communication e.g. based on TCP/IP networking (Transmission Control Protocol/Internet Protocol) or over a local area network (LAN).
A double pick checking procedure in accordance with an embodiment of the present invention will be discussed now with reference to Figures 2a and 2b. Figures 2a and 2b show two subsequent images from an imaging area 40 and a flow of objects 21, 2, and 22 through the imaging area 40. As can be seen, the object 2 can be found from both of these images. However, the second image of Figure 2b should produce information concerning the next object 22 only.
The procedure employed herein is based on the idea that by providing the controller 7 of the system with information concerning the manner in which the conveyor 1 moves, and thus how the object may appear in the different images taken by the camera 3, it is possible to prevent the robot 5 from trying to pick the same object 2 twice. After the determination procedure by the controller, the robot will be provided with only one position determination result per object and will thus not attempt more than one picking. More particularly, if the system is provided with predefined information concerning the movement of the conveyor and the objects on the conveyor (direction and speed v), it is possible to calculate whether the object 2 in the second image (as shown in Figure 2b) is in reality the same object 2 that already appeared in the first image (as shown in Figure 2a). The speed of the objects may be calculated and given in pixels/second or in "real" units, e.g. mm/second. A possibility for the determination of the direction and speed of the objects will be discussed below.
A direction vector may be used to define the direction in which the objects on the conveyor belt move. For example, component I of the vector may define the movement in the horizontal direction, and a positive value of said vector component may indicate movement from left to right, or vice versa. Component J of the vector may define the movement in the vertical direction. A positive value of the component J may indicate movement from top to bottom, or vice versa.
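The direction vector convention described above can be sketched as follows; the class name and the normalisation helper are illustrative assumptions, not part of the described system.

```python
# Minimal sketch of a conveyor direction vector with I and J components.
# The names and the normalisation step are illustrative assumptions.
import math
from dataclasses import dataclass


@dataclass
class DirectionVector:
    i: float  # horizontal component: positive = movement from left to right
    j: float  # vertical component: positive = movement from top to bottom

    def normalised(self) -> "DirectionVector":
        """Scale the vector to unit length so it encodes direction only."""
        length = math.hypot(self.i, self.j)
        return DirectionVector(self.i / length, self.j / length)


# A conveyor moving purely from left to right, as in Figure 1:
d = DirectionVector(i=1.0, j=0.0)
```

Separating the direction (a unit vector) from the speed (a scalar) keeps the later "theoretic location" computation a simple scalar-times-vector step.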
The following will give, with reference to the flow chart of Figure 3, a more precise example of the procedure for an automated conveyor direction vector and speed determination. During the automatic conveyor direction and speed determination (i.e. calibration) procedure, two images will be taken, i.e. grabbed, consecutively. To keep the procedure simple, both images should preferably contain only one object, even though this is not a requirement in all embodiments. Based on the imaging interval and the object positions in the images, the conveyor direction and speed can be determined by appropriate computations from the length of the interval and the distance the object has travelled.
The determination of the speed and direction parameters may comprise the following steps. Stop the conveyor 1, put a test object in the imaging area 40 and grab an image of the test object. Define the characteristics of the object from the image so that the system will know what kind of objects to look for during an automatic conveyor direction and speed determination operation. The auto-determination procedure
will then wait until a "matching" object comes to the imaging area along the conveyor, record the object coordinates and wait for a specified time. After the time has elapsed, a second image is grabbed and the new position of the object is recorded. The imaging interval between the two subsequent images is determined. For example, if the imaging apparatus is a camera that is positioned above a conveyor such that the top and bottom edges of the imaging area (4a and 4b, respectively) are perpendicular to the direction of movement of the objects carried by the conveyor, the first image may be grabbed when the object moves into the top of the imaging area. The second image will then be grabbed after a 'grab interval' (for example, some milliseconds) has elapsed. After this procedure the controller of the system will be provided with the required information so that it is possible to define the length of travel the object has moved during the 'grab interval'. The speed may be determined by dividing the length of travel by the 'grab interval' in appropriate units. The test or "calibrating" object may then be taken away from the conveyor belt, and the actual processing of the objects may start. According to one possibility the calibration of the speed information is accomplished during normal operation of the conveyor, and the object is not taken away but will be processed in a predefined manner after recognition.
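The calibration computation described above can be sketched as follows, assuming the object positions are available as (x, y) coordinates in pixels or real units; the function and parameter names are illustrative.

```python
# Illustrative sketch of the calibration step: given the object's position
# in two consecutively grabbed images and the grab interval between them,
# derive the conveyor speed and a unit direction vector.
import math


def calibrate(pos1, pos2, grab_interval_s):
    """pos1, pos2: (x, y) positions of the test object in the first and
    second image; grab_interval_s: time between the two grabs in seconds.
    Returns (speed, (i, j)) where (i, j) is the unit direction vector."""
    dx = pos2[0] - pos1[0]
    dy = pos2[1] - pos1[1]
    distance = math.hypot(dx, dy)          # length of travel between grabs
    speed = distance / grab_interval_s     # units (e.g. mm or pixels) per second
    direction = (dx / distance, dy / distance)
    return speed, direction


# Example: the test object moved 50 mm to the right during a 0.5 s interval.
speed, direction = calibrate((100.0, 200.0), (150.0, 200.0), 0.5)
```

With these figures the sketch reports a speed of 100 mm/second and a direction vector of (1.0, 0.0), i.e. movement purely from left to right.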
After the controller of the system is provided with the speed and direction information, it may verify whether an object in an image is in reality an object that was already processed based on information from the previous image. This may be done by computing what will be the "theoretic"
location of the object 2 in the second image (Fig. 2b) based on the determined speed and direction information of the objects. After the "theoretic" location of the object 2 in the second image is computed, an object appearing in that location in the second image may be ignored. In other words, based on the conveyor direction vector and the detected speed it is possible to calculate whether the object has moved to the current position from a position which has already been reported to a robot. If the double pick checking procedure finds that the calculated old position matches, within a predefined accuracy, the actual old position, the new object position will not be sent to the robot. Thus, when the system finds that the object in the second image is in fact the same object as in the first image, it will not include the object in the search results of the second image, and will not process the information of the "ghost" object any further. For example, no instructions are given to the robot 5 of Figure 1 based on the second results. By means of this the robot 5 will get only one set of characteristics (e.g. position, shape and so on) for each object, provided by the first image, and any "ghost" pickings may be avoided.
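The double-pick check described above might be sketched as follows; the function names, the Euclidean distance matching and the units are illustrative assumptions.

```python
# Minimal sketch of the double-pick check: compute the "theoretic" location
# an already-reported object should occupy in the next image, and treat any
# detection matching it within a tolerance as a "ghost" that is not sent to
# the robot. All names are illustrative.
import math


def theoretic_location(old_pos, speed, direction, interval_s):
    """Predict where a previously reported object should appear."""
    travel = speed * interval_s
    return (old_pos[0] + direction[0] * travel,
            old_pos[1] + direction[1] * travel)


def is_ghost(detected_pos, old_pos, speed, direction, interval_s, tolerance):
    """True if the detection is the same object that was already reported."""
    pred = theoretic_location(old_pos, speed, direction, interval_s)
    return math.hypot(detected_pos[0] - pred[0],
                      detected_pos[1] - pred[1]) <= tolerance


# An object was reported at (100, 200); the conveyor moves left to right at
# 100 mm/s; 0.5 s later a detection appears at (151, 200). The theoretic
# location is (150, 200), so within a 5 mm tolerance the detection is a ghost.
ghost = is_ghost((151.0, 200.0), (100.0, 200.0), 100.0, (1.0, 0.0), 0.5, 5.0)
```

A detection falling outside the tolerance (e.g. at (200, 200) in the same situation) would instead be reported to the robot as a new, "real" object.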
It is possible to accomplish some further checks, such as whether the shape and/or size of the object in the computed theoretic location in the second image corresponds to the particular object in the first image, before ignoring the object in the second image.
It is also possible to define the accuracy level of the double or "ghost" action preventing system. This may be done e.g. by means of a tolerance value that defines how much the calculated theoretical location of the object and the actual position of the object in the second image may differ from each other before the location of the object in the second image is considered to be different. The location may be different e.g. because the object is not the same object as in the first image or because the object has, for some reason, moved relative to the conveyor. If the location of the object differs more than the tolerance value allows, the object may be considered a "real" object, or some other procedure may follow. The tolerance may be given in pixels or in real units, e.g. in millimetres. It may also be preferred in some embodiments to be able to adjust the accuracy level, which can be accomplished by changing the tolerance value.
According to a preferred embodiment the imaging is based on a method where only the boundary of the object is imaged and analysed, and wherein the object is recognised based on the boundary characteristics. This enables faster processing, as the computing capacity and time required for the boundary detection and computations are less than if the entire object area is analysed on a pixel-to-pixel basis. According to one possibility the actual object related data is retrieved from an object database based on the recognition of predefined points on the boundary, and this retrieved data is then used in the actual handling and/or processing of the object.
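As a rough illustration of boundary-based recognition, the following sketch visits only the pixels of a binary object mask that touch the background and uses a very simple boundary characteristic (the boundary pixel count) as a key into a hypothetical object database; a real system would use richer boundary descriptors, and the database contents shown here are invented for the example.

```python
# Rough sketch: extract the boundary pixels of a binary image (a list of
# rows of 0/1 values) and recognise the object from a boundary-derived
# characteristic rather than analysing every interior pixel.
def boundary_pixels(mask):
    """Return coordinates of object pixels that touch a background pixel
    (or the image edge) via 4-connectivity."""
    h, w = len(mask), len(mask[0])
    edge = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]
                   for ny, nx in neighbours):
                edge.append((y, x))
    return edge


# Hypothetical object database keyed by boundary pixel count.
object_db = {8: "washer", 12: "bracket"}

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
kind = object_db.get(len(boundary_pixels(mask)))
```

For the 3x3 block above, only the 8 outer pixels are visited as boundary pixels; the single interior pixel is never analysed, which is the source of the speed advantage described in the text.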
The above example described the use of a test or "calibrating" object for the calibration procedure. However, it is possible to provide the conveyor with a special marking, such as a cross, a spot or a line extending perpendicular to the conveyor belt, or to arrange a special sign or marking on some of the objects to be conveyed through the imaging area.
According to one possibility the system automatically detects the speed of the conveyor, e.g. from a marking provided on the conveyor, at predefined intervals, and adaptively adjusts the speed information whenever the speed has changed. The arrangement may also be such that whenever the special marking or pattern is detected (either on the belt, chain or similar element of the conveyor or on an object), a speed calibration procedure will occur.
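The adaptive speed adjustment could be sketched as follows, assuming the vision system reports the position of the conveyor marking at fixed check intervals; the class and its interface are illustrative assumptions, not part of the described system.

```python
# Sketch of adaptive speed tracking: each time the conveyor marking is
# observed, the travelled distance over the known check interval gives a
# fresh speed measurement, and the stored speed is updated on change.
class SpeedTracker:
    def __init__(self, check_interval_s):
        self.check_interval_s = check_interval_s
        self.speed = None              # last known conveyor speed
        self._last_marking_pos = None  # marking position at the previous check

    def marking_seen(self, pos):
        """Called once per check interval with the marking's position
        along the direction of movement (e.g. in mm)."""
        if self._last_marking_pos is not None:
            travelled = pos - self._last_marking_pos
            measured = travelled / self.check_interval_s
            if measured != self.speed:
                self.speed = measured  # adaptively adjust the speed information
        self._last_marking_pos = pos


tracker = SpeedTracker(check_interval_s=0.5)
tracker.marking_seen(0.0)
tracker.marking_seen(250.0)  # the marking moved 250 mm during 0.5 s
```

In this example the tracker settles on a speed of 500 mm/second; a real belt marking would of course wrap around, which this simplified sketch does not model.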
It was already noted above that the embodiments are applicable also if several cameras or other imaging devices are used for the vision system. The two or more imaging devices may also be monitoring different conveyors. Each of the imaging devices may have its own double pick checking system, or the arrangement may be such that a common controller controls the operation of each of the separate imaging devices.
A message may be shown to the user to confirm a successful auto-determination procedure. If the process fails, an error message may be shown.
It should be appreciated that whilst embodiments of the present invention have been described in relation to the picking of objects, embodiments of the present invention are applicable to any other type of operations where there may be a need for providing information of moving objects so that it is possible to reduce the amount of any unnecessary double or "ghost" operations following a "double" detection of the object.
It is also noted herein that while the above describes exemplifying embodiments of the invention, there are several variations and modifications which may be made to the disclosed solution without departing from the scope of the present invention as defined in the appended claims.