CN105528789A - Robot vision positioning method and device, and visual calibration method and device - Google Patents


Info

Publication number: CN105528789A
Application number: CN201510900027.0A
Authority: CN (China)
Prior art keywords: image, module, speck, coordinate, template
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN105528789B (en)
Inventor: 王晓东
Current assignee: SHENZHEN HENGKETONG ROBOT CO., LTD.
Original assignee: Shenzhen Hengketong Multidimensional Vision Co Ltd
Filing date: 2015-12-08
Publication date (CN105528789A): 2016-04-27
Grant publication date (CN105528789B): 2018-09-18
Current legal status: Active


Abstract

The present invention provides a robot vision positioning method and device. The method comprises: obtaining a target image and preprocessing it; performing feature segmentation on the image according to a preset segmentation parameter; filtering the segmented image; performing connected-domain detection on the filtered image to extract the bright spots (specks) that form a characteristic mark; filtering the extracted specks; judging whether the number of specks after filtering matches a preset number, and if not, readjusting the segmentation parameter and detecting again; if so, identifying the speck contours and judging whether they match a preset contour template; and, if they match, outputting the identified characteristic mark. By automatically adjusting the segmentation parameter, the method makes the segmented contour satisfy the initial setting conditions and adapts to image detection under different lighting conditions. The invention further provides a vision calibration method and device.

Description

Robot vision positioning method and device, and vision calibration method and device
Technical field
The present invention relates to the field of robotics, and in particular to a robot vision positioning method and device and a vision calibration method and device.
Background technology
In industrial robot systems, workpiece positioning methods include mechanical positioning, photoelectric sensors, magnetic induction devices and vision positioning. Mechanical positioning and inductive-sensor positioning have the advantage of low cost, but their positioning accuracy and flexibility are poor, whereas vision positioning offers high accuracy and good flexibility.
Traditional vision processing methods use either template feature matching or spot scanning. Template feature matching is computationally expensive: robot vision applications need multi-template databases, setup is complex, and templates must be adjusted continually as new situations appear. The detection success rate of spot detection, in turn, is strongly affected by ambient lighting conditions and camera parameters. Both methods therefore require frequent adjustment of image parameters and templates: whenever the identification mark changes, the template must be reset, otherwise marks are missed or cannot be recognized at all, so the robustness of such systems is poor.
Summary of the invention
In view of this, it is necessary to address the above problem of frequently resetting templates by proposing a robot vision positioning method and device that are simple and do not require templates to be read in repeatedly.
A robot vision positioning method, the method comprising: S1: obtaining a target image and preprocessing the target image; S2: performing feature segmentation on the image processed in step S1 according to a preset segmentation parameter; S3: filtering the image processed in step S2; S4: performing connected-domain detection on the image processed in step S3 to extract the specks forming a characteristic mark; S5: filtering the specks; S6: judging whether the number of specks after filtering matches a preset speck number; if so, entering step S7; if not, adjusting the segmentation parameter of step S2 according to a preset rule and repeating steps S2-S6; S7: identifying the speck contours; S8: judging whether the identified speck contours match a preset template contour; if they match, entering step S9; S9: outputting the identified characteristic mark.
A robot vision positioning device, the device comprising: an acquisition module, for obtaining a target image and preprocessing the target image; a segmentation module, for performing feature segmentation on the preprocessed image according to a preset segmentation parameter; a filtration module, for filtering the image processed by the segmentation module; a detection module, for performing connected-domain detection on the image processed by the filtration module to extract the specks forming a characteristic mark; a filtering module, for filtering the specks; a judging module, for judging whether the number of specks after filtering matches a preset number and, if not, notifying the segmentation module to adjust the segmentation parameter according to a preset rule; an identification module, for identifying the speck contours if the number of specks after filtering matches the preset number; a matching module, for judging whether the identified speck contours match a preset template contour; and an output module, for outputting the identified characteristic mark if the identified speck contours match the preset template contour.
The above method and device obtain a target image and preprocess it; perform feature segmentation on the image according to a preset segmentation parameter; filter the segmented image; perform connected-domain detection on the filtered image to extract the specks forming a characteristic mark; filter the extracted specks; judge whether the number of specks after filtering matches the preset number, and if not, readjust the segmentation parameter and detect again; if so, identify the speck contours, judge whether they match the preset template contour, and if they match, output the identified characteristic mark. When the speck number does not match the preset number, the segmentation parameter is readjusted automatically and the template parameters do not need to be read in again. By automatically adjusting the segmentation parameter, the method makes the segmented contour satisfy the initial setting conditions, and the characteristic mark can be identified after a few iterations. The method adapts to image detection under different illumination conditions, identifies the characteristic mark even when the illumination is unstable, avoids manual parameter adjustment, and enables the vision system to run stably over the long term.
A vision calibration method, the method comprising: identifying a characteristic mark on a workpiece; driving a camera by a mechanical arm to move above the characteristic mark, and recording the physical coordinates of the movement; processing the image corresponding to the physical coordinates, and identifying the coordinates of the characteristic mark in the image; and determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark.
A vision calibration device, the device comprising: a landmark identification module, for identifying a characteristic mark on a workpiece; a coordinate recording module, for driving a camera by a mechanical arm to move above the characteristic mark and recording the physical coordinates of the movement; a coordinate identification module, for processing the image corresponding to the physical coordinates and identifying the coordinates of the characteristic mark in the image; and a relation determination module, for determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark.
The above vision calibration method and device identify the characteristic mark on a workpiece, drive the camera by the mechanical arm to move above the characteristic mark, record the physical coordinates of the movement, process the image corresponding to the physical coordinates, identify the coordinates of the characteristic mark in the image, and determine, from the recorded physical coordinates and the corresponding image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark. This calibration method is simple, calibrates directly on the actual product, runs the marking process automatically, avoids manual intervention, and yields accurate and reliable calibration parameters.
Brief description of the drawings
Fig. 1 is a flowchart of the robot vision positioning method in an embodiment;
Fig. 2 is a schematic diagram of the star-topology partition of a connected domain in an embodiment;
Fig. 3 is a schematic diagram of a rotated characteristic mark in an embodiment;
Fig. 4 is a schematic diagram of the normal-angle computation in an embodiment;
Fig. 5 is a schematic diagram of the angle index table in an embodiment;
Fig. 6A to 6C are schematic diagrams of a characteristic mark under different illumination intensities in an embodiment;
Fig. 7 is a flowchart of the robot vision positioning method in another embodiment;
Fig. 8 is a flowchart of the robot vision positioning method in yet another embodiment;
Fig. 9 is a flowchart of the method of identifying speck contours in an embodiment;
Fig. 10 is a flowchart of the method of judging whether contours match in an embodiment;
Fig. 11 is a flowchart of the vision calibration method in an embodiment;
Fig. 12 is a flowchart of the vision calibration method in another embodiment;
Fig. 13 is a schematic diagram of the difference between ideal coordinates and actual coordinates in an embodiment;
Fig. 14 is a structural block diagram of the robot vision positioning device in an embodiment;
Fig. 15 is a structural block diagram of the robot vision positioning device in another embodiment;
Fig. 16 is a structural block diagram of the robot vision positioning device in yet another embodiment;
Fig. 17 is a structural block diagram of the identification module in an embodiment;
Fig. 18 is a structural block diagram of the matching module in an embodiment;
Fig. 19 is a structural block diagram of the vision calibration device in an embodiment;
Fig. 20 is a structural block diagram of the vision calibration device in another embodiment.
Embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein serve only to explain the present invention and are not intended to limit it.
As shown in Fig. 1, in one embodiment, a robot vision positioning method is proposed, the method comprising:
Step S1, obtaining a target image and preprocessing the target image.
In the present embodiment, the target image is obtained by photographing the target object and is then preprocessed. Specifically, preprocessing can consist of sub-sampling and isolated-point filtering. Sub-sampling extracts, according to a fixed rule, one pixel out of every few pixels as a valid pixel, for example keeping one pixel every 3 pixels both horizontally and vertically. Sub-sampling yields an image with a reduced amount of data, which lowers computational complexity and increases processing speed. Isolated-point filtering removes isolated noise points from the image, using either a linear filter or a morphological filter depending on image quality.
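As an illustration, a minimal sub-sampling sketch in C follows; the step of 3 and the row-major 8-bit grayscale layout are assumptions taken from the example above, not a fixed part of the method.

// Keep one pixel every `step` pixels in both directions (e.g. step = 3).
// src is a row-major 8-bit grayscale image of size w x h; the reduced image
// of size (w/step) x (h/step) is written into dst, which the caller allocates.
void subsample(const unsigned char *src, int w, int h,
               unsigned char *dst, int step)
{
    int dw = w / step, dh = h / step;
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] = src[(y * step) * w + (x * step)];
}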
Step S2, performing feature segmentation on the image processed in step S1 according to the preset segmentation parameter.
Specifically, feature segmentation is performed on the preprocessed target image according to the preset segmentation parameter. Feature segmentation is the technique of dividing an image into several regions with specific, distinctive properties and extracting the target of interest from them. In the present embodiment, the purpose of feature segmentation is to distinguish the color of the characteristic mark from the global background color so that the mark can be extracted: for example, after feature segmentation the characteristic mark becomes a white pattern while the background around it becomes black, preparing the image for the subsequent extraction of the characteristic mark.
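The patent does not spell out the segmentation operator; one minimal reading, used in the sketches below, is a global gray-level threshold in which the threshold plays the role of the adjustable segmentation parameter (an assumption, not a statement of the patented method):

// Binarize an 8-bit grayscale image with a global threshold. The threshold
// stands in for the "segmentation parameter" (an assumption). Marks become
// white (255) and the background black (0).
void segment_by_threshold(const unsigned char *src, unsigned char *dst,
                          int npixels, int threshold)
{
    for (int i = 0; i < npixels; i++)
        dst[i] = (src[i] >= threshold) ? 255 : 0;
}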
Step S3, filtering the image processed in step S2.
In the present embodiment, image filtering suppresses the noise of the target image while preserving as much image detail as possible. Some noise points inevitably remain in the image background after feature segmentation; filtering the segmented image removes these noise points.
Step S4, performing connected-domain detection on the image processed in step S3 to extract the specks forming the characteristic mark.
Specifically, connected-domain detection extracts the parts of the image whose pixels share the same value; each extracted connected domain is called a blob (speck). Connected-domain detection is performed on the filtered image to extract the specks that form the characteristic mark. The blob features are stored in the following structure array: the array cell is defined by the LTRegion1 structure, and the blob locations are stored in the flag map flagmap.
typedef struct RegionSurround
{
    double KAngle;   // object angle
    double Length;   // object length
    double Width;    // object width
    double Cenx;     // center x coordinate
    double Ceny;     // center y coordinate
} RegionSurround;

typedef struct LTRegion1
{
    int leftPOINT[maxImageSizeY];   // left endpoint of the region in each row
    int rightPOINT[maxImageSizeY];  // right endpoint of the region in each row
    int RegionNum;                  // region count; LTRegion[0].RegionNum holds the number of valid regions
    RECT Surround_Rect;             // bounding rectangle
    int LTRegion_ID;                // region ID; LTRegion[0].LTRegion_ID holds the total number of regions
    int Region_shape;               // region shape code
    int Regiondeleted;              // region validity code: 0 = valid region, 1 = invalid region
    int Angle_longAxis;             // region angle
    double Fill_rate;               // fill rate
    int *flagmap;                   // region flag map
    RegionSurround RegionRect;      // smallest enclosing box of the region
} LTRegion1;
Step S5, filtering the specks.
Specifically, some of the specks extracted by connected-domain detection may not belong to the characteristic mark, so the specks must be filtered to exclude them. First, an initial filtering pass excludes specks by size and area: the size criterion judges whether the width and height of the minimum rectangle enclosing a speck fall within a reasonable range, and the area criterion counts the number of pixels carrying the corresponding speck label in the flag map.
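A sketch of this size/area filter, written against the LTRegion1 structure above; the range limits and the externally computed per-region pixel counts are illustrative assumptions:

// Mark regions whose enclosing box or pixel area falls outside the accepted
// range as invalid (Regiondeleted = 1). The limits are illustrative; the
// patent only requires them to lie within "a reasonable range".
void filter_specks(LTRegion1 *r, const int *pixels_per_region,
                   double min_size, double max_size,
                   int min_area, int max_area)
{
    for (int i = 0; i < r[0].RegionNum; i++) {
        double w = r[i].RegionRect.Width;
        double h = r[i].RegionRect.Length;
        int a = pixels_per_region[i];   // counted from the flag map beforehand
        if (w < min_size || w > max_size ||
            h < min_size || h > max_size ||
            a < min_area || a > max_area)
            r[i].Regiondeleted = 1;     // excluded from the characteristic mark
    }
}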
Step S6, judging whether the number of specks after filtering matches the preset number; if so, entering step S7; if not, adjusting the segmentation parameter of step S2 according to the preset rule and repeating steps S2-S6.
In the present embodiment, the number of specks after filtering is compared with the preset number, which is a range, for example 100-120. If the detected speck count lies within the preset range, the method enters the step of identifying the speck contours; if not, the identified characteristic mark is inaccurate, the segmentation parameter of step S2 must be readjusted, and the identification steps above are re-executed. The segmentation parameter is adjusted according to a set strategy, for example by an adjustment direction and an adjustment step size: the adjustment value is added to the original segmentation parameter to form the new segmentation parameter, with which feature segmentation is performed again. Each adjustment increments an adjustment counter, and a threshold for this counter can be preset. If the number of adjustments does not exceed the preset threshold, feature segmentation is repeated with the new parameter; if the adjustment counter exceeds the threshold, the method goes directly to the output stage and the whole function returns the NoObject state.
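A compact sketch of this retry loop, reusing segment_by_threshold() from the sketch above; detect_and_filter_specks() is a hypothetical helper standing in for steps S3-S5, and the fixed adjustment direction and step size are assumptions:

enum { OBJECT_FOUND, NO_OBJECT };

int detect_and_filter_specks(const unsigned char *seg, int npixels); // assumed helper: steps S3-S5, returns speck count

// Re-segment and re-detect until the speck count falls inside
// [min_count, max_count] or the adjustment counter reaches its limit.
int detect_with_retry(const unsigned char *img, unsigned char *seg,
                      int npixels, int threshold, int step,
                      int min_count, int max_count, int max_adjust)
{
    for (int adjust = 0; adjust <= max_adjust; adjust++) {
        segment_by_threshold(img, seg, npixels, threshold);
        int count = detect_and_filter_specks(seg, npixels);
        if (count >= min_count && count <= max_count)
            return OBJECT_FOUND;   // proceed to contour identification (step S7)
        threshold += step;         // preset rule: fixed direction and step size
    }
    return NO_OBJECT;              // counter exceeded: give up on this image
}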
Step S7, identifying the speck contours.
In the present embodiment, when the speck count matches the preset number, the extracted specks meet the requirements, and the next task is to identify the speck contours and extract the identified contours. Specifically, the center of the connected domain is determined first; with that center as the central point, the connected domain is partitioned in the star-topology fashion shown in Fig. 2, the edge of the partitioned connected domain is sampled in polar coordinates at a preset angular interval (for example 1 degree), and the sampled rectangular-coordinate data (the coordinates of each corresponding vector) are stored in linear arrays ordered by counterclockwise polar angle. The contour features are stored in a dedicated structure.
Step S8, judging whether the identified speck contours match the preset template contour; if they match, entering step S9; if not, ending.
In the present embodiment, after the speck contours are identified, whether they match the preset template contour is judged: if they match, the identified characteristic mark is output; if not, the method ends and returns the NoObject state. Specifically, the mean polar radius of the detected contour is compared with the mean polar radius of the template contour. If they do not agree, the polar center of the connected domain is redetermined according to the comparison result, and the polar data of the contour are redetermined about the redetermined center. The redetermined polar data and the template polar data are then differenced to obtain a difference value T, and whether T is smaller than a preset value is judged: if so, the two match; if not, they do not.
In addition, when a workpiece rotates, its characteristic mark rotates with it, as shown in Fig. 3, where the drumhead shapes are the characteristic marks on the workpiece. To locate the angle of the characteristic mark quickly, one embodiment uses a contour normal-angle vector matching method, which can handle characteristic-mark images rotated by large angles at the cost of only about 5% extra image processing time. Specifically, first, the contour normal-angle vector of the template image is computed and stored in a linear table as a template parameter. Second, the normal-angle vector ObjectAngle[] of the contour in the target image is computed with a global normal-angle method: the normal angles are computed from the contour vectors Vector(VectorX, VectorY), as shown in Fig. 4; for example, using the adjacent contour difference vectors Vec1-Vec2 and Vec2-Vec3, the normal is perpendicular to the difference of adjacent contour vectors. The normal-angle vector is stored in a circular linear table. Finally, the start index of ObjectAngle[] in the linear table is shifted in increasing order to find the index value index = IA that minimizes the accumulated minimum difference between the normal angles of the real image and of the template image; multiplying IA by the angle scale factor converts it into the rotation angle of the characteristic mark. The angle index table is shown in Fig. 5.
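A sketch of the final index search; the 360-entry table and 1-degree angle scale follow from the 1-degree sampling mentioned above, but are assumptions rather than fixed parameters of the patent:

#include <math.h>
#include <float.h>

// Find the circular shift IA of the object's normal-angle table that best
// aligns it with the template's table. With 1-degree sampling (n = 360,
// scale = 1.0), the returned shift times `scale` is the rotation angle.
int match_rotation(const double *objectAngle, const double *templAngle,
                   int n, double scale, double *angle_out)
{
    int best = 0;
    double best_sum = DBL_MAX;
    for (int shift = 0; shift < n; shift++) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double d = fabs(objectAngle[(i + shift) % n] - templAngle[i]);
            if (d > 180.0) d = 360.0 - d;   // angles in [0, 360) wrap around
            sum += d;
        }
        if (sum < best_sum) { best_sum = sum; best = shift; }
    }
    *angle_out = best * scale;   // rotation angle of the characteristic mark
    return best;                 // the index value IA
}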
Step S9, outputting the identified characteristic mark.
In the present embodiment, when the identified speck contours match the preset template contour, the characteristic mark has been identified successfully, and the identified characteristic mark is output.
In the present embodiment, a target image is obtained and preprocessed; feature segmentation is performed on the image according to the preset segmentation parameter; the segmented image is filtered; connected-domain detection is performed on the filtered image to extract the specks forming the characteristic mark; the extracted specks are filtered; whether the number of specks after filtering matches the preset speck number is judged, and if not, the segmentation parameter is readjusted and detection is repeated; if so, the speck contours are identified, whether they match the preset template contour is judged, and if they match, the identified characteristic mark is output. When the speck count does not match the preset number, the segmentation parameter is readjusted automatically, so the method adapts to image detection under different illumination conditions and identifies the characteristic mark even when the illumination is unstable. An actual test case is shown in Fig. 6: Fig. 6A shows the light-source intensity used when the template was created, and Figs. 6B and 6C show the light-source brightness after it has drifted during actual operation. By automatically adjusting the segmentation parameter, the method makes the segmented contour satisfy the initial setting conditions, identifies the characteristic mark after a few iterations, avoids manual parameter adjustment, and enables the vision system to run stably over the long term.
As shown in Fig. 7, in one embodiment, the method further comprises, before step S1:
Step S01, reading in a template image.
Specifically, before the target image of the target object is photographed, a template image photographed in advance is read in. The template image serves as the reference standard against which the correctness of the subsequently extracted characteristic mark is measured.
Step S02, creating template parameters from the read-in template image, the template parameters comprising the number of specks and the template contour to be extracted.
Specifically, the template parameters are extracted from the template image. The template parameters comprise at least one of: the number of pixels in the template image, the number of specks forming the characteristic mark, and the height, width, area, duty ratio, gray threshold and contour of the template.
In one embodiment, the above step S1 comprises obtaining the target image and sub-sampling the target image.
Specifically, sub-sampling extracts one pixel out of every few pixels as a valid pixel according to a set strategy; the reduced image obtained after sub-sampling lowers computational complexity and increases processing speed.
As shown in Fig. 8, in one embodiment, the method further comprises, after the identified speck contours match the template contour:
Step S90, reading the target image again and extracting the contour of the target image.
In the present embodiment, because the target image was previously sub-sampled into a reduced image to speed up computation, the extracted contour is the contour of the sub-sampled image, whose precision and resolution are limited. The original target image is therefore read again, and the contour of the characteristic mark is re-extracted on the basis of the original image to improve precision and resolution. Specifically, the initial contour values are used to extract the contour of the characteristic mark on the original target image; the contour extraction method can be a sub-pixel method or a label-competition method.
Step S91, fitting the extracted contour of the target image according to the parameters of the speck contours identified in step S7.
In the present embodiment, the parameters of the contour extracted from the sub-sampled image in step S7 are used to fit the contour extracted from the original target image. Specifically, the geometric parameters of the contour extracted in step S7 are used to fit the contour of the original target image by the least-squares method. For example, a circular characteristic mark uses the circle radius and center coordinates to build a circular fitting model, while square and straight-line marks use straight-line fitting; after fitting, the image positioning accuracy can reach 0.1 pixel.
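For the circular case, a minimal least-squares circle fit follows; the patent only says "least squares", so the specific algebraic (Kåsa-style) formulation is an assumption:

#include <math.h>

// Algebraic least-squares circle fit: recover center (cx, cy) and radius r
// from n contour points. Returns 0 on success, -1 if the point set is
// degenerate (e.g. all points collinear).
int fit_circle(const double *x, const double *y, int n,
               double *cx, double *cy, double *r)
{
    double mx = 0, my = 0;
    for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;

    double suu = 0, suv = 0, svv = 0, suuu = 0, svvv = 0, suvv = 0, svuu = 0;
    for (int i = 0; i < n; i++) {
        double u = x[i] - mx, v = y[i] - my;   // centered coordinates
        suu += u*u; suv += u*v; svv += v*v;
        suuu += u*u*u; svvv += v*v*v;
        suvv += u*v*v; svuu += v*u*u;
    }
    double det = suu*svv - suv*suv;
    if (fabs(det) < 1e-12) return -1;
    double uc = 0.5 * (svv*(suuu + suvv) - suv*(svvv + svuu)) / det;
    double vc = 0.5 * (suu*(svvv + svuu) - suv*(suuu + suvv)) / det;
    *cx = mx + uc;
    *cy = my + vc;
    *r  = sqrt(uc*uc + vc*vc + (suu + svv) / n);
    return 0;
}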
As shown in Fig. 9, in one embodiment, the step of identifying the speck contours comprises:
Step S71, determining the center of the connected domain.
Specifically, identifying the speck contours first requires computing the initial values and the center value of the connected domain. If the contour has only one center, the center of the connected domain is set at the geometric center of the contour; if there are multiple contour centers, the center of the connected domain is set at the mean center of the groups of contours.
Step S72, using the center of the connected domain as the pole of the polar coordinate system, and collecting polar coordinates along the edge of the connected domain at the preset angular interval.
In the present embodiment, once the center of the connected domain has been determined, it is used as the pole of the polar coordinate system, and polar data are collected along the edge of the connected domain at the preset angular interval (for example 1 degree). Specifically, the contour is divided in the star-topology fashion shown in Fig. 2, and polar samples are collected along the edge of the contour.
Step S73, storing the collected polar data in linear arrays ordered by counterclockwise polar angle.
Specifically, the sampled polar data are stored in the linear arrays VectorX[] and VectorY[] ordered by counterclockwise polar angle (0-360 degrees), which makes it convenient to adjust the position of the collected characteristic mark afterwards.
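A sketch of steps S72-S73 under simplifying assumptions: 1-degree sampling, and an edge radius found by walking outward through the flag map until the region label ends (which presumes a star-shaped region, as the star-topology partition suggests):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NSAMPLES 360   // 1-degree interval, taken from the example above

// Sample the edge of region `label` in polar coordinates around the pole
// (cx, cy), storing rectangular coordinates ordered by counterclockwise
// polar angle in VectorX[]/VectorY[]. flagmap is the w x h region label map.
void sample_contour(const int *flagmap, int w, int h, int label,
                    double cx, double cy,
                    double VectorX[NSAMPLES], double VectorY[NSAMPLES])
{
    for (int k = 0; k < NSAMPLES; k++) {
        double a = k * (2.0 * M_PI / NSAMPLES);   // counterclockwise angle
        double x = cx, y = cy, r = 0.0;
        for (;;) {   // walk outward from the pole until we leave the region
            double nx = cx + (r + 1.0) * cos(a);
            double ny = cy + (r + 1.0) * sin(a);
            int ix = (int)nx, iy = (int)ny;
            if (ix < 0 || ix >= w || iy < 0 || iy >= h ||
                flagmap[iy * w + ix] != label)
                break;
            r += 1.0; x = nx; y = ny;
        }
        VectorX[k] = x;   // last point still inside the region: the edge
        VectorY[k] = y;
    }
}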
As shown in Fig. 10, in one embodiment, step S8 comprises:
Step S81, judging whether the difference between the mean polar radius of the detected speck contour and the mean polar radius of the template contour is smaller than the preset distance; if so, entering step S9; if not, entering step S82.
Specifically, the mean polar radius of the detected speck contour is computed from the collected polar data, and whether the difference between it and the mean polar radius of the template contour lies within a reasonable range is judged against a preset maximum error distance. If the difference is smaller than the preset distance, the current polar center of the connected domain is suitable, no adjustment is needed, and the identified characteristic mark can be output directly. If the difference is larger than the preset distance, the currently determined polar center of the connected domain is unsuitable and must be readjusted.
Step S82, redetermining the polar center of the connected domain according to the comparison result.
Specifically, if the template contour matches only part of the computed contour, the two do not match, and the polar center of the contour of the characteristic mark must be relocated according to the comparison result.
Step S83, redetermining the polar data of the contour according to the redetermined polar center.
Specifically, with the redetermined polar center as the new center, the polar data along the edge of the contour are collected again, and the redetermined polar data are stored in the linear arrays.
Step S84, differencing the redetermined polar data and the template polar data to obtain a difference value, and judging whether the difference value is smaller than the preset value; if so, the two match.
Specifically, the redetermined polar data and the template polar data are differenced to obtain a difference value T, and whether T is smaller than the preset value is judged. If so, the collected contour matches the template contour, and the identified characteristic mark is output; if not, the contour extraction is unsuitable, and the NoObject state is returned.
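A sketch of the step S84 comparison, taking T as the mean absolute difference of the polar radii; the exact difference measure is not specified in the patent, so this choice is an assumption:

#include <math.h>

// Compare an object contour (sampled rectangular coordinates around the
// redetermined pole (ocx, ocy)) against template polar radii sampled at the
// same n angles. Returns 1 (match) if the mean absolute radius difference T
// is below `preset`, else 0.
int contours_match(const double *objX, const double *objY,
                   double ocx, double ocy,
                   const double *templRadius, int n, double preset)
{
    double T = 0.0;
    for (int k = 0; k < n; k++) {
        double dx = objX[k] - ocx, dy = objY[k] - ocy;
        T += fabs(sqrt(dx*dx + dy*dy) - templRadius[k]);
    }
    T /= n;
    return T < preset;
}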
As shown in Fig. 11, in one embodiment, a vision calibration method is provided, the method comprising:
Step 1102, identifying the characteristic mark on a workpiece.
In the present embodiment, the characteristic mark on the workpiece must first be located and recognized by vision. The characteristic mark labels the position of an operation point, such as a glue-dispensing or drilling position. The mark can take several shapes: it can be circular, square, cross-shaped, and so on.
Step 1104, driving the camera by the mechanical arm to move above the characteristic mark, and recording the physical coordinates of the movement.
Specifically, a mechanical arm is an automatic device that imitates certain grasping functions of the human hand and arm in order to grasp and carry objects or operate tools according to a fixed program. The camera is driven by the mechanical arm to move above the characteristic mark, and the physical coordinates of the arm's movement during this process are recorded.
Step 1106, processing the image corresponding to the physical coordinates, and identifying the coordinates of the characteristic mark in the image.
Specifically, after the physical coordinates of the arm movement have been recorded, the image corresponding to those physical coordinates is processed, the image coordinates of the characteristic mark in the image are identified, and they are recorded and stored.
Step 1108, determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark.
Specifically, the mechanical arm is moved m times in a horizontal plane so that the mark point stays inside the camera image each time. The physical coordinates (Xw, Yw) of each movement are recorded by querying the motion axes from the host, and at the same time the image corresponding to the physical coordinates is processed to identify the coordinates (Un, Vn) of the characteristic mark in it. To obtain the mapping relation between the image coordinates and the physical coordinates of the characteristic mark, at least 4 groups of physical coordinates and corresponding image coordinates must be recorded. Formulas (3) and (4) below, which are derived from formulas (1) and (2), are used to compute the coefficients a11, a12, a21 and a22.
Xw = a11*U + a12*V + Tx    (1)
Yw = a21*U + a22*V + Ty    (2)
dXw = a11*dU + a12*dV    (3)
dYw = a21*dU + a22*dV    (4)
In theory, converting between image coordinates and physical coordinates requires a11, a12, a21, a22, Tx and Ty. In practice, however, the control system only needs the actual workpiece relative to the taught standard-sample workpiece, so a simplified calibration strategy based on relative displacement coordinates can be adopted: the camera then only needs to be moved 3 or more times, and the coefficients a11, a12, a21 and a22 are computed by generalized least squares.
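A sketch of the simplified strategy: given k displacement samples (image displacements (dU, dV) paired with physical displacements (dXw, dYw), e.g. taken relative to the first move), formulas (3)-(4) are solved for a11..a22 by least squares. The normal-equation formulation below is an assumption standing in for "generalized least squares":

#include <math.h>

// Solve dXw = a11*dU + a12*dV and dYw = a21*dU + a22*dV in the
// least-squares sense from k displacement samples (k >= 2, moves not
// collinear in the image). Returns 0 on success, -1 if degenerate.
int solve_calibration(const double *dU, const double *dV,
                      const double *dXw, const double *dYw, int k,
                      double *a11, double *a12, double *a21, double *a22)
{
    double suu = 0, suv = 0, svv = 0, sux = 0, svx = 0, suy = 0, svy = 0;
    for (int i = 0; i < k; i++) {
        suu += dU[i]*dU[i]; suv += dU[i]*dV[i]; svv += dV[i]*dV[i];
        sux += dU[i]*dXw[i]; svx += dV[i]*dXw[i];
        suy += dU[i]*dYw[i]; svy += dV[i]*dYw[i];
    }
    double det = suu*svv - suv*suv;
    if (fabs(det) < 1e-12) return -1;   // moves collinear in the image
    *a11 = (svv*sux - suv*svx) / det;
    *a12 = (suu*svx - suv*sux) / det;
    *a21 = (svv*suy - suv*svy) / det;
    *a22 = (suu*svy - suv*suy) / det;
    return 0;
}

Three camera moves give two displacement pairs relative to the first move, the minimum for a unique solution when the moves are not collinear.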
In the present embodiment, the characteristic mark on the workpiece is identified; the camera is then driven by the mechanical arm to move above the characteristic mark, the physical coordinates of the movement are recorded, the image corresponding to the physical coordinates is processed, and the coordinates of the characteristic mark in the image are identified; by recording at least 4 groups of physical coordinates and image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark is determined. This calibration method is simple, calibrates directly on the actual product, runs the marking process automatically, avoids manual intervention, and yields accurate and reliable calibration parameters.
As shown in Fig. 12, in one embodiment, the above vision calibration method further comprises:
Step 1110, computing the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm.
Specifically, after calibration is complete, placing the actual workpiece may produce the situation shown in Fig. 13, where Mark1 and Mark2 are the mark points in the ideal case and Mark1₁ and Mark2₁ are mark points 1 and 2 in the actual working state. (Xgt, Ygt) is the coordinate position of any mechanical-arm operation point (glue dispensing, drilling) on the calibration sample, and the operation-point position on the actual product is (Xgr, Ygr), which can be computed by formulas (5) and (6) below. There, (Xg', Yg') and (Xg, Yg) are the physical coordinates of the midpoints of the line connecting mark points 1 and 2 in the actual and ideal cases respectively; they can be computed from formulas (1)-(4) above, and a is the angle between the actual product and the calibration sample.
Xgr = cos(a)*(Xgt - Xg) - sin(a)*(Ygt - Yg) + (Xg' - Xg)    (5)
Ygr = sin(a)*(Xgt - Xg) + cos(a)*(Ygt - Yg) + (Yg' - Yg)    (6)
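A direct transcription of formulas (5)-(6) in C, with Xg1/Yg1 standing for Xg'/Yg':

#include <math.h>

// Map an operation point (Xgt, Ygt) taught on the calibration sample to its
// position (Xgr, Ygr) on the actual workpiece, given the ideal and actual
// mark-line midpoints (Xg, Yg) and (Xg1, Yg1) and the rotation angle a
// (in radians) between the actual product and the calibration sample.
void compensate_point(double Xgt, double Ygt,
                      double Xg, double Yg, double Xg1, double Yg1,
                      double a, double *Xgr, double *Ygr)
{
    *Xgr = cos(a)*(Xgt - Xg) - sin(a)*(Ygt - Yg) + (Xg1 - Xg);   // formula (5)
    *Ygr = sin(a)*(Xgt - Xg) + cos(a)*(Ygt - Yg) + (Yg1 - Yg);   // formula (6)
}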
Step 1112, converting the computed coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
Specifically, the change of the operation coordinates, i.e. the coordinate difference (dXg, dYg) with dXg = Xgr - Xgt and dYg = Ygr - Ygt, is transmitted to the robot control system. The robot control system adjusts the coordinates of the ideal characteristic mark to the coordinates of the actual workpiece characteristic mark according to this coordinate difference, converting the computed coordinate difference into the compensation correction value of the end effector's physical coordinates. More specifically, the correction compensation of the end effector's physical coordinates is determined from the recorded standard-sample coordinate data, the corresponding actually identified workpiece coordinate data, the difference between them, and the mapping relation between the image coordinates of the characteristic mark and the physical coordinates. Robot operations on arbitrarily placed workpieces, such as glue dispensing, drilling and screw driving, are thereby realized. On an actual robot processing device, a pre-positioning device is present after workpiece loading, with a positioning accuracy within +/-1 mm; the distortion of an industrial vision lens is about 0.1%, and after the above vision calibration process the positioning accuracy can be raised to +/-0.005-0.05 mm.
As shown in Fig. 14, in one embodiment, a robot vision positioning device is proposed, the device comprising:
Acquisition module 1402, for obtaining a target image and preprocessing the target image;
Segmentation module 1404, for performing feature segmentation on the preprocessed image according to the preset segmentation parameter;
Filtration module 1406, for filtering the image processed by the segmentation module;
Detection module 1408, for performing connected-domain detection on the image processed by the filtration module to extract the specks forming the characteristic mark;
Filtering module 1410, for filtering the specks;
Judging module 1412, for judging whether the number of specks after filtering matches the preset number and, if not, notifying the segmentation module to adjust the segmentation parameter according to the preset rule;
Identification module 1414, for identifying the speck contours if the number of specks after filtering matches the preset number;
Matching module 1416, for judging whether the identified speck contours match the preset template contour;
Output module 1418, for outputting the identified characteristic mark if the identified speck contours match the preset template contour.
As shown in Fig. 15, in one embodiment, the above device further comprises:
Read-in module 1400, for reading in a template image.
Creation module 1401, for creating template parameters from the read-in template image, the template parameters comprising the number of specks and the template contour to be extracted.
In one embodiment, the acquisition module is further configured to obtain the target image and sub-sample the target image.
As shown in Fig. 16, in one embodiment, the above device further comprises:
Extraction module 1420, for reading the target image again and extracting the contour of the target image.
Fitting module 1422, for fitting the extracted contour of the original target image according to the parameters of the speck contours identified by the identification module.
As shown in Fig. 17, in one embodiment, the identification module comprises:
Center calculation module 1414a, for determining the center of the connected domain.
Coordinate acquisition module 1414b, for using the center of the connected domain as the pole of the polar coordinate system and collecting coordinates along the edge of the connected domain at the preset angular interval.
Memory module 1414c, for storing the collected coordinate data in linear arrays ordered by counterclockwise polar angle.
As shown in Fig. 18, in one embodiment, the matching module comprises:
Radius judging module 1416a, for judging whether the difference between the mean polar radius of the detected speck contour and the mean polar radius of the template contour is smaller than the preset distance.
Center determination module 1416b, for redetermining the polar center of the connected domain according to the comparison result if the difference between the mean polar radius of the detected speck contour and the mean polar radius of the template contour is larger than the preset distance.
Contour indexing module 1416c, for redetermining the polar data of the contour according to the redetermined polar center.
Computing module 1416d, for differencing the redetermined polar data and the template polar data to obtain a difference value, judging whether the difference value is smaller than the preset value and, if so, notifying the output module to output the identified characteristic mark.
As shown in Fig. 19, in one embodiment, a vision calibration device is proposed, the device comprising:
Landmark identification module 1902, for identifying the characteristic mark on a workpiece.
Coordinate recording module 1904, for driving the camera by the mechanical arm to move above the characteristic mark and recording the physical coordinates of the movement.
Coordinate identification module 1906, for processing the image corresponding to the physical coordinates and identifying the coordinates of the characteristic mark in the image.
Relation determination module 1908, for determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark.
As shown in Fig. 20, in one embodiment, the above device further comprises:
Coordinate difference computing module 1910, for computing the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm.
Adjusting module 1912, for converting the computed coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (16)

1. A robot vision positioning method, the method comprising:
S1: obtaining a target image and preprocessing the target image;
S2: performing feature segmentation on the image processed in step S1 according to a preset segmentation parameter;
S3: filtering the image processed in step S2;
S4: performing connected-domain detection on the image processed in step S3 to extract the specks forming a characteristic mark;
S5: filtering the specks;
S6: judging whether the number of specks after filtering matches a preset number; if so, entering step S7; if not, adjusting the segmentation parameter of step S2 according to a preset rule and repeating steps S2-S6;
S7: identifying the speck contours;
S8: judging whether the identified speck contours match a preset template contour; if they match, entering step S9;
S9: outputting the identified characteristic mark.
2. The method according to claim 1, characterized in that, before step S1, the method further comprises:
S01: reading in a template image;
S02: creating template parameters from the read-in template image, the template parameters comprising the number of specks and the template contour to be extracted.
3. The method according to claim 1, characterized in that step S1 comprises obtaining the target image and sub-sampling the target image.
4. The method according to claim 3, characterized in that, after step S8, the method further comprises:
S90: reading the target image again and extracting the contour of the target image;
S91: fitting the extracted contour of the target image according to the parameters of the speck contours identified in step S7.
5. The method according to claim 1, characterized in that step S7 comprises:
S71: determining the center of the connected domain;
S72: using the center of the connected domain as the pole of a polar coordinate system and collecting coordinates along the edge of the connected domain at a preset angular interval;
S73: storing the collected coordinate data in linear arrays ordered by counterclockwise polar angle.
6. The method according to claim 1, characterized in that step S8 comprises:
S81: judging whether the difference between the mean polar radius of the detected speck contour and the mean polar radius of the template contour is smaller than a preset distance; if so, entering step S9; if not, entering step S82;
S82: redetermining the polar center of the connected domain according to the comparison result;
S83: redetermining the polar data of the contour according to the redetermined polar center;
S84: differencing the redetermined polar data and the template polar data to obtain a difference value, and judging whether the difference value is smaller than a preset value; if so, entering step S9.
7. A vision calibration method, the method comprising:
identifying a characteristic mark on a workpiece;
driving a camera by a mechanical arm to move above the characteristic mark, and recording the physical coordinates of the movement;
processing the image corresponding to the physical coordinates, and identifying the coordinates of the characteristic mark in the image;
determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark.
8. The method according to claim 7, characterized in that the method further comprises:
computing the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm;
converting the computed coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
9. A robot vision positioning device, characterized in that the device comprises:
an acquisition module, for obtaining a target image and preprocessing the target image;
a segmentation module, for performing feature segmentation on the preprocessed image according to a preset segmentation parameter;
a filtration module, for filtering the image processed by the segmentation module;
a detection module, for performing connected-domain detection on the image processed by the filtration module to extract the specks forming a characteristic mark;
a filtering module, for filtering the specks;
a judging module, for judging whether the number of specks after filtering matches a preset number and, if not, notifying the segmentation module to adjust the segmentation parameter according to a preset rule;
an identification module, for identifying the speck contours if the number of specks after filtering matches the preset number;
a matching module, for judging whether the identified speck contours match a preset template contour;
an output module, for outputting the identified characteristic mark if the identified speck contours match the preset template contour.
10. The device according to claim 9, characterized in that the device further comprises:
a read-in module, for reading in a template image;
a creation module, for creating template parameters from the read-in template image, the template parameters comprising the number of specks and the template contour to be extracted.
11. The device according to claim 9, characterized in that the acquisition module is further configured to obtain the target image and sub-sample the target image.
12. The device according to claim 9, characterized in that the device further comprises:
an extraction module, for reading the target image again and extracting the contour of the target image;
a fitting module, for fitting the extracted contour of the original target image according to the parameters of the speck contours identified by the identification module.
13. The device according to claim 9, characterized in that the identification module comprises:
a center calculation module, for determining the center of the connected domain;
a coordinate acquisition module, for using the center of the connected domain as the pole of a polar coordinate system and collecting coordinates along the edge of the connected domain at a preset angular interval;
a memory module, for storing the collected coordinate data in linear arrays ordered by counterclockwise polar angle.
14. The device according to claim 9, characterized in that the matching module comprises:
a radius judging module, for judging whether the difference between the mean polar radius of the detected speck contour and the mean polar radius of the template contour is smaller than a preset distance;
a center determination module, for redetermining the polar center of the connected domain according to the comparison result if the difference between the mean polar radius of the detected speck contour and the mean polar radius of the template contour is larger than the preset distance;
a contour indexing module, for redetermining the polar data of the contour according to the redetermined polar center;
a computing module, for differencing the redetermined polar data and the template polar data to obtain a difference value, judging whether the difference value is smaller than a preset value and, if so, notifying the output module to output the identified characteristic mark.
15. A vision calibration device, characterized in that the device comprises:
a landmark identification module, for identifying a characteristic mark on a workpiece;
a coordinate recording module, for driving a camera by a mechanical arm to move above the characteristic mark and recording the physical coordinates of the movement;
a coordinate identification module, for processing the image corresponding to the physical coordinates and identifying the coordinates of the characteristic mark in the image;
a relation determination module, for determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping relation between the image coordinates and the physical coordinates of the characteristic mark.
16. The device according to claim 15, characterized in that the device further comprises:
a coordinate difference computing module, for computing the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm;
an adjusting module, for converting the computed coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
CN201510900027.0A 2015-12-08 2015-12-08 Robot visual orientation method and device, vision calibration method and device Active CN105528789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510900027.0A CN105528789B (en) 2015-12-08 2015-12-08 Robot visual orientation method and device, vision calibration method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510900027.0A CN105528789B (en) 2015-12-08 2015-12-08 Robot visual orientation method and device, vision calibration method and device

Publications (2)

Publication Number Publication Date
CN105528789A true CN105528789A (en) 2016-04-27
CN105528789B CN105528789B (en) 2018-09-18

Family

ID=55770992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510900027.0A Active CN105528789B (en) 2015-12-08 2015-12-08 Robot visual orientation method and device, vision calibration method and device

Country Status (1)

Country Link
CN (1) CN105528789B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6137893A (en) * 1996-10-07 2000-10-24 Cognex Corporation Machine vision calibration targets and methods of determining their location and orientation in an image
CN101630409A (en) * 2009-08-17 2010-01-20 北京航空航天大学 Hand-eye vision calibration method for robot hole boring system
CN103292695A (en) * 2013-05-10 2013-09-11 河北科技大学 Monocular stereoscopic vision measuring method
CN104019745A (en) * 2014-06-18 2014-09-03 福州大学 Method for measuring size of free plane based on monocular vision indirect calibration method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
洪泉 et al.: "A new contour matching and slice alignment method based on internal image information", 《中国图象图形学报》 (Journal of Image and Graphics) *
王彦 et al.: "Research on an automatic visual positioning and recognition system for workpieces", 《计算机工程与应用》 (Computer Engineering and Applications) *
邝泳聪 et al.: "A fast shape matching algorithm based on contour vectorization", 《计算机应用研究》 (Application Research of Computers) *
陈思伟: "Working-plane positioning error and correction of a vision-guided grasping manipulator", 《中国优秀硕士学位论文全文数据库信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665350A (en) * 2016-07-29 2018-02-06 广州康昕瑞基因健康科技有限公司 Image-recognizing method and system and autofocus control method and system
CN107582001A (en) * 2017-10-20 2018-01-16 珠海格力电器股份有限公司 Dish-washing machine and its control method, device and system
CN107582001B (en) * 2017-10-20 2020-08-11 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN108460388A (en) * 2018-01-18 2018-08-28 深圳市易成自动驾驶技术有限公司 Detection method, device and the computer readable storage medium of witness marker
CN108453356A (en) * 2018-03-29 2018-08-28 江苏新时代造船有限公司 A kind of complexity Zhong Zuli robots compression arc MAG welding methods
CN108480826A (en) * 2018-03-29 2018-09-04 江苏新时代造船有限公司 A kind of complexity Zhong Zuli robots compression arc MAG welders
CN110163921A (en) * 2019-02-15 2019-08-23 苏州巨能图像检测技术有限公司 Automatic calibration method based on laminating machine vision system
CN110163921B (en) * 2019-02-15 2023-11-14 苏州巨能图像检测技术有限公司 Automatic calibration method based on lamination machine vision system
CN110008955B (en) * 2019-04-01 2020-12-15 中国计量大学 Method for testing character imprinting quality of surface of automobile brake pad
CN110008955A (en) * 2019-04-01 2019-07-12 中国计量大学 A kind of automotive brake pads face character coining quality inspection method
CN110378970A (en) * 2019-07-08 2019-10-25 武汉理工大学 A kind of monocular vision deviation detecting method and device for AGV
CN110378970B (en) * 2019-07-08 2023-03-10 武汉理工大学 Monocular vision deviation detection method and device for AGV
CN110773842A (en) * 2019-10-21 2020-02-11 大族激光科技产业集团股份有限公司 Welding positioning method and device
CN111091086B (en) * 2019-12-11 2023-04-25 安徽理工大学 Method for improving identification rate of single characteristic information of logistics surface by utilizing machine vision technology
CN111091086A (en) * 2019-12-11 2020-05-01 安徽理工大学 Method for improving single-feature information recognition rate of logistics surface by using machine vision technology
CN111161232B (en) * 2019-12-24 2023-11-14 贵州航天计量测试技术研究所 Component surface positioning method based on image processing
CN111161232A (en) * 2019-12-24 2020-05-15 贵州航天计量测试技术研究所 Component surface positioning method based on image processing
CN110755142B (en) * 2019-12-30 2020-03-17 成都真实维度科技有限公司 Control system and method for realizing space multi-point positioning by adopting three-dimensional laser positioning
CN110755142A (en) * 2019-12-30 2020-02-07 成都真实维度科技有限公司 Control system and method for realizing space multi-point positioning by adopting three-dimensional laser positioning
CN111390882B (en) * 2020-06-02 2020-08-18 季华实验室 Robot teaching control method, device and system and electronic equipment
CN111390882A (en) * 2020-06-02 2020-07-10 季华实验室 Robot teaching control method, device and system and electronic equipment
CN111721507A (en) * 2020-06-30 2020-09-29 东莞市聚明电子科技有限公司 Intelligent detection method and device for keyboard backlight module based on polar coordinate identification

Also Published As

Publication number Publication date
CN105528789B (en) 2018-09-18

Similar Documents

Publication Publication Date Title
CN105528789A (en) Robot vision positioning method and device, and visual calibration method and device
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN109211207B (en) Screw identification and positioning device based on machine vision
CN105844622A (en) V-shaped groove welding seam detection method based on laser visual sense
CN106774296A (en) A kind of disorder detection method based on laser radar and ccd video camera information fusion
CN113052903B (en) Vision and radar fusion positioning method for mobile robot
CN102622614B (en) Knife switch closing reliability judging method based on distance between knife switch arm feature point and fixing end
CN104112269A (en) Solar cell laser-marking parameter detection method based on machine vision and system thereof
CN110189375B (en) Image target identification method based on monocular vision measurement
CN105865329A (en) Vision-based acquisition system for end surface center coordinates of bundles of round steel and acquisition method thereof
CN105631893A (en) Method and device for detecting whether capacitor is correctly mounted through photographing
CN103824298A (en) Intelligent body visual and three-dimensional positioning method based on double cameras and intelligent body visual and three-dimensional positioning device based on double cameras
CN109584258A (en) Meadow Boundary Recognition method and the intelligent mowing-apparatus for applying it
CN107527368A (en) Three-dimensional attitude localization method and device based on Quick Response Code
CN108074265A (en) A kind of tennis alignment system, the method and device of view-based access control model identification
CN111968132A (en) Panoramic vision-based relative pose calculation method for wireless charging alignment
CN111784655A (en) Underwater robot recovery positioning method
CN111862043A (en) Mushroom detection method based on laser and machine vision
CN109472826A (en) Localization method and device based on binocular vision
WO2022036478A1 (en) Machine vision-based augmented reality blind area assembly guidance method
CN114842335A (en) Slotting target identification method and system for construction robot
CN105741268B (en) A kind of vision positioning method based on colored segment and its topological relation
CN111553891B (en) Handheld object existence detection method
Roy et al. Robotic surveying of apple orchards

Legal Events

C06: Publication
PB01: Publication
C10: Entry into substantive examination
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
    Effective date of registration: 2018-07-06
    Address after: 518000 Guangdong Shenzhen Baoan District Xixiang Street Sanwei community science and technology park business building 12 floor A1201-A1202
    Applicant after: SHENZHEN HENGKETONG ROBOT CO., LTD.
    Address before: 518000 A1203-A1204 12, Suo business building, 7 Air Road, Baoan District Xixiang street, Shenzhen, Guangdong.
    Applicant before: SHENZHEN HENGKETONG MULTIDIMENSIONAL VISION CO., LTD.
GR01: Patent grant