WO2017054651A1 - Method and device for determining fusion coefficient - Google Patents

Method and device for determining fusion coefficient Download PDF

Info

Publication number
WO2017054651A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
area
target
points
fusion
Prior art date
Application number
PCT/CN2016/099290
Other languages
French (fr)
Chinese (zh)
Inventor
陈岩
黄英
邹建法
Original Assignee
阿里巴巴集团控股有限公司
陈岩
黄英
邹建法
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司, 陈岩, 黄英, 邹建法
Publication of WO2017054651A1 publication Critical patent/WO2017054651A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4023Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to the field of computer application technologies, and in particular, to a method and apparatus for determining a fusion coefficient.
  • the main approach is to locate the key points of facial organs in the static image, determine the regions to be beautified according to the positioning results, and perform background blending using image fusion techniques and a color probability model.
  • during background fusion, the fusion coefficient of the unknown region must be determined.
  • the unknown region is the transition region between the foreground region and the background region; it is a linear fusion of the foreground region and the background region, satisfying the relationship U = F×alpha + (1-alpha)×B, where:
  • U represents the matrix of pixel values of the unknown region
  • F represents the matrix of pixel values of the foreground region
  • B represents the matrix of pixel values of the background region
  • alpha is the fusion coefficient
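The relation above can be rearranged and solved per pixel for alpha. A minimal illustrative sketch, not part of the patent text; the function name and the clamping to [0, 1] are assumptions:

```python
def solve_alpha(u, f, b):
    """Solve U = F*alpha + (1 - alpha)*B for alpha, given scalar pixel values."""
    if f == b:
        raise ValueError("foreground and background values coincide; alpha is undefined")
    alpha = (u - b) / (f - b)
    # A fusion coefficient is a blending weight, so clamp it to [0, 1].
    return max(0.0, min(1.0, alpha))

print(solve_alpha(150.0, 200.0, 100.0))  # 0.5: U lies midway between B and F
```

Note that the prior art described below instead assembles all unknown pixels into one joint linear system; the per-pixel form here is only the scalar rearrangement of the stated relation.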
  • existing approaches determine the fusion coefficients by assembling the pixel values of the entire unknown region into one very large system of linear equations, whose solution requires inverting a very large matrix;
  • the algorithm is complex, time-consuming, and has poor real-time performance.
  • the present invention provides a method and apparatus for determining a fusion coefficient, so as to reduce algorithm complexity and improve real-time performance.
  • the present invention provides a method of determining a fusion coefficient, the method comprising:
  • performing feature point positioning on a target in the image, the feature points including contour points
  • dividing the target into equal proportions by using the contour points to form N proportional lines, where N is a preset positive integer
  • performing interpolation processing on the other pixel points of the unknown region by using the fusion coefficients of the respective proportional lines of the unknown region, to obtain the fusion coefficient of each pixel point in the unknown region.
  • dividing the target into equal proportions by using the contour points to form N proportional lines includes:
  • performing equal division on the connecting line from the center point of the target to each contour point, and forming the proportional lines from the corresponding division points on the respective connecting lines.
  • the center point of the target is the center point of the area covered by the target
  • the center point of the target is an intermediate point of the inner contour point and its corresponding outer contour point.
  • the method further includes:
  • a foreground area, a background area, and an unknown area of the target are determined.
  • a region formed by the points whose ratio of distance from the inner contour to the distance between the inner and outer contours lies between the first threshold and the second threshold is used as the foreground region;
  • the first threshold is smaller than the second threshold, and the first threshold and the second threshold are preset positive numbers less than one;
  • within the area covered by the target, the area other than the foreground area is regarded as the unknown area;
  • the area outside the area covered by the target is used as the background area.
  • a region formed by the points whose ratio of distance from the center point of the target to the distance between the center point and the outer contour is within a third threshold is used as the foreground region, the third threshold being a preset positive number less than one;
  • within the area covered by the target, the area other than the foreground area is regarded as the unknown area;
  • the area outside the area covered by the target is used as the background area.
  • determining the fusion coefficient of each proportional line of the unknown region in the target respectively includes:
  • determining the fusion coefficient of the proportional line by using the fusion coefficients of the points on the proportional line.
  • determining the fusion coefficient of the proportional line by using the fusion coefficients of the points on the proportional line includes:
  • using the fusion coefficient with the most occurrences as the fusion coefficient of the proportional line.
  • determining the fusion coefficient of each targeted point by using the pixel value of the point and the pixel values of the sampled foreground region points and background region points includes:
  • forming combinations, each combination including one foreground area point and one background area point, the number of combinations being less than a preset positive integer;
  • calculating the fusion coefficient of the point separately using each combination.
  • the fusion coefficient of the point is calculated in the following manner: alpha = (U-B)/(F-B), where:
  • alpha is the fusion coefficient of the point
  • U is the pixel value of the point
  • F and B are the pixel value of the foreground region point and the pixel value of the background region point, respectively.
  • the number of combinations is greater than 1 and less than 10.
  • the present invention also provides an apparatus for determining a fusion coefficient, the apparatus comprising:
  • a positioning unit configured to perform feature point positioning on an object in the image, where the feature point includes a contour point
  • a dividing unit configured to divide the target into equal proportions by using the contour points to form N proportional lines, where N is a preset positive integer
  • a first determining unit configured to respectively determine the fusion coefficient of each proportional line of the unknown region in the target
  • the second determining unit is configured to perform interpolation processing on the other pixels of the unknown region by using the fusion coefficients of the respective isometric lines of the unknown region to obtain a fusion coefficient of each pixel in the unknown region.
  • the splitting unit specifically executes:
  • performing equal division on the connecting line from the center point of the target to each contour point, and forming the proportional lines from the corresponding division points on the respective connecting lines.
  • the center point of the target is the center point of the area covered by the target
  • the center point of the target is an intermediate point of the inner contour point and its corresponding outer contour point.
  • the apparatus further includes: a region dividing unit configured to determine a foreground region, a background region, and an unknown region of the target.
  • the area dividing unit specifically performs:
  • if the target has an inner contour and an outer contour, a region formed by the points whose ratio of distance from the inner contour to the distance between the inner and outer contours lies between the first threshold and the second threshold is used as the foreground region; the first threshold is smaller than the second threshold, and the first threshold and the second threshold are preset positive numbers less than one;
  • within the area covered by the target, the area other than the foreground area is regarded as the unknown area;
  • the area outside the area covered by the target is used as the background area.
  • the area dividing unit specifically performs:
  • if the target has only an outer contour,
  • a region formed by the points whose ratio of distance from the center point of the target to the distance between the center point and the outer contour is within a third threshold is used as the foreground region,
  • the third threshold being a preset positive number less than one
  • within the area covered by the target, the area other than the foreground area is regarded as the unknown area;
  • the area outside the area covered by the target is used as the background area.
  • the first determining unit specifically executes:
  • determining the fusion coefficient of each proportional line by using the fusion coefficients of the points on that proportional line.
  • when determining the fusion coefficient of a proportional line by using the fusion coefficients of the points on the proportional line, the first determining unit performs:
  • using the fusion coefficient with the most occurrences as the fusion coefficient of the proportional line.
  • when determining the fusion coefficient of a targeted point, the first determining unit specifically performs:
  • forming combinations, each combination including one foreground area point and one background area point, the number of combinations being less than a preset positive integer;
  • calculating the fusion coefficient of the point separately using each combination.
  • the first determining unit calculates the fusion coefficient of the point in the following manner: alpha = (U-B)/(F-B), where:
  • alpha is the fusion coefficient of the point
  • U is the pixel value of the point
  • F and B are the pixel value of the foreground region point and the pixel value of the background region point, respectively.
  • the number of combinations is greater than 1 and less than 10.
  • the present invention determines the fusion coefficient of each proportional line by dividing the target into equal proportions, and uses the fusion coefficients of the proportional lines of the unknown region to interpolate the other pixel points of the unknown region, thereby obtaining the fusion coefficient of each pixel point in the unknown region. In other words, points are first replaced by lines, and interpolation is then used, avoiding the inversion of a very large matrix. This reduces complexity in both problem scale and algorithm, takes less time, and improves real-time performance.
  • FIG. 1 is a flowchart of a main method according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of a detailed method according to an embodiment of the present invention;
  • FIG. 3a and FIG. 3b are schematic diagrams of the equal-division process according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of area division according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of pixel value collection along a normal direction according to an embodiment of the present invention.
  • FIG. 6 is a diagram showing the effect of each pixel point fusion coefficient of the lip according to an embodiment of the present invention.
  • FIG. 7 is a structural diagram of an apparatus for determining a fusion coefficient according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a main method according to an embodiment of the present invention. As shown in FIG. 1 , the method mainly includes the following steps:
  • feature point positioning is performed on a target in the image.
  • the feature points involved in this step mainly include contour points, which may include only outer contour points (for example, when the target is an eye) or both inner and outer contour points (for example, when the target is the lips, i.e., the inner and outer boundaries of the lips). Depending on the specific target type, some special feature points may also be included; for example, if the target is the lips, the feature points may also include the corners of the mouth. This step actually determines the position information of these feature points, which can be expressed as coordinate information in the image.
  • the target is equally divided by the contour points to form N isometric lines, and N is a preset positive integer.
  • specifically, equal division is performed on the connecting line from the center point of the target to each contour point, obtaining N division points on each connecting line, and the proportional lines are formed by connecting the corresponding division points on the respective connecting lines.
  • the present invention mainly uses the proportional lines that fall within the unknown region.
  • in one case, the center point is the center point of the area covered by the target, which is easy to understand: the connecting line from the target center point to each outer contour point is divided equally, for example into 8 equal parts, yielding 7 division points on each connecting line; the division points at the same position on the respective connecting lines are then connected to form a proportional line, and so on.
  • the center point is an intermediate point of the inner contour point and its corresponding outer contour point.
  • in this case, the numbers of inner contour points and outer contour points obtained by positioning are the same, and they are stored in arrays; assuming there are 10 inner contour points and 10 outer contour points, the middle point of the inner contour point and the outer contour point with sequence number 1 is taken, then the middle point of the inner contour point and the outer contour point with sequence number 2, and so on.
  • equal division is performed on the connecting line from each middle point to its corresponding inner contour point, and the corresponding division points on the respective connecting lines form proportional lines; likewise, equal division is performed on the connecting line from each middle point to its corresponding outer contour point, and the corresponding division points form proportional lines. This case is described in detail in the subsequent embodiments.
  • the fusion coefficients are determined for each of the isometric lines of the unknown region in the target.
  • in the present invention, the fusion coefficient of every pixel on a given proportional line is taken to be equal, and the other pixel points of the unknown region can be interpolated from the fusion coefficients of the proportional lines, thereby avoiding determining the fusion coefficient point by point for all pixels, which greatly reduces the amount of calculation and improves efficiency. The manner of determining the fusion coefficient of each proportional line is described in detail in the subsequent embodiments.
  • the other pixel points of the unknown region are interpolated by using the fusion coefficients of the respective proportional lines of the unknown region, and the fusion coefficient of each pixel point in the unknown region is obtained.
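A hedged sketch of this interpolation step, assuming each proportional line's coefficient is attached to a ratio position t in [0, 1] across the unknown band; the names and the one-dimensional linear scheme are illustrative assumptions, not from the source:

```python
def interp_alpha(t, line_ratios, line_alphas):
    """t: the pixel's ratio position across the unknown band, in [0, 1].

    line_ratios: sorted ratio positions of the proportional lines.
    line_alphas: the fusion coefficient determined for each line.
    """
    if t <= line_ratios[0]:
        return line_alphas[0]
    if t >= line_ratios[-1]:
        return line_alphas[-1]
    for i in range(len(line_ratios) - 1):
        t0, t1 = line_ratios[i], line_ratios[i + 1]
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            # Linear blend of the two neighboring lines' coefficients.
            return (1 - w) * line_alphas[i] + w * line_alphas[i + 1]

print(interp_alpha(0.5, [0.25, 0.75], [0.2, 0.8]))  # midway between the two lines
```

Any interpolation scheme over the line coefficients would fit the text; the linear form is only the simplest choice.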
  • FIG. 2 is a flowchart of a detailed method according to an embodiment of the present invention. As shown in FIG. 2, the process may specifically include the following steps:
  • feature points are positioned on the lips in the image.
  • the feature points may include: an inner contour and an outer contour of the upper lip, and an inner contour and an outer contour of the lower lip, and may further include a corner of the mouth.
  • the feature point positioning described above can be employed.
  • in the closed state of the lips, it is also possible to position only the outer contours of the upper and lower lips, optionally also positioning the corners of the mouth.
  • the method for locating the feature points is not limited in the present invention; any feature point positioning method, such as positioning based on an SDM (Supervised Descent Method) model or id-exp model positioning, may be adopted, finally yielding the location information of the above feature points.
  • equal division is performed on the connecting line from the middle point of each pair of inner and outer contour points to the outer contour point, and proportional lines are formed from the corresponding division points on the respective connecting lines; likewise, equal division is performed on the connecting line from the middle point to the inner contour point, and proportional lines are formed from the corresponding division points on the respective connecting lines.
  • taking a5 and b5 as an example: a5 and b5 are corresponding inner and outer contour points with middle point o5; the line connecting o5 and a5 is divided equally, say into 3 segments, giving 2 division points, and the line connecting o5 and b5 is likewise divided into 3 segments with 2 division points.
  • similarly, o6 is the middle point of a6 and b6, with 2 division points between o6 and a6 and 2 division points between o6 and b6.
  • the division points on the line connecting a5 and o5 are connected one by one to the division points on the line connecting a6 and o6; as shown in FIG. 3a, these connecting lines are the proportional lines. All the inner and outer contour points are processed similarly, and the resulting proportional lines can be as shown in FIG. 3b.
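The division described above can be sketched as follows; coordinates and names are illustrative, since the text only requires equal division of the connecting segments:

```python
def split_points(o, a, k):
    """Split the segment from point o to point a into k equal parts.

    Returns the k-1 interior division points. Connecting corresponding
    division points across neighboring contour points yields the
    proportional (equal-division) lines.
    """
    (ox, oy), (ax, ay) = o, a
    return [(ox + (ax - ox) * i / k, oy + (ay - oy) * i / k)
            for i in range(1, k)]

# Splitting o5=(0,0) to a5=(3,0) into 3 segments gives 2 division points.
print(split_points((0.0, 0.0), (3.0, 0.0), 3))  # [(1.0, 0.0), (2.0, 0.0)]
```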
  • the foreground, background, and unknown regions of the lips in the image are determined.
  • specifically, a region formed by the points whose ratio of distance from the inner contour to the distance between the inner and outer contours lies between the first threshold and the second threshold can be used as the foreground region; that is, the middle portion of the area covered by the target serves as the foreground region.
  • the first threshold is smaller than the second threshold, and both are preset positive numbers less than 1, usually empirical values, for example 0.3 for the first threshold and 0.6 for the second threshold.
  • within the area covered by the target, areas other than the foreground area are regarded as unknown areas.
  • the area outside the area covered by the target is used as the background area.
  • if the target has only an outer contour,
  • a region composed of the points whose ratio of distance from the center point of the target to the distance between the center point and the outer contour is within a third threshold is used as the foreground region, where the third threshold is a preset positive number less than one, usually an empirical value,
  • for example, 0.6.
  • within the area covered by the target, areas other than the foreground area are regarded as unknown areas.
  • the area outside the area covered by the target is used as the background area.
  • as shown in FIG. 4, the ratio of the distance from the inner contour to the inner-outer contour distance is 0.3 for the points on line 2 and 0.6 for the points on line 1; the area between line 1 and line 2 is the foreground area, the area between the inner contour and line 2 and the area between the outer contour and line 1 are unknown areas, and the other areas are background areas.
  • the range of the background area may be further limited, for example to the points whose ratio of distance from the outer contour to the inner-outer contour distance is smaller than a fourth threshold, the fourth threshold being a preset value between 0 and 1, for example 0.1; if that ratio is 0.1 on line 3, the area between line 3 and the outer contour is the background area.
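As a hedged sketch of this region split, using the example thresholds 0.3 and 0.6 from the text: here `r` stands for a point's ratio of distance from the inner contour to the inner-to-outer contour distance, and `None` marks points outside the target (this encoding is an assumption, not from the source):

```python
def classify(r, t1=0.3, t2=0.6):
    """Assign a point to the foreground, unknown, or background region."""
    if r is None:          # outside the area covered by the target
        return "background"
    if t1 <= r <= t2:
        return "foreground"
    return "unknown"       # transition band on either side of the foreground

print(classify(0.45))  # foreground
print(classify(0.1))   # unknown
print(classify(None))  # background
```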
  • the present invention does not limit the order of the above steps 202 and 203, and may be performed sequentially in any order, or may be performed simultaneously.
  • steps 2041 to 2043 are performed for respective isometric lines of the unknown region, respectively.
  • the following steps 2041 to 2043 are actually the process of determining the fusion coefficients of the isometric lines for each isometric line.
  • pixel values of the foreground area point and the background area point are sampled along the normal direction for the points on the equal-scale line.
  • all the pixel points on the proportional line may be used, or only some of them may be selected, for example at intervals along the proportional line.
  • the fusion coefficient of the point is determined using the pixel value of the point and the pixel values of the sampled foreground area and background area points.
  • specifically, combinations can be formed from the sampled points, each combination including one foreground area point and one background area point; substituting each combination into the following formula yields one fusion coefficient: alpha = (U-B)/(F-B)
  • alpha is the fusion coefficient of the point
  • U is the pixel value of the point
  • F and B are the pixel value of the foreground area point and the pixel value of the background area point, respectively.
  • the number of combinations obtained by sampling can be controlled to be between 1 and 10.
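A minimal sketch of forming the combinations and computing one candidate coefficient per pair, capping the pair count as suggested; pairing by Cartesian product is an assumption, since the text only requires one foreground and one background point per combination:

```python
import itertools

def candidate_alphas(u, fg_samples, bg_samples, max_pairs=10):
    """One candidate coefficient per (foreground, background) sample pair."""
    alphas = []
    for f, b in itertools.islice(itertools.product(fg_samples, bg_samples), max_pairs):
        if f != b:  # skip degenerate pairs where alpha is undefined
            alphas.append(max(0.0, min(1.0, (u - b) / (f - b))))
    return alphas

print(candidate_alphas(150.0, [200.0, 180.0], [100.0]))  # [0.5, 0.625]
```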
  • as shown in FIG. 5, proportional line n is one of the proportional lines in the unknown region, and pn1 is a point on that proportional line at which foreground area points and background area points are collected along the normal direction.
  • the fusion coefficient of the proportional line is then determined using the fusion coefficients of the points on the proportional line.
  • specifically, the number of occurrences of each fusion coefficient among the fusion coefficients of the points on the proportional line may be counted, for example with a histogram, and the fusion coefficient with the most occurrences is used as the fusion coefficient of the proportional line.
  • alternatively, the fusion coefficient of the proportional line may be determined using the mean, the median, or the like of the fusion coefficients of the points on the proportional line.
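A hedged sketch of picking the line's coefficient as the histogram mode; the bin width of 0.05 is an assumed choice, and, as noted above, the mean or median could be used instead:

```python
from collections import Counter

def line_alpha(point_alphas, bin_width=0.05):
    """Most frequent (quantized) coefficient among the points on one line."""
    bins = Counter(round(a / bin_width) for a in point_alphas)
    best_bin, _ = bins.most_common(1)[0]
    return best_bin * bin_width

print(line_alpha([0.48, 0.52, 0.50, 0.90]))  # 0.5: three of four points fall in that bin
```

Quantizing before counting makes the mode robust to small per-point noise, which raw floating-point counting would not be.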
  • the other pixel points of the unknown region are interpolated by using the fusion coefficients of the respective proportional lines of the unknown region, and the fusion coefficient of each pixel point in the unknown region is obtained.
  • the fusion coefficients of the lip pixels can be as shown in FIG. 6, where a brighter (whiter) color indicates a larger fusion coefficient.
  • the fusion coefficients can be used to fuse a color model with the current color of the target's unknown region in the image, obtaining the target after color processing. In this embodiment, taking the lips as an example: if a beauty application colors the lips, the color model to be loaded is blended with the current color of the unknown region of the lips in the image, giving the color of the unknown portion of the lips after beautification.
  • the formula used can be: Pho = Model×alpha + (1-alpha)×Cur, where:
  • Pho is the pixel value after color processing
  • Model is the pixel value of the color model to be loaded
  • Cur is the current pixel value in the image.
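A minimal sketch of this blending step; the linear form Pho = Model×alpha + (1-alpha)×Cur is assumed by analogy with the fusion relation U = F×alpha + (1-alpha)×B:

```python
def colorize(model, cur, alpha):
    """Blend the color model with the current pixel, weighted by the fusion coefficient."""
    return model * alpha + (1 - alpha) * cur

# A lip pixel with alpha = 0.5 lands halfway between its current color and the model.
print(colorize(220.0, 120.0, 0.5))  # 170.0
```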
  • FIG. 7 is a structural diagram of an apparatus for determining a fusion coefficient according to an embodiment of the present invention.
  • the apparatus may be used in an application for color processing an image.
  • the apparatus may include: a positioning unit 01, a splitting unit 02, a first determining unit 03, and a second determining unit 04, and may further include a region dividing unit 05.
  • the main functions of each component are as follows:
  • the positioning unit 01 is responsible for performing feature point positioning on the target in the image, where the feature points mainly include contour points, which may include only outer contour points (for example, when the target is an eye) or both inner and outer contour points (for example, when the target is the lips, i.e., the inner and outer boundaries of the lips). Depending on the specific target type, some special feature points may also be included; for example, if the target is the lips, the feature points may also include the corners of the mouth.
  • the splitting unit 02 is responsible for using the contour points to divide the target into equal proportions to form N proportional lines, and N is a preset positive integer. Specifically, the proportional split can be performed on the connecting line from the center point of the target to the contour point, and N dividing points are obtained on each connecting line; the equal-scale lines are formed by the equal-divided points on the respective connecting lines.
  • in one case, the center point of the target is the center point of the area covered by the target; that is, the connecting line from the target center point to each outer contour point is divided equally, for example into 8 equal parts, yielding 7 division points on each connecting line, and the division points at the same position on the respective connecting lines are connected to form a proportional line, and so on.
  • the center point of the target is the intermediate point of the inner contour point and its corresponding outer contour point.
  • in this case, the numbers of inner contour points and outer contour points obtained by positioning are the same, and they are stored in arrays; assuming there are 10 inner contour points and 10 outer contour points, the middle point of the inner contour point and the outer contour point with sequence number 1 is taken, then the middle point of the inner contour point and the outer contour point with sequence number 2, and so on.
  • equal division is performed on the connecting line from each middle point to its corresponding inner contour point, and the corresponding division points on the respective connecting lines form proportional lines; likewise, equal division is performed on the connecting line from each middle point to its corresponding outer contour point, and the corresponding division points form proportional lines.
  • reference may be made to FIG. 3a and FIG. 3b in the above method embodiment.
  • the area dividing unit 05 is responsible for determining the foreground area, the background area, and the unknown area of the target. Specifically, if the target has inner and outer contours, the area dividing unit 05 takes as the foreground region the region composed of the points whose ratio of distance from the inner contour to the distance between the inner and outer contours lies between the first threshold and the second threshold; the first threshold is smaller than the second threshold, and both are preset positive numbers less than one, generally empirical values, for example 0.3 for the first threshold and 0.6 for the second. Among the areas covered by the target, areas other than the foreground area are regarded as unknown areas. The area outside the area covered by the target is used as the background area.
  • if the target has only an outer contour, the region dividing unit 05 takes as the foreground region the region composed of the points whose ratio of distance from the center point of the target to the distance between the center point and the outer contour is within a third threshold,
  • the third threshold being a preset positive number less than 1, for example 0.6.
  • within the area covered by the target, areas other than the foreground area are regarded as unknown areas.
  • the area outside the area covered by the target is used as the background area.
  • the first determining unit 03 is responsible for respectively determining the fusion coefficient of each proportional line of the unknown region in the target.
  • specifically, for the points on a proportional line, the first determining unit 03 may first sample the pixel values of foreground area points and background area points along the normal direction. All the pixel points on the proportional line may be used, or only some of them, for example selected at intervals along the proportional line.
  • the first determining unit 03 then determines the fusion coefficient of each targeted point by using the pixel value of the point and the pixel values of the sampled foreground area points and background area points.
  • specifically, combinations can be formed from the sampled points, each combination including one foreground area point and one background area point; substituting each combination into the following formula yields one fusion coefficient: alpha = (U-B)/(F-B)
  • alpha is the fusion coefficient of the point
  • U is the pixel value of the point
  • F and B are the pixel value of the foreground area point and the pixel value of the background area point, respectively.
  • the number of combinations obtained by sampling can be controlled to be between 1 and 10.
  • the first determining unit 03 determines the fusion coefficient of the proportional line by using the fusion coefficients of the points on the equal-scale line.
  • specifically, the number of occurrences of each fusion coefficient among the fusion coefficients of the points on the proportional line may be counted, for example with a histogram, and the fusion coefficient with the most occurrences is used as the fusion coefficient of the proportional line.
  • alternatively, the fusion coefficient of the proportional line may be determined using the mean, the median, or the like of the fusion coefficients of the points on the proportional line.
  • the second determining unit 04 is responsible for performing interpolation processing on the other pixels of the unknown region by using the fusion coefficients of the respective isometric lines of the unknown region, and obtaining the fusion coefficients of the pixels in the unknown region.
  • subsequently, a color processing application can use the fusion coefficients to fuse a color model with the current color of the target's unknown region in the image, obtaining the target after color processing.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division, and the actual implementation may have another division manner.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

A method and device for determining a fusion coefficient. The method comprises: positioning a feature point for a target in an image (101), wherein the feature point comprises a contour point; dividing the target isometrically by utilizing the contour point to form N isometric lines, wherein N is a pre-set positive integer (102); determining a fusion coefficient of each isometric line of an unknown region in the target respectively (103); and performing interpolation processing on each of the other pixel points of the unknown region by utilizing the fusion coefficient of each isometric line of the unknown region to obtain a fusion coefficient of each pixel point in the unknown region (104). The method can reduce the algorithm complexity and improve the real-time performance.

Description

Method and device for determining a fusion coefficient

[Technical Field]

The present invention relates to the field of computer application technologies, and in particular to a method and apparatus for determining a fusion coefficient.

[Background Art]
With the growing popularity of smart terminals, demand for image processing on smart terminals keeps rising, and beautification apps of all kinds are widely favored. Existing apps of this kind, however, perform beautification on still images: key points of the organs in the still image are located, the regions to be beautified are determined from the positioning results, and background fusion is performed using image fusion techniques and a color probability model. During background fusion, a fusion coefficient must be determined for the unknown region. The unknown region is the transition region between the foreground region and the background region; it is a linear fusion of the foreground and background regions, satisfying the relationship expressed by the following formula:

U = F × alpha + (1 - alpha) × B

where U is the pixel-value matrix of the unknown region, F is the pixel-value matrix of the foreground region, B is the pixel-value matrix of the background region, and alpha is the fusion coefficient.
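The linear fusion relation above can be sketched per pixel as follows. This is an illustration only, using scalar (grayscale) values for simplicity; in the document U, F, and B are pixel-value matrices, and the function name is hypothetical.

```python
def fuse(foreground, background, alpha):
    """Blend a foreground and a background pixel value with coefficient alpha,
    per the relation U = F * alpha + (1 - alpha) * B."""
    return foreground * alpha + (1.0 - alpha) * background

# alpha = 1 reproduces the foreground, alpha = 0 the background;
# intermediate values give the transition pixels of the unknown region.
print(fuse(200.0, 100.0, 1.0))   # 200.0
print(fuse(200.0, 100.0, 0.0))   # 100.0
print(fuse(200.0, 100.0, 0.25))  # 125.0
```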
Existing approaches determine the fusion coefficients by assembling the pixel values of the entire unknown region into one very large system of linear equations, which requires inverting a very large matrix; the algorithm is complex, time-consuming, and performs poorly in real time.
[Summary of the Invention]

In view of this, the present invention provides a method and apparatus for determining a fusion coefficient, so as to reduce algorithm complexity and improve real-time performance.

The specific technical solutions are as follows.

The present invention provides a method for determining a fusion coefficient, the method comprising:

locating feature points of a target in an image, the feature points including contour points;

dividing the target proportionally using the contour points to form N proportional lines, where N is a preset positive integer;

determining a fusion coefficient for each proportional line in the unknown region of the target; and

interpolating the other pixels of the unknown region using the fusion coefficients of the proportional lines of the unknown region to obtain a fusion coefficient for each pixel in the unknown region.
According to a preferred embodiment of the present invention, dividing the target proportionally using the contour points to form N proportional lines comprises:

performing proportional division on the lines connecting the center point of the target to the contour points, obtaining N division points on each connecting line; and

forming the proportional lines from the corresponding division points on the connecting lines.
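As an illustration only (function name and coordinate convention are hypothetical, not part of the claims), the division points on a single center-to-contour connecting line can be computed like this; joining same-index points across the connecting lines would then form the proportional lines.

```python
def division_points(center, contour_point, n):
    """Return n equally spaced division points strictly between the two points,
    i.e. the points that split the segment into n + 1 equal parts."""
    cx, cy = center
    px, py = contour_point
    points = []
    for i in range(1, n + 1):
        t = i / (n + 1)  # fraction of the way from the center to the contour
        points.append((cx + (px - cx) * t, cy + (py - cy) * t))
    return points

# Two division points one third and two thirds of the way along the segment.
print(division_points((0.0, 0.0), (3.0, 0.0), 2))
```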
According to a preferred embodiment of the present invention, if the contour points include only outer contour points, the center point of the target is the center point of the area covered by the target;

if the contour points include inner contour points and outer contour points, the center point of the target is the midpoint between each inner contour point and its corresponding outer contour point.
According to a preferred embodiment of the present invention, the method further comprises:

determining a foreground region, a background region, and an unknown region of the target.

According to a preferred embodiment of the present invention, if the target has inner and outer contours, the region formed by the points for which the ratio of the distance from the inner contour to the distance between the inner and outer contours lies between a first threshold and a second threshold is taken as the foreground region, where the first threshold is smaller than the second threshold, and both thresholds are preset positive numbers less than 1;

within the area covered by the target, the area other than the foreground region is taken as the unknown region; and

the area outside the area covered by the target is taken as the background region.

According to a preferred embodiment of the present invention, if the target has only an outer contour, the region formed by the points for which the ratio of the distance from the center point of the target to the distance between the center point and the outer contour is within a third threshold is taken as the foreground region, where the third threshold is a preset positive number less than 1;

within the area covered by the target, the area other than the foreground region is taken as the unknown region; and

the area outside the area covered by the target is taken as the background region.
According to a preferred embodiment of the present invention, when determining a fusion coefficient for each proportional line in the unknown region of the target, the following is performed for each proportional line of the unknown region:

for points on the proportional line, sampling pixel values of foreground region points and background region points along the normal direction;

determining a fusion coefficient for each such point using the pixel value of that point and the pixel values of the sampled foreground region points and background region points; and

determining the fusion coefficient of the proportional line from the fusion coefficients of the points on the line.

According to a preferred embodiment of the present invention, determining the fusion coefficient of the proportional line from the fusion coefficients of the points on the line comprises:

taking, among the fusion coefficients of the points on the proportional line, the fusion coefficient that occurs most often as the fusion coefficient of the line.

According to a preferred embodiment of the present invention, determining the fusion coefficient of a point using the pixel value of that point and the pixel values of the sampled foreground region points and background region points comprises:

for each point on a proportional line, combining the sampled foreground region points and background region points into pairs, each pair including one foreground region point and one background region point, the number of pairs being less than a preset positive integer; and

calculating a fusion coefficient for the point from each pair.
According to a preferred embodiment of the present invention, the fusion coefficient of a point is calculated as follows:
alpha = (U - B) / (F - B)
where alpha is the fusion coefficient of the point, U is the pixel value of the point, and F and B are the pixel values of the foreground region point and the background region point in the pair, respectively.
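A minimal sketch of this calculation (the function name is hypothetical, and the clamp to [0, 1] is an added safeguard for noisy samples, not something stated in the text):

```python
def alpha_from_sample(u, f, b):
    """Recover the fusion coefficient from U = F * alpha + (1 - alpha) * B."""
    if f == b:
        raise ValueError("foreground and background values must differ")
    alpha = (u - b) / (f - b)
    return min(max(alpha, 0.0), 1.0)  # a valid coefficient lies in [0, 1]

print(alpha_from_sample(125.0, 200.0, 100.0))  # 0.25
```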
According to a preferred embodiment of the present invention, the number of pairs is greater than 1 and less than 10.
The present invention also provides an apparatus for determining a fusion coefficient, the apparatus comprising:

a positioning unit, configured to locate feature points of a target in an image, the feature points including contour points;

a division unit, configured to divide the target proportionally using the contour points to form N proportional lines, where N is a preset positive integer;

a first determining unit, configured to determine a fusion coefficient for each proportional line in the unknown region of the target; and

a second determining unit, configured to interpolate the other pixels of the unknown region using the fusion coefficients of the proportional lines of the unknown region to obtain a fusion coefficient for each pixel in the unknown region.
According to a preferred embodiment of the present invention, the division unit specifically performs:

performing proportional division on the lines connecting the center point of the target to the contour points, obtaining N division points on each connecting line; and

forming the proportional lines from the corresponding division points on the connecting lines.

According to a preferred embodiment of the present invention, if the contour points include only outer contour points, the center point of the target is the center point of the area covered by the target;

if the contour points include inner contour points and outer contour points, the center point of the target is the midpoint between each inner contour point and its corresponding outer contour point.
According to a preferred embodiment of the present invention, the apparatus further comprises a region division unit, configured to determine a foreground region, a background region, and an unknown region of the target.

According to a preferred embodiment of the present invention, the region division unit specifically performs:

if the target has inner and outer contours, taking as the foreground region the region formed by the points for which the ratio of the distance from the inner contour to the distance between the inner and outer contours lies between a first threshold and a second threshold, where the first threshold is smaller than the second threshold, and both thresholds are preset positive numbers less than 1;

taking, within the area covered by the target, the area other than the foreground region as the unknown region; and

taking the area outside the area covered by the target as the background region.

According to a preferred embodiment of the present invention, the region division unit specifically performs:

if the target has only an outer contour, taking as the foreground region the region formed by the points for which the ratio of the distance from the center point of the target to the distance between the center point and the outer contour is within a third threshold, where the third threshold is a preset positive number less than 1;

taking, within the area covered by the target, the area other than the foreground region as the unknown region; and

taking the area outside the area covered by the target as the background region.
According to a preferred embodiment of the present invention, the first determining unit specifically performs, for each proportional line of the unknown region:

for points on the proportional line, sampling pixel values of foreground region points and background region points along the normal direction;

determining a fusion coefficient for each such point using the pixel value of that point and the pixel values of the sampled foreground region points and background region points; and

determining the fusion coefficient of the proportional line from the fusion coefficients of the points on the line.

According to a preferred embodiment of the present invention, when determining the fusion coefficient of the proportional line from the fusion coefficients of the points on the line, the first determining unit specifically performs:

taking, among the fusion coefficients of the points on the proportional line, the fusion coefficient that occurs most often as the fusion coefficient of the line.

According to a preferred embodiment of the present invention, when determining the fusion coefficient of a point, the first determining unit specifically performs:

for each point on a proportional line, combining the sampled foreground region points and background region points into pairs, each pair including one foreground region point and one background region point, the number of pairs being less than a preset positive integer; and

calculating a fusion coefficient for the point from each pair.
According to a preferred embodiment of the present invention, the first determining unit calculates the fusion coefficient of a point as follows:
alpha = (U - B) / (F - B)
where alpha is the fusion coefficient of the point, U is the pixel value of the point, and F and B are the pixel values of the foreground region point and the background region point in the pair, respectively.

According to a preferred embodiment of the present invention, the number of pairs is greater than 1 and less than 10.
As can be seen from the above technical solutions, the present invention determines the fusion coefficients of proportional lines by dividing the target proportionally, and interpolates the other pixels of the unknown region using the fusion coefficients of the proportional lines of the unknown region to obtain a fusion coefficient for each pixel in the unknown region. In other words, lines stand in for individual points first, and interpolation is then applied, so that the inversion of a very large matrix is avoided; complexity is reduced both in problem size and in the algorithm itself, little time is consumed, and real-time performance is improved.
[Description of the Drawings]

FIG. 1 is a flowchart of the main method according to an embodiment of the present invention;

FIG. 2 is a flowchart of a detailed method according to an embodiment of the present invention;

FIG. 3a and FIG. 3b are, respectively, a schematic diagram of the process and a schematic diagram of the result of proportional division according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of region division according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of pixel-value sampling along the normal direction according to an embodiment of the present invention;

FIG. 6 is a rendering of the fusion coefficients of the pixels of the lips according to an embodiment of the present invention;

FIG. 7 is a structural diagram of an apparatus for determining a fusion coefficient according to an embodiment of the present invention.

[Detailed Description]
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the drawings and specific embodiments.

The present invention determines the fusion coefficients of the unknown region of a target in an image. These fusion coefficients are used for background fusion of the target, that is, for fusing the color model of the target with the background. FIG. 1 is a flowchart of the main method according to an embodiment of the present invention. As shown in FIG. 1, the method mainly includes the following steps.
In 101, feature points of a target in the image are located.

The feature points involved in this step mainly include contour points. They may include only outer contour points, for example when the target is an eye, or both inner and outer contour points, for example when the target is the lips (that is, the inner and outer boundaries of the lips). Depending on the specific target type, special feature points may also be included; for example, if the target is the lips, the feature points may also include the corners of the mouth. This step in effect determines the position information of these feature points, which can be expressed as coordinates in the image.
In 102, the target is divided proportionally using the contour points to form N proportional lines, where N is a preset positive integer.

In this step, proportional division is performed on the lines connecting the center point of the target to the contour points, N division points are obtained on each connecting line, and the proportional lines are formed from the corresponding division points on the connecting lines. The present invention mainly uses the proportional lines that fall within the unknown region.
Two cases can be distinguished. If the contour points obtained above include only outer contour points, the center point is the center point of the area covered by the target; this case is easy to understand. Proportional division is performed on the lines radiating from the target's center point to the outer contour points, for example into 8 equal parts, so that 7 division points exist on each connecting line. Suppose the division points on each line are numbered from 1 to 7 starting from the center point; then all division points numbered 1 across the connecting lines are joined to form one proportional line, all division points numbered 2 are joined to form another proportional line, and so on.

If the contour points obtained by the positioning include both inner and outer contour points, the center point is the midpoint between each inner contour point and its corresponding outer contour point. Positioning usually yields equal numbers of inner and outer contour points, stored as arrays. Suppose there are 10 inner contour points and 10 outer contour points; a midpoint is taken between the inner and outer contour points numbered 1, between those numbered 2, and so on. The line from each midpoint to its corresponding inner contour point is divided proportionally, and the corresponding division points on these lines form proportional lines; the line from each midpoint to its corresponding outer contour point is likewise divided proportionally, and the corresponding division points form proportional lines. This case is described in detail in a later embodiment.
In 103, a fusion coefficient is determined for each proportional line in the unknown region of the target.

In the embodiments of the present invention, the fusion coefficients of the pixels on a given proportional line are considered equal, and the other pixels in the unknown region can be obtained by interpolating the fusion coefficients of the proportional lines. This avoids determining a fusion coefficient for every point one by one, which greatly reduces the amount of computation and improves efficiency. How the fusion coefficient of each proportional line is determined is described in detail in a later embodiment.

In 104, the pixels of the unknown region are interpolated using the fusion coefficients of the proportional lines of the unknown region to obtain a fusion coefficient for each pixel in the unknown region.
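The text does not fix a particular interpolation scheme. As one illustration only, a pixel's coefficient could be interpolated linearly from the coefficients of the two nearest proportional lines, weighting each line's coefficient by the distance to the other line; the function name and this choice of scheme are assumptions.

```python
def interpolate_alpha(d1, alpha1, d2, alpha2):
    """Linearly interpolate between the coefficients of the two nearest
    proportional lines; d1 and d2 are the pixel's distances to those lines."""
    return (alpha1 * d2 + alpha2 * d1) / (d1 + d2)

# A pixel one third of the way from a line with coefficient 0.2
# toward a line with coefficient 0.8:
print(interpolate_alpha(1.0, 0.2, 2.0, 0.8))  # ≈ 0.4
```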
The approach provided by the present invention can be applied, for example, to color processing of organs in an image, such as coloring the lips or eyes for beautification. The following embodiment takes the lips in an image as the target and determines the fusion coefficients of the unknown region of the lips. FIG. 2 is a flowchart of a detailed method according to an embodiment of the present invention. As shown in FIG. 2, the flow may specifically include the following steps.

In 201, feature points of the lips in the image are located. The feature points may include the inner and outer contours of the upper lip and the inner and outer contours of the lower lip, and may also include the corners of the mouth. The above feature-point positioning can be used whether the lips are open or closed. When the lips are closed, only the outer contours of the upper and lower lips may be located, optionally together with the corners of the mouth.

The present invention does not restrict how the feature points are located; any feature-point positioning method can be used, such as positioning based on an SDM (Supervised Descent Method) model or id-exp model positioning, ultimately yielding the position information of the above feature points.
In 202, proportional division is performed on the lines connecting the midpoints of the inner and outer contour points to the outer contour points, and proportional lines are formed from the corresponding division points on these lines; proportional division is likewise performed on the lines connecting these midpoints to the inner contour points, and proportional lines are formed from the corresponding division points.

As an example, as shown in FIG. 3a, suppose positioning yields 12 inner contour points and 12 outer contour points, labeled a1 to a12 and b1 to b12 in FIG. 3a, respectively. Take a5 and b5 as an example of corresponding inner and outer contour points; their midpoint is o5. The line from o5 to a5 is divided proportionally, say into 3 segments, giving 2 division points, and the line from o5 to b5 is likewise divided into 3 segments, giving 2 division points. The other inner and outer contour points are divided in the same way: taking a6 and b6 as an example, o6 is the midpoint between a6 and b6, and there are likewise 2 division points between o6 and a6 and 2 between o6 and b6. The division points on the line from a5 to o5 are then joined one-to-one with the division points on the line from a6 to o6, as shown in FIG. 3a; these joining lines are the proportional lines. All inner and outer contour points are processed in the same way, and the resulting proportional lines may look as shown in FIG. 3b.
In 203, the foreground region, background region, and unknown region of the lips in the image are determined.

If the target has inner and outer contours, as with the open lips shown in FIG. 3a, the region formed by the points for which the ratio of the distance from the inner contour to the distance between the inner and outer contours lies between a first threshold and a second threshold can be taken as the foreground region; that is, the middle part of the area covered by the target serves as the foreground region. The first threshold is smaller than the second, and both are preset positive numbers less than 1, usually empirical values, for example 0.3 for the first threshold and 0.6 for the second. Within the area covered by the target, the area other than the foreground region is taken as the unknown region, and the area outside the area covered by the target is taken as the background region.

If the target has only an outer contour, the region formed by the points for which the ratio of the distance from the center point of the target to the distance between the center point and the outer contour is within a third threshold is taken as the foreground region, where the third threshold is a preset positive number less than 1, usually an empirical value such as 0.6. Within the area covered by the target, the area other than the foreground region is taken as the unknown region, and the area outside the area covered by the target is taken as the background region.

Taking the lips shown in FIG. 4 as an example, for points on line 2 the ratio of the distance from the inner contour to the distance between the inner and outer contours is 0.3, and for points on line 1 that ratio is 0.6. The region between line 1 and line 2 is the foreground region; the region between the inner contour and line 2 and the region between the outer contour and line 1 form the unknown region; the remaining regions are the background region. To constrain later sampling, the background region may be limited further, for example to the points for which the ratio of the distance from the outer contour to the distance between the inner and outer contours is less than a fourth threshold, a preset value between 0 and 1 such as 0.1. If that ratio is 0.1 for points on line 3, the region between line 3 and the outer contour is the background region.
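A minimal sketch of this region division for a single point, using the example thresholds 0.3 and 0.6 for the inner/outer contour case. The function name is hypothetical, and the distance inputs and the inside/outside test are assumed to be computed elsewhere (e.g., from the located contours).

```python
def classify_point(d_inner, d_total, t1=0.3, t2=0.6, inside_target=True):
    """Classify a point as 'foreground', 'unknown', or 'background';
    d_inner is its distance from the inner contour and d_total the distance
    between the inner and outer contours along the same direction."""
    if not inside_target:
        return "background"
    ratio = d_inner / d_total
    if t1 <= ratio <= t2:
        return "foreground"
    return "unknown"

print(classify_point(4.5, 10.0))                       # foreground (ratio 0.45)
print(classify_point(1.0, 10.0))                       # unknown (ratio 0.1)
print(classify_point(0.0, 10.0, inside_target=False))  # background
```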
The present invention does not limit the order of steps 202 and 203; they may be performed in either order or simultaneously.
In 204, steps 2041 to 2043 are performed for each proportional line of the unknown region. Steps 2041 to 2043 below are in fact the process of determining the fusion coefficient of a single proportional line.

In 2041, for points on the proportional line, pixel values of foreground region points and background region points are sampled along the normal direction.

This step may cover all pixels on the proportional line or only some of them; for example, some pixels may be selected at intervals along the line.

In 2042, a fusion coefficient is determined for each such point using the pixel value of that point and the pixel values of the sampled foreground region points and background region points.

The sampled foreground region points and background region points can be combined into pairs, each pair including one foreground region point and one background region point. Substituting the values of each pair into the following formula yields one fusion coefficient:
alpha = (U - B) / (F - B)

where alpha is the fusion coefficient of the point, U is the pixel value of the point, and F and B are the pixel values of the foreground region point and the background region point in the pair, respectively.
To control the amount of computation, the number of sampled combinations can be kept between 1 and 10.
As an example, as shown in FIG. 5, isometric line n is one of the isometric lines in the unknown area, and pn1 is a point on it. At this point, background area points pb1, pb2 and pb3 and foreground area points pf1, pf2 and pf3 are sampled along the normal direction. Pairing them yields the combinations {pb1, pf1}, {pb1, pf2}, {pb1, pf3}, {pb2, pf1}, {pb2, pf2}, {pb2, pf3}, {pb3, pf1}, {pb3, pf2} and {pb3, pf3}. Computing the fusion coefficient of pn1 from each combination gives nine results. The fusion coefficient of pn1 may then be obtained by taking the mean of these nine results, their median, one of the values, or the like; alternatively, all nine results may be kept as fusion coefficients of pn1 and carried into the subsequent steps.
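The pairing step above can be sketched as follows. This is a minimal illustration assuming the per-pair fusion coefficient is alpha = (U - B) / (F - B); the helper names are hypothetical:

```python
from itertools import product
from statistics import mean

def alpha_from_pair(u, f, b):
    """One fusion coefficient from one {background, foreground} pair, clamped to [0, 1]."""
    if f == b:
        return 0.0                      # degenerate pair, avoid division by zero
    return min(1.0, max(0.0, (u - b) / (f - b)))

def point_alpha(u, fg_samples, bg_samples, reduce=mean):
    """Fusion coefficient of one point on an isometric line, e.g. pn1 in FIG. 5:
    every foreground/background pair is evaluated and reduced (mean by default)."""
    alphas = [alpha_from_pair(u, f, b)
              for f, b in product(fg_samples, bg_samples)]
    return reduce(alphas)
```

With three foreground and three background samples, `product` produces exactly the nine combinations enumerated in the example.
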
In 2043, the fusion coefficient of the isometric line is determined from the fusion coefficients of the points on it.
In this step, the number of occurrences of each fusion coefficient among the points on the isometric line can be counted, for example with a histogram, and the fusion coefficient with the most occurrences taken as the fusion coefficient of the line. Alternatively, the fusion coefficient of the isometric line may be determined as the mean, the median, or the like of the fusion coefficients of the points on it.
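A possible histogram-mode reduction for one isometric line, sketched under the assumption that coefficients are quantized into bins before counting (the bin count is an arbitrary choice, not specified by the text):

```python
from collections import Counter

def line_alpha(point_alphas, bins=100):
    """Most frequent fusion coefficient among the points on one isometric line."""
    quantized = [round(a * bins) for a in point_alphas]   # histogram bins
    most_common_bin, _count = Counter(quantized).most_common(1)[0]
    return most_common_bin / bins
```
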
By performing steps 2041 to 2043 for every isometric line in the unknown area, the fusion coefficient of each isometric line in the unknown area is obtained.
In 205, the remaining pixel points of the unknown area are interpolated using the fusion coefficients of the isometric lines of the unknown area, yielding a fusion coefficient for every pixel point in the unknown area.
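The interpolation in 205 can be illustrated, for one ray through the unknown area, as linear interpolation between the two bracketing isometric lines. Positions here are expressed as contour-distance ratios; the whole sketch is an assumption about one reasonable realization, not the patent's prescribed method:

```python
import bisect

def interp_alpha(r, line_ratios, line_alphas):
    """Fusion coefficient of a pixel at ratio r, interpolated from the isometric
    lines at sorted ratios line_ratios with known coefficients line_alphas."""
    i = bisect.bisect_left(line_ratios, r)
    if i == 0:
        return line_alphas[0]            # before the first line: clamp
    if i == len(line_ratios):
        return line_alphas[-1]           # past the last line: clamp
    r0, r1 = line_ratios[i - 1], line_ratios[i]
    a0, a1 = line_alphas[i - 1], line_alphas[i]
    t = (r - r0) / (r1 - r0)             # position between the two bracketing lines
    return a0 + t * (a1 - a0)
```
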
After the fusion coefficients are determined as described above, if brightness is used to represent the magnitude of the fusion coefficient at each lip pixel, the result can look as shown in FIG. 6: the brighter (lighter, whiter) a pixel, the larger its fusion coefficient.
Once the fusion coefficient of each pixel point in the unknown area is obtained, it can be used to blend a color model with the current color of the target's unknown area in the image, producing the color-processed target. In the lip example above, if a beauty application colors the lips, blending the loaded color model with the current color of the unknown lip area using these fusion coefficients yields the color of that area after beautification. The color processing may use the formula:
Pho = Model × alpha + (1 - alpha) × Cur

where Pho is the pixel value after color processing, Model is the pixel value of the color model to be loaded, and Cur is the current pixel value in the image.
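The blending formula above, applied per channel to plain RGB tuples, can be sketched as:

```python
def blend_pixel(model, cur, alpha):
    """Pho = Model * alpha + (1 - alpha) * Cur, applied per color channel."""
    return tuple(round(m * alpha + (1 - alpha) * c) for m, c in zip(model, cur))
```
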
FIG. 7 is a structural diagram of an apparatus for determining fusion coefficients according to an embodiment of the present invention. The apparatus can be used in applications that perform color processing on images. As shown in FIG. 7, the apparatus may include a positioning unit 01, a splitting unit 02, a first determining unit 03 and a second determining unit 04, and may further include an area dividing unit 05. The main functions of these units are as follows:
The positioning unit 01 locates feature points of the target in the image. The feature points mainly comprise contour points, and may include only outer contour points (for example, when the target is an eye) or both inner and outer contour points (for example, when the target is lips, i.e. the inner and outer boundaries of the lips). Depending on the target type, special feature points may also be included; for example, if the target is lips, the feature points may also include the mouth corners.
The splitting unit 02 splits the target into equal proportions using the contour points, forming N isometric lines, where N is a preset positive integer. Specifically, proportional splitting can be performed on the connecting lines from the center point of the target to the contour points, giving N split points on each connecting line; the isometric lines are then formed by the equal-ratio split points of the connecting lines.
If the contour points include only outer contour points, the center point of the target is the center point of the area covered by the target. Proportional splitting is performed on the connecting lines radiating from the target's center point to the outer contour points. For example, if each connecting line is divided into 8 equal parts, there are 7 split points on each line. Numbering the split points on each connecting line from 1 to 7 starting from the center point, all split points numbered 1 are connected to form one isometric line, all split points numbered 2 are connected to form another, and so on.
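The splitting described above (center point, 8 equal parts, connecting the split points of the same rank) can be sketched as follows; the coordinate handling is a hypothetical illustration:

```python
def split_points(center, contour_point, n=8):
    """The n - 1 equal-ratio split points on the line from center to contour_point."""
    cx, cy = center
    px, py = contour_point
    return [(cx + (px - cx) * k / n, cy + (py - cy) * k / n) for k in range(1, n)]

def isometric_line(center, contour_points, rank, n=8):
    """The rank-th isometric line: the split point with the same rank on every
    connecting line, counted outward from the center (rank = 1 .. n - 1)."""
    return [split_points(center, p, n)[rank - 1] for p in contour_points]
```
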
If the contour points include inner contour points and outer contour points, the center point of the target is the midpoint between each inner contour point and its corresponding outer contour point. Positioning typically yields equal numbers of inner and outer contour points, stored as arrays; supposing there are 10 inner contour points and 10 outer contour points, the midpoint of inner contour point 1 and outer contour point 1 is taken, then the midpoint of inner contour point 2 and outer contour point 2, and so on. The connecting line from each midpoint to its corresponding inner contour point is proportionally split, and the equal-ratio split points on these connecting lines form isometric lines; likewise, the connecting line from each midpoint to its corresponding outer contour point is proportionally split, and the equal-ratio split points form isometric lines. For concrete examples, see the description of FIG. 3a and FIG. 3b in the method embodiment above.
The area dividing unit 05 determines the foreground area, background area and unknown area of the target. Specifically, if the target has inner and outer contours, the area dividing unit 05 takes as the foreground area the area formed by the points whose ratio of distance to the inner contour to the distance between the inner and outer contours lies between a first threshold and a second threshold, the first threshold being smaller than the second, and both being preset positive numbers smaller than 1, usually empirical values, for example 0.3 for the first threshold and 0.6 for the second. Within the area covered by the target, the area other than the foreground area is the unknown area, and the area outside the area covered by the target is the background area.
If the target has only an outer contour, the area dividing unit 05 takes as the foreground area the area formed by the points whose ratio of distance to the center point of the target to the distance between the center point and the outer contour is within a third threshold, the third threshold being a preset positive number smaller than 1, for example 0.6. Within the area covered by the target, the area other than the foreground area is the unknown area, and the area outside the area covered by the target is the background area.
The first determining unit 03 determines a fusion coefficient for each isometric line of the unknown area of the target.
Specifically, the first determining unit 03 may first, for points on the isometric line, sample pixel values of foreground area points and background area points along the normal direction. This may be done for all pixel points on the isometric line or only for some of them, for example pixel points selected at intervals along the line.
The first determining unit 03 then determines the fusion coefficient of each targeted point using the pixel value of that point and the pixel values of the sampled foreground area points and background area points. The sampled foreground area points and background area points may be paired into combinations, each containing one foreground area point and one background area point; substituting the values of each combination into the following formula yields one fusion coefficient:
alpha = (U - B) / (F - B)

where alpha is the fusion coefficient of the targeted point, U is the pixel value of the targeted point, and F and B are the pixel values of the foreground area point and the background area point in the combination, respectively. To control the amount of computation, the number of sampled combinations can be kept between 1 and 10.
Finally, the first determining unit 03 determines the fusion coefficient of the isometric line from the fusion coefficients of the points on it. The number of occurrences of each fusion coefficient among the points on the line can be counted, for example with a histogram, and the fusion coefficient with the most occurrences taken as the fusion coefficient of the line; alternatively, the fusion coefficient of the line may be determined as the mean, the median, or the like of the point coefficients.
The second determining unit 04 interpolates the remaining pixel points of the unknown area using the fusion coefficients of the isometric lines of the unknown area, obtaining a fusion coefficient for every pixel point in the unknown area.
After the fusion coefficient of each pixel point in the unknown area is obtained by the above apparatus for determining fusion coefficients, a color processing application can use these coefficients to blend a color model with the current color of the target's unknown area in the image, producing the color-processed target.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementation.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (22)

  1. A method for determining fusion coefficients, characterized in that the method comprises:
    locating feature points of a target in an image, the feature points comprising contour points;
    splitting the target into equal proportions using the contour points to form N isometric lines, N being a preset positive integer;
    determining a fusion coefficient for each isometric line of an unknown area of the target; and
    interpolating the remaining pixel points of the unknown area using the fusion coefficients of the isometric lines of the unknown area, to obtain a fusion coefficient of each pixel point in the unknown area.
  2. The method according to claim 1, characterized in that splitting the target into equal proportions using the contour points to form N isometric lines comprises:
    performing proportional splitting on the connecting lines from the center point of the target to the contour points, obtaining N split points on each connecting line; and
    forming the isometric lines from the equal-ratio split points on the connecting lines.
  3. The method according to claim 2, characterized in that, if the contour points comprise only outer contour points, the center point of the target is the center point of the area covered by the target;
    and if the contour points comprise inner contour points and outer contour points, the center point of the target is the midpoint between each inner contour point and its corresponding outer contour point.
  4. The method according to claim 1, characterized in that the method further comprises:
    determining a foreground area, a background area and an unknown area of the target.
  5. The method according to claim 4, characterized in that, if the target has inner and outer contours, the area formed by the points whose ratio of distance to the inner contour to the distance between the inner and outer contours lies between a first threshold and a second threshold is taken as the foreground area, the first threshold being smaller than the second threshold, and both thresholds being preset positive numbers smaller than 1;
    within the area covered by the target, the area other than the foreground area is taken as the unknown area; and
    the area outside the area covered by the target is taken as the background area.
  6. The method according to claim 4, characterized in that, if the target has only an outer contour, the area formed by the points whose ratio of distance to the center point of the target to the distance between the center point and the outer contour is within a third threshold is taken as the foreground area, the third threshold being a preset positive number smaller than 1;
    within the area covered by the target, the area other than the foreground area is taken as the unknown area; and
    the area outside the area covered by the target is taken as the background area.
  7. The method according to any one of claims 1 to 6, characterized in that determining the fusion coefficient of each isometric line of the unknown area of the target comprises performing, for each isometric line of the unknown area:
    sampling, for points on the isometric line, pixel values of foreground area points and background area points along the normal direction;
    determining the fusion coefficient of each targeted point using the pixel value of that point and the pixel values of the sampled foreground area points and background area points; and
    determining the fusion coefficient of the isometric line from the fusion coefficients of the points on it.
  8. The method according to claim 7, characterized in that determining the fusion coefficient of the isometric line from the fusion coefficients of the points on it comprises:
    taking, among the fusion coefficients of the points on the isometric line, the fusion coefficient with the most occurrences as the fusion coefficient of the isometric line.
  9. The method according to claim 7, characterized in that determining the fusion coefficient of each targeted point using the pixel value of that point and the pixel values of the sampled foreground area points and background area points comprises:
    pairing, for each point on the isometric line, the sampled foreground area points and background area points into combinations, each combination comprising one foreground area point and one background area point, the number of combinations being smaller than a preset positive integer; and
    computing the fusion coefficient of the targeted point from each combination.
  10. The method according to claim 9, characterized in that the fusion coefficient of the targeted point is computed as follows:
    alpha = (U - B) / (F - B)
    where alpha is the fusion coefficient of the targeted point, U is the pixel value of the targeted point, and F and B are the pixel values of the foreground area point and the background area point in the combination, respectively.
  11. The method according to claim 9, characterized in that the number of combinations is greater than 1 and smaller than 10.
  12. An apparatus for determining fusion coefficients, characterized in that the apparatus comprises:
    a positioning unit, configured to locate feature points of a target in an image, the feature points comprising contour points;
    a splitting unit, configured to split the target into equal proportions using the contour points to form N isometric lines, N being a preset positive integer;
    a first determining unit, configured to determine a fusion coefficient for each isometric line of an unknown area of the target; and
    a second determining unit, configured to interpolate the remaining pixel points of the unknown area using the fusion coefficients of the isometric lines of the unknown area, to obtain a fusion coefficient of each pixel point in the unknown area.
  13. The apparatus according to claim 12, characterized in that the splitting unit specifically performs:
    proportional splitting on the connecting lines from the center point of the target to the contour points, obtaining N split points on each connecting line; and
    forming the isometric lines from the equal-ratio split points on the connecting lines.
  14. The apparatus according to claim 13, characterized in that, if the contour points comprise only outer contour points, the center point of the target is the center point of the area covered by the target;
    and if the contour points comprise inner contour points and outer contour points, the center point of the target is the midpoint between each inner contour point and its corresponding outer contour point.
  15. The apparatus according to claim 12, characterized in that the apparatus further comprises an area dividing unit, configured to determine a foreground area, a background area and an unknown area of the target.
  16. The apparatus according to claim 15, characterized in that the area dividing unit specifically performs:
    if the target has inner and outer contours, taking as the foreground area the area formed by the points whose ratio of distance to the inner contour to the distance between the inner and outer contours lies between a first threshold and a second threshold, the first threshold being smaller than the second threshold, and both thresholds being preset positive numbers smaller than 1;
    taking, within the area covered by the target, the area other than the foreground area as the unknown area; and
    taking the area outside the area covered by the target as the background area.
  17. The apparatus according to claim 15, characterized in that the area dividing unit specifically performs:
    if the target has only an outer contour, taking as the foreground area the area formed by the points whose ratio of distance to the center point of the target to the distance between the center point and the outer contour is within a third threshold, the third threshold being a preset positive number smaller than 1;
    taking, within the area covered by the target, the area other than the foreground area as the unknown area; and
    taking the area outside the area covered by the target as the background area.
  18. The apparatus according to any one of claims 12 to 17, characterized in that the first determining unit specifically performs:
    sampling, for points on the isometric line, pixel values of foreground area points and background area points along the normal direction;
    determining the fusion coefficient of each targeted point using the pixel value of that point and the pixel values of the sampled foreground area points and background area points; and
    determining the fusion coefficient of the isometric line from the fusion coefficients of the points on it.
  19. The apparatus according to claim 18, characterized in that, when determining the fusion coefficient of the isometric line from the fusion coefficients of the points on it, the first determining unit specifically performs:
    taking, among the fusion coefficients of the points on the isometric line, the fusion coefficient with the most occurrences as the fusion coefficient of the isometric line.
  20. The apparatus according to claim 18, characterized in that, when determining the fusion coefficient of a targeted point, the first determining unit specifically performs:
    pairing, for each point on the isometric line, the sampled foreground area points and background area points into combinations, each combination comprising one foreground area point and one background area point, the number of combinations being smaller than a preset positive integer; and
    computing the fusion coefficient of the targeted point from each combination.
  21. The apparatus according to claim 20, characterized in that the first determining unit computes the fusion coefficient of the targeted point as follows:
    alpha = (U - B) / (F - B)
    where alpha is the fusion coefficient of the targeted point, U is the pixel value of the targeted point, and F and B are the pixel values of the foreground area point and the background area point in the combination, respectively.
  22. The apparatus according to claim 20, characterized in that the number of combinations is greater than 1 and smaller than 10.
PCT/CN2016/099290 2015-09-29 2016-09-19 Method and device for determining fusion coefficient WO2017054651A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510633094.0 2015-09-29
CN201510633094.0A CN106558043B (en) 2015-09-29 2015-09-29 A kind of method and apparatus of determining fusion coefficients

Publications (1)

Publication Number Publication Date
WO2017054651A1 true WO2017054651A1 (en) 2017-04-06

Family

ID=58414535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099290 WO2017054651A1 (en) 2015-09-29 2016-09-19 Method and device for determining fusion coefficient

Country Status (2)

Country Link
CN (1) CN106558043B (en)
WO (1) WO2017054651A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178366A (en) * 2018-11-12 2020-05-19 杭州萤石软件有限公司 Mobile robot positioning method and mobile robot

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108848367B (en) * 2018-07-26 2020-08-07 宁波视睿迪光电有限公司 Image processing method and device and mobile terminal
CN109472753B (en) * 2018-10-30 2021-09-07 北京市商汤科技开发有限公司 Image processing method and device, computer equipment and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5680471A (en) * 1993-07-27 1997-10-21 Kabushiki Kaisha Toshiba Image processing apparatus and method
CN102393959A (en) * 2010-06-28 2012-03-28 索尼公司 Image processing apparatus, image processing method, and image processing program
CN104156959A (en) * 2014-08-08 2014-11-19 中科创达软件股份有限公司 Video matting method and device
CN104182950A (en) * 2013-05-22 2014-12-03 浙江大华技术股份有限公司 Image processing method and device thereof
CN104537608A (en) * 2014-12-31 2015-04-22 深圳市中兴移动通信有限公司 Image processing method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100369066C (en) * 2002-12-05 2008-02-13 精工爱普生株式会社 Characteristic region extraction device, characteristic region extraction method, and characteristic region extraction program
CN1273940C (en) * 2004-04-12 2006-09-06 浙江大学 Fast drawing forest method of graded hierarchical assembling depth paste-up atlas
CN1564198A (en) * 2004-04-13 2005-01-12 浙江大学 Natural image digging method based on sensing colour space
CN101777180B (en) * 2009-12-23 2012-07-04 中国科学院自动化研究所 Complex background real-time alternating method based on background modeling and energy minimization
CN102063707B (en) * 2011-01-05 2013-06-12 西安电子科技大学 Mean shift based grey relation infrared imaging target segmentation method
CN103606138B (en) * 2013-08-28 2016-04-27 内蒙古科技大学 A kind of fusion method of the medical image based on texture region division

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5680471A (en) * 1993-07-27 1997-10-21 Kabushiki Kaisha Toshiba Image processing apparatus and method
CN102393959A (en) * 2010-06-28 2012-03-28 Sony Corporation Image processing apparatus, image processing method, and image processing program
CN104182950A (en) * 2013-05-22 2014-12-03 Zhejiang Dahua Technology Co., Ltd. Image processing method and device
CN104156959A (en) * 2014-08-08 2014-11-19 ThunderSoft Co., Ltd. Video matting method and device
CN104537608A (en) * 2014-12-31 2015-04-22 Shenzhen ZTE Mobile Telecom Co., Ltd. Image processing method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178366A (en) * 2018-11-12 2020-05-19 杭州萤石软件有限公司 Mobile robot positioning method and mobile robot
CN111178366B (en) * 2018-11-12 2023-07-25 杭州萤石软件有限公司 Mobile robot positioning method and mobile robot

Also Published As

Publication number Publication date
CN106558043B (en) 2019-07-23
CN106558043A (en) 2017-04-05

Similar Documents

Publication Publication Date Title
US11170558B2 (en) Automatic rigging of three dimensional characters for animation
JP6636154B2 (en) Face image processing method and apparatus, and storage medium
US9547908B1 (en) Feature mask determination for images
CN109344742B (en) Feature point positioning method and device, storage medium and computer equipment
WO2020119458A1 (en) Facial landmark detection method and apparatus, computer device and storage medium
KR20220006657A (en) Remove video background using depth
US10706613B2 (en) Systems and methods for dynamic occlusion handling
WO2017054651A1 (en) Method and device for determining fusion coefficient
WO2022179401A1 (en) Image processing method and apparatus, computer device, storage medium, and program product
CN108463823A (en) Method, device and terminal for reconstructing a user hair model
WO2021062998A1 (en) Image processing method, apparatus and electronic device
CN113554665A (en) Blood vessel segmentation method and device
Hsieh et al. Automatic trimap generation for digital image matting
AU2006345533B2 (en) Multi-tracking of video objects
US8941651B2 (en) Object alignment from a 2-dimensional image
WO2022237089A1 (en) Image processing method and apparatus, and device, storage medium, program product and program
US11461870B2 (en) Image processing method and device, and electronic device
WO2022061850A1 (en) Point cloud motion distortion correction method and device
TWI724092B (en) Method and device for determining fusion coefficient
CN114913305A (en) Model processing method, device, equipment, storage medium and computer program product
CN111652978A (en) Grid generation method and device, electronic equipment and storage medium
TWI817540B (en) Method for obtaining depth image , electronic device and computer-readable storage medium
CN106709892A (en) Fast region dilation algorithm and device for arbitrary structuring elements based on run-length encoding
CN114998554A (en) Three-dimensional cartoon face modeling method and device
CN115861041A (en) Image style migration method and device, computer equipment, storage medium and product

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16850275

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 16850275

Country of ref document: EP

Kind code of ref document: A1