CN103093469A - Depth extraction method based on visual model - Google Patents

Depth extraction method based on a visual model

Info

Publication number
CN103093469A
CN103093469A (application CN201310023799.1A)
Authority
CN
China
Prior art keywords
depth
extraction method
visual model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100237991A
Other languages
Chinese (zh)
Other versions
CN103093469B (en)
Inventor
戴琼海
张洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201310023799.1A priority Critical patent/CN103093469B/en
Publication of CN103093469A publication Critical patent/CN103093469A/en
Application granted granted Critical
Publication of CN103093469B publication Critical patent/CN103093469B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention provides a depth extraction method based on a visual model. The method comprises the following steps: first, convert the color signal of a video image into a grey-scale signal; then compute the probability that two adjacent pixels lie at the same depth; then compute the depth values of all pixels; and finally apply filter smoothing and normalization to the depth values of all pixels. By introducing a visual model into depth extraction, the method makes the computed depth more accurate.

Description

Depth extraction method based on a visual model
Technical field
The present invention relates to the technical field of computer vision, and in particular to a depth extraction method based on a visual model.
Background art
Compared with watching 2D video, viewers watching 3D video experience an immersive sense of reality, so the development of 3D technology is receiving increasing attention. However, 3D video is currently expensive to produce, which leaves 3D sources in short supply; one solution is to convert 2D video into 3D video.
Because the 3D video format usually adopts the "video + depth" pattern, depth extraction must be performed when converting 2D video into 3D video. Existing depth extraction methods extract depth from the intensity, color and texture information of a picture; such methods do not take the human visual model into account, and the depth information they extract is not accurate enough.
Summary of the invention
The present invention aims to solve at least one of the technical problems existing in the prior art, and to this end proposes a depth extraction method based on a visual model.
To achieve the above object, the invention provides a depth extraction method based on a visual model, comprising the following steps:
Step 1: convert the color signal of the video image into a grey-scale signal;
Step 2: compute the probability that two adjacent pixels lie at the same depth;
Step 3: compute the depth values of all pixels;
Step 4: filter and normalize the computed depth values of all pixels.
In a preferred embodiment of the invention, the probability that two adjacent pixels lie at the same depth is computed according to the HMAX visual model, which comprises the following steps:
a: at the S1 layer, filter the video image;
b: at the C1 layer, combine the outputs of the S1 layer and perform filtering and normalization;
c: at the S2 layer, filter the image;
d: at the C2 layer, combine the outputs of the S2 layer and perform filtering and normalization.
By introducing a visual model into depth extraction, the method of the invention makes the computed depth more accurate.
Additional aspects and advantages of the invention are set forth in part in the description below; in part they will become apparent from the description, or may be learned by practice of the invention.
Description of drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawing, in which:
Fig. 1 is a flow chart of the depth extraction method based on a visual model according to the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below. Examples of the embodiments are shown in the drawing, in which the same or similar reference numbers denote throughout the same or similar elements, or elements with the same or similar functions. The embodiments described below with reference to the drawing are exemplary; they serve only to explain the invention and are not to be construed as limiting it.
Fig. 1 is a flow chart of the depth extraction method based on a visual model according to the invention. As the figure shows, the method comprises the following steps:
Step 1: convert the color signal of the video image into a grey-scale signal;
Step 2: compute the probability that two adjacent pixels lie at the same depth;
Step 3: compute the depth values of all pixels;
Step 4: filter and normalize the computed depth values of all pixels.
In a preferred embodiment of the invention, the concrete steps are as follows.
First, the color signal of the video image is converted into a grey-scale signal. For each pixel, the grey-scale signal is taken as I(u, v) = (R(u, v) + G(u, v) + B(u, v))/3, where R, G and B are the three channel values of the pixel in row u and column v of the video image when it is a color signal.
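The patent contains no reference code; as a minimal illustration, the channel-averaging step can be sketched in NumPy (the function name and the toy image below are ours, not the patent's):

```python
import numpy as np

def to_gray(rgb):
    """Average the three channels, as in the patent's I = (R+G+B)/3."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return (rgb[..., 0] + rgb[..., 1] + rgb[..., 2]) / 3.0

# tiny 1x2 "image": one red pixel, one grey pixel
img = np.array([[[90, 0, 0], [30, 30, 30]]], dtype=np.float64)
gray = to_gray(img)   # both pixels average to 30.0
```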
Then the probability that two adjacent pixels lie at the same depth is computed according to the HMAX visual model, which comprises the following steps:
a: at the S1 layer, filter the video image;
b: at the C1 layer, combine the outputs of the S1 layer and perform filtering and normalization;
c: at the S2 layer, filter the image;
d: at the C2 layer, combine the outputs of the S2 layer and perform filtering and normalization.
In step a, for the eight combinations of the four directions θ = 0, θ = π/4, θ = π/2, θ = 3π/4 with the two phases φ = 0 and φ = −π/2, the original image matrix I is convolved with the rectangular window defined by f1_{θ,φ}(x, y) = exp(−((x·cosθ + y·sinθ)² + (−x·sinθ + y·cosθ)²)/10) × cos(π(x·cosθ + y·sinθ)/2 + φ); the two-dimensional convolutions I * f1_{θ,φ} give eight matrices S1_{θ,φ}. The rectangular window f1_{θ,φ} has size 5×5 and its center point has coordinate (0, 0). In this embodiment, x and y are the coordinates within the filter window formed by the rectangular window.
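A sketch of this S1 filtering step, assuming NumPy and zero-padded "same"-size convolution; the patent does not specify the boundary handling, and the negative signs of the Gaussian envelope are reconstructed from the garbled published formula:

```python
import numpy as np

def f1_window(theta, phi, size=5, sigma2=10.0):
    """5x5 Gabor-like window centred at (0,0); sign conventions assumed."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(u ** 2 + v ** 2) / sigma2) * np.cos(np.pi * u / 2 + phi)

def conv2_same(img, win):
    """Plain 2-D convolution with zero padding, output same size as img."""
    h, w = img.shape
    k = win.shape[0] // 2
    padded = np.pad(img, k)
    flipped = win[::-1, ::-1]
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2 * k + 1, j:j + 2 * k + 1] * flipped)
    return out

thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
phis = [0, -np.pi / 2]
img = np.random.default_rng(0).random((8, 8))
# eight S1 matrices, one per (direction, phase) combination
S1 = {(t, p): conv2_same(img, f1_window(t, p)) for t in thetas for p in phis}
```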
In step b, the eight matrices S1_{θ,φ} from step a are first combined, for the two phases of each of the four directions, by the square-weighted combination g1_θ = sqrt(S1_{θ,0}² + S1_{θ,−π/2}²). Each g1_θ is then convolved with the rectangular window defined by f2 = g2(exp(−(x² + y²)/10)/10 − exp(−(x² + y²)/5)/5); the two-dimensional convolutions g1_θ * f2 give four filtered matrices C1_θ, where g2(x) = x when x > 0 and g2(x) = 0 when x ≤ 0. f2 is chosen with size 15×15 and its center point has coordinate (0, 0).
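The two ingredients of step b can be sketched as follows; the squares inside the square-weighted combination and the rectification of the difference-of-Gaussians window are reconstructions from the garbled published formulas:

```python
import numpy as np

def c1_combine(S1_a, S1_b):
    """Quadrature energy g1_theta = sqrt(S1_{theta,0}^2 + S1_{theta,-pi/2}^2)."""
    return np.sqrt(S1_a ** 2 + S1_b ** 2)

def f2_window(size=15, s1=10.0, s2=5.0):
    """f2 = g2(DoG): difference of Gaussians with negative values clipped to 0."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2
    dog = np.exp(-r2 / s1) / s1 - np.exp(-r2 / s2) / s2
    return np.maximum(dog, 0.0)   # g2: keep positive part only

g = c1_combine(np.array([[3.0]]), np.array([[4.0]]))   # sqrt(9+16) = 5
f2 = f2_window()
```

The windows f3 and f4 of steps c and d have the same structure; they differ only in the phases used (−π/4 and −3π/4) and in the name of the rectifier.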
In step c, for the eight combinations of the four directions θ = 0, θ = π/4, θ = π/2, θ = 3π/4 with the two phases φ = −π/4 and φ = −3π/4, each image matrix C1_θ produced by step b is convolved with the rectangular window defined by f3_{θ,φ}(x, y) = exp(−((x·cosθ + y·sinθ)² + (−x·sinθ + y·cosθ)²)/10) × cos(π(x·cosθ + y·sinθ)/2 + φ); the two-dimensional convolutions C1_θ * f3_{θ,φ} give eight matrices S2_{θ,φ}. The rectangular window f3_{θ,φ} has size 5×5 and its center point has coordinate (0, 0).
In step d, the results for the two phases of each of the four directions in step c are first combined by the square-weighted combination g3_θ = sqrt(S2_{θ,−π/4}² + S2_{θ,−3π/4}²). Each g3_θ is then convolved with the rectangular window defined by f4 = g4(exp(−(x² + y²)/10)/10 − exp(−(x² + y²)/5)/5); the two-dimensional convolutions g3_θ * f4 perform the filtering and give four matrices C2_θ, where g4(x) = x when x > 0 and g4(x) = 0 when x ≤ 0. f4 is chosen with size 15×15 and its center point has coordinate (0, 0).
The four matrices C2_θ obtained in step d are then normalized, converting each C2_θ into a probability, i.e. limiting its values to the interval (0, 1). Each matrix C2_θ has h×w pixels; its values are sorted in descending order and P0 is taken as the (0.1·h·w)-th largest pixel value. Each element is then computed as P_θ(x, y) = (C2_θ(x, y)/P0)², where 1 ≤ x ≤ h, 1 ≤ y ≤ w, h is the number of pixel rows of the image and w is the number of pixel columns; if P_θ(x, y) > 1, P_θ(x, y) is set to 1.
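A sketch of this normalization into probabilities; selecting the (0.1·h·w)-th largest value via a full descending sort is our choice, since the patent does not prescribe a selection method:

```python
import numpy as np

def to_probability(C2, frac=0.1):
    """Normalise a C2 map into (0, 1]: P0 is the value at the top `frac`
    fraction of pixels; P = (C2/P0)^2, clipped at 1."""
    h, w = C2.shape
    flat = np.sort(C2.ravel())[::-1]        # descending order
    k = max(int(frac * h * w) - 1, 0)
    P0 = flat[k]                            # the (0.1*h*w)-th largest value
    P = (C2 / P0) ** 2
    return np.minimum(P, 1.0)

C2 = np.arange(1.0, 101.0).reshape(10, 10)  # toy 10x10 response map
P = to_probability(C2)                      # P0 = 91, top decile clips to 1
```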
After the probability that two adjacent pixels lie at the same depth has been computed with the HMAX visual model, the depth values of all pixels are computed as follows: first, compute the depth values of the pixels in the first column and the first row; then compute the depth values of the pixels in every row and column other than the first column and the first row.
The depth values of the pixels in the first column and the first row are computed as follows: the depth of the top-left pixel is set to 0, i.e. depth(1, 1) = 0; the depth values of the pixels in the first column are:
depth(k1,1)=depth(k1-1,1)+λ|I(k1,1)-I(k1-1,1)|,
The depth values of the pixels in the first row are:
depth(1,k2)=depth(1,k2-1)+λ|I(1,k2)-I(1,k2-1)|,
where 1 < k1 ≤ h, 1 < k2 ≤ w, h is the number of pixel rows of the image, w is the number of pixel columns, I(x, y) denotes the grey value at row x and column y, and λ is the depth coefficient; in this embodiment the depth coefficient λ takes the value 0.5.
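The border initialization can be sketched as follows (indices are 0-based here, while the patent counts from 1):

```python
import numpy as np

def border_depths(I, lam=0.5):
    """Fill the first row and first column of the depth map:
    depth(1,1) = 0, then accumulate lam * |grey difference| along each border."""
    h, w = I.shape
    depth = np.zeros((h, w))
    for k in range(1, h):   # first column
        depth[k, 0] = depth[k - 1, 0] + lam * abs(I[k, 0] - I[k - 1, 0])
    for k in range(1, w):   # first row
        depth[0, k] = depth[0, k - 1] + lam * abs(I[0, k] - I[0, k - 1])
    return depth

I = np.array([[0.0, 2.0, 6.0],
              [4.0, 0.0, 0.0],
              [8.0, 0.0, 0.0]])
d = border_depths(I)   # accumulates along the top row and left column only
```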
A point (x, y) outside the first row and the first column has an upper-left point (x−1, y−1), an upper point (x, y−1), an upper-right point (x+1, y−1) and a left point (x−1, y). The depth values of the pixels in every row and column other than the first column and the first row are computed as:
depth(x, y) = (depth(x−1, y−1) + P_{3π/4}(x, y)·|I(x, y) − I(x−1, y−1)|)/4 + (depth(x, y−1) + P_{π/2}(x, y)·|I(x, y) − I(x, y−1)|)/4 + (depth(x+1, y−1) + P_{π/4}(x, y)·|I(x, y) − I(x+1, y−1)|)/4 + (depth(x−1, y) + P_{0}(x, y)·|I(x, y) − I(x−1, y)|)/4,
where 1 < x ≤ h, 1 < y ≤ w, h is the number of pixel rows of the image and w is the number of pixel columns.
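A sketch of the interior propagation in scan order, using (row, column) indexing. Clamping the upper-right neighbor at the last column is our assumption, since the patent does not say how the image edge is handled; a constant map stands in for the HMAX probabilities:

```python
import numpy as np

def propagate_depth(I, depth, P):
    """Scan-order depth propagation; borders of `depth` are assumed
    already filled.  P maps each of the four directions to an h*w
    probability map (here supplied by the caller)."""
    h, w = I.shape
    for r in range(1, h):
        for c in range(1, w):
            cr = min(c + 1, w - 1)   # clamp upper-right neighbor at the edge
            terms = [
                depth[r - 1, c - 1] + P['3pi/4'][r, c] * abs(I[r, c] - I[r - 1, c - 1]),
                depth[r - 1, c] + P['pi/2'][r, c] * abs(I[r, c] - I[r - 1, c]),
                depth[r - 1, cr] + P['pi/4'][r, c] * abs(I[r, c] - I[r - 1, cr]),
                depth[r, c - 1] + P['0'][r, c] * abs(I[r, c] - I[r, c - 1]),
            ]
            depth[r, c] = sum(terms) / 4.0
    return depth

# uniform image: all grey differences vanish, so depth stays zero everywhere
I = np.ones((4, 4))
depth0 = np.zeros((4, 4))
P = {k: np.full((4, 4), 0.5) for k in ('3pi/4', 'pi/2', 'pi/4', '0')}
out = propagate_depth(I, depth0, P)
```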
After the depth values of all pixels have been computed, they are filtered and normalized as follows. First, the depth-value matrix depth of all pixels is convolved with the rectangular window defined by f5 = exp(−(x² + y²)/5)/5, the two-dimensional convolution giving g5 = depth * f5; f5 is chosen with size 15×15 and its center point has coordinate (0, 0). Then the result g5 is normalized to the interval [0, 1]: the maximum max and minimum min of all elements of g5 are found and result = (g5 − min)/(max − min) is computed; the result is the final depth map.
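The final smoothing and min-max normalization can be sketched as follows (zero padding at the border is our assumption):

```python
import numpy as np

def smooth_and_normalise(depth, size=15, s=5.0):
    """Convolve with f5 = exp(-(x^2+y^2)/5)/5 (15x15 window, symmetric so
    correlation equals convolution), then min-max normalise into [0, 1]."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    f5 = np.exp(-(x ** 2 + y ** 2) / s) / s
    h, w = depth.shape
    padded = np.pad(depth, half)             # zero padding assumed
    g5 = np.zeros_like(depth, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            g5[i, j] = np.sum(padded[i:i + size, j:j + size] * f5)
    lo, hi = g5.min(), g5.max()
    return (g5 - lo) / (hi - lo) if hi > lo else np.zeros_like(g5)

depth = np.tile(np.arange(20.0), (20, 1))    # toy horizontal depth ramp
result = smooth_and_normalise(depth)         # final depth map in [0, 1]
```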
By introducing a visual model into depth extraction, the method of the invention makes the computed depth more accurate.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. Schematic uses of these terms in this specification do not necessarily refer to the same embodiment or example, and the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the invention; the scope of the invention is defined by the claims and their equivalents.

Claims (13)

1. A depth extraction method based on a visual model, characterized in that it comprises the following steps:
Step 1: convert the color signal of the video image into a grey-scale signal;
Step 2: compute the probability that two adjacent pixels lie at the same depth;
Step 3: compute the depth values of all pixels;
Step 4: filter and normalize the computed depth values of all pixels.
2. The depth extraction method based on a visual model as claimed in claim 1, characterized in that the color signal of the video image is converted into a grey-scale signal as follows: the grey-scale signal of each pixel of the video image is taken as I(u, v) = (R(u, v) + G(u, v) + B(u, v))/3, where R, G and B are the three channel values of the pixel in row u and column v of the video image when it is a color signal.
3. The depth extraction method based on a visual model as claimed in claim 1, characterized in that the probability that two adjacent pixels lie at the same depth is computed according to the HMAX visual model, which comprises the following steps:
a: at the S1 layer, filter the video image;
b: at the C1 layer, combine the outputs of the S1 layer and perform filtering and normalization;
c: at the S2 layer, filter the image;
d: at the C2 layer, combine the outputs of the S2 layer and perform filtering and normalization.
4. The depth extraction method based on a visual model as claimed in claim 3, characterized in that in step a the video image is filtered as follows: for the eight combinations of the four directions θ = 0, θ = π/4, θ = π/2, θ = 3π/4 with the two phases φ = 0 and φ = −π/2, the original image matrix is convolved with the rectangular window defined by f1_{θ,φ}(x, y) = exp(−((x·cosθ + y·sinθ)² + (−x·sinθ + y·cosθ)²)/10) × cos(π(x·cosθ + y·sinθ)/2 + φ); the two-dimensional convolutions I * f1_{θ,φ} give eight matrices S1_{θ,φ}; said rectangular window f1_{θ,φ} has size 5×5 and its center point has coordinate (0, 0).
5. The depth extraction method based on a visual model as claimed in claim 3 or 4, characterized in that in step b the filtering and normalization are performed as follows: first, the results for the two phases of each of the four directions in step a are combined by the square-weighted combination g1_θ = sqrt(S1_{θ,0}² + S1_{θ,−π/2}²); then each g1_θ is convolved with the rectangular window defined by f2 = g2(exp(−(x² + y²)/10)/10 − exp(−(x² + y²)/5)/5); the two-dimensional convolutions g1_θ * f2 give four filtered matrices C1_θ, where g2(x) = x when x > 0 and g2(x) = 0 when x ≤ 0; f2 is chosen with size 15×15 and its center point has coordinate (0, 0).
6. The depth extraction method based on a visual model as claimed in claim 3, characterized in that in step c the image is filtered as follows: for the eight combinations of the four directions θ = 0, θ = π/4, θ = π/2, θ = 3π/4 with the two phases φ = −π/4 and φ = −3π/4, each image matrix C1_θ produced by step b is convolved with the rectangular window defined by f3_{θ,φ}(x, y) = exp(−((x·cosθ + y·sinθ)² + (−x·sinθ + y·cosθ)²)/10) × cos(π(x·cosθ + y·sinθ)/2 + φ); the two-dimensional convolutions C1_θ * f3_{θ,φ} give eight matrices S2_{θ,φ}; said rectangular window f3_{θ,φ} has size 5×5 and its center point has coordinate (0, 0).
7. The depth extraction method based on a visual model as claimed in claim 3 or 6, characterized in that in step d the filtering is performed as follows: first, the results for the two phases of each of the four directions in step c are combined by the square-weighted combination g3_θ = sqrt(S2_{θ,−π/4}² + S2_{θ,−3π/4}²); then each g3_θ is convolved with the rectangular window defined by f4 = g4(exp(−(x² + y²)/10)/10 − exp(−(x² + y²)/5)/5); the two-dimensional convolutions g3_θ * f4 perform the filtering and give four matrices C2_θ, where g4(x) = x when x > 0 and g4(x) = 0 when x ≤ 0; f4 is chosen with size 15×15 and its center point has coordinate (0, 0).
8. The depth extraction method based on a visual model as claimed in claim 3 or 7, characterized in that the four matrices C2_θ obtained in step d are normalized, converting each C2_θ into a probability, i.e. limiting its values to the interval (0, 1): each matrix C2_θ has h×w pixels; its values are sorted in descending order and P0 is taken as the (0.1·h·w)-th largest pixel value; each element is then computed as P_θ(x, y) = (C2_θ(x, y)/P0)², where 1 ≤ x ≤ h, 1 ≤ y ≤ w, h is the number of pixel rows of the image and w is the number of pixel columns; if P_θ(x, y) > 1, P_θ(x, y) is set to 1.
9. The depth extraction method based on a visual model as claimed in claim 1, characterized in that the depth values of all pixels are computed as follows: first, compute the depth values of the pixels in the first column and the first row; then compute the depth values of the pixels in every row and column other than said first column and first row.
10. The depth extraction method based on a visual model as claimed in claim 9, characterized in that the depth values of the pixels in the first column and the first row are computed as follows:
The depth of the top-left pixel is set to 0, i.e. depth(1, 1) = 0;
The depth values of the pixels in the first column are:
depth(k1,1)=depth(k1-1,1)+λ|I(k1,1)-I(k1-1,1)|,
The depth values of the pixels in the first row are:
depth(1,k2)=depth(1,k2-1)+λ|I(1,k2)-I(1,k2-1)|,
where 1 < k1 ≤ h, 1 < k2 ≤ w, h is the number of pixel rows of the image, w is the number of pixel columns, I(x, y) denotes the grey value at row x and column y, and λ is the depth coefficient.
11. The depth extraction method based on a visual model as claimed in claim 10, characterized in that the depth coefficient λ takes the value 0.5.
12. The depth extraction method based on a visual model as claimed in claim 9, characterized in that the depth values of the pixels in every row and column other than said first column and first row are computed as:
depth(x, y) = (depth(x−1, y−1) + P_{3π/4}(x, y)·|I(x, y) − I(x−1, y−1)|)/4 + (depth(x, y−1) + P_{π/2}(x, y)·|I(x, y) − I(x, y−1)|)/4 + (depth(x+1, y−1) + P_{π/4}(x, y)·|I(x, y) − I(x+1, y−1)|)/4 + (depth(x−1, y) + P_{0}(x, y)·|I(x, y) − I(x−1, y)|)/4,
where 1 < x ≤ h, 1 < y ≤ w, h is the number of pixel rows of the image and w is the number of pixel columns.
13. The depth extraction method based on a visual model as claimed in claim 1, characterized in that the computed depth values of all pixels are filtered and normalized as follows:
First, the depth-value matrix depth of all pixels is convolved with the rectangular window defined by f5 = exp(−(x² + y²)/5)/5, the two-dimensional convolution giving g5 = depth * f5; f5 is chosen with size 15×15 and its center point has coordinate (0, 0);
Then g5 is normalized to the interval [0, 1]: the maximum max and minimum min of all elements of g5 are found and result = (g5 − min)/(max − min) is computed, giving the final depth map.
CN201310023799.1A 2013-01-22 2013-01-22 Depth extraction method based on a visual model Expired - Fee Related CN103093469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310023799.1A CN103093469B (en) 2013-01-22 2013-01-22 Depth extraction method based on a visual model


Publications (2)

Publication Number Publication Date
CN103093469A true CN103093469A (en) 2013-05-08
CN103093469B CN103093469B (en) 2015-07-29

Family

ID=48206000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310023799.1A Expired - Fee Related CN103093469B (en) 2013-01-22 2013-01-22 Depth extraction method based on a visual model

Country Status (1)

Country Link
CN (1) CN103093469B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550690A (en) * 2015-12-14 2016-05-04 Nanjing University of Posts and Telecommunications Optimization method based on the Stentiford visual model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100014781A1 (en) * 2008-07-18 2010-01-21 Industrial Technology Research Institute Example-Based Two-Dimensional to Three-Dimensional Image Conversion Method, Computer Readable Medium Therefor, and System
CN101702781A (en) * 2009-09-07 2010-05-05 无锡景象数字技术有限公司 Method for converting 2D to 3D based on optical flow method
CN102098528A (en) * 2011-01-28 2011-06-15 清华大学 Method and device for converting planar image into stereoscopic image
CN102790896A (en) * 2012-07-19 2012-11-21 彩虹集团公司 Conversion method for converting 2D (Two Dimensional) into 3D (Three Dimensional)
CN102883174A (en) * 2012-10-10 2013-01-16 彩虹集团公司 2D (two-dimensional)-to-3D (three dimensional) conversion method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
MAXIMILIAN RIESENHUBER ET AL: "Hierarchical models of object recognition in cortex", NATURE NEUROSCIENCE, vol. 2, no. 11, 30 November 1999 (1999-11-30), pages 1019-1025 *
XI YAN ET AL: "Depth Map Generation for 2D-to-3D Conversion by Limited User Inputs and Depth Propagation", 3DTV CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO, 16 May 2011 (2011-05-16), pages 1-4 *
CAO YANG ET AL: "Highly realistic simulation of fog-degraded images of natural scenes", JOURNAL OF SYSTEM SIMULATION, vol. 26, no. 6, 30 June 2012 (2012-06-30), pages 1247-1253 *
ZHAO HONGWEI ET AL: "Contour extraction based on the HMAX model and non-classical receptive field inhibition", JOURNAL OF JILIN UNIVERSITY (ENGINEERING AND TECHNOLOGY EDITION), vol. 42, no. 1, 31 January 2012 (2012-01-31), pages 128-133 *
ZOU XIAODONG: "Research and software implementation of 2D-to-3D video conversion algorithms", CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, no. 1, 15 January 2013 (2013-01-15), pages 138-1224 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550690A (en) * 2015-12-14 2016-05-04 Nanjing University of Posts and Telecommunications Optimization method based on the Stentiford visual model
CN105550690B (en) * 2015-12-14 2018-09-11 Nanjing University of Posts and Telecommunications An optimization method based on the Stentiford visual model

Also Published As

Publication number Publication date
CN103093469B (en) 2015-07-29


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150729
