CN102081740B - 3D image classification method based on scale invariant features - Google Patents

3D image classification method based on scale invariant features

Info

Publication number
CN102081740B
CN102081740B · CN 201110053750 · CN201110053750A · CN102081740A
Authority
CN
China
Prior art keywords
prime
vector
view
target signature
epsiv
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110053750
Other languages
Chinese (zh)
Other versions
CN102081740A (en)
Inventor
Jie Tian (田捷)
Lijun Bai (白丽君)
Hu Wang (王虎)
Wensheng Zhang (张文生)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN 201110053750 priority Critical patent/CN102081740B/en
Publication of CN102081740A publication Critical patent/CN102081740A/en
Application granted granted Critical
Publication of CN102081740B publication Critical patent/CN102081740B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a 3D image classification method based on scale-invariant features. The method comprises the following steps: extracting scale-invariant features from labeled sample images and unlabeled sample images that have undergone 3D image preprocessing, to obtain sample features and target features respectively; searching for the positive occurrences of each target feature to generate its positive-occurrence set, i.e. the set of sample features satisfying both geometric similarity and appearance similarity; computing an estimate of the conditional probability density of each target feature with a kernel density estimation algorithm; and computing the likelihood ratio of each unlabeled sample image with a Bayes classifier from the probability-density estimates of its target features, then classifying the image according to its likelihood ratio. Experiments on a public data set show that the method provided by the invention is effective.

Description

A 3D image classification method based on scale-invariant features
Technical field
The invention belongs to the field of image processing, and specifically relates to a statistical classification method for 3D images based on scale-invariant features.
Background
3D images can clearly express the internal organizational structure and spatial texture of an object, and are widely used in fields such as medical image analysis and geological analysis. Using 3D images for statistical classification — computing how likely an unlabeled sample image is to have a certain attribute, or automatically determining the category attribute of an unlabeled sample image — is an important application of computer-aided 3D image analysis.
The 3D image patterns of objects are highly variable, which poses a great challenge for the classification of such images. Traditional classification methods fall mainly into two types: region-of-interest (ROI) methods and voxel-based methods. ROI methods divide samples and targets into multiple target regions according to prior knowledge of the object structure, and classify the target accordingly; voxel-based methods apply complex nonlinear registration to establish, as far as possible, an exact correspondence between individuals, and then use each voxel of the image as a classification cue. Both approaches assume a one-to-one correspondence between the internal structures of target and samples: the former assumes that the a-priori image regions exist in every target and can be segmented accurately; the latter assumes that the voxels correspond one to one after nonlinear registration. In many situations, however, such assumptions are unreasonable. Under noise and certain structural pattern variations, the ROI boundaries of 3D images become blurred, so region segmentation is inaccurate, residual errors appear in the inter-region correspondence, and classification performance declines. Likewise, nonlinear registration cannot resolve some complex 3D image patterns, so the voxel correspondences between the samples containing such patterns are ambiguous or nonexistent, which also reduces classification performance. In general, when the 3D image patterns or their variations are complex and no prior knowledge is available, the one-to-one assumption loses part of the useful class-related information and introduces noise, so the best classification performance cannot be reached.
3D image classification based on scale-invariant features is a newer approach. It first extracts the scale-invariant features of a 3D image to characterize image patterns at different scales; it then builds a statistical model for these locally occurring features and quantitatively tests the likelihood ratio of the features partially present in each sample; finally, it obtains model features that explicitly discriminate between classes, approximates the target-feature likelihood ratio by the model-feature likelihood ratio, and classifies accordingly. This approach complements the traditional methods, and is particularly effective when the image patterns are very complex and no prior knowledge is available. It still has shortcomings, however. First, approximating the conditional-probability-density estimate of a target feature by the density estimate of a model feature introduces a certain error. Second, the method computes the feature likelihood ratios by counting and does not exploit the degree-of-similarity information between features, so it is easily disturbed by noise.
Summary of the invention
To overcome these deficiencies of the prior art, the object of the invention is to design a 3D image classification method with high classification accuracy and strong generalization performance.
To this end, the invention proposes a 3D image classification method based on scale-invariant features, comprising the following steps:
Step Sa: extract scale-invariant features from the labeled sample images and the unlabeled sample image after 3D image preprocessing, obtaining sample features and target features respectively;
Step Sb: search for the positive occurrences of each target feature and generate its positive-occurrence set, i.e. the set of sample features satisfying both geometric similarity and appearance similarity;
Step Sc: use a kernel density estimation algorithm to compute the estimate of the conditional probability density of each target feature;
Step Sd: from the probability-density estimates of the target features, compute the likelihood ratio of the unlabeled sample image with a Bayes classifier, and classify according to the likelihood ratio.
For the 3D image classification problem, the invention extracts scale-invariant features, searches for positive feature occurrences, and uses kernel density estimation to compute the conditional-probability-density estimates of the scale-invariant features, improving the accuracy and robustness of the density estimates. Experimental results on an MRI image set show that the 3D image classification method based on scale-invariant features of the invention effectively improves the classification performance of 3D images.
Description of drawings
Fig. 1 is the classification block diagram of the 3D image classification method of the invention;
Fig. 2 compares the classification ROC curves of the two methods on test-data subset 1;
Fig. 3 compares the classification ROC curves of the two methods on test-data subset 2;
Fig. 4 compares the classification ROC curves of the two methods on test-data subset 3.
Embodiment
To make the object, technical scheme, and advantages of the invention clearer, the invention is explained further below through specific embodiments and with reference to the accompanying drawings.
With reference to Fig. 1, the 3D image classification method of the invention determines the class of an unlabeled sample image from the labeled sample images. The concrete steps are as follows:
Step Sa: extract scale-invariant features from the labeled sample images and the unlabeled sample image after 3D image preprocessing, obtaining sample features and target features respectively.
1. 3D image preprocessing
Because of differences between subjects or in imaging conditions, 3D images differ in scale and position, and this common difference is unrelated to the group attribute of interest. While preserving the 3D image details, the preprocessing performs an affine-registration mapping between each 3D image and a template, eliminating the common inter-image differences that are unrelated to the group attribute of interest. The template can be a standard image for this type of 3D image, or a fairly average one of the labeled sample images.
2. Scale-invariant feature extraction
In general, each 3D image contains a large number of scale-invariant features. Each scale-invariant feature is written f = {g, a}, g = {x, σ}, where g denotes the four-dimensional geometric position vector in scale space, x the point-coordinate vector in the 3D image, σ the feature scale, and a the appearance descriptor vector of the scale-invariant feature f. Sample features and target features are extracted from the labeled sample images and the unlabeled sample image respectively, as follows:
First, build the four-dimensional scale-space discrete data structure of the 3D image and take the extrema of the difference-of-Gaussians pyramid as the candidate point set. After interpolation, low-contrast filtering, and removal of unstable surface and tubular points, the geometric position vectors of the remaining extrema are obtained; the neighborhood of each geometric position vector is then sampled to obtain its appearance descriptor vector.
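The candidate-point detection above can be sketched as follows: a minimal difference-of-Gaussians extremum search over a 3D volume with a low-contrast filter. The interpolation and the surface/tubular-point filters of the patent are omitted, and the function name, the sample scale levels, and the contrast threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema_3d(volume, sigmas=(1.0, 1.6, 2.56, 4.1), contrast_thresh=0.02):
    """Candidate keypoints: local extrema of a 3D difference-of-Gaussians stack
    that survive a simple low-contrast filter."""
    blurred = [gaussian_filter(volume.astype(float), s) for s in sigmas]
    dog = np.stack([b2 - b1 for b1, b2 in zip(blurred[:-1], blurred[1:])])
    # a point is a candidate if it is the max or min of its 3x3x3x3
    # scale-space neighbourhood ...
    is_max = dog == maximum_filter(dog, size=3)
    is_min = dog == minimum_filter(dog, size=3)
    # ... and its DoG response is strong enough (low-contrast filtering)
    strong = np.abs(dog) > contrast_thresh
    cand = np.argwhere((is_max | is_min) & strong)  # rows: (scale, z, y, x)
    # return geometric position vectors as (coordinates, scale) pairs
    return [(tuple(int(v) for v in c[1:]), sigmas[int(c[0])]) for c in cand]
```

A Gaussian blob placed in an otherwise empty volume yields a DoG extremum at (or very near) its center, which is how the sketch can be checked.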
Let S_l (l = 1, …, L) denote the labeled sample images and S_{L+1} the unlabeled sample image. Let f_j (j = 1, …, N) denote the sample features extracted from the labeled images, with j a unified numbering of all sample features, and let f'_i (i = 1, …, M) denote the target features extracted from S_{L+1}, with i a unified numbering of all target features; the subscripts are not directly related to one another.
3. Because the number of sample features is large, the sample features are indexed in advance here to speed up the searches of the subsequent steps, making the features ordered along each dimension. Concretely:
Each sample feature f_j is mapped, according to its four-dimensional geometry vector g_j, into a common D1 × D2 × D3 × D4 discrete space, and the index records at the corresponding position of the discrete space the numbers of the sample features mapped there; D1, D2, D3, D4 are the sizes of the dimensions of the feature space. In this embodiment, the coordinate vector x is mapped linearly, so D1, D2, D3 are the dimensions of the original 3D image; the scale σ is mapped linearly after taking its logarithm, and D4 is the number of levels of the difference-of-Gaussians pyramid.
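As a sketch, such an index can be a map from quantized geometry cells to lists of feature numbers. The dictionary-based layout, the field names, and the log-base-2 scale binning are implementation assumptions, not specified in the patent:

```python
import math
from collections import defaultdict

def build_feature_index(features):
    """Index sample features by their quantized 4D geometry vector g = (x, sigma):
    coordinates are binned linearly (here rounded to the voxel), the scale is
    binned after taking the logarithm."""
    index = defaultdict(list)
    for j, f in enumerate(features):
        cell = (tuple(int(round(c)) for c in f["x"])
                + (int(round(math.log2(f["sigma"]))),))
        index[cell].append(j)  # record the feature number at this cell
    return index
```

A later geometric-similarity search then only needs to visit the cells that overlap the query neighborhood instead of scanning all N features.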
The individual classification of the unlabeled sample image S_{L+1} comprises the following steps:
Step Sb: search for the positive occurrences of each target feature and generate its positive-occurrence set, i.e. the set of sample features satisfying geometric similarity and appearance similarity. For each target feature f'_i, the sample features satisfying the following conditions are searched for as its positive occurrences:
\|x_{i_k} - x_i'\| \le \epsilon_x' + 2\,\xi_i', \quad |\ln\sigma_{i_k} - \ln\sigma_i'| \le \epsilon_\sigma', \quad \|a_{i_k} - a_i'\| \le \epsilon_{a_i}'    (1)
where ‖·‖ denotes the Euclidean norm of a vector; x denotes the point-coordinate vector in the 3D image, σ the feature scale, and a the appearance descriptor vector; the subscripts i and k number the target feature and its positive occurrences respectively; x_{i_k}, σ_{i_k}, and a_{i_k} denote the point-coordinate vector, feature scale, and appearance descriptor vector of the sample feature numbered i_k; x'_i, σ'_i, and a'_i denote the point-coordinate vector, feature scale, and appearance descriptor vector of the target feature f'_i; ε'_x and ε'_σ are the coordinate part and the scale part of the geometric-similarity threshold, corresponding to the size of the residual differences between the preprocessed 3D images; ξ'_i is the sampling rate of the difference-of-Gaussians pyramid level corresponding to f'_i; and ε'_{a_i} is the appearance-similarity threshold of f'_i.
Here the sample-feature index built above is used: the discrete-space positions where positive occurrences of the target feature could appear are determined first; the features recorded at those positions are then collected into a set of geometrically similar sample features; finally, the sample features satisfying formula (1) are identified within this set. This greatly improves the search efficiency.
The set of sample features satisfying condition (1), {f_{i_k}}, is the positive-occurrence set of feature f'_i.
To obtain an error bound that reflects appearance similarity under the premise of geometric similarity, the appearance-similarity threshold is determined automatically in a feature-dependent way:
ϵ a i ' = sup { ϵ a ∈ [ 0,1 ) : | A i ( ϵ a ) ∩ G i | ≥ | A i ( ϵ a ) - G i | } - - - ( 2 )
where sup{·} denotes the supremum of a set, ε_a a candidate value of ε'_{a_i}, G_i the set of geometrically similar sample features, and A_i(ε_a) the set of sample features satisfying appearance similarity for the given ε_a:

G_i = \{f_{i_k} : \|x_{i_k} - x_i'\| \le \epsilon_x' + 2\,\xi_i' \,\wedge\, |\ln\sigma_{i_k} - \ln\sigma_i'| \le \epsilon_\sigma'\}
A_i(\epsilon_a) = \{f_{i_k} : \|a_{i_k} - a_i'\| \le \epsilon_a\}
Step Sc: use a kernel density estimation algorithm to compute the estimate of the conditional probability density of each target feature.
According to the group labels of the labeled sample images they come from, the positive-occurrence sets obtained in step Sb are further divided into the positive-occurrence set of each group. Let the positive-occurrence set of f'_i for group C be {f_{i_k} : k = 1, …, N_i(C)},
where N_i(C) is the number of positive occurrences. The estimate of the probability density of each target feature f'_i, conditioned on the given transform T and group C, is computed by a Gaussian-kernel density estimation method:
\hat{p}(f_i' \mid C, T) = \frac{1}{N_C} \sum_{k=1}^{N_i(C)} \frac{1}{(2\pi h^2)^{d/2}} \exp\left(-\frac{\|a_{i_k} - a_i'\|^2}{2\,(\epsilon_{a_i}')^2 h^2}\right)    (3)
where C is the given group; T is the preprocessing transform; N_C is the number of labeled sample images in group C; N_i(C) is the number of positive occurrences; h is the bandwidth of the kernel density estimator, which controls the amount of smoothing of the probability density function; d is the length of the appearance descriptor vector; a_{i_k} is the appearance descriptor vector of the sample feature numbered i_k; and a'_i is the appearance descriptor vector of the target feature f'_i.
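Formula (3) translates directly into code; the sketch below evaluates the estimate for a single target feature given the descriptors of its positive occurrences in group C (argument names are illustrative):

```python
import numpy as np

def kde_estimate(target_a, occ_descs, eps_a, h, n_class_images):
    """Gaussian-kernel estimate of formula (3) for one target feature:
    occ_descs are the descriptors a_{i_k} of its positive occurrences in
    group C, eps_a is the appearance threshold of the target feature,
    h the bandwidth, and n_class_images is N_C."""
    target_a = np.asarray(target_a, dtype=float)
    d = target_a.size                              # descriptor length
    norm = (2.0 * np.pi * h**2) ** (-d / 2.0)      # Gaussian normalization
    total = sum(norm * np.exp(-np.linalg.norm(np.asarray(a) - target_a)**2
                              / (2.0 * eps_a**2 * h**2))
                for a in occ_descs)
    return total / n_class_images
```

Note that the kernel width in the exponent is scaled by the feature-dependent threshold ε'_{a_i}, so features with looser appearance neighborhoods get smoother density estimates.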
Step Sd: from the probability-density estimates of the target features, compute the likelihood ratio of the unlabeled sample with a Bayes classifier, and classify according to the likelihood ratio.
According to the Bayes principle, the class attribute of the unlabeled sample S_{L+1} is determined by its decision likelihood ratio (DLR), computed as:
\mathrm{DLR} = \prod_{i=1}^{M} \frac{\hat{p}(f_i' \mid C, T)}{\hat{p}(f_i' \mid \bar{C}, T)}    (4)
where \bar{C} denotes the complement of C.
The likelihood ratio DLR integrates the probability distributions of all the individual features of the 3D image, so it reflects the group attribute of the 3D image as a whole. The larger the DLR, the more likely the unlabeled sample image belongs to C, and vice versa. Given a likelihood-ratio threshold, classification proceeds as follows: if the likelihood ratio of the unlabeled sample image exceeds the threshold, it is classified into C; otherwise, the unlabeled image is judged not to belong to group C. By adjusting this threshold according to practical requirements, a balance can be reached between the true-positive rate and the false-positive rate, reducing the overall risk of the classification decision.
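The product of formula (4) over many features can under- or overflow, so a sketch would accumulate it in log space; the density floor below is an implementation choice, not part of the patent:

```python
import math

def decision_likelihood_ratio(p_c, p_not_c, floor=1e-300):
    """Formula (4): product over the M target features of the ratio of the
    two conditional-density estimates, accumulated in log space for numerical
    stability; the floor keeps a zero estimate from collapsing the product."""
    log_dlr = sum(math.log(max(pc, floor)) - math.log(max(pn, floor))
                  for pc, pn in zip(p_c, p_not_c))
    return math.exp(log_dlr)

def classify(p_c, p_not_c, threshold=1.0):
    """Assign the unlabeled image to group C when its DLR exceeds the threshold."""
    return decision_likelihood_ratio(p_c, p_not_c) > threshold
```

Raising `threshold` trades true positives for fewer false positives, which is exactly the trade-off the text describes.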
Parameter selection. The method has three tunable parameters: the coordinate part ε'_x and the scale part ε'_σ of the geometric-similarity threshold in step Sb, and the kernel-density-estimation bandwidth h in step Sc. All three parameters can be set from experience. To tailor them further to the target group, the classification method of the invention selects the optimal values of ε'_x, ε'_σ, and h by leave-one-out cross-validation: first, a set of candidate values is provided for each parameter; then, each combination of the parameters is tested by a leave-one-out classification run using grid search; finally, the parameter combination that maximizes the area under the receiver operating characteristic curve is chosen as the optimal value of the parameters.
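The grid search described above can be sketched generically; `loo_auc` is assumed to be a caller-supplied function that runs the leave-one-out classification for one parameter setting and returns the area under the ROC curve:

```python
from itertools import product

def select_parameters(eps_x_grid, eps_sigma_grid, h_grid, loo_auc):
    """Exhaustive grid search over the three tunable parameters, keeping the
    combination with the largest leave-one-out AUC."""
    best_params, best_auc = None, -1.0
    for eps_x, eps_sigma, h in product(eps_x_grid, eps_sigma_grid, h_grid):
        auc = loo_auc(eps_x, eps_sigma, h)
        if auc > best_auc:
            best_params, best_auc = (eps_x, eps_sigma, h), auc
    return best_params, best_auc
```

The cost is the product of the grid sizes times one full leave-one-out run, which is why the candidate grids are kept small in practice.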
The effect of the classification method of the invention can be illustrated by experiments on a public data set of MRI brain 3D images:
(1) Data set and compared methods
To demonstrate the effect of the invention, this embodiment uses the OASIS MRI public data set (http://www.oasis-brains.org/; cross-sectional study, skull-stripped), from which three subsets are derived according to age range and Clinical Dementia Rating (CDR) level (shown in Table 1).
Table 1: Data subsets
On each data subset, the method of the invention (method A) and the prior-art classification method based on local features (method B) are compared under cross-validation; the receiver operating characteristic (ROC) curves of the classifiers and their areas under the curve (AUC) are obtained, and the ROC curve and the AUC are taken as the measures of classifier performance.
(2) Experimental results
The classification ROC curves of the two methods on the three data subsets are shown in Figs. 2-4: Fig. 2 compares the classification ROC curves of the two methods on test-data subset 1, Fig. 3 on test-data subset 2, and Fig. 4 on test-data subset 3. The ROC curve of method A lies above that of method B over most of the threshold range. The AUC comparison is shown in Table 2; the AUC values of method A are all higher than those of method B.
The experimental results show that the 3D image classification method based on scale-invariant features of the invention effectively improves the classification performance of 3D images.
Table 2: Comparison of the AUC values of the method of the invention (A) and the latest prior-art classification method based on local features (B)
The above is merely an embodiment of the invention, but the protection scope of the invention is not limited thereto. Any variation or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the invention shall be encompassed within the protection scope of the claims of the invention.

Claims (8)

1. A 3D image classification method based on scale-invariant features, characterized by comprising the following steps:
Step Sa: extract scale-invariant features from the labeled sample images and the unlabeled sample image after 3D image preprocessing, obtaining sample features and target features respectively;
Step Sb: search for the positive occurrences of each target feature and generate its positive-occurrence set, i.e. the set of sample features satisfying geometric similarity and appearance similarity; for each target feature, the sample features satisfying the following conditions are searched for as its positive occurrences:
\|x_{i_k} - x_i'\| \le \epsilon_x' + 2\,\xi_i', \quad |\ln\sigma_{i_k} - \ln\sigma_i'| \le \epsilon_\sigma', \quad \|a_{i_k} - a_i'\| \le \epsilon_{a_i}'
where ‖·‖ denotes the Euclidean norm of a vector; x denotes the point-coordinate vector in the 3D image, σ the feature scale, and a the appearance descriptor vector; the subscripts i and k number the target feature and its positive occurrences respectively; x_{i_k}, σ_{i_k}, and a_{i_k} denote the point-coordinate vector, feature scale, and appearance descriptor vector of the sample feature numbered i_k; x'_i, σ'_i, and a'_i denote the point-coordinate vector, feature scale, and appearance descriptor vector of the target feature f'_i; ε'_x and ε'_σ are the coordinate part and the scale part of the geometric-similarity threshold, corresponding to the size of the residual differences between the preprocessed 3D images; ξ'_i is the sampling rate of the difference-of-Gaussians pyramid level corresponding to f'_i; and ε'_{a_i} is the appearance-similarity threshold of f'_i;
Step Sc: use a kernel density estimation algorithm to compute the estimate of the conditional probability density of each target feature;
Step Sd: from the probability-density estimates of the target features, compute the likelihood ratio of the unlabeled sample image with a Bayes classifier, and classify according to the likelihood ratio.
2. The 3D image classification method based on scale-invariant features of claim 1, characterized in that the 3D image preprocessing, while preserving the 3D image details, performs an affine-registration mapping between the 3D image and a template.
3. The 3D image classification method based on scale-invariant features of claim 1, characterized in that the steps of extracting the scale-invariant features are:
First, build the four-dimensional scale-space discrete data structure of the 3D image and take the extrema of the difference-of-Gaussians pyramid as the candidate point set; then, after interpolation, low-contrast filtering, and removal of unstable surface and tubular points, obtain the geometric position vectors of the remaining candidate points; finally, sample the neighborhood of each geometric position vector to obtain its appearance descriptor vector.
4. The 3D image classification method based on scale-invariant features of claim 1, characterized in that a sample-feature index is used to improve the efficiency of the search for the positive occurrences of the target features, wherein the sample-feature index maps each sample feature, according to its geometry vector, into a common four-dimensional discrete space, and records at the corresponding position of the discrete space the numbers of the sample features mapped there.
5. The 3D image classification method based on scale-invariant features of claim 1, characterized in that, to obtain an error bound that reflects appearance similarity under the premise of geometric similarity, the appearance-similarity threshold of the target feature f'_i is determined in a feature-dependent way:

\epsilon_{a_i}' = \sup\{\epsilon_a \in [0,1) : |A_i(\epsilon_a) \cap G_i| \ge |A_i(\epsilon_a) - G_i|\}

where sup{·} denotes the supremum of a set, ε_a a candidate value of ε'_{a_i}, G_i the set of geometrically similar sample features, and A_i(ε_a) the set of sample features satisfying appearance similarity for the given ε_a:

G_i = \{f_{i_k} : \|x_{i_k} - x_i'\| \le \epsilon_x' + 2\,\xi_i' \,\wedge\, |\ln\sigma_{i_k} - \ln\sigma_i'| \le \epsilon_\sigma'\}
A_i(\epsilon_a) = \{f_{i_k} : \|a_{i_k} - a_i'\| \le \epsilon_a\}.
6. The 3D image classification method based on scale-invariant features of claim 1, characterized in that the estimate \hat{p}(f_i' \mid C, T) of said conditional probability density is expressed as follows:

\hat{p}(f_i' \mid C, T) = \frac{1}{N_C} \sum_{k=1}^{N_i(C)} \frac{1}{(2\pi h^2)^{d/2}} \exp\left(-\frac{\|a_{i_k} - a_i'\|^2}{2\,(\epsilon_{a_i}')^2 h^2}\right)

where C is the given group, T the preprocessing transform, N_C the number of labeled sample images in group C, N_i(C) the number of positive occurrences, h the kernel-density-estimation bandwidth, d the length of the appearance descriptor vector, a_{i_k} the appearance descriptor vector of the sample feature numbered i_k, and a'_i the appearance descriptor vector of the target feature f'_i.
7. The 3D image classification method based on scale-invariant features of claim 1, characterized in that leave-one-out cross-validation is used to select the optimal values of the coordinate part ε'_x and the scale part ε'_σ of the geometric-similarity threshold.
8. The 3D image classification method based on scale-invariant features of claim 6, characterized in that leave-one-out cross-validation is used to select the optimal value of the kernel-density-estimation bandwidth h.
CN 201110053750 2011-03-07 2011-03-07 3D image classification method based on scale invariant features Expired - Fee Related CN102081740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110053750 CN102081740B (en) 2011-03-07 2011-03-07 3D image classification method based on scale invariant features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110053750 CN102081740B (en) 2011-03-07 2011-03-07 3D image classification method based on scale invariant features

Publications (2)

Publication Number Publication Date
CN102081740A CN102081740A (en) 2011-06-01
CN102081740B true CN102081740B (en) 2012-12-12

Family

ID=44087695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110053750 Expired - Fee Related CN102081740B (en) 2011-03-07 2011-03-07 3D image classification method based on scale invariant features

Country Status (1)

Country Link
CN (1) CN102081740B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701499B (en) * 2015-12-31 2019-06-18 中国科学院深圳先进技术研究院 A kind of image processing method for the classification of brain MRI image
CN106250897A (en) * 2016-07-27 2016-12-21 合肥高晶光电科技有限公司 One carries out color selection method according to eigenvalue
CN107101972B (en) * 2017-05-24 2019-10-15 福州大学 A kind of near infrared spectrum quickly detects radix tetrastigme place of production method
CN110378330B (en) * 2018-04-12 2021-07-13 Oppo广东移动通信有限公司 Picture classification method and related product
CN110322968A (en) * 2019-06-24 2019-10-11 北京科技大学 A kind of feature selection approach and device of disease category medical data
JP7438744B2 (en) * 2019-12-18 2024-02-27 株式会社東芝 Information processing device, information processing method, and program

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757309A (en) * 1996-12-18 1998-05-26 The United States Of America As Represented By The Secretary Of The Navy Spatial frequency feature extraction for a classification system using wavelets

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757309A (en) * 1996-12-18 1998-05-26 The United States Of America As Represented By The Secretary Of The Navy Spatial frequency feature extraction for a classification system using wavelets

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 2004; full text. *
Matthew Toews et al. Feature-based morphometry: Discovering group-related anatomical patterns. NeuroImage, 2009; p. 4 left column line 31, p. 2 right column line 40, p. 5 left column line 27, p. 5 right column paragraphs 1-2, p. 3 right column line 31. *

Also Published As

Publication number Publication date
CN102081740A (en) 2011-06-01

Similar Documents

Publication Publication Date Title
CN108665481B (en) Self-adaptive anti-blocking infrared target tracking method based on multi-layer depth feature fusion
CN102081740B (en) 3D image classification method based on scale invariant features
CN109389180A (en) A power equipment image-recognizing method and inspection robot based on deep learning
CN102096819B (en) Method for segmenting images by utilizing sparse representation and dictionary learning
CN103810704B (en) Based on support vector machine and the SAR image change detection of discriminative random fields
CN102509123B (en) Brain function magnetic resonance image classification method based on complex network
CN104778457A (en) Video face identification algorithm on basis of multi-instance learning
CN110796667B (en) Color image segmentation method based on improved wavelet clustering
CN103617427B (en) Classification of Polarimetric SAR Image method
CN103366180A (en) Cell image segmentation method based on automatic feature learning
CN105718866A (en) Visual target detection and identification method
CN101986295B (en) Image clustering method based on manifold sparse coding
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN108932518A (en) A kind of feature extraction of shoes watermark image and search method of view-based access control model bag of words
CN110516525A (en) SAR image target recognition method based on GAN and SVM
CN102855488A (en) Three-dimensional gesture recognition method and system
CN109815973A (en) A kind of deep learning method suitable for the identification of fish fine granularity
CN105825201A (en) Moving object tracking method in video monitoring
CN104134073B (en) One kind is based on the normalized remote sensing image list class sorting technique of a class
CN103247052A (en) Image segmentation algorithm for local region characteristics through nonsubsampled contourlet transform
CN104282012A (en) Wavelet domain based semi-reference image quality evaluating algorithm
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN108280417A (en) A kind of finger vena method for quickly identifying
CN103793913A (en) Spectral clustering image segmenting method combined with mean shift
Zografos et al. Sparse motion segmentation using multiple six-point consistencies

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121212

CF01 Termination of patent right due to non-payment of annual fee