WO2007047665A1 - Modeling micro-structure for feature extraction - Google Patents

Modeling micro-structure for feature extraction

Info

Publication number
WO2007047665A1
Authority
WO
WIPO (PCT)
Prior art keywords
micro
image
patterns
block
recited
Prior art date
Application number
PCT/US2006/040536
Other languages
French (fr)
Inventor
Qiong Yang
Dian Gong
Xiaoou Tang
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Priority to CN2006800378023A priority Critical patent/CN101283379B/en
Publication of WO2007047665A1 publication Critical patent/WO2007047665A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/422 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation, for representing the structure of the pattern or shape of an object therefor
    • G06V10/426 - Graphical representations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 - Selection of the most significant subset of features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/28 - Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/772 - Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries

Abstract

Exemplary systems and methods use micro-structure modeling of an image for extracting image features. The micro-structure in an image is modeled as a Markov Random Field, and the model parameters are learned from training images. Micro-patterns adaptively designed from the modeled micro-structure capture spatial contexts of the image. In one implementation, a series of micro-patterns based on the modeled micro-structure can be automatically designed for each block of the image, providing improved feature extraction and recognition because of adaptability to various images, various pixel attributes, and various sites within an image.

Description

MODELING MICRO-STRUCTURE FOR FEATURE EXTRACTION
BACKGROUND
[0001] Feature extraction is one of the most important issues in many vision
tasks, such as object detection and recognition, face detection and recognition,
glasses detection, and character recognition. Conventional micro-patterns, such as
edge, line, spot, blob, corner, and more complex patterns, are designed to describe
the spatial context of the image via local relationships between pixels and can be
used as filters or templates for finding and extracting features in an image. In other
words, a micro-pattern is a filter or template for recognizing a visual feature
portrayed by pixel attributes.
[0002] However, these conventional micro-patterns are intuitively user-
designed based on experience, and are also limited by being application-specific.
Thus, conventional micro-patterns fit for one task might be unfit for another. For
example, the "Four Directional Line Element" is successful in character recognition,
but does not achieve the same success in face recognition, since facial images are
much more complex than a character image and cannot be simply represented with
directional lines. Another problem is that in some cases, it is difficult for the user
to intuitively determine whether the micro-pattern is appropriate without trial-and-
error experimenting. A similar problem exists for Gabor features. Gabor features
have been used to recognize general objects and faces, but the parameters are mainly adjusted by experimental results, which costs a great deal of time and effort
to find appropriate micro-patterns and parameters. What is needed for better
feature extraction and recognition is a system to automatically generate micro-
patterns with strong linkages to one or more mathematical properties of the actual
image.
SUMMARY
[0003] Exemplary systems and methods use micro-structure modeling of an
image for extracting image features. The micro-structure in an image is modeled
as a Markov Random Field, and the model parameters are learned from training
images. Micro-patterns adaptively designed from the modeled micro-structure
capture spatial contexts of the image. In one implementation, a series of micro-
patterns based on the modeled micro-structure can be automatically designed for
each block of the image, providing improved feature extraction and recognition
because of adaptability to various images, various pixel attributes, and various sites
within an image.
[0004] This Summary is provided to introduce a selection of concepts in a
simplified form that are further described below in the Detailed Description. This
Summary is not intended to identify key features or essential features of the
claimed subject matter, nor is it intended to be used as an aid in determining the
scope of the claimed subject matter. BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Fig. 1 is a diagram of an exemplary Markov Random Field (MRF)-
based feature extraction system.
[0006] Fig. 2 is a block diagram of an exemplary feature extraction engine.
[0007] Fig. 3 is a diagram of exemplary neighborhood structure among pixel
attributes.
[0008] Fig. 4 is a diagram of exemplary micro-patterns.
[0009] Fig. 5 is a diagram of exemplary functional flow during MRF -based
feature extraction.
[00010] Fig. 6 is a flow diagram of an exemplary method of feature
extraction.
[00011] Fig. 7 is a flow diagram of an exemplary method of MRF -based
feature extraction.
DETAILED DESCRIPTION
Overview
[00012] This disclosure describes systems and methods for model-based
extraction of features from an image. These exemplary systems and methods
introduce the concept of automatic modeling of features — i.e., structure-based
features — during the process of feature extraction. This modeling-based feature extraction can be generally applied to many types of applications. Such
applications include, for example, face identification and glasses recognition.
[00013] In contrast, conventional feature extraction techniques rely on
finding predetermined features fashioned intuitively by experience, and each
conventional technique is usually very specific to one type of application. These
conventional feature patterns, contrived by experience or derived by trial-and-error,
are often painstaking to design, and they lack adaptability to various applications.
[00014] The exemplary micro-patterns described herein, however, are built
from spatial-dependencies that are modeled from mathematics and learned from
examples. The resulting exemplary micro-patterns are more generally applicable to
many applications that require feature extraction.
[00015] In one implementation, an exemplary system divides a pre-aligned
image into small blocks and assumes a homogenous model in each block. A series
of micro-patterns that best fits with the block are then designed for each block. The
occurrence probability of micro-patterns is computed site by site in each block to
form a sequence whose Modified Fast Fourier Transform (MFFT) features are
extracted to reflect the regional characteristics of its corresponding micro-patterns.
Then, all the MFFT features from all blocks of the image are concatenated together
to efficiently represent the image.
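As a concrete illustration of the block division just described, the following Python sketch (an illustration, not part of the original disclosure) slides an N x M window with L x K overlap over a pre-aligned image. The default block size of five by five pixels follows the example given later; the overlap values are hypothetical.

```python
import numpy as np

def divide_into_blocks(image, block_h=5, block_w=5, overlap_h=2, overlap_w=2):
    """Divide a pre-aligned image into small, overlapping blocks.

    The block size (N x M) and overlap (L x K) are illustrative defaults;
    blocks that would extend past the image border are simply dropped.
    """
    H, W = image.shape
    step_h, step_w = block_h - overlap_h, block_w - overlap_w
    blocks = []
    for r in range(0, H - block_h + 1, step_h):
        for c in range(0, W - block_w + 1, step_w):
            blocks.append(image[r:r + block_h, c:c + block_w])
    return blocks

# Example: a 55 x 51 pre-aligned face image yields a list of 5 x 5 blocks.
# face = np.random.rand(55, 51)
# blocks = divide_into_blocks(face)
```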
[00016] An exemplary image feature thus modeled has the following traits.
First, the exemplary image feature is a micro-structural feature. Compared with
holistic features, such as those from PCA (principal component analysis), the exemplary image feature can model the local spatial dependence and can be used to design
adaptive micro-patterns, while conventional holistic features extract global
characteristics of the image. Therefore, the exemplary image features are more
capable of capturing spatial context, which plays an important role in many vision
tasks, such as face recognition and character recognition.
[00017] The exemplary image feature modeled by micro-structure is used to
design adaptive micro-patterns. Compared with conventional feature extraction
methods based on micro-patterns, the adaptive micro-patterns are designed from the
extracted feature using the MRF model, rather than intuitively user-defined. The
type of micro-pattern to automatically design is learned from training samples, so
that the resulting micro-pattern is adaptive to different images, different attributes,
and different sites of an image.
[00018] The exemplary image features are also model-based. Compared with
conventional learning-based features from learning-based filters such as Local
Feature Analysis and Independent Component Analysis, the exemplary image
features can model local spatial context directly, and thereby give rise to finer and
more delicate micro-patterns than conventionally extracted features can.
Exemplary Environment
[00019] Fig. 1 shows an exemplary computing environment, a feature
extraction system 100, in which exemplary feature modeling and extraction can be
practiced. In one implementation, a computing device 102 hosts an application 104 in which feature extraction is used, such as face identification or glasses
detection. An exemplary feature extraction engine 106 performs the exemplary
micro-structure-based feature extraction to be described more fully below. In the
illustrated example of a feature extraction system 100, a facial image 108 is
captured by a digital camera 110. The image 108 is passed to the application 104
for face identification, via the exemplary feature extraction engine 106.
[00020] In one implementation, the feature extraction engine 106 divides the
image 108 into small overlapping visual blocks 112. Although an example block
112 is shown on the display 114, the blocks 112 are typically not displayed by an
application 104, but are used only for mathematical processing. A selected
attribute of the pixels of each block — that is, the image micro-structure — is
modeled as a Markov Random Field (MRF). Markov Random Fields are well-
suited for modeling spatial dependencies in an image. From the MRF model,
adaptive micro-patterns are defined. The parameters of the MRF model for each
block are obtained through learning from a set of aligned images. Thus, a
collection of generic image features is modeled in order to design adaptive micro-
patterns.
[00021] The feature extraction engine 106 also defines a fitness function, by
which a fitness index is computed to encode the image's local fitness to the
adaptive micro-patterns. Theoretical analysis and experimental results show that
such an exemplary feature extraction system 100 is both flexible and effective in
extracting features. [00022] Because the exemplary micro-patterns are adaptively designed
according to the spatial context of images, the micro-patterns are adaptive to
various images, various attributes, and various sites of an image. This enables the
adaptive micro-patterns to be used by the feature extraction engine 106 in many
different applications, such as face detection, face identification, glasses detection,
character recognition, object detection, object recognition, etc.
Exemplary Feature Extraction Engine
[00023] Fig. 2 shows the exemplary feature extraction engine 106 of Fig. 1 in
greater detail. The illustrated configuration of the feature extraction engine 106 is
only one implementation, and is meant to provide only one example arrangement
for the sake of overview. Many other arrangements of the illustrated components,
or similar components, are possible within the scope of the subject matter. The
illustrated lines and arrows are provided to suggest flow and emphasize connection
between some components. Even if no coupling line is illustrated between two
components, the illustrated components are generally in communication with each
other as needed because they are components of the same feature extraction engine
106. Such an exemplary feature extraction engine 106 can be executed in
hardware, software, combinations of hardware, software, firmware, etc.
[00024] In one implementation, the feature extraction engine 106 includes
components to learn model parameters from training images 202 for the Markov
Random Field micro-structure modeling, and to design adaptive micro-patterns relevant for a particular image. The feature extraction engine 106 also includes
components for processing a subject image 204, i.e., for extracting features from
the subject image 204. A block manager 206 controls the size, overlap, and
registration of blocks in an image, for both the training images 202 and the subject
image 204.
[00025] For the above-mentioned training, the feature extraction engine 106
includes a learning engine 208, an adaptive designer 210, and a buffer or storage
for resulting micro-patterns 212. The learning engine 208 just introduced further
includes an attribute selector 214, a pseudo maximum likelihood estimator 216,
and a buffer for model parameters 218, i.e., for each block. The adaptive designer
210 may further include a definition engine 220.
[00026] The learning engine 208 and the other components just introduced
are communicatively coupled with a micro-structure modeler 222 that includes a
block-level feature extractor 224 and a Markov Random Field (MRF) attribute
modeler 226. In one implementation, processing the subject image also uses the
same micro-structure modeler 222.
[00027] To process images, an image processor 228 includes a buffer for one
image block 230, a local fitness engine 232, and a MFFT feature extractor 234.
The local fitness engine 232 may further include a fitness function 236 suitable for
producing a fitness index 238. The image processor 228 further includes buffer
space for a local fitness sequence 240. [00028] For a global result of the entire subject image 204, the feature
extraction engine 106 also includes a feature concatenator 242 to combine features
of all blocks of an image into a single vector: a micro-structural feature 244
representing the entire image.
Exemplary Components of the Feature Extraction Engine
[00029] An overview of the exemplary engine 106 is now provided. The
block manager 206 divides the image 204 into blocks 112 by which the MRF
attribute modeler 226 extracts block-level micro-structural features for each block
112. Later, the local fitness engine 232, based on the MRF modeling, computes a
local fitness sequence 240 to describe the image's local fitness to micro-patterns.
The MFFT extractor 234 derives a transformed feature from the local fitness
sequence 240 of each block 112. The feature concatenator 242 combines these
features from all the blocks into a long feature vector. This new feature is based on
the image's microstructure and presents a description of the image on three levels:
the Markov field model reflects the spatial correlation of neighboring pixels on a
pixel-level; the local fitness sequence 240 in each block reflects the image's
regional fitness to micro-patterns on a block level; and the features from all blocks
are concatenated to build a global description of the image 204. In this way, both
the local textures and the global shape of the image are simultaneously encoded. Markov Random Field (MRF) Attribute Modeler
[00030] The exemplary feature extraction engine 106 implements a model-
based feature extraction approach, which uses a Markov Random Field (MRF)
attribute modeler 226 to model the micro-structure of the image 204 and design
adaptive micro-patterns 212 for feature extraction.
[00031] The micro-structure modeler 222, in applying image structure
modeling to feature extraction, provides at least three benefits. First, the modeling
can provide a sound theoretical foundation for automatically designing suitable
micro-patterns based on image micro-structure. Next, through modeling, the feature
extraction engine 106 or corresponding methods can be more generally applied
across a wider variety of diverse applications. Third, the modeling alleviates
experimental trial-and-error efforts to adjust parameters.
[00032] The MRF attribute modeler 226 provides a flexible mechanism for
modeling spatial dependence relations among pixels. In a local region of the image
204, the MRF attribute modeler 226 makes use of spatial dependence to model
micro-patterns, with different spatial dependencies corresponding to different
micro-patterns. Thus, the MRF attribute modeler 226 conveniently represents
unobserved and/or complex patterns within images, especially the location of
discontinuities between regions that are homogeneous in tone, texture, or depth.
[00033] Moreover, in one implementation, the parameters of the MRF model
are statistically learned from samples instead of intuitively user-designed. Thereby MRF modeling is more adaptive to the local characteristics of images. Different
micro-patterns can be designed for different kinds of images, different attributes of
images, and even at different sites of a single image, so that the features extracted
are more flexible, and more applicable to diverse applications.
[00034] From the description above, the MRF model is adaptive and flexible
to the intrinsic pattern of images at different sites of the image. The parameters
vary in relation to the site within a block. In addition, the model is adaptive to
changing attributes of the image.
[00035] Fig. 3 shows the 1st and 2nd order neighborhood structure of an
image 302 of size H x W. S is the site map 304 of the image 302. The image 302
has a 1st order neighborhood structure 306 and a 2nd order neighborhood structure
308. To further understand the functioning of the MRF attribute modeler 226, let $I$ represent an $H \times W$ image with $S$ as its collection of all sites, and let $X_s = x_s$ represent some attribute of the image $I$ at site $s \in S$. For example, the attribute selector 214 may select grayscale intensity, Gabor attribute, or another attribute. The attributes of all other sites in $S$ excluding site $s$ are denoted by $X_{-s} = x_{-s}$. The spatial distribution of attributes over $S$, i.e., $X = x = \{x_s, s \in S\}$, will be modeled as a Markov Random Field (MRF).
[00036] Let $N_s$ denote the neighbors of site $s$; the $r$-th order neighborhood is defined to be $N_s^{(r)} = \{t \mid \mathrm{dist}(s,t) \le r,\ t \ne s\}$, where $\mathrm{dist}(s,t)$ is the distance between site $s$ and site $t$. The Markov model is equivalent to the Gibbs random field model, so an energy function is used to calculate probability, as in Equation (1):

$$p(X_s \mid X_{-s}) = p(X_s \mid X_{N_s}) = \frac{1}{\tau}\exp\{-E_{\theta_s}(X_s, X_{N_s})\}, \qquad (1)$$

where $E_{\theta_s}(X_s, X_{N_s})$ is the energy function at site $s$, which is the sum of energies/potentials of the cliques containing site $s$, and $\tau = \sum_{X_s}\exp\{-E_{\theta_s}(X_s, X_{N_s})\}$ is the partition function. Here, $\theta_s$ is the parameter set for site $s$, so $p(X_s \mid X_{N_s})$ is rewritten as $p_{\theta_s}(X_s \mid X_{N_s})$.
[00037] For a pair-wise MRF model, $E_{\theta_s}(X_s, X_{N_s}) = H_s(X_s) + \sum_{t \in N_s} J_{st}(X_s, X_t)$, where $H_s(X_s)$ is the "field" at site $s$ and $J_{st}(X_s, X_t)$ is the "interaction" between site $s$ and site $t$. Furthermore, if $H_s(X_s) = 0$ and $J_{st}(X_s, X_t) = \frac{1}{2\sigma_{st}^2}(X_s - X_t)^2$, then the "smooth model" is at play, with $E_{\theta_s}(X_s, X_{N_s}) = \sum_{t \in N_s}\frac{1}{2\sigma_{st}^2}(X_s - X_t)^2$ and $\theta_s = \{\sigma_{st}, t \in N_s\}$. If $H_s(X_s) = \alpha_s X_s$, $J_{st}(X_s, X_t) = \beta_{st} X_s X_t$, and $X_s \in \{+1, -1\}$, $s \in S$, then the Ising model is at play, with $E_{\theta_s}(X_s, X_{N_s}) = \alpha_s X_s + \sum_{t \in N_s}\beta_{st} X_s X_t$ and $\theta_s = \{\alpha_s, \beta_{st}, t \in N_s\}$. For simplicity, $\theta_s$ is written as $\theta$.
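To make the two selectable energy forms concrete, the following Python sketch (illustrative only, not part of the original disclosure) evaluates the smooth-model and Ising-model energies $E_\theta(X_s, X_{N_s})$ at a single site of a 2-D attribute array; the neighborhood radius and the parameters sigma, alpha, and beta are hypothetical values.

```python
import numpy as np

def neighbors(s, shape, radius=1.0):
    # N_s^(r) = {t : dist(s, t) <= r, t != s} on an H x W site map
    # (radius 1.0 gives the 1st-order, 4-connected neighborhood;
    #  radius 1.5 gives the 2nd-order, 8-connected neighborhood).
    H, W = shape
    r0, c0 = s
    out = []
    reach = int(np.ceil(radius))
    for dr in range(-reach, reach + 1):
        for dc in range(-reach, reach + 1):
            r, c = r0 + dr, c0 + dc
            if ((dr, dc) != (0, 0) and 0 <= r < H and 0 <= c < W
                    and np.hypot(dr, dc) <= radius):
                out.append((r, c))
    return out

def smooth_energy(x, s, nbrs, sigma):
    # Smooth model: E = sum_t (x_s - x_t)^2 / (2 * sigma^2)
    return sum((x[s] - x[t]) ** 2 for t in nbrs) / (2.0 * sigma ** 2)

def ising_energy(x, s, nbrs, alpha, beta):
    # Ising model: E = alpha * x_s + sum_t beta * x_s * x_t, with x in {+1, -1}
    return alpha * x[s] + sum(beta * x[s] * x[t] for t in nbrs)
```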
Exemplary Adaptive Micro-Pattern Designer
[00038] Fig.4 shows example micro-patterns. Such adaptive micro-patterns
are used as "filters" to find or extract features from an image 204 and/or to identify the image. The feature extraction engine 106 aims to find the appropriate micro-
structure and its appropriate parameters for given image 204 by modeling micro-
patterns 212.
[00039] Micro-patterns 402, 404, 406, and 408 are micro-patterns of the
"smooth model." The sixteen micro-patterns 412, 414, 416, 418, 420, 422, 424,
426, 428, 430, 432, 434, 436, 438, 440, and 442 are micro-patterns of the Ising
model, when the parameters of the Ising model are as shown in 410. In other
words, the Ising model can discriminate all 16 of the patterns 412 - 442. Among
them, there are "blob" micro-patterns 412 and 414; triangle micro-patterns 414,
416, 418, 420; corner micro-patterns 422, 424, 426, and 428; line micro-patterns
430 and 432; arrow micro-patterns 434, 436, 438, 440; and a ring micro-pattern
442. The Ising model has a strong ability to describe micro-patterns. The smooth
model and the Ising model are selectable as forms for the modeling to be performed.
[00040] In one implementation, once the model form is selected (e.g.,
smooth, Ising, or others), micro-patterns 212 are determined (i.e., defined by
Equation (2) below) by model parameters 218 produced by the learning engine 208.
The model parameters 218 are used by the micro-structure modeler 222 to
implement the MRF attribute modeler 226. The adaptive designer 210 includes a
definition engine 220 that implements generalized definitions (Equation (2)) to
create the adaptive micro-patterns 212. [00041] Thus, at the adaptive designer 210, assume that Ω denotes the micro-
pattern, and $\Omega_\theta(\gamma)$ is defined to be all the pairs of $(x_s, x_{N_s})$ that satisfy the constraint $g_\theta(x_s, x_{N_s}) = \gamma$ with given $\theta$, i.e., $\{(x_s, x_{N_s}) : g_\theta(x_s, x_{N_s}) = \gamma\}$. Here, $\theta$ is the parameter set.
[00042] $\Omega_\theta(\gamma)$ has the following properties.
[00043] 1. Given $\theta$, $\{\Omega_\theta(\gamma), \gamma \in R\}$ describes a series of micro-patterns, where $R$ (the set of real numbers) is the value set of $\gamma$.
[00044] 2. When $\gamma$ is discrete, $\Omega_\theta(\gamma)$ is characterized by its probability $P(\Omega = \Omega_\theta(\gamma))$; when $\gamma$ is a continuous variable, $\Omega_\theta(\gamma)$ is characterized by the probability density function $p(\Omega_\theta(\gamma))$.
[00045] Since the feature extraction engine 106 uses the MRF model in the attribute modeler 226, it is defined that $g_\theta(x_s, x_{N_s}) = E_\theta(X_s = x_s, X_{N_s} = x_{N_s})$, therefore, as in Equation (2):

$$\Omega_\theta(\gamma) = \{(x_s, x_{N_s}) : E_\theta(X_s = x_s, X_{N_s} = x_{N_s}) = \gamma\} \qquad (2)$$

That is, pairs $(x_s, x_{N_s})$ at the same energy level belong to the same micro-pattern.
[00046] a) If the smooth model is at play, i.e., $E_\theta(X_s, X_{N_s}) = \sum_{t \in N_s}\frac{1}{2\sigma_{st}^2}(X_s - X_t)^2$, then, as in Equation (3):

$$\Omega_\theta(\gamma) = \Big\{(x_s, x_{N_s}) : \sum_{t \in N_s}\frac{1}{2\sigma_{st}^2}(x_s - x_t)^2 = \gamma\Big\} \qquad (3)$$

As shown in Fig. 4, in this sense, micro-patterns 402 and 404 are deemed to be the same, while micro-patterns 406 and 408 are deemed to be different.
[00047] b) If the Ising model is used, i.e., $E_\theta(X_s, X_{N_s}) = \alpha_s X_s + \sum_{t \in N_s}\beta_{st} X_s X_t$ (with 1st order neighborhood), where $X_s \in \{+1, -1\}, \forall s \in S$, and $\theta = \{\alpha_s, \beta_{st}, t \in N_s\}$ as shown in Fig. 4(e), there is, as in Equation (4):

$$\Omega_\theta(\gamma) = \Big\{(x_s, x_{N_s}) : \alpha_s x_s + \sum_{t \in N_s}\beta_{st}\, x_s x_t = \gamma\Big\} \qquad (4)$$
[00048] As mentioned, the micro-patterns defined in Equation (2) are
determined by the model parameters 218 once the model form is selected. The
more parameters the model has, the more micro-patterns 212 it can discriminate. A
micro-pattern designed by the definition engine 220 is adaptive to the local
characteristics of the image 204, since the parameters 218 are statistically learned
from the training samples 202. This is quite different from intuitively user-designed
micro-patterns (e.g., of Gabor).
Exemplary Fitness Engine
[00049] The image processor 228 includes a local fitness engine 232 that
finds features in the image 204 using the micro-patterns 212. The local fitness
engine 232 includes a fitness function 236 that detects which micro-pattern 212 the
local characteristics of the image at site s in one block 230 fit with.
[00050] Given $\theta$, for any given pair $(X_s, X_{N_s})$ the fitness function $h_\theta(X_s, X_{N_s})$ is defined as in Equation (5):

$$h_\theta(X_s, X_{N_s}) = e^{-\gamma}\big|_{\gamma = g_\theta(X_s, X_{N_s})}. \qquad (5)$$

Specifically, when $g_\theta(X_s, X_{N_s}) = E_\theta(X_s, X_{N_s})$, there is, as in Equation (6):

$$h_\theta(X_s, X_{N_s}) = e^{-\gamma}\big|_{\gamma = E_\theta(X_s, X_{N_s})}. \qquad (6)$$

Then, the fitness index 238 can be computed as in Equation (7):

$$y_{\theta,s} = h_\theta(X_s, X_{N_s}) = e^{-E_\theta(X_s, X_{N_s})}. \qquad (7)$$

[00051] The fitness function 236 matches the local characteristics of the image 204 at site $s$ with a particular micro-pattern 212. Furthermore, it enlarges the difference between small $\gamma$, where there is low potential or energy, and reduces the difference between large $\gamma$, where there is high potential or energy.
[00052] From the definition of micro-pattern in Equation (2) and the Markov Random Field model exemplified by Equation (1), Equation (8) is next derived:

$$P(\Omega = \Omega_\theta(\gamma)) = \sum_{x_{N_s}} P(X_s : E_\theta(X_s, X_{N_s}) = \gamma \mid X_{N_s} = x_{N_s})\,P(X_{N_s} = x_{N_s}) = \sum_{x_{N_s}} \frac{V}{\tau}\,e^{-\gamma}\,P(X_{N_s} = x_{N_s}) = z\,e^{-\gamma}, \qquad (8)$$

where $\tau = \sum_{X_s}\exp\{-E_\theta(X_s, X_{N_s})\}$ is independent of $X_s$, $V$ is the number of pairs $(x_s, x_{N_s})$ which belong to the micro-pattern $\Omega_\theta(\gamma)$ given $X_{N_s} = x_{N_s}$, and $z = \sum_{x_{N_s}}\big\{\frac{V}{\tau}\,P(X_{N_s} = x_{N_s})\big\}$. Note that both $V$ and $\tau$ are only dependent on $x_{N_s}$ and $\theta$, so $z$ is a constant that is only dependent on $\theta$. Consequently, as shown in Equation (9):

$$y_{\theta,s} = e^{-\gamma} = \frac{1}{z}\,P(\Omega = \Omega_\theta(\gamma)). \qquad (9)$$

That is, the fitness index $y_{\theta,s}$ 238 is proportional to the probability of $\Omega_\theta(\gamma)$. From the perspective of filter design, e.g., when the local fitness engine 232 determines the local fitness sequence 240, the fitness function 236 modulates the fitness to the micro-pattern 212 with its probability. The fitness function 236 enhances micro-patterns with low energies that have high probabilities, and depresses those with high energies that have low probabilities. Actually, for a given $\theta$, the adaptive designer 210 designs a series of micro-patterns $\Omega_\theta(\gamma), \gamma \in R$, and $y_{\theta,s}$, the fitness index 238, indicates the occurrence probability of the micro-pattern $\Omega_\theta(\gamma)$ at site $s$.
[00053] The fitness sequence 240 can be computed as in Equation (10):

$$y = \{y_{\theta_s, s},\ s = 1, 2, \ldots, n\}, \qquad (10)$$

where $n = H \times W$ is the number of sites $s$ in $S$.
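The site-by-site computation of Equations (7) and (10) can be sketched as follows (illustrative only). The sketch assumes the smooth model with a single hypothetical parameter sigma and a 1st-order (4-connected) neighborhood, and returns the fitness sequence for one block.

```python
import numpy as np

def local_fitness_sequence(block, sigma):
    # y_{theta,s} = exp(-gamma) at gamma = E_theta(x_s, x_{N_s})  (Eqs. (6)-(7)),
    # scanned site by site to form the sequence of Equation (10).
    H, W = block.shape
    y = np.empty(H * W)
    for idx in range(H * W):
        r, c = divmod(idx, W)
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < H and 0 <= c + dc < W]
        energy = sum((block[r, c] - block[t]) ** 2 for t in nbrs) / (2.0 * sigma ** 2)
        y[idx] = np.exp(-energy)
    return y
```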
Exemplary Learning Engine
[00054] The learning engine 208 estimates the parameters, Θ = {θs,s e S} 218,
to be used by the MRF attribute modeler 226. The parameters 218 are estimated by
learning from the training sample images 202. The training images 202 can be a
library from a standard face database: e.g., the BANCA dataset, which contains 52 subjects with 120 images for each subject. Among them, five images per subject in
"Session 1" of BANCA are used for training images 202 (e.g., 260 images). In one
implementation, the training images 202 form two training libraries: the first library is
the grayscale intensity of the 260 faces, which are cropped and normalized to the
size of 55 x 51 pixels based on automatic registration of eyes. The second library is
the Gabor attributes of the same cropped faces, using a bank of Gabor filters with
two scales and four orientations.
[00055] Suppose there are $m$ independent samples $\{x_j, j = 1, 2, \ldots, m\}$, where $x_j = [x_{j1}, x_{j2}, \ldots, x_{jn}]^T$. The maximum likelihood estimation (MLE) can be treated as the optimization in Equation (11):

$$\hat{\Theta} = \arg\max_{\Theta} \prod_{j=1}^{m} p_{\Theta}(X_1 = x_{j1}, X_2 = x_{j2}, \ldots, X_n = x_{jn}). \qquad (11)$$

However, since $p(X_s = x_s \mid X_{-s} = x_{-s}) = p(X_s = x_s \mid X_{N_s} = x_{N_s})$, the estimator 216 uses a pseudo maximum likelihood estimation (PMLE) for the approximation in Equation (12):

$$\hat{\Theta} \approx \arg\max_{\Theta} \prod_{j=1}^{m}\prod_{s=1}^{n} p_{\theta_s}(X_s = x_{js} \mid X_{N_s} = x_{jN_s}), \qquad (12)$$

which is equivalent to Equation (13):

$$\hat{\Theta} \approx \arg\max_{\Theta} \sum_{j=1}^{m}\sum_{s=1}^{n} \log\big(p_{\theta_s}(X_s = x_{js} \mid X_{N_s} = x_{jN_s})\big). \qquad (13)$$

When the smooth model is selected, the approximation can be treated as the optimization in Equation (14) (for generality, the continuous form is used):

$$\hat{\Theta} \approx \arg\max_{\Theta} \sum_{j=1}^{m}\sum_{s=1}^{n} \log\left(\frac{\exp\big\{-\sum_{t \in N_s}\frac{1}{2\sigma_{st}^2}(x_{js} - x_{jt})^2\big\}}{\int_a^b \exp\big\{-\sum_{t \in N_s}\frac{1}{2\sigma_{st}^2}(x - x_{jt})^2\big\}\,dx}\right), \qquad (14)$$

where $[a,b]$ is the value interval of $x_{js}$. In one implementation, if it is further assumed that the Markov Random Field is homogenous and isotropic, i.e., that $\sigma_{st} = \sigma, \forall s \in S, \forall t \in N_s$, then equivalently the estimator 216 can find the optimal $\sigma$ that maximizes the function in Equation (15), which is expressed in terms of $\psi_{js} = 2\sum_{t \in N_s} x_{jt}$, $\zeta_{js} = \sum_{t \in N_s}(x_{jt})^2$, and $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x \exp(-t^2)\,dt$. Then the exemplary feature extraction engine 106 finds the optimal $\sigma$.
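A simple way to approximate the pseudo maximum likelihood estimation of Equations (12) and (13) for the homogenous, isotropic smooth model is sketched below (illustrative only). It relies on the fact that, for that model, $p(x_s \mid x_{N_s})$ is Gaussian in $x_s$ with mean equal to the neighborhood mean and variance $\sigma^2 / |N_s|$; it normalizes over the whole real line rather than the interval $[a, b]$ of Equation (14), and it replaces the closed-form optimization of Equation (15) with a grid search over candidate values of $\sigma$.

```python
import numpy as np

def estimate_sigma_pmle(training_blocks, candidate_sigmas):
    """Grid-search pseudo maximum likelihood estimate of sigma for the
    homogenous, isotropic smooth model (an approximation of Eqs. (12)-(15))."""
    def log_pseudo_likelihood(sigma):
        total = 0.0
        for x in training_blocks:
            H, W = x.shape
            for r in range(H):
                for c in range(W):
                    nbrs = [x[r + dr, c + dc]
                            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= r + dr < H and 0 <= c + dc < W]
                    k = len(nbrs)
                    mean, var = sum(nbrs) / k, sigma ** 2 / k
                    # log of the Gaussian conditional p(x_s | x_{N_s})
                    total += (-0.5 * np.log(2.0 * np.pi * var)
                              - (x[r, c] - mean) ** 2 / (2.0 * var))
        return total

    return max(candidate_sigmas, key=log_pseudo_likelihood)

# e.g. sigma = estimate_sigma_pmle(training_blocks, np.linspace(0.05, 1.0, 20))
```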
Exemplary Functionality of the Feature Extraction Engine
[00056] Fig. 5 shows an exemplary general flow of MRF-based feature extraction. An image 204 to be processed is divided into blocks. Each block undergoes MRF-based feature extraction and then feature concatenation combines the features from all the blocks. Attribute $x^{(i)}$ 502 is the selected attribute of the image at the $i$-th block. The term $y^{(i)}$ 240 is the local fitness sequence in the $i$-th block. The term $u^{(i)}$ 506 is the Modified Fast Fourier Transform (MFFT) of the local fitness sequence of the $i$-th block. The term $x_j^{(i)}$ 508 is the attribute in the $i$-th block of the $j$-th training image 202. The term $\theta^{(i)}$ 510 represents the parameters for Markov Random Field (MRF) modeling for the $i$-th block.
[00057] In one implementation, the exemplary feature extraction engine 106 performs in three stages. In a first stage, the block manager 206 divides an image 204 into $C$ blocks 112 of size $N \times M$ with overlap of $L \times K$. For each block 112, the micro-structure modeler 222 independently applies the MRF attribute modeler 226 to model the attributes $x^{(i)}$ ($i = 1, 2, \ldots, C$). In one implementation, for simplicity, the modeler 222 applies a homogenous model in each block 112, i.e., the model parameters 218 are the same within the same block.
[00058] During training, the learning engine 208 learns the model parameters $\theta^{(i)}$ ($i = 1, 2, \ldots, C$) 218 for each block 112 from the set of pre-aligned training images $x_j^{(i)}$ ($j = 1, 2, \ldots, m$) 202, via Equation (16):

$$\theta^{(i)} \approx \arg\max_{\theta^{(i)}} \sum_{j=1}^{m}\sum_{s=1}^{l} \log\big(p_{\theta^{(i)}}(X_s = x_{js}^{(i)} \mid X_{N_s} = x_{jN_s}^{(i)})\big), \qquad (16)$$

where $l = N \times M$, $i = 1, 2, \ldots, C$. Once the parameters 218 are learned, the micro-structure modeler 222 derives a series of micro-patterns 212 for each block that best fits with the observations from the training samples 202.
[00059] In extracting features from an image 204, the local fitness engine 232 computes the local fitness sequence 240 of the image 204 for each block 112, $y^{(i)}$ ($i = 1, 2, \ldots, C$), using Equation (17):

$$y^{(i)} = \{y_{\theta^{(i)}, s},\ s = 1, 2, \ldots, l\}, \quad i = 1, 2, \ldots, C. \qquad (17)$$

[00060] In a second stage, the MFFT extractor 234 derives an MFFT feature of the local fitness sequence 240 in each block 112 to reduce both dimensionality and noise. The low-frequency components of the local fitness sequence 240 are maintained, while the high-frequency components are averaged. If $y^{(i)}$ denotes the local fitness sequence 240 of the $i$-th block 112 and $z^{(i)} = \mathrm{FFT}(y^{(i)})$, then the MFFT feature is $u^{(i)} = \{u_j^{(i)},\ j = 1, 2, \ldots, k+1\}$, where $u_j^{(i)}$ is given by Equation (18) and $k$ is the truncation length: the first $k$ components retain the low-frequency terms of $z^{(i)}$, and the $(k+1)$-th component averages the remaining high-frequency terms.
[00061] In a third stage, the feature concatenator 242 concatenates $u^{(i)}$ ($i = 1, 2, \ldots, C$) from all blocks of the image 204 to form the MRF-based micro-structural feature, whose length is $C \times (k+1)$, as in Equation (19):

$$u = \big[(u^{(1)})^T, (u^{(2)})^T, \ldots, (u^{(C)})^T\big]^T. \qquad (19)$$
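Putting the three stages together, the sketch below (illustrative only) computes per-block features in the spirit of Equations (17) through (19): it reuses the `local_fitness_sequence` sketch given earlier, keeps the first k FFT magnitudes, averages the remaining ones into a single component, and concatenates the per-block vectors. The truncation length k and the per-block sigma parameters are hypothetical, and the exact form of the Modified FFT in Equation (18) is assumed rather than quoted.

```python
import numpy as np

def mfft_feature(fitness_seq, k=8):
    # Keep the k low-frequency magnitudes and average the rest into one
    # component, giving a (k+1)-dimensional block feature (cf. Equation (18)).
    z = np.abs(np.fft.fft(fitness_seq))
    return np.concatenate([z[:k], [z[k:].mean()]])

def micro_structural_feature(blocks, sigmas, k=8):
    # Per-block MFFT features concatenated into one vector of length
    # C * (k+1) representing the whole image (cf. Equation (19)).
    features = [mfft_feature(local_fitness_sequence(block, sigma), k)
                for block, sigma in zip(blocks, sigmas)]
    return np.concatenate(features)
```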
Exemplary Methods
[00062] Fig. 6 shows an exemplary method 600 of extracting features from
an image. In the flow diagram, the operations are summarized in individual
blocks. Depending on implementation, the exemplary method 600 may be
performed by hardware, software, or combinations of hardware, software,
firmware, etc., for example, by components of the exemplary feature extraction
engine 106.
[00063] At block 602, the micro-structure of an image is modeled as a
Markov Random Field. An attribute of the pixels in an image can be used to
access image micro-structure. Markov Random Field modeling at the micro level results in extracted features that are based on image structure, as the MRF
modeling captures visual spatial dependencies in the image.
[00064] At block 604, an image feature is derived based on the Markov
Random Field modeling of the micro-structure. The modeled micro-structure is
recast as adaptive micro-patterns via the general definition of micro-pattern. In
one implementation, a site-by-site scan of probability of occurrence of the micro-
patterns in the image is made within each block of the image to produce a fitness
sequence. A Modified Fast Fourier Transform is applied to the fitness sequence to
obtain a feature corresponding to the block.
[00065] Fig. 7 shows an exemplary method 700 of MRF-based feature
extraction. In the flow diagram, the operations are summarized in individual
blocks. Depending on implementation, the exemplary method 700 may be
performed by hardware, software, or combinations of hardware, software,
firmware, etc., for example, by components of the exemplary feature extraction
engine 106.
[00066] At block 702, an image is divided into blocks. The block dimensions
may be arbitrarily selected, such as five pixels by five pixels. The selected
dimensions for the block size remain the same during processing of the entire
image. In one implementation, the blocks overlap, thereby providing transition
smoothness and preventing a micro-structural feature from being missed due to an
imaginary boundary of a hypothetical block cutting through the feature. [00067] At block 704, the micro-structure of each block is modeled as a
Markov Random Field. Spatial dependencies are well-modeled by MRF. A pixel
attribute such as grayscale value or intensity may be modeled.
[00068] At block 706, a series of micro-patterns corresponding to the
modeled micro-structure is automatically designed. In one implementation, a
designing engine applies a general definition of micro-patterns to the MRF
modeling, resulting in adaptive micro-patterns custom tailored for the image block
at hand.
[00069] At block 708, a fitness sequence representing the fit of the block to
the series of micro-patterns is generated. In one implementation, each block of the
image is processed site by site, generating a sequence of micro-pattern fitness or a
fitness index to each particular site.
[00070] At block 710, a Modified Fast Fourier Transform is applied to the
fitness sequence of each image block to obtain a feature. The MFFT stabilizes the
fitness sequence into a feature result by attenuating high energy micro-patterns and
maintaining low energy micro-patterns. The result is a MFFT feature that has
strong mathematical correspondence to the image block from which it has been
derived, whether the MFFT feature has strong visual correspondence to the block
or not. In other words, for each block, the MFFT feature is strongly and uniquely
characteristic of the micro-structure of that block.
[00071] At block 712, the features from all of the blocks are concatenated to
represent the image. Each of the MFFT features for all the blocks in the image are concatenated into a long vector that is an MRF-based micro-structural
representation of the entire image.
Conclusion
[00072] Although exemplary systems and methods have been described in
language specific to structural features and/or methodological acts, it is to be
understood that the subject matter defined in the appended claims is not necessarily
limited to the specific features or acts described. Rather, the specific features and
acts are disclosed as exemplary forms of implementing the claimed methods,
devices, systems, etc.

Claims

1. A method, comprising:
modeling a micro-structure of an image as a Markov Random Field; and
extracting an image feature based on the modeling.
2. The method as recited in claim 1, wherein the modeling uses pixel
attributes to model the micro-structure.
3. The method as recited in claim 2, wherein the pixel attributes
describe a spatial context of the image.
4. The method as recited in claim 1, further comprising learning
parameters for the modeling from training images.
5. The method as recited in claim 4, wherein the learning includes a
pseudo maximum likelihood estimation.
6. The method as recited in claim 1, further comprising automatically
designing a micro-pattern from the modeled micro-structure.
7. The method as recited in claim 6, further comprising automatically
designing a micro-pattern adapted from attributes of the image or from a particular
site in the image.
8. The method as recited in claim 6, wherein designing a micro-pattern
comprises designing a set of micro-patterns that follow a smooth model.
9. The method as recited in claim 6, wherein designing a micro-pattern
comprises designing a set of micro-patterns that follow an Ising model.
10. The method as recited in claim 6, further comprising recognizing at
least part of an image using the micro-pattern.
11. The method as recited in claim 1, further comprising dividing the
image into blocks and modeling the micro-structure with parameters that are
consistent over each block.
12. The method as recited in claim 1, further comprising:
dividing the image into blocks;
automatically designing a series of the micro-patterns that corresponds to
the micro-structure of the block; and calculating an occurrence probability of micro-patterns site by site in each
block to form a fitness sequence of the image in the block to the series of micro-
patterns.
13. The method as recited in claim 12, further comprising applying a
Modified Fast Fourier Transform (MFFT) to the fitness sequence to derive regional
characteristics of the corresponding micro-patterns.
14. The method as recited in claim 12, further comprising applying a
Modified Fast Fourier Transform (MFFT) to the fitness sequence to derive at least
one MFFT feature of the block.
15. The method as recited in claim 14, further comprising concatenating
the MFFT features from all blocks of the image to represent the image.
16. The method as recited in claim 15, wherein the concatenated MFFT
features comprise a vector:
wherein the vector represents spatial correlation on a pixel level;
wherein the vector represents a regional fitness to the micro-patterns on a
block level; and
wherein the vector represents a description of the image on a global level.
17. A system, comprising:
a Markov Random Field attribute modeler to model micro-structure of an
image via attributes in a block of the image; and
a feature extractor to derive a feature of the image based on the modeled
micro-structure.
18. The system, as recited in claim 17, further comprising:
a designer to create micro-patterns from the derived feature; and
a learning engine to estimate parameters for the Markov Random Field
attribute modeler.
19. The system as recited in claim 17, further comprising:
a block manager to divide the image into blocks;
a designer to automatically design a series of the micro-patterns for each
block that corresponds to the micro-structure of the block;
a fitness engine to calculate an occurrence probability of micro-patterns site
by site in each block to form a fitness sequence of the image in the block to the
series of micro-patterns;
a Modified Fast Fourier Transform feature extractor to derive a feature from
each fitness sequence; and
a feature concatenator to combine the features to represent the image.
20. A system, comprising:
means for modeling a micro-structure of an image as a Markov Random
Field; and
means for extracting an image feature based on the modeling.
PCT/US2006/040536 2005-10-14 2006-10-16 Modeling micro-structure for feature extraction WO2007047665A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2006800378023A CN101283379B (en) 2005-10-14 2006-10-16 Modeling micro-structure method and system for feature extraction

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US72670705P 2005-10-14 2005-10-14
US60/726,707 2005-10-14
US11/466,332 2006-08-22
US11/466,332 US7991230B2 (en) 2005-10-14 2006-08-22 Modeling micro-structure for feature extraction

Publications (1)

Publication Number Publication Date
WO2007047665A1 true WO2007047665A1 (en) 2007-04-26

Family

ID=37948197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/040536 WO2007047665A1 (en) 2005-10-14 2006-10-16 Modeling micro-structure for feature extraction

Country Status (4)

Country Link
US (1) US7991230B2 (en)
KR (1) KR20080058366A (en)
CN (1) CN101283379B (en)
WO (1) WO2007047665A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100793989B1 (en) * 2006-07-11 2008-01-16 삼성전자주식회사 Method for classifing category of photographic and system thereof
US7853071B2 (en) * 2006-11-16 2010-12-14 Tandent Vision Science, Inc. Method and system for learning object recognition in images
GB2498954B (en) * 2012-01-31 2015-04-15 Samsung Electronics Co Ltd Detecting an object in an image
US9672416B2 (en) * 2014-04-29 2017-06-06 Microsoft Technology Licensing, Llc Facial expression tracking
JP6375706B2 (en) * 2014-06-11 2018-08-22 富士ゼロックス株式会社 Attribute estimation program and information processing apparatus
CN105701492B (en) * 2014-11-25 2019-10-18 宁波舜宇光电信息有限公司 A kind of machine vision recognition system and its implementation
CN104616300B (en) * 2015-02-03 2017-07-28 清华大学 The image matching method and device separated based on sampling configuration
CN105306946B (en) * 2015-11-10 2018-06-22 桂林电子科技大学 A kind of quality scalability method for video coding based on mean square error thresholding
JP7141365B2 (en) * 2019-05-20 2022-09-22 株式会社日立製作所 PORTFOLIO CREATION SUPPORT DEVICE AND PORTFOLIO CREATION SUPPORT METHOD

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6774917B1 (en) * 1999-03-11 2004-08-10 Fuji Xerox Co., Ltd. Methods and apparatuses for interactive similarity searching, retrieval, and browsing of video
JP2005078149A (en) * 2003-08-28 2005-03-24 Ricoh Co Ltd Image analysis device, image analysis program, storage medium, and image analysis method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3037432B2 (en) * 1993-11-01 2000-04-24 カドラックス・インク Food cooking method and cooking device using lightwave oven
US7106366B2 (en) * 2001-12-19 2006-09-12 Eastman Kodak Company Image capture system incorporating metadata to facilitate transcoding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6774917B1 (en) * 1999-03-11 2004-08-10 Fuji Xerox Co., Ltd. Methods and apparatuses for interactive similarity searching, retrieval, and browsing of video
JP2005078149A (en) * 2003-08-28 2005-03-24 Ricoh Co Ltd Image analysis device, image analysis program, storage medium, and image analysis method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DANSEREAU R.M. ET AL.: "Lip feature extraction using motion, color, and edge information", HAPTIC, AUDIO AND VISUAL ENVIRONMENTS AND THEIR APPLICATIONS. 2003. THE 2ND IEEE INTERNATIONAL WORKSHOP, 20 September 2003 (2003-09-20) - 21 September 2003 (2003-09-21), pages 1 - 6, XP010668250 *

Also Published As

Publication number Publication date
KR20080058366A (en) 2008-06-25
US7991230B2 (en) 2011-08-02
CN101283379B (en) 2012-11-28
US20070086649A1 (en) 2007-04-19
CN101283379A (en) 2008-10-08

Similar Documents

Publication Publication Date Title
WO2007047665A1 (en) Modeling micro-structure for feature extraction
Rao et al. Selfie video based continuous Indian sign language recognition system
CN104834922B (en) Gesture identification method based on hybrid neural networks
JP4739355B2 (en) Fast object detection method using statistical template matching
Guo et al. Automatic threshold selection based on histogram modes and a discriminant criterion
JP5505409B2 (en) Feature point generation system, feature point generation method, and feature point generation program
KR20060097074A (en) Apparatus and method of generating shape model of object and apparatus and method of automatically searching feature points of object employing the same
JP5766620B2 (en) Object region detection apparatus, method, and program
Ganesh et al. Entropy based binary particle swarm optimization and classification for ear detection
EP1964028A1 (en) Method for automatic detection and classification of objects and patterns in low resolution environments
JP6597914B2 (en) Image processing apparatus, image processing method, and program
CN113657528A (en) Image feature point extraction method and device, computer terminal and storage medium
EP2790130A1 (en) Method for object recognition
Al-Waisy et al. A fast and accurate iris localization technique for healthcare security system
Liu et al. Strip line detection and thinning by RPCL-based local PCA
CN109522865A (en) A kind of characteristic weighing fusion face identification method based on deep neural network
CN109344852A (en) Image-recognizing method and device, analysis instrument and storage medium
Ahn et al. Segmenting a noisy low-depth-of-field image using adaptive second-order statistics
CN116912604A (en) Model training method, image recognition device and computer storage medium
Budiman et al. The effective noise removal techniques and illumination effect in face recognition using Gabor and Non-Negative Matrix Factorization
CN106469267A (en) A kind of identifying code sample collection method and system
Ko et al. Automatic object-of-interest segmentation from natural images
Venkatesan et al. Advanced classification using genetic algorithm and image segmentation for Improved FD
Ma et al. Confidence based active learning for whole object image segmentation
Sallow et al. Optical Disc and Blood Vessel Segmentation in Retinal Fundus Images

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200680037802.3

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1020087008589

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06826108

Country of ref document: EP

Kind code of ref document: A1