US20060010582A1 - Chin detecting method, chin detecting system and chin detecting program for a chin of a human face


Info

Publication number
US20060010582A1
US20060010582A1
Authority
US
United States
Prior art keywords
chin
detecting
human face
image
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/004,648
Inventor
Toshinori Nagahashi
Takashi Hyuga
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seiko Epson Corp
Original Assignee
Seiko Epson Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seiko Epson Corp filed Critical Seiko Epson Corp
Assigned to SEIKO EPSON CORPORATION reassignment SEIKO EPSON CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HYUGA, TAKASHI, NAGAHASHI, TOSHINORI
Publication of US20060010582A1
Status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention concerns pattern recognition and object recognition technologies, and more specifically, the invention relates to a chin detecting method, a chin detecting system and a chin detecting program for accurately detecting the location of the chin of a human face from an image of the human face.
  • In JP-A-9-505208, the existence of a flesh color area is first determined in an input image, and the mosaic size is automatically determined in the flesh color area to convert a candidate area into a mosaic pattern. Then, the existence of a human face is determined by calculating a proximity to a human face dictionary, and mis-extraction due to the influence of the background and the like can be reduced by segmenting the human face. Thereby, the human face can be automatically and effectively detected from the image.
  • In JP-A-8-77334, extraction of the feature points of a face image used for distinguishing each individual and group (for example, an ethnic group) is performed automatically, easily and at high speed by using a predetermined algorithm.
  • digital image data of the human face image is directly obtained by a digital still camera or the like using an electronic image pickup device such as a CCD or a CMOS.
  • the digital image data may be acquired from an analog photo (silver salt photo) of the human face by using an electronic optical image scanner.
  • easy image processing such as zooming in/out and moving is performed on the digital image data by using an image processing system comprising a general-purpose computer such as a PC and general-purpose software without damaging the innate characteristics of the human face in the photo.
  • the chin becomes vaguely-outlined depending on the presence of scattered light and the direction of the lighting.
  • a relatively strong edge is often detected between lips and the lower base of the chin depending on the facial features of the subject and between the collar and neck depending on the clothes worn.
  • a stronger edge is generated at the wrinkles of the neck than at the chin outline depending on the age and shape of the subject, and this edge is sometimes mistakenly detected as the chin outline.
  • one object of the invention is to provide a novel chin detecting method, chin detecting system and chin detecting program capable of robustly detecting the lower base of the chin by accurately and quickly detecting the chin outline, which is difficult to detect from a face image.
  • a chin detecting method for detecting a lower base of a chin of a human face from an image with the human face included therein according to Aspect 1 is characterized in that: after detecting a face image of an area including both eyes and the lips of the human face but not including the chin and after setting a chin detecting window with a size including the chin of the human face at a lower part of the detected face image, a pixel having an edge strength with a threshold value or more is detected based on an edge strength distribution by calculating the edge strength distribution within the chin detecting window; and then an approximated curve is obtained to match a distribution of each of the detected pixels and a lowermost part of the approximated curve is identified as the lower base of the chin of the human face.
  • the edge strength distribution within the chin detecting window is calculated.
  • an outline including the chin lower base generally has a higher edge strength than that of the periphery of the outline due to a sharp contrast between the outline and the periphery thereof. Consequently, it becomes possible to easily and reliably select a candidate area to be the outline including the chin lower base that should be included in the chin detecting window by calculating the edge strength distribution within the chin detecting window.
  • the pixel having an edge strength with a threshold value or more is detected.
  • since an outline including the chin lower base generally has a high edge strength, it becomes possible to select only the pixels with a high possibility of corresponding to the outline including the chin lower base by selecting the pixels having an edge strength with a specific threshold value or more and by eliminating the other pixels.
  • the approximated curve is obtained to match the distribution of each of the detected pixels and the lowermost part of the approximated curve is identified as the lower base of the chin of the human face and detected.
  • a chin detecting method for detecting a lower base of a chin of a human face from an image with the human face included therein according to Aspect 2 is characterized in that: after detecting a face image of an area including both eyes and the lips of the human face but not including the chin and after setting a chin detecting window with a size including the chin of the human face at a lower part of the detected face image, a pixel having an edge strength with a threshold value or more is detected by calculating a primary differentiation type edge strength distribution within the chin detecting window and by calculating the threshold value from the primary differentiation type edge strength distribution; then a pixel to be used is narrowed down from the pixels by using a sign inversion of a secondary differentiation type edge; and thereafter an approximated curve is obtained to match a distribution of the narrowed-down pixel by using a least-square method and a lowermost part of the approximated curve is identified as the lower base of the chin of the human face.
  • a chin detecting method according to Aspect 3 is characterized in that the chin detecting window has a horizontally long rectangular shape, a width of the chin detecting window is wider than a width of the human face and the height of the chin detecting window is shorter than the width of the human face.
  • since the chin lower base of the human face to be detected can be reliably captured within the chin detecting window, the chin lower base can be detected more accurately.
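The window-setting rule of Aspect 3 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 1.2 and 0.5 proportions and the function name are assumptions chosen only to satisfy the stated constraints (window wider than the face width, window height shorter than the face width, placed at the lower part of the face detecting frame).

```python
# Hypothetical sketch: set a chin detecting window below a detected face frame.
# The proportions (1.2 * face width, 0.5 * face width) are illustrative
# assumptions, not values specified in the patent text.

def set_chin_window(face_x, face_y, face_w, face_h,
                    width_ratio=1.2, height_ratio=0.5):
    """Return (x, y, w, h) of a horizontally long window under the face frame."""
    win_w = int(face_w * width_ratio)       # wider than the face
    win_h = int(face_w * height_ratio)      # shorter than the face width
    win_x = face_x - (win_w - face_w) // 2  # centered on the face frame
    win_y = face_y + face_h                 # starts at the frame's lower side
    return win_x, win_y, win_w, win_h

window = set_chin_window(100, 80, 120, 140)
```

Any proportions would do as long as the window reliably contains the area from the lower lip to the chin lower base, per the description.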
  • a chin detecting method according to Aspect 4 is characterized in that the primary differentiation type edge strength distribution is obtained by using a Sobel edge detection operator.
  • Calculating a differentiation of the contrast is the most representative method of detecting a sharp contrast change in the image. Since a difference is substituted for the differentiation of a digital image, an edge part with a sharp contrast in the image can be effectively detected by primarily differentiating the original image within the chin detecting window.
  • a Sobel edge detection operator excellent in detection performance is used as a primary differentiation type edge detection operator (filter). Thereby the edge part within the chin detecting window can be detected reliably.
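The primary differentiation step above can be illustrated with a small pure-Python sketch of a Sobel operator. The kernels are the standard 3x3 Sobel coefficients (assumed here to match FIGS. 12(a) and 12(b)); the 5x5 luminance grid is made-up test data containing one horizontal edge.

```python
# Minimal sketch of a Sobel edge strength distribution over a luminance grid,
# in the spirit of the primary differentiation type operator described above.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # left/right columns weighted
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # upper/lower rows weighted

def sobel_strength(lum):
    """Edge strength |gx| + |gy| for each interior pixel of a 2-D luminance grid."""
    h, w = len(lum), len(lum[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * lum[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * lum[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A horizontal edge: bright rows above, dark rows below (like skin over shadow).
image = [[200] * 5, [200] * 5, [200] * 5, [50] * 5, [50] * 5]
strength = sobel_strength(image)
```

The strength peaks on the rows straddling the luminance jump and is zero in the flat regions, which is exactly the property the threshold step relies on.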
  • a chin detecting method according to Aspect 5 is characterized in that the secondary differentiation type edge is obtained by using a Laplace edge detection operator.
  • a chin detecting method according to Aspect 6 is characterized in that the approximated curve is obtained by using a least-square method by a quadratic function.
  • a least-square method by a quadratic function is used in the invention. Thereby the chin outline of the human face within the chin detecting window can be quickly obtained.
  • the least-square method employed in the invention is, as generally understood, a method of finding the coefficients that minimize the sum of the squared errors between a fitted function and a group of plural sample points.
  • a quadratic expression may be used for experimental data which shows a phenomenon behaving as the quadratic expression.
  • when the data follows an exponential function, the calculation can be made by taking the logarithm. It is possible to easily obtain the approximated curve by the least-square method by using software (a program) already incorporated in many scientific electronic calculators and spreadsheet software.
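The least-square fitting described above can be sketched as follows. This is a hedged illustration using the 3x3 normal equations for a quadratic (solved by Cramer's rule) rather than calculator or spreadsheet software; the sample points are fabricated to lie on a known parabola so the recovered coefficients can be checked. In image coordinates, where y grows downward, the "lowermost part" of a downward-opening parabola is its vertex.

```python
# Sketch: least-square fit of y = a*t**2 + b*t + c (t = x - x0) via the
# normal equations. Pure Python; data and names are illustrative only.

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_quadratic(ts, ys):
    """Return (a, b, c) minimizing sum((a*t^2 + b*t + c - y)^2)."""
    s = lambda k: sum(t ** k for t in ts)
    sy = lambda k: sum((t ** k) * y for t, y in zip(ts, ys))
    A = [[s(4), s(3), s(2)], [s(3), s(2), s(1)], [s(2), s(1), len(ts)]]
    rhs = [sy(2), sy(1), sy(0)]
    d = det3(A)
    coeffs = []
    for i in range(3):            # Cramer's rule, one column at a time
        m = [row[:] for row in A]
        for j in range(3):
            m[j][i] = rhs[j]
        coeffs.append(det3(m) / d)
    return tuple(coeffs)

# Points lying exactly on y = -0.5*t^2 + 30; the lowermost point (largest
# image-y) is the vertex at t = 0, i.e. at the window's horizontal center x0.
ts = [-4, -2, 0, 2, 4]
ys = [-0.5 * t * t + 30 for t in ts]
a, b, c = fit_quadratic(ts, ys)
```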
  • a chin detecting system for detecting a lower base of a chin of a human face from an image with the human face included therein comprises: an image scanning part for scanning the image with the human face included therein; a face detecting part for detecting an area including both eyes and the lips of the human face but not including the chin from the image scanned in the image scanning part and for setting a face detecting frame in the detected area; a chin detecting window setting part for setting a chin detecting window with a size including the chin of the human face at a lower part of the detecting frame; an edge calculating part for calculating an edge strength distribution within the chin detecting window; a pixel selecting part for selecting pixels having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part; a curve approximating part for obtaining an approximated curve to match a distribution of each of the pixels selected in the pixel selecting part; and a chin detecting part for detecting a lowermost part of the approximated curve obtained in the curve approximating part as the lower base of the chin of the human face.
  • a chin detecting system according to Aspect 8 is characterized in that the pixel selecting part detects a pixel having an edge strength with a threshold value or more by calculating the threshold value from a primary differentiation type edge strength distribution calculated in the edge calculating part, and then selects a pixel to be used from the pixels by using a sign inversion of a secondary differentiation type edge.
  • a chin detecting program for detecting a lower base of a chin of a human face from an image with the human face included therein makes a computer realize: an image scanning part for scanning the image with the human face included therein; a face detecting part for detecting an area including both eyes and the lips of the human face but not including the chin from the image scanned in the image scanning part and for setting a face detecting frame in the detected area; a chin detecting window setting part for setting a chin detecting window with a size including the chin of the human face at a lower part of the detecting frame; an edge calculating part for calculating an edge strength distribution within the chin detecting window; a pixel selecting part for selecting pixels having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part; a curve approximating part for obtaining an approximated curve to match a distribution of each of the pixels selected in the pixel selecting part; and a chin detecting part for detecting a lowermost part of the approximated curve obtained in the curve approximating part as the lower base of the chin of the human face.
  • a chin detecting program according to Aspect 10 is characterized in that the pixel selecting part detects a pixel having an edge strength with a threshold value or more by calculating the threshold value from a primary differentiation type edge strength distribution calculated in the edge calculating part, and then selects a pixel to be used from the pixels by using a sign inversion of a secondary differentiation type edge.
  • FIG. 1 is a block diagram showing one embodiment of a chin detecting system according to the invention.
  • FIG. 2 is a block diagram showing hardware configuring the chin detecting system.
  • FIG. 3 is a flowchart showing one embodiment of a chin detecting method according to the invention.
  • FIG. 4 is a graph showing a relationship between the luminance of the face image and the pixel location thereof.
  • FIGS. 5 ( a ) and 5 ( b ) are graphs showing a relationship between the edge strength of the face image and the pixel location thereof.
  • FIG. 6 is a view showing the face image from which the chin will be detected.
  • FIG. 7 is a view showing the state in which a face detecting frame is set at the face image.
  • FIG. 8 is a view showing the state in which a chin detecting window is set at the lower part of face detecting frame.
  • FIGS. 9 ( a ) and 9 ( b ) are views showing the state in which the chin lower base is detected and the location is modified.
  • FIG. 10 is a view showing the chin detecting window in which only a pixel having the edge strength with a threshold value or more is indicated.
  • FIG. 11 is a view showing the chin detecting window in which only the pixel selected as a result of sign inversion is indicated.
  • FIGS. 12 ( a ) and 12 ( b ) are views showing a Sobel edge detection operator.
  • FIG. 13 is a view showing a Laplacian filter.
  • FIG. 1 shows one embodiment of a chin detecting system 100 for a human face according to the invention.
  • the chin detecting system 100 comprises: an image scanning part 10 for scanning a face image G with the human face included therein; a face detecting part 12 for detecting the human face from the face image G scanned in the image scanning part 10 and for setting a face detecting frame F of the human face; a chin detecting window setting part 14 for setting a chin detecting window W with a size including the chin of the human face at a lower part of the face detecting frame F; an edge calculating part 16 for calculating an edge strength distribution within the chin detecting window W; a pixel selecting part 18 for selecting pixels having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part 16 ; a curve approximating part 20 for obtaining an approximated curve to substantially match a distribution of each of the pixels selected in the pixel selecting part 18 ; and a chin detecting part 22 for detecting a lowermost part of the approximated curve obtained in the curve approximating part 20 as the lower base of the chin of the human face.
  • the image scanning part 10 provides a function of obtaining a facial portrait for visual identification attached to, for example, a public ID such as a passport and a driver's license or attached to a private ID such as an employee ID card, a student ID card and a membership card, in other words, obtaining the face image G which has no background and includes largely the human face facing the front as digital image data including each pixel data of R (red), G (green) and B (blue) by using an image pickup sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the CCD of a digital still camera and a digital video camera, a CMOS camera, a vidicon camera, an image scanner, a drum scanner and so on may be used.
  • the image scanning part 10 has a data storing function in which the scanned face image data can be properly stored in a storage device such as a hard disk drive (HDD) and in a storage medium such as DVD-ROM.
  • when the face image data is supplied from an external source, the image scanning part 10 becomes unnecessary or functions as a communication part or an interface (I/F).
  • the face detecting part 12 provides a function of detecting the human face from the face image G scanned in the image scanning part 10 and setting the face detecting frame F at the detected part.
  • This face detecting frame F has a size (an area) including both eyes and the lips of the human face with the nose centered but not including the chin, which will be described later.
  • although the detection algorithm for the human face used by the face detecting part 12 is not especially limited, a conventional method can be utilized as described in the following document, for example:
  • the area from both eyes to the lips is detected as a face image area.
  • the size of the face detecting frame F is not unchangeable and can be increased and decreased depending on the size of the target face image.
  • the chin detecting window setting part 14 has a function of setting the chin detecting window W with a size including the chin of the human face at a lower part of the face detecting frame F set in the face detecting part 12 .
  • the edge calculating part 16 provides a function of calculating an edge strength distribution within the chin detecting window W.
  • the primary differentiation type edge strength distribution is obtained by using a Sobel edge detection operator or the like.
  • the pixel selecting part 18 provides a function of selecting a pixel having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part 16 .
  • the candidate pixels obtained by the Sobel edge detection operator are narrowed down by using a secondary differentiation filter (a Laplacian filter) and a sign inversion of the edge.
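The narrowing-down by sign inversion can be sketched in one dimension. This is an illustrative assumption of how a zero crossing of the second difference (a 1-D Laplacian) locates the edge center within a single pixel column; the luminance values and function names are made up for the example.

```python
# Sketch: locate the edge center in one pixel column via the sign inversion
# (zero crossing) of a secondary differentiation type edge, as described above.

def second_difference(col):
    """Discrete 1-D Laplacian along a luminance column."""
    return [col[i - 1] - 2 * col[i] + col[i + 1] for i in range(1, len(col) - 1)]

def zero_crossing(col):
    """Column index just after the second difference changes sign, or None."""
    d2 = second_difference(col)
    for i in range(len(d2) - 1):
        if d2[i] * d2[i + 1] < 0:
            return i + 2  # d2[i] corresponds to col[i + 1]
    return None

# Bright skin above, shadow under the chin below: one steep transition.
column = [200, 200, 180, 110, 60, 50, 50]
edge_y = zero_crossing(column)
```

Among several candidate pixels that exceed the strength threshold in one column, the pixel nearest this zero crossing is the one with the maximum edge response, which is why the sign inversion serves to pick a single pixel per column.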
  • the curve approximating part 20 provides a function of obtaining an approximated curve to match a distribution of each pixel selected in the pixel selecting part 18 .
  • in the approximated curve, y denotes the vertical coordinate, x denotes the horizontal coordinate and x0 denotes the horizontal center of the chin detecting window.
  • the chin lower base detecting part 22 provides a function of detecting a lowermost part of the approximated curve obtained in the curve approximating part 20 as the lower base of the chin of the human face. As shown in FIG. 9 , the chin lower base may be expressly indicated by attaching a noticeable marker M to the detected chin lower base.
  • each of the parts 10 to 22 and so on configuring the chin detecting system 100 is actually realized by a computer system such as a PC, which is configured by hardware such as a CPU and a RAM and is controlled by a special computer program (software) implementing the procedure shown in FIG. 3 .
  • a CPU 40 for performing various controls and arithmetic processing
  • a RAM (Random Access Memory)
  • a ROM (Read Only Memory)
  • a secondary storage 43 such as a hard disk drive (HDD) and a semiconductor memory
  • an output unit 44 configured by a monitor (an LCD (liquid crystal display) or a CRT (cathode-ray tube)) and so on
  • an input unit 45 configured by a keyboard, a mouse, an image scanner, an image pickup sensor such as a CCD (Charge Coupled Device) and so on
  • various control programs and data that are supplied through a storage medium such as a CD-ROM, DVD-ROM and a flexible disk (FD) and through a communication network (LAN, WAN, Internet and so on) N are installed on the secondary storage 43 and so on.
  • the programs and data are loaded onto the main storage 41 if necessary.
  • the CPU 40 performs a specific control and arithmetic processing by using various resources.
  • the processing result (processing data) is output to the output unit 44 through the bus 47 and displayed.
  • the data is properly stored and saved (updated) in the database created by the secondary storage 43 if necessary.
  • FIG. 3 is a flowchart showing an example of a chin detecting method for the face image G to be actually detected.
  • in step S 101 , the face detecting part 12 detects the face included in the face image G, which has been scanned in the image scanning part 10 and from which the chin will be detected, and then sets the face detecting frame F for specifying the detected human face.
  • the location of the human face is first specified by the face detecting part 12 and then the rectangular-shaped face detecting frame F is set on the human face as shown in FIG. 7 .
  • although the face detecting frame F has a size (an area) including both eyes and the lips of the human face with the nose centered but not including the chin, the size and shape are not limited to those exemplified as long as the area does not include the chin part of the human face. Also, although the human face size and the horizontal location of a display frame Y are within the regulation for each face image G shown in FIGS. 6 - 9 ( a ), the chin is located too low and is out of regulation.
  • when the face detecting frame F has been set through the above process, the flow moves to step S 103 , where the chin detecting window W having a horizontally long rectangular shape is set and the chin location of the human face is specified.
  • the size and shape of the chin detecting window W are not strictly limited as long as the window reliably includes the area from the lower lip of the human face to the chin lower base. However, when the chin detecting window W is too large, it contains many lines confusingly similar to the chin outline, such as the shadow of the chin, the wrinkles of the neck and a shirt collar, which increases the time needed to detect the true edge. When the chin detecting window W is too small, the chin lower base to be detected may not be included in some cases due to individual differences.
  • although the chin detecting window W is set in contact with the lower side of the face detecting frame F in the example of FIG. 8 , the chin detecting window W does not always have to contact the face detecting frame F; it suffices if a specific positional relationship is kept between the face detecting frame F and the chin detecting window W.
  • when the chin detecting window W has been set on the target image, the flow moves to step S 105 , where the luminance (Y) of each pixel within the chin detecting window W is calculated and the primary differentiation type edge strength distribution within the chin detecting window W is obtained from the luminance values by using a primary differentiation type (difference type) edge detection operator typified by the "Sobel edge detection operator".
  • FIGS. 12 ( a ) and 12 ( b ) show this “Sobel edge detection operator”.
  • a horizontal edge is emphasized by adjusting each group of three pixel values located in the left and right columns among the eight pixel values surrounding a target pixel, and a vertical edge is emphasized by adjusting each group of three pixel values located in the upper and lower rows; thereby both vertical and horizontal edges are detected.
  • the edge strength can be obtained.
  • other primary differentiation type edge detection operators can be applied such as “Roberts” and “Prewitt” in place of the “Sobel edge detection operator”.
  • FIG. 4 shows a relationship between the luminance (vertical axis) of the face image G and the pixel location (horizontal axis) thereof. Since the luminance changes sharply at the edge part in the image such as the chin outline, a parabola-shaped approximated curve can be obtained by using a primary differentiation type (difference type) edge detection operator such as the “Sobel edge detection operator”.
  • a threshold value is calculated from the edge strength distribution. The reason is that, as described above, the edge strength is greatly affected by the photographing conditions (lighting conditions) and so on, making it difficult to distinguish the edge corresponding to the chin outline from edges in other areas by edge strength alone.
  • although the threshold value for determining the pixels is not especially limited, the threshold value may be set at one-tenth of the maximum edge strength detected in the chin detecting window W, for example, and the pixels having an edge stronger than this threshold value are selected as candidate pixels for obtaining the chin lower base.
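The example thresholding rule (one-tenth of the maximum edge strength in the window) might look like this in code. The strength grid is made-up data and `select_candidates` is a hypothetical helper name, not from the patent.

```python
# Sketch: keep only pixel locations whose edge strength exceeds one-tenth of
# the window's maximum strength, per the example rule described above.

def select_candidates(strength, ratio=0.1):
    """Return [(x, y)] of pixels whose edge strength exceeds ratio * max."""
    peak = max(v for row in strength for v in row)
    threshold = peak * ratio
    return [(x, y)
            for y, row in enumerate(strength)
            for x, v in enumerate(row)
            if v > threshold]

grid = [[0, 5, 0],
        [12, 80, 14],
        [3, 7, 2]]
candidates = select_candidates(grid)
```

Tying the threshold to the window's own maximum makes the selection adaptive to lighting: a dim photo with weak edges still yields candidates along its strongest contour.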
  • when the threshold value for sorting out the pixels has been determined, the flow moves to step S 111 , where only the pixels having an edge strength exceeding the threshold value are selected, scanning in the vertical direction with each pixel configuring the upper side of the chin detecting window W as the base point as shown in FIG. 10 , and the pixels below the threshold value are eliminated.
  • FIG. 10 simply shows the distribution of the pixels thus selected (those exceeding the threshold value).
  • the pixels having the edge strength with a threshold value or more are identified and indicated by scanning the pixels on each line in a non-interlaced manner, that is, scanning in the X-direction within the chin detecting window W from the upper left of the chin detecting window W and moving to the Y-direction sequentially.
  • the reason for scanning from the upper left of the chin detecting window W is that a candidate pixel with a threshold value or more appearing earliest in the Y-direction will be identified as a potential candidate of the chin lower base. Thereby it becomes possible to detect the pixels corresponding to the chin outline effectively.
  • the reason is that, since an edge confusingly similar to the chin outline, such as the wrinkles of the neck or a shirt collar located below the actual chin outline, is often stronger than the edge at the actual chin outline, the lower edges should be given lower priority.
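The top-down, per-column scan order described above can be sketched as follows. The grid and threshold are illustrative values, and the function name is an assumption; the point is that the first above-threshold pixel from the top wins, so stronger edges lower down (neck wrinkles, a collar) are deliberately ignored.

```python
# Sketch: scan each column of the window from the top and keep the first pixel
# meeting the threshold as the chin-outline candidate for that column.

def first_candidate_per_column(strength, threshold):
    """Return {x: y} of the topmost pixel per column meeting the threshold."""
    hits = {}
    for x in range(len(strength[0])):
        for y in range(len(strength)):  # scan downward (Y-direction)
            if strength[y][x] >= threshold:
                hits[x] = y
                break                   # ignore stronger edges further down
    return hits

grid = [[0,  0, 40],
        [50, 0,  0],
        [90, 60, 70]]  # bottom row: e.g. a collar edge, lower but stronger
tops = first_candidate_per_column(grid, threshold=30)
```

In column 0 the pixel at y=1 (strength 50) is kept even though y=2 is stronger (90), mirroring the low-prioritization of lower edges.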
  • when the pixels having an edge strength exceeding the threshold value have been selected, the flow moves to step S 113 , where a sign inversion of the secondary differentiation type edge is detected in each row in order to narrow down (identify) the pixel having the maximum edge strength in each pixel row (Y-direction) among the selected pixels.
  • one pixel will be determined among plural candidate pixels in each row as shown in FIG. 10 (and FIG. 11 ).
  • each uppermost pixel is selected as the candidate pixel configuring the chin outline in rows “a”, “b”, “d”, “f” and “g” in FIG. 11 while each lowermost pixel is selected as the candidate pixel configuring the chin outline in rows “c” and “e” in FIG. 11 .
  • after that, when the candidate pixels have been finally narrowed down from the many pixels exceeding the threshold value, the flow moves to step S 115 , where the above-described approximated curve is fitted to the distribution of the searched pixels and the chin lower base is obtained.
  • since the chin lower base of the human face is located too low, the chin lower base can be brought to the regulation location by moving the human face vertically upward as shown in FIG. 9 ( b ). Although the image ends at the neck of the human as shown in FIG. 9 ( a ) and so on, the image below the neck is assumed to actually exist as it is.
  • since the lower base of the chin of the human face is detected based on the edge strength distribution within the chin detecting window, after setting the chin detecting window by using a publicly-known human face detecting method, it becomes possible to robustly, accurately and quickly detect the chin lower base, which is difficult to detect from a face image.

Abstract

A chin detecting method is provided. After detecting a human face and setting a chin detecting window at a lower part of the image, an edge strength distribution is calculated within the chin detecting window and pixels having an edge strength with a threshold value or more are detected based on the edge strength distribution. Then an approximated curve is obtained to best match a distribution of each of the detected pixels, and a lowermost part of the approximated curve is identified as the lower base of the chin of the human face. Thereby the chin lower base of the human face can be detected automatically, accurately and quickly.

Description

    RELATED APPLICATIONS
  • This application claims priority to Japanese Patent Application No. 2003-407911 filed Dec. 5, 2003 which is hereby expressly incorporated by reference herein in its entirety.
  • BACKGROUND
  • 1. Technical Field
  • The present invention concerns pattern recognition and object recognition technologies, and more specifically, the invention relates to a chin detecting method, a chin detecting system and a chin detecting program for accurately detecting the location of the chin of a human face from an image of the human face.
  • 2. Related Art
  • With recent advancements in pattern recognition technologies and information processors such as computers, the recognition accuracy of text and sound has been dramatically improved. However, in the pattern recognition of an image of a human, an object, the landscape and so on, e.g., an image scanned from a digital still camera and the like, it is still difficult to accurately and quickly identify whether a human face is visible in the image or not.
  • Nevertheless, automatically and accurately identifying with a computer whether a human face is visible in an image, and who that person is, has become extremely important for establishing a living body recognition process, improving security, accelerating criminal investigations, and speeding up image data reduction and retrieval. In this regard, many proposals have been made.
  • In JP-A-9-50528, the existence of a flesh color area is first determined in an input image, and the mosaic size is automatically determined in the flesh color area to convert a candidate area into a mosaic pattern. Then, the existence of a human face is determined by calculating a proximity to a human face dictionary, and mis-extraction due to the influence of the background and the like can be reduced by segmenting the human face. Thereby, the human face can be automatically and effectively detected from the image.
  • In JP-A-8-77334, extraction of the feature points of a face image to be used for distinguishing each individual and group (for example, an ethnic group) is performed automatically, easily and at a high speed by using a predetermined algorithm.
  • Incidentally, with regard to a required photo of a human face (which is a face image) for a passport, photo ID card and the like, many photographic pose requirements are set in detail such as the size of the photo, the direction to which the human face faces, and the size and location of the human face within the photograph.
  • For example, not to mention the requirements for no background and no accessories such as a hat, there are detailed regulations requiring that the human face point to the front, that the human face be located in the center of the photo, that the chin of the face be located within a specific area relative to the lower frame of the photo, and so on. In principle, a photo of a face image that is outside of the regulations is not accepted.
  • However, while retaking a photo may be reasonable if the human face is not facing the front or if an accessory such as a hat is worn, it is impractical to retake it simply because the size and location of the face in the photo are slightly out of regulation. This imposes considerable labor and cost on a user.
  • For this reason, a method of solving the above problems has been examined by using digital image processing which has developed significantly in recent years.
  • For example, the following method has been examined to solve such problems. First, digital image data of the human face image is directly obtained by a digital still camera or the like using an electronic image pickup device such as a CCD or a CMOS. Alternatively, the digital image data may be acquired from an analog photo (silver salt photo) of the human face by using an electro-optical image scanner. Once the digital image data is obtained, simple image processing such as zooming in/out and moving can be performed on the digital image data by using an image processing system comprising a general-purpose computer such as a PC and general-purpose software, without damaging the innate characteristics of the human face in the photo.
  • Although it is possible to manually perform the above process by using general-purpose I/O devices such as a mouse, a keyboard and a monitor when the number of images to be processed is small, it becomes necessary to perform the process automatically by using the aforementioned conventional techniques when the number of images increases.
  • To realize the automation of image processing for a human face, however, it is necessary to accurately recognize the face outline, and especially the chin outline of the human face, which in many cases cannot be accurately detected with only a conventional edge detection filter due to lighting conditions, features of the person in the photo and other conditions present while taking the picture.
  • For example, the chin becomes vaguely outlined depending on the presence of scattered light and the direction of the lighting. Also, a relatively strong edge is often detected between the lips and the lower base of the chin depending on the facial features of the subject, and between the collar and neck depending on the clothes worn. In many cases a stronger edge is generated at the wrinkles of the neck than at the chin outline depending on the age and physique of the subject, and this edge is sometimes mistakenly detected as the chin outline.
  • The present invention has been achieved to solve the aforementioned problems. Therefore, one object of the invention is to provide a novel chin detecting method, a chin detecting system and a chin detecting program capable of detecting a robust chin lower base by accurately and quickly detecting an outline of the chin, which is difficult to detect from a face image.
  • SUMMARY
  • A chin detecting method for detecting a lower base of a chin of a human face from an image with the human face included therein according to Aspect 1 is characterized in that: after detecting a face image of an area including both eyes and the lips of the human face but not including the chin and after setting a chin detecting window with a size including the chin of the human face at a lower part of the detected face image, a pixel having an edge strength with a threshold value or more is detected based on an edge strength distribution by calculating the edge strength distribution within the chin detecting window; and then an approximated curve is obtained to match a distribution of each of the detected pixels and a lowermost part of the approximated curve is identified as the lower base of the chin of the human face.
  • In the invention as described above, after selecting a part with a high possibility of including the chin of a human face and setting the chin detecting window at that part, the edge strength distribution within the chin detecting window is calculated. In other words, an outline including the chin lower base generally has a higher edge strength than that of the periphery of the outline due to a sharp contrast between the outline and the periphery thereof. Consequently, it becomes possible to easily and reliably select a candidate area to be the outline including the chin lower base that should be included in the chin detecting window by calculating the edge strength distribution within the chin detecting window.
  • Next, when the edge strength distribution has been calculated, the pixel having an edge strength with a threshold value or more is detected. In other words, since an outline including the chin lower base generally has a high edge strength, it becomes possible to select only a pixel with a high possibility of corresponding to the outline including the chin lower base by selecting the pixel having the edge strength with a specific threshold value or more and by eliminating other pixels.
  • Finally, the approximated curve is obtained to match the distribution of each of the detected pixels and the lowermost part of the approximated curve is identified as the lower base of the chin of the human face and detected.
  • Thereby it becomes possible to detect a robust chin lower base by accurately and quickly detecting an outline of the chin of the human face, which is difficult to detect from a face image.
  • A chin detecting method for detecting a lower base of a chin of a human face from an image with the human face included therein according to Aspect 2 is characterized in that: after detecting a face image of an area including both eyes and the lips of the human face but not including the chin and after setting a chin detecting window with a size including the chin of the human face at a lower part of the detected face image, a pixel having an edge strength with a threshold value or more is detected by calculating a primary differentiation type edge strength distribution within the chin detecting window and by calculating the threshold value from the primary differentiation type edge strength distribution; then a pixel to be used is narrowed down from the pixels by using a sign inversion of a secondary differentiation type edge; and thereafter an approximated curve is obtained to match a distribution of the narrowed-down pixel by using a least-square method and a lowermost part of the approximated curve is identified as the lower base of the chin of the human face.
  • In this aspect, the calculating methods are specified more concretely than in Aspect 1: the edge strength distribution (primary differentiation type), the pixel selecting method (sign inversion of a secondary differentiation type edge) and the approximated curve (least-square method). Thereby the chin lower base of the human face can be detected more accurately and more quickly than in Aspect 1.
  • In the chin detecting method according to Aspect 1 or 2, a chin detecting method according to Aspect 3 is characterized in that the chin detecting window has a horizontally long rectangular shape, a width of the chin detecting window is wider than a width of the human face and the height of the chin detecting window is shorter than the width of the human face.
  • Thereby since the chin lower base of the human face to be detected can be reliably captured within the chin detecting window, the chin lower base can be detected more accurately.
  • In a chin detecting method according to Aspect 2 or 3, a chin detecting method according to Aspect 4 is characterized in that the primary differentiation type edge strength distribution is obtained by using a Sobel edge detection operator.
  • Calculating the differentiation of the contrast is the most representative method of detecting a sharp contrast in the image. Since differentiation is approximated by finite differences for a digital image, an edge part with a sharp contrast in the image can be effectively detected by taking the primary differentiation of the original image within the chin detecting window.
  • In the invention, a Sobel edge detection operator excellent in detection performance is used as a primary differentiation type edge detection operator (filter). Thereby the edge part within the chin detecting window can be detected reliably.
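The primary differentiation step can be sketched as follows; the 3×3 kernels below are the standard Sobel coefficients and are an illustrative assumption, not values copied from FIGS. 12(a) and 12(b).

```python
# Sketch of primary differentiation (Sobel) edge strength at one pixel.
# Standard Sobel kernels are assumed here, not taken from the patent figures.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]   # responds to luminance change in the X-direction

SOBEL_Y = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]  # responds to luminance change in the Y-direction

def sobel_strength(img, x, y):
    """Edge strength sqrt(gx^2 + gy^2) at interior pixel (x, y) of a
    2-D luminance array given as a list of rows (img[y][x])."""
    gx = gy = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            v = img[y + dy][x + dx]
            gx += SOBEL_X[dy + 1][dx + 1] * v
            gy += SOBEL_Y[dy + 1][dx + 1] * v
    return (gx * gx + gy * gy) ** 0.5
```

A sharp luminance step between neighbouring columns or rows yields a large strength, while a flat region yields zero, which is the property the chin outline detection relies on.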
  • In a chin detecting method according to one of Aspects 2 to 4, a chin detecting method according to Aspect 5 is characterized in that the secondary differentiation type edge is obtained by using a Laplace edge detection operator.
  • Thereby the secondary differentiation type edge can be detected reliably.
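The role of the sign inversion of the secondary differentiation type edge can be illustrated with a small sketch; the 4-neighbour Laplacian kernel used here is a common choice and is assumed, not taken from FIG. 13.

```python
# Sketch: locating the sign inversion (zero crossing) of a secondary
# differentiation (Laplacian) response along one pixel column.
# The 4-neighbour Laplacian kernel is an assumption for illustration.

def laplacian(img, x, y):
    """4-neighbour Laplacian of a luminance array img[y][x]."""
    return (img[y - 1][x] + img[y + 1][x] +
            img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])

def sign_inversion_rows(img, x):
    """Rows y where the Laplacian changes sign between y and y+1,
    i.e. candidate locations of the true edge centre in column x."""
    h = len(img)
    vals = [laplacian(img, x, y) for y in range(1, h - 1)]
    rows = []
    for i in range(len(vals) - 1):
        if vals[i] * vals[i + 1] < 0:
            rows.append(i + 1)  # y index of the first sample in the pair
    return rows
```

Where the luminance ramps up across an edge, the Laplacian is positive on one side and negative on the other, so the inversion pinpoints the edge centre even when the primary differentiation response is broad.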
  • In a chin detecting method according to one of Aspects 1 to 5, a chin detecting method according to Aspect 6 is characterized in that the approximated curve is obtained by using a least-square method by a quadratic function.
  • As a method of obtaining the approximated curve within the chin detecting window which can be identified as the chin outline of the human face, a least-square method by a quadratic function is used in the invention. Thereby the chin outline of the human face within the chin detecting window can be quickly obtained.
  • The least-square method employed in the invention is, as generally understood, a method of finding the coefficients that minimize the sum of the squares of the errors between a fitting function and a group of plural sample points. For example, a quadratic expression may be used for experimental data which shows a phenomenon behaving quadratically. When the experimental data is expected to behave as an exponential function, the fit can be made by first taking the logarithm. The approximated curve can easily be obtained by the least-square method by using software (a program) already incorporated in many scientific electronic calculators and spreadsheet applications.
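As a generic illustration of the least-square method by a quadratic function (not code from the invention), the fit can be computed directly from the normal equations:

```python
# Sketch of a least-square fit of y = c2*x^2 + c1*x + c0 to sample
# points via the normal equations; a generic illustration only.

def fit_quadratic(xs, ys):
    """Return (c2, c1, c0) minimising sum((c2*x^2 + c1*x + c0 - y)^2)."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)            # power sums of x
    t = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    # 3x3 normal-equation system A c = b for c = (c2, c1, c0).
    A = [[s(4), s(3), s(2)],
         [s(3), s(2), s(1)],
         [s(2), s(1), n]]
    b = [t(2), t(1), t(0)]
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, 3))) / A[r][r]
    return tuple(c)
```

Spreadsheet software and scientific calculators implement essentially this computation behind their curve-fitting functions.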
  • A chin detecting system for detecting a lower base of a chin of a human face from an image with the human face included therein according to Aspect 7 comprises: an image scanning part for scanning the image with the human face included therein; a face detecting part for detecting an area including both eyes and the lips of the human face but not including the chin from the image scanned in the image scanning part and for setting a face detecting frame in the detected area; a chin detecting window setting part for setting a chin detecting window with a size including the chin of the human face at a lower part of the detecting frame; an edge calculating part for calculating an edge strength distribution within the chin detecting window; a pixel selecting part for selecting pixels having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part; a curve approximating part for obtaining an approximated curve to match a distribution of each of the pixels selected in the pixel selecting part; and a chin detecting part for detecting a lowermost part of the approximated curve obtained in the curve approximating part as the lower base of the chin of the human face.
  • Thereby, as in Aspect 1, it becomes possible to detect a robust chin lower base by accurately and quickly detecting an outline of the chin of human face, which is difficult to detect from a face image.
  • By realizing each part by using special hardware and a computer system, it becomes possible to achieve these operations and effects automatically.
  • In a chin detecting system according to Aspect 7, a chin detecting system according to Aspect 8 is characterized in that the pixel selecting part detects a pixel having an edge strength with a threshold value or more by calculating the threshold value from a primary differentiation type edge strength distribution calculated in the edge calculating part, and then selects a pixel to be used from the pixels by using a sign inversion of a secondary differentiation type edge.
  • Thereby, as in Aspects 2 and 7, it becomes possible to detect the chin lower base accurately and quickly. In addition, by realizing each part by using special hardware and a computer system, it becomes possible to achieve these operations and effects automatically.
  • A chin detecting program for detecting a lower base of a chin of a human face from an image with the human face included therein according to Aspect 9 makes a computer realize: an image scanning part for scanning the image with the human face included therein; a face detecting part for detecting an area including both eyes and the lips of the human face but not including the chin from the image scanned in the image scanning part and for setting a face detecting frame in the detected area; a chin detecting window setting part for setting a chin detecting window with a size including the chin of the human face at a lower part of the detecting frame; an edge calculating part for calculating an edge strength distribution within the chin detecting window; a pixel selecting part for selecting pixels having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part; a curve approximating part for obtaining an approximated curve to match a distribution of each of the pixels selected in the pixel selecting part; and a chin detecting part for detecting a lowermost part of the approximated curve obtained in the curve approximating part as the lower base of the chin of the human face.
  • Thereby, since it becomes possible to obtain the same effect as in Aspects 1 and 7 and to realize the function in software by using a general-purpose computer (hardware) such as a PC, the function can be realized more economically and easily as compared to the case of realizing it by creating a special apparatus. In addition, version upgrades, such as changes and improvements of the function, can often be attained easily only by rewriting the program.
  • In a chin detecting program according to Aspect 9, a chin detecting program according to Aspect 10 is characterized in that the pixel selecting part detects a pixel having an edge strength with a threshold value or more by calculating the threshold value from a primary differentiation type edge strength distribution calculated in the edge calculating part, and then selects a pixel to be used from the pixels by using a sign inversion of a secondary differentiation type edge.
  • Thereby, since it becomes possible to obtain the same effect as in Aspects 2 and 8 and to realize the function in software as in Aspect 9, the function can be realized economically and easily. In addition, version upgrades such as changes and improvements of the function can be attained easily.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram showing one embodiment of a chin detecting system according to the invention.
  • FIG. 2 is a block diagram showing hardware configuring the chin detecting system.
  • FIG. 3 is a flowchart showing one embodiment of a chin detecting method according to the invention.
  • FIG. 4 is a graph showing a relationship between the luminance of the face image and the pixel location thereof.
  • FIGS. 5(a) and 5(b) are graphs showing a relationship between the edge strength of the face image and the pixel location thereof.
  • FIG. 6 is a view showing the face image from which the chin will be detected.
  • FIG. 7 is a view showing the state in which a face detecting frame is set at the face image.
  • FIG. 8 is a view showing the state in which a chin detecting window is set at the lower part of face detecting frame.
  • FIGS. 9(a) and 9(b) are views showing the state in which the chin lower base is detected and the location is modified.
  • FIG. 10 is a view showing the chin detecting window in which only a pixel having the edge strength with a threshold value or more is indicated.
  • FIG. 11 is a view showing the chin detecting window in which only the pixel selected as a result of sign inversion is indicated.
  • FIGS. 12(a) and 12(b) are views showing a Sobel edge detection operator.
  • FIG. 13 is a view showing a Laplacian filter.
  • DETAILED DESCRIPTION
  • A best mode for carrying out the invention will be described with reference to the drawings.
  • FIG. 1 shows one embodiment of a chin detecting system 100 for a human face according to the invention.
  • As shown in this Figure, the chin detecting system 100 comprises: an image scanning part 10 for scanning a face image G with the human face included therein; a face detecting part 12 for detecting the human face from the face image G scanned in the image scanning part 10 and for setting a face detecting frame F of the human face; a chin detecting window setting part 14 for setting a chin detecting window W with a size including the chin of the human face at a lower part of the face detecting frame F; an edge calculating part 16 for calculating an edge strength distribution within the chin detecting window W; a pixel selecting part 18 for selecting pixels having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part 16; a curve approximating part 20 for obtaining an approximated curve to substantially match a distribution of each of the pixels selected in the pixel selecting part 18; and a chin detecting part 22 for detecting a lowermost part of the approximated curve obtained in the curve approximating part 20 as the lower base of the chin of the human face.
  • First, the image scanning part 10 provides a function of obtaining a facial portrait for visual identification attached to, for example, a public ID such as a passport or a driver's license, or to a private ID such as an employee ID card, a student ID card or a membership card; in other words, it obtains the face image G, which has no background and mainly consists of the human face facing the front, as digital image data including each pixel data of R (red), G (green) and B (blue) by using an image pickup sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • More specifically, the CCD of a digital still camera or a digital video camera, a CMOS camera, a vidicon camera, an image scanner, a drum scanner and so on may be used. The image scanning part 10 also provides a function of analog to digital (A/D) converting the face image G optically scanned by the image pickup sensor and sequentially sending the digital image data to the face detecting part 12.
  • In addition, the image scanning part 10 has a data storing function in which the scanned face image data can be properly stored in a storage device such as a hard disk drive (HDD) and in a storage medium such as DVD-ROM. When the face image is supplied as digital image data through a network and a storage medium, the image scanning part 10 becomes unnecessary or functions as a communication part or an interface (I/F).
  • Next, the face detecting part 12 provides a function of detecting the human face from the face image G scanned in the image scanning part 10 and setting the face detecting frame F at the detected part.
  • This face detecting frame F has a size (an area) including both eyes and the lips of the human face with the nose centered but not including the chin, which will be described later.
  • In addition, although a detection algorithm for the human face by the face detecting part 12 is not especially limited, a conventional method can be utilized as described in the following document, for example:
  • H. A. Rowley, S. Baluja and T. Kanade,
  • “Neural network-based face detection”
  • IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, 1998.
  • According to the technology described in this document, a face image of an area including both eyes and the lips of the human face but not including the chin is created, a neural network is trained by using such images, and the human face is then detected by using the trained neural network. According to the disclosed technology mentioned above, the area from both eyes to the lips is detected as the face image area.
  • The size of the face detecting frame F is not fixed and can be increased or decreased depending on the size of the target face image.
  • The chin detecting window setting part 14 has a function of setting the chin detecting window W with a size including the chin of the human face at a lower part of the face detecting frame F set by the face detecting part 12. In other words, the chin detecting window W selects, from the face image G, the target area in which the following parts accurately detect the outline including the chin lower base of the human face.
  • The edge calculating part 16 provides a function of calculating an edge strength distribution within the chin detecting window W. As will be described later, the primary differentiation type edge strength distribution is obtained by using a Sobel edge detection operator or the like.
  • The pixel selecting part 18 provides a function of selecting pixels having an edge strength with a threshold value or more based on the edge strength distribution obtained by the edge calculating part 16. As will be described later, the candidate pixels obtained with the Sobel edge detection operator are narrowed down by using a sign inversion of the edge obtained with a secondary differentiation filter (Laplacian filter).
  • The curve approximating part 20 provides a function of obtaining an approximated curve to match a distribution of each pixel selected in the pixel selecting part 18. As will be described later, the chin outline of the human face is obtained in a curve manner by using a least-square method by a quadratic function as in the following formula.
    y = a × (x − x0)^2 + b   Formula 1
  • In this formula, y denotes the vertical coordinate, x denotes the horizontal coordinate and x0 denotes the horizontal center of the chin detecting window.
  • When “a” and “b” are calculated by using this formula and a least-square method, “b” expresses the chin lower base (a<0).
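Because x0 is fixed at the horizontal center of the chin detecting window, Formula 1 becomes linear in “a” and “b” once u = (x − x0)^2 is substituted, so the fit reduces to ordinary linear least squares. The following sketch illustrates this; it is not the patent's implementation.

```python
# Sketch: least-square fit of Formula 1, y = a*(x - x0)^2 + b, with x0
# fixed at the horizontal centre of the chin detecting window.
# With u = (x - x0)^2 this is an ordinary linear fit y = a*u + b.
# Assumes the points span at least two distinct values of |x - x0|.

def fit_formula1(points, x0):
    """points: iterable of (x, y) edge pixels; returns (a, b)."""
    us = [(x - x0) ** 2 for x, _ in points]
    ys = [y for _, y in points]
    n = len(us)
    su, sy = sum(us), sum(ys)
    suu = sum(u * u for u in us)
    suy = sum(u * y for u, y in zip(us, ys))
    a = (n * suy - su * sy) / (n * suu - su * su)
    b = (sy - a * su) / n
    return a, b
```

In image coordinates, where y increases downward, a < 0 and the vertex value b is the largest y on the fitted parabola, i.e. the lowest point of the chin outline.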
  • The chin detecting part 22 provides a function of detecting the lowermost part of the approximated curve obtained in the curve approximating part 20 as the lower base of the chin of the human face. As shown in FIG. 9, the chin lower base may be expressly indicated by attaching a noticeable marker M to the detected chin lower base.
  • In addition, each of the parts 10 to 22 and so on configuring the chin detecting system 100 is actually realized by a computer system such as a PC, which is configured by hardware in the form of a CPU, RAM and the like and by a special computer program (software) implementing the flow shown in FIG. 3.
  • In the hardware for realizing the chin detecting system 100 as shown in FIG. 2, for example, through various internal/external buses 47 such as a processor bus, a memory bus, a system bus and an I/O bus which are configured by a PCI (Peripheral Component Interconnect) bus, an ISA (Industrial Standard Architecture) bus and so on, there are bus-connected to each other: a CPU (Central Processing Unit) 40 for performing various controls and arithmetic processing; a RAM (Random Access Memory) 41 used for a main storage; a ROM (Read Only Memory) 42 which is a read-only storage device; a secondary storage 43 such as a hard disk drive (HDD) and a semiconductor memory; an output unit 44 configured by a monitor (an LCD (liquid crystal display) or a CRT (cathode-ray tube)) and so on; an input unit 45 configured by an image pickup sensor and so on such as an image scanner, a keyboard, a mouse, a CCD (Charge Coupled Device) and a CMOS (Complementary Metal Oxide Semiconductor); an I/O interface (I/F) 46; and so on.
  • Then, for example, various control programs and data that are supplied through a storage medium such as a CD-ROM, DVD-ROM and a flexible disk (FD) and through a communication network (LAN, WAN, Internet and so on) N are installed on the secondary storage 43 and so on. At the same time, the programs and data are loaded onto the main storage 41 if necessary. According to the programs loaded onto the main storage 41, the CPU 40 performs a specific control and arithmetic processing by using various resources. The processing result (processing data) is output to the output unit 44 through the bus 47 and displayed. The data is properly stored and saved (updated) in the database created by the secondary storage 43 if necessary.
  • A description will now be given about an example of a chin detecting method using the chin detecting system 100 having such a configuration with reference to FIGS. 3-13.
  • FIG. 3 is a flowchart showing an example of a chin detecting method for the face image G to be actually detected.
  • First, as shown in step S101, after the face detecting part 12 detects the face included in the face image G, which has been scanned in the image scanning part 10 and from which the chin will be detected, the face detecting frame F for specifying the detected human face is set.
  • For example, since the image from which the chin will be detected in the invention is limited to the image of one human face as shown in FIG. 6, the location of the human face is first specified by the face detecting part 12 and then the rectangular-shaped face detecting frame F is set on the human face as shown in FIG. 7.
  • In the case of the face detecting frame F as shown in the Figure, although the face detecting frame F has a size (an area) including both eyes and the lips of the human face with the nose centered but not including the chin, the size and shape are not limited to those exemplified if the area does not include the chin part of the human face. Also, although the human face size and the location of a display frame Y in a horizontal direction are within the regulation with regard to each face image G shown in FIGS. 6-9(a), the chin is located too low and is out of regulation.
  • Next, when the face detecting frame F has been set through the above process, the flow moves to step S103, where the chin detecting window W having a horizontally long rectangular shape is set and the chin location of the human face is specified.
  • The size and shape of the chin detecting window W are not strictly limited as long as the chin detecting window W reliably includes the area from the lower lip of the human face to the chin lower base. However, when the chin detecting window W is too large, it contains many lines confusingly similar to the chin outline, such as the shade of the chin, the wrinkles of the neck and a shirt collar, which increases the time needed to detect the true edge. When the chin detecting window W is too small, the chin lower base to be detected may not be included in some cases due to differences between individuals.
  • Therefore, when using a chin detecting window having a horizontally long rectangular shape, whose width is wider than the width of the human face and whose height is shorter than the width of the human face, it is conceivable that the chin outline including the chin lower base can be reliably captured while confusingly similar parts such as a shirt collar are eliminated. Although the chin detecting window W is set in contact with the lower side of the face detecting frame F in the example of FIG. 8, the chin detecting window W does not always have to contact the face detecting frame F; it suffices if a specific positional relationship is kept between the face detecting frame F and the chin detecting window W.
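A sketch of such a window placement is shown below; the 1.2 and 0.4 proportions are illustrative assumptions only, since the invention does not fix exact ratios.

```python
# Sketch: placing a horizontally long chin detecting window under the
# face detecting frame.  The 1.2 / 0.4 proportions are assumptions for
# illustration, not values specified by the patent.

def chin_window(face_left, face_top, face_width, face_height):
    """Return (left, top, width, height) of the chin detecting window:
    wider than the face width, shorter in height than the face width,
    placed just below the face detecting frame and centred on it."""
    w = int(face_width * 1.2)                  # wider than the face width
    h = int(face_width * 0.4)                  # shorter than the face width
    left = face_left - (w - face_width) // 2   # centred on the frame
    top = face_top + face_height               # touching the lower side
    return left, top, w, h
```

The window only needs a fixed positional relationship to the frame, so the same rule works for any detected face size.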
  • Next, when the chin detecting window W has been set at the target image, the flow moves to step S105, where the luminance (Y) of each pixel within the chin detecting window W is calculated, and the primary differentiation type edge strength distribution within the chin detecting window W is obtained based on the luminance values by using a primary differentiation type (difference type) edge detection operator typified by the “Sobel edge detection operator” and the like.
  • FIGS. 12(a) and 12(b) show this “Sobel edge detection operator”. In the operator (filter) shown in FIG. 12(a), a horizontal edge is emphasized by weighting each group of three pixel values located in the left and right columns among the eight pixel values surrounding a target pixel. In the operator (filter) shown in FIG. 12(b), a vertical edge is emphasized by weighting each group of three pixel values located in the upper and lower rows among the eight pixel values surrounding a target pixel; together, the two operators detect vertical and horizontal edges.
  • The edge strength can then be obtained by calculating the sum of the squares of the results generated by these operators and taking the square root. As described above, other primary differentiation type edge detection operators such as “Roberts” and “Prewitt” can also be applied in place of the “Sobel edge detection operator”.
  • FIG. 4 shows a relationship between the luminance (vertical axis) of the face image G and the pixel location (horizontal axis) thereof. Since the luminance changes sharply at the edge part in the image such as the chin outline, a parabola-shaped approximated curve can be obtained by using a primary differentiation type (difference type) edge detection operator such as the “Sobel edge detection operator”.
  • Next, when the edge strength distribution within the chin detecting window W has been obtained in such a manner, the flow moves to step S107, where a threshold value is calculated from the edge strength distribution. The reason for this is that, as described above, since the edge strength is greatly affected by photographing conditions (lighting conditions) and so on, it is difficult to distinguish the edge corresponding to the chin outline from edges in other areas by the edge strength alone.
  • Although the threshold value for determining the pixels is not especially limited, it may be set, for example, at one-tenth of the maximum edge strength detected in the chin detecting window W, and the pixels having an edge stronger than this threshold value are selected as candidate pixels for obtaining the chin lower base.
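This threshold rule can be sketched as follows, using the one-tenth-of-maximum example given above:

```python
# Sketch of the threshold step: keep pixels whose edge strength is at
# least one-tenth of the maximum strength found in the window (the
# example ratio given in the text; the patent does not fix the ratio).

def select_candidates(strength):
    """strength: 2-D list strength[y][x] of edge strengths inside the
    chin detecting window.  Returns [(x, y), ...] of candidate pixels."""
    peak = max(max(row) for row in strength)
    threshold = peak / 10.0
    return [(x, y)
            for y, row in enumerate(strength)
            for x, v in enumerate(row)
            if v >= threshold]
```

Deriving the threshold from the window's own maximum makes the selection adapt to lighting conditions, which is why a fixed absolute threshold is avoided.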
  • Next, when the threshold value for sorting the pixel values has been determined, the process moves to step S111, where only the pixels whose edge strength exceeds the threshold value are selected by scanning in the vertical direction, with each pixel on the upper side of the chin detecting window W serving as a base point, as shown in FIG. 10; the pixels below the threshold value are eliminated.
  • FIG. 10 shows, in simplified form, the distribution of the pixels thus selected (those exceeding the threshold value). The pixels having an edge strength at or above the threshold value are identified by scanning each line in a non-interlaced manner, that is, scanning in the X-direction within the chin detecting window W starting from its upper left and advancing sequentially in the Y-direction.
  • The reason for scanning from the upper left of the chin detecting window W is that the candidate pixel at or above the threshold value that appears earliest in the Y-direction is identified as a potential candidate for the chin lower base, which makes it possible to detect the pixels corresponding to the chin outline effectively. In other words, since edges confusingly similar to the chin outline, such as the wrinkles of the neck and a shirt collar located below the actual chin outline, are often stronger than the edge of the actual chin outline, the lower edges should be given low priority.
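The top-down scan described above can be sketched as follows. This is an assumed reading of the procedure, not the patented implementation: each column of the window is scanned from its upper side, and every above-threshold pixel is recorded in encounter order, so the uppermost hit in each column naturally comes first and lower edges (neck wrinkles, shirt collar) are de-prioritised.

```python
def candidates_per_column(strengths, threshold):
    """Scan each column from the top of the chin detecting window and
    collect the y-coordinates of pixels exceeding the threshold, in
    top-to-bottom order (uppermost candidate first)."""
    height, width = len(strengths), len(strengths[0])
    candidates = {}
    for x in range(width):          # base point on the upper side of W
        for y in range(height):     # scan downward in the Y-direction
            if strengths[y][x] > threshold:
                candidates.setdefault(x, []).append(y)
    return candidates
```

Each dictionary value lists a column's candidates with the earliest (uppermost) pixel first, which is the priority order the scanning direction is meant to produce.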
  • Next, when the pixels having an edge strength exceeding the threshold value have been selected, the process moves to step S113, where a sign inversion of a secondary differentiation type edge is detected in each row in order to narrow down (identify) the pixel having the maximum edge strength in each pixel row (Y-direction) among the selected pixels.
  • When identifying the candidate pixel, it is necessary to consider how sharply the luminance changes. When the luminance changes slowly, as shown in FIG. 4, the primary differentiation type Sobel edge strength also changes slightly and slowly, as shown in FIG. 5(a). Once the threshold value is reached and exceeded, the number of candidate pixels increases, which can lead to an error in determining the chin lower base.
  • For this reason, by detecting the edge sign inversion using a secondary differentiation type edge detection filter (Laplacian filter) as shown in FIG. 13, a single pixel is determined among the plural candidate pixels in each row, as shown in FIG. 10 (and FIG. 11).
  • For example, when plural pixels are selected in each of the rows “a” to “g” as a result of searching for the pixels having an edge strength at or above the threshold value as shown in FIG. 10, the uppermost pixel is selected as the candidate pixel forming the chin outline in rows “a”, “b”, “d”, “f” and “g” in FIG. 11, while the lowermost pixel is selected as the candidate pixel forming the chin outline in rows “c” and “e” in FIG. 11.
  • After that, when the candidate pixels have finally been narrowed down from the many pixels exceeding the threshold value, the process moves to step S115, where the above-described approximated curve is fitted to the distribution of the selected pixels, and the chin lower base is obtained.
  • When the chin lower base has been detected, a marker M is attached to it as shown in FIGS. 9(a) and 9(b), and the entire human face is moved so that the marker M is located at the proper (regulation) location of the chin lower base.
  • In FIG. 9(a), since the chin lower base of the human face is located quite low, the chin lower base can be brought to the regulation location by moving the human face vertically upward as shown in FIG. 9(b). Although the image ends at the person's neck in FIG. 9(a) and so on, the image below the neck is assumed to continue as it is.
  • As described above, since the lower base of the chin is detected based on the edge strength distribution within the chin detecting window, which is set by using a publicly-known human face detecting method, the chin lower base of the human face, which is difficult to detect from a face image, can be detected robustly, accurately and quickly.
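The curve-fitting step above (a least-square fit by a quadratic function, per claim 6) can be sketched as follows. This is a self-contained illustration that solves the 3x3 normal equations directly; the function names are chosen for this example.

```python
def fit_parabola(points):
    """Least-squares fit of y = a*x^2 + b*x + c to (x, y) points,
    solving the normal equations by Gaussian elimination."""
    sx = [sum(x ** k for x, _ in points) for k in range(5)]  # sums of x^0..x^4
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sx2y = sum(x * x * y for x, y in points)
    # augmented matrix of the normal equations
    m = [[sx[4], sx[3], sx[2], sx2y],
         [sx[3], sx[2], sx[1], sxy],
         [sx[2], sx[1], sx[0], sy]]
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j != i:
                f = m[j][i]
                m[j] = [a - f * b for a, b in zip(m[j], m[i])]
    return m[0][3], m[1][3], m[2][3]   # a, b, c

def chin_lower_base(points):
    """Vertex of the fitted parabola; with the image Y-axis growing
    downward, this vertex is the lowermost point of the chin outline."""
    a, b, c = fit_parabola(points)
    x = -b / (2 * a)
    return x, a * x * x + b * x + c
```

For candidate pixels lying exactly on y = -x^2 + 4x + 6, the fit recovers those coefficients and the vertex (2, 10), which would be reported as the chin lower base.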

Claims (10)

1. A chin detecting method for detecting a lower base of a chin of a human face from an image with the human face included therein, the method comprising:
detecting a face image of an area including both eyes and lips of the human face but excluding the chin;
setting a chin detecting window with a size including the chin of the human face at a lower part of the detected face image;
detecting pixels having an edge strength with at least a threshold value based on an edge strength distribution by calculating the edge strength distribution within the chin detecting window; and
thereafter obtaining an approximated curve to match a distribution of each of the detected pixels and identifying a lowermost part of the approximated curve as the lower base of the chin of the human face.
2. A chin detecting method for detecting a lower base of a chin of a human face from an image with the human face included therein, the method comprising:
detecting a face image of an area including both eyes and lips of the human face but excluding the chin;
setting a chin detecting window with a size including the chin of the human face at a lower part of the detected face image;
detecting pixels having an edge strength with at least a threshold value by calculating a primary differentiation type edge strength distribution within the chin detecting window and by calculating the threshold value from the primary differentiation type edge strength distribution;
identifying select pixels to be used from the pixels by using a sign inversion of a secondary differentiation type edge; and
thereafter obtaining an approximated curve to match a distribution of the selected pixels by using a least-square method and identifying a lowermost part of the approximated curve as the lower base of the chin of the human face.
3. A chin detecting method according to claim 1 wherein the chin detecting window has a horizontally long rectangular shape, a width of the chin detecting window is wider than a width of the human face and a height of the chin detecting window is shorter than a width of the human face.
4. A chin detecting method according to claim 2 wherein the primary differentiation type edge strength distribution is obtained by using a Sobel edge detection operator.
5. A chin detecting method according to claim 2 wherein the secondary differentiation type edge is obtained by using a Laplace edge detection operator.
6. A chin detecting method according to claim 1 wherein the approximated curve is obtained by using a least-square method by a quadratic function.
7. A chin detecting system for detecting a lower base of a chin of a human face from an image with the human face included therein comprising:
an image scanning part for scanning the image with the human face included therein;
a face detecting part for detecting an area including both eyes and lips of the human face but excluding the chin from the image scanned in the image scanning part and for setting a face detecting frame in the detected area;
a chin detecting window setting part for setting a chin detecting window with a size including the chin of the human face at a lower part of the face detecting frame;
an edge calculating part for calculating an edge strength distribution within the chin detecting window;
a pixel selecting part for selecting pixels having an edge strength with at least a threshold value based on the edge strength distribution obtained by the edge calculating part;
a curve approximating part for obtaining an approximated curve to match a distribution of each of the pixels selected in the pixel selecting part; and
a chin detecting part for detecting a lowermost part of the approximated curve obtained in the curve approximating part as the lower base of the chin of the human face.
8. A chin detecting system according to claim 7 wherein the pixel selecting part detects pixels having the edge strength with at least the threshold value by calculating the threshold value from a primary differentiation type edge strength distribution calculated in the edge calculating part, and then selects certain pixels to be used from the pixels by using a sign inversion of a secondary differentiation type edge.
9. A chin detecting program for detecting a lower base of a chin of a human face from an image with the human face included therein, the program making a computer realize the parts of claim 7.
10. A chin detecting program according to claim 9 wherein the pixel selecting part detects pixels having the edge strength with at least the threshold value by calculating the threshold value from a primary differentiation type edge strength distribution calculated in the edge calculating part, and then selects certain pixels to be used from the pixels by using a sign inversion of a secondary differentiation type edge.
US11/004,648 2003-12-05 2004-12-03 Chin detecting method, chin detecting system and chin detecting program for a chin of a human face Abandoned US20060010582A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003-407911 2003-12-05
JP2003407911A JP2005165983A (en) 2003-12-05 2003-12-05 Method for detecting jaw of human face, jaw detection system, and jaw detection program

Publications (1)

Publication Number Publication Date
US20060010582A1 true US20060010582A1 (en) 2006-01-19

Family

ID=34650325

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/004,648 Abandoned US20060010582A1 (en) 2003-12-05 2004-12-03 Chin detecting method, chin detecting system and chin detecting program for a chin of a human face

Country Status (4)

Country Link
US (1) US20060010582A1 (en)
JP (1) JP2005165983A (en)
TW (1) TW200527319A (en)
WO (1) WO2005055144A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5095182B2 (en) * 2005-12-01 2012-12-12 株式会社 資生堂 Face classification device, face classification program, and recording medium on which the program is recorded
US8417033B2 (en) * 2007-04-27 2013-04-09 Hewlett-Packard Development Company, L.P. Gradient based background segmentation and enhancement of images

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5883982A (en) * 1995-10-24 1999-03-16 Neopath, Inc. Astigmatism measurement apparatus and method based on a focal plane separation measurement
US5933527A (en) * 1995-06-22 1999-08-03 Seiko Epson Corporation Facial image processing method and apparatus
US6330348B1 (en) * 1999-01-21 2001-12-11 Resolution Sciences Corporation Method and apparatus for measurement of microtome performance

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3124597B2 (en) * 1991-11-26 2001-01-15 グローリー工業株式会社 Edge detection method and image recognition method using the same
JPH0877334A (en) * 1994-09-09 1996-03-22 Konica Corp Automatic feature point extracting method for face image
JPH11306372A (en) * 1998-04-17 1999-11-05 Sharp Corp Method and device for picture processing and storage medium for storing the method
JP3638845B2 (en) * 1999-12-24 2005-04-13 三洋電機株式会社 Image processing apparatus and method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070154095A1 (en) * 2005-12-31 2007-07-05 Arcsoft, Inc. Face detection on mobile devices
US20070154096A1 (en) * 2005-12-31 2007-07-05 Jiangen Cao Facial feature detection on mobile devices
US7643659B2 (en) * 2005-12-31 2010-01-05 Arcsoft, Inc. Facial feature detection on mobile devices
US7953253B2 (en) 2005-12-31 2011-05-31 Arcsoft, Inc. Face detection on mobile devices
US20090244660A1 (en) * 2008-03-26 2009-10-01 Seiko Epson Corporation Coloring image generating apparatus and coloring image generating method
US8149465B2 (en) * 2008-03-26 2012-04-03 Seiko Epson Corporation Coloring image generating apparatus and coloring image generating method for generating a coloring image from an image using an edge intensity frequency distribution of the image
CN102914286A (en) * 2012-09-12 2013-02-06 福建网龙计算机网络信息技术有限公司 Method for automatically detecting user sitting posture based on handheld equipment
CN104732197A (en) * 2013-12-24 2015-06-24 富士通株式会社 Target line detection device and method
US9916494B2 (en) 2015-03-25 2018-03-13 Alibaba Group Holding Limited Positioning feature points of human face edge

Also Published As

Publication number Publication date
TW200527319A (en) 2005-08-16
JP2005165983A (en) 2005-06-23
WO2005055144A1 (en) 2005-06-16

Similar Documents

Publication Publication Date Title
CN108230252B (en) Image processing method and device and electronic equipment
US7460705B2 (en) Head-top detecting method, head-top detecting system and a head-top detecting program for a human face
US6526161B1 (en) System and method for biometrics-based facial feature extraction
US8737740B2 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
JP6655878B2 (en) Image recognition method and apparatus, program
US8836777B2 (en) Automatic detection of vertical gaze using an embedded imaging device
US9092868B2 (en) Apparatus for detecting object from image and method therefor
KR100480781B1 (en) Method of extracting teeth area from teeth image and personal identification method and apparatus using teeth image
US20050196044A1 (en) Method of extracting candidate human region within image, system for extracting candidate human region, program for extracting candidate human region, method of discerning top and bottom of human image, system for discerning top and bottom, and program for discerning top and bottom
US10079974B2 (en) Image processing apparatus, method, and medium for extracting feature amount of image
US20130202159A1 (en) Apparatus for real-time face recognition
JP4764172B2 (en) Method for detecting moving object candidate by image processing, moving object detecting method for detecting moving object from moving object candidate, moving object detecting apparatus, and moving object detecting program
EP3213504B1 (en) Image data segmentation
JP2002216129A (en) Face area detector, its method and computer readable recording medium
JP2005190400A (en) Face image detection method, system, and program
US20060010582A1 (en) Chin detecting method, chin detecting system and chin detecting program for a chin of a human face
JP2005134966A (en) Face image candidate area retrieval method, retrieval system and retrieval program
JP2002269545A (en) Face image processing method and face image processing device
CN108491820B (en) Method, device and equipment for identifying limb representation information in image and storage medium
JP6698058B2 (en) Image processing device
RU2329535C2 (en) Method of automatic photograph framing
JP2007219899A (en) Personal identification device, personal identification method, and personal identification program
JP3963789B2 (en) Eye detection device, eye detection program, recording medium for recording the program, and eye detection method
JP2013029996A (en) Image processing device
Amjed et al. A robust geometric skin colour face detection method under unconstrained environment of smartphone database

Legal Events

Date Code Title Description
AS Assignment

Owner name: SEIKO EPSON CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NAGAHASHI, TOSHINORI;HYUGA, TAKASHI;REEL/FRAME:015773/0023

Effective date: 20050215

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION