WO2002093493A1 - Method for acquiring a 3D face digital data from 2D photograph and a system performing the same

Method for acquiring a 3D face digital data from 2D photograph and a system performing the same

Info

Publication number
WO2002093493A1
WO2002093493A1 (Application No. PCT/KR2002/000896)
Authority
WO
WIPO (PCT)
Prior art keywords
nose
eyes
mouth
tip
distinctive
Prior art date
Application number
PCT/KR2002/000896
Other languages
French (fr)
Inventor
Wook-Jin Chung
Jae-Chang Shim
Original Assignee
Face3D Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Face3D Co., Ltd. filed Critical Face3D Co., Ltd.
Publication of WO2002093493A1 publication Critical patent/WO2002093493A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation


Abstract

The present invention discloses a method of acquiring a 3D facial feature from a two-dimensional (2D) face image and a system for implementing the same method. The method compares distinctive facial features abstracted from the 2D face image with features abstracted from 3D face data which are stored in a database built from a large number of real human faces, thereby retrieving from the database the 3D face that is most similar to the 2D face image. The system for acquiring the 3D facial feature from the 2D face image is connected to a communication network by way of a web server, the system comprising an abstracting means for abstracting a distinctive facial feature from photograph data provided by a client when the photograph data is transmitted by way of the web server from the client, a comparing means for comparing the distinctive facial feature abstracted from the photograph data with a plurality of feature information abstracted from a plurality of 3D face data, and a 3D facial feature constructing system for constructing a 3D facial feature using the photograph data provided by the client and the 3D face data having the feature information which substantially matches the distinctive facial feature of the photograph data.

Description

METHOD FOR ACQUIRING A 3D FACE DIGITAL DATA FROM 2D PHOTOGRAPH AND A SYSTEM PERFORMING THE SAME
Technical Field
The present invention relates to a system and a method for acquiring a three-dimensional (3D) facial feature from a two-dimensional (2D) face image such as a photograph, wherein the 3D facial feature is acquired by comparing distinctive features abstracted from the 2D face image with feature information abstracted from a plurality of 3D facial feature data that are taken from many real human faces and accumulated in a database, and retrieving from the database the 3D facial feature data that is most similar to the 2D face image.
Background of the Invention
3D feature data consists of coordinates in 3D space and can serve many applications, such as fabricating a physical object or electronic commerce (e-commerce), because it can express the stereoscopic conformation of an object even though it does not express color the way a 2D image does. An exemplary application area of 3D feature data is the 3D camera system. One kind of 3D camera system uses a stereo technique in which the height information of an object is acquired by imitating the binocular vision of the human eyes. Alternatively, a 3D camera system can be realized by acquiring real-valued feature information of an object with a non-contact method in which a laser beam is scanned across the object in the X-axis direction and detected in the Y-axis direction, so that height information in the Z-axis direction is produced by laser optical triangulation.
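For illustration only, the triangulation geometry mentioned above can be sketched as follows. The angle-based formulation and all names (baseline, scan angle, detection angle) are our assumptions for a minimal sketch; the patent does not specify the scanner's geometry.

```python
import math

def triangulate_height(baseline_mm: float, scan_angle_rad: float,
                       detect_angle_rad: float) -> float:
    """Height (Z) of a surface point by laser optical triangulation.

    A laser projector and a detector sit baseline_mm apart. The laser
    leaves the projector at scan_angle_rad and the reflected spot is
    seen by the detector at detect_angle_rad; the two rays intersect
    at the surface point, whose height follows from the sine rule.
    """
    # Angle at the surface point closing the triangle.
    apex = math.pi - scan_angle_rad - detect_angle_rad
    # Distance from projector to the surface point (sine rule).
    range_mm = baseline_mm * math.sin(detect_angle_rad) / math.sin(apex)
    # The vertical component of that range is the height above the baseline.
    return range_mm * math.sin(scan_angle_rad)

# Example: 100 mm baseline, both rays at 60 degrees -> about 86.6 mm.
print(triangulate_height(100.0, math.radians(60), math.radians(60)))
```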
However, when an actual object does not exist in the world, or cannot be transferred or delivered to a place where a 3D feature input system such as a 3D camera is installed, transforming a 2D photograph containing the object into 3D feature data is the only way to acquire the 3D feature of the object. Likewise, even when the actual object is located somewhere the 3D camera system cannot be used, the technique of transforming a 2D image into a 3D feature is useful. In the conventional art, a method of transforming 2D face image data into 3D facial feature data was implemented using commercial software such as "3D-Max".
However, 2D face image data usually does not contain the height information that is necessary to obtain a 3D facial feature. Accordingly, it has been substantially difficult to obtain a 3D facial feature that closely resembles the real human face from a 2D face image using the conventional art, such as the software mentioned above.
Further, a 3D facial feature transformed from a 2D face image in the conventional art is often expressed with features different from those of the real human face, because the transformation result depends on the operator's own ability of expression. Accordingly, transforming the 2D face image into a realistic 3D facial feature requires the operator's artistic sense of image transformation. That is, reconstructing the 3D facial feature from 2D face image information requires not only the 2D face image information but also the height information of features that protrude, such as the nose and ears. However, even if the operator has great knowledge of the human face and is skilled with the software used for 3D feature correction, obtaining a realistic 3D facial feature takes a great deal of time and trial and error because the height information is mostly lost.
Accordingly, the cost of acquiring the 3D facial feature increases.
There has been another approach to obtaining the 3D facial feature from the 2D face image: combining a plurality of parts into 3D facial feature data, wherein the parts are acquired by dividing a plurality of 3D facial feature data and are stored in a database. That is, a large number of 3D facial feature data are first collected, each of the data is then divided into many fractions, and the divided facial fractions (parts of facial features) are stored in the database. Each time a specific 3D facial feature is to be reconstructed, the parts of the 3D facial feature are combined in a suitable combination. The method is similar to composing a montage. However, most of the recognition by which human eyes identify a human face is determined by the colors and outline of the face, and only a small part of the recognition uses the height information of the 3D facial feature. Accordingly, it is very difficult to produce an identifiable human face by composing parts of 3D facial features. Further, it is much more difficult to reconstruct a 3D facial feature with a montage technique than to compose a 2D face image. Further, the method of combining parts of 3D facial features cannot satisfy a customer's demand for a realistic and delicate facial structure.
Disclosure of the Invention
The present invention discloses a method for acquiring 3D facial feature data from 2D face image data. The method includes abstracting distinctive facial features from the 2D face image, automatically obtaining several 3D facial feature data that are most similar to the 2D face image from a database having a plurality of 3D facial feature data, providing the several obtained 3D facial feature data to a client so that the client can select the most desirable 3D feature data out of the several provided face feature data, and further correcting the 3D facial feature data selected by the client if needed.
To acquire the 3D facial feature in accordance with the present invention, three techniques are used: a technique such as pattern recognition to abstract distinctive features from the 2D face image data, a technique to abstract distinctive feature information from each of the 3D facial feature data stored in the database, and a computerized technique of comparing the distinctive features abstracted from the 2D face image data with the distinctive feature information abstracted from the 3D facial feature data and automatically retrieving the 3D facial feature data most similar to the 2D face image data. After the most similar 3D facial feature data is retrieved from the database, an editing professional corrects and edits the retrieved 3D facial feature data to minimize acquisition time and satisfy the client's demand.
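A minimal sketch of the compare-and-retrieve step is given below, under the assumption that both the 2D and 3D features are reduced to numeric vectors; the vector layout, normalization, and Euclidean distance are illustrative choices of ours, not taken from the disclosure.

```python
import numpy as np

def retrieve_similar(query_features: np.ndarray,
                     db_features: np.ndarray,
                     top_k: int = 100) -> np.ndarray:
    """Return indices of the top_k database faces closest to the query.

    query_features: 1-D vector abstracted from the client's 2D photos
                    (e.g. eye spacing, nose length, mouth width, ...).
    db_features:    (n_faces, n_features) matrix abstracted offline
                    from the 3D facial feature database.
    """
    # Normalize each feature so no single measurement dominates.
    mean = db_features.mean(axis=0)
    std = db_features.std(axis=0) + 1e-9
    q = (query_features - mean) / std
    db = (db_features - mean) / std
    # Euclidean distance in normalized feature space; smallest first.
    dists = np.linalg.norm(db - q, axis=1)
    return np.argsort(dists)[:top_k]
```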
To achieve the object of the present invention, there is provided a system for acquiring a 3D facial feature from a 2D face image, wherein the system is connected to a communication network by way of a web server and comprises an abstracting means for abstracting distinctive facial features from photograph data provided by a client when the photograph data is transmitted by way of the web server from the client, a comparing means for comparing the distinctive facial features abstracted from the photograph data with a plurality of distinctive feature information abstracted from a plurality of 3D facial feature data stored in a database, and a 3D facial feature reconstructing system for reconstructing a 3D facial feature using the photograph data provided by the client and the 3D facial feature data having the distinctive feature information that matches the distinctive facial features of the photograph data.
In more detail, the client first entrusts a 3D facial feature reconstruction service provider with reconstruction of the 3D facial feature by transmitting, on-line and using a web browser 2, a frontal face view photograph and a side face view photograph to the web server 5 which is connected to the 3D facial feature reconstructing service provider through the Internet. Next, the photograph, which may be a digital photograph, is transferred to a 3D facial feature reconstructing system 8. The 3D facial feature reconstructing system 8 abstracts distinctive facial features of the eyes, a nose, a mouth, ears, eyebrows and a face outline from the face image contained in the frontal face view photograph, and abstracts distinctive features of the nose, eyes, mouth, outline of the nose and hair style from the side face view photograph. The distinctive facial features abstracted from the photographs are stored in a first storage device 9. There is a second storage device 11 storing distinctive feature information abstracted from the plurality of 3D facial feature data accumulated in a 3D facial feature database 12, wherein the 3D facial feature data are acquired by taking pictures of many real human faces using a 3D input system such as a 3D camera. The data stored in the first storage device 9 and the second storage device 11 are then inputted into an automatic similar face retrieving device 10, which compares the distinctive features abstracted from the photographs provided by the client with the distinctive feature information abstracted from the 3D facial feature data, retrieves the distinctive feature information of the 3D facial feature data that is similar to the distinctive facial features abstracted from the photographs, and stores the same into a third storage device 13. Next, the system 8 retrieves the 3D facial feature data corresponding to the distinctive feature information stored in the third storage device 13 from the 3D facial feature database 12 and stores the same into a fourth storage device 14. Next, the system 8 randomly selects 4 or 6 data out of the 3D facial feature data stored in the fourth storage device 14, texture-maps each of the selected 4 or 6 3D facial feature data with the 2D face image (photograph) provided by the client, and then stores the texture mapped 3D facial feature data into a fifth storage device 15. While storing the texture mapped 3D facial feature data, the system 8 transmits the 4 or 6 texture mapped 3D facial feature data to the web server 5 so that the client can select the most desirable 3D facial feature data. After selecting one of the 4 or 6 data, the client makes an on-line order, through the Internet, for an editing professional to correct the selected 3D facial feature data if needed, thereby completing the 3D facial feature reconstruction.
Brief Description of the Drawings
Fig. 1 is a block diagram of a system for acquiring 3D facial feature in accordance with the present invention;
Fig. 2 is a block diagram showing a method for acquiring a 3D facial feature from a 2D face image data;
Fig. 3 is a block diagram showing a method for abstracting partial distinctive features from a 2D face image of a frontal view photograph;
Fig. 4 is a block diagram showing a method for abstracting whole distinctive features from a 2D face image of a frontal view photograph;
Fig. 5 is a block diagram showing a method for abstracting distinctive feature information from a 3D facial feature data.
Preferred Embodiment of the Present Invention
The present invention will be described below in detail with reference to the accompanying drawings.
As shown in Fig. 1, a system for acquiring a 3D facial feature from 2D face image data in accordance with the present invention comprises a web server 5 connected to a client (or user) personal computer (PC) 1 through a network, i.e. the Internet, and a 3D facial feature reconstruction system 8 which is connected to the web server 5 and capable of exchanging data with it. A client (or user) can access various web pages 6 using a user PC 1 having a web browser therein. When the client, who has a 2D face image such as a photograph, wants to get a 3D facial feature from the 2D face image, the client requests a certification process from the web server 5, receives a user identification number 3 from the web server 5 after successfully completing the certification process, and transmits the photograph to the web server 5. Once the client has received the user identification number from the web server 5, the client can log in to the web server 5 without the certification process. The web server 5, capable of exchanging data with the user PC using the TCP/IP protocol, provides a plurality of web pages, such as an initiating page and a guidance page, to the user PC. The web server 5 further provides the user PC with an upload function for receiving, on-line, the photographs containing a frontal face view and a side face view from the client. A database 7 for storing user identification information and the uploaded photographs may be employed in the web server 5.
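As a minimal sketch of the upload function, the handler below assumes the Flask framework; the patent only requires that a frontal and a side photograph reach the web server 5 over TCP/IP, so the framework, endpoint path and field names are all hypothetical.

```python
import os
from flask import Flask, request

app = Flask(__name__)
os.makedirs("uploads", exist_ok=True)

@app.route("/upload/<user_id>", methods=["POST"])
def upload(user_id: str):
    # The client sends both views in one multipart/form-data request.
    request.files["frontal_view"].save(f"uploads/{user_id}_frontal.jpg")
    request.files["side_view"].save(f"uploads/{user_id}_side.jpg")
    return "received", 200

if __name__ == "__main__":
    app.run()
```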
The web server 5 provides the user PC 1 with a web page 6 on which the texture mapped 3D facial feature data, which are formed by texture mapping the 2D face image provided by the client onto a plurality of similar 3D facial feature data corresponding to the 2D face image and are stored in a storage device 15, are displayed.
Further, the 3D facial feature reconstructing system 8 is connected to the web server 5 to exchange data with the web server 5; it abstracts distinctive features of a nose, eyes, a mouth, ears, eyebrows and a face outline from the frontal face view photograph provided by the client, abstracts further distinctive features of the nose outline, the distance from one ear to the tip of the nose, the eyes, mouth, nose and hair style from the side face view photograph, and stores the same into a first storage device 9.
Further, distinctive feature information abstracted from a plurality of 3D facial feature data accumulated in a 3D facial feature database 12 is stored in a second storage device 11, wherein the 3D facial feature data accumulated in the database 12 are acquired by taking pictures of objects (here, human faces) using a 3D camera system capable of acquiring stereoscopic feature information of the objects with a non-contact method based on the optical triangulation technique.
Next, the data stored in the storage devices 9 and 11 are transferred to an automatic similar face retrieving device 10. The automatic similar face retrieving system 10 constructs polygons in each 3D facial feature data by connecting the tip of the nose with the two eyes, the tip of the nose with both side ends of the mouth, and the centers of the eyes with both side ends of the mouth, respectively. Then, the automatic similar face retrieving system 10 calculates the similarity between the distinctive features abstracted from the 2D face image and stored in the first storage device 9 and the distinctive feature information abstracted from the 3D facial feature data and stored in the second storage device 11, to find 3D facial feature data that is similar to the 2D face image. The distinctive feature information includes distinctive features of the eyes, nose, ears, mouth, eyebrows and the face outline, a nose outline along the top of the nose (on the ridge of the nose), the distance from the ear to the tip of the nose, the style of hair, and the distance between the eyes.
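The polygon construction above can be sketched as follows; reducing each polygon to its area is our illustrative choice of comparable quantity, and the landmark coordinates are hypothetical pixel positions, since the patent does not state how the polygons are compared.

```python
def triangle_area(a, b, c):
    """Area of a triangle given three (x, y) landmarks."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return 0.5 * abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))

def polygon_features(nose_tip, left_eye, right_eye, mouth_left, mouth_right):
    """Areas of the polygons named in the text: nose-to-eyes,
    nose-to-mouth-corners, and the eye-centers-to-mouth-corners
    quadrilateral (split into two triangles for the area)."""
    return [
        triangle_area(nose_tip, left_eye, right_eye),
        triangle_area(nose_tip, mouth_left, mouth_right),
        triangle_area(left_eye, right_eye, mouth_right)
        + triangle_area(left_eye, mouth_right, mouth_left),
    ]

# Example with hypothetical pixel coordinates.
print(polygon_features((50, 60), (35, 40), (65, 40), (40, 80), (60, 80)))
```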
Therefore, the automatic similar face retrieving device 10 searches for the distinctive feature information corresponding to the 3D facial feature data that is similar to the 2D face image, based on the calculation, and stores the search result in a third storage device 13. The automatic similar face retrieving device 10 can retrieve the 3D facial feature data similar to the 2D face image because, when the position information abstracted from the 3D facial feature data is transformed into 2D information, the position information of the eyes, nose and mouth in the 3D facial feature coincides with the edge information of the eyes, nose and mouth of the same person in the 2D face image. Next, about 100 of the 3D facial feature data similar to the 2D face image provided by the client are read out from the database 12 and stored into a fourth storage device 14.
Next, 4 or 6 data randomly chosen out of the 100 data are each texture mapped with the 2D face image provided by the client, and the 4 or 6 texture mapped 3D facial feature data are stored in a fifth storage device 15. The texture mapped data are transmitted to the user PC 1 via the web server 5, and the client then selects the desired 3D facial feature data that satisfies the client's demand.
When none of the 4 or 6 texture mapped data satisfies the client's demand, another 4 or 6 data, excluding those chosen earlier, are randomly selected from the fourth storage device 14, texture mapped again with the 2D face image provided by the client, and re-provided to the user PC 1. This process repeats until the client is satisfied with one of the texture mapped data.
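A minimal sketch of this propose-and-retry loop follows; the texture_map placeholder and all names are ours, since the patent leaves the rendering step and the selection bookkeeping unspecified.

```python
import random

def texture_map(face_id, photo):
    # Placeholder for the actual rendering step, which the patent
    # leaves unspecified: project the 2-D photo onto the 3-D mesh.
    return {"face_id": face_id, "texture": photo}

def propose_candidates(similar_ids, photo, already_shown, batch=4):
    """Pick 4 (or 6) not-yet-shown faces from the ~100 retrieved,
    texture-map each with the client's photograph, and remember
    what was shown so a rejection triggers a fresh batch."""
    remaining = [i for i in similar_ids if i not in already_shown]
    chosen = random.sample(remaining, min(batch, len(remaining)))
    already_shown.update(chosen)
    return [texture_map(i, photo) for i in chosen]

# Example: two rounds over a pool of 10 candidate ids.
shown = set()
pool = list(range(10))
print(propose_candidates(pool, "photo.jpg", shown))
print(propose_candidates(pool, "photo.jpg", shown))  # excludes round one
```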
Further, the client may request partial correction of the acquired 3D facial feature data if needed. When the client wants the reconstructed 3D facial feature corrected, the client can place an on-line order demanding correction of the 3D facial feature. Then, as soon as the web server 5 receives the order for correcting the 3D facial feature data from the client, the web server 5 transmits a message containing the client's demand to an editing professional of the 3D feature data, and the 3D facial feature data is delivered to a 3D facial feature reconstructing device 16.
Next, the corrected 3D facial feature data is transmitted to and stored into a storage device 17, so that all processes for reconstructing the 3D facial feature are completed. All of the processes are automatically implemented on-line, except for a correction process that is performed semi-automatically.
Further, a method for acquiring a 3D facial feature from a 2D face image is implemented using a system that is connected to a communication network by way of a web server. The method comprises a first step of abstracting distinctive features from face photograph data when the face photograph data is transmitted by the client by way of the web server; a second step of comparing a plurality of distinctive feature information abstracted from a plurality of 3D facial feature data accumulated in a database with the distinctive features abstracted during the first step; and a third step of constructing the 3D facial feature using the face photograph data provided by the client and the distinctive feature information matching the distinctive features of the face photograph data.
Fig. 2 shows a method of acquiring a 3D facial feature from a 2D face image. The method will be described below in detail with reference to Fig. 2. First, a client who wants to acquire the 3D facial feature from the 2D face image logs in to a web server 5 by inputting a user identification number during a certification process; the client can log in to the web server 5 once the user identification number has been allocated by the web server. (S100) Next, the client transmits a frontal face view photograph and a side face view photograph to the web server 5. (S101) Next, a service provider abstracts distinctive features from the photographs, i.e. the 2D face image provided by the client, and stores the same into a storage device. (S102) At this time, an automatic similar face retrieving system 10 is in a standby mode, having already accumulated a plurality of 3D facial feature data in a database (S103) and abstracted distinctive feature information from the 3D facial feature data and stored the same into a storage device. (S104) Next, the automatic similar face retrieving system 10 compares the distinctive features abstracted from the photographs provided by the client with the distinctive feature information abstracted from the 3D facial feature data. (S105)
Next, the automatic similar face retrieving system 10 retrieves the distinctive feature information of the 3D facial feature data that matches the photographs provided by the client and stores the same into a storage device. (S106)
Next, the' automatic similar face retrieving system 10 retrieves about 100 data of the 3D facial feature corresponding to the distinctive feature information retrieved in the step of S106 from the database 12 having a plurality of the 3D facial feature data and stores the same into the storage device 14. (S107)
Next, the automatic similar face retrieving system 10 randomly selects 4 or 6 data out of the 100 data retrieved in step S107, texture-maps the chosen 4 or 6 data with the photograph provided by the client, and stores the 4 or 6 texture mapped 3D facial feature data in a storage device. (S108) Next, the automatic similar face retrieving system 10 provides the client with the 4 or 6 texture mapped 3D facial feature data so that the client can select the most preferable 3D facial feature. (S109)
At this time, when the client cannot find desirable 3D facial feature data among the 4 or 6 texture mapped 3D facial feature data, the steps S108-S109 are repeated.
On the other hand, when the client finds a desirable 3D facial feature among the 4 or 6 texture mapped data and selects one of them, the web server 5 asks the client whether he or she wants to correct or edit the selected texture mapped 3D facial feature data. (S110)
When the client wants a correction, the 3D facial feature reconstruction service provider receives from the client, through an on-line message transmission system, a message describing how the texture mapped 3D facial feature data should be corrected, and requests an editing professional who is under contract with the service provider to correct the texture mapped 3D facial feature data by sending that data, together with the client's message, to the editing professional on-line. (S111) After the texture mapped 3D facial feature data is corrected, the corrected data is stored in the storage device 17. (S112)
As discussed above, all steps of the method of acquiring the 3D facial feature from the 2D face image proceed automatically on-line, except the correction step, and almost in real time, except for the time needed when the client wants the retrieved 3D facial feature data corrected.
Fig. 3 shows a method of abstracting partial distinctive features of a face from a frontal face view photograph. The method comprises bounding and separating the face portion from the background in the photograph using color information, exploiting the fact that the face has a skin color (S200); projecting the positions of the eyes, nose and mouth of the face (S201); and detecting the edges of the eyes, nose and mouth (S202).
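A minimal sketch of the skin-color bounding step (S200) is given below. The YCbCr conversion and the Cb/Cr thresholds are commonly used illustrative values of ours; the patent says only that color information separates face from background.

```python
import numpy as np

def bound_face_by_skin(rgb: np.ndarray):
    """Bounding box (x0, y0, x1, y1) of the skin-colored pixels.

    rgb is an (H, W, 3) uint8 image. Returns None if no skin
    pixels are found.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # RGB -> YCbCr chroma components (ITU-R BT.601 coefficients).
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Typical skin-chroma window; tune per data set.
    mask = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()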
The position information of the eyes, nose and mouth can be abstracted by detecting the edges of the eyes, nose and mouth, wherein the edges can be located by counting pixels: the pixel count changes abruptly at the edge portions. The projected face data is then acquired by accumulating the image of the feature edges along the abscissa (horizontal axis) and the ordinate (vertical axis).
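The accumulation just described, plus the threshold pick-off that the next paragraph calls fractional analysis, can be sketched as follows; the edge detector itself is left abstract because the patent does not name one.

```python
import numpy as np

def projection_profiles(edges: np.ndarray):
    """Accumulate a binary edge image along both axes (step S201).

    Rows with many edge pixels correspond to the eyes, nose and
    mouth; columns locate them horizontally. 'edges' is an (H, W)
    0/1 array from any edge detector.
    """
    horizontal = edges.sum(axis=1)  # one value per row (ordinate)
    vertical = edges.sum(axis=0)    # one value per column (abscissa)
    return horizontal, vertical

def feature_rows(horizontal: np.ndarray, threshold: float) -> np.ndarray:
    """Rows whose edge count exceeds a threshold: candidate
    y-coordinates for the eyes, nose and mouth."""
    return np.nonzero(horizontal > threshold)[0]
```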
The distinctive features of a face are abstracted from a 2D face image by fractional analysis, in which the coordinates of the positions of the nose, eyes and mouth are obtained by applying a threshold value to the projected data.
Fig. 4 shows a method of abstracting whole distinctive features from the frontal face view photograph. The method comprises a step of bounding the face portion in the photograph using color information, exploiting the fact that the skin color is distinguished from the background color in the photograph (S300); an outline abstraction step of abstracting a face outline from the face portion (S301); and a feature outline abstraction step of abstracting the outlines of features such as the tip of the nose, the philtrum, the mouth and the eyebrows from the face portion (S302).
Fig. 5 shows a method of abstracting distinctive feature information from a 3D facial feature. The method comprises a step of detecting the tip of the nose (S400), a step of detecting the positions of the mouth and eyes and formalizing the shapes of the mouth and eyes (S401), and a step of detecting the tip of the nose, the centers of the eyes and the side edges of the mouth (S402).
The step of detecting the tip of the nose (S400) comprises first detecting the nose and then detecting the ridge of the nose, the ridge lying along the central line between the eyebrows and the philtrum. The detecting-and-formalizing step (S401) resizes the 3D facial feature to a predetermined size. The detecting step (S402) first detects the position of the mouth by searching for the figure of the upper and lower lips below the nose, then detects both side edges of the mouth, and then detects the positions of the eyes by searching for the dimple portions located above the nose in the 11 o'clock and 1 o'clock directions, respectively. The feature information abstracted from the 3D facial feature data in steps S400-S402 is then transformed into 2D feature information. At this time, the position information acquired from the 2D feature information that is abstracted and transformed from the 3D facial feature information is the same as the position information directly abstracted from the face photograph (2D face image) when the face in the photograph and the 3D facial feature data are taken from the same person. That is, the position information abstracted from the 3D facial feature and from the 2D face image, respectively, coincide with each other, so that the 3D facial feature data similar to the 2D face image can be retrieved from the database.
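A minimal sketch of nose-tip detection and the 3D-to-2D transformation follows. The frontal-alignment assumption (nose tip as the vertex nearest the viewer) and the orthographic projection are ours; the patent locates the nose via its ridge and does not fix a projection model.

```python
import numpy as np

def detect_nose_tip(vertices: np.ndarray) -> np.ndarray:
    """Step S400 sketch: on a frontally aligned mesh, take the
    vertex with the largest Z (closest to the viewer) as the tip.

    vertices: (n, 3) array of (x, y, z) mesh points.
    """
    return vertices[np.argmax(vertices[:, 2])]

def project_to_2d(points3d: np.ndarray) -> np.ndarray:
    """Orthographic drop of the Z coordinate, so 3-D landmark
    positions can be compared with those found in the photograph."""
    return points3d[:, :2]

# Example: the tip of a tiny synthetic "face" and its 2-D position.
mesh = np.array([[0.0, 0.0, 1.0], [1.0, 0.5, 9.0], [2.0, 1.0, 3.0]])
tip = detect_nose_tip(mesh)
print(tip, project_to_2d(tip[None, :]))
```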
As discussed above, in the present method and system, the 3D facial feature is acquired from the 2D face image provided by the client almost in real time. Further, the method for acquiring the 3D facial feature is automatic (not manual). Accordingly, the time and cost of acquiring the 3D facial feature from the 2D face image may be greatly reduced. Further, the acquired 3D facial feature is realistic and close to the original human face, because it is acquired using a database holding a large number of 3D facial feature data taken from many real human faces. The method in accordance with the present invention provides a 3D facial feature that is more realistic and more similar to the 2D face image than a 3D facial feature constructed manually by a technician. Further, the method is convenient and user friendly for the client because it further comprises the correction step of editing the 3D facial feature acquired by the automatic process in response to the client's demand.

Claims

What is claimed is:
1. A system for acquiring a 3D facial feature from a 2D image, the system being connected to a communication network by way of a web server and comprising: an abstracting means for abstracting distinctive features from photograph data provided by a client when the photograph data is transmitted by way of the web server from the client; a comparing means for comparing the distinctive facial features abstracted from the photograph data with a plurality of distinctive feature information which is abstracted from a plurality of 3D facial feature data; and a 3D facial feature reconstructing system for reconstructing a 3D facial feature using the photograph data provided by the client and the 3D facial feature data having the distinctive feature information which substantially matches the distinctive facial features abstracted from the photograph data.
2. The system according to claim 1, wherein a plurality of the distinctive feature information is matched with the distinctive facial features from the photograph data, and the 3D facial feature is reconstructed using a plurality of 3D facial feature data corresponding to the plurality of distinctive feature information together with the photograph data provided by the client.
3. The system according to claim 1 or 2, further comprising a terminal of an editing professional for correcting the 3D facial feature reconstructed by the 3D facial feature reconstructing system, the terminal being connected to the communication network, and a transmitting means for transmitting the reconstructed 3D facial feature to the editing professional by way of the web server.
4. The system according to claim 1 or 2, wherein the 3D facial feature reconstructing system reconstructs the 3D facial feature by texture mapping 3D facial feature data retrieved from a database with the photograph data provided by the client.
5. The system according to claim 3, wherein the 3D facial feature reconstructing system reconstructs the 3D facial feature by texture mapping 3D facial feature data retrieved from a database with the photograph data provided by the client.
6. The system according to claim 1 or 2, further comprising: a first storage device 9 for storing the distinctive facial features abstracted from the photograph data; a second storage device 11 for storing a plurality of the distinctive feature information abstracted from the 3D facial feature data; and a third storage device 15 for storing reconstructed 3D facial feature data that are reconstructed using the 3D facial feature data corresponding to the distinctive feature information and the photograph data.
7. The system according to claim 3, further comprising: a first storage device 9 for storing the distinctive facial features abstracted from the photograph data; a second storage device 11 for storing a plurality of the distinctive feature information abstracted from the 3D facial feature data; and a third storage device 15 for storing reconstructed 3D facial feature data that are reconstructed using the 3D facial feature data corresponding to the distinctive feature information and the photograph data.
8. The system according to claim 1 or 2, wherein the photograph data comprises a frontal face view photograph and a side face view photograph, the distinctive facial features abstracted from the frontal face view photograph comprise features of eyes, a nose, a mouth, ears, eyebrows and a face outline, and the distinctive facial features abstracted from the side face view photograph comprise a nose outline, a distance from one ear to a tip of the nose, and features of the nose, eyes, mouth and hair style.
9. The system according to claim 3, wherein the photograph data comprises a frontal face view photograph and a side face view photograph, the distinctive facial features abstracted from the frontal face view photograph comprise features of eyes, a nose, a mouth, ears, eyebrows and a face outline, and the distinctive facial features abstracted from the side face view photograph comprise a nose outline, a distance from one ear to a tip of the nose, and features of the nose, eyes, mouth and hair style.
10. The system according to claim 4, wherein the photograph data comprises a frontal face view photograph and a side face view photograph, the distinctive facial features abstracted from the frontal face view photograph comprise features of eyes, a nose, a mouth, ears, eyebrows and a face outline, and the distinctive facial features abstracted from the side face view photograph comprise a nose outline, a distance from one ear to a tip of the nose, and features of the nose, eyes, mouth and hair style.
11. The system according to claim 5, wherein the photograph data comprises a frontal face view photograph and a side face view photograph, the distinctive facial features abstracted from the frontal face view photograph comprise features of eyes, a nose, a mouth, ears, eyebrows and a face outline, and the distinctive facial features abstracted from the side face view photograph comprise a nose outline, a distance from one ear to a tip of the nose, and features of the nose, eyes, mouth and hair style.
12. The system according to claim 6, wherein the photograph data comprises a frontal face view photograph and a side face view photograph, the distinctive facial features abstracted from the frontal face view photograph comprise features of eyes, a nose, a mouth, ears, eyebrows and a face outline, and the distinctive facial features abstracted from the side face view photograph comprise a nose outline, a distance from one ear to a tip of the nose, and features of the nose, eyes, mouth and hair style.
13. The system according to claim 7, wherein the photograph data comprises a frontal face view photograph and a side face view photograph, the distinctive facial features abstracted from the frontal face view photograph comprise features of eyes, a nose, a mouth, ears, eyebrows and a face outline, and the distinctive facial features abstracted from the side face view photograph comprise a nose outline, a distance from one ear to a tip of the nose, and features of the nose, eyes, mouth and hair style.
14. The system according to claim 8, wherein the photograph data comprises a frontal face view photograph and a side face view photograph, the distinctive facial features abstracted from the frontal face view photograph comprise features of eyes, a nose, a mouth, ears, eyebrows and a face outline, and the distinctive facial features abstracted from the side face view photograph comprise a nose outline, a distance from one ear to a tip of the nose, and features of the nose, eyes, mouth and hair style.
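The frontal- and side-view feature sets enumerated in claims 8-14 can be carried through the system in simple records such as the following sketch; the field names and types are assumptions chosen for illustration, since the claims fix no data layout.

```python
# Illustrative containers for the feature sets of claims 8-14.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class FrontalFeatures:
    eyes: Tuple[Point, Point]                  # left and right eye centers
    nose: Point                                # nose position
    mouth: Point                               # mouth center
    ears: Tuple[Point, Point]
    eyebrows: Tuple[List[Point], List[Point]]  # polylines, one per brow
    face_outline: List[Point]                  # polyline around the face

@dataclass
class SideFeatures:
    nose_outline: List[Point]  # profile curve over the nose ridge
    ear_to_nose_tip: float     # distance from the ear to the nose tip
    nose: Point
    eye: Point
    mouth: Point
    hair_style: str
```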
15. The system according to claim 1 or 2, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
16. The system according to claim 3, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
17. The system according to claim 4, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
18. The system according to claim 5, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
19. The system according to claim 6, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
20. The system according to claim 7, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
21. The system according to claim 8, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
22. The system according to claim 9, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
23. The system according to claim 10, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
24. The system according to claim 11, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
25. The system according to claim 12, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
26. The system according to claim 13, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
27. The system according to claim 14, wherein the distinctive feature information includes features of a face outline, a nose outline across the ridge of the nose, a distance from one ear to a tip of the nose, a hair style, a distance between two eyes, and polygons formed by connecting the tip of the nose and the two eyes, the tip of the nose and both side edges of the mouth, and the centers of the two eyes and the both side edges of the mouth, respectively.
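The geometry recited in claims 15-27 reduces to a small numeric vector: the inter-eye distance plus the polygons spanned by the nose tip, the eye centers and the mouth corners. The sketch below encodes each polygon by its shoelace area; that encoding is an assumption of this illustration, as the claims do not prescribe one.

```python
# A sketch of the feature vector of claims 15-27: inter-eye distance plus
# the areas of (nose tip, two eyes), (nose tip, two mouth corners) and
# (two eye centers, two mouth corners).
import numpy as np

def polygon_area(points: np.ndarray) -> float:
    """Shoelace area of an (N, 2) polygon with vertices given in order."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def polygon_features(l_eye, r_eye, nose_tip, l_mouth, r_mouth) -> np.ndarray:
    l_eye, r_eye, nose_tip, l_mouth, r_mouth = (
        np.asarray(p, dtype=float) for p in (l_eye, r_eye, nose_tip, l_mouth, r_mouth))
    eye_distance = np.linalg.norm(r_eye - l_eye)
    nose_eyes = polygon_area(np.array([nose_tip, l_eye, r_eye]))
    nose_mouth = polygon_area(np.array([nose_tip, l_mouth, r_mouth]))
    eyes_mouth = polygon_area(np.array([l_eye, r_eye, r_mouth, l_mouth]))
    return np.array([eye_distance, nose_eyes, nose_mouth, eyes_mouth])

# Example with hand-picked landmark positions (pixels).
print(polygon_features((100, 120), (160, 120), (130, 160), (110, 195), (150, 195)))
```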
28. A method for acquiring a 3D facial feature from a 2D face image using a system connected to a communication network by way of a web server, comprising: a first step of abstracting distinctive facial features from photograph data provided by a client when the photograph data is transmitted from the client by way of the web server; a second step of comparing the distinctive facial features abstracted from the photograph data with a plurality of distinctive feature information abstracted from a plurality of 3D facial feature data preliminarily stored in a database; and a third step of reconstructing a 3D facial feature using the photograph data provided by the client and the 3D facial feature data corresponding to the distinctive feature information which is substantially matched with the distinctive facial features abstracted from the photograph data.
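Claim 28's three steps amount to a retrieval pipeline: extract features from the photograph, find the closest stored feature vector, and texture-map the corresponding 3D data. The Euclidean nearest-neighbour search below is one plausible reading of "substantially matched", assumed here purely for illustration.

```python
# A high-level sketch of claim 28's pipeline. Nearest-neighbour search
# stands in for the claim's "substantially matched" comparison; that
# choice, and the placeholder texture_map, are assumptions of this sketch.
import numpy as np

def reconstruct_from_photo(photo_features: np.ndarray,
                           db_features: np.ndarray,
                           db_models: list,
                           photograph):
    # Second step: compare the photograph's feature vector with every
    # feature vector pre-abstracted from the 3D database.
    distances = np.linalg.norm(db_features - photo_features, axis=1)
    best = int(np.argmin(distances))
    # Third step: reconstruct using the best-matching 3D data.
    return texture_map(db_models[best], photograph)

def texture_map(model, photograph):
    """Placeholder for the texture-mapping reconstruction of claims 31/32."""
    return {"mesh": model, "texture": photograph}

# Tiny usage example with a two-entry "database".
db_feats = np.array([[1.0, 2.0], [3.0, 4.0]])
print(reconstruct_from_photo(np.array([2.9, 4.2]), db_feats,
                             ["face_A", "face_B"], photograph="photo.jpg"))
```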
29. The method according to claim 28, wherein there is a plurality of the distinctive feature information matched with the distinctive facial features abstracted from the photograph data, and a plurality of 3D facial features are reconstructed using a plurality of 3D facial feature data corresponding to the plurality of distinctive feature information and the photograph data provided by the client.
30. The method according to claim 28 or 29, further comprising a fourth step of transmitting the 3D facial feature reconstructed in the third step, by way of the web server connected to the communication network, to a terminal of an editing professional for correcting the reconstructed 3D facial feature when the client demands correction of the reconstructed 3D facial feature.
31. The method according to claim 28 or 29, wherein the third step is performed by texture mapping the 3D facial feature data with the photograph data provided by the client.
32. The method according to claim 30, wherein the third step is performed by texture mapping the 3D facial feature data with the photograph data provided by the client.
33. The method according to claim 28 or 29, wherein the photograph data includes a frontal face view photograph and the first step comprises the steps of: abstracting partial distinctive facial features, including a sub-step 1-1 of bounding a face portion using color information from the frontal face view photograph, a sub-step 1-2 of projecting positions of eyes, a nose and a mouth, and a sub-step 1-3 of detecting edges of the eyes, nose and mouth; and abstracting whole distinctive facial features, including a sub-step 2-1 of bounding the face portion using color information from the frontal face view photograph, a sub-step 2-2 of abstracting a face outline, and a sub-step 2-3 of abstracting outlines of a tip of the nose, a philtrum, a mouth and eyebrows.
34. The method according to claim 30, wherein the photograph data includes a frontal face view photograph and the first step comprises the steps of: abstracting partial distinctive facial features, including a sub-step 1-1 of bounding a face portion using color information from the frontal face view photograph, a sub-step 1-2 of projecting positions of eyes, a nose and a mouth, and a sub-step 1-3 of detecting edges of the eyes, nose and mouth; and abstracting whole distinctive facial features, including a sub-step 2-1 of bounding the face portion using color information from the frontal face view photograph, a sub-step 2-2 of abstracting a face outline, and a sub-step 2-3 of abstracting outlines of a tip of the nose, a philtrum, a mouth and eyebrows.
35. The method according to claim 31, wherein the photograph data includes a frontal face view photograph and the first step comprises the steps of: abstracting partial distinctive facial features, including a sub-step 1-1 of bounding a face portion using color information from the frontal face view photograph, a sub-step 1-2 of projecting positions of eyes, a nose and a mouth, and a sub-step 1-3 of detecting edges of the eyes, nose and mouth; and abstracting whole distinctive facial features, including a sub-step 2-1 of bounding the face portion using color information from the frontal face view photograph, a sub-step 2-2 of abstracting a face outline, and a sub-step 2-3 of abstracting outlines of a tip of the nose, a philtrum, a mouth and eyebrows.
36. The method according to claim 32, wherein the photograph data includes a frontal face view photograph and the first step comprises the steps of: abstracting partial distinctive facial features, including a sub-step 1-1 of bounding a face portion using color information from the frontal face view photograph, a sub-step 1-2 of projecting positions of eyes, a nose and a mouth, and a sub-step 1-3 of detecting edges of the eyes, nose and mouth; and abstracting whole distinctive facial features, including a sub-step 2-1 of bounding the face portion using color information from the frontal face view photograph, a sub-step 2-2 of abstracting a face outline, and a sub-step 2-3 of abstracting outlines of a tip of the nose, a philtrum, a mouth and eyebrows.
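Sub-steps 1-1 and 1-2 of claims 33-36 correspond to classic image operations: bound the face with a skin-colour mask, then read eye/nose/mouth rows off the horizontal intensity projection of that region. The RGB thresholds in the sketch below are assumptions; the claims name only the operations.

```python
# A sketch of sub-steps 1-1 and 1-2 of claims 33-36: a crude RGB skin
# rule bounds the face, and the darkest rows of the horizontal intensity
# projection serve as eye/nose/mouth candidates. All thresholds are
# illustrative assumptions.
import numpy as np

def face_bounding_box(rgb: np.ndarray):
    """Return (top, bottom, left, right) of the skin-coloured region."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    rows = np.flatnonzero(skin.any(axis=1))
    cols = np.flatnonzero(skin.any(axis=0))
    return rows[0], rows[-1], cols[0], cols[-1]

def feature_rows(gray_face: np.ndarray, k: int = 3) -> np.ndarray:
    """The k darkest rows: candidate vertical positions of eyes, nose, mouth."""
    profile = gray_face.mean(axis=1)       # horizontal projection of intensity
    return np.sort(np.argsort(profile)[:k])
```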
37. The method according to claim 28 or 29, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
38. The method according to claim 30, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
39. The method according to claim 31, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
40. The method according to claim 32, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
41. The method according to claim 33, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
42. The method according to claim 34, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
43. The method according to claim 35, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
44. The method according to claim 36, wherein the distinctive feature information in the second step is abstracted by performing the steps of: detecting a tip of a nose from a 3D facial feature data; detecting positions of a mouth and eyes from the 3D facial feature data and formalizing the mouth and eyes; and detecting the tip of the nose, the centers of the eyes, and both side ends of the mouth from the 3D facial feature data.
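For the 3D-side extraction of claims 37-44, a common heuristic is to take the nose tip as the vertex of greatest depth in a front-facing scan and then search bands above and below it for the eyes and mouth. The band offsets in the sketch below are assumptions for a roughly normalized scan, not values taken from the patent.

```python
# A sketch of the 3D feature extraction in claims 37-44: the nose tip as
# the deepest vertex of a front-facing scan, with eye and mouth candidate
# zones placed relative to it. Offsets are in assumed normalized units.
import numpy as np

def nose_tip(vertices: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) array, face looking along +z; returns the tip vertex."""
    return vertices[np.argmax(vertices[:, 2])]

def landmark_zones(vertices: np.ndarray):
    tip = nose_tip(vertices)
    dy = vertices[:, 1] - tip[1]
    eye_zone = vertices[(dy > 0.2) & (dy < 0.6)]       # region above the nose tip
    mouth_zone = vertices[(dy < -0.2) & (dy > -0.6)]   # region below the nose tip
    return tip, eye_zone, mouth_zone
```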
PCT/KR2002/000896 2001-05-14 2002-05-14 Method for acquiring 3D face digital data from a 2D photograph and a system performing the same WO2002093493A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR2001/26057 2001-05-14
KR10-2001-0026057A KR100474352B1 (en) 2001-05-14 2001-05-14 System and method for acquiring a 3D face digital data from 2D photograph

Publications (1)

Publication Number Publication Date
WO2002093493A1 (en) 2002-11-21

Family

Family ID: 19709421

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2002/000896 WO2002093493A1 (en) 2001-05-14 2002-05-14 Method for acquiring 3D face digital data from a 2D photograph and a system performing the same

Country Status (2)

Country Link
KR (1) KR100474352B1 (en)
WO (1) WO2002093493A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7454039B2 (en) 2004-07-12 2008-11-18 The Board Of Trustees Of The University Of Illinois Method of performing shape localization
US7486825B2 (en) * 2004-05-27 2009-02-03 Kabushiki Kaisha Toshiba Image processing apparatus and method thereof
US20160042557A1 (en) * 2014-08-08 2016-02-11 Asustek Computer Inc. Method of applying virtual makeup, virtual makeup electronic system, and electronic device having virtual makeup electronic system

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1313979C (en) 2002-05-03 2007-05-02 三星电子株式会社 Apparatus and method for generating 3-D cartoon
KR20040009460A (en) * 2002-07-23 2004-01-31 주식회사 페이스쓰리디 System and method for constructing three dimensional montaged geometric face
KR101653592B1 (en) 2015-02-03 2016-09-02 한국기술교육대학교 산학협력단 Method for providing service of three dimensional face modeling
KR101725932B1 (en) 2015-07-17 2017-04-12 광운대학교 산학협력단 An imaging photographing device and an imaging photographing method using an video editing
KR101692493B1 (en) * 2015-10-07 2017-01-03 (주)에스앤 Device and method of transform 2D image to 3D image
KR101741150B1 (en) 2016-12-28 2017-05-29 광운대학교 산학협력단 An imaging photographing device and an imaging photographing method using an video editing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR950030647A (en) * 1994-04-15 1995-11-24 가나이 쯔또무 Video communication device
KR19980084422A (en) * 1997-05-23 1998-12-05 배순훈 3D Character Generation Method Using Face Model and Template Model
KR20000063344A (en) * 2000-06-26 2000-11-06 김성호 Facial Caricaturing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100317138B1 (en) * 1999-01-19 2001-12-22 윤덕용 Three-dimensional face synthesis method using facial texture image from several views

Also Published As

Publication number Publication date
KR20010069820A (en) 2001-07-25
KR100474352B1 (en) 2005-03-08

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC EPO FORM 1205A DATED 04-03-04

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP