US20040207743A1 - Digital camera system - Google Patents

Digital camera system

Info

Publication number
US20040207743A1
US20040207743A1 (application US10/814,142)
Authority
US
United States
Prior art keywords
feature point
digital camera
given
camera system
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/814,142
Inventor
Hirotake Nozaki
Hideo Hibino
Toshiaki Kobayashi
Norikazu Yokonuma
Satoshi Ejima
Tadashi Ohta
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nikon Corp
Original Assignee
Nikon Corp
Nikon Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2003109884A (JP2004320285A)
Priority claimed from JP2003109883A (JP2004317699A)
Priority claimed from JP2003109886A (JP4196714B2)
Priority claimed from JP2003109885A (JP2004320286A)
Priority claimed from JP2003109882A (JP2004320284A)
Application filed by Nikon Corp, Nikon Technologies Inc filed Critical Nikon Corp
Assigned to NIKON CORPORATION, NIKON TECHNOLOGIES, INC. reassignment NIKON CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NOZAKI, HIROTAKE, EJIMA, SATOSHI, HIBINO, HIDEO, KOBAYASHI, TOSHIAKI, YOKONUMA, NORIKAZU, OHTA, TADASHI
Publication of US20040207743A1
Assigned to NIKON CORPORATION reassignment NIKON CORPORATION CO. TO CO. ASSIGNMENT Assignors: NIKON TECHNOLOGIES INC.
Priority to US12/289,689 (US20090066815A1)
Priority to US13/067,502 (US20110242363A1)
Priority to US13/964,648 (US9147106B2)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635Region indicators; Field of view indicators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Definitions

  • the present invention relates to a digital camera system capable of detecting a feature point of a person and operating on the basis of the detected result.
  • Methods for detecting a person from image data have long been known, starting with systems that verify a person's identity by comparing fingerprints or the iris with data stored in advance.
  • U.S. Pat. No. 5,982,912 discloses a method that identifies a person by comparing feature points detected from an input image with feature points, such as an eye, a nose, and a mouth, stored in advance.
  • Japanese Laid-Open Patent Application No. 10-232934 discloses a method that increases the accuracy of the image dictionary when storing feature points detected in this manner. The following examples are applications of such methods to a camera.
  • U.S. Pat. No. 5,347,371 discloses a video camera that separately controls parameters for processing a specific subject portion and those for the remaining portion by detecting the specific subject signal. Accordingly, for example, the white balance of the subject can be corrected and the background defocused when shooting a portrait.
  • U.S. Pat. No. 5,812,193 discloses a video camera that calculates the area of the detected face image and carries out a zooming process by comparing it with a reference face area.
  • Japanese Laid-Open Patent Application No. 9-233384 discloses an image input device that divides shot image data into a given number of sections and automatically enlarges and outputs the divided image that includes a specific image among the divided images.
  • EP1128316A1 (28.02.2000 U.S. Pat. No. 514,436) discloses a camera that stores data such as the coordinates and dimensions of a face detected by a face-detection algorithm, the position of the eyes, and the pose of the head together with the image data. Moreover, it discloses that the camera carries out an automatic red-eye correction algorithm and applies a face-priority color-balance algorithm to a detected face.
  • Japanese Laid-Open Patent Application No. 2001-218020 discloses an image processing method that estimates the sex of a person by detecting the lips and locally carries out processes such as skin-color, gradation, and smoothing adjustments.
  • Japanese Laid-Open Patent Application No. 2001-330882 discloses a camera that changes the detection algorithm for detecting subject information in accordance with the shooting mode.
  • Focusing and the aperture value are controlled in accordance with the number and size of the faces detected by the face-detection algorithm.
  • U.S. Laid-Open Patent Application No. 2002/101619A1 discloses an image storing device that stores a shot image in connection with discrimination information of the subject stored in advance.
  • Japanese Laid-Open Patent Application No. 2002-051255 discloses a main-subject-detection camera that detects the main subject and measures the distance to the main subject when a plurality of people are detected by a person-detection means.
  • The person located at the nearest position, having the largest area, or located at the center of the image frame is discriminated as the main subject.
  • Japanese Laid-Open Patent Application No. 2002-333652 discloses an image shooting device that generates a storing signal by comparing shot face information with face information stored in advance. When a plurality of faces are present in the image frame, the face corresponding to the higher-priority face code is focused.
  • U.S. Laid-Open Patent Application No. 2003/0071908A1 discloses an imaging device that detects a face and sets a distance measuring area or a photometry area to at least a portion of the face. Moreover, it discloses a device that shoots an image while emitting a speedlight, detecting a face and emitting the speedlight so as to prevent red-eye.
  • the present invention is made in view of the aforementioned problems and has an object to provide a digital camera system capable of operating by detecting a feature point, which has not been accomplished, in addition to ordinary functions of a conventional camera.
  • a digital camera system includes a detecting means that detects a given feature point from an image data, a receiving means that receives an order from a user, a selecting means that selects each feature point in accordance with a given order instructed by the receiving means when a plurality of feature points are detected, and a display that displays feature point information identifying the feature point selected by the selecting means. Accordingly, a user can easily select a desired person.
  • The display displays information regarding the feature point overlaid with the image data.
  • A face detection means that detects the size of a face from the feature point detected by the detecting means is included.
  • the selecting means selects the face in descending order of the face size detected by the face detection means.
  • a distance detection means that detects a distance to the feature point detected by the detecting means is included.
  • the selecting means selects the feature point in ascending order of the distance detected by the distance detection means, so the user can easily select a desired subject.
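The largest-first and nearest-first selection orders described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the data structure and field names are assumptions.

```python
def order_candidates(faces, by="size"):
    """Return detected faces in the order they are offered to the user.

    faces: list of dicts with 'size' (face area in pixels) and
    'distance' (estimated subject distance in metres) -- both fields
    are hypothetical names for illustration.
    """
    if by == "size":
        # Descending face size: the largest face is offered first.
        return sorted(faces, key=lambda f: f["size"], reverse=True)
    # Ascending distance: the nearest subject is offered first.
    return sorted(faces, key=lambda f: f["distance"])

faces = [
    {"id": "A", "size": 1200, "distance": 1.0},
    {"id": "B", "size": 4800, "distance": 1.5},
    {"id": "C", "size": 2500, "distance": 2.2},
]
by_size = order_candidates(faces, by="size")          # B, C, A
by_distance = order_candidates(faces, by="distance")  # A, B, C
```

Each press of the camera's selection button would then step to the next entry of the chosen ordering, with the display marking the currently selected face.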
  • a focus-area-setting means that sets a given area including the feature point detected by the detecting means as a focus area for detecting focus is included.
  • a photometry-area-setting means that sets a given area including the feature point detected by the detecting means as a photometry area is included.
  • claim 7 provides a digital camera system including a detecting means that detects a given feature point from an image data, a display that displays the feature point detected by the detecting means, a receiving means that receives information regarding the feature point displayed by the display, and a memory that stores the feature point and information regarding the feature point. Accordingly, information regarding the feature point together with the feature point are stored in the memory such as a nonvolatile memory in the digital camera.
  • the information regarding the feature point is specific name information.
  • the information regarding the feature point is priority information determined when a plurality of feature points are detected at a time.
  • a discriminating means that discriminates the priority information, and a selecting means that selects feature points in order of the priority discriminated by the discriminating means, are included.
  • a distance-measuring-area-setting means that sets a distance measuring area for measuring a distance to a subject displayed on the display is included.
  • the priority information is a priority among the plurality of feature points upon setting the distance measuring area by the distance-measuring-area-setting means.
  • a photometry-area-setting means that sets a photometry area for measuring lightness of the subject displayed on the display is included.
  • the priority information is a priority among the plurality of feature points upon setting the photometry area by the photometry-area-setting means.
  • the information regarding the feature point is at least one of color process information and outline correction process information upon storing the image data including the feature point.
  • the information regarding the feature point is at least one of color process information and outline correction process information upon reproducing the image data including the feature point.
  • a discriminating means that discriminates and displays whether or not at least one of the feature point and information regarding the feature point displayed on the display is stored in the memory is included.
  • claim 16 provides a digital camera system including a detecting means that detects a given feature point from an image data, a display that displays the feature point detected by the detecting means, an input means that inputs information regarding the feature point displayed by the display, an instruction means that instructs to store the feature point and information regarding the feature point in connection with the image data, and a memory that stores the feature point, information regarding the feature point, and the image data instructed by the instruction means. Accordingly, information regarding the feature point and the feature point are stored in the memory in connection with the image data, so it is convenient to select later a subject on the basis of the information regarding the feature point.
  • the information regarding the feature point is positional information in the image data upon detecting the feature point from the image data.
  • the invention according to claim 18 includes a memory that stores a first feature point and first specific name information regarding the first feature point, a detecting means that detects a given feature point from an image data, an input means that inputs second specific name information regarding a second feature point detected by the detecting means, and a storing instruction means that instructs to additionally store in the memory the second feature point when the first specific name information and the second specific name information are identical and the first feature point and the second feature point are different.
  • When the specific name information regarding the detected subject is the same as specific name information, such as a person's name, stored in a memory such as a built-in memory, and a new feature point regarding the person is detected, the detected feature point is additionally stored in the built-in memory, so that the accuracy of discriminating the person can be increased.
  • the invention according to claim 19 includes a first memory that stores a first feature point and specific name information regarding the first feature point, a second memory that stores a second feature point and the specific name information in connection with an image data, and a storing instruction means that instructs to additionally store in the first memory the second feature point when the first feature point and the second feature point are different. Accordingly, feature points regarding the same specific name information are additionally stored in the built-in memory from a memory card in which the image data, the feature point, and the specific name information such as a person's name are stored, so that the accuracy of discriminating the person can be increased.
  • the invention according to claim 20 includes a first memory that stores a first feature point and specific name information regarding the first feature point, a second memory that stores a second feature point and the specific name information in connection with an image data, and a storing instruction means that instructs to additionally store in the second memory the first feature point when the first feature point and the second feature point are different. Accordingly, a feature point not detected from the image data stored in the memory card can additionally be stored in the memory card, so the number of feature points regarding the person in the memory card can gradually be increased.
  • the invention according to claim 21 includes a display that displays an image data, a detecting means that detects a given feature point from the image data, a memory that stores a plurality of feature points in advance, a checking means that checks whether or not the feature point detected by the detecting means is the same as any one of the feature points stored in the memory, and a discriminating-display means that discriminates and displays on the display the checked result checked by the checking means. Accordingly, it becomes possible to discriminate immediately whether or not the detected feature point has already been stored.
  • the memory stores at least one of specific name information regarding the feature point and priority information for setting a priority of selection when a plurality of feature points are detected at a time.
  • the discriminating-display means displays on the display information stored in the memory regarding the feature point checked as the same by the checking means.
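The store-and-check flow above, in which stored feature points carry a name and a priority and each newly detected point is checked against the store, might look roughly like the sketch below. The squared-distance matcher, the threshold, and all names are simplifying assumptions; a real feature-point comparison would be far more elaborate.

```python
def match_feature(detected, registry, threshold=10.0):
    """Return (name, priority) of the closest stored feature point
    within `threshold` squared distance, or (None, None) if no stored
    entry matches the detected feature vector."""
    best = None
    best_dist = threshold
    for entry in registry:
        # Toy similarity measure: squared Euclidean distance between
        # the detected vector and the stored vector.
        d = sum((a - b) ** 2 for a, b in zip(detected, entry["vector"]))
        if d < best_dist:
            best, best_dist = entry, d
    if best is None:
        return (None, None)
    return (best["name"], best["priority"])

# Hypothetical built-in-memory registry of stored feature points.
registry = [
    {"name": "Alice", "priority": 1, "vector": (0.1, 0.9, 0.3)},
    {"name": "Bob",   "priority": 2, "vector": (0.8, 0.2, 0.5)},
]
hit = match_feature((0.12, 0.88, 0.31), registry)  # close to Alice
miss = match_feature((9.0, 9.0, 9.0), registry)    # matches nothing
```

On a hit, the discriminating-display means would overlay the stored name (and priority) next to the detected face; on a miss, it would indicate an unregistered subject.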
  • the invention according to claim 23 includes a detecting means that detects a given feature point from an image data, and a control means that controls the detected feature point in connection with the image data. Accordingly, the image data and the feature point detected from it can be stored in connection with each other.
  • the invention according to claim 24 includes a memory that stores a given feature point in an image data in connection with information regarding the given feature point, a detecting means that detects a feature point from an image data, an assigning means that assigns at least one of the given feature point and information regarding the given feature point stored in the memory, an agreement checking means that checks whether or not the feature point detected by the detecting means is the same as the given feature point, a size checking means that checks the size of the feature point checked by the agreement checking means as the same, and a zooming means that zooms in/out a given area including the feature point corresponding to the size of the feature point checked by the size checking means. Accordingly, when a feature point that is the same as a given feature point such as a person is detected, its size is checked and the area is zoomed in/out to reach a given size.
  • the agreement checking means includes an overlaid display means that displays a subject corresponding to the feature point checked as the same by the checking means overlaid with a marker.
  • the information regarding the feature point is specific name information for specifying the feature point.
  • the zooming means zooms in/out such that the size of the feature point checked by the size checking means falls within a given range of sizes.
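The zoom-to-size behaviour just described can be reduced to a simple ratio: if the matched face is outside a target size band, the zoom factor that brings it back into the band is the band edge divided by the current size. The function name and the default band are illustrative assumptions.

```python
def zoom_factor(face_width_px, target_min=200, target_max=300):
    """Return the zoom ratio that brings the face width into the band
    [target_min, target_max] pixels; 1.0 means no zooming is needed."""
    if face_width_px < target_min:
        return target_min / face_width_px   # face too small: zoom in
    if face_width_px > target_max:
        return target_max / face_width_px   # face too large: zoom out
    return 1.0
```

For example, a 100-pixel face with a 200-pixel lower band edge calls for a 2x zoom-in, while a 600-pixel face with a 300-pixel upper edge calls for a 0.5x zoom-out.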
  • a position-detecting means that detects the position of the agreed feature point in the shooting image frame is included.
  • the zooming means includes a vibration correction lens that corrects vibration upon shooting and a vibration correction lens driver that drives the vibration correction lens such that the agreed feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means. Accordingly, a desired subject always comes to a given position (such as the center) of the image frame and is zoomed in/out.
  • a position-detecting means that detects the position of the agreed feature point in the shooting image frame is included.
  • the zooming means includes an electronic zooming means that zooms in/out electronically such that the agreed feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means. Accordingly, the detected feature point is zoomed in/out to a given position such as the center of the image frame.
  • the invention according to claim 30 includes a detecting means that detects a given feature point from an image data, a position-detecting means that detects the position of the feature point in a shooting image frame, a vibration correction lens that corrects vibration upon shooting, and a driver that drives the vibration correction lens such that the feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means. Accordingly, the detected feature point can always be positioned optically at a desired position in the shooting image frame without using a camera platform. In claim 31, the given position is located in the vicinity of the center of the shooting image frame.
  • a memory that stores the given feature point in the image data together with information regarding the given feature point, an assigning means that assigns at least one of the given feature point and information regarding the given feature point stored in the memory, and an agreement checking means that checks whether or not the feature point detected by the detecting means is the same as the given feature point are further included.
  • the driver drives the vibration correction lens such that the feature point checked by the agreement checking means as the same comes to the given position. Accordingly, a given feature point can always be shot at a desired position such as the center of the image frame.
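Steering the vibration-correction lens so the tracked feature point lands at a given frame position amounts to turning the pixel error into a lens shift. The sketch below assumes a simple proportional relation; the gain constant (pixels to millimetres of lens travel) is a made-up calibration value, not from the patent.

```python
def vr_lens_shift(feature_xy, frame_wh, gain=0.01):
    """Return an (dx, dy) vibration-correction lens shift, in
    millimetres, that moves the feature point toward the frame centre.

    gain: assumed calibration constant mapping a pixel error to a
    lens displacement; a real camera would derive this from the
    optical design.
    """
    cx, cy = frame_wh[0] / 2, frame_wh[1] / 2
    err_x = cx - feature_xy[0]   # positive: feature is left of centre
    err_y = cy - feature_xy[1]   # positive: feature is above centre
    return (gain * err_x, gain * err_y)

# A face detected at (500, 400) in a 640x480 frame needs a shift
# toward the opposite corner to re-centre it.
dx, dy = vr_lens_shift((500, 400), (640, 480))
```

In practice this would run every frame, so the correction converges as the feature point drifts, much like the shake-compensation loop the same lens already performs.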
  • the invention according to claim 33 includes a shooting instruction means that instructs to shoot a still image of a subject, a detecting means that detects a given feature point from the still image data shot in response to the instruction of the shooting instruction means, a discriminating means that discriminates a state of the given feature point detected by the detecting means, and a warning means that warns in accordance with the discriminated result of the discriminating means. Accordingly, when the shot condition has not been satisfactory after shooting, the warning means gives a warning to the user right away, so that the user can take a measure such as reshooting.
  • the given feature point is a pupil portion of a person and when the discriminating means discriminates that a pupil has not been detected, the warning means gives a warning. Accordingly, when a person is shot with his/her eyes shut, a warning is given.
  • the given feature point is an eye or a face outline of a person and when the discriminating means discriminates that the eye or the face outline has a camera shake, the warning means gives a warning. Accordingly, when a person is shot with his/her eyes blinking or with his/her face moving, a warning is given.
  • the detecting means detects a face of a person before shooting a still image, the given feature point is a face of a person, and when the number of the faces detected by the detecting means before shooting the still image does not coincide with that detected from the shot still image, the warning means gives a warning. Accordingly, when a desired person is hidden behind another person at the moment of shooting, a warning is given.
  • the invention according to claim 37 includes a shooting instruction means that instructs to shoot an image of a subject, a detecting means that detects a given feature point from the image data shot in response to the instruction of the shooting instruction means, a discriminating means that discriminates a state of the given feature point detected by the detecting means, and a reshooting instruction means that instructs the shooting instruction means to reshoot the subject in accordance with the discriminated result of the discriminating means. Accordingly, when a shot condition has not been satisfactory after shooting, the subject is automatically reshot.
  • the given feature point is a pupil portion of a person and when the discriminating means discriminates that a pupil has not been detected, the reshooting instruction means instructs to reshoot the subject. Accordingly, when a person is shot with his/her eyes shut, the person is automatically reshot.
  • the given feature point is an eye or a face outline of a person and when the discriminating means discriminates that the eye or the face outline has a camera shake, the reshooting instruction means instructs to reshoot the subject. Accordingly, when a person is shot with his/her eyes blinking or with his/her face moving, the person is automatically reshot.
  • the detecting means detects a face of a person before shooting an image, the given feature point is a face of a person, and when the number of the faces detected by the detecting means before shooting the image does not coincide with that detected from the shot image, the reshooting instruction means instructs to reshoot the subject. Accordingly, when a desired person is hidden behind another person at the moment of shooting, the person is automatically reshot.
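The warning and automatic-reshoot claims above share the same post-shoot checks: compare the face count before and after the exposure, and verify a pupil was found in each detected face. A minimal sketch, with hypothetical field names, might be:

```python
def check_shot(pre_face_count, shot_faces):
    """Return warning strings for the shot just taken; an empty list
    means the shot condition was satisfactory.

    shot_faces: list of dicts with a 'pupil_detected' flag per face
    (an assumed structure for illustration).
    """
    warnings = []
    if len(shot_faces) != pre_face_count:
        # A face seen before the exposure is missing (or an extra one
        # appeared) -- a subject may be hidden behind another person.
        warnings.append("face count changed")
    for face in shot_faces:
        if not face.get("pupil_detected", False):
            # No pupil found in a detected face: eyes likely shut.
            warnings.append("eyes may be closed")
            break
    return warnings

w = check_shot(3, [{"pupil_detected": True}, {"pupil_detected": False}])
```

Depending on the camera's configuration, a non-empty result would either be shown to the user as a warning (claim 33) or fed to the reshooting instruction means to trigger an automatic retake (claim 37).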
  • the invention according to claim 41 includes a detecting means that detects a given feature point from an image data, a memory that stores a plurality of color reproduction parameters for carrying out color reproduction of the whole image data, a discriminating means that discriminates a face of a person from the feature point detected by the detecting means, a size comparator that compares the size of the face discriminated by the discriminating means with a given value, and a selecting means that selects a color reproduction parameter giving priority to skin color among the plurality of color reproduction parameters when the size comparator discriminates that the size of the face is the given value or more. Accordingly, when the detected face size is a given value or more, a color reproduction parameter giving priority to skin color is selected.
  • the invention according to claim 42 includes a detecting means that detects a given feature point from an image data, a memory that stores a plurality of color reproduction parameters for carrying out color reproduction of the whole image data, a discriminating means that discriminates a face of a person from the feature point detected by the detecting means, a number comparator that compares the number of the faces discriminated by the discriminating means with a given value, and a selecting means that selects a color reproduction parameter giving priority to skin color among the plurality of color reproduction parameters when the number comparator discriminates that the number of the faces is the given value or more. Accordingly, when the number of detected faces is a given value or more, a color reproduction parameter giving priority to skin color is selected.
  • the invention according to claim 43 includes an imaging device that images a subject, an aperture stop that controls light quantity incident on the imaging device, a detecting means that detects a given feature point from an image data output from the imaging device, a discriminating means that discriminates the size and the number of the faces from the feature point detected by the detecting means, and a control means that controls the aperture value of the aperture stop to become small when the discriminating means discriminates that the face size detected by the detecting means is a first given value or more and a second given value or less. Accordingly, when the size of a detected face is large to a certain extent and the number of detected faces is three to four or less, the image is discriminated as a portrait photograph and shot with a small aperture value to obtain an image with a shallow depth of focus.
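The portrait heuristic above can be sketched as a threshold test: when the largest detected face falls within a size band and only a few faces are present, pick a small f-number (wide aperture) for a shallow depth of field, otherwise keep a default. All thresholds and f-numbers below are illustrative assumptions, not values from the patent.

```python
def choose_aperture(face_sizes, min_size=2000, max_size=50000,
                    max_faces=4, portrait_f=2.8, default_f=8.0):
    """Return an f-number for the scene given detected face areas
    (in pixels; the units and thresholds are assumed for illustration)."""
    if not face_sizes:
        return default_f                      # no faces: not a portrait
    largest = max(face_sizes)
    if min_size <= largest <= max_size and len(face_sizes) <= max_faces:
        return portrait_f                     # portrait: shallow depth of field
    return default_f
```

A group photo with many small faces thus keeps the default aperture, while one or two prominent faces switch the camera into the portrait setting.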
  • the invention according to claim 44 includes a detecting means that detects a given feature point for discriminating a subject from an image data, a setting means that sets a given setting condition corresponding to at least one item of photometry, measuring distance and white balance each including a plurality of setting conditions upon shooting, and an instructing means that instructs the setting means to set different setting condition in accordance with the detected result of the detecting means. Accordingly, the best setting condition in accordance with the detected subject can be set.
  • a discriminating means that discriminates the subject is further included.
  • the setting condition is any one of a condition suitable for a landscape, a distant subject, and a night view, and when the discriminating means discriminates a person as the subject, the instructing means instructs the setting means to set a setting condition suitable for shooting a person. Accordingly, in a case where the aperture value is large to obtain a large depth of focus, as is suitable for shooting a landscape, when a person is detected in the shooting image frame, the shooting mode is immediately shifted to a mode suitable for shooting a person, setting the aperture value small to obtain a shallow depth of focus.
  • the instructing means instructs the setting means to set any one of a condition suitable for a landscape, a distant object and a night view. This is the opposite case of the above-described claim 45.
  • the shooting mode is shifted to that suitable for shooting a landscape.
  • a warning means that gives a warning when the setting condition is suitable for shooting a person and when the detecting means does not detect a person as the subject is further included.
  • the invention according to claim 48 includes an AF means that controls focusing on the basis of a signal output from a given AF area in an image data, a detecting means that detects a given feature point from the image data, a face discriminating means that discriminates a face of a person from the feature point detected by the detecting means, a position discriminating means that discriminates a position of the face discriminated by the face discriminating means, and a setting means that sets a given second area as an AF area when the position discriminating means discriminates that the face position is outside of a given first area. Accordingly, when a subject is located on the periphery of the shooting image frame, the AF area is set to a predetermined central area.
  • the invention according to claim 49 includes a shooting lens that is composed of a zoom lens and a focusing lens for shooting a subject, a position sensor that detects a position of the zoom lens, a detecting means that detects a given feature point and information regarding the feature point from an image data shot by the shooting lens, and a calculator that calculates a distance to the subject on the basis of information regarding the feature point detected by the detecting means and the position of the zoom lens detected by the position sensor. Accordingly, the distance to the subject is calculated on the basis of the information regarding the detected feature point and the zoom position.
  • the information regarding the feature point is at least one of the face size and the pupil distance.
  • a restriction means that restricts a moving range of the focusing lens to a given range on the basis of the distance to the subject calculated by the calculator is further included. Accordingly, by restricting the focus range of the focusing lens, the AF movement can be carried out faster, and even if a high-contrast backdrop exists, the AF movement will not be affected by it.
  • an aperture stop that controls light quantity incident on the shooting lens, and an aperture determining means that determines an aperture value of the aperture stop such that when a plurality of faces are detected by the detecting means, a given face among the plurality of faces comes in focus on the basis of the distances to the plurality of faces calculated by the calculator are further included. Accordingly, by varying the aperture value in accordance with the calculated distance to each face, a desired face can be located within the depth of focus of the shooting lens.
  • the invention according to claim 53 includes an illumination means that illuminates a subject upon shooting the subject, a detecting means that detects a given feature point from an image data, a distance calculator that calculates a distance to the feature point on the basis of the feature point detected by the detecting means, and an illumination quantity setting means that sets an illumination light quantity of the illumination means on the basis of the distance calculated by the distance calculator. Accordingly, the light quantity of the speedlight can be set in accordance with the distance to the detected feature point.
  • a plurality of photometry areas that measure luminance of the subject, and an exposure setting means that sets an exposure condition upon shooting on the basis of an output of a given photometry area among the plurality of photometry areas are further included. Accordingly, a proper exposure can be provided to both the detected feature point and the backdrop even in a backlight condition.
  • a size detector that detects a face size or a pupil distance from the feature point detected by the detecting means, and a lens position sensor that detects the focal length of the zoom lens are further included.
  • the distance calculator calculates a distance to the feature point on the basis of the face size or the pupil distance detected by the size detector and the focal length of the zoom lens detected by the lens position sensor.
  • a discriminating means that discriminates whether or not the distance is within the controllable exposure range of the illumination means on the basis of the distance to the subject calculated by the distance calculator, and a warning means that gives a warning when the discriminating means discriminates that the distance is out of the controllable exposure range are further included.
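The illumination-quantity setting and the out-of-range warning can be pictured with the standard guide-number relation GN = distance × f-number; the full-power guide number and the power model below are assumptions for illustration:

```python
def flash_power(distance_m, f_number, full_gn=32.0):
    """Fraction of full speedlight power for the calculated subject
    distance, or None when the subject is beyond the controllable
    exposure range (the case where a warning is given).
    `full_gn` is an assumed full-power guide number (ISO 100, metres)."""
    required_gn = distance_m * f_number
    if required_gn > full_gn:
        return None                         # out of range -> warn the user
    # Guide number scales with the square root of the emitted energy.
    return (required_gn / full_gn) ** 2
```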
  • the invention according to claim 57 includes a main illumination means that illuminates a subject upon shooting the subject, an auxiliary illumination means that illuminates the subject with an auxiliary illumination in advance, a detecting means that detects a given feature point from an image data, and a setting means that sets an illumination light quantity of the main illumination means on the basis of a reflection light from the feature point illuminated with the auxiliary illumination by the auxiliary illumination means. Accordingly, since the illumination light quantity upon shooting is determined in accordance with the reflected light from the feature point, the best exposure can be provided to the feature point.
  • the feature point is a face portion of a person.
  • the invention according to claim 59 includes an imaging device that shoots an image of a subject, a memory that stores an image data, a detecting means that detects a given feature point from the image data, an instructing means that instructs the imaging device to shoot the subject for storing in the memory, and a controller that controls the detecting means not to carry out detecting procedure to an image data output from the imaging device before the instructing means gives the instruction.
  • the detection is not carried out on the image data output from the imaging device simply for monitoring purposes before shooting the image data for storing in a memory such as a memory card. After the shutter release button is pressed, the detection is carried out on the image data output for storing, before the image data is stored. Accordingly, a precious shutter chance is not lost.
  • a processing means that processes at least one of white balance process and outline enhancement process on the basis of the feature point detected by the detecting means in response to the instruction given by the instructing means is further included.
  • a controller that controls the memory to store the image data processed by the processing means is further included.
  • the invention according to claim 62 includes a memory that stores a given feature point together with information regarding the feature point detected from an image data, a display that displays either the feature point or the information regarding the feature point stored in the memory, and a deleting means that deletes from the memory at least a portion of the feature point or the information regarding the feature point displayed on the display. Accordingly, the feature point or information regarding the feature point can be deleted from the memory such as the inside memory or the outside memory card.
  • the invention according to claim 63 includes a memory that stores a given feature point together with information regarding the feature point detected from an image data, a display that displays either the feature point or the information regarding the feature point stored in the memory, and a controller that changes at least a portion of the feature point or the information regarding the feature point displayed on the display and stores the change in the memory. Accordingly, the feature point or information regarding the feature point can be changed in the memory such as the inside memory or the outside memory card.
  • FIG. 1 is a block diagram explaining main features of a digital camera system according to the present invention.
  • FIG. 2 is a flow chart explaining the total sequence of actions of the digital camera according to the present invention.
  • FIG. 3 is a flow chart explaining a sequence of actions of the digital camera according to the present invention in which the mode of the digital camera is set to reproduction mode.
  • FIG. 4 is a flow chart explaining a sequence for storing feature point information.
  • FIG. 5 is a flow chart explaining a sequence for setting shooting angle of view.
  • FIG. 6 is a flow chart explaining a sequence for setting shooting condition.
  • FIG. 7 is a flow chart explaining a sequence for setting other shooting condition.
  • FIG. 8 is a flow chart explaining a sequence for setting other shooting condition.
  • FIG. 9 is a flow chart explaining a sequence for setting an emitting light quantity of a speedlight.
  • FIG. 10 is a flow chart explaining a shooting sequence.
  • FIG. 11 is a flow chart explaining another shooting sequence.
  • FIG. 12 is a flow chart explaining a shooting sequence.
  • FIG. 13 is a drawing explaining a storing state of a feature point and feature information.
  • FIG. 14 is a drawing explaining a storing state of an image data and feature information attached thereto.
  • FIG. 15 is a drawing showing markers overlaid on each detected feature point, with different markers discriminating the detection states.
  • FIG. 16 shows an example of setting an AF area or an AE area.
  • FIG. 17 shows another example of setting an AF area or an AE area.
  • FIG. 18 shows another example of setting an AF area or an AE area.
  • FIG. 19 shows another example of setting an AF area or an AE area.
  • FIG. 20 shows an example of setting an AF area.
  • FIG. 21 is a graph showing change in evaluation value relative to the focusing lens position.
  • FIG. 22 is a drawing explaining the case when the distance to the person is calculated on the basis of the pupil distance of the detected person and the focal length of the zoom lens.
  • FIG. 1 is a block diagram explaining main features of a digital camera system according to the present invention.
  • a shooting lens 101 is composed of a zoom lens for varying the focal length continuously, a focusing lens for adjusting focal point, and a VR (vibration reduction) lens for correcting a camera shake upon shooting. These lenses are driven by a driver 113 .
  • the driver 113 is composed of a zooming lens driving mechanism and its driving circuit, a focusing lens driving mechanism and its driving circuit, and a VR lens driving mechanism and its driving circuit. Each mechanism is controlled by a CPU 112 .
  • a detector 121 detects positions of the focusing lens and the zooming lens and transmits each lens position to the CPU 112 .
  • the shooting lens 101 forms a subject image on an imaging surface of an imaging device 103 .
  • the imaging device 103 is a photoelectric converter such as a CCD-type or MOS-type solid-state imaging device that outputs electric signals in response to the intensity of the subject image formed on the imaging surface.
  • the imaging device 103 is driven by a driver 115 controlling timing of outputting signals therefrom.
  • An aperture stop 102 is arranged between the shooting lens 101 and the imaging device 103 .
  • the aperture stop 102 is driven by a driver 114 having a stopping mechanism and its driving circuit.
  • An imaging signal from the solid-state imaging device 103 is input to an analogue signal processor 104 and subjected to processing such as correlated double sampling (CDS) and the like.
  • the imaging signal processed by the analogue signal processor 104 is converted from an analogue signal to a digital signal by an A/D converter 135 .
  • the A/D converted signal is subjected to various image processing such as edge enhancement, gamma correction and the like by a digital signal processor 106.
  • a plurality of parameters for edge enhancement are provided in advance and the optimum parameter is selected in accordance with the image data.
  • a luminance/color difference signal generating circuit and the like that carry out processing for recording are also included, and parameters for generating these signals are also provided. Accordingly, the most suitable parameter is selected from the plurality of parameters in accordance with a shot image.
  • the plurality of parameters for edge enhancement and color reproduction are stored in a memory 1127, explained later, from which the best suited parameter is selected by the CPU 112.
  • a buffer memory 105 is a frame memory that temporarily stores the A/D converted signals as data of a plurality of image frames shot by the imaging device 103.
  • the data stored in the buffer memory 105 is read out by the digital signal processor 106, subjected to each processing described above, and, after processing, stored again in the buffer memory 105.
  • the CPU 112 is connected with the digital signal processor 106 and drivers 113 through 115 , and carries out sequential control of the shooting movement of the camera system.
  • An AE calculator 1121 in the CPU 112 carries out auto-exposure calculation on the basis of the image signal from the imaging device.
  • An AWB calculator 1122 carries out auto-white-balance calculation for setting parameters for white balance.
  • a feature-detection calculator 1123 stores features such as the shape, position, size and the like of a person in the image data in the memory 1127 on the basis of a given algorithm, calculates an approximate distance to each detected person on the basis of the size of the detected face, the pupil distance, and the like and the focal length of the zoom lens detected by the detector 121, and stores it in the memory 1127 together with the detected time and date.
  • the method of calculating the distance is explained below with reference to FIG. 22.
  • FIG. 22 shows the case when the distance to the person is calculated on the basis of the pupil distance of the detected person.
  • A denotes an average value of the pupil distance of a grown-up man
  • a denotes a detected pupil distance formed on the imaging device
  • L denotes a distance between a shooting lens and the person
  • f denotes the focal length.
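With these definitions, the distance follows from similar triangles (assuming L is much larger than f):

```latex
\frac{a}{f} = \frac{A}{L}
\qquad\Longrightarrow\qquad
L = \frac{A \, f}{a}
```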
  • detected features and the distances to the features calculated on the basis of the detected features are temporarily stored in the memory 1127.
  • the user selects features to be saved among such stored features and registers them. The contents and the method of the registration are explained later in detail with reference to FIG. 13.
  • a band-pass filter (BPF) 1124 picks up high frequency component of a given frequency range on the basis of a shooting signal in the focus detection area arranged in the imaging area.
  • the output of the BPF 1124 is input to a following adder 1125 , and, here, the absolute value of the high frequency component is integrated as a focus evaluation value.
  • An AF calculator 1126 carries out the AF calculation by a contrast method on the basis of these focus evaluation values.
  • the CPU 112 adjusts focus range of the shooting lens 101 and carries out focusing.
  • An operating member 116 connected with the CPU 112 includes a power switch 1161 for turning on/off the power of the camera system, a half-press switch 1162 and a full-press switch 1163 that turn on/off in response to the shutter release button, a setting button 1164 for selecting various kinds of contents for the shooting mode, an Up/Down (U/D) button 1165 for renewing reproduced images, and the like.
  • the setting button 1164 is used together with the U/D button 1165 to set a name to a selected feature by selecting alphabets, numerals, and the like. Besides this function, the U/D button 1165 is also used for selecting a desired person from a plurality of detected people, and for manually driving the zoom to the telephoto/wide-angle side upon shooting.
  • when the luminance of a subject is low, a speedlight 122 emits light upon shooting. The speedlight 122 also has a monitor pre-flash function that prevents or reduces the subject's eyes becoming red, and measures the luminance of the subject in advance by emitting an AF-assist illumination when the luminance of the subject is low.
  • the reference number 123 denotes a sounding body such as a buzzer that gives an audible warning when something is wrong with the camera system.
  • a peak value of the evaluation value detected as a result of the AF calculation and the corresponding lens position are stored in addition to the aforementioned feature information.
  • Image data subjected to various processing by the digital signal processor 106 is stored in an external memory 111 such as a memory card and the like through a read-write signal processor 110 after being temporarily stored in the buffer memory 105.
  • a read-write signal processor 110 carries out data compression upon storing the image data in the external memory 111 and data expansion upon reproducing a compressed image data from an external memory 111 or transferred from another camera system.
  • the reference number 120 denotes an interface for carrying out data communication with an external device such as a digital camera and the like by radio transmission or a connected line. A plurality of such interfaces may be provided at a time.
  • a monitor 109 is an LCD display for showing a shot subject image or showing various setting menus upon shooting/reproducing. This is also used for reproducing an image data stored in the external memory 111 or transferred from another camera system.
  • an image data stored in the buffer memory 105 is read out and converted from a digital image data into an analogue image signal by a D/A converter 108. Then, an image is shown on the monitor 109 by using the analogue image signal.
  • next, the contrast method, which is the AF control method used by the digital camera, is explained.
  • focusing is carried out by using the fact that the degree of defocusing and the contrast of an image are mutually related, and that the contrast of an image becomes maximum when the image comes into focus.
  • the magnitude of contrast can be evaluated by the magnitude of high frequency component of the imaging signal.
  • the high frequency component of the imaging signal is detected by the BPF 1124, the absolute value of the high frequency component is integrated by the adder 1125, and the result is used as a focus evaluation value.
  • the AF calculator 1126 carries out AF calculation on the basis of the focus evaluation value.
  • CPU 112 adjusts the focusing position of the shooting lens 101 by using the result of the calculation.
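The evaluation described above can be sketched as follows; a simple first difference stands in for the band-pass filter of the BPF 1124, and the helper names are illustrative assumptions:

```python
def focus_evaluation(pixels):
    """Integrate the absolute high-frequency component of a line of the
    imaging signal inside the focus detection area: a first difference
    acts as a crude band-pass filter, so a sharper image scores higher."""
    return sum(abs(b - a) for a, b in zip(pixels, pixels[1:]))

def find_peak(evaluation_values):
    """Return the focusing-lens position index where the evaluation value
    is maximum, i.e. where the image contrast peaks (best focus)."""
    return max(range(len(evaluation_values)), key=lambda i: evaluation_values[i])
```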
  • FIGS. 2 and 3 are flow charts showing the total sequence of actions of the digital camera having a face recognition function.
  • step S 101 when the digital camera detects that the power of the digital camera is turned on by the power switch 1161 , the flow proceeds to a step S 102 .
  • step S 102 the operation mode of the digital camera is checked. Here, whether the mode set by the setting button 1164 is set to a shooting mode for shooting a subject or to a reproducing mode for reproducing an image data stored in the memory card is discriminated.
  • when the mode is set to the reproducing mode, the flow proceeds to step S 117 shown in FIG. 3.
  • step S 103 When the mode is set to the shooting mode, the flow proceeds to step S 103 .
  • step S 103 the subject image is displayed as a video on the LCD monitor 109.
  • step S 104 whether the displayed image is set to carry out a feature detection process for detecting feature points in accordance with a given algorithm or not is discriminated.
  • the setting button 1164 is used for this setting.
  • step S 113 When the image is not set to carry out a feature detection process, the flow proceeds to step S 113 and ordinary shooting process is carried out.
  • step S 105 feature points and their positional information are detected from every frame, or every two to three frames, of the video image data displayed on the LCD monitor 109.
  • the detected feature points are, for example, a face, eyes, pupils, eyebrows, a nose, a mouth, ears, hands, legs, and the outline of eyeglasses, together with their direction, position, and dimension. Moreover, the sex, race, and age of the person can be discriminated by detecting the hairstyle, bone structure, and the kind of clothes of the person. Furthermore, not only a person but also general subjects such as animals (a dog, a cat, a bird), houses, and cars can be detected. The following explanation mainly concerns the case of detecting features of a person.
  • step S 106 whether there is any coincident feature point between the plurality of detected feature points and those stored in the memory 1127 of the digital camera in advance is checked. When there is no coincident feature point, the flow proceeds to step S 107. In step S 107, a marker indicating that a feature point is detected is overlaid with the image displayed on the LCD monitor 109. On the other hand, when there is a coincident feature point, the flow proceeds to step S 108. In step S 108, another marker, different from the other markers, indicating that the feature point has already been stored is overlaid. An example of this case is shown in FIG. 15.
  • FIG. 15 shows that, among the six people in the frame, one person's face is too small for a feature point to be detected as a face, the feature points of the other five people are detected as respective faces, and one person among them is detected as one already stored.
  • the faces of the four people whose feature points are simply detected are enclosed with a broken line, and the face of the person whose feature point has already been stored is enclosed with a solid line.
  • when personal information such as a name corresponding to the feature point has already been stored as feature point information, it is also shown as in FIG. 15. Accordingly, identification of the subject is confirmed all the more.
  • the priority for selecting the AE area or AF area, explained later, is also stored as feature information.
  • An example of recording in the memory 1127 regarding a feature point is shown in FIG. 13.
  • in FIG. 13, an area including Mr. A takes priority to be set as an AE area or an AF area.
  • the order of priority can be changed arbitrarily.
  • when Mr. A's feature point information is stored, the date is stored as a registration date.
  • the registration date indicated by (1) is the date Mr. A is stored in the first place.
  • the dates indicated by (2) and (3) are the dates Mr. A is additionally stored in different states such as facing sideways, turning backward, wearing eyeglasses, or the like.
  • Step S 109 through step S 114 show processing peculiar to the detected feature point. Even if a feature point is detected, the user can arbitrarily choose which steps are applied by using the setting button 1164. The following explanation corresponds to the case that all steps are selected.
  • step S 109 the detected result shown on the display is stored. The storing procedure in step S 109 is explained later in detail with reference to FIG. 4. After storing is finished, the flow proceeds to step S 110 for setting an angle of view.
  • step S 110 By the setting in step S 110, even if a plurality of people are in a shot image frame, a subject to be aimed at is automatically detected and zoomed in so as to be placed at the center of the frame.
  • the function is particularly effective when shooting, for example, the user's child in a sports meeting or a concert.
  • the step S 110 is explained later in detail with reference to FIG. 5.
  • step S 111 shooting conditions are set. When a plurality of people are there in a shooting image frame, an area including a person to be shot is set as an AF area or an AE area, or an aperture stop corresponding to the size or the number of the people is set.
  • the step S 111 is explained later in detail with reference to FIGS. 6 through 8.
  • step S 112 a speedlight is set.
  • the step S 112 is explained later in detail with reference to FIG. 9.
  • the steps from S 109 through S 112 are settings before shooting, so the order of the settings can be changed arbitrarily in accordance with the shooting image frame and the contents of each setting also can be changed at each step.
  • step S 113 a subject is shot.
  • the number of shooting frames is automatically set, and the actual exposure is carried out in response to the movement of the people upon shooting.
  • the procedure of the shooting steps is explained later in detail with reference to FIGS. 10 and 11.
  • after shooting, a recording procedure is carried out in step S 114.
  • an outline of the face of a subject is detected and processes such as changing white balance, and automatically reducing freckles and moles are carried out.
  • the step S 114 is explained later in detail with reference to FIG. 12.
  • step S 115 the processed image data and the feature point information are combined as a single file to be stored in the memory card.
  • step S 116 whether the power is turned off or not is discriminated. When the power is not turned off, the flow returns to step S 102 and discriminates the operation mode of the digital camera. When the power switch is turned off, the sequence is completed.
  • in step S 102, when the reproduction mode has been set, the flow proceeds to step S 117 shown in FIG. 3.
  • step S 117 an image data stored in the memory card 111 is reproduced and displayed on the LCD monitor 109 .
  • the reproduced image may be a still image or a video image.
  • step S 118 similar to step S 104 , whether the displayed image is set to carry out a feature detection process or not is discriminated.
  • when the image is not set to carry out a feature detection process, the flow proceeds to step S 127 to carry out ordinary reproduction.
  • when the mode is set to carry out a feature detection process, the flow proceeds to step S 119.
  • step S 119 whether feature point information is attached to a reproducing image data or not is discriminated.
  • when feature point information is not attached, the flow proceeds to step S 120.
  • step S 120 a feature point is detected from the image data similar to step S 105 and the flow proceeds to step S 122 .
  • on the other hand, when feature point information is attached, in step S 121 it is read out from the reproducing image data and the flow proceeds to step S 122.
  • step S 122 the detected feature points, read out feature points, and feature information are overlaid with the reproduced image. Instead of the feature points, the aforementioned marker or an icon may be overlaid.
  • step S 123 whether there is any coincident feature point between a plurality of detected feature points and those stored in the memory 1127 of the digital camera is checked. Similar to step S 106, when there is no coincident feature point, the flow proceeds to step S 124. In step S 124, a marker indicating that a feature point is detected is overlaid with the image displayed on the LCD monitor 109. On the other hand, when there is a coincident feature point, the flow proceeds to step S 125. In step S 125, another marker, different from the other markers, indicating that the feature point has already been stored is overlaid. In step S 126, the detected result shown on the display is stored. The storing procedure is explained later with reference to FIG. 4.
  • step S 127 whether the next image is reproduced or not is discriminated.
  • when the next image is reproduced, the flow returns to step S 117.
  • when it is not, the flow proceeds to step S 128.
  • step S 128 whether the power switch is turned off or not is discriminated. When the power switch is not turned off, the flow returns to step S 102 shown in FIG. 2. When the power switch is turned off, the flow proceeds to the end.
  • step S 151 When the image data is a shot image data, in step S 151 , whether there is any coincident feature point between detected feature points and those stored in the memory 1127 of the digital camera is checked.
  • in the case of a reproduced image data, in step S 151, the feature point or feature point information attached to the reproduced image data is read out. Whether there is any coincident feature point or feature point information between those of the read-out image data and those stored in the memory 1127 in the form explained in FIG. 13 is checked.
  • when a feature point or feature point information is not attached to the reproduced image data, a feature point is detected from the reproduced image data in the same manner as for a shot image data.
  • the feature point information attached to the image data is explained with reference to FIG. 14.
  • in the image data file DSC 002, as shown in FIG. 14, feature point information and feature point data are additionally stored beside the actual image data.
  • here, two people, Mr. A and Ms. C, are stored as feature point information.
  • the priority, the date when Mr. A or Ms. C was detected in the image data, and the position of the center of gravity of the feature point are stored.
  • for Mr. A, in addition to those, two other feature points detected from image data other than the image data DSC 002 are additionally stored. Similar to FIG. 13, simple comments or processing upon recording/reproducing may be stored.
  • the distance to the feature point calculated by the feature-detection calculator 1123 may be stored.
  • the data contents of the feature point information can be changed, added, and deleted arbitrarily.
  • the actual feature point data regarding Mr. A and Ms. C is stored in turn in the feature point data area shown below.
  • step S 151 when the feature point of a shot image data or the feature point or the feature point information of a reproduced image data has already been stored in the memory 1127 , the flow proceeds to step S 152 .
  • step S 152 whether or not the already stored feature point or feature point information is to be changed or added is checked. In particular, the detected person's name or priority is added or changed.
  • when there is no change or addition, the flow proceeds to step S 156. On the other hand, when there is any change or addition, the flow proceeds to step S 153.
  • step S 151 when the feature point of a shot image data or the feature point or the feature point information of a reproduced image data has not been stored in the memory 1127 , the flow proceeds to step S 153 .
  • step S 153 the detected feature point and the feature point information to be stored are shown on the LCD display 109.
  • step S 154 whether the displayed feature point and feature point information have been instructed to be stored or not is checked. In principle, a newly detected feature point is additionally stored together with feature point information in the memory 1127 in step S 155 unless the newly detected feature point is completely identical to that stored in the memory 1127 .
  • the storing instruction can be carried out, for example, by the setting button 1164 by means of selecting a storing execution shown on the LCD display 109 (not shown). Accordingly, accuracy in identifying a person gradually becomes higher.
  • after storing, the flow proceeds to step S 156.
  • step S 156 whether the other feature points of the same image frame are to be stored or not is checked. When another feature point is selected, the flow returns to step S 151 and stores it with the same procedure as before.
  • step S 157 an operation mode of the digital camera is discriminated. When a shooting mode has been set, the storing procedure is completed. The storing operation is carried out every time the displayed image is changed. When a reproduction mode is set, the flow proceeds to step S 158. In step S 158, whether storing to the memory card is selected by the setting button 1164 or not is checked (not shown). When a storing instruction is selected, the flow proceeds to step S 159. In step S 159, a changed or newly added feature point or feature point information is stored attached to the original image in the memory card. When a storing instruction is not selected, the storing procedure is completed without renewing additional information.
  • step S 110 The setting of an angle of view for shooting in step S 110 shown in FIG. 2 is explained with reference to FIG. 5.
  • This is a particularly convenient setting sequence for shooting, for example, the user's child, Ms. C, in a sports meeting.
  • a person to be shot (for example, Ms. C) is selected as a priority shooting person in advance by the setting button 1164 from the feature point information stored in the memory 1127 on the basis of proper name information.
  • the person stored as the priority shooting person takes precedence over the priority listed in the aforementioned feature point information.
  • step S 172 whether the person (mainly the face of the person) is detected in the shooting image frame is checked. When it is not detected, the flow proceeds to step S 173 .
  • step S 173 the CPU 112 instructs the driver 113 to zoom toward the telephoto side of the zoom lens.
  • the zoom in operation may be carried out manually or automatically.
  • step S 174 whether the zoom lens reaches the maximum focal length position or not is checked. When the zoom lens does not reach the maximum focal length position, the flow returns to step S 172 repeating the sequence until the person is detected.
  • step S 174 when the zoom lens has reached the maximum focal length position, the flow proceeds to step S 175 .
  • step S 175 a warning that the person is not found (not shown) is displayed on the LCD monitor 109 and the procedure of setting an angle of view for shooting is completed. When the shooting image frame is changed upon changing shooting direction, the procedure starting from step S 172 is repeated.
  • step S 172 when the face of the person is detected, the flow proceeds to step S 176 .
  • step S 176 a marker is overlaid with the face of the person as shown in FIG. 15. From the displayed image, the user checks whether the face of the person set in advance is in the shot image frame or not. When it is, the user can easily capture the person to be shot in the image frame by moving the image frame.
  • In step S 177, whether the face size of the person set in the image frame is a given size or more is checked. When the face size exceeds the given size, the flow is completed. On the other hand, when the face size is less than the given size, the flow proceeds to step S 178.
  • In step S 178, CPU 112 automatically zooms in the zoom lens. At that time, the center of gravity of the detected subject is controlled to stay in the vicinity of the center of the image frame by simultaneously driving the aforementioned VR lens by the driver 113.
  • In step S 179, whether the face size of the person becomes more than a given size or not is checked. When the face size is not more than the given size, the flow proceeds to step S 180.
  • In step S 180, whether the zoom lens has reached the maximum focal length position or not is checked. When the zoom lens has not reached the maximum focal length position, the flow returns to step S 177 and the zooming-in operation and the VR operation of the zoom lens are continued.
  • In step S 180, when the zoom lens has reached the maximum focal length position, the flow proceeds to step S 181 to give a warning. The warning is shown on the LCD monitor 109 (not shown) as well as given as a sound by the buzzer 123, and the flow proceeds to the end.
  • In step S 179, when the face size of the person exceeds the given size, the flow is completed.
  • The given size is set to an approximate value, for example, about 10% of the whole image frame, by the setting button 1164.
  • Alternatively, the face of the person may merely be moved to the center of the image frame without carrying out zooming in. Then the user can manually zoom in on the desired subject located in the center of the image frame so that it becomes the desired size. In this manner, users can securely store a shot image of their child by finding the child among a large number of children on an occasion such as a sports meeting, a concert, or the like.
  • In the above description, the face is automatically zoomed in when the size of the face is small.
  • Conversely, the face may be zoomed out automatically so as to become a given size when the size of the face is large.
  • Likewise, the zoom lens may be zoomed out automatically until the desired face is detected.
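The zoom-search sequence above (steps S 172 through S 181) can be sketched as follows. This is a minimal illustration, not the patented implementation: `detect_face_fraction`, the 5 mm zoom step, and the 10% face-size threshold are assumptions standing in for the camera's feature-detection calculator and zoom drive.

```python
def set_angle_of_view(detect_face_fraction, zoom_mm, max_zoom_mm,
                      step_mm=5.0, min_face_fraction=0.10):
    """Zoom toward the telephoto side until the face is detected and fills
    at least min_face_fraction of the frame. detect_face_fraction(zoom)
    returns the face's fraction of the frame, or None when no face is found.
    Returns (final_zoom_mm, status)."""
    # S172-S175: search for the face while zooming in.
    while detect_face_fraction(zoom_mm) is None:
        if zoom_mm >= max_zoom_mm:
            return zoom_mm, "warn: person not found"      # S175
        zoom_mm = min(zoom_mm + step_mm, max_zoom_mm)     # S173
    # S177-S181: keep zooming in until the face reaches the given size.
    while detect_face_fraction(zoom_mm) < min_face_fraction:
        if zoom_mm >= max_zoom_mm:
            return zoom_mm, "warn: max focal length"      # S181
        zoom_mm = min(zoom_mm + step_mm, max_zoom_mm)     # S178
    return zoom_mm, "ok"
```

A simulated subject whose apparent face size grows with focal length exercises both loops; the VR re-centering of step S 178 is omitted for brevity.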
  • FIG. 6 is a flow chart showing how to set the best depth of focus by varying the aperture stop in response to the distance to each subject when a plurality of subjects are detected.
  • In step S 201, whether an outline of the face or the eyes of a person is detected is checked. When neither of them is detected, the flow proceeds to step S 208, concluding that the shot image is a long-distance shot such as a landscape.
  • In step S 208, the aperture stop is set to a large value to obtain a larger depth of focus.
  • In step S 202, the zoom position (focal length) of the zoom lens is detected by the lens-position detector 121 and stored in the memory 1127.
  • In step S 203, the distance to the subject is calculated on the basis of the size of the face outline or the pupil distance and the zoom position stored in the memory 1127, and the result is stored in the memory 1127.
  • In step S 204, whether the distance calculation has been completed for all people in the shot image frame is checked. When it has not been completed, the flow returns to step S 203 and the distance calculations for the respective people are carried out, storing each result in the memory 1127.
  • In step S 205, the number of detected people is discriminated.
  • When the number of people is more than a given value, the shooting image is discriminated as a group photograph, so the flow proceeds to step S 208.
  • In step S 208, the aperture stop is set to a large value, obtaining a larger depth of focus in order to bring everyone into focus.
  • Alternatively, the best depth of focus to bring everyone into focus may be derived on the basis of the distance to each person detected in step S 203 and the corresponding aperture value set.
  • When the number of people is the given value or less, the flow proceeds to step S 206.
  • In step S 206, the face size of each detected person is discriminated.
  • When the largest face size is a given value or more, the flow proceeds to step S 207.
  • In step S 207, the shooting image is discriminated as a portrait photograph and the aperture value is set to a small value to obtain a smaller depth of focus.
  • Otherwise, the shooting image is discriminated as a commemorative photograph with a landscape, so the flow proceeds to step S 208, setting the aperture stop to a large value to obtain a larger depth of focus.
  • The given value of the number of people is set to three to four people in advance.
  • In the former case, the shooting mode can automatically be changed to a portrait mode suitable for shooting a person with a smaller depth of field.
  • In the latter case, the shooting mode can automatically be changed to a landscape mode with a larger depth of focus.
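The FIG. 6 decision (steps S 201 through S 208) can be sketched as below. The distance estimate for step S 203 assumes the pinhole relation distance = focal length × real size / image size with an assumed average interpupillary distance of about 62 mm; the thresholds (three to four people, 10% face area) and all names are illustrative, not values from the patent.

```python
PUPIL_SPACING_MM = 62.0  # assumed average interpupillary distance

def subject_distance_mm(focal_length_mm, pupil_spacing_on_sensor_mm):
    # S203: distance from the pupil spacing in the image and the zoom position.
    return focal_length_mm * PUPIL_SPACING_MM / pupil_spacing_on_sensor_mm

def choose_aperture(face_fractions, max_people=4, portrait_face_fraction=0.10):
    """face_fractions: detected face areas as fractions of the frame.
    Returns 'large' (deep focus) or 'small' (shallow focus) aperture class."""
    if not face_fractions:                      # S201 -> S208: landscape shot
        return "large"
    if len(face_fractions) > max_people:        # S205 -> S208: group photo
        return "large"
    if max(face_fractions) >= portrait_face_fraction:
        return "small"                          # S206 -> S207: portrait
    return "large"                              # commemorative with landscape
```

For example, a single face at 2 mm pupil spacing on the sensor at 100 mm focal length yields an estimated distance of 3.1 m.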
  • In step S 221 shown in FIG. 7, whether any person is in a given area of the shooting image frame or not is checked. In checking the existence of a person, whether an outline of a face is detected or not is checked.
  • When no person is detected, in step S 222, a predetermined fixed area such as a central area is set to be an AF area. This is because even if a person is detected, when the person is located on the periphery of the image frame, the camera concludes that the user does not put emphasis on the person, and excludes the person.
  • FIG. 16 shows an example of the shooting area in such a case.
  • Here, a predetermined central area shown by a bold solid line is set to be an AF area.
  • Other AF areas can be set in addition to the central area.
  • In step S 221, when a person is detected in the given area, the flow proceeds to step S 223.
  • In step S 223, whether the number of detected people is plural or not is checked. When the number is not plural, the flow proceeds to step S 228, otherwise to step S 224.
  • In step S 224, the largest face among the detected faces is selected to be an AF area and marked with an AF-area display.
  • FIG. 17 shows an example of a shooting image frame in such a case. In the example, the largest detected face is set as an AF area shown by a solid line.
  • In step S 225, whether any person other than the one automatically set as an AF area is to be set as an AF area or not is checked.
  • In step S 226, the AF area is moved in turn by the setting button 1164.
  • The selection is carried out in order of priority.
  • Alternatively, the selection may be carried out in order of the size of the detected faces.
  • In step S 227, when the selection has been completed, the flow proceeds to step S 228.
  • In step S 228, whether or not the area of the detected face is a first given value or more is checked. When the area is less than the first given value, the flow proceeds to step S 229.
  • In step S 229, an AF area having a given size (here, the size of the first given value) including the detected face inside the area is set. This is because when the area of the detected face is too small, the precision of the aforementioned AF calculation becomes worse.
  • FIG. 18 shows an example of such a case.
  • In step S 228, when the area of the detected face is larger than the first given value, the flow proceeds to step S 230.
  • In step S 230, whether or not the area of the detected face is a second given value or more is checked. When the area is the second given value or more, the digital camera concludes that the shooting image is a portrait photograph and the flow proceeds to step S 231.
  • In step S 231, the position of the detected eye is set to an AF area instead of setting the whole face area to an AF area.
  • FIG. 19 shows an example of this case.
  • When the face area is less than the second given value, the flow proceeds to step S 232.
  • In step S 232, the previously detected face area is set to an AF area.
  • The first and second given values are set to the best values in advance on the basis of shooting various subjects.
  • Alternatively, a person having the highest stored priority or the priority shooting person explained in the section on setting the angle of view may be displayed first. Or a person may be selected in order from the shortest distance by calculating the distance to each person at the same time as detecting the face.
  • By limiting the moving range of the focusing lens to a given range in the vicinity of the calculated distance, AF operation on a person can be made resistant to being pulled to the background. Furthermore, AF tracking of the highest priority person becomes fast and certain.
  • The shooting distance of the first frame is determined on the basis of the peak evaluation value of the contrast method, and on and after the second frame the distance to the subject can be calculated by detecting the difference in the face outline or the pupil distance relative to that of the previous frame in combination with the zoom lens position. Accordingly, an AF control capable of tracking the subject movement at high speed can be realized.
  • The above-described sequence for setting an AF area can also be applied to setting an AE area.
  • The first and second given values are determined as the best values in advance based upon experiments.
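The AF-area selection of steps S 221 through S 232 can be sketched as follows. The face dictionaries and the numeric first/second given values (2% and 25% of the frame) are hypothetical placeholders; the patent leaves these values to experiment.

```python
def select_af_area(faces, first_given=0.02, second_given=0.25):
    """faces: list of dicts with 'area' (fraction of the frame),
    'rect' (face rectangle), and 'eye' (eye position).
    Returns (kind, payload) describing the chosen AF area."""
    if not faces:                               # S221 -> S222: fixed center
        return "center", None
    face = max(faces, key=lambda f: f["area"])  # S224: largest face first
    if face["area"] < first_given:              # S228 -> S229: face too small
        return "fixed_box_around_face", face["rect"]
    if face["area"] >= second_given:            # S230 -> S231: portrait, use eye
        return "eye", face["eye"]
    return "face", face["rect"]                 # S232: face area itself
```

The manual cycling through other candidates (steps S 225 through S 227) is omitted; only the automatic first choice is modeled.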
  • In step S 241, whether the shooting mode is set to a portrait mode suitable for shooting a portrait photograph or not is checked.
  • In this portrait mode, for example, the aperture stop is set to a value near full open in order to defocus the background, white balance is set with emphasis on skin color, and the focusing mode is set to the AF mode.
  • In step S 242, whether a person is detected or not is checked. When no person is detected, the flow proceeds to step S 243. In step S 243, a warning is given on the monitor or by a buzzer.
  • In step S 244, the shooting mode is changed to a landscape mode suitable for shooting a long-distance subject and the sequence completes.
  • In the landscape mode, in order to obtain a large depth of focus, the aperture stop is set to a large value, and the focusing mode is set to a fixed position where the depth of focus reaches infinity by driving the focusing lens.
  • White balance is set to an ordinary shooting condition or a condition emphasizing the green of trees and the blue of the sky upon shooting in the daytime.
  • In step S 242, when a person is detected, the sequence completes.
  • When the portrait mode is not set, in step S 245, whether a person is detected or not is checked. When no person is detected, the sequence completes.
  • When a person is detected, in step S 246, a warning is given on the monitor or by a buzzer.
  • In step S 247, the shooting mode is changed to a portrait mode suitable for shooting a person and the sequence completes.
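The mode-consistency check of steps S 241 through S 247 reduces to a small decision function. This sketch is an assumption-laden simplification: the warning (monitor or buzzer) is represented only by a boolean flag.

```python
def auto_switch_mode(mode, person_detected):
    """S241-S247 sketch: switch portrait <-> landscape when the detected
    scene contradicts the selected mode. Returns (new_mode, warned)."""
    if mode == "portrait" and not person_detected:
        return "landscape", True   # S243 warning, S244 switch to landscape
    if mode == "landscape" and person_detected:
        return "portrait", True    # S246 warning, S247 switch to portrait
    return mode, False             # scene matches the mode; no change
```

The design choice mirrors the flowchart: the camera never switches silently, so a switch and a warning always occur together.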
  • In step S 251, whether or not the luminance of the subject in a given AE area detected by the AE calculator 1121 is a given value or more is checked.
  • Here, the subject is not limited to a person.
  • When the luminance of the subject is less than the given value, in other words, a dark subject, the flow proceeds to step S 261.
  • When the luminance of the subject is the given value or more, in other words, a bright subject, the flow proceeds to step S 252.
  • In step S 252, whether a person has been detected in the shooting image frame or not is checked. In this case also, a person is discriminated by checking whether or not a face outline is detected. When no face outline is detected, the flow proceeds to step S 253.
  • In step S 253, the speedlight is set not to emit light.
  • CPU 112 controls the speedlight not to emit light upon shooting on the basis of this setting. Accordingly, upon actual shooting, the subject is exposed with a shutter speed and an aperture value based on the calculation result of the AE calculator 1121.
  • In step S 254, the lightness of the detected face is measured.
  • In step S 255, whether the measured lightness of the face is brighter than a given value or not is checked. When it is brighter than the given value, the flow proceeds to step S 253, otherwise to step S 256.
  • In step S 256, a distance to the detected person is calculated on the basis of the size of the detected face or the pupil distance and the focal length upon shooting, similarly to the aforementioned step S 203 in FIG. 6.
  • In step S 257, whether the distance to the person is within the range of proper exposure of the speedlight or not is checked. When it is within the range, the flow proceeds to step S 258.
  • In step S 258, the CPU sets a pre-emitting mode that emits light prior to shooting in order to reduce red-eye, and the flow proceeds to step S 259.
  • In step S 259, the emitting light quantity of the speedlight is set on the basis of the calculation to give a proper exposure to the face of the detected person. Accordingly, CPU 112 controls to set the shutter speed and aperture value calculated by the AE calculator 1121 upon actual shooting, so that the whole image frame except the person is shot with a proper exposure.
  • At the same time, the speedlight is controlled to emit with a proper light quantity calculated on the basis of the distance to the person. Therefore, the person also can be shot with a proper exposure.
  • This function is especially effective for shooting with backlight.
  • Upon actual shooting, CPU 112 controls the speedlight to carry out the pre-emitting set in step S 258 in order to reduce red-eye.
  • The pre-emitting may be set to emit a plurality of times.
  • In step S 257, when the distance to the person is not within the range of proper exposure of the speedlight, the flow proceeds to step S 260.
  • In step S 260, a warning that the person will not be given a proper exposure is displayed (not shown).
  • In step S 251, when the subject is a dark subject, the flow proceeds to step S 261.
  • In step S 261, whether a person has been detected in the shooting image frame or not is checked. When an outline of a face is detected, the flow proceeds to step S 262.
  • In step S 262, in the same manner as in step S 256, a distance to the detected person is calculated.
  • In step S 263, whether the distance to the person is within the range of proper exposure of the speedlight or not is checked. When it is not within the range of proper exposure, the flow proceeds to step S 260.
  • In step S 260, a warning that the person will not be given a proper exposure is displayed.
  • When it is within the range, in step S 264, the CPU sets a pre-emitting mode that emits light prior to shooting.
  • Here, the pre-emitting mode is for determining the emitting light quantity of the speedlight upon actual shooting on the basis of the light reflected from the face upon pre-emitting, in addition to the reduction of red-eye described in step S 258.
  • In step S 265, the emitting light quantity of the speedlight upon actual shooting is determined on the basis of the light reflected from the face upon pre-emitting. Similarly to the prior case, the pre-emitting may be set to emit a plurality of times.
  • In step S 261, when an outline of a face is not detected, the flow proceeds to step S 266.
  • In step S 266, the emitting light quantity of the speedlight is set on the basis of the AE calculation of the luminance of the subject.
  • In step S 258 or S 264, instead of setting the pre-emitting mode for reducing red-eye, red-eye may be corrected by software that detects the pupil in the shot image after shooting.
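The speedlight decision of steps S 251 through S 266 can be condensed into one function. All thresholds (luminance, face lightness, flash range) are illustrative assumptions, and `face_lightness`/`distance_mm` are `None` when no face was detected; this is a sketch of the branch structure, not the patented exposure calculation.

```python
def speedlight_plan(luminance, face_lightness, distance_mm,
                    lum_threshold=100, face_threshold=80, flash_range_mm=5000):
    """Returns (emit, pre_emit, note) following the flow of FIGS. 8/9."""
    if luminance >= lum_threshold:                  # bright subject: S252-S260
        if face_lightness is None:                  # no face -> S253: no flash
            return False, False, "no flash"
        if face_lightness >= face_threshold:        # bright face -> S253
            return False, False, "no flash"
        if distance_mm > flash_range_mm:            # S257 -> S260
            return False, False, "warn: out of flash range"
        return True, True, "fill flash for backlit face"   # S258/S259
    # dark subject: S261-S266
    if face_lightness is None:                      # no face -> S266: AE-based
        return True, False, "flash from AE calculation"
    if distance_mm > flash_range_mm:                # S263 -> S260
        return False, False, "warn: out of flash range"
    return True, True, "flash metered by pre-emission"     # S264/S265
```

Note the asymmetry the flowcharts describe: in the bright-subject branch the pre-emission serves red-eye reduction, while in the dark-subject branch it also meters the main emission quantity.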
  • FIG. 10 is a flowchart showing a sequence constructed such that when the full-press switch 1163 is turned on once, the digital camera automatically shoots at a plurality of peak positions of the focus evaluation value obtained from the AF area. Accordingly, a plurality of image frames, each in focus on the subject corresponding to a respective peak position, are obtained.
  • In step S 301, when the half-press switch 1162 is turned on, the flow proceeds to step S 302.
  • In step S 302, CPU 112 carries out focusing from the closest distance to infinity to calculate the evaluation value and detects peak positions.
  • In step S 303, whether there is a plurality of peak values or not is checked. When a plurality of peak values are detected, the flow proceeds to step S 304.
  • In step S 304, whether a person is detected by the feature-detection calculator 1123 or not is checked. When a person is detected, a distance to the detected person is calculated on the basis of the size of the detected face or the pupil distance and the focal length upon shooting, and which peak position among the plurality of peak positions corresponds to the person is discriminated.
  • In step S 305, CPU 112 selects the closest located person as the first shooting position and drives the focusing lens to the peak position corresponding to that person.
  • In step S 303, when the peak position is only one, the flow proceeds to step S 306.
  • In step S 306, the detected peak position (in this case, the closest peak position) is selected.
  • In step S 304, when a plurality of peak positions are detected but no person is detected, the flow proceeds to step S 306.
  • In step S 306, the closest position is selected as a shooting position.
  • In step S 307, whether the full-press switch 1163 is turned on or not is checked. When it is turned on, the flow proceeds to step S 308; otherwise, the flow proceeds to step S 313.
  • In step S 308, an exposure is carried out at the peak position selected in step S 305 or S 306 and the stored image data is read out after the exposure is over.
  • In step S 309, whether there is another peak position corresponding to another person or not is checked. When a peak position corresponding to another person is there, the flow proceeds to step S 310.
  • In step S 310, that position is selected and the flow returns to step S 308.
  • In step S 308, the second exposure is carried out and the stored image data is read out after the exposure is over.
  • When no other peak position corresponding to another person is there, the flow proceeds to step S 311.
  • In step S 311, whether the exposure for the closest peak position has been completed or not is checked. When the exposure for the closest peak position has not been completed, the flow proceeds to step S 312.
  • In step S 312, the exposure is continued. When the exposure for the closest peak position has been completed, the sequence is completed.
  • In step S 307, when the full-press switch 1163 is not turned on, the flow proceeds to step S 313.
  • In step S 313, whether the half-press switch 1162 is turned on or not is checked. When the half-press switch 1162 is turned on, the flow returns to step S 307. In step S 307, the focusing is locked until the full-press switch 1163 is turned on. On the other hand, in step S 313, when the half-press switch 1162 is not turned on, the sequence is completed.
  • FIG. 20 is a drawing showing a case in which a person and a flower located on the near side of the person are disposed in a shooting image frame.
  • FIG. 21 is a graph showing the change in evaluation value relative to the focusing lens position in the case where the whole image frame is assumed to be an AF area. In this case, two peak positions (x 1 and x 2) are detected in the evaluation value.
  • Ordinarily, the closest peak x 2 is selected regardless of their mutual sizes.
  • Here, the peak position x 1 corresponds to the person. Accordingly, by shooting twice, at the closest peak position x 2 and at the peak position x 1 corresponding to the person, image data in focus on each subject can be obtained.
  • By shooting only peak positions corresponding to people, it is possible to set the camera so that when the closest peak position does not correspond to a person, that subject is not shot. In this case, similarly to setting the angle of view, a person having priority to be shot may be set to the camera in advance so that only the one peak position corresponding to that person is shot.
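The exposure ordering of FIG. 10 (steps S 305 through S 311) can be sketched as a pure function over the detected peaks. It assumes, purely for illustration, that a larger lens-position value means a closer subject; the peak lists and that convention are placeholders for the contrast-AF scan.

```python
def shooting_order(peaks, person_peaks):
    """peaks: focus-evaluation peak positions (larger = closer, by
    assumption); person_peaks: the subset corresponding to detected people.
    Returns the exposure order: closest person first (S305), remaining
    person peaks next (S309/S310), finally the closest peak overall (S311)."""
    if not peaks:
        return []
    closest = max(peaks)                           # closest subject's peak
    order = sorted(person_peaks, reverse=True)     # closest person first
    if closest not in order:                       # S311: ensure the closest
        order.append(closest)                      # peak is also exposed
    return order
```

In the FIG. 20/21 example (person at x 1, flower at the closer peak x 2), this yields the two exposures the text describes: the person's peak and then the closest peak.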
  • In step S 321, whether the full-press switch 1163 is turned on or not is checked.
  • When it is turned on, the flow proceeds to step S 322.
  • In step S 322, the pupil of a subject in the image data obtained before the full-press switch 1163 was turned on is detected by the feature-detection calculator 1123.
  • In step S 323, the actual exposure is postponed until the pupil is detected, and the flow returns to step S 322.
  • When the pupil is detected, the flow proceeds to step S 324.
  • In step S 324, the actual exposure is carried out.
  • In step S 325, the exposed image data is read out.
  • In step S 326, whether the pupil is detected by the feature-detection calculator 1123 or not is checked. When the pupil is not detected, the flow proceeds to step S 327.
  • In step S 327, a warning sound is given by the buzzer 123 and the flow returns to step S 322.
  • In step S 326, when the pupil is detected, the sequence is completed. In this manner, whether the subject's eyes are open or not is checked before and after the actual shooting. Accordingly, when the subject is shot with his/her eyes closed, the user can shoot again without delay.
  • Alternatively, when the subject is shot with his/her eyes closed, the pupil of the shot image may be corrected by software after shooting instead of shooting again.
  • In this case, the open eyes of the subject are detected from a video image of the subject shot after shooting and replace the closed eyes.
  • Other defects of the shot subject can also be corrected by shooting again.
  • When the subject moves upon shooting, this is discriminated by detecting an image movement from the reproduced image.
  • When the number of faces counted before and after shooting differs, or when the outline of a face is not clear enough, it is possible to set the camera to shoot again.
  • The warning in step S 327 is not limited to a buzzer; a voice warning explaining the particular problem is also possible, such as "Someone closed their eyes.", "Camera shake!", or "Someone's face is hidden."
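The blink-guard loop of steps S 321 through S 327 can be sketched with three stand-in callables for the camera pipeline: `live_pupil()` checks the live view before exposure, `expose()` performs the actual exposure, and `shot_pupil(frame)` re-checks the stored frame. The names and the retry cap are assumptions.

```python
def shoot_when_eyes_open(live_pupil, expose, shot_pupil, max_tries=5):
    """S321-S327 sketch: postpone exposure until a pupil is visible,
    expose, then verify the stored frame; warn and retry on a blink.
    Returns (frame, warning_count), with frame None if retries run out."""
    warnings = 0
    for _ in range(max_tries):
        if not live_pupil():          # S322/S323: eyes closed, postpone
            continue
        frame = expose()              # S324/S325: exposure and read-out
        if shot_pupil(frame):         # S326: eyes open in the shot frame
            return frame, warnings
        warnings += 1                 # S327: buzzer warning, shoot again
    return None, warnings
```

A simulated subject who blinks on the first shot shows the single warning and the successful retake.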
  • In step S 401, whether the face outline of a person is detected by the feature-detection calculator 1123 or not is checked. When it is not detected, the storing procedure is carried out using parameters for color reproduction or outline enhancement set in advance.
  • When it is detected, the flow proceeds to step S 402.
  • In step S 402, the number of detected faces is counted. When the number of faces is a given number or less, the flow proceeds to step S 406, otherwise to step S 403.
  • The given number is preferably three to four.
  • When the detected number is more than three to four, the image is discriminated as a group photograph and the flow proceeds to step S 403.
  • In step S 403, parameters for color reproduction giving priority to skin color are used in the digital signal processor 106.
  • In step S 404, specific sites of the face are detected.
  • In step S 405, the outline enhancement of the face except the specific sites is weakened.
  • The specific sites are, for example, an eye, a nose, a mouth, ears, hair, an eyebrow, and the like. Since a low-pass filter is applied to the spatial frequency characteristics except at the specific sites, wrinkles, moles, freckles, or the like can be made inconspicuous.
  • In step S 402, when the number of faces is the given value or less, the flow proceeds to step S 406.
  • In step S 406, the size of the face is checked.
  • Here, the size of the largest face is compared with a given value.
  • When it is the given value or more, the image is discriminated as a portrait photograph and the flow proceeds to step S 403, selecting the procedure giving priority to skin color.
  • Otherwise, the image is discriminated as a ceremonial photograph with a landscape and an ordinary storing procedure is carried out.
  • In step S 403, not only the face portion but the whole image area is processed using the parameters giving priority to skin color instead of the ordinary color parameters. This is because the area other than the skin has little skin color component, so even if the procedure using skin-color-priority parameters is carried out, it is hardly affected. Accordingly, a complicated procedure that picks out only the face portion and applies the skin-color-priority parameters only to that portion becomes unnecessary.
  • In step S 405, by increasing the outline enhancement of the detected specific sites such as an eye, a nose, a mouth, ears, hair, an eyebrow, and the like, the face can be expressed boldly. Since the outline enhancement is not effective for a small face, it is possible to carry out the outline enhancement only on a face having a certain amount of area. Moreover, it may be possible to select either step S 403 for the skin color process or step S 405 for the outline enhancement. By preparing a plurality of parameters for the skin color process or the outline enhancement and suitably selecting among them, the degree of skin color or outline enhancement can easily be brought to the best condition. Moreover, in the case of detecting age and sex, parameters for saturation and luminance may be selected as well as parameters for hue on the basis of the detected result.
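The storing-parameter selection of steps S 401 through S 406 can be summarized as follows; the 10% portrait threshold and the group threshold of four faces are illustrative stand-ins for the preferred values the text leaves to tuning.

```python
def storing_parameters(face_fractions, group_threshold=4,
                       portrait_fraction=0.10):
    """face_fractions: detected face areas as fractions of the frame.
    Returns (color_params, soften_skin): soften_skin means weakening the
    outline enhancement outside the specific sites (S404/S405)."""
    if not face_fractions:                     # S401: no face -> ordinary
        return "ordinary", False
    if len(face_fractions) > group_threshold:  # S402: group photograph
        return "skin_priority", True           # S403-S405
    if max(face_fractions) >= portrait_fraction:
        return "skin_priority", True           # S406: portrait photograph
    return "ordinary", False                   # ceremonial with landscape
```

Both the group and portrait branches land on the same skin-priority processing, matching the flow in which step S 406 loops back to step S 403.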
  • In the above description, the feature point detection is carried out before shooting the subject as described in step S 105 in FIG. 2.
  • However, the feature point detection is not necessarily carried out before shooting.
  • The feature point detection may be carried out on the shot image data after shooting.
  • For example, the feature point detection may be carried out only on the shot image data by locating step S 105 for detecting a feature point just before step S 114 for the storing procedure. Accordingly, since the feature detection is not carried out before shooting, the shooting procedure can be carried out quickly, so the user can shoot without losing a shutter chance.

Abstract

A digital camera system capable of operating by detecting a feature point, which has not been accomplished before, in addition to the ordinary functions of a conventional camera is provided. According to an aspect of the present invention, a digital camera system includes a detecting means that detects a given feature point from image data, a receiving means that receives an order from a user, a selecting means that selects each feature point in accordance with a given order instructed by the receiving means when a plurality of feature points are detected, and a display that displays feature point information identifying the feature point selected by the selecting means.

Description

    INCORPORATION BY REFERENCE
  • The disclosures of the following priority applications are herein incorporated by reference: [0001]
  • Japanese Patent Application No. 2003-109882 filed on Apr. 15, 2003; [0002]
  • Japanese Patent Application No. 2003-109883 filed on Apr. 15, 2003; [0003]
  • Japanese Patent Application No. 2003-109884 filed on Apr. 15, 2003; [0004]
  • Japanese Patent Application No. 2003-109885 filed on Apr. 15, 2003; and [0005]
  • Japanese Patent Application No. 2003-109886 filed on Apr. 15, 2003. [0006]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0007]
  • The present invention relates to a digital camera system capable of detecting a feature point of a person and operating on the basis of the detected result. [0008]
  • 2. Description of Related Art [0009]
  • Methods for detecting a person from image data have been known, starting from systems that verify a person's identity by comparing fingerprints or the iris with those stored in advance. U.S. Pat. No. 5,982,912 discloses in detail a method that discriminates a person by comparing feature points detected from an input image with feature points such as an eye, a nose, a mouth, and the like stored in advance. Japanese Laid-Open Patent Application No. 10-232934 discloses a method that increases the accuracy of the image dictionary upon storing feature points detected in such a manner. The following examples are applications of such methods to a camera. [0010]
  • U.S. Pat. No. 5,347,371 discloses a video camera that separately controls parameters for processing a specific subject portion and those for the other portion by detecting the specific subject signal. Accordingly, for example, white balance of the subject can be corrected and the background can be defocused upon shooting portrait photography. [0011]
  • U.S. Pat. No. 5,812,193 discloses a video camera that calculates the area of the detected face image and carries out zooming process comparing it with a reference face area. [0012]
  • Japanese Laid-Open Patent Application No. 9-233384 discloses an image input device that divides a shot image data into a given number and automatically expands and outputs a divided image including a specific image among the divided images. [0013]
  • EP1128316A1 (28.02.2000 U.S. Pat. No. 514,436) discloses a camera that stores data such as the coordinates and dimensions of a face detected by a face-detection algorithm, the position of the eyes, and the pose of the head together with the image data. Moreover, it discloses that the camera carries out an automatic red-eye correction algorithm and applies a face-priority color-balance algorithm to a detected face. [0014]
  • Japanese Laid-Open Patent Application No. 2001-218020 discloses an image processing method that estimates the sex of a person by detecting the lips and locally carries out processes such as skin color, gradation, and smoothing. [0015]
  • Japanese Laid-Open Patent Application No. 2001-330882 discloses a camera that changes a detection algorism for detecting subject information corresponding to a shooting mode. Here, for example, focusing and the aperture value are controlled corresponding to the number and the size of the detected face in accordance with the face detection algorism. [0016]
  • U.S. Laid-Open Patent Application No. 2002/101619A1 discloses an image storing device that stores a shot image in connection with discrimination information of the subject stored in advance. [0017]
  • Japanese Laid-Open Patent Application No. 2002-051255 discloses a main-subject-detection camera that detects the main subject and measures the distance to the main subject when a plurality of people are detected by a person-detection means. Here, the person located at the nearest position, having the largest area, or located at the center of the image frame is discriminated as the main subject. [0018]
  • Japanese Laid-Open Patent Application No. 2002-333652 discloses an image shooting device that generates a storing signal by comparing shot face information with face information stored in advance. When a plurality of faces are present in the image frame, a face corresponding to a higher priority face code is focused. [0019]
  • U.S. Laid-Open Patent Application No. 2003/0071908A1 discloses an imaging device that detects a face and sets a distance measuring area or a photometry area to at least a portion of the face. Moreover, it discloses an image-shooting-with-emitting-a-speedlight device that detects a face and emits a speedlight for preventing red-eye. [0020]
  • SUMMARY OF THE INVENTION
  • The present invention is made in view of the aforementioned problems and has an object to provide a digital camera system capable of operating by detecting a feature point, which has not been accomplished, in addition to ordinary functions of a conventional camera. [0021]
  • In order to solve the problems, a digital camera system according to [0022] claim 1 includes a detecting means that detects a given feature point from image data, a receiving means that receives an order from a user, a selecting means that selects each feature point in accordance with a given order instructed by the receiving means when a plurality of feature points are detected, and a display that displays feature point information identifying the feature point selected by the selecting means. Accordingly, a user can easily select a desired person. In claim 2, the display displays information regarding the feature point overlaid with the image data. In claim 3, a face detection means that detects the size of a face from the feature point detected by the detecting means is included. The selecting means selects the face in descending order of the face size detected by the face detection means. In claim 4, a distance detection means that detects a distance to the feature point detected by the detecting means is included. The selecting means selects the feature point in ascending order of the distance detected by the distance detection means, so the user can easily select a desired subject. In claim 5, a focus-area-setting means that sets a given area including the feature point detected by the detecting means as a focus area for detecting focus is included. In claim 6, a photometry-area-setting means that sets a given area including the feature point detected by the detecting means as a photometry area is included.
  • In another aspect of the present invention, claim [0023] 7 provides a digital camera system including a detecting means that detects a given feature point from an image data, a display that displays the feature point detected by the detecting means, a receiving means that receives information regarding the feature point displayed by the display, and a memory that stores the feature point and information regarding the feature point. Accordingly, information regarding the feature point together with the feature point are stored in the memory such as a nonvolatile memory in the digital camera.
  • In claim [0024] 8, the information regarding the feature point is specific name information. In claim 9, the information regarding the feature point is priority information determined when a plurality of feature points are detected at a time. In claim 10, a discriminating means that discriminates the priority information, and a selecting means that selects the feature point in order of the priority discriminated by the discriminating means are included. In claim 11, a distance-measuring-area-setting means that sets a distance measuring area for measuring a distance to a subject displayed on the display is included. The priority information is a priority among the plurality of feature points upon setting the distance measuring area by the distance-measuring-area-setting means. In claim 12, a photometry-area-setting means that sets a photometry area for measuring lightness of the subject displayed on the display is included. The priority information is a priority among the plurality of feature points upon setting the photometry area by the photometry-area-setting means.
  • In claim [0025] 13, the information regarding the feature point is at least one of color process information and outline correction process information upon storing the image data including the feature point. In claim 14, the information regarding the feature point is at least one of color process information and outline correction process information upon reproducing the image data including the feature point. In claim 15, a discriminating means that discriminates and displays whether or not at least one of the feature point and information regarding the feature point displayed on the display is stored in the memory is included.
  • In another aspect of the present invention, claim [0026] 16 provides a digital camera system including a detecting means that detects a given feature point from an image data, a display that displays the feature point detected by the detecting means, an input means that inputs information regarding the feature point displayed by the display, an instruction means that instructs to store the feature point and information regarding the feature point in connection with the image data, and a memory that stores the feature point, the information regarding the feature point, and the image data instructed by the instruction means. Accordingly, information regarding the feature point and the feature point are stored in the memory in connection with the image data, so it is convenient to select a subject later on the basis of the information regarding the feature point. In claim 17, the information regarding the feature point is positional information in the image data upon detecting the feature point from the image data.
  • The invention according to claim [0027] 18 includes a memory that stores a first feature point and first specific name information regarding the first feature point, a detecting means that detects a given feature point from an image data, an input means that inputs second specific name information regarding a second feature point detected by the detecting means, and a storing instruction means that instructs to additionally store in the memory the second feature point when the first specific name information and the second specific name information are identical and the first feature point and the second feature point are different. Accordingly, when the specific name information regarding the detected subject is the same as specific name information such as a person's name stored in the memory such as a built-in memory and when a new feature point regarding the person is detected, the detected feature point is additionally stored in the built-in memory, so that the accuracy of discriminating the person can be increased.
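A minimal sketch of the additional-storage logic of claim 18, assuming the memory is modeled as a dictionary from specific name information to a list of feature points, and that feature-point agreement can be tested by a comparison function (both assumptions, not taken from the patent):

```python
def register_feature(memory, name, feature, same=lambda a, b: a == b):
    """Claim 18 sketch: if `name` is already known but `feature` differs
    from every stored feature point for that name, store it additionally
    so that the accuracy of discriminating the person increases."""
    stored = memory.setdefault(name, [])
    if not any(same(feature, f) for f in stored):
        stored.append(feature)
    return memory

memory = {"Alice": [(1, 2, 3)]}                # hypothetical feature vectors
register_feature(memory, "Alice", (4, 5, 6))   # new feature point: added
register_feature(memory, "Alice", (1, 2, 3))   # identical: not duplicated
print(memory["Alice"])  # [(1, 2, 3), (4, 5, 6)]
```

The same routine, pointed at the memory card instead of the built-in memory, would cover the transfer directions described in claims 19 and 20.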
  • The invention according to claim [0028] 19 includes a first memory that stores a first feature point and specific name information regarding the first feature point, a second memory that stores a second feature point and the specific name information in connection with an image data, and a storing instruction means that instructs to additionally store in the first memory the second feature point when the first feature point and the second feature point are different. Accordingly, feature points regarding the same specific name information are additionally stored in advance in the built-in memory from a memory card in which the image data, the feature point, and the specific name information such as a person's name regarding the feature point are stored, so that the accuracy of discriminating the person can be increased.
  • The invention according to claim [0029] 20 includes a first memory that stores a first feature point and specific name information regarding the first feature point, a second memory that stores a second feature point and the specific name information in connection with an image data, and a storing instruction means that instructs to additionally store in the second memory the first feature point when the first feature point and the second feature point are different. Accordingly, a feature point not detected from the image data stored in the memory card can additionally be stored in the memory card, so the number of feature points regarding the person in the memory card can gradually be increased.
  • The invention according to claim [0030] 21 includes a display that displays an image data, a detecting means that detects a given feature point from the image data, a memory that stores a plurality of feature points in advance, a checking means that checks whether or not the feature point detected by the detecting means is the same as any one of the feature points stored in the memory, and a discriminating-display means that discriminates and displays on the display the checked result checked by the checking means. Accordingly, it becomes possible to discriminate immediately whether the detected feature point has already been stored or not. In claim 22, the memory stores at least one of specific name information regarding the feature point and priority information for setting a priority of selection when a plurality of feature points are detected at a time, and the discriminating-display means displays on the display information stored in the memory regarding the feature point checked as the same by the checking means.
  • The invention according to claim [0031] 23 includes a detecting means that detects a given feature point from an image data, and a control means that controls the detected feature point in connection with the image data. Accordingly, the image data and the feature point detected from it can be stored in connection with each other.
  • The invention according to claim [0032] 24 includes a memory that stores a given feature point in an image data in connection with information regarding the given feature point, a detecting means that detects a feature point from an image data, an assigning means that assigns at least one of the given feature point and information regarding the given feature point stored in the memory, an agreement checking means that checks whether or not the feature point detected by the detecting means is the same as the given feature point, a size checking means that checks the size of the feature point checked by the agreement checking means as the same, and a zooming means that zooms in/out a given area including the feature point corresponding to the size of the feature point checked by the size checking means. Accordingly, when a feature point that is the same as a given feature point such as a person is detected, the size of the feature point is checked and the feature point is zoomed in/out to become a given size.
  • In claim [0033] 25, the agreement checking means includes an overlaid display means that displays a subject corresponding to the feature point checked as the same by the checking means overlaid with a marker. In claim 26, the information regarding the feature point is specific name information for specifying the feature point. In claim 27, the zooming means zooms in/out such that the size of the feature point checked by the size checking means falls within a given range of size. In claim 28, a position-detecting means that detects the position of the agreed feature point in the shooting image frame is included. The zooming means includes a vibration correction lens that corrects vibration upon shooting and a vibration correction lens driver that drives the vibration correction lens such that the agreed feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means. Accordingly, a desired subject always comes to a given position (such as the center) of the image frame and is zoomed in/out. In claim 29, a position-detecting means that detects the position of the agreed feature point in the shooting image frame is included. The zooming means includes an electronic zooming means that zooms in/out electronically such that the agreed feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means. Accordingly, the detected feature point is zoomed in/out to a given position such as the center of the image frame.
  • The invention according to claim [0034] 30 includes a detecting means that detects a given feature point from an image data, a position-detecting means that detects the position of the feature point in a shooting image frame, a vibration correction lens that corrects vibration upon shooting, and a driver that drives the vibration correction lens such that the feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means. Accordingly, the detected feature point can always be positioned optically at a desired position in the shooting image frame without using a camera platform. In claim 31, the given position is located in the vicinity of the center of the shooting image frame. In claim 32, a memory that stores the given feature point in the image data together with information regarding the given feature point, an assigning means that assigns at least one of the given feature point and information regarding the given feature point stored in the memory, and an agreement checking means that checks whether or not the feature point detected by the detecting means is the same as the given feature point are further included. The driver drives the vibration correction lens such that the feature point checked by the agreement checking means as the same comes to the given position. Accordingly, a given feature point can always be shot at a desired position such as the center of the image frame.
  • The invention according to claim [0035] 33 includes a shooting instruction means that instructs to shoot a still image of a subject, a detecting means that detects a given feature point from the still image data shot in response to the instruction of the shooting instruction means, a discriminating means that discriminates a state of the given feature point detected by the detecting means, and a warning means that warns in accordance with the discriminated result of the discriminating means. Accordingly, when the shot condition has not been satisfactory after shooting, the warning means gives a warning to the user right away, so that the user can take a measure such as reshooting. In claim 34, the given feature point is a pupil portion of a person and when the discriminating means discriminates that a pupil has not been detected, the warning means gives a warning. Accordingly, when a person is shot with his/her eyes shut, a warning is given. In claim 35, the given feature point is an eye or a face outline of a person and when the discriminating means discriminates that the eye or the face outline has a camera shake, the warning means gives a warning. Accordingly, when a person is shot with his/her eyes blinking or with his/her face moving, a warning is given. In claim 36, the detecting means detects a face of a person before shooting a still image and the given feature point is a face of a person, and when the number of the faces detected by the detecting means before shooting a still image does not coincide with that detected from the shot still image, the warning means gives a warning. Accordingly, when a desired person is shot while hiding behind another person, a warning is given.
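The warning conditions of claims 34 through 36 could be sketched as a single decision routine; the function name, inputs, and warning strings below are hypothetical, standing in for results delivered by the discriminating means:

```python
def post_shot_warnings(pupils_detected, eye_or_outline_blurred,
                       faces_before, faces_after):
    """Return the warnings to present after shooting a still image.
    Inputs are assumed to come from the feature-detection calculator:
    pupils_detected       - claim 34: False if no pupil was found
    eye_or_outline_blurred- claim 35: True if eye/face outline shows shake
    faces_before/after    - claim 36: face counts before and after shooting
    """
    warnings = []
    if not pupils_detected:            # claim 34: subject's eyes were shut
        warnings.append("eyes closed")
    if eye_or_outline_blurred:         # claim 35: blink or face movement
        warnings.append("camera shake on face")
    if faces_after != faces_before:    # claim 36: someone hidden from view
        warnings.append("face count changed")
    return warnings

print(post_shot_warnings(False, True, 3, 2))
```

The reshooting invention of claim 37 differs only in consuming the same list to trigger an automatic reshoot instead of a user-visible warning.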
  • The invention according to claim [0036] 37 includes a shooting instruction means that instructs to shoot an image of a subject, a detecting means that detects a given feature point from the image data shot in response to the instruction of the shooting instruction means, a discriminating means that discriminates a state of the given feature point detected by the detecting means, and a reshooting instruction means that instructs the shooting instruction means to reshoot the subject in accordance with the discriminated result of the discriminating means. Accordingly, when a shot condition has not been satisfactory after shooting, the subject is automatically reshot. In claim 38, the given feature point is a pupil portion of a person and when the discriminating means discriminates that a pupil has not been detected, the reshooting instruction means instructs to reshoot the subject. Accordingly, when a person is shot with his/her eyes shut, the person is automatically reshot. In claim 39, the given feature point is an eye or a face outline of a person and when the discriminating means discriminates that the eye or the face outline has a camera shake, the reshooting instruction means instructs to reshoot the subject. Accordingly, when a person is shot with his/her eyes blinking or with his/her face moving, the person is automatically reshot. In claim 40, the detecting means detects a face of a person before shooting an image and the given feature point is a face of a person, and when the number of the faces detected by the detecting means before shooting an image does not coincide with that detected from the shot image, the reshooting instruction means instructs to reshoot the subject. Accordingly, when a desired person is shot while hiding behind another person, the person is automatically reshot.
  • The invention according to claim [0037] 41 includes a detecting means that detects a given feature point from an image data, a memory that stores a plurality of color reproduction parameters for carrying out color reproduction of the whole image data, a discriminating means that discriminates a face of a person from the feature point detected by the detecting means, a size comparator that compares the size of the face discriminated by the discriminating means with a given value, and a selecting means that selects a color reproduction parameter giving priority to skin color among the plurality of color reproduction parameters when the size comparator discriminates that the size of the face is the given value or more. Accordingly, when the detected face size is a given value or more, a color reproduction parameter giving priority to skin color is selected.
  • The invention according to claim [0038] 42 includes a detecting means that detects a given feature point from an image data, a memory that stores a plurality of color reproduction parameters for carrying out color reproduction of the whole image data, a discriminating means that discriminates a face of a person from the feature point detected by the detecting means, a number comparator that compares the number of the faces discriminated by the discriminating means with a given value, and a selecting means that selects a color reproduction parameter giving priority to skin color among the plurality of color reproduction parameters when the number comparator discriminates that the number of the faces is the given value or more. Accordingly, when the number of detected faces is a given value or more, a color reproduction parameter giving priority to skin color is selected.
  • The invention according to claim [0039] 43 includes an imaging device that images a subject, an aperture stop that controls light quantity incident on the imaging device, a detecting means that detects a given feature point from an image data output from the imaging device, a discriminating means that discriminates the size and the number of the faces from the feature point detected by the detecting means, and a control means that controls the aperture value of the aperture stop to become small when the discriminating means discriminates that the face size detected by the detecting means is a first given value or more and a second given value or less. Accordingly, when the size of the detected face is large to a certain extent and when the number of detected faces is three to four or less, the image is discriminated as a portrait photograph and shot by setting a small aperture value to obtain an image with shallow depth of focus.
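A rough sketch of the aperture decision of claim 43, assuming illustrative thresholds and f-numbers (the patent specifies only "a first given value", "a second given value", and a face count of three to four or less):

```python
def portrait_aperture(face_size, face_count,
                      size_min=100, size_max=400, faces_max=4,
                      wide_open=2.8, default=8.0):
    # Claim 43 sketch: a face that is large to a certain extent and a
    # small face count suggest a portrait photograph, so return a small
    # aperture value (wide opening, shallow depth of focus); otherwise
    # keep the default aperture value. All numbers are hypothetical.
    if size_min <= face_size <= size_max and face_count <= faces_max:
        return wide_open
    return default

print(portrait_aperture(face_size=250, face_count=2))  # 2.8 (portrait)
print(portrait_aperture(face_size=50, face_count=2))   # 8.0 (not portrait)
```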
  • The invention according to claim [0040] 44 includes a detecting means that detects a given feature point for discriminating a subject from an image data, a setting means that sets a given setting condition corresponding to at least one item of photometry, measuring distance and white balance each including a plurality of setting conditions upon shooting, and an instructing means that instructs the setting means to set a different setting condition in accordance with the detected result of the detecting means. Accordingly, the best setting condition in accordance with the detected subject can be set.
  • In claim [0041] 45, a discriminating means that discriminates the subject is further included. When the setting condition is any one of a condition suitable for a landscape, a distant subject, and a night view and when the discriminating means discriminates a person as the subject, the instructing means instructs the setting means to set a setting condition suitable for shooting a person. Accordingly, in the case where the aperture value is large to obtain large depth of focus, as an example suitable for shooting a landscape, when a person is detected in the shooting image frame, the shooting mode is immediately shifted to a mode suitable for shooting a person, setting the aperture value to a small value to obtain shallow depth of focus. In claim 46, when the setting condition is suitable for shooting a person and when the detecting means does not detect a person as the subject, the instructing means instructs the setting means to set any one of a condition suitable for a landscape, a distant object and a night view. This is the opposite case of the above-described claim 45. In the case of setting a shooting mode suitable for shooting a person, when a person is not detected in the shooting image frame, the shooting mode is shifted to that suitable for shooting a landscape. In claim 47, a warning means that gives a warning when the setting condition is suitable for shooting a person and when the detecting means does not detect a person as the subject is further included.
  • The invention according to claim [0042] 48 includes an AF means that controls focusing on the basis of a signal output from a given AF area in an image data, a detecting means that detects a given feature point from the image data, a face discriminating means that discriminates a face of a person from the feature point detected by the detecting means, a position discriminating means that discriminates a position of the face discriminated by the face discriminating means, and a setting means that sets a given second area as an AF area when the position discriminating means discriminates that the face position is outside of a given first area. Accordingly, when a subject is located on the periphery of the shooting image frame, the AF area is set to a predetermined central area.
  • The invention according to claim [0043] 49 includes a shooting lens that is composed of a zoom lens and a focusing lens for shooting a subject, a position sensor that detects a position of the zoom lens, a detecting means that detects a given feature point and information regarding the feature point from an image data shot by the shooting lens, and a calculator that calculates a distance to the subject on the basis of information regarding the feature point detected by the detecting means and the position of the zoom lens detected by the position sensor. Accordingly, the distance to the subject is calculated on the basis of the information regarding the detected feature point and the zoom position. In claim 50, the information regarding the feature point is at least one of the face size and the pupil distance. In claim 51, a restriction means that restricts a moving range of the focusing lens to a given range on the basis of the distance to the subject calculated by the calculator is further included. Accordingly, by restricting the focus range of the focusing lens, the AF movement can be carried out faster, and even if a high contrast backdrop exists, the AF movement is not affected by it. In claim 52, an aperture stop that controls light quantity incident on the shooting lens, and an aperture determining means that determines an aperture value of the aperture stop such that when a plurality of faces are detected by the detecting means, a given face among the plurality of faces comes in focus on the basis of the distances to the plurality of faces calculated by the calculator are further included. Accordingly, by varying the aperture value in accordance with the calculated distance to each face, a desired face can be located within the depth of focus of the shooting lens.
  • The invention according to claim [0044] 53 includes an illumination means that illuminates a subject upon shooting the subject, a detecting means that detects a given feature point from an image data, a distance calculator that calculates a distance to the feature point on the basis of the feature point detected by the detecting means, and an illumination quantity setting means that sets an illumination light quantity of the illumination means on the basis of the distance calculated by the distance calculator. Accordingly, the light quantity of the speedlight can be set in accordance with the distance to the detected feature point.
  • In claim [0045] 54, a plurality of photometry areas that measure luminance of the subject, and an exposure setting means that sets an exposure condition upon shooting on the basis of an output of a given photometry area among the plurality of photometry areas are further included. Accordingly, a proper exposure can be provided to both the detected feature point and the backdrop even under a backlight condition.
  • In claim [0046] 55, a size detector that detects a face size or a pupil distance from the feature point detected by the detecting means, and a lens position sensor that detects the focal length of the zoom lens are further included. The distance calculator calculates a distance to the feature point on the basis of the face size or the pupil distance detected by the size detector and the focal length of the zoom lens detected by the lens position sensor. In claim 56, a discriminating means that discriminates whether or not the distance is within the controllable exposure range of the illumination means on the basis of the distance to the subject calculated by the distance calculator, and a warning means that gives a warning when the discriminating means discriminates that the distance is out of the controllable exposure range are further included.
  • The invention according to claim [0047] 57 includes a main illumination means that illuminates a subject upon shooting the subject, an auxiliary illumination means that illuminates the subject with an auxiliary illumination in advance, a detecting means that detects a given feature point from an image data, and a setting means that sets an illumination light quantity of the main illumination means on the basis of a reflection light from the feature point illuminated with the auxiliary illumination by the auxiliary illumination means. Accordingly, since the illumination light quantity upon shooting is determined in accordance with the reflected light from the feature point, the best exposure can be provided to the feature point. In claim 58, the feature point is a face portion of a person.
  • The invention according to claim [0048] 59 includes an imaging device that shoots an image of a subject, a memory that stores an image data, a detecting means that detects a given feature point from the image data, an instructing means that instructs the imaging device to shoot the subject for storing in the memory, and a controller that controls the detecting means not to carry out the detecting procedure on an image data output from the imaging device before the instructing means gives the instruction. The detection is not carried out on the image data output from the imaging device simply for monitoring purposes before shooting the image data for storing in the memory such as a memory card. After the shutter release button is pressed, the detection is carried out on the image data output for storing, before storing the image data. Accordingly, a precious shutter chance is not missed.
  • In claim [0049] 60, a processing means that processes at least one of white balance process and outline enhancement process on the basis of the feature point detected by the detecting means in response to the instruction given by the instructing means is further included. In claim 61, a controller that controls the memory to store the image data processed by the processing means is further included.
  • The invention according to claim [0050] 62 includes a memory that stores a given feature point together with information regarding the feature point detected from an image data, a display that displays either the feature point or the information regarding the feature point stored in the memory, and a deleting means that deletes from the memory at least a portion of the feature point or the information regarding the feature point displayed on the display. Accordingly, the feature point or information regarding the feature point can be deleted from the memory such as the inside memory or the outside memory card.
  • The invention according to claim [0051] 63 includes a memory that stores a given feature point together with information regarding the feature point detected from an image data, a display that displays either the feature point or the information regarding the feature point stored in the memory, and a controller that changes at least a portion of the feature point or the information regarding the feature point displayed on the display and stores it to the memory. Accordingly, the feature point or information regarding the feature point can be changed in the memory such as the inside memory or the outside memory card.
  • Other features and advantages according to the present invention will be readily understood from the detailed description of the preferred embodiments in conjunction with the accompanying drawings.[0052]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram explaining main features of a digital camera system according to the present invention. [0053]
  • FIG. 2 is a flow chart explaining the total sequence of actions of the digital camera according to the present invention. [0054]
  • FIG. 3 is a flow chart explaining a sequence of actions of the digital camera according to the present invention in which the mode of the digital camera is set to reproduction mode. [0055]
  • FIG. 4 is a flow chart explaining a sequence for storing feature point information. [0056]
  • FIG. 5 is a flow chart explaining a sequence for setting shooting angle of view. [0057]
  • FIG. 6 is a flow chart explaining a sequence for setting shooting condition. [0058]
  • FIG. 7 is a flow chart explaining a sequence for setting other shooting condition. [0059]
  • FIG. 8 is a flow chart explaining a sequence for setting other shooting condition. [0060]
  • FIG. 9 is a flow chart explaining a sequence for setting an emitting light quantity of a speedlight. [0061]
  • FIG. 10 is a flow chart explaining a shooting sequence. [0062]
  • FIG. 11 is a flow chart explaining another shooting sequence. [0063]
  • FIG. 12 is a flow chart explaining a shooting sequence. [0064]
  • FIG. 13 is a drawing explaining a storing state of a feature point and feature information. [0065]
  • FIG. 14 is a drawing explaining a storing state of an image data and feature information attached thereto. [0066]
  • FIG. 15 is a drawing showing markers overlaid on each detected feature point, each feature point discriminated with a different marker. [0067]
  • FIG. 16 shows an example of setting an AF area or an AE area. [0068]
  • FIG. 17 shows another example of setting an AF area or an AE area. [0069]
  • FIG. 18 shows another example of setting an AF area or an AE area. [0070]
  • FIG. 19 shows another example of setting an AF area or an AE area. [0071]
  • FIG. 20 shows an example of setting an AF area. [0072]
  • FIG. 21 is a graph showing change in evaluation value relative to the focusing lens position. [0073]
  • FIG. 22 is a drawing explaining the case when the distance to the person is calculated on the basis of the pupil distance of the detected person and the focal length of the zoom lens.[0074]
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Embodiments of the present invention are going to be explained below with reference to accompanying drawings. [0075]
  • FIG. 1 is a block diagram explaining main features of a digital camera system according to the present invention. [0076]
  • A [0077] shooting lens 101 is composed of a zoom lens for varying the focal length continuously, a focusing lens for adjusting focal point, and a VR (vibration reduction) lens for correcting a camera shake upon shooting. These lenses are driven by a driver 113. The driver 113 is composed of a zooming lens driving mechanism and its driving circuit, a focusing lens driving mechanism and its driving circuit, and a VR lens driving mechanism and its driving circuit. Each mechanism is controlled by a CPU 112. A detector 121 detects positions of the focusing lens and the zooming lens and transmits each lens position to the CPU 112.
  • The [0078] shooting lens 101 forms a subject image on an imaging surface of an imaging device 103. The imaging device 103 is a photoelectric converter such as a CCD-type or MOS-type solid-state imaging device outputting electric signals in response to the intensity of the subject image formed on the imaging surface. The imaging device 103 is driven by a driver 115 controlling the timing of outputting signals therefrom. An aperture stop 102 is arranged between the shooting lens 101 and the imaging device 103. The aperture stop 102 is driven by a driver 114 having a stopping mechanism and its driving circuit. An imaging signal from the solid-state imaging device 103 is input to an analogue signal processor 104 and subjected to processing such as a correlated double sampling (CDS) process and the like. The imaging signal processed by the analogue signal processor 104 is converted from an analogue signal to a digital signal by an A/D converter 135.
  • The A/D converted signal is subjected to various image processing such as edge enhancement, gamma correction and the like by a [0079] digital signal processor 106. A plurality of parameters for edge enhancement are provided in advance and the optimum parameter is selected in accordance with the image data. The digital signal processor 106 includes a luminance/color difference signal generating circuit and the like that carry out processing for recording, and parameters for generating these signals are also provided. Accordingly, the most suitable parameter is selected from the plurality of parameters in accordance with a shot image. The plurality of parameters for edge enhancement and color reproduction are stored in a memory 1127, explained later, from which the best suited parameter is selected by the CPU 112. A buffer memory 105 is a frame memory that temporarily stores the A/D converted signals of a plurality of image frames shot by the imaging device 103. The data stored in the buffer memory 105 is read out by the digital signal processor 106, subjected to each process described above, and, after processing, stored again in the buffer memory 105. The CPU 112 is connected with the digital signal processor 106 and drivers 113 through 115, and carries out sequential control of the shooting movement of the camera system. An AE calculator 1121 in the CPU 112 carries out auto-exposure calculation on the basis of the image signal from the imaging device. An AWB calculator 1122 carries out auto-white-balance calculation for setting parameters for white balance.
A feature-detection calculator 1123 stores features of a person in the image data, such as shape, position, and size, in the memory 1127 on the basis of a given algorithm, calculates an approximate distance to each detected person on the basis of the size of the detected face, the pupil distance, and the like together with the focal length of the zoom lens detected by the detector 121, and stores the distance in the memory 1127 together with the detection time and date. The method of calculating the distance is explained below with reference to FIG. 22, which shows the case where the distance to the person is calculated on the basis of the detected pupil distance. The reference symbol “A” denotes the average pupil distance of an adult, “a” denotes the detected pupil distance formed on the imaging device, “L” denotes the distance between the shooting lens and the person, and “f” denotes the focal length. The following proportional expression is easily derived from FIG. 22:
  • A/L=a/f
  • Therefore, the distance to the person becomes L=(A/a)·f. In this manner, detected feature points and the distances calculated from them are temporarily stored in the [0080] memory 1127. The user then selects which of the stored feature points to save and registers them. The contents and method of the registration are explained later in detail with reference to FIG. 13.
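The similar-triangle relation above can be sketched as a short calculation. This is a minimal illustration, not the patent's implementation; the function name and the assumed average pupil distance of 65 mm are illustrative, and all quantities must share consistent units.

```python
# Sketch of the distance estimation L = (A/a)*f described above.
# AVERAGE_PUPIL_DISTANCE_MM stands in for the average adult value "A"
# (65 mm is an assumption for illustration, not a value from the patent).
AVERAGE_PUPIL_DISTANCE_MM = 65.0

def estimate_subject_distance(pupil_distance_on_sensor_mm: float,
                              focal_length_mm: float) -> float:
    """Estimate the distance L to a person from A / L = a / f,
    i.e. L = (A / a) * f."""
    if pupil_distance_on_sensor_mm <= 0:
        raise ValueError("detected pupil distance must be positive")
    return (AVERAGE_PUPIL_DISTANCE_MM / pupil_distance_on_sensor_mm) * focal_length_mm

# Example: a pupil distance of 1 mm on the sensor at f = 50 mm gives
# L = (65 / 1) * 50 = 3250 mm, i.e. about 3.25 m.
```

As the patent notes later, such a value is only an approximate distance, since real pupil distances vary by individual; the final focus position comes from the contrast method.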
  • A band-pass filter (BPF) [0081] 1124 extracts the high-frequency component of a given frequency range from the imaging signal in the focus detection area arranged in the imaging area. The output of the BPF 1124 is input to an adder 1125, where the absolute value of the high-frequency component is integrated to produce a focus evaluation value. An AF calculator 1126 carries out AF calculation by a contrast method on the basis of these focus evaluation values. On the basis of the calculation result of the AF calculator 1126, the CPU 112 adjusts the focus position of the shooting lens 101 and carries out focusing.
  • An operating [0082] member 116 connected to the CPU 112 includes a power switch 1161 for turning the power of the camera system on and off, a half-press switch 1162 and a full-press switch 1163 that turn on and off in response to the shutter release button, a setting button 1164 for selecting various contents of the shooting mode, an Up/Down (U/D) button 1165 for stepping through reproduced images, and the like. The setting button 1164 is used together with the U/D button 1165 to assign a name to a selected feature point by selecting alphabetic characters, numerals, and the like. The U/D button 1165 is also used for selecting a desired person from a plurality of detected people, and for manually driving the zoom toward the telephoto or wide-angle side upon shooting.
  • When the luminance of a subject is low, a [0083] speedlight 122 fires. The speedlight 122 also has a monitor pre-flash function that prevents or reduces red-eye and measures the luminance of the subject in advance, and it emits AF-assist illumination when the luminance of the subject is low. The reference number 123 denotes a sounding body such as a buzzer that warns of a malfunction of the camera system by sound. In the memory 1127, a peak value of the focus evaluation value obtained from the AF calculation and the corresponding lens position are stored in addition to the aforementioned feature information. Image data that has undergone the various processing by the digital signal processor 106 is stored in an external memory 111, such as a memory card, through a read-write signal processor 110 after being temporarily stored in the buffer memory 105. When the image data is stored in the external memory 111, a given compression format such as JPEG is generally used. The read-write signal processor 110 carries out data compression upon storing image data in the external memory 111, and data expansion upon reproducing compressed image data read from the external memory 111 or transferred from another camera system. The reference number 120 denotes an interface for carrying out data communication with an external device, such as another digital camera, by radio transmission or a wired connection. A plurality of such interfaces may be provided.
  • A [0084] monitor 109 is an LCD display for showing a shot subject image or various setting menus upon shooting or reproducing. It is also used for reproducing image data stored in the external memory 111 or transferred from another camera system. When an image is shown on the monitor 109, image data stored in the buffer memory 105 is read out and converted from digital image data into an analogue image signal by a D/A converter 108, and the analogue image signal is used to display the image on the monitor 109.
  • The contrast method, which is the AF control method used by the digital camera, is now explained. This method exploits the fact that the degree of defocus and the contrast of an image are correlated, and that the contrast becomes maximum when the image is in focus. The magnitude of the contrast can be evaluated by the magnitude of the high-frequency component of the imaging signal. In other words, the high-frequency component of the imaging signal is detected by the [0085] BPF 1124, the absolute value of the high-frequency component is integrated by the adder 1125, and the result is used as a focus evaluation value. As described above, the AF calculator 1126 carries out AF calculation on the basis of the focus evaluation value, and the CPU 112 adjusts the focusing position of the shooting lens 101 by using the result of the calculation.
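The focus-evaluation chain above can be sketched in a few lines. This is an illustrative simplification: a simple horizontal difference stands in for the band-pass filter 1124, and the function names are assumptions, not the patent's circuits.

```python
def focus_evaluation_value(area):
    """Approximate the BPF 1124 + adder 1125 chain: sum the absolute
    high-frequency content of a focus-detection area, here using a
    simple horizontal pixel difference as the high-pass filter
    (an assumption for illustration, not the patent's actual filter)."""
    total = 0
    for row in area:
        for x in range(1, len(row)):
            total += abs(row[x] - row[x - 1])
    return total

def find_peak_lens_position(evaluations):
    """Given {lens_position: focus evaluation value}, return the position
    with the maximum value, as contrast AF does when seeking the peak."""
    return max(evaluations, key=evaluations.get)

# A sharply focused area has stronger high-frequency content than a
# blurred one, so it yields the larger evaluation value.
sharp  = [[10, 200, 10, 200], [200, 10, 200, 10]]
blurry = [[90, 110, 90, 110], [110, 90, 110, 90]]
```

In the camera, the AF calculator 1126 would evaluate this value at successive lens positions and drive the lens toward the maximum.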
  • FIGS. 2 and 3 are flow charts showing the overall sequence of operations of the digital camera having a face recognition function. In FIG. 2, in step S[0086] 101, when the digital camera detects that its power is turned on by the power switch 1161, the flow proceeds to step S102. In step S102, the operation mode of the digital camera is checked: whether the mode set by the setting button 1164 is a shooting mode for shooting a subject or a reproduction mode for reproducing image data stored in the memory card is discriminated. When the mode is set to the reproduction mode, the flow proceeds to step S117 shown in FIG. 3. When the mode is set to the shooting mode, the flow proceeds to step S103. In step S103, the subject image is displayed as video on the LCD monitor 109. In step S104, whether or not the displayed image is set to undergo a feature detection process for detecting feature points in accordance with a given algorithm is discriminated; the setting button 1164 is used for this setting. When the image is not set to undergo feature detection, the flow proceeds to step S113 and ordinary shooting is carried out. When the image is set to undergo feature detection, the flow proceeds to step S105, and feature points and their positional information are detected from every frame, or every two to three frames, of the video image data displayed on the LCD monitor 109. The detected feature points include a face, eyes, pupils, eyebrows, a nose, a mouth, ears, hands, legs, and the outline of eyeglasses, together with their direction, position, and dimensions. Moreover, the sex, race, and age of a person can be discriminated by detecting the person's hairstyle, bone structure, and kind of clothes. Furthermore, not only people but also general subjects such as animals (a dog, a cat, a bird), houses, and cars can be detected. The following explanation mainly concerns the case of detecting feature points of a person.
  • In step S[0087] 106, whether there is any coincident feature point between the plurality of detected feature points and those stored in advance in the memory 1127 of the digital camera is checked. When there is no coincident feature point, the flow proceeds to step S107. In step S107, a marker indicating that a feature point is detected is overlaid on the image displayed on the LCD monitor 109. On the other hand, when there is a coincident feature point, the flow proceeds to step S108. In step S108, a different marker, indicating that the feature point has already been stored, is overlaid. An example of this case is shown in FIG. 15. FIG. 15 shows a frame containing six people: one person's face is too small for a feature point to be detected as a face, feature points are detected for the faces of the other five people, and one of those five is recognized as already stored. The faces of the four people whose feature points are merely detected are enclosed with broken lines, and the face of the person whose feature point has already been stored is enclosed with a solid line. Moreover, when personal information such as a name corresponding to the feature point has already been stored as feature point information, it is also displayed as shown in FIG. 15, so that the identity of the subject can be confirmed with greater certainty. In this embodiment, a priority used when selecting the AE area or AF area, explained later, is also stored as feature information. An example of the records in the memory 1127 regarding feature points is shown in FIG. 13. In FIG. 13, feature points corresponding to respective names such as Mr. A, Ms. B, and Ms. C, and a feature point that has no name, such as Mr. Unknown, are stored in turn. In the stored contents for Mr. A, the aforementioned priority for selecting the AE area and AF area is set to 1.
  • Accordingly, for example, when Mr. A and Ms. C are detected simultaneously in the same shot image frame, an area including Mr. A is given priority when setting an AE area or an AF area. The order of priority can be changed arbitrarily. As part of Mr. A's feature point information, the date when the information was stored is recorded as a registration date. The registration date indicated by (1) is the date Mr. A was stored for the first time. The dates indicated by (2) and (3) are dates when Mr. A was additionally stored in different states, such as facing sideways, turning backward, or wearing eyeglasses. [0088]
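The priority-based selection just described can be illustrated with a small sketch. The data layout and names here are assumptions for illustration, not the patent's storage format; the patent stores priorities with the feature point records in the memory 1127.

```python
# Registered people and their stored priorities (lower number = higher
# priority, as with Mr. A's priority of 1 in FIG. 13). Illustrative only.
registered = {"Mr. A": 1, "Ms. B": 2, "Ms. C": 3}

def select_af_target(detected_names):
    """Among the people detected in the frame, return the registered
    person with the highest stored priority (lowest number), or None
    when no detected person is registered."""
    candidates = [n for n in detected_names if n in registered]
    if not candidates:
        return None
    return min(candidates, key=lambda n: registered[n])

# When Mr. A and Ms. C appear in the same frame, Mr. A is chosen
# as the AE/AF target, matching the example in the text.
```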
  • By storing a plurality of feature points for the same person, such as with and without eyeglasses or a beard, the accuracy of identifying a person from the detected feature points is increased. The contents of such feature points can be displayed on the [0089] LCD monitor 109 and added or deleted arbitrarily. In addition to the priority and the registration date, simple comments, processing to be applied upon recording or reproducing when the feature point is detected (such as white balance setting, outline compensation, and the like), the distance to the feature point, and the like may also be stored. The actual data of each feature point set to be stored is stored in the feature point data area.
  • Step S[0090] 109 through step S114 show processing specific to the detected feature points. Even when a feature point is detected, the user can arbitrarily choose which of these steps to apply by using the setting button 1164. The following explanation corresponds to the case where all steps are selected. In step S109, the detected result shown on the display is stored; the storing procedure is explained later in detail with reference to FIG. 4. After storing is finished, the flow proceeds to step S110 for setting an angle of view. With this setting, even when a plurality of people are in the shot image frame, the subject to be aimed at is automatically detected and zoomed in on so as to be placed at the center of the frame. The function is particularly effective when shooting one's child at a sports meeting or a concert. Step S110 is explained later in detail with reference to FIG. 5. In step S111, shooting conditions are set. When a plurality of people are in the shooting image frame, an area including the person to be shot is set as an AF area or an AE area, and an aperture stop corresponding to the size or number of the people is set. Step S111 is explained later in detail with reference to FIGS. 6 through 8. In step S112, the speedlight is set; step S112 is explained later in detail with reference to FIG. 9. Steps S109 through S112 are settings made before shooting, so their order can be changed arbitrarily in accordance with the shooting image frame, and the contents of each setting can also be changed at each step.
  • In step S[0091] 113, the subject is shot. In this step, by detecting people, the number of shooting frames is automatically set, and the actual exposure is carried out in response to the movement of the people upon shooting. The shooting procedure is explained later in detail with reference to FIGS. 10 and 11. After shooting, a recording procedure is carried out in step S114. In this step, the outline of the face of a subject is detected and processes such as changing the white balance and automatically reducing freckles and moles are carried out. Step S114 is explained later in detail with reference to FIG. 12. In step S115, the processed image data and the feature point information are combined into a single file to be stored in the memory card. In step S116, whether or not the power is turned off is discriminated. When the power is not turned off, the flow returns to step S102 and the operation mode of the digital camera is discriminated again. When the power switch is turned off, the sequence is completed.
  • In step S[0092] 102, when the reproduction mode has been set, the flow proceeds to step S117 shown in FIG. 3. In step S117, image data stored in the memory card 111 is reproduced and displayed on the LCD monitor 109. The reproduced image may be a still image or a video image. In step S118, similarly to step S104, whether or not the displayed image is set to undergo a feature detection process is discriminated. When the mode is not set to carry out feature detection, the flow proceeds to step S127 to carry out ordinary reproduction. When the mode is set to carry out feature detection, the flow proceeds to step S119. In step S119, whether or not feature point information is attached to the reproduced image data is discriminated. When feature point information is not attached, the flow proceeds to step S120; there, a feature point is detected from the image data as in step S105, and the flow proceeds to step S122. When feature point information is attached, the flow proceeds to step S121; there, the feature point information attached to the reproduced image data is read out, and the flow proceeds to step S122. In step S122, the detected feature points, the read-out feature points, and the feature information are overlaid on the reproduced image. Instead of the feature points themselves, the aforementioned marker or an icon may be overlaid.
  • In step S[0093] 123, whether there is any coincident feature point between the plurality of detected feature points and those stored in the memory 1127 of the digital camera is checked. As in step S106, when there is no coincident feature point, the flow proceeds to step S124. In step S124, a marker indicating that a feature point is detected is overlaid on the image displayed on the LCD monitor 109. On the other hand, when there is a coincident feature point, the flow proceeds to step S125. In step S125, a different marker, indicating that the feature point has already been stored, is overlaid. In step S126, the detected result shown on the display is stored; the storing procedure is explained later with reference to FIG. 4. After completion of storing in step S126, the flow proceeds to step S127. In step S127, whether or not the next image is to be reproduced is discriminated. When the next image is selected by the U/D button 1165, the flow returns to step S117. On the other hand, when the next image is not selected, the flow proceeds to step S128. In step S128, whether or not the power switch is turned off is discriminated. When the power switch is not turned off, the flow returns to step S102 shown in FIG. 2. When the power switch is turned off, the flow proceeds to the end.
  • <Storing Feature Point Information>[0094]
  • The step of storing feature point information is explained with reference to FIG. 4. The procedure shown in FIG. 4 corresponds to the aforementioned step S[0095] 109 in FIG. 2 and step S126 in FIG. 3. When the image data is shot image data, in step S151, whether there is any coincident feature point between the detected feature points and those stored in the memory 1127 of the digital camera is checked. When the image data is reproduced image data, in step S151, the feature points or feature point information attached to the reproduced image data are read out, and whether any of them coincide with those stored in the memory 1127 in the form explained in FIG. 13 is checked. When no feature point or feature point information is attached to the reproduced image data, feature points are detected from the reproduced image data in the same way as for shot image data.
  • Here, the feature point information attached to the image data is explained with reference to FIG. 14. In the image data file DSC[0096]002 shown in FIG. 14, feature point information and feature point data are stored in addition to the actual image data. In the case of FIG. 14, two people, Mr. A and Ms. C, are stored as feature point information. As the stored contents, the priority, the date when Mr. A or Ms. C was detected in the image data, and the position of the center of gravity of the feature point are stored. For Mr. A, two other feature points detected from image data other than DSC002 are additionally stored. As in FIG. 13, simple comments or processing to be applied upon recording/reproducing may be stored, and the distance to the feature point calculated by the feature-detection calculator 1123 may also be stored. The data contents of the feature point information can be changed, added to, and deleted arbitrarily. The actual feature point data regarding Mr. A and Ms. C is stored in turn in the feature point data area shown below.
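The file layout of FIG. 14 can be sketched as a data structure: actual image data, followed by per-person feature point information, followed by a feature point data area. All field names here are assumptions for illustration; the patent does not specify a binary format.

```python
# Rough sketch of the DSC002 file layout described for FIG. 14.
# Field names (name, priority, detected_date, centroid, comment)
# are illustrative, not the patent's actual record format.
from dataclasses import dataclass, field

@dataclass
class FeaturePointInfo:
    name: str            # e.g. "Mr. A"
    priority: int        # order used when selecting AE/AF areas
    detected_date: str   # date the person was detected in this image
    centroid: tuple      # position of the center of gravity of the feature point
    comment: str = ""    # optional simple comment

@dataclass
class ImageFile:
    image_data: bytes                                     # the actual image
    feature_point_info: list = field(default_factory=list)
    feature_point_data: list = field(default_factory=list)  # actual feature data, stored in turn

dsc002 = ImageFile(
    image_data=b"...",
    feature_point_info=[
        FeaturePointInfo("Mr. A", 1, "(1) registration date", (120, 80)),
        FeaturePointInfo("Ms. C", 3, "detection date", (300, 95)),
    ],
)
```

Storing the feature point information alongside the image, as in step S115, is what allows the reproduction flow of FIG. 3 to read it back in step S121 without re-running detection.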
  • In step S[0097] 151, when the feature point of the shot image data, or the feature point or feature point information of the reproduced image data, has already been stored in the memory 1127, the flow proceeds to step S152. In step S152, whether or not the already stored feature point or feature point information is to be changed or added to is checked; in particular, the detected person's name or priority is added or changed. When there is no change or addition in step S152, the flow proceeds to step S156. On the other hand, when there is a change or addition, the flow proceeds to step S153.
  • In step S[0098] 151, when the feature point of the shot image data, or the feature point or feature point information of the reproduced image data, has not been stored in the memory 1127, the flow proceeds to step S153. In step S153, the detected feature points and the feature point information to be stored are shown on the LCD display 109. In step S154, whether or not storing of the displayed feature point and feature point information has been instructed is checked. In principle, a newly detected feature point is additionally stored together with its feature point information in the memory 1127 in step S155, unless the newly detected feature point is completely identical to one already stored in the memory 1127. The storing instruction can be carried out, for example, with the setting button 1164 by selecting a storing-execution item shown on the LCD display 109 (not shown). Accordingly, the accuracy of identifying a person gradually increases. When the detected feature point has already been stored, or when a feature point totally unrelated to the user is detected, it is not stored, and the flow proceeds to step S156. In step S156, whether or not other feature points in the same image frame are to be stored is checked. When another feature point is selected, the flow returns to step S151 and stores it by the same procedure as before.
  • When no other feature point is selected, the flow proceeds to step S[0099] 157. In step S157, the operation mode of the digital camera is discriminated. When the shooting mode has been set, the storing procedure is completed; the storing operation is carried out every time the displayed image changes. When the reproduction mode is set, the flow proceeds to step S158. In step S158, whether or not memory card storing execution has been selected by the setting button 1164 is checked (not shown). When the storing instruction is selected, the flow proceeds to step S159. In step S159, the changed or newly added feature point or feature point information is stored in the memory card attached to the original image. When the storing instruction is not selected, the storing procedure is completed without updating the additional information.
  • <Setting an Angle of View for Shooting>[0100]
  • The setting of an angle of view for shooting in step S[0101] 110 shown in FIG. 2 is explained with reference to FIG. 5. This is a particularly convenient setting sequence for shooting, for example, the user's child, Ms. C, at a sports meeting. In step S171, the person to be shot (for example, Ms. C) is selected in advance as a priority shooting person with the setting button 1164 from the feature point information stored in the memory 1127, on the basis of the name information. The person stored as the priority shooting person is given priority over the priorities listed in the aforementioned feature point information. In step S172, whether the person (mainly the person's face) is detected in the shooting image frame is checked. When the person is not detected, the flow proceeds to step S173. In step S173, the CPU 112 instructs the driver 113 to drive the zoom lens toward the telephoto side; the zoom-in operation may be carried out manually or automatically. In step S174, whether or not the zoom lens has reached its maximum focal length position is checked. When the zoom lens has not reached the maximum focal length position, the flow returns to step S172, repeating the sequence until the person is detected. In step S174, when the zoom lens has reached the maximum focal length position, the flow proceeds to step S175. In step S175, a warning that the person cannot be found (not shown) is displayed on the LCD monitor 109 and the procedure of setting an angle of view is completed. When the shooting image frame changes because the shooting direction is changed, the procedure is repeated from step S172.
  • In step S[0102] 172, when the face of the person is detected, the flow proceeds to step S176. In step S176, a marker is overlaid on the face of the person as shown in FIG. 15. From the displayed image, the user checks whether or not the face of the person set in advance is in the shot image frame. When the face is there, the user can easily capture the person to be shot by moving the image frame. In step S177, whether or not the face of the set person occupies a given size or more in the image frame is checked. When the face size exceeds the given size, the flow is completed. On the other hand, when the face size is less than the given size, the flow proceeds to step S178. In step S178, the CPU 112 automatically zooms in the zoom lens. At that time, the center of gravity of the detected subject is controlled to stay in the vicinity of the center of the image frame by simultaneously driving the aforementioned VR lens with the driver 113.
  • In step S[0103] 179, whether or not the face of the set person has become larger than the given size is checked. When the face size is not more than the given size, the flow proceeds to step S180. In step S180, whether or not the zoom lens has reached the maximum focal length position is checked. When the zoom lens has not reached the maximum focal length position, the flow returns to step S177 and the zooming-in and VR operations of the zoom lens are continued. In step S180, when the zoom lens has reached the maximum focal length position, the flow proceeds to step S181 to give a warning. The warning is shown on the LCD monitor 109 (not shown) as well as given as a sound by the buzzer 123, and the flow proceeds to the end. In step S179, when the face size of the set person exceeds the given size, the flow is completed. Here, the given size is set approximately, for example to about 10% of the whole image frame, by the setting button 1164. Moreover, in step S178, the face of the set person may merely be moved to the center of the image frame without zooming in; the user can then manually zoom in on the desired subject located at the center of the image frame until it reaches the desired size. In this manner, users can reliably find their child among a large number of children and record the shot image on an occasion such as a sports meeting or a concert. Although the preceding explanation covers the case where the camera automatically zooms in when the face is small, the camera may also zoom out automatically so that a large face becomes the given size. Similarly, in step S174, after the maximum focal length position is reached and the image frame is changed by the user, the zoom lens may be zoomed out automatically until the desired face is detected. The sequences in these cases are similar to those for zooming in, so duplicated explanation is omitted.
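The zoom loop of steps S177 through S181 can be sketched as follows. This is a minimal sketch under stated assumptions: the function name, the zoom step size, and the approximation that face area scales with the square of focal length are all illustrative, not from the patent.

```python
# Sketch of the S177-S181 loop: zoom in until the target face occupies
# the given fraction of the frame (about 10% in the text) or the lens
# reaches its maximum focal length, in which case a warning is raised
# (step S181). Face area is assumed to scale with the square of the
# focal length, a simplification for illustration.
def zoom_until_face_size(face_fraction, zoom_mm, max_zoom_mm,
                         target_fraction=0.10, step_mm=5.0):
    """Return (final_zoom_mm, warned)."""
    warned = False
    while face_fraction < target_fraction:
        if zoom_mm >= max_zoom_mm:
            warned = True          # step S181: warn on monitor and buzzer
            break
        new_zoom = min(zoom_mm + step_mm, max_zoom_mm)
        face_fraction *= (new_zoom / zoom_mm) ** 2
        zoom_mm = new_zoom
    return zoom_mm, warned
```

In the camera, each iteration would be driven by the driver 113 with the VR lens keeping the face centered; the loop here only models the termination conditions.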
  • <Setting Shooting Conditions>[0104]
  • Setting shooting conditions in step S[0105] 111 shown in FIG. 2 is explained with reference to FIGS. 6 through 8. FIG. 6 is a flow chart showing how to set the best depth of field by varying the aperture stop in response to the distance to each subject when a plurality of subjects are detected.
  • In step S[0106] 201, whether the outline of a face or the eyes of a person are detected is checked. When neither is detected, the flow proceeds to step S208, concluding that the shot image is a long-distance shot such as a landscape. In step S208, the aperture stop is set to a large value to obtain a larger depth of field. When the outline of a face or the eyes of a person are detected in step S201, the flow proceeds to step S202. In step S202, the zoom position (focal length) of the zoom lens is detected by the lens-position detector 121 and stored in the memory 1127. In step S203, the distance to the subject is calculated on the basis of the size of the face outline or the pupil distance stored in the memory 1127, and the result is stored in the memory 1127. In step S204, whether the distance calculation has been completed for all people in the shot image frame is checked. When it has not been completed, the flow returns to step S203 and the distance calculation for each remaining person is carried out, each result being stored in the memory 1127.
  • After the distance calculation for all detected people has been completed, the flow proceeds to step S[0107] 205. In step S205, the number of detected people is discriminated. When the number of people detected in step S205 is more than a given value, the shot image is judged to be a group photograph, and the flow proceeds to step S208. In step S208, the aperture stop is set to a large value to obtain a depth of field large enough to bring everyone into focus. In particular, the best depth of field for bringing everyone into focus is derived on the basis of the distance to each person detected in step S203, and the corresponding aperture value is set. When the number of people is less than the given value, the flow proceeds to step S206. In step S206, the face size of each detected person is discriminated. When the face size is more than a given value, the flow proceeds to step S207. In step S207, the shot image is judged to be a portrait photograph and the aperture value is set to a small value to obtain a smaller depth of field. On the other hand, when the face size is smaller than the given value, the shot image is judged to be a commemorative photograph with a landscape, and the flow proceeds to step S208, setting the aperture stop to a large value to obtain a larger depth of field. Here, the given value for the number of people is set in advance to three to four people.
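The decision logic of steps S201 through S208 can be condensed into a small sketch. The threshold values below are illustrative assumptions (the patent only fixes the people threshold at three to four), and "large"/"small" stand for the aperture value chosen at steps S208 and S207 respectively.

```python
# Sketch of the FIG. 6 branch logic. Thresholds are assumptions:
# the patent sets the group threshold to 3-4 people but does not
# give a numeric face-size fraction.
def choose_aperture(num_faces, largest_face_fraction,
                    group_threshold=4, portrait_face_fraction=0.10):
    """Return 'large' (large aperture value, deep depth of field) or
    'small' (small aperture value, shallow depth of field)."""
    if num_faces == 0:
        return "large"   # S208: no face detected -> landscape shot
    if num_faces >= group_threshold:
        return "large"   # S208: group photograph, keep everyone in focus
    if largest_face_fraction >= portrait_face_fraction:
        return "small"   # S207: portrait, blur the background
    return "large"       # S208: commemorative photo with a landscape
```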
  • Accordingly, when the user has set the shooting mode to landscape and a person is detected in the shooting image frame, the shooting mode can automatically be changed to a portrait mode suitable for shooting a person, with a smaller depth of field. On the other hand, when the user has set the shooting mode to portrait and no person is detected, the shooting mode can automatically be changed to a landscape mode with a larger depth of field. Note that in the distance calculation described above, the face size and the pupil distance differ between an adult and a child, and vary individually even among children. Accordingly, the calculated value is an approximate distance derived from the average face size or pupil distance of an adult or a child. The accurate focusing position is determined on the basis of the peak position of the aforementioned contrast method. [0108]
  • Next, how to set the AF area and AE area is explained with reference to FIGS. 7, 16, [0109] 17 and 18. Although FIG. 7 explains how to set an AF area, an AE area can be set by an identical procedure. In step S221 shown in FIG. 7, whether or not any person is present in a given area of the shooting image frame is checked. Here, the existence of a person is checked by whether or not the outline of a face is detected.
  • When no person is detected, the flow proceeds to step S[0110] 222. In step S222, a predetermined fixed area, such as a central area, is set as the AF area. This is because even if a person is detected, when the person is located on the periphery of the image frame, the camera concludes that the user does not put emphasis on that person, and excludes the person. FIG. 16 shows an example of the shooting area in such a case. In FIG. 16, since the person marked with a bold broken line is located outside the area shown by a narrow broken line in the image frame, the predetermined central area shown by a bold solid line is set as the AF area. When a plurality of points can be measured, other AF areas can be set in addition to the central area.
  • In step S[0111] 221, when a person is detected in the given area, the flow proceeds to step S223. In step S223, whether or not a plurality of people are detected is checked. When only one person is detected, the flow proceeds to step S228; otherwise it proceeds to step S224. In step S224, the largest of the detected faces is selected as the AF area and marked with a display indicating the AF area. FIG. 17 shows an example of a shooting image frame in this case, in which the largest detected face is set as the AF area and marked by a solid line. In step S225, whether or not a person other than the one automatically set as the AF area is to be set as the AF area is checked. When another person, marked by a broken line, is selected by operating the setting button 1164, the flow proceeds to step S226. In step S226, the AF area is moved from person to person by the setting button 1164. In this case, when the detected people have priorities, the selection proceeds in order of priority; alternatively, the selection may proceed in order of the size of the detected faces. In step S227, when the selection has been completed, the flow proceeds to step S228. In step S228, whether or not the area of the detected face is a first given value or more is checked. When the area is less than the first given value, the flow proceeds to step S229. In step S229, an AF area having a given size (here, the size of the first given value) and including the detected face is set. This is because when the area of the detected face is too small, the precision of the aforementioned AF calculation deteriorates. FIG. 18 shows an example of such a case.
  • In step S228, when the area of the detected face is larger than the first given value, the flow proceeds to step S230. In step S230, whether or not the area of the detected face is equal to or larger than a second given value is checked. When the area is the second given value or more, the digital camera concludes that the shot is a portrait photograph and the flow proceeds to step S231. In step S231, the position of the detected eye, instead of the whole face area, is set as the AF area. FIG. 19 shows an example of such a case. When the face area is less than the second given value, the flow proceeds to step S232. In step S232, the previously detected face area is set as the AF area. Here, the first and second given values are set to the best values in advance on the basis of shooting various subjects. [0112]
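  The AF-area decision in steps S221 through S232 can be condensed into a short sketch. The following Python rendering is illustrative only, not the camera's firmware: the function names, the (x, y, w, h) rectangle format, and the two thresholds standing in for the first and second given values are all assumptions.

```python
import math

def grow_to(face, min_area):
    """Pad a face rectangle (x, y, w, h) so its area reaches min_area,
    keeping it centered (step S229)."""
    x, y, w, h = face
    scale = math.sqrt(min_area / (w * h))
    nw, nh = w * scale, h * scale
    return (x - (nw - w) / 2, y - (nh - h) / 2, nw, nh)

def select_af_area(faces, frame_center_area, first_given, second_given):
    """Sketch of the AF-area selection of steps S221-S232."""
    if not faces:
        return ("fixed", frame_center_area)           # S222: central fallback
    face = max(faces, key=lambda f: f[2] * f[3])      # S224: largest face first
    area = face[2] * face[3]
    if area < first_given:                            # S228/S229: face too small,
        return ("enlarged", grow_to(face, first_given))  # pad to the minimum size
    if area >= second_given:                          # S230/S231: portrait, so
        return ("eyes", face)                         # focus on the detected eye
    return ("face", face)                             # S232: face area itself
```

For example, a face covering 16% of the frame would exceed an illustrative second given value of 10% and select eye-priority AF.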
  • In the above-described explanation, although the largest face is selected first in step S224, a person having the highest stored priority, or the priority shooting person explained in the section on setting the angle of view, may be displayed first. Alternatively, persons may be selected in order from the shortest distance by calculating the distance to each person at the same time the faces are detected. Moreover, regarding the aforementioned priority shooting person, AF movement tracking a person can be made resistant to the background by limiting the moving range of the focusing lens to a given range in the vicinity of the calculated distance. Furthermore, AF tracking of the highest-priority person becomes fast and reliable. Moreover, when a continuous shooting mode is set, as for a sports photograph or the like, the shooting distance of the first frame is determined on the basis of the peak evaluation value of the contrast method, and on and after the second frame the distance to the subject can be calculated by detecting the change in the face outline or the pupil distance relative to that of the previous frame, in combination with the zoom lens position. Accordingly, AF control capable of tracking subject movement at high speed can be realized. [0113]
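  The distance calculation mentioned above (from the face size or pupil distance together with the focal length) follows from the pinhole-camera relation: the imaged size of a feature scales as focal length divided by subject distance. A minimal sketch, assuming an average real-world inter-pupil distance of about 62 mm and all values in millimetres; the names and the search tolerance are illustrative:

```python
def subject_distance_mm(focal_length_mm, real_size_mm, imaged_size_mm):
    """Pinhole-model distance estimate from a feature of known
    real-world size and its measured size on the sensor."""
    return focal_length_mm * real_size_mm / imaged_size_mm

def af_search_range(estimated_mm, tolerance=0.3):
    """Limit the focusing-lens search to the vicinity of the estimate,
    making person AF resistant to background contrast (the tolerance
    is an assumed figure)."""
    return (estimated_mm * (1 - tolerance), estimated_mm * (1 + tolerance))
```

For example, an inter-pupil gap imaged at 1.55 mm through a 50 mm lens places the subject near 2 m, and the lens search can then be confined to a band around that estimate.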
  • The above-described sequence for setting an AF area can be applied to setting an AE area. In this case also, the first and second given values are determined to the best values in advance based upon experiments. [0114]
  • Next, changing the shooting mode is explained with reference to FIG. 8. In step S241, whether the shooting mode is set to a portrait mode suitable for shooting a portrait photograph is checked. In this portrait mode, for example, the aperture stop is set to a value near full open in order to defocus the background, white balance is set with emphasis on skin color, and the focusing mode is set to the AF mode. When a portrait mode is set, the flow proceeds to step S242. In step S242, whether a person is detected is checked. When no person is detected, the flow proceeds to step S243. In step S243, a warning is given on the monitor or by a buzzer. In step S244, the shooting mode is changed to a landscape mode suitable for shooting a long-distance subject, and the sequence is completed. In the landscape mode, in order to obtain a large depth of focus, the aperture stop is set to a large value, and the focusing mode is set to a fixed position where the depth of focus reaches infinity by driving the focusing lens. White balance is set to an ordinary shooting condition, or to a condition emphasizing the green of trees and the blue of the sky when shooting in the daytime. On the other hand, when a person is detected in step S242, the sequence is completed. When a portrait mode is not set in step S241, the flow proceeds to step S245. In step S245, whether a person is detected is checked. When no person is detected, the sequence is completed. When a person is detected, the flow proceeds to step S246. In step S246, a warning is given on the monitor or by a buzzer. In step S247, the shooting mode is changed to a portrait mode suitable for shooting a person, and the sequence is completed. [0115]
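  The mode-correction logic of steps S241 through S247 reduces to a small decision table. A minimal sketch, assuming the camera exposes the current mode as a string and the face-detection result as a boolean (both names are illustrative):

```python
def adjust_mode(current_mode, person_detected):
    """Sketch of FIG. 8: swap between portrait and landscape modes when
    the face-detection result contradicts the selected mode.  Returns
    (new_mode, warn), where warn mirrors the monitor/buzzer warning of
    steps S243 and S246."""
    if current_mode == "portrait" and not person_detected:
        return "landscape", True      # S243-S244: warn, switch to landscape
    if current_mode != "portrait" and person_detected:
        return "portrait", True       # S246-S247: warn, switch to portrait
    return current_mode, False        # settings already match the scene
```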
  • <Setting Speedlight>[0116]
  • The method of setting an emitting light quantity of a speedlight is explained with reference to FIG. 9. [0117]
  • In step S251, whether or not the luminance of the subject in a given AE area detected by the AE calculator 1121 is equal to or more than a given value is checked. Here, the subject is not limited to a person. When the luminance of the subject is less than the given value, in other words, for a dark subject, the flow proceeds to step S261. When the luminance of the subject is equal to or more than the given value, in other words, for a bright subject, the flow proceeds to step S252. In step S252, whether a person has been detected in the shooting image frame is checked. In this case also, a person is discriminated by checking whether or not a face outline is detected. When no face outline is detected, the flow proceeds to step S253. In step S253, the speedlight is set not to emit light. CPU 112 controls the speedlight not to emit light upon shooting on the basis of this setting. Accordingly, upon actual shooting, the subject is exposed with a shutter speed and an aperture value based on the calculation result of the AE calculator 1121. [0118]
  • When a face outline is detected in step S252, the flow proceeds to step S254. In step S254, the lightness of the detected face is measured. In step S255, whether the measured lightness of the face is brighter than a given value is checked. When it is brighter than the given value, the flow proceeds to step S253; otherwise, to step S256. In step S256, the distance to the detected person is calculated on the basis of the size of the detected face or the pupil distance and the focal length upon shooting, similarly to the aforementioned step S203 in FIG. 6. In step S257, whether the distance to the person is within the range of proper exposure of the speedlight is checked. When it is within the range of proper exposure, the flow proceeds to step S258. In step S258, the CPU sets a pre-emitting mode, emitting light prior to shooting in order to reduce red-eye, and the flow proceeds to step S259. In step S259, the emitting light quantity of the speedlight is set on the basis of the calculation so as to give proper exposure to the face of the detected person. Accordingly, CPU 112 sets the shutter speed and aperture value calculated by the AE calculator 1121 upon actual shooting, so the whole image frame except the person is shot with a proper exposure. As for a person darker than the surroundings, the speedlight is controlled to emit a proper light quantity calculated on the basis of the distance to the person; therefore, the person can also be shot with a proper exposure. This function is especially effective for shooting with backlight. Before firing the speedlight, CPU 112 controls the speedlight to perform the pre-emission set in step S258 in order to reduce red-eye. The pre-emission may be set to occur a plurality of times. In step S257, when the distance to the person is not within the range of proper exposure of the speedlight, the flow proceeds to step S260. In step S260, a warning that the person will not be given a proper exposure is displayed (not shown). [0119]
  • In step S251, when the subject is dark, the flow proceeds to step S261. In step S261, whether a person has been detected in the shooting image frame is checked. When the outline of a face is detected, the flow proceeds to step S262. In step S262, the same as in step S256, the distance to the detected person is calculated. In step S263, whether the distance to the person is within the range of proper exposure of the speedlight is checked. When it is not within the range of proper exposure, the flow proceeds to step S260. In step S260, a warning that the person will not be given a proper exposure is displayed. On the other hand, when the distance is within the range of proper exposure, the flow proceeds to step S264. In step S264, the CPU sets a pre-emitting mode, emitting light prior to shooting. This pre-emitting mode serves to determine the emitting light quantity of the speedlight upon actual shooting on the basis of the light reflected from the face during pre-emission, in addition to the red-eye reduction described in step S258. In step S265, the emitting light quantity of the speedlight upon actual shooting is determined on the basis of the light reflected from the face during pre-emission. As in the prior case, the pre-emission may be set to occur a plurality of times. The pre-emission for reducing red-eye and that for measuring reflected light may be carried out separately. In step S261, when the outline of a face is not detected, the flow proceeds to step S266. In step S266, the emitting light quantity of the speedlight is set on the basis of the AE calculation of the luminance of the subject. In step S258 or S264, instead of setting the pre-emitting mode for reducing red-eye, red-eye may be corrected after shooting by software that detects the pupils in the shot image. [0120]
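  The branching of FIG. 9 can be condensed into a single decision function. This is an interpretive sketch only: the boolean inputs and the returned tuple (fire, pre-emit, note) are assumed interfaces, not anything specified in the patent.

```python
def speedlight_plan(subject_bright, face_detected, face_bright, in_flash_range):
    """Sketch of the speedlight decisions in steps S251-S266.
    Returns (fire, pre_emit, note)."""
    if subject_bright:                        # S251 -> bright-subject branch
        if not face_detected or face_bright:  # S253: ambient light suffices
            return (False, False, "ambient exposure")
        if not in_flash_range:                # S257 -> S260: out of range
            return (False, False, "warn: person out of flash range")
        return (True, True, "fill flash with red-eye pre-emission")  # S258-S259
    # S261: dark-subject branch
    if not face_detected:
        return (True, False, "flash quantity from AE calculation")   # S266
    if not in_flash_range:
        return (False, False, "warn: person out of flash range")     # S260
    return (True, True, "quantity from pre-emission reflection")     # S264-S265
```

Backlit portraits fall into the fill-flash branch: the frame is bright, but the face is darker than the given value, so the speedlight fires with a quantity derived from the person's distance.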
  • <Shooting>[0121]
  • Sequences of two shooting methods different from the ordinary shooting method are explained with reference to FIGS. 10, 11, 20 and 21. FIG. 10 is a flowchart showing a sequence constructed such that when the full-press switch 1163 is turned on once, the digital camera automatically shoots at a plurality of peak positions of the focus evaluation value obtained from the AF area. Accordingly, a plurality of image frames, each focused on the subject corresponding to a respective peak position, are obtained. In step S301, when the half-press switch 1162 is turned on, the flow proceeds to step S302. In step S302, CPU 112 carries out focusing from the closest distance to infinity to calculate the evaluation value and detect peak values. In step S303, whether there is a plurality of peak values is checked. When a plurality of peak values are detected, the flow proceeds to step S304. In step S304, whether a person is detected by the feature-detection calculator 1123 is checked. When a person is detected, the distance to the detected person is calculated on the basis of the size of the detected face or the pupil distance and the focal length upon shooting, and which peak position among the plurality of peak positions corresponds to the person is discriminated. In step S305, CPU 112 selects the closest person as the first shooting position and drives the focusing lens to the peak position corresponding to that person. In step S303, when the peak position is only one, the flow proceeds to step S306. In step S306, the detected peak position (in this case, it is the closest peak position) is selected. In step S304, when a plurality of peak positions are detected but no person is detected, the flow also proceeds to step S306. In step S306, the closest position is selected as the shooting position. [0122]
  • In step S307, whether the full-press switch 1163 is turned on is checked. When the switch is not turned on, the flow proceeds to step S313. On the other hand, when the switch is turned on, the flow proceeds to step S308. In step S308, an exposure is carried out at the peak position selected in step S305 or S306, and the stored image data is read out after the exposure is over. In step S309, whether there is another peak position corresponding to another person is checked. When there is a peak position corresponding to another person, the flow proceeds to step S310. In step S310, that position is selected and the flow returns to step S308. In step S308, the second exposure is carried out and the stored image data is read out after the exposure is over. When there is no other peak position corresponding to another person, the flow proceeds to step S311. In step S311, whether the exposure for the closest peak position has been completed is checked. When it has not been completed, the flow proceeds to step S312. In step S312, the exposure is continued. When the exposure for the closest peak position has been completed, the sequence is completed. [0123]
  • In step S307, when the full-press switch 1163 is not turned on, the flow proceeds to step S313. In step S313, whether the half-press switch 1162 is turned on is checked. When the half-press switch 1162 is turned on, the flow returns to step S307, and the focusing is locked until the full-press switch 1163 is turned on. On the other hand, in step S313, when the half-press switch 1162 is not turned on, the sequence is completed. [0124]
  • An example of actual shooting is explained with reference to FIGS. 20 and 21. FIG. 20 is a drawing showing a case where a person and a flower located in front of the person are disposed in a shooting image frame. In ordinary AF shooting, since the closest object has focusing priority, a single image frame focused on the flower in front is shot in this case. FIG. 21 is a graph showing the change in evaluation value relative to the focusing lens position, assuming the whole image frame to be the AF area. In this case, two peak positions (P1 and P2) are detected in the evaluation value. In ordinary AF, when the values are a given value or more, the closest peak P2 is selected regardless of their relative sizes. When only the subject contrast is detected in this manner, it cannot be told which peak position, P1 or P2, corresponds to the person. On the other hand, by calculating the distance to the person on the basis of the face size or pupil distance, it can be told that peak position P1 corresponds to the person. Accordingly, by shooting twice, at the closest peak position P2 and at the peak position P1 corresponding to the person, image data in focus on each subject can be obtained. Alternatively, by shooting only peak positions corresponding to people, the camera can be set such that when the closest peak position does not correspond to a person, that subject is not shot. In this case, similarly to setting the angle of view, a person having shooting priority can be set in the camera in advance so that only the one peak position corresponding to that person is shot. [0125]
  • Accordingly, even if more than one person is in the AF area, an image frame reliably focused on the desired person can be obtained. When a plurality of people are present, it is possible to shoot only each person corresponding to a peak of the evaluation value having a given value or more, instead of shooting everyone. Alternatively, a maximum number of serial shots may be set. As described before, since the distance to the feature point calculated on the basis of the feature point is not accurate, by using this method to detect the peaks corresponding to people as a complement to the contrast method when a plurality of peak values exist, people can be focused precisely. [0126]
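  The focus-bracketing idea of FIG. 10, applied to the two-peak example of FIG. 21, can be sketched as follows. Peak positions and person distances are in millimetres; the relative tolerance used to match an (inaccurate) feature-point distance to a contrast peak is an assumption:

```python
def plan_exposures(peaks, person_distances, tol=0.1):
    """Sketch of the exposure plan in FIG. 10: shoot once at the
    closest contrast peak and once at every peak matching a person's
    estimated distance (within a relative tolerance)."""
    shots = []
    for p in sorted(peaks):                   # nearest subject first
        is_person = any(abs(p - d) <= tol * d for d in person_distances)
        if p == min(peaks) or is_person:
            shots.append(p)
    return shots
```

With peaks at 800 mm (flower) and 2000 mm, and a face-based estimate of roughly 1950 mm, both peaks are shot; without any detected person, only the closest peak is.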
  • Next, the method of avoiding a shot with closed eyes is explained with reference to FIG. 11. In step S321, whether the full-press switch 1163 is turned on is checked. When the full-press switch 1163 is turned on, the flow proceeds to step S322. In step S322, the pupils of the subject in the image data obtained before the full-press switch 1163 was turned on are detected by the feature-detection calculator 1123. When it is discriminated that the pupils are not detected because the subject is closing his/her eyes, the flow proceeds to step S323. In step S323, the actual exposure is postponed until the pupils are detected, and the flow returns to step S322. When the pupils are detected, the flow proceeds to step S324. In step S324, the actual exposure is carried out. In step S325, the exposed image data is read out. In step S326, whether the pupils are detected by the feature-detection calculator 1123 is checked. When the pupils are not detected, the flow proceeds to step S327. In step S327, a warning sound is given by the buzzer 123 and the flow returns to step S322. In step S326, when the pupils are detected, the sequence is completed. In this manner, whether the subject's eyes are open is checked before and after the actual shooting. Accordingly, when the subject is shot with his/her eyes closed, you can shoot again without delay. Alternatively, when the subject is shot with his/her eyes closed, the pupils in the shot image may be corrected by software after shooting instead of shooting again. As for the correction method, the open eyes of the subject are detected from a video image of the subject shot after shooting and substituted for the closed eyes. [0127]
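  The retry loop of FIG. 11 can be sketched as below, with `detect_pupils` and `expose` as hypothetical stand-ins for the feature-detection calculator 1123 and the shutter control; the `max_tries` bound is added only so the sketch terminates.

```python
def shoot_when_eyes_open(detect_pupils, expose, max_tries=10):
    """Sketch of steps S321-S327: wait for open eyes, expose, verify
    the result, and retry if the eyes closed during the exposure."""
    for _ in range(max_tries):
        if not detect_pupils():      # S322/S323: postpone while eyes are shut
            continue
        image = expose()             # S324/S325: actual exposure and readout
        if detect_pupils():          # S326: re-check after the shot
            return image
        # S327: the warning buzzer would sound here before retrying
    return None
```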
  • In the explanation with reference to FIG. 11, although the subject shot with closed eyes is detected after shooting and shot again, other defects of the shot subject can also be corrected by shooting again. For example, when the subject moves during shooting, this is discriminated by detecting image movement in the reproduced image. When the face of a person is hidden in a group photograph, the number of faces counted before and after shooting is compared; or when the outline of a face is not clear enough, the camera can be set to shoot again. As for the warning in step S327, not only a warning by a buzzer but also a voice warning explaining the particular problem is possible, such as "Someone closed their eyes.", "Camera shake!", or "Someone's face is hidden." [0128]
  • <Storing>[0129]
  • The storing procedure accompanying feature point detection is explained with reference to FIG. 12. [0130]
  • In step S401, whether the face outline of a person is detected by the feature-detection calculator 1123 is checked. When it is not detected, the storing procedure is carried out using parameters for color reproduction and outline enhancement set in advance. When a face outline is detected, the flow proceeds to step S402. In step S402, the number of detected faces is counted. When the number of faces is a given number or less, the flow proceeds to step S406; otherwise, to step S403. Here, the given number is preferably three to four. When the detected number is more than three to four, the image is discriminated as a group photograph and the flow proceeds to step S403. In step S403, parameters for color reproduction giving priority to skin color are used in the digital signal processor 106. In step S404, specific sites of the face are detected. In step S405, the outline enhancement of the face, except for the specific sites, is weakened. A specific site is, for example, an eye, a nose, a mouth, the ears, the hair, or an eyebrow. Since a low-pass filter is thereby applied to the spatial frequency characteristics outside the specific sites, wrinkles, moles, freckles, and the like can be made inconspicuous. In step S402, when the number of faces is the given value or less, the flow proceeds to step S406. In step S406, the size of the face is checked. When a plurality of faces is detected, the largest face is used for the comparison. When the area of the face is a given value or more, the image is discriminated as a portrait photograph and the flow proceeds to step S403, selecting the procedure giving priority to skin color. On the other hand, when the area of the face is less than the given value, the image is discriminated as a ceremonial photograph with landscape, and the ordinary storing procedure is carried out. [0131]
  • As described above, in step S403, the procedure using parameters giving priority to skin color instead of ordinary color parameters is applied not only to the face portion but to the whole image area. This is because areas other than skin contain little skin-color component, so even if the skin-color-priority procedure is applied to them, they are hardly affected. Accordingly, the complicated procedure of picking out only the face portion and applying the skin-color-priority parameters to that portion alone becomes unnecessary. [0132]
  • In the explanation described above, contrary to the process in step S405, the face can be expressed boldly by increasing the outline enhancement of the detected specific sites such as the eyes, nose, mouth, ears, hair, eyebrows, and the like. Since outline enhancement is not effective on a small face, it is possible to carry out the outline enhancement only on a face having a certain amount of area. Moreover, either step S403 for the skin color process or step S405 for the outline enhancement may be selected. By preparing a plurality of parameters for the skin color process and the outline enhancement and selecting among them suitably, the degree of skin color or outline enhancement can easily be brought to the best condition. Moreover, in the case of detecting age and sex, parameters for saturation and luminance, as well as parameters for hue, may be selected on the basis of the detected result. [0133]
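  The classification in steps S401 through S406 amounts to choosing a parameter set from the face count and the largest face area. A sketch with illustrative thresholds (face areas expressed as fractions of the frame; the threshold values and return labels are assumptions):

```python
def choose_storing_params(face_areas, group_threshold=3, portrait_area=0.1):
    """Sketch of FIG. 12: pick the processing parameter set from the
    number and size of the detected faces."""
    if not face_areas:
        return "ordinary"            # S401: no face, default parameters
    if len(face_areas) > group_threshold:
        return "skin+soften"         # group photograph: S403-S405
    if max(face_areas) >= portrait_area:
        return "skin+soften"         # portrait photograph: S406 -> S403
    return "ordinary"                # small faces: ceremonial shot, default
```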
  • In the explanation described above, although these procedures are carried out before storing, they may instead be carried out upon reproducing. By storing characteristic information, white balance process information, and outline enhancement process information of each person, in addition to the aforementioned feature point information and feature point data, in the image file explained in FIG. 14, the best procedures can be carried out upon reproducing. [0134]
  • In the explanation described above, the feature point detection is carried out before shooting the subject, as described in step S105 in FIG. 2. When the procedure for storing described in FIG. 12 is carried out, the feature point detection need not be carried out before shooting; it may be carried out on the shot image data after shooting. In other words, the feature point detection may be carried out only on the shot image data by locating step S105 for detecting a feature point just before step S114 for the storing procedure. Since the feature detection is then not carried out before shooting, the shooting procedure can be carried out quickly, so you can shoot without losing a shutter chance. [0135]
  • Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative devices shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents. [0136]

Claims (63)

What is claimed is:
1. A digital camera system comprising:
a detecting means that detects a given feature point from an image data;
a receiving means that receives an order from a user;
a selecting means that selects each feature point in accordance with a given order instructed by the receiving means when a plurality of feature points are detected; and
a display that displays feature point information identifying the feature point selected by the selecting means.
2. The digital camera system according to claim 1, wherein the display displays information regarding the feature point overlaid with the image data.
3. The digital camera system according to claim 1 further comprising:
a face detection means that detects the size of a face from the feature point detected by detecting means;
wherein the selecting means selects the face in descending order of the face size detected by the face detection means.
4. The digital camera system according to claim 1 further comprising:
a distance detection means that detects a distance to the feature point detected by the detecting means;
wherein the selecting means selects the feature point in ascending order of the distance detected by the distance detection means.
5. The digital camera system according to claim 1 further comprising:
a focus-area-setting means that sets a given area including the feature point detected by the detecting means as a focus area for detecting focus.
6. The digital camera system according to claim 1 further comprising:
a photometry-area-setting means that sets a given area including the feature point detected by the detecting means as a photometry area.
7. A digital camera system comprising:
a detecting means that detects a given feature point from an image data;
a display that displays the feature point detected by the detecting means;
a receiving means that receives information regarding the feature point displayed by the display; and
a memory that stores the feature point and information regarding the feature point.
8. The digital camera system according to claim 7, wherein the information regarding the feature point is specific name information.
9. The digital camera system according to claim 7, wherein the information regarding the feature point is priority information determined when a plurality of feature points are detected at a time.
10. The digital camera system according to claim 9 further comprising:
a discriminating means that discriminates the priority information; and
a selecting means that selects the feature point in order of the priority discriminated by the discriminating means.
11. The digital camera system according to claim 9 further comprising:
a distance-measuring-area-setting means that sets a distance measuring area for measuring a distance to a subject displayed on the display;
wherein the priority information is a priority among the plurality of feature points upon setting the distance measuring area by the distance-measuring-area-setting means.
12. The digital camera system according to claim 9 further comprising:
a photometry-area-setting means that sets a photometry area for measuring lightness of the subject displayed on the display;
wherein the priority information is a priority among the plurality of feature points upon setting the photometry area by the photometry-area-setting means.
13. The digital camera system according to claim 7, wherein the information regarding the feature point is at least one of color process information and outline correction process information upon storing the image data including the feature point.
14. The digital camera system according to claim 7, wherein the information regarding the feature point is at least one of color process information and outline correction process information upon reproducing the image data including the feature point.
15. The digital camera system according to claim 7 further comprising:
a discriminating means that discriminates and displays whether or not at least one of the feature point and information regarding the feature point displayed on the display is stored in the memory.
16. A digital camera system comprising:
a detecting means that detects a given feature point from an image data;
a display that displays the feature point detected by the detecting means;
an input means that inputs information regarding the feature point displayed by the display;
an instruction means that instructs to store the feature point and information regarding the feature point in connection with the image data; and
a memory that stores the feature point, information regarding the feature point, and the image data instructed by the instruction means.
17. The digital camera system according to claim 16, wherein the information regarding the feature point is positional information in the image data upon detecting the feature point from the image data.
18. A digital camera system comprising:
a memory that stores a first feature point and first specific name information regarding the first feature point;
a detecting means that detects a given feature point from an image data;
an input means that inputs second specific name information regarding a second feature point detected by the detecting means; and
a storing instruction means that instructs to additionally store in the memory the second feature point when the first specific name information and the second specific name information are identical and the first feature point and the second feature point are different.
19. A digital camera system comprising:
a first memory that stores a first feature point and specific name information regarding the first feature point;
a second memory that stores a second feature point and the specific name information in connection with an image data; and
a storing instruction means that instructs to additionally store in the first memory the second feature point when the first feature point and the second feature point are different.
20. A digital camera system comprising:
a first memory that stores a first feature point and specific name information regarding the first feature point;
a second memory that stores a second feature point and the specific name information in connection with an image data; and
a storing instruction means that instructs to additionally store in the second memory the first feature point when the first feature point and the second feature point are different.
21. A digital camera system comprising:
a display that displays an image data;
a detecting means that detects a given feature point from the image data;
a memory that stores a plurality of feature points in advance;
a checking means that checks whether or not the feature point detected by the detecting means is the same as any one of the feature points stored in the memory; and
a discriminating-display means that discriminates and displays on the display the checked result checked by the checking means.
22. The digital camera system according to claim 21, wherein the memory stores at least one of specific name information regarding the feature point and priority information for setting a priority of selection when a plurality of feature points are detected at a time; and
the discriminating-display means displays on the display information stored in the memory regarding the feature point checked as the same by the checking means.
23. A digital camera system comprising:
a detecting means that detects a given feature point from an image data; and
a control means that controls the detected feature point in connection with the image data.
24. A digital camera system comprising:
a memory that stores a given feature point in an image data in connection with information regarding the given feature point;
a detecting means that detects a feature point from an image data;
an assigning means that assigns at least one of the given feature point and information regarding the given feature point stored in the memory;
an agreement checking means that checks whether or not the feature point detected by the detecting means is the same as the given feature point;
a size checking means that checks the size of the feature point checked by the agreement checking means as the same; and
a zooming means that zooms in/out a given area including the feature point corresponding to the size of the feature point checked by the size checking means.
25. The digital camera system according to claim 24, wherein the agreement checking means includes an overlaid display means that displays a subject corresponding to the feature point checked as the same by the checking means overlaid with a marker.
26. The digital camera system according to claim 24, wherein the information regarding the feature point is specific name information for specifying the feature point.
27. The digital camera system according to claim 24, wherein the zooming means zooms in/out such that the size of the feature point checked by the size checking means becomes a given range of the size.
28. The digital camera system according to claim 24 further comprising:
a position-detecting means that detects the position of the agreed feature point in the shooting image frame;
wherein the zooming means includes a vibration correction lens that corrects vibration upon shooting and a vibration correction lens driver that drives the vibration correction lens such that the agreed feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means.
29. The digital camera system according to claim 24 further comprising:
a position-detecting means that detects the position of the agreed feature point in the shooting image frame;
wherein the zooming means includes an electronic zooming means that zooms in/out electronically such that the agreed feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means.
30. A digital camera system comprising:
a detecting means that detects a given feature point from an image data;
a position-detecting means that detects the position of the feature point in a shooting image frame;
a vibration correction lens that corrects vibration upon shooting; and
a driver that drives the vibration correction lens such that the feature point comes to a given position in the shooting image frame in response to the detected result of the position-detecting means.
31. The digital camera system according to claim 30, wherein the given position is located in the vicinity of the center of the shooting image frame.
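The framing correction of claims 30–31 can be sketched as computing the image-plane shift the vibration-correction lens driver would need so the feature point lands on a target position, the frame center by default. This is an illustrative sketch, not the patent's implementation; all names are assumptions.

```python
def correction_shift(feature_xy, frame_wh, target_xy=None):
    """Return the (dx, dy) shift that moves the detected feature point
    to the target position (frame center unless given otherwise)."""
    fx, fy = feature_xy
    w, h = frame_wh
    tx, ty = target_xy if target_xy is not None else (w / 2, h / 2)
    return (tx - fx, ty - fy)
```

A feature detected at (100, 50) in a 640x480 frame needs a (+220, +190) shift to reach the center.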
32. The digital camera system according to claim 30 further comprising:
a memory that stores the given feature point in the image data together with information regarding the given feature point;
an assigning means that assigns at least one of the given feature point and information regarding the given feature point stored in the memory; and
an agreement checking means that checks whether or not the feature point detected by the detecting means is the same as the given feature point;
wherein the driver drives the vibration correction lens such that the feature point checked by the agreement checking means as the same comes to the given position.
33. A digital camera system comprising:
a shooting instruction means that instructs to shoot a still image of a subject;
a detecting means that detects a given feature point from the still image data shot in response to the instruction of the shooting instruction means;
a discriminating means that discriminates a state of the given feature point detected by the detecting means; and
a warning means that warns in accordance with the discriminated result of the discriminating means.
34. The digital camera system according to claim 33, wherein the given feature point is a pupil portion of a person and when the discriminating means discriminates that a pupil has not been detected, the warning means gives a warning.
35. The digital camera system according to claim 33, wherein the given feature point is an eye or a face outline of a person and when the discriminating means discriminates that the eye or the face outline has a camera shake, the warning means gives a warning.
36. The digital camera system according to claim 33, wherein the detecting means detects a face of a person before shooting a still image and the given feature point is a face of a person, and when the number of the faces detected by the detecting means before shooting a still image does not coincide with that detected from the shot still image, the warning means gives a warning.
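The face-count check of claim 36 (and the parallel reshoot trigger of claim 40) reduces to comparing the number of faces found before and after capture. A hedged sketch with hypothetical names:

```python
def face_count_warning(n_before: int, n_after: int):
    """Compare face counts from the pre-shot preview and the captured
    still; return a warning message on mismatch, else None, so the
    camera can warn the user (claim 36) or trigger a reshoot (claim 40)."""
    if n_before != n_after:
        return f"face count changed: {n_before} before shot, {n_after} after"
    return None
```

A mismatch typically means someone blinked out of detection range, turned away, or left the frame between preview and capture.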
37. A digital camera system comprising:
a shooting instruction means that instructs to shoot an image of a subject;
a detecting means that detects a given feature point from the image data shot in response to the instruction of the shooting instruction means;
a discriminating means that discriminates a state of the given feature point detected by the detecting means; and
a reshooting instruction means that instructs the shooting instruction means to reshoot the subject in accordance with the discriminated result of the discriminating means.
38. The digital camera system according to claim 37, wherein the given feature point is a pupil portion of a person and when the discriminating means discriminates that a pupil has not been detected, the reshooting instruction means instructs to reshoot the subject.
39. The digital camera system according to claim 37, wherein the given feature point is an eye or a face outline of a person and when the discriminating means discriminates that the eye or the face outline has a camera shake, the reshooting instruction means instructs to reshoot the subject.
40. The digital camera system according to claim 37, wherein the detecting means detects a face of a person before shooting an image and the given feature point is a face of a person, and when the number of the faces detected by the detecting means before shooting an image does not coincide with that detected from the shot image, the reshooting instruction means instructs to reshoot the subject.
41. A digital camera system comprising:
a detecting means that detects a given feature point from an image data;
a memory that stores a plurality of color reproduction parameters for carrying out color reproduction of the whole image data;
a discriminating means that discriminates a face of a person from the feature point detected by the detecting means;
a size comparator that compares the size of the face discriminated by the discriminating means with a given value; and
a selecting means that selects a color reproduction parameter giving priority to skin color among the plurality of color reproduction parameters when the size comparator discriminates that the size of the face is the given value or more.
42. A digital camera system comprising:
a detecting means that detects a given feature point from an image data;
a memory that stores a plurality of color reproduction parameters for carrying out color reproduction of the whole image data;
a discriminating means that discriminates a face of a person from the feature point detected by the detecting means;
a number comparator that compares the number of the faces discriminated by the discriminating means with a given value; and
a selecting means that selects a color reproduction parameter giving priority to skin color among the plurality of color reproduction parameters when the number comparator discriminates that the number of the faces is the given value or more.
43. A digital camera system comprising:
an imaging device that images a subject;
an aperture stop that controls light quantity incident on the imaging device;
a detecting means that detects a given feature point from an image data output from the imaging device;
a discriminating means that discriminates the size and the number of the faces from the feature point detected by the detecting means; and
a control means that controls the aperture value of the aperture stop to become small when the discriminating means discriminates that the face size detected by the detecting means is a first given value or more and a second given value or less.
44. A digital camera system comprising:
a detecting means that detects a given feature point for discriminating a subject from an image data;
a setting means that sets a given setting condition corresponding to at least one item of photometry, measuring distance and white balance each including a plurality of setting conditions upon shooting; and
an instructing means that instructs the setting means to set different setting condition in accordance with the detected result of the detecting means.
45. The digital camera system according to claim 44 further comprising:
a discriminating means that discriminates the subject;
wherein when the setting condition is any one of a condition suitable for a landscape, a distant subject, and a night view and when the discriminating means discriminates a person as the subject, the instructing means instructs the setting means to set a setting condition suitable for shooting a person.
46. The digital camera system according to claim 44, wherein when the setting condition is suitable for shooting a person and when the detecting means does not detect a person as the subject, the instructing means instructs the setting means to set any one of a condition suitable for a landscape, a distant object and a night view.
47. The digital camera system according to claim 46 further comprising:
a warning means that gives a warning when the setting condition is suitable for shooting a person and when the detecting means does not detect a person as the subject.
48. A digital camera system comprising:
an AF means that controls focusing on the basis of a signal output from a given AF area in an image data;
a detecting means that detects a given feature point from the image data;
a face discriminating means that discriminates a face of a person from the feature point detected by the detecting means;
a position discriminating means that discriminates a position of the face discriminated by the face discriminating means; and
a setting means that sets a given second area as an AF area when the position discriminating means discriminates that the face position is outside of a given first area.
49. A digital camera system comprising:
a shooting lens that is composed of a zoom lens and a focusing lens for shooting a subject;
a position sensor that detects a position of the zoom lens;
a detecting means that detects a given feature point and information regarding the feature point from an image data shot by the shooting lens; and
a calculator that calculates a distance to the subject on the basis of information regarding the feature point detected by the detecting means and the position of the zoom lens detected by the position sensor.
50. The digital camera system according to claim 49, wherein the information regarding the feature point is at least one of the face size and the pupil distance.
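Claims 49–50 estimate subject distance from a known-size feature (face size or pupil distance) and the zoom lens position. Under a simple pinhole/thin-lens model this is distance ≈ focal length x real feature size / feature size on the sensor. The sketch below is not the patent's method; the default 62 mm average interpupillary distance is an assumption.

```python
def subject_distance_mm(focal_length_mm: float,
                        image_feature_mm: float,
                        real_feature_mm: float = 62.0) -> float:
    """Pinhole-model distance estimate: the feature's size on the
    sensor shrinks in proportion to distance, so
    distance = focal_length * real_size / sensor_size."""
    if image_feature_mm <= 0:
        raise ValueError("feature size on sensor must be positive")
    return focal_length_mm * real_feature_mm / image_feature_mm
```

For example, pupils imaged 1.55 mm apart on the sensor at a 50 mm focal length put the subject at roughly 2 m, a figure a restriction means (claim 51) could use to bound the focusing-lens search range.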
51. The digital camera system according to claim 49 further comprising:
a restriction means that restricts a moving range of the focusing lens to a given range on the basis of the distance to the subject calculated by the calculator.
52. The digital camera system according to claim 49 further comprising:
an aperture stop that controls light quantity incident on the shooting lens; and
an aperture determining means that determines an aperture value of the aperture stop such that when a plurality of faces are detected by the detecting means, a given face among the plurality of faces comes in focus on the basis of the distances to the plurality of faces calculated by the calculator.
53. A digital camera system comprising:
an illumination means that illuminates a subject upon shooting the subject;
a detecting means that detects a given feature point from an image data;
a distance calculator that calculates a distance to the feature point on the basis of the feature point detected by the detecting means; and
an illumination quantity setting means that sets an illumination light quantity of the illumination means on the basis of the distance calculated by the distance calculator.
54. The digital camera system according to claim 53 further comprising:
a plurality of photometry areas that measure luminance of the subject; and
an exposure setting means that sets an exposure condition upon shooting on the basis of an output of a given photometry area among the plurality of photometry areas.
55. The digital camera system according to claim 53 further comprising:
a size detector that detects a face size or a pupil distance from the feature point detected by the detecting means; and
a lens position sensor that detects the focal length of the zoom lens;
wherein the distance calculator calculates a distance to the feature point on the basis of the face size or the pupil distance detected by the size detector and the focal length of the zoom lens detected by the lens position sensor.
56. The digital camera system according to claim 53 further comprising:
a discriminating means that discriminates whether or not the distance is within the controllable exposure range of the illumination means on the basis of the distance to the subject calculated by the distance calculator; and
a warning means that gives a warning when the discriminating means discriminates that the distance is out of the controllable exposure range.
57. A digital camera system comprising:
a main illumination means that illuminates a subject upon shooting the subject;
an auxiliary illumination means that illuminates the subject with an auxiliary illumination in advance;
a detecting means that detects a given feature point from an image data; and
a setting means that sets an illumination light quantity of the main illumination means on the basis of a reflection light from the feature point illuminated with the auxiliary illumination by the auxiliary illumination means.
58. The digital camera system according to claim 57, wherein the feature point is a face portion of a person.
59. A digital camera system comprising:
an imaging device that shoots an image of a subject;
a memory that stores an image data;
a detecting means that detects a given feature point from the image data;
an instructing means that instructs the imaging device to shoot the subject for storing in the memory; and
a controller that controls the detecting means not to carry out detecting procedure to an image data output from the imaging device before the instructing means gives the instruction.
60. The digital camera system according to claim 59 further comprising:
a processing means that processes at least one of white balance process and outline enhancement process on the basis of the feature point detected by the detecting means in response to the instruction given by the instructing means.
61. The digital camera system according to claim 60 further comprising:
a controller that controls the memory to store the image data processed by the processing means.
62. A digital camera system comprising:
a memory that stores a given feature point together with information regarding the feature point detected from an image data;
a display that displays either the feature point or the information regarding the feature point stored in the memory; and
a deleting means that deletes from the memory at least a portion of the feature point or the information regarding the feature point displayed on the display.
63. A digital camera system comprising:
a memory that stores a given feature point together with information regarding the feature point detected from an image data;
a display that displays either the feature point or the information regarding the feature point stored in the memory; and
a controller that changes at least a portion of the feature point or the information regarding the feature point displayed on the display and stores it in the memory.
US10/814,142 2003-04-15 2004-04-01 Digital camera system Abandoned US20040207743A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/289,689 US20090066815A1 (en) 2003-04-15 2008-10-31 Digital camera system
US13/067,502 US20110242363A1 (en) 2003-04-15 2011-06-06 Digital camera system
US13/964,648 US9147106B2 (en) 2003-04-15 2013-08-12 Digital camera system

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
JP2003109883A JP2004317699A (en) 2003-04-15 2003-04-15 Digital camera
JP2003-109884 2003-04-15
JP2003109886A JP4196714B2 (en) 2003-04-15 2003-04-15 Digital camera
JP2003109885A JP2004320286A (en) 2003-04-15 2003-04-15 Digital camera
JP2003109882A JP2004320284A (en) 2003-04-15 2003-04-15 Digital camera
JP2003-109885 2003-04-15
JP2003-109886 2003-04-15
JP2003-109883 2003-04-15
JP2003109884A JP2004320285A (en) 2003-04-15 2003-04-15 Digital camera
JP2003-109882 2003-04-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/289,689 Continuation US20090066815A1 (en) 2003-04-15 2008-10-31 Digital camera system

Publications (1)

Publication Number Publication Date
US20040207743A1 true US20040207743A1 (en) 2004-10-21

Family

ID=32966792

Family Applications (4)

Application Number Title Priority Date Filing Date
US10/814,142 Abandoned US20040207743A1 (en) 2003-04-15 2004-04-01 Digital camera system
US12/289,689 Abandoned US20090066815A1 (en) 2003-04-15 2008-10-31 Digital camera system
US13/067,502 Abandoned US20110242363A1 (en) 2003-04-15 2011-06-06 Digital camera system
US13/964,648 Expired - Fee Related US9147106B2 (en) 2003-04-15 2013-08-12 Digital camera system

Country Status (3)

Country Link
US (4) US20040207743A1 (en)
EP (1) EP1471455B1 (en)
DE (1) DE602004030390D1 (en)

JP4568918B2 (en) * 2007-01-30 2010-10-27 富士フイルム株式会社 Imaging apparatus and imaging control method
JP5019939B2 (en) * 2007-04-19 2012-09-05 パナソニック株式会社 Imaging apparatus and imaging method
JP2009010777A (en) 2007-06-28 2009-01-15 Sony Corp Imaging device, photography control method, and program
JP5046788B2 (en) * 2007-08-10 2012-10-10 キヤノン株式会社 Imaging apparatus and control method thereof
JP5099488B2 (en) * 2007-08-31 2012-12-19 カシオ計算機株式会社 Imaging apparatus, face recognition method and program thereof
JP2009118009A (en) * 2007-11-02 2009-05-28 Sony Corp Imaging apparatus, method for controlling same, and program
US8022982B2 (en) 2008-01-02 2011-09-20 Sony Ericsson Mobile Communications Ab Camera system and method for operating a camera system
US20090174805A1 (en) * 2008-01-07 2009-07-09 Motorola, Inc. Digital camera focusing using stored object recognition
JP2010097167A (en) 2008-09-22 2010-04-30 Fujinon Corp Auto focus device
US20100253797A1 (en) * 2009-04-01 2010-10-07 Samsung Electronics Co., Ltd. Smart flash viewer
US20100317398A1 (en) * 2009-06-10 2010-12-16 Ola Thorn Camera system and method for flash-based photography
JP2011118834A (en) * 2009-12-07 2011-06-16 Sony Corp Apparatus and method for processing information, and program
JP5459031B2 (en) * 2010-04-13 2014-04-02 ソニー株式会社 Information processing apparatus, information processing method, and program
KR20120035042A (en) * 2010-10-04 2012-04-13 삼성전자주식회사 Digital photographing apparatus and method for controlling the same
KR20120139100A (en) * 2011-06-16 2012-12-27 삼성전자주식회사 Apparatus and method for security management using face recognition
KR101797040B1 (en) * 2011-11-28 2017-11-13 삼성전자주식회사 Digital photographing apparatus and control method thereof
CN102609957A (en) * 2012-01-16 2012-07-25 上海智觉光电科技有限公司 Method and system for detecting picture offset of camera device
EP2907298B1 (en) * 2012-10-11 2019-09-18 LG Electronics Inc. Image processing device and image processing method
KR101978219B1 (en) * 2013-03-15 2019-05-14 엘지전자 주식회사 Mobile terminal and controlling method thereof
JP6295534B2 (en) * 2013-07-29 2018-03-20 オムロン株式会社 Programmable display, control method, and program
CN105554348A (en) * 2015-12-25 2016-05-04 北京奇虎科技有限公司 Image display method and device based on video information
CN105704395B (en) * 2016-04-05 2018-09-14 广东欧珀移动通信有限公司 Photographic method and camera arrangement
CN109981964B (en) * 2017-12-27 2021-07-27 深圳市优必选科技有限公司 Robot-based shooting method and shooting device and robot
US11113507B2 (en) 2018-05-22 2021-09-07 Samsung Electronics Co., Ltd. System and method for fast object detection

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5075715A (en) * 1989-03-27 1991-12-24 Canon Kabushiki Kaisha Flash device with charging completion control feature
JP2829919B2 (en) 1989-03-27 1998-12-02 キヤノン株式会社 Flash photography system for image signal recorder
US6118484A (en) * 1992-05-22 2000-09-12 Canon Kabushiki Kaisha Imaging apparatus
JP3585625B2 (en) 1996-02-27 2004-11-04 シャープ株式会社 Image input device and image transmission device using the same
JP3469031B2 (en) 1997-02-18 2003-11-25 株式会社東芝 Face image registration apparatus and method
JP3222091B2 (en) * 1997-05-27 2001-10-22 シャープ株式会社 Image processing apparatus and medium storing image processing apparatus control program
JPH11220683A (en) 1998-01-30 1999-08-10 Canon Inc Image processor and method therefor and storage medium
JP2000089311A (en) 1998-09-08 2000-03-31 Olympus Optical Co Ltd Stroboscope system for camera
JP2000305141A (en) 1999-04-21 2000-11-02 Olympus Optical Co Ltd Electronic camera
JP4829391B2 (en) 1999-06-18 2011-12-07 キヤノン株式会社 Imaging device
JP2001167110A (en) 1999-12-08 2001-06-22 Matsushita Electric Ind Co Ltd Picture retrieving method and its device
JP2001216515A (en) * 2000-02-01 2001-08-10 Matsushita Electric Ind Co Ltd Method and device for detecting face of person
JP2001218020A (en) 2000-02-04 2001-08-10 Fuji Photo Film Co Ltd Picture processing method
JP2001330882A (en) 2000-05-24 2001-11-30 Canon Inc Camera with subject recognizing function
GB0018161D0 (en) * 2000-07-25 2000-09-13 Bio4 Limited Identity systems
JP2002051255A (en) 2000-07-31 2002-02-15 Olympus Optical Co Ltd Main object detectable camera
JP3913520B2 (en) 2000-10-20 2007-05-09 富士フイルム株式会社 Image processing system and order system
US6859552B2 (en) * 2000-11-07 2005-02-22 Minolta Co., Ltd. Image retrieving apparatus
JP2002150287A (en) 2000-11-07 2002-05-24 Minolta Co Ltd Image detector, image detection method, digital camera and printer
JP4017828B2 (en) 2001-02-14 2007-12-05 富士フイルム株式会社 Digital camera
JP2002333652A (en) 2001-05-10 2002-11-22 Oki Electric Ind Co Ltd Photographing device and reproducing apparatus
JP5011625B2 (en) 2001-09-06 2012-08-29 株式会社ニコン Imaging device
US7298412B2 (en) * 2001-09-18 2007-11-20 Ricoh Company, Limited Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program
JP2003107335A (en) 2001-09-28 2003-04-09 Ricoh Co Ltd Image pickup device, automatic focusing method, and program for making computer execute the method
JP4870887B2 (en) 2001-09-28 2012-02-08 株式会社リコー Imaging apparatus, strobe control method, and program for computer to execute the method
JP4208450B2 (en) 2001-10-15 2009-01-14 株式会社東芝 Face image monitoring system
US7082211B2 (en) * 2002-05-31 2006-07-25 Eastman Kodak Company Method and system for enhancing portrait images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5347371A (en) * 1990-11-29 1994-09-13 Hitachi, Ltd. Video camera with extraction unit for extracting specific portion of video signal
US5812193A (en) * 1992-11-07 1998-09-22 Sony Corporation Video camera system which automatically follows subject changes
US5982912A (en) * 1996-03-18 1999-11-09 Kabushiki Kaisha Toshiba Person identification apparatus and method using concentric templates and feature point candidates
US6940545B1 (en) * 2000-02-28 2005-09-06 Eastman Kodak Company Face detecting camera and method
US20020101619A1 (en) * 2001-01-31 2002-08-01 Hisayoshi Tsubaki Image recording method and system, image transmitting method, and image recording apparatus

Cited By (243)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7606397B2 (en) * 1999-12-14 2009-10-20 Canon Kabushiki Kaisha Visual language classification system
US20030002715A1 (en) * 1999-12-14 2003-01-02 Kowald Julie Rae Visual language classification system
US7912363B2 (en) * 2003-07-15 2011-03-22 Omron Corporation Object determining device and imaging apparatus
US20090180696A1 (en) * 2003-07-15 2009-07-16 Yoshihisa Minato Object determining device and imaging apparatus
US7734098B2 (en) * 2004-01-27 2010-06-08 Canon Kabushiki Kaisha Face detecting apparatus and method
US20050179780A1 (en) * 2004-01-27 2005-08-18 Canon Kabushiki Kaisha Face detecting apparatus and method
US7538814B2 (en) * 2004-02-20 2009-05-26 Fujifilm Corporation Image capturing apparatus capable of searching for an unknown explanation of a main object of an image, and method for accomplishing the same
US20050189419A1 (en) * 2004-02-20 2005-09-01 Fuji Photo Film Co., Ltd. Image capturing apparatus, image capturing method, and machine readable medium storing thereon image capturing program
US7948524B2 (en) * 2004-05-31 2011-05-24 Panasonic Electric Works Co., Ltd. Image processor and face detector using the same
US20050265626A1 (en) * 2004-05-31 2005-12-01 Matsushita Electric Works, Ltd. Image processor and face detector using the same
US7733412B2 (en) 2004-06-03 2010-06-08 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method
US20050270410A1 (en) * 2004-06-03 2005-12-08 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method
US20100201864A1 (en) * 2004-06-03 2010-08-12 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method
US8300139B2 (en) 2004-06-03 2012-10-30 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method
US20060034602A1 (en) * 2004-08-16 2006-02-16 Canon Kabushiki Kaisha Image capture apparatus and control method therefor
US7430369B2 (en) * 2004-08-16 2008-09-30 Canon Kabushiki Kaisha Image capture apparatus and control method therefor
US20060098104A1 (en) * 2004-11-11 2006-05-11 Konica Minolta Photo Imaging, Inc. Image capturing apparatus
US20070030381A1 (en) * 2005-01-18 2007-02-08 Nikon Corporation Digital camera
US7791668B2 (en) * 2005-01-18 2010-09-07 Nikon Corporation Digital camera
US8456538B2 (en) * 2005-02-07 2013-06-04 Sony Corporation Digital camera with automatic functions
US20110090357A1 (en) * 2005-02-07 2011-04-21 Rajiv Rainier Digital camera with automatic functions
US7881601B2 (en) 2005-02-15 2011-02-01 Nikon Corporation Electronic camera
US20090147107A1 (en) * 2005-02-15 2009-06-11 Nikon Corporation Electronic camera
US7672580B2 (en) 2005-03-17 2010-03-02 Canon Kabushiki Kaisha Imaging apparatus and method for controlling display device
US20060210264A1 (en) * 2005-03-17 2006-09-21 Canon Kabushiki Kaisha Imaging apparatus and method for controlling display device
US7636123B2 (en) * 2005-04-26 2009-12-22 Canon Kabushiki Kaisha Image capturing apparatus with light emission controlling mechanism and method of controlling the image capturing apparatus
US20070052821A1 (en) * 2005-04-26 2007-03-08 Hajime Fukui Image capturing apparatus and its control method
US7787665B2 (en) * 2005-07-11 2010-08-31 Fujifilm Corporation Image capturing apparatus, photograph quantity management method, and photograph quantity management program
US20070019083A1 (en) * 2005-07-11 2007-01-25 Fuji Photo Film Co., Ltd. Image capturing apparatus, photograph quantity management method, and photograph quantity management program
US20070122006A1 (en) * 2005-09-16 2007-05-31 Hidenori Taniguchi Image processing apparatus and method
US8194935B2 (en) * 2005-09-16 2012-06-05 Canon Kabushiki Kaisha Image processing apparatus and method
US20070071316A1 (en) * 2005-09-27 2007-03-29 Fuji Photo Film Co., Ltd. Image correcting method and image correcting system
US8488847B2 (en) 2005-11-25 2013-07-16 Nikon Corporation Electronic camera and image processing device
US20090135269A1 (en) * 2005-11-25 2009-05-28 Nikon Corporation Electronic Camera and Image Processing Device
US20070177765A1 (en) * 2006-01-31 2007-08-02 Canon Kabushiki Kaisha Method for displaying an identified region together with an image, program executable in a computer apparatus, and imaging apparatus
US7826639B2 (en) 2006-01-31 2010-11-02 Canon Kabushiki Kaisha Method for displaying an identified region together with an image, program executable in a computer apparatus, and imaging apparatus
US20070274592A1 (en) * 2006-02-10 2007-11-29 Seiko Epson Corporation Method of generating image provided with face object information, method of correcting color, and apparatus operable to execute the methods
US20090128640A1 (en) * 2006-02-20 2009-05-21 Matsushita Electric Industrial Co., Ltd Image device and lens barrel
US8736691B2 (en) 2006-02-20 2014-05-27 Panasonic Corporation Image pickup apparatus to control an exposure time based on motion of a detected optical image
US20090185046A1 (en) * 2006-03-23 2009-07-23 Nikon Corporation Camera and Image Processing Program
US8199242B2 (en) * 2006-03-23 2012-06-12 Nikon Corporation Camera and image processing program
US20070237513A1 (en) * 2006-03-27 2007-10-11 Fujifilm Corporation Photographing method and photographing apparatus
US8009976B2 (en) * 2006-04-04 2011-08-30 Nikon Corporation Camera having face detection and with automatic focus control using a defocus amount
US20070248345A1 (en) * 2006-04-04 2007-10-25 Nikon Corporation Camera
US8306280B2 (en) 2006-04-11 2012-11-06 Nikon Corporation Electronic camera and image processing apparatus
US20080008361A1 (en) * 2006-04-11 2008-01-10 Nikon Corporation Electronic camera and image processing apparatus
US9485415B2 (en) 2006-04-11 2016-11-01 Nikon Corporation Electronic camera and image processing apparatus
US20070248281A1 (en) * 2006-04-25 2007-10-25 Motorola, Inc. Perspective improvement for image and video applications
US20110169986A1 (en) * 2006-04-25 2011-07-14 Motorola, Inc. Perspective improvement for image and video applications
US7742624B2 (en) * 2006-04-25 2010-06-22 Motorola, Inc. Perspective improvement for image and video applications
US20100208943A1 (en) * 2006-04-25 2010-08-19 Motorola, Inc. Perspective improvement for image and video applications
US8494224B2 (en) 2006-04-25 2013-07-23 Motorola Mobility Llc Perspective improvement for image and video applications
US20070263997A1 (en) * 2006-05-10 2007-11-15 Canon Kabushiki Kaisha Focus adjustment method, focus adjustment apparatus, and control method thereof
US20100226636A1 (en) * 2006-05-10 2010-09-09 Canon Kabushiki Kaisha Focus adjustment method, focus adjustment apparatus, and control method thereof
US8145049B2 (en) * 2006-05-10 2012-03-27 Canon Kabushiki Kaisha Focus adjustment method, focus adjustment apparatus, and control method thereof
US7945152B2 (en) 2006-05-10 2011-05-17 Canon Kabushiki Kaisha Focus adjustment method, focus adjustment apparatus, and control method thereof
US20070269196A1 (en) * 2006-05-16 2007-11-22 Fujifilm Corporation System for and method of taking image
US7668451B2 (en) * 2006-05-16 2010-02-23 Fujifilm Corporation System for and method of taking image
US7751701B2 (en) * 2006-06-02 2010-07-06 Fujifilm Corporation Imaging device performing focus adjustment based on human face information
US20070280662A1 (en) * 2006-06-02 2007-12-06 Fujifilm Corporation Imaging device performing focus adjustment based on human face information
US7714927B2 (en) * 2006-06-09 2010-05-11 Sony Corporation Imaging apparatus, imaging apparatus control method and computer program product, with eye blink detection features
US20070285528A1 (en) * 2006-06-09 2007-12-13 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US20100194912A1 (en) * 2006-06-09 2010-08-05 Sony Corporation Imaging apparatus, imaging apparatus control method, and computer program product, with eye blink detection features
US8253848B2 (en) * 2006-06-09 2012-08-28 Sony Corporation Imaging apparatus, imaging apparatus control method, and computer program product, with eye blink detection features
US8649574B2 (en) * 2006-06-09 2014-02-11 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US20070296848A1 (en) * 2006-06-09 2007-12-27 Sony Corporation Imaging apparatus, imaging apparatus control method, and computer program
US20110134273A1 (en) * 2006-06-09 2011-06-09 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US20080002865A1 (en) * 2006-06-19 2008-01-03 Tetsuya Toyoda Electronic imaging apparatus and system for specifying an individual
US8180116B2 (en) * 2006-06-19 2012-05-15 Olympus Imaging Corp. Image pickup apparatus and system for specifying an individual
US20070296825A1 (en) * 2006-06-26 2007-12-27 Sony Computer Entertainment Inc. Image Processing Device, Image Processing System, Computer Control Method, and Information Storage Medium
US7944476B2 (en) * 2006-06-26 2011-05-17 Sony Computer Entertainment Inc. Image processing device, image processing system, computer control method, and information storage medium
US20080002028A1 (en) * 2006-06-30 2008-01-03 Casio Computer Co., Ltd. Imaging apparatus and computer readable recording medium
US8284256B2 (en) * 2006-06-30 2012-10-09 Casio Computer Co., Ltd. Imaging apparatus and computer readable recording medium
US8300112B2 (en) 2006-07-03 2012-10-30 Canon Kabushiki Kaisha Image processing apparatus and control method therefor
US20080037975A1 (en) * 2006-08-08 2008-02-14 Kenichi Nakajima Imaging device
US7761000B2 (en) * 2006-08-08 2010-07-20 Eastman Kodak Company Imaging device
US8538252B2 (en) * 2006-09-04 2013-09-17 Nikon Corporation Camera
US20090284645A1 (en) * 2006-09-04 2009-11-19 Nikon Corporation Camera
US20090262213A1 (en) * 2006-09-13 2009-10-22 Yoshikazu Watanabe Imaging device and subject detection method
US8830346B2 (en) 2006-09-13 2014-09-09 Ricoh Company, Ltd. Imaging device and subject detection method
US8358350B2 (en) * 2006-09-13 2013-01-22 Ricoh Company, Ltd. Imaging device and subject detection method
US8035730B2 (en) * 2006-10-13 2011-10-11 Fujifilm Corporation Digital camera and flash emission control method based on face detection
US20080088733A1 (en) * 2006-10-13 2008-04-17 Fujifilm Corporation Digital camera and flash emission control method
US7978262B2 (en) * 2006-10-13 2011-07-12 Fujifilm Corporation Digital camera and flash emission control method
US20090190002A1 (en) * 2006-10-13 2009-07-30 Fujifilm Corporation Digital camera and flash emission control method
US20080129860A1 (en) * 2006-11-02 2008-06-05 Kenji Arakawa Digital camera
US8059186B2 (en) 2006-12-19 2011-11-15 Hoya Corporation Camera having a focus adjusting system and a face recognition function
US20080143866A1 (en) * 2006-12-19 2008-06-19 Pentax Corporation Camera having a focus adjusting system and a face recognition function
US20080170132A1 (en) * 2007-01-17 2008-07-17 Samsung Techwin Co., Ltd. Digital photographing apparatus, method for controlling the same, and a recording medium for storing a program to implement the method
CN101227560B (en) * 2007-01-17 2012-09-05 三星电子株式会社 Digital photographing apparatus, method for controlling the same
US8063943B2 (en) * 2007-01-17 2011-11-22 Samsung Electronics Co., Ltd. Digital photographing apparatus, method for controlling the same, and a recording medium for storing a program to implement the method
US20080181460A1 (en) * 2007-01-31 2008-07-31 Masaya Tamaru Imaging apparatus and imaging method
US20080193116A1 (en) * 2007-02-09 2008-08-14 Canon Kabushiki Kaisha Focusing device and image-capturing device provided with the same
US7747159B2 (en) 2007-02-09 2010-06-29 Canon Kabushiki Kaisha Focusing device and image-capturing device provided with the same
US20080204564A1 (en) * 2007-02-22 2008-08-28 Matsushita Electric Industrial Co., Ltd. Image pickup apparatus and lens barrel
US8228391B2 (en) 2007-02-22 2012-07-24 Panasonic Corporation Image pickup apparatus and lens barrel
US20100066847A1 (en) * 2007-02-22 2010-03-18 Nikon Corporation Imaging apparatus and program
US20080204565A1 (en) * 2007-02-22 2008-08-28 Matsushita Electric Industrial Co., Ltd. Image pickup apparatus and lens barrel
US8411155B2 (en) 2007-02-22 2013-04-02 Panasonic Corporation Image pickup apparatus and lens barrel
US8254771B2 (en) 2007-02-26 2012-08-28 Fujifilm Corporation Image taking apparatus for group photographing
US20080205869A1 (en) * 2007-02-26 2008-08-28 Syuji Nose Image taking apparatus
US7720369B2 (en) * 2007-02-26 2010-05-18 Fujifilm Corporation Image taking apparatus
US8346073B2 (en) 2007-02-26 2013-01-01 Fujifilm Corporation Image taking apparatus
US8023009B2 (en) 2007-03-27 2011-09-20 Fujifilm Corporation Imaging apparatus for correcting optical distortion and wide-angle distortion
US8144235B2 (en) * 2007-03-30 2012-03-27 Sanyo Electric Co., Ltd. Image pickup device and image pickup method
US20080246852A1 (en) * 2007-03-30 2008-10-09 Sanyo Electric Co., Ltd. Image pickup device and image pickup method
EP1986421A3 (en) * 2007-04-04 2008-12-03 Nikon Corporation Digital camera
US8253847B2 (en) * 2007-04-04 2012-08-28 Nikon Corporation Digital camera having an automatic focus
US20110141344A1 (en) * 2007-04-04 2011-06-16 Nikon Corporation Digital camera
US20080284900A1 (en) * 2007-04-04 2008-11-20 Nikon Corporation Digital camera
CN101281290A (en) * 2007-04-04 2008-10-08 株式会社尼康 Digital camera
US8780227B2 (en) * 2007-04-23 2014-07-15 Sharp Kabushiki Kaisha Image pick-up device, control method, recording medium, and portable terminal providing optimization of an image pick-up condition
US20100103286A1 (en) * 2007-04-23 2010-04-29 Hirokatsu Akiyama Image pick-up device, computer readable recording medium including recorded program for control of the device, and control method
US8106999B2 (en) * 2007-05-10 2012-01-31 Fujifilm Corporation Focus adjustment apparatus, method, and program
US20080278587A1 (en) * 2007-05-10 2008-11-13 Katsutoshi Izawa Focus adjustment apparatus, method, and program
US20080278589A1 (en) * 2007-05-11 2008-11-13 Karl Ola Thorn Methods for identifying a target subject to automatically focus a digital camera and related systems, and computer program products
US20080284901A1 (en) * 2007-05-18 2008-11-20 Takeshi Misawa Automatic focus adjusting apparatus and automatic focus adjusting method, and image pickup apparatus and image pickup method
US20110279701A1 (en) * 2007-05-18 2011-11-17 Casio Computer Co., Ltd. Image pickup device, face detection method, and computer-readable recording medium
US8004599B2 (en) 2007-05-18 2011-08-23 Fujifilm Corporation Automatic focus adjusting apparatus and automatic focus adjusting method, and image pickup apparatus and image pickup method
US7920785B2 (en) * 2007-05-21 2011-04-05 Sony Ericsson Mobile Communications Ab System and method of photography using desirable feature recognition
US7664389B2 (en) * 2007-05-21 2010-02-16 Sony Ericsson Mobile Communications Ab System and method of photography using desirable feature recognition
US20100039527A1 (en) * 2007-05-21 2010-02-18 Sony Ericsson Mobile Communications Ab System and method of photography using desirable feature recognition
US20080292299A1 (en) * 2007-05-21 2008-11-27 Martin Kretz System and method of photography using desirable feature recognition
US8769377B2 (en) 2007-06-15 2014-07-01 Spansion Llc Error correction scheme for non-volatile memory
US8169530B2 (en) * 2007-06-19 2012-05-01 Pentax Ricoh Imaging Company Camera having an autofocusing system
US20080316325A1 (en) * 2007-06-19 2008-12-25 Hoya Corporation Camera having an autofocusing system
US8928761B2 (en) 2007-07-09 2015-01-06 Panasonic Corporation Digital camera
US20100194897A1 (en) * 2007-07-09 2010-08-05 Panasonic Corporation Digital single-lens reflex camera
US8237803B2 (en) 2007-07-09 2012-08-07 Panasonic Corporation Digital single-lens reflex camera including control section that performs camera shake correction and motion detecting section that detects speed of subject
US20090028394A1 (en) * 2007-07-24 2009-01-29 Nikon Corporation Imaging device, image detecting method and focus adjusting method
US20090028390A1 (en) * 2007-07-24 2009-01-29 Seiko Epson Corporation Image Processing for Estimating Subject Distance
US8542941B2 (en) * 2007-07-24 2013-09-24 Nikon Corporation Imaging device, image detecting method and focus adjusting method
EP2037320A1 (en) * 2007-09-14 2009-03-18 Sony Corporation Imaging apparatus, imaging apparatus control method, and computer program
US8068164B2 (en) 2007-09-14 2011-11-29 Sony Corporation Face recognition auto focus apparatus for a moving image
US20090095880A1 (en) * 2007-10-16 2009-04-16 Nec Electronics Corporation Autofocus control circuit, autofocus control method and image pickup apparatus
US8571402B2 (en) * 2007-10-31 2013-10-29 Nikon Corporation Image tracking device, imaging device, image tracking method, and imaging method
US20090109321A1 (en) * 2007-10-31 2009-04-30 Nikon Corporation Image tracking device, imaging device, image tracking method, and imaging method
US20090201389A1 (en) * 2008-02-11 2009-08-13 Samsung Techwin Co., Ltd. Digital image processing apparatus and method of controlling the same
US8194177B2 (en) * 2008-02-11 2012-06-05 Samsung Electronics Co., Ltd. Digital image processing apparatus and method to photograph an image with subject eyes open
US20090213263A1 (en) * 2008-02-25 2009-08-27 Nikon Corporation Imaging system and method for detecting target object
US8488052B2 (en) * 2008-02-25 2013-07-16 Nikon Corporation Imaging system and method for detecting target object
US20120262591A1 (en) * 2008-03-03 2012-10-18 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20090219406A1 (en) * 2008-03-03 2009-09-03 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US8531547B2 (en) * 2008-03-03 2013-09-10 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US8223216B2 (en) * 2008-03-03 2012-07-17 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20090226158A1 (en) * 2008-03-07 2009-09-10 Omron Corporation Measurement device and method, imaging device, and program
US7881599B2 (en) * 2008-03-07 2011-02-01 Omron Corporation Measurement device and method, imaging device, and program
US20110007187A1 (en) * 2008-03-10 2011-01-13 Sanyo Electric Co., Ltd. Imaging Device And Image Playback Device
US20090256925A1 (en) * 2008-03-19 2009-10-15 Sony Corporation Composition determination device, composition determination method, and program
US8265474B2 (en) 2008-03-19 2012-09-11 Fujinon Corporation Autofocus system
US8810673B2 (en) * 2008-03-19 2014-08-19 Sony Corporation Composition determination device, composition determination method, and program
US20090238549A1 (en) * 2008-03-19 2009-09-24 Atsushi Kanayama Autofocus system
US20090268080A1 (en) * 2008-04-25 2009-10-29 Samsung Techwin Co., Ltd. Bracketing apparatus and method for use in digital image processor
US8189090B2 (en) 2008-08-06 2012-05-29 Canon Kabushiki Kaisha Image pick-up apparatus and control method therefor
US20100033593A1 (en) * 2008-08-06 2010-02-11 Canon Kabushiki Kaisha Image pick-up apparatus and control method therefor
US20100067892A1 (en) * 2008-09-16 2010-03-18 Canon Kabushiki Kaisha Imaging apparatus and control method
US8195042B2 (en) * 2008-09-16 2012-06-05 Canon Kabushiki Kaisha Imaging apparatus and control method
USRE45900E1 (en) * 2008-09-16 2016-02-23 Canon Kabushiki Kaisha Imaging apparatus and control method
US20100086292A1 (en) * 2008-10-08 2010-04-08 Samsung Electro- Mechanics Co., Ltd. Device and method for automatically controlling continuous auto focus
US20100118151A1 (en) * 2008-11-12 2010-05-13 Yoshijiro Takano Autofocus system
EP2187625A1 (en) 2008-11-12 2010-05-19 Fujinon Corporation Autofocus system
EP2187623A1 (en) 2008-11-14 2010-05-19 Fujinon Corporation Autofocus system
US20100123790A1 (en) * 2008-11-14 2010-05-20 Yoshijiro Takano Autofocus system
US20100123782A1 (en) * 2008-11-18 2010-05-20 Kunio Yata Autofocus system
EP2187624A1 (en) 2008-11-18 2010-05-19 Fujinon Corporation Autofocus system
US8570391B2 (en) * 2008-12-18 2013-10-29 Olympus Imaging Corp. Imaging apparatus and image processing method used in imaging device
US20100157084A1 (en) * 2008-12-18 2010-06-24 Olympus Imaging Corp. Imaging apparatus and image processing method used in imaging device
US20100189426A1 (en) * 2009-01-23 2010-07-29 Inventec Appliances (Shanghai) Co., Ltd. System and method for human machine interface for zoom content on display
CN101841650A (en) * 2009-03-11 2010-09-22 卡西欧计算机株式会社 The camera head that is suitable for personage's photography
US20100245612A1 (en) * 2009-03-25 2010-09-30 Takeshi Ohashi Image processing device, image processing method, and program
US8675098B2 (en) * 2009-03-25 2014-03-18 Sony Corporation Image processing device, image processing method, and program
US9131149B2 (en) 2009-03-25 2015-09-08 Sony Corporation Information processing device, information processing method, and program
US8305453B2 (en) * 2009-05-20 2012-11-06 Pentax Ricoh Imaging Company, Ltd. Imaging apparatus and HDRI method
US20100295962A1 (en) * 2009-05-20 2010-11-25 Hoya Corporation Imaging apparatus and hdri method
US20110002678A1 (en) * 2009-07-03 2011-01-06 Fujifilm Corporation Photographing control device and method, as well as program
US8107805B2 (en) * 2009-07-03 2012-01-31 Fujifilm Corporation Photographing control device and method, as well as program
US8350954B2 (en) 2009-07-13 2013-01-08 Canon Kabushiki Kaisha Image processing apparatus and image processing method with deconvolution processing for image blur correction
US20110019936A1 (en) * 2009-07-22 2011-01-27 Satish Kumar Bhrugumalla Imaging system with multiframe scaler
US8897602B2 (en) * 2009-07-22 2014-11-25 Aptina Imaging Corporation Imaging system with multiframe scaler
EP2293542A2 (en) 2009-07-22 2011-03-09 Fujifilm Corporation Autofocus frame auto-tracking system
US20110019066A1 (en) * 2009-07-22 2011-01-27 Yoshijiro Takano Af frame auto-tracking system
US20120120249A1 (en) * 2009-07-29 2012-05-17 Sony Corporation Control apparatus, imaging system, control method, and program
US9596415B2 (en) * 2009-07-29 2017-03-14 Sony Corporation Control apparatus, imaging system, control method, and program for changing a composition of an image
US8860875B2 (en) 2009-12-01 2014-10-14 Panasonic Intellectual Property Corporation Of America Imaging device for recognition and method of controlling the same
US8467579B2 (en) * 2009-12-08 2013-06-18 Electronics And Telecommunications Research Institute Apparatus and method for estimating distance and position of object based on image of single camera
US20110135157A1 (en) * 2009-12-08 2011-06-09 Electronics And Telecommunications Research Institute Apparatus and method for estimating distance and position of object based on image of single camera
US8224172B2 (en) 2009-12-25 2012-07-17 Fujifilm Corporation Autofocus system
US20110158624A1 (en) * 2009-12-25 2011-06-30 Fujifilm Corporation Autofocus system
US8441568B2 (en) * 2010-01-06 2013-05-14 Panasonic Corporation Image apparatus prioritizing displayed attribute information for recognized people
US20110267529A1 (en) * 2010-01-06 2011-11-03 Kazumasa Tabata Imaging apparatus
US9621779B2 (en) * 2010-03-30 2017-04-11 Panasonic Intellectual Property Management Co., Ltd. Face recognition device and method that update feature amounts at different frequencies based on estimated distance
US20120062769A1 (en) * 2010-03-30 2012-03-15 Sony Corporation Image processing device and method, and program
US9253388B2 (en) * 2010-03-30 2016-02-02 Sony Corporation Image processing device and method, and program
US20130010095A1 (en) * 2010-03-30 2013-01-10 Panasonic Corporation Face recognition device and face recognition method
US8890993B2 (en) 2010-12-10 2014-11-18 Olympus Imaging Corp. Imaging device and AF control method
US20120274562A1 (en) * 2011-04-28 2012-11-01 Nokia Corporation Method, Apparatus and Computer Program Product for Displaying Media Content
US9158374B2 (en) * 2011-04-28 2015-10-13 Nokia Technologies Oy Method, apparatus and computer program product for displaying media content
US9517029B2 (en) * 2011-06-06 2016-12-13 Sony Corporation Image processing device, image processing method, image processing system, program, and recording medium
US20120307032A1 (en) * 2011-06-06 2012-12-06 Sony Corporation Image processing device, image processing method, image processing system, program, and recording medium
US20130107026A1 (en) * 2011-11-01 2013-05-02 Samsung Electro-Mechanics Co., Ltd. Remote control apparatus and gesture recognition method for remote control apparatus
US20130201366A1 (en) * 2012-02-03 2013-08-08 Sony Corporation Image processing apparatus, image processing method, and program
US9618613B2 (en) * 2012-02-22 2017-04-11 Hitachi Kokusai Electric Inc. Radio communication apparatus, radio communication method, and radio communication system
US20150062335A1 (en) * 2012-02-22 2015-03-05 Hitachi Kokusai Electric Inc. Radio communication apparatus, radio communication method, and radio communication system
US8854481B2 (en) * 2012-05-17 2014-10-07 Honeywell International Inc. Image stabilization devices, methods, and systems
US20130308001A1 (en) * 2012-05-17 2013-11-21 Honeywell International Inc. Image stabilization devices, methods, and systems
US20140192217A1 (en) * 2013-01-04 2014-07-10 Samsung Electronics Co., Ltd. Apparatus and method for photographing portrait in portable terminal having camera
KR102092571B1 (en) 2013-01-04 2020-04-14 삼성전자 주식회사 Apparatus and method for taking a picture of portrait portable terminal having a camera and camera device
CN103916592A (en) * 2013-01-04 2014-07-09 三星电子株式会社 Apparatus and method for photographing portrait in portable terminal having camera
KR20140089132A (en) * 2013-01-04 2014-07-14 삼성전자주식회사 Apparatus and method for taking a picture of portrait portable terminal having a camera and camera device
US9282239B2 (en) * 2013-01-04 2016-03-08 Samsung Electronics Co., Ltd. Apparatus and method for photographing portrait in portable terminal having camera
US20140226858A1 (en) * 2013-02-14 2014-08-14 Samsung Electronics Co., Ltd. Method of tracking object using camera and camera system for object tracking
US9402025B2 (en) * 2013-05-21 2016-07-26 Canon Kabushiki Kaisha Detection apparatus, method for detecting feature point and storage medium
US20140347513A1 (en) * 2013-05-21 2014-11-27 Canon Kabushiki Kaisha Detection apparatus, method for detecting feature point and storage medium
US20150009356A1 (en) * 2013-07-02 2015-01-08 Canon Kabushiki Kaisha Image processing apparatus, image processing method, image processing program, and imaging apparatus
US9560265B2 (en) * 2013-07-02 2017-01-31 Canon Kabushiki Kaisha Image processing apparatus, image processing method, image processing program, and imaging apparatus
CN104284085A (en) * 2013-07-08 2015-01-14 Lg电子株式会社 Electronic device and method of operating the same
US20150049195A1 (en) * 2013-08-15 2015-02-19 Tomoko Ishigaki Image processing unit, object detection method, object detection program, and vehicle control system
EP3140982B1 (en) 2014-05-05 2018-08-29 Philips Lighting Holding B.V. Device with a camera and a screen
CN104038701A (en) * 2014-07-01 2014-09-10 宇龙计算机通信科技(深圳)有限公司 Photographing method and system for a terminal, and terminal
US20160142619A1 (en) * 2014-11-14 2016-05-19 Canon Kabushiki Kaisha Focus control apparatus, image capturing apparatus, method for controlling focus control apparatus and storage medium
US9843716B2 (en) * 2015-03-03 2017-12-12 Xiaomi Inc. Method and apparatus for adjusting photography parameters
US20160261792A1 (en) * 2015-03-03 2016-09-08 Xiaomi Inc. Method and apparatus for adjusting photography parameters
CN104980681A (en) * 2015-06-15 2015-10-14 联想(北京)有限公司 Video acquisition method and video acquisition device
US20170374284A1 (en) * 2016-06-27 2017-12-28 Lenovo (Beijing) Co., Ltd. Camera operation mode control
US10432860B2 (en) * 2016-06-27 2019-10-01 Lenovo (Beijing) Co., Ltd. Camera operation mode control
DE102017113116B4 (en) 2016-06-27 2023-07-27 Lenovo (Beijing) Co., Ltd. Camera Operation Mode Control
CN106060391A (en) * 2016-06-27 2016-10-26 联想(北京)有限公司 Method and device for processing working mode of camera, and electronic equipment
US10303933B2 (en) * 2016-07-29 2019-05-28 Samsung Electronics Co., Ltd. Apparatus and method for processing a beauty effect
US10579870B2 (en) 2016-12-20 2020-03-03 Samsung Electronics Co., Ltd. Operating method for function of iris recognition and electronic device supporting the same
CN107302689A (en) * 2017-08-24 2017-10-27 无锡北斗星通信息科技有限公司 Adaptive switching system for a bullet-type camera
CN107343152A (en) * 2017-08-25 2017-11-10 无锡北斗星通信息科技有限公司 Real-time passenger image data correction system
CN108470321A (en) * 2018-02-27 2018-08-31 北京小米移动软件有限公司 Photo beautification processing method and device, and storage medium
CN108520202A (en) * 2018-03-15 2018-09-11 华南理工大学 Adversarially robust image feature extraction method based on variational spherical projection
CN109309864A (en) * 2018-08-08 2019-02-05 周群 Intelligent nationality information identification system
US10686991B2 (en) 2018-09-11 2020-06-16 Sony Corporation Techniques for improving photograph quality for fouled lens or sensor situations
CN109190574A (en) * 2018-09-13 2019-01-11 郑州云海信息技术有限公司 Big-data-based hairstyle recommendation method, device, terminal, and storage medium
US10887525B2 (en) * 2019-03-05 2021-01-05 Sony Corporation Delivery of notifications for feedback over visual quality of images
US11394869B2 (en) * 2019-09-06 2022-07-19 Panasonic Intellectual Property Management Co., Ltd. Imaging device with focusing operation based on subject and predetermined region
CN110933304A (en) * 2019-11-27 2020-03-27 RealMe重庆移动通信有限公司 Method and device for determining a region to be blurred, storage medium, and terminal device
US20210235005A1 (en) * 2020-01-28 2021-07-29 Panasonic I-Pro Sensing Solutions Co., Ltd. Monitoring camera, camera parameter determining method and storage medium
US11665322B2 (en) * 2020-01-28 2023-05-30 i-PRO Co., Ltd. Monitoring camera, camera parameter determining method and storage medium
US11716536B2 (en) * 2020-04-22 2023-08-01 Canon Kabushiki Kaisha Control device, image capturing apparatus, and control method for detecting obstacle
US20220309992A1 (en) * 2021-03-24 2022-09-29 Canon Kabushiki Kaisha Head-mounted display, display control system, information processor, and method for controlling head-mounted display

Also Published As

Publication number Publication date
EP1471455A3 (en) 2005-05-25
US20110242363A1 (en) 2011-10-06
DE602004030390D1 (en) 2011-01-20
US20090066815A1 (en) 2009-03-12
EP1471455B1 (en) 2010-12-08
US20130329029A1 (en) 2013-12-12
EP1471455A2 (en) 2004-10-27
US9147106B2 (en) 2015-09-29

Similar Documents

Publication Publication Date Title
US9147106B2 (en) Digital camera system
JP4196714B2 (en) Digital camera
JP2004317699A (en) Digital camera
JP2004320286A (en) Digital camera
EP1522952B1 (en) Digital camera
JP4182117B2 (en) IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
KR101539043B1 (en) Image photography apparatus and method for proposing composition based person
US8004599B2 (en) Automatic focus adjusting apparatus and automatic focus adjusting method, and image pickup apparatus and image pickup method
US7248300B1 (en) Camera and method of photographing good image
KR100924685B1 (en) Imaging apparatus and control method thereof
US20060034602A1 (en) Image capture apparatus and control method therefor
JP2005130468A (en) Imaging apparatus and its control method
JP2008193411A (en) Photographing controller, photographing device, and photographing control method
JP2010074735A (en) Operation input apparatus, operation input method, and program
JP5171468B2 (en) IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
JP2004320285A (en) Digital camera
JP4853707B2 (en) Imaging apparatus and program thereof
US7796163B2 (en) System for and method of taking image based on objective body in a taken image
JP2010014783A (en) Photographing device
JP2007067934A (en) Imaging apparatus and its control method
JP2004320284A (en) Digital camera
JP5109864B2 (en) Electronic still camera
JP2003289468A (en) Imaging apparatus
JP2007005966A (en) System for calculation amount of exposure, and control method and control program thereof
JP2006074498A (en) Image processor and imaging apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: NIKON TECHNOLOGIES, INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NOZAKI, HORITAKE;HIBINO, HIDEO;KOBAYASHI, TOSHIAKI;AND OTHERS;REEL/FRAME:015172/0501;SIGNING DATES FROM 20040311 TO 20040325

Owner name: NIKON CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NOZAKI, HORITAKE;HIBINO, HIDEO;KOBAYASHI, TOSHIAKI;AND OTHERS;REEL/FRAME:015172/0501;SIGNING DATES FROM 20040311 TO 20040325

AS Assignment

Owner name: NIKON CORPORATION, JAPAN

Free format text: CO. TO CO. ASSIGNMENT;ASSIGNOR:NIKON TECHNOLOGIES INC.;REEL/FRAME:017626/0875

Effective date: 20060511

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION