US20010050718A1 - Focus control apparatus and method for use with a video camera or the like


Info

Publication number
US20010050718A1
US20010050718A1 (application US08/913,209)
Authority
US
United States
Prior art keywords
focus
estimation value
lens
estimation
control apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US08/913,209
Other versions
US6362852B2 (en)
Inventor
Yujiro Ito
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Assigned to SONY CORPORATION. Assignment of assignors interest (see document for details). Assignors: ITO, YUJIRO
Publication of US20010050718A1
Application granted
Publication of US6362852B2
Anticipated expiration
Status: Expired - Fee Related


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • the present invention relates to a focus control apparatus and a focus control method suitable for use in a video camera or the like.
  • a consumer video camera has employed an autofocus apparatus for automatically focusing a lens on an object.
  • if the estimation value includes any noise, the estimation value may fail to become maximum when the focus lens passes the focus position and, conversely, may become maximum when the focus lens is not located at the focus position. This may lead to misjudgment of the focus position.
  • a sampling point used for obtaining the estimation value does not always coincide with the position where the estimation value becomes maximum. Moreover, since the focus lens is reciprocated plural times in the vicinity of this maximum estimation value position, i.e., the focus position, in order to determine the focus position, it takes a considerable time to determine the focus position.
  • a focus control apparatus is a focus control apparatus having an imaging means for imaging an object through a focus lens to output an electric signal corresponding to the object, including an extracting means for extracting a high-frequency component of the electric signal output from the imaging means, an estimation value generating means for generating an estimation value indicative of a focus state of the object based on the high-frequency component output from the extracting means, a storage means for storing the plurality of estimation values changed as the focus lens is moved in response to a focus lens position in order to obtain a just focus position, a selecting means for selecting a plurality of estimation values to be used for calculation of the just focus position from the estimation values stored in the storage means, and a control means for calculating the just focus position based on the plurality of estimation values selected by the selecting means and lens positions corresponding to the plurality of selected estimation values.
  • a focus control method is a focus control method of moving a focus lens of a video camera to a just focus position, including a) a step of extracting a high-frequency component of an electric signal output from an imaging means, b) a step of generating an estimation value indicative of a focus state of an object based on the high-frequency component extracted in the step a), c) a step of storing the plurality of estimation values changed as the focus lens is moved in response to a focus lens position, d) a step of selecting a plurality of estimation values to be used for calculation of the just focus position from the estimation values stored in the step c), e) a step of calculating the just focus position based on the plurality of estimation values selected in the step d) and lens positions corresponding to the plurality of selected estimation values, and f) a step of moving the focus lens to the just focus position.
  • since the just focus position is calculated based on a plurality of selected estimation values and the lens positions corresponding to the plurality of selected estimation values, even if the estimation value includes noise, or constantly includes noise when the luminance is low, it is possible to carry out the focus control with high accuracy. Moreover, even if the focus lens passes the just focus position only once, it is possible to calculate the just focus position. Therefore, it is possible to determine the just focus position at correspondingly high speed.
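The patent does not reproduce the calculation itself at this point. As a hedged illustration only, one standard way to compute a peak position from a plurality of (lens position, estimation value) pairs is a three-point parabolic fit around the largest stored value; the function and variable names below are illustrative, not the patent's.

```python
# Minimal sketch (not the patent's formula): estimate the lens position of
# peak contrast by fitting a parabola through the three samples that
# bracket the maximum estimation value and taking its vertex.

def just_focus_position(positions, values):
    """positions, values: parallel lists of lens positions and estimation values."""
    k = max(range(len(values)), key=lambda i: values[i])  # index of maximum
    if k == 0 or k == len(values) - 1:
        return positions[k]  # peak at an end point: nothing to interpolate
    x0, x1, x2 = positions[k - 1], positions[k], positions[k + 1]
    e0, e1, e2 = values[k - 1], values[k], values[k + 1]
    # Vertex of the parabola through (x0,e0), (x1,e1), (x2,e2).
    denom = (x1 - x0) * (e1 - e2) - (x1 - x2) * (e1 - e0)
    if denom == 0:
        return x1  # degenerate (flat) case: keep the sampled maximum
    num = (x1 - x0) ** 2 * (e1 - e2) - (x1 - x2) ** 2 * (e1 - e0)
    return x1 - 0.5 * num / denom
```

A fit of this kind is one way to obtain a just focus position that need not coincide with any sampling point, which is the problem the prior-art discussion above raises.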
  • FIG. 1 is a diagram showing an entire arrangement of an imaging apparatus formed of a video camera;
  • FIG. 2 is a diagram showing a specific arrangement of an autofocus controlling circuit 34 ;
  • FIG. 3 is a diagram showing a specific arrangement of a horizontal-direction estimation value generating circuit 62 ;
  • FIG. 4 is a diagram showing a specific arrangement of a vertical-direction estimation value generating circuit 63 ;
  • FIG. 5 is a table showing a filter coefficient α and a window size set for respective circuits of the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 ;
  • FIG. 6 is a diagram used to explain the respective window sizes;
  • FIG. 7 is a table showing weight data W set for respective estimation values E;
  • FIG. 8 is a diagram showing divided areas of a picture presented by an area detecting circuit 38 ;
  • FIG. 9 is a diagram showing a specific circuit arrangement of the area detecting circuit 38 ;
  • FIGS. 10 to 15 are flowcharts used to explain an autofocus operation;
  • FIG. 16 is a flowchart used to explain an operation of determining a target object;
  • FIG. 17 is a diagram showing a movement of a lens when a lens movement direction is determined in order to focus the lens on an object;
  • FIGS. 18A and 18B are diagrams showing a state that a non-target object lies in a window;
  • FIG. 19 is a diagram showing fluctuation of estimation values stored in a RAM 66 when the lens movement direction is determined;
  • FIG. 20 is a table showing data stored in the RAM 66 during the autofocus operation;
  • FIG. 21 is a graph showing change of the estimation values obtained upon the autofocus operation;
  • FIG. 22 is a diagram showing a state of image pickup of an object A and an object B having the same color;
  • FIG. 23 is a table of information about an object; and
  • FIG. 24 is a log table of a target object.
  • the video camera apparatus includes a lens block 1 for optically condensing incident light to the front of an imaging device, an imaging block 2 for converting light incident from the lens block into RGB electric video signals obtained by an image pickup, a signal processing block 3 for subjecting the video signals to a predetermined signal processing, and a CPU 4 for controlling the lens block 1 , the imaging block 2 , and the signal processing block 3 .
  • the lens block 1 is detachably provided in a video camera apparatus body.
  • This lens block 1 includes, as optical elements, a zoom lens 11 for continuously changing a focal length, by moving along an optical axis, without changing the position of an image point, to thereby zoom an image of an object, a focus lens 12 for bringing the object into focus, and an iris mechanism 13 for adjusting an amount of light incident on the front of the imaging device by changing its aperture area.
  • the lens block 1 further includes a position detecting sensor 11 a for detecting an optical-axis direction position of the zooming lens 11 , a drive motor 11 b for moving the zooming lens 11 in the optical-axis direction, a zoom-lens drive circuit 11 c for supplying a drive control signal to the drive motor 11 b, a position detecting sensor 12 a for detecting an optical-axis direction position of the focus lens 12 , a drive motor 12 b for moving the focus lens 12 in the optical-axis direction, a focus-lens drive circuit 12 c for supplying a drive control signal to the drive motor 12 b, a position detecting sensor 13 a for detecting an aperture position of the iris mechanism 13 , a drive motor 13 b for opening and closing the iris mechanism 13 , and an iris mechanism drive circuit 13 c for supplying a drive control signal to the drive motor 13 b.
  • Detection signals from the position detecting sensors 11 a, 12 a, 13 a are always supplied to the CPU 4 .
  • the zooming lens drive circuit 11 c, the focus lens drive circuit 12 c, and the iris mechanism drive circuit 13 c are electrically connected to the CPU 4 so as to be supplied with control signals from the latter.
  • the lens block 1 has an EEPROM 15 for storing a focal length data of the zoom lens 11 and an aperture ratio data thereof, a focal length data of the focus lens 12 and an aperture ratio thereof, and a manufacturer name of the lens block 1 and a serial number thereof.
  • the EEPROM 15 is connected to the CPU 4 so that the respective data stored therein are read out therefrom based on a read command from the CPU 4 .
  • the imaging block 2 has a color separation prism 21 for color-separating incident light from the lens block 1 into three primary-color lights of red (R), green (G) and blue (B) and imaging devices 22 R, 22 G and 22 B for converting lights of R component, G component and B component, which are obtained by separating light at the color separation prism 21 and are focused on image surfaces thereof, into electric video signals (R), (G), (B) to output the signals.
  • Each of these imaging devices 22 R, 22 G and 22 B is formed of a CCD (Charge Coupled Device), for example.
  • the imaging block 2 has preamplifiers 23 R, 23 G, 23 B for respectively amplifying levels of the video signals (R), (G), (B) output from the imaging devices 22 R, 22 G, 22 B and for carrying out correlated double sampling for removing a reset noise.
  • the imaging block 2 further has a timing signal generating circuit 24 for generating a VD signal, an HD signal and a CLK signal each serving as a basic clock used for operation of each of circuits in the video camera apparatus based on a reference clock from a reference clock circuit provided therein, and a CCD drive circuit 25 for supplying a drive clock to the imaging device 22 R, the imaging device 22 G and the imaging device 22 B based on the VD signal, the HD signal and the CLK signal supplied from the timing signal generating circuit.
  • the VD signal is a clock signal representing one vertical period.
  • the HD signal is a clock signal representing one horizontal period.
  • the CLK signal is a clock signal representing one pixel clock.
  • the timing clock formed of these VD, HD and CLK signals is supplied to each of the circuits in the video camera apparatus through the CPU 4 , though not shown.
  • the signal processing block 3 is a block provided in the video camera apparatus for subjecting the video signals (R), (G), (B) supplied from the imaging block 2 to a predetermined signal processing.
  • the signal processing block 3 has A/D converter circuits 31 R, 31 G, 31 B for respectively converting the analog video signals (R), (G), (B) into digital video signals (R), (G), (B), gain control circuits 32 R, 32 G, 32 B for respectively controlling gains of the digital video signals (R), (G), (B) based on a gain control signal from the CPU 4 , and signal processing circuits 33 R, 33 G, 33 B for respectively subjecting the digital video signals (R), (G), (B) to a predetermined signal processing.
  • the signal processing circuits 33 R, 33 G, 33 B have knee circuits 331 R, 331 G, 331 B for compressing the video signals at a certain level or more, γ correction circuits 332 R, 332 G, 332 B for correcting the levels of the video signals in accordance with a preset γ curve, and B/W clip circuits 333 R, 333 G, 333 B for clipping a black level smaller than a predetermined level and a white level larger than a predetermined level.
  • Each of the signal processing circuits 33 R, 33 G, 33 B may have a known black γ correction circuit, a known contour emphasizing circuit, a known linear matrix circuit and so on, other than the knee circuit, the γ correction circuit, and the B/W clip circuit.
  • the signal processing block 3 has an encoder 37 for receiving the video signals (R), (G), (B) output from the signal processing circuits 33 R, 33 G, 33 B and for generating a luminance signal (Y) and color-difference signals (R−Y), (B−Y) from the video signals (R), (G), (B).
  • the signal processing block 3 further has a focus control circuit 34 for receiving the video signals (R), (G), (B) respectively output from the gain control circuit 32 R, 32 G, 32 B and for generating an estimation data E and a direction data Dr both used for controlling the focus based on the video signals (R), (G), (B), an iris control circuit 35 for receiving the video signals (R), (G), (B) respectively output from the signal processing circuits 33 R, 33 G, 33 B and for controlling the iris based on the levels of the received signals so that an amount of light incident on each of the imaging devices 22 R, 22 G, 22 B should be a proper amount of light, and a white balance controlling circuit 36 for receiving the video signals (R), (G), (B) respectively output from the signal processing circuits 33 R, 33 G, 33 B and for carrying out white balance control based on the levels of the received signals.
  • the iris control circuit 35 has an NAM circuit for selecting a signal having a maximum level from the supplied video signals (R), (G), (B), and an integrating circuit for dividing the selected signal into portions corresponding to areas of the picture and totally integrating the video signal in each of the areas of the picture.
  • the iris control circuit 35 considers every illumination condition of an object such as back lighting, front lighting, flat lighting, spot lighting or the like to generate an iris control signal used for controlling the iris, and supplies this iris control signal to the CPU 4 .
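As a hedged illustration of the iris front end just described, the sketch below keeps the per-sample maximum of R, G, B (the NAM, or non-additive mix, stage) and then integrates that signal per picture area. The 4 x 4 area grid is an assumption for illustration; the patent does not state how many areas the iris control circuit uses.

```python
import numpy as np

def iris_area_integrals(r, g, b, rows=4, cols=4):
    """r, g, b: 2-D arrays of equal shape; returns per-area integrals of the NAM signal."""
    nam = np.maximum(np.maximum(r, g), b)  # NAM stage: keep the largest level per sample
    h, w = nam.shape
    # Integrating stage: total the NAM signal over each picture area.
    return np.array([
        [nam[i * h // rows:(i + 1) * h // rows,
             j * w // cols:(j + 1) * w // cols].sum() for j in range(cols)]
        for i in range(rows)
    ])
```

Per-area integrals of this kind give the CPU the information needed to distinguish illumination conditions such as back lighting or spot lighting, as the next bullet describes.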
  • the CPU 4 supplies a control signal to the iris drive circuit 13 c based on the iris control signal.
  • the CPU 4 supplies a gain control signal to the gain controlling circuits 32 R, 32 G, 32 B based on the white balance control signal supplied from the white balance controlling circuit 36 .
  • the signal processing block 3 further has an area detecting circuit 38 and a frame memory 39 .
  • the area detecting circuit 38 is a circuit for receiving a luminance signal (Y) and color difference signals (R−Y), (B−Y) from the encoder 37 , and for, based on the luminance signal and the color difference signals, selecting an area, where a pixel data having the same color as that of an object designated as a target object exists, from areas set in the whole picture.
  • The area detecting circuit 38 will be described in detail later on.
  • the frame memory 39 is a memory for receiving the luminance signal (Y) and the color difference signals (R−Y), (B−Y) from the encoder 37 , and for temporarily storing the luminance signal and the color difference signals.
  • the frame memory 39 is formed of three memories: a frame memory for the luminance signal (Y), a frame memory for the color difference signal (R−Y), and a frame memory for the color difference signal (B−Y).
  • the luminance signal and the color difference signals stored in the respective frame memories are read out therefrom based on read addresses supplied from the CPU 4 , and the read luminance and color difference signals are supplied to the CPU 4 .
  • the focus control circuit 34 will hereinafter be described in detail with reference to FIG. 2.
  • the luminance-signal generating circuit 61 is a circuit for generating a luminance signal from the supplied video signals R, G, B. In order to determine whether the lens is in focus or out of focus, it is sufficient to determine whether the contrast is high or low. Since a change of the contrast has no relation to a change of the level of a color difference signal, it is possible to determine whether the contrast is high or low by detecting only the change of the level of the luminance signal.
  • the luminance-signal generating circuit 61 can generate the luminance signal Y by subjecting the supplied video signals R, G, B to a known calculation such as Y = 0.30R + 0.59G + 0.11B.
  • the horizontal-direction estimation value generating circuit 62 is a circuit for generating a horizontal-direction estimation value.
  • the horizontal-direction estimation value is a data indicating how much the level of the luminance signal is changed when the luminance signal is sampled in the horizontal direction, i.e., a data indicating how much contrast there is in the horizontal direction.
  • the horizontal-direction estimation value generating circuit 62 has a first horizontal-direction estimation value generating circuit 62 a for generating a first horizontal-direction estimation value E 1 , a second horizontal-direction estimation value generating circuit 62 b for generating a second horizontal-direction estimation value E 2 , a third horizontal-direction estimation value generating circuit 62 c for generating a third horizontal-direction estimation value E 3 , a fourth horizontal-direction estimation value generating circuit 62 d for generating a fourth horizontal-direction estimation value E 4 , a fifth horizontal-direction estimation value generating circuit 62 e for generating a fifth horizontal-direction estimation value E 5 , a sixth horizontal-direction estimation value generating circuit 62 f for generating a sixth horizontal-direction estimation value E 6 , a seventh horizontal-direction estimation value generating circuit 62 g for generating a seventh horizontal-direction estimation value E 7 , an eighth horizontal-direction estimation value generating circuit 62 h for generating an eighth horizontal-direction estimation value E 8 , and so on up to a twelfth horizontal-direction estimation value generating circuit 62 l for generating a twelfth horizontal-direction estimation value E 12 .
  • the first horizontal-direction estimation value generating circuit 62 a of the horizontal-direction estimation value generating circuit 62 has a high-pass filter 621 for extracting a high-frequency component of the luminance signal, an absolute-value calculating circuit 622 for converting the extracted high-frequency component into an absolute value to thereby obtain a data having positive values only, a horizontal-direction integrating circuit 623 for integrating an absolute-value data in the horizontal direction to thereby cumulatively add the data of the high-frequency component in the horizontal direction, a vertical-direction integrating circuit 624 for integrating the data integrated in the vertical direction, and a window pulse generating circuit 625 for supplying an enable signal used for allowing integrating operations of the horizontal-direction integrating circuit 623 and the vertical-direction integrating circuit 624 .
  • the high-pass filter 621 is formed of a one-dimension finite impulse response filter for filtering the high-frequency component of the luminance signal in response to one sample clock CLK from the window pulse generating circuit 625 .
  • the high-pass filter 621 has a cutoff frequency characteristic determined by the filter coefficient α.
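As a rough, hedged sketch of the pipeline of the first horizontal-direction estimation value generating circuit 62 a (high-pass filter 621, absolute-value circuit 622, then windowed integration by 623 and 624), the code below processes a 2-D luminance array. The exact FIR taps are not given in the text, so the single-coefficient high-pass is an assumption, and NumPy sums stand in for the hardware integrators.

```python
import numpy as np

def horizontal_estimation_value(luma, alpha=0.5, win_w=192, win_h=60):
    """Contrast measure over a window centered on the picture (cf. window W1)."""
    h, w = luma.shape
    top, left = (h - win_h) // 2, (w - win_w) // 2
    window = luma[top:top + win_h, left:left + win_w].astype(float)
    # Assumed one-dimensional FIR high-pass along each line:
    # y[n] = x[n] - alpha * x[n-1]. The patent specifies only a one-dimension
    # FIR filter whose cutoff is set by the coefficient alpha.
    hp = window[:, 1:] - alpha * window[:, :-1]
    # Absolute value (622), horizontal integration (623), and vertical
    # integration (624) collapse to a single sum over the enabled window.
    return float(np.abs(hp).sum())
```

With a larger alpha the filter passes only higher frequencies, matching the later discussion of high- and low-cutoff filter coefficients.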
  • the window pulse generating circuit 625 has a plurality of counters operated based on the clock signal VD representing one vertical period, on the clock signal HD representing one horizontal period and on the clock signal CLK representing one sample clock.
  • the window pulse generating circuit 625 supplies the enable signal to the horizontal-direction integrating circuit 623 at every one sample clock signal CLK and supplies the enable signal to the vertical-direction integrating circuit 624 at every one horizontal period, each based on the counted value of the counters.
  • the window pulse generating circuit 625 of the first horizontal-direction estimation value generating circuit 62 a has a counter whose initial count value is set so that a size of a window should be that of 192 pixels × 60 pixels.
  • the first horizontal-direction estimation value E 1 output from the horizontal-direction estimation value generating circuit 62 indicates data obtained by integrating all the high-frequency components in the window of 192 pixels × 60 pixels.
  • the counter is connected to the CPU 4 so as to be supplied with an offset value from the latter.
  • the initial count value is a count value set so that a window center should coincide with a center of a picture obtained by image pickup.
  • the offset value supplied from the CPU 4 means a count value to be added to the initial count value. Therefore, when the offset value is supplied from the CPU 4 , the count value of the counter is changed and consequently a center position of the window is changed.
  • each of the second to twelfth horizontal-direction estimation value generating circuits 62 b to 62 l has a high-pass filter 621 , an absolute-value calculating circuit 622 , a horizontal-direction integrating circuit 623 , a vertical-direction integrating circuit 624 , and a window pulse generating circuit 625 .
  • a different point among the respective circuits lies in that the respective circuits ( 62 a to 62 l ) have different combinations of their filter coefficients α and their window sizes.
  • the estimation values E 1 to E 12 generated by the respective circuits are different from one another.
  • FIG. 5A shows the filter coefficients α and the window sizes which are respectively set for the first horizontal-direction estimation value generating circuit 62 a to the twelfth horizontal-direction estimation value generating circuit 62 l .
  • the reason for setting such different filter coefficients will hereinafter be described.
  • the high-pass filter having a high cutoff frequency is very suitable for use when the lens is substantially in a just focus state (which means a state that a lens is in focus).
  • the reason for this is that the estimation value is changed at a considerably large rate as compared with a lens movement in the vicinity of the just focus point. Since the estimation value is changed at a small rate when the lens is considerably out of focus, it is not too much to say that the high-pass filter having the high cutoff frequency is not suitable for use when the lens is considerably out of focus.
  • the high-pass filter having a low cutoff frequency is suitable for use when the lens is considerably out of focus.
  • the reason for this is that when the lens is moved while being considerably out of focus, the estimation value is changed at a considerably large rate. Since the estimation value is changed at a small rate when the lens is moved in the substantial just focus state, then it is not too much to say that the high-pass filter having the low cutoff frequency is not suitable for use in the substantial just focus state.
  • each of the high-pass filter having the high cutoff frequency and the high-pass filter having the low cutoff frequency has both an advantage and a disadvantage, and it is difficult to determine which of the high-pass filters is more suitable. Therefore, preferably, a plurality of high-pass filters having different filter coefficients are used to generate a plurality of estimation values so that the most proper estimation value can be selected.
  • the horizontal-direction estimation value generating circuit 62 has plural kinds of preset windows shown in FIG. 6A.
  • a window W 1 is a window of 192 pixels × 60 pixels.
  • a window W 2 is a window of 132 pixels × 60 pixels.
  • a window W 3 is a window of 384 pixels × 120 pixels.
  • a window W 4 is a window of 264 pixels × 120 pixels.
  • a window W 5 is a window of 768 pixels × 240 pixels.
  • a window W 6 is a window of 528 pixels × 240 pixels.
  • the vertical-direction estimation value generating circuit 63 is a circuit for generating an estimation value in the vertical direction.
  • the estimation value in the vertical direction is a data indicating how much the level of the luminance signal is changed when the luminance signal is sampled in the vertical direction, i.e., a data indicating how much contrast there is in the vertical direction.
  • the vertical-direction estimation value generating circuit 63 has a first vertical-direction estimation value generating circuit 63 a for generating a first vertical-direction estimation value E 13 , a second vertical-direction estimation value generating circuit 63 b for generating a second vertical-direction estimation value E 14 , a third vertical-direction estimation value generating circuit 63 c for generating a third vertical-direction estimation value E 15 , a fourth vertical-direction estimation value generating circuit 63 d for generating a fourth vertical-direction estimation value E 16 , a fifth vertical-direction estimation value generating circuit 63 e for generating a fifth vertical-direction estimation value E 17 , a sixth vertical-direction estimation value generating circuit 63 f for generating a sixth vertical-direction estimation value E 18 , a seventh vertical-direction estimation value generating circuit 63 g for generating a seventh vertical-direction estimation value E 19 , an eighth vertical-direction estimation value generating circuit 63 h for generating an eighth vertical-direction estimation value E 20 , and so on up to a twelfth vertical-direction estimation value generating circuit 63 l for generating a twelfth vertical-direction estimation value E 24 .
  • the first vertical-direction estimation value generating circuit 63 a of the vertical-direction estimation value generating circuit 63 has a horizontal-direction mean value generating circuit 631 for generating a mean value data of levels of luminance signals in the horizontal direction, a high-pass filter 632 for extracting a high-frequency component of the mean-value data of the luminance signals, an absolute-value calculating circuit 633 for converting the extracted high-frequency component into an absolute value to thereby obtain a data having positive values only, a vertical-direction integrating circuit 634 for integrating an absolute-value data in the vertical direction to thereby cumulatively add the data of the high-frequency component in the vertical direction, and a window pulse generating circuit 635 for supplying an enable signal used for allowing integrating operations of the horizontal-direction mean value generating circuit 631 and the vertical-direction integrating circuit 634 .
  • the high-pass filter 632 is formed of a one-dimension finite impulse response filter for filtering the high-frequency component of the luminance signal in response to one horizontal period signal HD from the window pulse generating circuit 635 .
  • the high-pass filter 632 has the same cutoff frequency characteristic as that of the high-pass filter 621 of the first horizontal-direction estimation value generating circuit 62 a.
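The vertical pipeline differs from the horizontal one in that the mean of each line is taken first and the high-pass filter then runs down the column of line means, once per horizontal period. A hedged sketch under the same assumptions as the horizontal example (assumed single-coefficient FIR, NumPy in place of the hardware):

```python
import numpy as np

def vertical_estimation_value(luma, alpha=0.5, win_w=120, win_h=80):
    """Vertical contrast measure over a centered window (cf. window W7)."""
    h, w = luma.shape
    top, left = (h - win_h) // 2, (w - win_w) // 2
    window = luma[top:top + win_h, left:left + win_w].astype(float)
    line_means = window.mean(axis=1)               # circuit 631: mean of each line
    hp = line_means[1:] - alpha * line_means[:-1]  # assumed FIR high-pass (632)
    return float(np.abs(hp).sum())                 # 633 and 634: |.|, then integrate
```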
  • the window pulse generating circuit 635 has a plurality of counters operated based on the clock signal VD representing one vertical period, the clock signal HD representing one horizontal period and the clock signal CLK representing one sample clock supplied from the CPU 4 .
  • the window pulse generating circuit 635 supplies the enable signal to the horizontal-direction mean value generating circuit 631 based on the counted value of the counter at every one sample clock signal CLK and supplies the enable signal to the vertical-direction integrating circuit 634 at every one horizontal period.
  • the window pulse generating circuit 635 of the first vertical-direction estimation value generating circuit 63 a has a counter whose initial count value is set so that a size of a window should be that of 120 pixels ⁇ 80 pixels.
  • the first vertical-direction estimation value E 13 output from the vertical-direction estimation value generating circuit 63 indicates data obtained by integrating all the high-frequency components in the window of 120 pixels ⁇ 80 pixels.
  • the counter is connected to the CPU 4 so as to be supplied with an offset value from the latter.
  • the initial count value is a count value set so that a window center should coincide with a center of a picture obtained by image pickup.
  • the offset value supplied from the CPU 4 means a count value to be added to the initial count value. Therefore, when the offset value is supplied from the CPU 4 , the count value of the counter in the window pulse generating circuit 635 is changed and consequently a center position of the window is changed.
  • each of the second to twelfth vertical-direction estimation value generating circuits 63 b to 63 l has a horizontal-direction mean value generating circuit 631 , a high-pass filter 632 , an absolute-value calculating circuit 633 , a vertical-direction integrating circuit 634 , and a window pulse generating circuit 635 .
  • a different point among the respective circuits lies in that the respective circuits have different combinations of their filter coefficients α and their window sizes, similarly to those of the horizontal-direction estimation value generating circuit 62 .
  • the estimation values E 13 to E 24 generated by the respective circuits are different from one another.
  • FIG. 5B shows the filter coefficients α and the window sizes both of which are respectively set for the first vertical-direction estimation value generating circuit 63 a to the twelfth vertical-direction estimation value generating circuit 63 l .
  • the vertical-direction estimation value generating circuit 63 has plural kinds of preset windows shown in FIG. 6B.
  • centers of these plurality of windows coincide with the center of the picture obtained by image pickup.
  • a window W 7 is a window of 120 pixels × 80 pixels.
  • a window W 8 is a window of 120 pixels × 60 pixels.
  • a window W 9 is a window of 240 pixels × 160 pixels.
  • a window W 10 is a window of 240 pixels × 120 pixels.
  • a window W 11 is a window of 480 pixels × 320 pixels.
  • a window W 12 is a window of 480 pixels × 240 pixels.
  • since the focus control circuit has twenty-four estimation value generating circuits for generating twenty-four kinds of estimation values obtained from combinations of twelve window sizes and two filter coefficients, it is possible to obtain plural kinds of estimation values. Moreover, since the estimation value is totally obtained based on the respective estimation values, it is possible to improve the accuracy of the estimation value.
  • the microcomputer 64 will be described with reference to FIGS. 2 and 7.
  • the microcomputer 64 is a circuit for receiving twenty-four estimation values E 1 to E 24 generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 and for calculating, based on these twenty-four estimation values, the direction in which the lens is to be moved and a lens position where the estimation value is maximum, i.e., a lens position where the lens is in focus.
  • estimation values E 1 (X 2 ) to E 24 (X 2 ) generated when the lens is moved to the position X 2 are stored in a RAM 66 . Since the RAM 66 stores data in a ring buffer system, the previously stored estimation values E 1 (X 1 ) to E 24 (X 1 ) are not erased until the RAM becomes full of stored data. These estimation values E i are stored in the RAM 66 upon designation of a pointer by the microcomputer 64 .
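A minimal sketch of that ring-buffer discipline, assuming a fixed capacity (the patent does not state the depth of the RAM 66); each entry pairs a lens position with the twenty-four estimation values sampled there, and the oldest entry is overwritten only once the buffer is full.

```python
from collections import deque

RAM_DEPTH = 16  # assumed capacity; illustrative only

ram66 = deque(maxlen=RAM_DEPTH)  # oldest entry is dropped when full

def store_sample(lens_position, estimation_values):
    """Store (position, E1..E24) under a pointer-like ring-buffer discipline."""
    ram66.append((lens_position, list(estimation_values)))
```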
  • the area detecting circuit 38 will be described with reference to FIGS. 8 to 9 .
  • the area detecting circuit 38 is a circuit for dividing a picture into one hundred and twenty-eight areas and for determining in which of the divided areas a pixel data having the same color as that of the object set as the target object exists.
  • the area detecting circuit 38 has a logic circuit for judging all the pixel data supplied from the encoder. Specifically, as shown in FIG. 8, when one area is set so as to have a size of 48 pixels × 30 pixels, a picture is divided into 16 portions in the horizontal direction and into 8 portions in the vertical direction and consequently one hundred and twenty-eight areas can be defined in one picture. As shown in FIG. 8, the one hundred and twenty-eight areas are defined as area numbers A 000 to A 127 in that order.
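As an illustration of that numbering, the mapping from a pixel coordinate to its area number can be sketched as below. The 768 x 240 picture size is inferred from the largest window W5 and the 16 x 8 grid of 48 x 30 pixel areas; treat it as an assumption of this sketch.

```python
def area_number(x, y):
    """Map a pixel coordinate (x, y) to its area number A000..A127, raster order."""
    return (y // 30) * 16 + (x // 48)

# e.g. area_number(0, 0) == 0 (A000) and area_number(767, 239) == 127 (A127)
```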
  • the encoder 37 supplies the luminance signal Y and the color difference signals (R−Y), (B−Y) of every pixel data to the area detecting circuit 38 .
  • the area detecting circuit 38 is supplied from the CPU 4 with an upper limit luminance signal Y U and a lower limit luminance signal Y L , with an upper limit color difference signal (R 0 −Y 0 ) U and a lower limit color difference signal (R 0 −Y 0 ) L , and with an upper limit color difference signal (B 0 −Y 0 ) U and a lower limit color difference signal (B 0 −Y 0 ) L .
  • the area detecting circuit 38 is also supplied from the CPU 4 with multiplication coefficients α 3 , α 4 , α 5 , α 6 to be multiplied with the luminance signal Y supplied from the encoder 37 .
  • the upper and lower limit signals supplied from the CPU 4 are signals obtained based on the luminance signal and the color difference signals obtained by image pickup of the object which a cameraman determines as a target object. Once set, the values of the signals will never be changed.
  • the upper and lower limit luminance signals Y U , Y L are set so as to have values approximate to that of the luminance signal Y 0 of the target object, so that it can be determined whether the luminance signal Y of a certain pixel data has a value between the lower limit luminance signal Y L and the upper limit luminance signal Y U .
  • similarly, the upper and lower limit color difference signals (R 0 −Y 0 ) U , (R 0 −Y 0 ) L are set so as to have values approximate to that of the color difference signal (R 0 −Y 0 ) of the target object, and the upper and lower limit color difference signals (B 0 −Y 0 ) U , (B 0 −Y 0 ) L are set so as to have values approximate to that of the color difference signal (B 0 −Y 0 ) of the target object, so that it can be determined whether the color difference signals (R−Y), (B−Y) of a certain pixel data have values between the respective lower and upper limit color difference signals.
  • the area detecting circuit 38 has a multiplier circuit 71 a for multiplying the multiplication coefficient α 4 with the luminance signal Y supplied from the encoder 37 , a multiplier circuit 71 b for multiplying the multiplication coefficient α 3 with the luminance signal Y, a multiplier circuit 71 c for multiplying the multiplication coefficient α 6 with the luminance signal Y, and a multiplier circuit 71 d for multiplying the multiplication coefficient α 5 with the luminance signal Y.
  • the area detecting circuit 38 further has a switch circuit 72 a for selecting either of a multiplied output signal from the multiplier circuit 71 a and the upper limit color difference signal (R 0 −Y 0 ) U , a switch circuit 72 b for selecting either of a multiplied output signal from the multiplier circuit 71 b and the lower limit color difference signal (R 0 −Y 0 ) L , a switch circuit 72 c for selecting either of a multiplied output signal from the multiplier circuit 71 c and the upper limit color difference signal (B 0 −Y 0 ) U , and a switch circuit 72 d for selecting either of a multiplied output signal from the multiplier circuit 71 d and the lower limit color difference signal (B 0 −Y 0 ) L .
  • the area detecting circuit 38 has a comparator 73 a supplied with the luminance signal Y and the upper limit luminance signal Y U , a comparator 73 b supplied with the luminance signal Y and the lower limit luminance signal Y L , a comparator 73 c supplied with the color difference signal (R−Y) and the output signal of the switch circuit 72 a , a comparator 73 d supplied with the color difference signal (R−Y) and the output signal of the switch circuit 72 b , a comparator 73 e supplied with the color difference signal (B−Y) and the output signal of the switch circuit 72 c , and a comparator 73 f supplied with the color difference signal (B−Y) and the output signal of the switch circuit 72 d .
  • the area detecting circuit 38 also has a gate circuit 74 a supplied with an output signal from the comparator 73 a and an output signal from the comparator 73 b, a gate circuit 74 b supplied with an output signal from the comparator 73 c and an output signal from the comparator 73 d, a gate circuit 74 c supplied with an output signal from the comparator 73 e and an output signal from the comparator 73 f, and a gate circuit 75 supplied with an output signal from the gate circuit 74 a, an output signal from the gate circuit 74 b, and an output signal from the gate circuit 74 c.
  • the area detecting circuit 38 has a flag signal generating circuit 76 formed of one hundred and twenty-eight chip circuits.
  • the one hundred and twenty-eight chip circuits are provided so as to correspond to the one hundred and twenty-eight areas A 000 to A 127 shown in FIG. 8.
  • Each of the chip circuits is supplied with an output signal from the gate circuit 75 , a pixel clock signal CLK and a chip select signal CS.
  • the pixel clock signal CLK and the chip select signal CS are supplied from the CPU 4 so as to correspond to the luminance signal and the color difference signal of every pixel data supplied from the encoder 37 .
  • the pixel clock signal CLK is a clock signal corresponding to a timing of a processing of each pixel data.
  • at the timing at which each pixel data is processed, a “Low”-level pixel clock signal is supplied to the chip circuit, and in other cases, a “High”-level pixel clock signal is supplied thereto.
  • a “Low”-level chip select signal CS is supplied only to the chip circuit selected from the 128 chip circuits, and a “High”-level chip select signal is supplied to other chip circuits which are not selected.
  • Each of the chip circuits provided in the flag signal generating circuit 76 has a gate circuit 76 a and a counter 76 b . Therefore, the flag signal generating circuit 76 has one hundred and twenty-eight gate circuits 76 a and one hundred and twenty-eight counters 76 b .
  • the gate circuit 76 a outputs a “Low”-level signal only when all of the output signal supplied from the gate circuit 75 , the pixel clock signal CLK and the chip select signal CS are at “Low” level.
  • the counter 76 b is a counter for responding to a clock timing of the pixel clock signal CLK and for counting up only when it is supplied with a “Low”-level signal from the gate circuit 76 a.
  • the counter generates a flag signal when its count value becomes a predetermined number or greater (5 counts or greater in this embodiment).
  • the generated flag signal is supplied to a multiplexer 77 .
  • the multiplexer 77 receives the flag signal output from each of the chip circuits of the flag signal generating circuit 76 and supplies the same to the CPU 4 . At this time, the multiplexer 77 supplies the number of the chip circuit outputting the flag signal to the CPU 4 .
  • the CPU 4 can select the area where the pixel data having the same color component as that obtained by image pickup of the target object exists, with reference to the number.
  • the switch circuits 72 a, 72 b, 72 c and 72 d provided in the area detecting circuit 38 must carry out the switching operations. Therefore, the switching operations will be described.
  • in order to set the switch states of the switch circuits 72 a, 72 b, 72 c and 72 d, it is necessary to select an object mode based on the luminance signal Y 0 and the color difference signals (R 0 −Y 0 ), (B 0 −Y 0 ) obtained by image pickup of the target object.
  • the object mode has four modes. Modes 0 to 3 will hereinafter be described successively.
  • the mode 0 is a mode selected when the object set as the target object has color information to some degree. Specifically, it means that both of the values of the color difference signals (R 0 −Y 0 ) and (B 0 −Y 0 ) exceed a predetermined level.
  • when the CPU 4 selects the mode 0 as the object mode, the CPU 4 supplies control signals to the switch circuits 72 a, 72 b, 72 c and 72 d to thereby respectively set the switch states of the switch circuits 72 a, 72 b, 72 c and 72 d to “Up”, “Up”, “Up”, and “Up”. Once the switch states are set, these switch states will never be changed until the object mode is changed.
  • the mode 1 is a mode selected when the object set as the target object has color components including red color components exceeding a predetermined level and blue color components which do not exceed a predetermined level. Specifically, it means that the value of (R 0 −Y 0 ) exceeds a predetermined level while the value of (B 0 −Y 0 ) does not.
  • when the CPU 4 selects the mode 1 as the object mode, the CPU 4 supplies control signals to the switch circuits 72 a, 72 b, 72 c and 72 d to thereby respectively set the switch states of the switch circuits 72 a, 72 b, 72 c and 72 d to “Up”, “Up”, “Down”, and “Down”.
  • the mode 2 is a mode selected when the object set as the target object has color components including blue color components exceeding a predetermined level and red color components which do not exceed a predetermined level. Specifically, it means that the value of (B 0 −Y 0 ) exceeds a predetermined level while the value of (R 0 −Y 0 ) does not.
  • when the CPU 4 selects the mode 2 as the object mode, the CPU 4 supplies control signals to the switch circuits 72 a, 72 b, 72 c and 72 d to thereby respectively set the switch states of the switch circuits 72 a, 72 b, 72 c and 72 d to “Down”, “Down”, “Up”, and “Up”.
  • the mode 3 is a mode selected when the object set as the target object has color components including both blue color components and red color components which do not exceed a predetermined level. Specifically, it means that neither of the values of (R 0 −Y 0 ) and (B 0 −Y 0 ) exceeds a predetermined level.
  • the mode 3 is also selected by the CPU 4 when a relationship among the luminance signal Y 0 and the color difference signals (R 0 −Y 0 ), (B 0 −Y 0 ) corresponds to none of the modes 0 to 2.
  • when the CPU 4 selects the mode 3 as the object mode, the CPU 4 supplies control signals to the switch circuits 72 a, 72 b, 72 c and 72 d to thereby respectively set the switch states of the switch circuits 72 a, 72 b, 72 c and 72 d to “Down”, “Down”, “Down”, and “Down”.
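The four modes amount to a small lookup table of switch positions. In the sketch below, "Up" selects the Y-scaled bound from the multiplier (as the mode-0 description implies) and "Down" the fixed limit signal from the CPU 4; that mapping is inferred from the text, so treat it as an assumption.

```python
# Switch states for (72a, 72b, 72c, 72d) per object mode.
SWITCH_STATES = {
    0: ("Up", "Up", "Up", "Up"),         # target has both color components
    1: ("Up", "Up", "Down", "Down"),     # strong (R-Y), weak (B-Y)
    2: ("Down", "Down", "Up", "Up"),     # strong (B-Y), weak (R-Y)
    3: ("Down", "Down", "Down", "Down"), # little color information
}

def set_switches(mode):
    """Return the states to apply to switch circuits 72a, 72b, 72c, 72d."""
    return SWITCH_STATES[mode]
```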
  • the area detecting circuit 38 carries out a processing of detecting an object.
  • the detecting processing will subsequently be described with reference to FIG. 9 in correspondence with each of the object modes.
  • the comparator 73 a compares the upper limit luminance signal Y U with the luminance signal Y. When Y U ≥ Y, the comparator outputs a “High”-level signal, and when Y U < Y, the comparator outputs a “Low”-level signal.
  • the comparator 73 b compares the lower limit luminance signal Y L with the luminance signal Y. When Y L ≤ Y, the comparator outputs a “High”-level signal, and when Y L > Y, the comparator outputs a “Low”-level signal.
  • the gate circuit 74 a receives the output signals from the comparator 73 a and the comparator 73 b, and, when both of the output signals from the comparators 73 a, 73 b are at “high” level, outputs a “Low”-level signal to the gate circuit 75 at the succeeding stage.
  • since the switch states of the switch circuits 72 a, 72 b are respectively set to “Up” and “Up” when the mode 0 is selected, the comparator 73 c is supplied with the data Y·α 4 and the color difference data (R−Y), both of which are derived from data supplied from the encoder 37 .
  • the comparator 73 c compares the data Y·α 4 with the data (R−Y). When Y·α 4 ≥ (R−Y), the comparator outputs a “High”-level signal, and when Y·α 4 < (R−Y), the comparator outputs a “Low”-level signal.
  • the comparator 73 d compares the data Y·α 3 with the data (R−Y). When Y·α 3 ≤ (R−Y), the comparator outputs a “High”-level signal, and when Y·α 3 > (R−Y), the comparator outputs a “Low”-level signal.
  • the gate circuit 74 b receives signals output from the comparator 73 c and the comparator 73 d, and, when both of the output signals from the comparators 73 c, 73 d are at “High” level, outputs a “Low”-level signal to the gate circuit 75 at the succeeding stage.
  • likewise, since the switch states of the switch circuits 72 c, 72 d are respectively set to “Up” and “Up”, the comparator 73 e is supplied with the data Y·α 6 and the color difference data (B−Y), both of which are derived from data supplied from the encoder 37 .
  • the comparator 73 e compares the data Y·α 6 with the data (B−Y). When Y·α 6 ≥ (B−Y), the comparator outputs a “High”-level signal, and when Y·α 6 < (B−Y), the comparator outputs a “Low”-level signal.
  • the comparator 73 f compares the data Y·α 5 with the data (B−Y). When Y·α 5 ≤ (B−Y), the comparator outputs a “High”-level signal, and when Y·α 5 > (B−Y), the comparator outputs a “Low”-level signal.
  • the gate circuit 74 c receives signals output from the comparator 73 e and the comparator 73 f, and, when both of the output signals from the comparators 73 e, 73 f are at “High” level, outputs a “Low”-level signal to the gate circuit 75 at the succeeding stage.
  • the gate circuit 75 receives the output signals from the gate circuits 74 a, 74 b and 74 c, and, only when all of the output signals from the gate circuits 74 a, 74 b and 74 c are at “Low” level, outputs a “Low”-level signal to the respective chip circuits of the flag generating circuit 76 .
  • satisfaction of the conditions of the equation (700) means that the luminance signal Y and the color difference signals (R−Y), (B−Y) of a pixel data indicate the same color as that of the set target object.
  • the luminance signal Y and the color difference signals (R−Y), (B−Y) of the respective pixel data are successively supplied from the encoder 37 to the area detecting circuit 38 so as to correspond to the raster scan.
  • the area detecting circuit 38 is supplied with all the pixel data from the encoder 37 and determines whether or not each of the pixel data satisfies the conditions of the equation (700).
  • the area detecting circuit 38 is supplied with all the pixel data, a hardware circuit formed of the switch circuits 72 a to 72 d, the comparators 73 a to 73 f and the gate circuits 74 a to 74 c determines whether or not each of the pixel data satisfies the conditions of the equation (700). Therefore, it is possible to carry out the determination on a real time base without any processing load on the CPU 4 .
  • since any pixel data indicative of the same color as that of the set target object does not exist in areas other than the area A 035 in this example, even if pixel data of the areas other than the area A 035 are supplied to the area detecting circuit 38 , the gate circuit 75 outputs a “High”-level signal. When the area detecting circuit 38 is supplied with the pixel data indicative of the same color as that of the target object in the area A 035 , the gate circuit 75 outputs the “Low”-level signal. At this time, the “Low”-level chip select signal CS is supplied only to the 36th chip circuit corresponding to the area A 035 , while the “High”-level chip select signal is supplied to the other chip circuits.
  • the “Low”-level pixel clock signal is supplied to the chip circuits at the timing at which the pixel data indicative of the same color as that of the target object is supplied. Therefore, only when a gate circuit 76 a 035 of the 36th chip circuit is supplied with the “Low”-level signal from the gate circuit 75 , the “Low”-level pixel clock signal CLK and the “Low”-level chip select signal, the gate circuit 76 a 035 supplies a “Low”-level signal to a counter 76 b 035 .
  • when being supplied with the “Low”-level signal from the gate circuit 76 a 035 , the counter 76 b 035 counts up and, when the count value becomes 5, outputs a flag signal to the multiplexer 77 .
  • This operation means that if there are one thousand four hundred and forty pixel data (48 pixels × 30 pixels) in the area A 035 and five or more of them are indicative of the same color as that of the set target object, then the chip circuit corresponding to the area A 035 outputs the flag signal.
  • accordingly, the counter 76 b 035 of the chip circuit corresponding to the area A 035 outputs the flag signal, and the counters 76 b corresponding to the areas other than the area A 035 are prevented from outputting the flag signal.
  • the multiplexer 77 outputs the flag signal output from each of the chip circuits to the CPU 4 in correspondence with the area. In this case, the multiplexer outputs to the CPU 4 the flag signal output from the 36th chip circuit corresponding to the area A 035 .
  • the CPU 4 can recognize on a real time base which area the pixel indicative of the same color as the color of the set target object exists in.
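Putting the pieces together, the per-pixel test of equation (700) plus the per-area counters can be sketched as below. The exact form of equation (700) is not reproduced in the text, so the in-range test here is a hedged reading of the comparator description; the bound-supplying functions let both the fixed-limit and the Y-scaled (mode-dependent) cases fit.

```python
from collections import Counter

FLAG_THRESHOLD = 5  # counter 76b raises its flag at five matching pixels

def detect_areas(pixels, y_lo, y_hi, ry_bounds, by_bounds):
    """pixels: iterable of (x, y, Y, R-Y, B-Y) in raster order.
    ry_bounds / by_bounds: functions mapping Y to (lo, hi) bounds; in mode 0
    these would be (alpha3 * Y, alpha4 * Y) and (alpha5 * Y, alpha6 * Y)."""
    counts = Counter()
    for x, yy, lum, ry, by in pixels:
        ry_lo, ry_hi = ry_bounds(lum)
        by_lo, by_hi = by_bounds(lum)
        if (y_lo <= lum <= y_hi and ry_lo <= ry <= ry_hi
                and by_lo <= by <= by_hi):
            counts[(yy // 30) * 16 + (x // 48)] += 1  # area number, cf. FIG. 8
    return sorted(a for a, n in counts.items() if n >= FLAG_THRESHOLD)
```

The returned area numbers correspond to the flag signals the multiplexer 77 forwards to the CPU 4.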
  • the autofocus operation will hereinafter be described. FIGS. 10 to 16 are flowcharts therefor.
  • a focus mode is shifted from a manual focus mode to an autofocus mode when a cameraman presses an autofocus button provided in an operation button 5 .
  • the autofocus mode includes a continuous mode in which the autofocus mode is continued after the button is pressed until a command of mode shift to the manual focus mode is issued, and a non-continuous mode in which, after an object is brought into focus, the autofocus mode is stopped and the mode is automatically shifted to the manual focus mode.
  • the continuous mode will be described in the following explanation with reference to the flowcharts. In processings in steps S 100 to S 131 , it is determined to which direction the lens is to be moved. In processings in steps S 201 to S 221 , the lens position is calculated so that the estimation value should be maximum.
  • the focus lens is moved to the position X 1 which is distant in the Far direction from an initial lens position X 0 by a distance of D/2, subsequently moved to a position X 2 which is distant in the Near direction from the position X 1 by a distance of D, and then moved to a position which is distant from the position X 2 in the Far direction by a distance of D/2, i.e., returned to the initial lens position X 0 .
  • the Near direction depicts a direction in which the lens is moved toward the imaging devices
  • the Far direction depicts a direction in which the lens is moved away from the imaging devices.
  • Reference symbol D depicts a focal depth.
  • the microcomputer 64 stores in the RAM 66 the estimation values Ei(X 0 ), the estimation values E i (X 1 ), and the estimation values E i (X 2 ) generated in the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 .
  • the focal depth is a data indicating a range within which the lens is regarded as being in focus around a focus point. Therefore, even if the focus lens is moved within the range of the focal depth, it is impossible for a person to recognize deviation of focus resulting from such movement. Conversely, when the lens is moved from the position X 1 to the position X 2 , if the lens is moved by a distance exceeding the focal depth, then deviation of the focus resulting from the movement influences the video signal obtained by image pickup. Specifically, when a maximum movement amount of the lens is set within the focal depth, the deviation of the focus cannot be recognized.
  • in step S 100 , the microcomputer 64 stores in the RAM 66 the estimation values E 1 (X 0 ) to the estimation values E 24 (X 0 ) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 . After finishing storing the above estimation values, the microcomputer 64 issues to the CPU 4 a command to move the focus lens in the Far direction by a distance of D/2.
  • in step S 101 , the CPU 4 outputs a command to the focus-lens motor drive circuit 12 c to move the focus lens in the Far direction by a distance of D/2.
  • in step S 102 , the microcomputer 64 stores in the RAM 66 the estimation values E 1 (X 1 ) to the estimation values E 24 (X 1 ) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 . After finishing storing the above estimation values, the microcomputer 64 issues to the CPU 4 a command to move the focus lens in the Near direction by a distance of D.
  • in step S 103 , the CPU 4 outputs a command to the focus-lens motor drive circuit 12 c to move the focus lens in the Near direction by a distance of D.
  • in step S 104 , the microcomputer 64 stores in the RAM 66 the estimation values E 1 (X 2 ) to the estimation values E 24 (X 2 ) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 . After finishing storing the above estimation values, the microcomputer 64 issues to the CPU 4 a command to move the focus lens in the Far direction by a distance of D/2, thereby returning the lens to the initial position X 0 .
  • the estimation values E 1 (X 0 ) to the estimation values E 24 (X 0 ) generated when the lens is located at the position X 0 , the estimation values E 1 (X 1 ) to the estimation values E 24 (X 1 ) generated when the lens is located at the position X 1 , and the estimation values E 1 (X 2 ) to the estimation values E 24 (X 2 ) generated when the lens is located at the position X 2 are stored in the RAM 66 of the microcomputer 64 .
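A hedged sketch of this sampling sequence follows; move_lens() and read_estimation_values() stand in for the CPU 4 / drive-circuit commands and the generator circuits and are assumptions of the sketch, as is treating the Far direction as increasing position.

```python
def sample_direction_points(x0, depth_d, move_lens, read_estimation_values, ram):
    """Sample E1..E24 at X0, X1 = X0 + D/2 (Far) and X2 = X1 - D (Near),
    then return the lens to X0; ram is the ring buffer for (position, values)."""
    ram.append((x0, read_estimation_values()))   # step S100: E_i(X0)
    x1 = x0 + depth_d / 2
    move_lens(x1)                                # step S101: Far by D/2
    ram.append((x1, read_estimation_values()))   # step S102: E_i(X1)
    x2 = x1 - depth_d
    move_lens(x2)                                # step S103: Near by D
    ram.append((x2, read_estimation_values()))   # step S104: E_i(X2)
    move_lens(x0)                                # return by D/2 to X0
    return x1, x2
```

Because each step stays within the focal depth D, the wobble itself is invisible in the picture, as the preceding bullet explains.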
  • Processings in steps S 105 to S 115 are processings for excluding improper estimation values from the twenty-four estimation values.
  • FIGS. 18A and 18B show that a target object A to be brought into focus is imaged in a window W 2 and a non-target object B having high contrast and located on the front side of the target object A is imaged in a window W 1 but outside of the window W 2 .
  • the estimation value E 1 generated by the first horizontal-direction estimation value generating circuit 62 a having a preset window size value of the window W 1 inevitably includes high-frequency components resulting from the object B and hence is improper as the estimation value of the object A.
  • the estimation value E 1 inevitably becomes considerably large as compared with the estimation value E 2 generated by the second horizontal-direction estimation value generating circuit 62 b having the preset value of the window W 2 .
  • the estimation value E 7 generated by the seventh horizontal-direction estimation value generating circuit 62 g having a preset window size value of the window W 1 inevitably includes high-frequency components resulting from the object B and hence is improper as the estimation value of the object A. Therefore, the estimation value E 7 inevitably becomes considerably large as compared with the estimation value E 8 generated by the eighth horizontal-direction estimation value generating circuit 62 h having the preset value of the window W 2 .
  • FIG. 18B shows windows obtained when the lens is moved so as to be focused on the object A. The more the lens is adjusted so as to be focused on the object A, the more the lens becomes considerably out of focus with respect to the object B. When the lens becomes considerably out of focus with respect to the object B, the image of the object B becomes considerably blurred and the blurred image thereof enters the window W 2 . Therefore, in the state shown in FIG. 18B, the estimation value E 2 generated by the second horizontal-direction estimation value generating circuit 62 b having the preset value of the window W 2 is not always proper, and likewise the estimation value E 8 generated by the eighth horizontal-direction estimation value generating circuit 62 h having the preset value of the window W 2 is not always proper.
  • steps S 105 to S 115 will specifically be described with reference to FIGS. 10 and 11.
  • in step S 105 , it is determined, by using the estimation values E 1 (X 0 ) to E 24 (X 0 ) obtained when the lens is located at the position X 0 , whether or not the condition of the equation (105) is satisfied.
  • If the estimation values E 1 , E 2 , E 7 , E 8 satisfy the equation (105), then it is determined that the estimation values E 1 , E 2 , E 7 , E 8 are proper values, and the processing proceeds to step S 117 . If on the other hand the estimation values E 1 , E 2 , E 7 , E 8 do not satisfy the equation (105), then it is determined that at least the estimation values E 1 , E 2 , E 7 , E 8 are improper values, and the processing proceeds to step S 106 .
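The exact inequality of equation (105) is not reproduced in this text; the surrounding description says only that a high-contrast non-target object lying between the inner and outer windows makes the outer-window value disproportionately large. As a hedged illustration, a ratio test of that idea might look like the following; the ratio form and the threshold are invented for illustration.

```python
def windows_consistent(e_outer, e_inner, threshold=1.5):
    """True when the outer-window estimation value is not suspiciously large
    compared with the inner-window value (cf. W1 vs. W2 in FIG. 18A)."""
    return e_outer <= threshold * e_inner

def step_s105_ok(E):
    """E: mapping from estimation value number (1..24) to its value.
    Sketch of step S105: check the W1/W2 pairs for both filter coefficients."""
    return windows_consistent(E[1], E[2]) and windows_consistent(E[7], E[8])
```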
  • since it is determined based on the calculated result of step S 105 that the estimation values E 1 , E 2 , E 7 , E 8 are improper, in step S 106 , the estimation values E 3 and E 9 obtained from the window W 3 , which is the next larger window after the window W 1 , are used, and the estimation values E 4 and E 10 obtained from the window W 4 , which is the next larger window after the window W 2 , are used.
  • in step S 106 , similarly to step S 105 , it is determined, by using the estimation values E 1 (X 0 ) to E 24 (X 0 ) obtained when the lens is located at the position X 0 , whether or not the condition of the equation (106) is satisfied.
  • If the estimation values E 3 , E 4 , E 9 , E 10 satisfy the equation (106), then it is determined that they are proper values, and the processing proceeds to step S 107 . If on the other hand they do not satisfy the equation (106), then it is determined that at least the estimation values E 3 , E 4 , E 9 , E 10 are improper values, and the processing proceeds to step S 108 .
  • in this case, the estimation values E 3 , E 4 , E 9 , and E 10 satisfy the equation (106).
  • when the estimation values E 3 , E 4 , E 9 , and E 10 are adopted as proper values, however, the non-target object B is brought into focus. Indeed, the lens should be focused on the target object A; but if the lens is adjusted so as to be focused on the object A, it is impossible to obtain the proper estimation values.
  • in that case, the autofocus control circuit 34 repeatedly executes the processing of the control loop and keeps the focus lens moving for a long time, and while the autofocus control circuit repeatedly executes the control loop, the video signal indicative of a blurred image is continuously output. If the lens is focused on the non-target object B instead, it is possible to prevent the video signal indicative of the blurred image from being output continuously while the control loop is repeated for a long period of time.
  • in step S 108 , since it is determined based on the result of the calculation in step S 106 that the estimation values E 3 , E 4 , E 9 , and E 10 are improper, the estimation values E 5 and E 11 obtained from the window W 5 , which is the next larger window after the window W 3 , are used, and the estimation values E 6 and E 12 obtained from the window W 6 , which is the next larger window after the window W 4 , are used.
  • In step S 108 , similarly to step S 106 , it is determined by using the estimation values E 1 (X 0 ) to E 24 (X 0 ) generated when the lens is located at the position X 0 whether or not the condition of the equation (108) is satisfied. If the estimation values E 5 , E 6 , E 11 , E 12 satisfy the equation (108), then it is determined that they are proper values, and the processing proceeds to step S 109 . If on the other hand they do not satisfy the equation (108), then it is determined that at least the estimation values E 5 , E 6 , E 11 , E 12 are improper values, and the processing proceeds to step S 110 .
  • In step S 110 , similarly to step S 108 , it is determined by using the estimation values E 1 (X 0 ) to E 24 (X 0 ) generated when the lens is located at the position X 0 whether or not the condition of the equation (110) is satisfied. If the estimation values E 13 , E 14 , E 19 , E 20 satisfy the equation (110), then it is determined that they are proper values, and the processing proceeds to step S 111 . If on the other hand they do not satisfy the equation (110), then it is determined that at least the estimation values E 13 , E 14 , E 19 , E 20 are improper values, and the processing proceeds to step S 112 .
  • In step S 112 , similarly to step S 110 , it is determined by using the estimation values E 1 (X 0 ) to E 24 (X 0 ) generated when the lens is located at the position X 0 whether or not the condition of the equation (112) is satisfied. If the estimation values E 15 , E 16 , E 21 , E 22 satisfy the equation (112), then it is determined that they are proper values, and the processing proceeds to step S 113 . If on the other hand they do not satisfy the equation (112), then it is determined that at least the estimation values E 15 , E 16 , E 21 , E 22 are improper values, and the processing proceeds to step S 114 .
  • In step S 114 , similarly to step S 110 , it is determined by using the estimation values E 1 (X 0 ) to E 24 (X 0 ) generated when the lens is located at the position X 0 whether or not the condition of the equation (114) is satisfied. If the estimation values E 17 , E 18 , E 23 , E 24 satisfy the equation (114), then it is determined that they are proper values, and the processing proceeds to step S 115 . If on the other hand they do not satisfy the equation (114), then it is determined that at least the estimation values E 17 , E 18 , E 23 , E 24 are improper values, and the processing proceeds to step S 116 .
  • When the processing reaches step S 116 , it is inevitably determined that all the estimation values E 1 to E 24 are improper. Therefore, it is determined that the autofocus operation cannot be carried out; the mode is shifted to the manual focus mode and the processing is ended. The overall cascade of steps S 105 to S 116 is sketched below.
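  • The fallback logic of steps S 105 to S 116 can be pictured as a cascade that tries ever larger windows until a proper set of estimation values is found. The following Python sketch shows the control flow only; is_proper() is a hypothetical placeholder for the equations (105) to (114), which are defined in the figures and not reproduced here.

```python
# Illustrative control-flow sketch of steps S105-S116. Each entry pairs the
# estimation-value numbers examined at one step with the equation number used.
CHECKS = [
    ((1, 2, 7, 8),     105),   # step S105
    ((3, 4, 9, 10),    106),   # step S106
    ((5, 6, 11, 12),   108),   # step S108
    ((13, 14, 19, 20), 110),   # step S110
    ((15, 16, 21, 22), 112),   # step S112
    ((17, 18, 23, 24), 114),   # step S114
]

def select_estimation_values(E_at_x0, is_proper):
    """E_at_x0: dict i -> E_i(X0); is_proper: stand-in for eqs (105)-(114)."""
    for numbers, equation in CHECKS:
        values = [E_at_x0[i] for i in numbers]
        if is_proper(values, equation):
            return numbers    # proper values found; the autofocus continues
    return None               # step S116: shift to the manual focus mode
```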
  • The processings in steps S 117 to S 131 , shown in the flowcharts, constitute a specific operation for determining the lens movement direction; they are carried out by the microcomputer 64 .
  • In step S 118 , it is determined whether or not the number i is defined as a non-use number. If it is determined that the number i is not defined as the non-use number, then the processing proceeds to step S 120 . If it is determined that the number i is defined as the non-use number, then in step S 119 the number i is incremented and the next number is examined.
  • The processing in step S 120 is carried out when the estimation value E i (X 0 ) is not substantially equal to E i (X 2 ) but larger than E i (X 2 ) to some degree, and when the estimation value E i (X 1 ) is not substantially equal to E i (X 0 ) but larger than E i (X 0 ) to some degree.
  • In other words, the processing determines whether or not, when the focus lens is moved in the Far direction from the position X 2 through the position X 0 to the position X 1 , the estimation values increase in the order of E i (X 2 ), E i (X 0 ), E i (X 1 ). Specifically, the determination is made by calculating the equation (120). If the estimation values satisfy the equation (120), the processing proceeds to step S 121 ; otherwise the processing proceeds to step S 122 .
  • In step S 121 , the weight data W i is added to the count-up value U cnt , and then the processing proceeds to step S 126 .
  • The processing in step S 122 is carried out when the estimation value E i (X 0 ) is not substantially equal to E i (X 1 ) but larger than E i (X 1 ) to some degree, and when the estimation value E i (X 2 ) is not substantially equal to E i (X 0 ) but larger than E i (X 0 ) to some degree.
  • In other words, the processing determines whether or not, when the focus lens is moved in the Far direction from the position X 2 through the position X 0 to the position X 1 , the estimation values decrease in the order of E i (X 2 ), E i (X 0 ), E i (X 1 ). Specifically, the determination is made by calculating the equation (122).
  • If the above estimation values satisfy the equation (122), it means that as the focus lens is moved from the position X 2 through the position X 0 to the position X 1 , the corresponding estimation values decrease in that order. Then the processing proceeds to the next step S 123 . If the above estimation values do not satisfy the equation (122), then the processing proceeds to step S 124 .
  • In step S 123 , the weight data W i is added to the count-down value D cnt , and then the processing proceeds to step S 126 .
  • The processing in step S 124 is carried out when the estimation value E i (X 0 ) is not substantially equal to E i (X 1 ) but larger than E i (X 1 ) to some degree, and when the estimation value E i (X 0 ) is not substantially equal to E i (X 2 ) but larger than E i (X 2 ) to some degree.
  • In other words, the processing determines whether, when the focus lens is moved in the Far direction from the position X 2 through the position X 0 to the position X 1 , the peak of the estimation values lies at the estimation value E i (X 0 ). Specifically, the determination is made by calculating the equation (124).
  • If the above estimation values satisfy the equation (124), it means that when the focus lens is moved from the position X 2 through the position X 0 to the position X 1 , the peak value of the estimation values is the estimation value E i (X 0 ). Then the processing proceeds to the next step S 125 . If the above estimation values do not satisfy the equation (124), then the processing proceeds to step S 126 .
  • In step S 125 , the weight data W i is added to the flat-count value F cnt , and then the processing proceeds to step S 126 .
  • In step S 126 , the value of i is incremented, and then the processing proceeds to step S 127 .
  • In step S 127 , it is determined whether or not the value of i has reached 24, because the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 generate twenty-four estimation values E in total. If the value of i is 24, then it is determined that the calculations for all the estimation values are finished, and the processing proceeds to step S 128 . If the value of i is not 24, then the processing loop formed of steps S 118 to S 127 is repeated.
  • In step S 128 , the count-up value U cnt , the count-down value D cnt and the flat-count value F cnt are compared to determine which of them is the largest. If it is determined that the count-up value U cnt is the largest, then the processing proceeds to step S 129 . If it is determined that the count-down value D cnt is the largest, then the processing proceeds to step S 130 . If it is determined that the flat-count value F cnt is the largest, then the processing proceeds to step S 131 .
  • In step S 129 , the microcomputer 64 determines that the direction toward the position X 1 is the hill-climbing direction of the estimation value, i.e., the direction in which the lens comes into focus, and then supplies to the CPU 4 a signal designating the Far direction as the lens movement direction.
  • In step S 130 , the microcomputer 64 determines that the direction toward the position X 2 is the hill-climbing direction of the estimation value, i.e., the direction in which the lens comes into focus, and then supplies to the CPU 4 a signal designating the Near direction as the lens movement direction.
  • In step S 131 , the microcomputer 64 determines that the position X 0 is the position at which the lens is in focus, and then the processing proceeds to step S 218 .
  • FIG. 19 is a diagram showing, by way of example, the transition of the estimation values E i (X 2 ), E i (X 0 ), E i (X 1 ) obtained when the lens is located at the lens positions X 2 , X 0 , X 1 , respectively.
  • In step S 118 , it is determined whether or not the number i is a non-use number. In this example, it is assumed that all the numbers i correspond to estimation values which can be used.
  • Since the count-up value U cnt has the largest value at the time of the determination in step S 128 , the processing proceeds to step S 129 in the example shown in FIG. 19. As a result, the direction toward X 1 is determined as the focus direction.
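  • The direction determination of steps S 117 to S 131 can be pictured with the following Python sketch. It is only an illustration: the ratio threshold used for "larger to some degree" stands in for the equations (120), (122) and (124), which are defined in the figures, so both the threshold value and the tie-breaking are assumptions.

```python
# Illustrative sketch of the direction determination in steps S117-S131.
# THRESH models "larger to some degree"; the exact conditions are those of
# equations (120), (122) and (124), which are not reproduced here.
THRESH = 1.01  # assumed ratio threshold (hypothetical value)

def decide_direction(E, W, non_use):
    """E[i] = (E_i(X2), E_i(X0), E_i(X1)); W[i] = weight data W_i."""
    u_cnt = d_cnt = f_cnt = 0.0
    for i in range(24):
        if i in non_use:                            # steps S118/S119
            continue
        e2, e0, e1 = E[i]
        if e0 > THRESH * e2 and e1 > THRESH * e0:   # rising toward Far (S120)
            u_cnt += W[i]                           # step S121
        elif e0 > THRESH * e1 and e2 > THRESH * e0: # rising toward Near (S122)
            d_cnt += W[i]                           # step S123
        elif e0 > THRESH * e1 and e0 > THRESH * e2: # peak at X0 (S124)
            f_cnt += W[i]                           # step S125
    # step S128: the largest count decides the result (ties broken arbitrarily)
    if u_cnt >= max(d_cnt, f_cnt):
        return "Far"        # step S129: focus direction is toward X1
    if d_cnt >= f_cnt:
        return "Near"       # step S130: focus direction is toward X2
    return "focus at X0"    # step S131
```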
  • Processings in steps S 200 to S 221 are those for determining the lens position at which the estimation value becomes maximum.
  • The processings shown in these flowcharts are carried out by the microcomputer 64 .
  • the processings in steps S 200 to S 221 will specifically be described with reference to FIGS. 13 to 15 .
  • A distance depicted by ΔX is defined as the distance by which the focus lens is moved in one field period. The distance ΔX not only depicts the distance by which the lens is moved in one field period but also has a polarity determined based on the lens movement direction obtained in the processings in steps S 100 to S 130 . For example, if the lens movement direction is the Far direction, the value of the distance ΔX is set so as to have a positive polarity; if the lens movement direction is the Near direction, the value of the distance ΔX is set so as to have a negative polarity.
  • The sampling frequency is not limited to that of this embodiment; for example, the sampling may be carried out twice per field, and such a change can properly be effected.
  • In step S 201 , the microcomputer 64 issues to the CPU 4 a command to move the lens to a position X k . The lens position X k is defined based on the equation (200) as X k = X k−1 + ΔX, i.e., the lens advances by the distance ΔX every field.
  • In step S 202 , the microcomputer 64 stores in the RAM 66 the estimation values E 1 (X k ) to E 24 (X k ) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 .
  • The twenty-four estimation values E i are stored in a table of the form shown in FIG. 20.
  • In step S 204 , it is determined whether or not the number i is defined as a non-use number. If the number i is not defined as the non-use number, then the processing proceeds to step S 206 . If the number i is defined as the non-use number, then in step S 205 the value of i is incremented and the processing returns to step S 204 again.
  • In step S 206 , it is determined whether or not the estimation values E i (X k ) obtained when the focus lens is moved from the position X k−1 to the position X k are increased to a certain degree or more as compared with the estimation values E i (X k−1 ). Specifically, the determination is based on a calculation of the equation (206).
  • If the condition of the equation (206) is satisfied, the estimation values E i (X k ) are increased to a certain degree or more as compared with the estimation values E i (X k−1 ), and the processing proceeds to the next step S 207 . If the condition of the equation (206) is not satisfied, then the processing proceeds to step S 209 .
  • step S 207 since the estimation values E i (X k ) are increased to a certain degree or more as compared with the estimation values E i (X k ⁇ 1 ), a 2-bit data “01” indicative of increase of the estimation value is stored in the RAM 66 as a U/D information (up/down information) in connection with the estimation value E i (X k ).
  • In step S 208 , similarly to step S 121 , the weight data W i is added to the count-up value U cnt , and then the processing proceeds to step S 214 .
  • In step S 209 , it is determined whether or not the estimation values E i (X k ) obtained when the focus lens is moved from the position X k−1 to the position X k are decreased to a certain degree or more as compared with the estimation values E i (X k−1 ). Specifically, the determination is based on a calculation analogous to the equation (206).
  • step S 210 since the estimation values E i (X k ) are decreased to a certain degree or more as compared with the estimation values E i (X k ⁇ 1 ), a 2-bit data “10” indicative of decrease of the estimation value is stored in the RAM 66 as the U/D information (up/down information) in connection with the estimation value E i (X k ).
  • In step S 211 , similarly to step S 123 , the weight data W i is added to the count-down value D cnt , and then the processing proceeds to step S 214 .
  • The fact that the processing reaches step S 212 means that the estimation values E i (X k ) obtained when the focus lens is moved from the position X k−1 to the position X k are not changed to a certain degree or more relative to the estimation values E i (X k−1 ).
  • In step S 212 , a 2-bit data “00” indicative of flatness of the estimation value is stored in the RAM 66 as the U/D information (up/down information) in connection with the estimation value E i (X k ).
  • In step S 213 , similarly to step S 125 , the weight data W i is added to the flat-count value F cnt , and then the processing proceeds to step S 214 .
  • In step S 214 , the value of i is incremented, and then the processing proceeds to step S 215 .
  • In step S 215 , it is determined whether or not the value of i is 24. If it is determined that the value of i is 24, then it is determined that the calculations for all the estimation values are finished, and the processing proceeds to step S 216 . If it is determined that the value of i is not 24, then the processing loop from step S 204 to step S 215 is repeated until the value of i reaches 24.
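  • The per-field classification of steps S 204 to S 215 can be sketched as follows; the ratio threshold again stands in for the equation (206) and its counterpart for a decrease, so it is an assumption, as is the table layout.

```python
# Illustrative sketch of steps S204-S215: classify each estimation value as
# up ("01"), down ("10") or flat ("00") relative to the previous lens position,
# and accumulate the weighted counts used by the step S216 decision.
THRESH = 1.01  # hypothetical stand-in for conditions such as equation (206)

def update_ud_info(E_prev, E_curr, W, non_use, ud_table, k):
    """E_prev[i], E_curr[i]: E_i at positions X_(k-1) and X_k; W[i]: weights."""
    u_cnt = d_cnt = f_cnt = 0.0
    for i in range(24):
        if i in non_use:                      # steps S204/S205
            continue
        if E_curr[i] > THRESH * E_prev[i]:    # increased (step S206)
            ud_table[(i, k)] = "01"           # step S207
            u_cnt += W[i]                     # step S208
        elif E_prev[i] > THRESH * E_curr[i]:  # decreased (step S209)
            ud_table[(i, k)] = "10"           # step S210
            d_cnt += W[i]                     # step S211
        else:                                 # substantially unchanged
            ud_table[(i, k)] = "00"           # step S212
            f_cnt += W[i]                     # step S213
    return u_cnt, d_cnt, f_cnt
```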
  • a processing in step S 216 is that for determining whether or not the count-down value D cnt is the largest among the count values.
  • the processing in step S 216 will be described by using an example shown in FIG. 20.
  • FIG. 20 is a table showing a state of the respective estimation values and the respective up/down informations stored in the RAM 66 .
  • The microcomputer 64 stores in the RAM 66 the respective estimation values and the up/down informations set in connection with them, so that these values and informations correspond to the position X k to which the lens has been moved.
  • An estimation value obtained by the synthetic judgement thus made in step S 216 will hereinafter be referred to as “a total estimation value”. In other words, the processing in step S 216 can be expressed as that for determining whether or not the total estimation value is decreased.
  • D cnt = W 1 + W 2 + W 3 + W 6 + W 7 + W 8 + W 10 + W 13 + W 14 + W 15 + W 16 + W 19 + W 21 + W 22 + W 24
  • If it is determined in step S 216 that the total estimation value is decreased, then the processing proceeds to step S 217 .
  • In step S 217 , the value of j is incremented, and then the processing proceeds to step S 218 . The value of j indicates how many times the determination result in step S 216 has continuously been YES, i.e., how many times the total estimation value has been decreased in succession.
  • In step S 218 , it is determined whether or not the lens movement distance (X k+j − X k ) from the position X k is larger than D × n. The inequality actually used for the determination is (X k+j − X k ) > D × n, where D depicts the focal depth of the focus lens and n depicts a previously set coefficient. Study of experimental results reveals that when the value of n is set within the range of 1 ≦ n ≦ 10, the autofocus operation at an optimum speed can be realized.
  • The determination carried out in step S 218 will be described with reference to FIG. 21.
  • An abscissa of a graph shown in FIG. 21 represents a lens position X, and an ordinate thereof represents an estimation value E(X) corresponding to the lens position.
  • If on the other hand it is determined in step S 216 that the count-down value D cnt does not have the largest value, then it is determined that the total estimation value is not decreased, and the processing proceeds to step S 219 . This processing resets the value of j. The reason for resetting the value of j is that j indicates how many times the total estimation value has been decreased continuously; at the time of the determination in step S 216 , the continuous decrease of the total estimation value has stopped. Accordingly, in step S 219 , the value of j is reset.
  • In step S 220 , the value of k is incremented in order to further move the focus lens. Then, the processing returns to step S 201 . The whole loop is sketched below.
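  • Taken together, steps S 200 to S 220 form a search loop that advances the lens field by field until the total estimation value has kept falling over a distance larger than D × n. A minimal Python sketch, assuming the helper update_ud_info() above and treating delta_x, D and n as given parameters:

```python
# Illustrative sketch of the outer loop of steps S200-S220: advance the lens by
# delta_x per field (equation (200)), classify the estimation values with
# update_ud_info(), and stop once the total estimation value has decreased
# continuously over a distance larger than D * n (step S218).
def search_past_peak(move_lens, sample_values, W, non_use, delta_x, D, n, x0):
    ud_table = {}
    E_prev = sample_values()             # estimation values at the start position
    x_k, j, k = x0, 0, 0
    while True:
        k += 1                           # step S220 (k is incremented each pass)
        x_k = x_k + delta_x              # equation (200): X_k = X_(k-1) + dX
        move_lens(x_k)                   # step S201
        E_curr = sample_values()         # step S202
        u, d, f = update_ud_info(E_prev, E_curr, W, non_use, ud_table, k)
        if d > u and d > f:              # step S216: total estimation value fell
            j += 1                       # step S217
            if j * abs(delta_x) > D * n: # step S218: moved past the peak by D*n
                return ud_table          # proceed to interpolation (step S221)
        else:
            j = 0                        # step S219: the decrease was interrupted
        E_prev = E_curr
```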
  • In step S 221 , the lens position Xg where the estimation value becomes maximum is calculated by interpolation. The position Xg is defined as the just focus position. The calculation used for this interpolation is a barycentric calculation.
  • Xg = \frac{\int_{X_{start}}^{X_{end}} E(X) \, X \, dX}{\int_{X_{start}}^{X_{end}} E(X) \, dX}    (10)
  • In the above equation, X start and X end depict the start position and the end position of the integration range, X depicts a lens position, and E(X) depicts the estimation value obtained when the focus lens is located at the lens position X.
  • This barycentric calculation (the calculation according to the equation (10)) permits the lens position where the estimation value becomes maximum to be calculated even if the lens position Xg where the estimation value becomes maximum and the sample points of the estimation value do not coincide with each other.
  • FIG. 21 shows an example of the estimation value curve.
  • an estimation value E(X k ) sampled at the lens position X k two fields before is the maximum value among all the sampled estimation values.
  • The focus lens position X k+2 at the time when the maximum sampled estimation value is determined is employed as the end position X end of the integration range in the equation (10).
  • Let X′ be the focus lens position corresponding to the point on the estimation value curve which has the same value as the estimation value E(X k+2 ) sampled at the integration end position X k+2 and which is located on the opposite side, relative to the lens position X k where the maximum sampled estimation value E(X k ) is obtained, of the integration end position X k+2 .
  • The focus lens position X′ is employed as the integration range start position X start in the equation (10).
  • Accordingly, the integration range in the equation (10) extends from the lens position X′ to the lens position X k+2 .
  • Since the estimation values are actually obtained only at discrete sampling points, a discrete calculation is carried out in accordance with the following equation (11), the discrete counterpart of the equation (10), to realize a high-accuracy interpolation calculation:

  Xg = \frac{\sum_{X=X_{start}}^{X_{end}} E(X) \, X}{\sum_{X=X_{start}}^{X_{end}} E(X)}    (11)
  • If the lens position X′ obtained as the integration start position X start coincides with one of the sampled lens positions (X 0 to X k−1 ), then that lens position is employed as the integration start position in the equation (11). If on the other hand the lens position X′ does not coincide with any of the sampled lens positions, then a lens position whose estimation value is smaller than E(X k+2 ) and which is closest to the lens position X′ is selected from the sampled lens positions (X 0 to X k−1 ) stored in the RAM 66 . In the example shown in FIG. 21, the integration range in the equation (11) is the range from the lens position X k−3 to the lens position X k+2 .
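  • As an illustration of the equation (11), the following Python sketch computes the discrete barycenter over the sampled positions of the integration range; the sample data are hypothetical.

```python
# Illustrative sketch of the barycentric (centroid) interpolation of the
# equation (11): Xg = sum(E(X) * X) / sum(E(X)) over the sampled positions.
def barycentric_focus(samples):
    """samples: list of (lens_position, estimation_value) pairs in the range."""
    num = sum(e * x for x, e in samples)
    den = sum(e for _, e in samples)
    return num / den

# Hypothetical sampled estimation curve around the peak:
samples = [(10.0, 40.0), (10.5, 70.0), (11.0, 95.0), (11.5, 100.0),
           (12.0, 80.0), (12.5, 45.0)]
xg = barycentric_focus(samples)
print(round(xg, 3))   # just focus position lying between two sample points
```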
  • The estimation value used in the interpolation calculation is selected from the twenty-four estimation values E 1 to E 24 generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 . Specifically, from among the estimation values whose curves increase and decrease in the same manner as the total estimation value, the estimation value having the largest weight data W i is selected.
  • This selection will be described with reference to the examples shown in FIGS. 20 and 21. The estimation values which, similarly to the total estimation value, increase while the lens position is at the lens positions X k−3 , X k−2 , X k−1 and X k and decrease while the lens position is at the lens positions X k+1 and X k+2 are the estimation values E 1 , E 2 , E 14 , and E 16 in the example shown in FIG. 20.
  • Study of the relationship between the estimation value E and the weight data Wi shown in FIG. 7 reveals that the values of the weight data W 1 , W 2 , W 14 , and W 16 set for these selected estimation values E 1 , E 2 , E 14 , and E 16 are 20, 15, 5 and 3, respectively.
  • Since the weight data W 1 is the largest of these, the barycentric calculation is carried out in accordance with the equation (11) by using E 1 (X k−3 ), E 1 (X k−2 ), E 1 (X k−1 ), E 1 (X k ), E 1 (X k+1 ) and E 1 (X k+2 ).
  • In this way, the lens position Xg where the estimation value becomes maximum is obtained with high accuracy.
  • The barycentric calculation used for the interpolation allows the lens position Xg to be calculated with high accuracy even if no sample point lies exactly at the lens position Xg, as shown in FIG. 21.
  • Since the lens position Xg where the estimation value becomes maximum is approximate to the position at which the area obtained by the integration over the above integration range is divided into two equal halves, the position at which the area is divided into two equal halves may be employed as the lens position Xg. Alternatively, the middle point position of the integration range may be employed as the lens position Xg.
  • The maximum estimation value is defined as Eg(X k ), and the lower-limit estimation value corresponding thereto is defined as Eg(X k+1 ).
  • The maximum estimation value Eg(X k ) is updated at every field even after the lens is fixed on the lens position X k to focus the focus lens, while the lower-limit estimation value remains fixed to Eg(X k+1 ).
  • In step S 222 , the microcomputer 64 supplies the control signal to the CPU 4 so that the focus lens should be moved to this lens position Xg.
  • step S 223 it is determined whether or not a command to track the object is issued to the CPU 4 .
  • the command to track the object is a command to control a tilt/pan operation of the video camera to track the movement of the object and also to change a position of the estimation value detection window used for the autofocus operation of the video camera.
  • this track command is issued to the CPU 4 when the camera man presses a track command button provided in the operation unit 5 . If the track command is supplied from the operation unit 5 , then the processing proceeds to step S 300 . If on the other hand the track command is not supplied, then the processing proceeds to step S 224 .
  • step S 224 it is determined whether or not a command to stop the autofocus operation is issued. If the camera man operates a button to cancel the autofocus mode, then the processing proceeds to step S 225 , wherein the mode is shifted to the manual focus mode.
  • step S 224 If it is determined in step S 224 that the command to stop the autofocus mode is not issued, then the processing proceeds to step S 226 , wherein the maximum estimation value E g (X k ) and the lower limit estimation value E g (X k+1 ) are compared. If the value of the maximum estimation value E g (X k ) becomes smaller than the lower limit estimation value E g (X k+1 ) due to change of an object or the like, then the processing proceeds to step S 227 , wherein the autofocus mode is restarted. When the autofocus mode is restarted, the processing returns to step S 100 again.
  • FIG. 16 is a flowchart of this tracking processing, which starts from step S 300 .
  • The flowchart starting from step S 300 shows a processing carried out by the CPU 4 .
  • For a comprehensive description of the flowchart, the processing will be described also with reference to the example shown in FIG. 22.
  • FIG. 22 shows a state in which a round object A and a rectangular object B are imaged. It is assumed that in this example both of the object A and the object B have the same color as that of the set target object.
  • a horizontal scanning direction and a vertical scanning direction are respectively defined as an X-axis direction and a Y-axis direction. Therefore, coordinates of the raster scan start point, of a raster scan end point and of the center of the picture screen are respectively set as (0,0), (768,240) and (384,120).
  • If in step S 223 the CPU 4 receives the automatic track command from the operation unit 5 , then the processing proceeds to step S 300 in the flowchart shown in FIG. 16.
  • In step S 300 , it is determined whether or not the object to be the target object has been set by the cameraman's operation of the operation unit 5 .
  • a method of setting the object to be the target object will be described.
  • the camera man carries out an image pickup so that a desired object to be the target object should be positioned at the center of the picture screen.
  • the CPU 4 recognizes the object located at the center of the picture screen as the desired target object which the cameraman sets, and stores a color information about this object in a RAM 4 a.
  • the method of setting the target object is not limited to the above method and may be that of setting an object having a predetermined color (e.g., a flesh color or the like) as the target object.
  • In step S 301 , the CPU 4 selects the object mode most suitable for the target object set in step S 300 from the four object modes (modes 0 , 1 , 2 , 3 ) which have been described above.
  • the CPU 4 controls the switching operations of the switch circuits 72 a, 72 b, 72 c and 72 d in response to the selected object mode.
  • When in step S 301 the CPU 4 selects the object mode and controls the switching operations of the switch circuits 72 a, 72 b, 72 c and 72 d, the area detecting circuit 38 detects an area where the pixel data indicating the same color component as that of the object set as the target object exists.
  • This detection processing is not a processing carried out by the CPU 4 but a processing carried out by the area detecting circuit 38 provided as a hardware circuit.
  • Since the area detecting circuit 38 is formed of a hardware circuit as described above, it is possible to determine satisfaction of the conditions with respect to all the pixel data from the encoder 37 on a real time base. Since the operation of the area detecting circuit has been described with reference to FIG. 9, it will not be described again.
  • In step S 302 , the CPU 4 recognizes, based on the signal indicative of the number of the chip circuit supplied from the area detecting circuit 38 , in which area the pixel data indicative of the same color as that of the target object exists.
  • Thereby, the CPU 4 can select only the areas where the data indicative of the same color as that of the object set as the target object exists.
  • In the example shown in FIG. 22, the eight areas A 068 , A 069 , A 084 , A 085 , A 086 , A 087 , A 102 and A 103 are selected as the areas having the same color as that of the target object.
  • In step S 303 , the CPU 4 reads out all the pixel data of the areas selected in step S 302 from the frame memory 39 in the order of the raster scan. At this time, the pixel data of the areas which have not been selected in step S 302 are not read out therefrom. Since the same address is supplied to the frame memory 39 , each of the pixel data formed of Y data, (R−Y) data and (B−Y) data is read out therefrom. In the example shown in FIG. 22, only the pixel data of the eight areas A 068 , A 069 , A 084 , A 085 , A 086 , A 087 , A 102 and A 103 are read out from the frame memory 39 by the CPU 4 .
  • None of the pixel data of the areas other than the eight selected areas is read out from the frame memory 39 . Since the CPU 4 determines the areas including the pixel data to be read out from the frame memory 39 based on the detected result of the area detecting circuit 38 as described above, it is possible to reduce the amount of pixel data which the CPU 4 receives from the frame memory 39 . Therefore, the CPU 4 can process all the pixel data supplied from the frame memory 39 on a real time base.
  • In step S 304 , when the mode 0 is selected as the object mode, the CPU 4 determines, based on the read pixel data formed of the Y data, the (R−Y) data and the (B−Y) data, whether or not the conditions shown in the equation (700) are satisfied.
  • When the mode 1 is selected as the object mode, the CPU determines whether or not the conditions shown in the equation (701) are satisfied.
  • When the mode 2 is selected as the object mode, the CPU determines whether or not the conditions shown in the equation (702) are satisfied.
  • When the mode 3 is selected as the object mode, the CPU determines whether or not the conditions shown in the equation (703) are satisfied.
  • When the CPU 4 carries out the calculation for determining whether or not the conditions shown in the equation (700), (701), (702) or (703) are satisfied, the luminance signal Y and the color difference signals (R−Y) and (B−Y) of each read pixel data are used in the calculation.
  • Each of the programs for determining whether or not the conditions shown in the equation (700), (701), (702) or (703) are satisfied is previously stored in the RAM 4 a.
  • the CPU 4 obtains a result of determination whether or not each of the pixel data of the selected area satisfies the conditions defined by the equation (700), (701), (702) or (703).
  • the result that the pixel data satisfy the conditions defined by the equation (700), (701), (702) or (703) means that the color indicated by the pixel data is similar to the color of the object set as the target object.
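  • Concretely, such a mode-dependent test can be pictured as a band test on the luminance and color-difference components. The conditions (700) to (703) themselves are given in the figures, so the thresholds in the following Python sketch are purely hypothetical placeholders.

```python
# Hypothetical sketch of a per-pixel color test of the kind performed in step
# S304. The real conditions (700)-(703) are defined per object mode in the
# figures; the bands used here are invented for illustration only.
def pixel_matches(y, r_y, b_y, ref, tol=16, y_min=16, y_max=235):
    """ref: (R-Y, B-Y) of the target object's color; tol: assumed band width."""
    if not (y_min <= y <= y_max):           # reject too-dark / clipped pixels
        return False
    ref_r_y, ref_b_y = ref
    return (abs(r_y - ref_r_y) <= tol and   # color difference within the band
            abs(b_y - ref_b_y) <= tol)
```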
  • In step S 304 , the CPU 4 generates an object information table, which will be described later on, while carrying out the above processing of determining whether or not the conditions are satisfied, and stores the table in the RAM 4 a.
  • In the object information table, there are recorded a coordinate information indicating on which line and from and to which pixel positions the color of the object set as the target exists, and an object identification number indicating which number is allocated to the object having the same color as the color of the target object.
  • FIG. 23 shows an object information table obtained from the example shown in FIG. 22.
  • A line position indicates, with a Y-axis coordinate, the line on which an object having the same color as that of the target object exists.
  • A start pixel position indicates, with an X-axis coordinate, the coordinate of the first pixel data of the object having the same color as that of the target object.
  • An end pixel position indicates, with an X-axis coordinate, the coordinate of the last pixel data of the object having the same color as that of the target object.
  • An object identification number is a number allocated to each object recognized as having the same color as that of the target object.
  • data “191”, “221”, “258” and “1” as information indicative of the object A are respectively stored as the line number, the start pixel position, the end pixel position and the object identification number in the object information table as shown in FIG. 23.
  • data “191”, “318”, “319” and “2” as information indicative of the object B are respectively stored as the line number, the start pixel position, the end pixel position and the object identification number in the object information table.
  • In step S 305 , a window of the minimum size surrounding each object having the same color as that of the target object is set.
  • A window W A defined within the range of 216 ≦ X ≦ 273 and 161 ≦ Y ≦ 202 is set as the minimum window for surrounding the object A.
  • A window W B defined within the range of 309 ≦ X ≦ 358 and 191 ≦ Y ≦ 231 is set as the minimum window for surrounding the object B.
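  • Setting the minimum windows amounts to collecting, per object identification number, the bounding rectangle of all runs recorded in the object information table. A Python sketch of step S 305 , under the assumption that the table rows are already labeled as in FIG. 23:

```python
# Illustrative sketch of step S305: the object information table stores
# (line, start_pixel, end_pixel, object_id) rows; the minimum window of each
# object is the bounding box of all its rows.
def minimum_windows(table):
    """table: iterable of (line, start_px, end_px, obj_id) rows (cf. FIG. 23)."""
    windows = {}
    for line, start_px, end_px, obj_id in table:
        x0, x1, y0, y1 = windows.get(obj_id, (start_px, end_px, line, line))
        windows[obj_id] = (min(x0, start_px), max(x1, end_px),
                           min(y0, line), max(y1, line))
    return windows   # obj_id -> (Xmin, Xmax, Ymin, Ymax)

# With the rows of FIG. 23 this yields the window WA for object 1 and the
# window WB for object 2 (216..273 x 161..202 and 309..358 x 191..231).
```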
  • In step S 306 , the value of m is initially set to the minimum number among the object identification numbers stored in the object information table.
  • The symbol m is a variable ranging from the minimum object identification number to the maximum object identification number stored in the object information table.
  • In step S 307 , it is determined, based on a window center vector stored in a target object log table described later on, whether or not an expected position coordinate exists within the mth window set in step S 305 . This determination is carried out to decide which of the object A and the object B is the target object.
  • FIG. 24 is a table showing an example of the target object log table.
  • Information about a coordinate position of an object determined as the target object at every field is stored in this target object log table.
  • A field number is a temporarily allocated sequential number which is successively allocated in every field and reset at every 30th field.
  • a window X coordinate is a data indicating the X-axis direction range of the window set in step S 305 with an X-axis coordinate.
  • a window Y coordinate is a data indicating the Y-axis direction range of the window set in step S 305 with a Y-axis coordinate.
  • The example of the target object log table shown in FIG. 24 shows that at the time shown by the field number 17 the window area set for the target object is defined by 312 ≦ X ≦ 362 and 186 ≦ Y ≦ 228.
  • The example also shows that the center position of the window is displaced from the center position of the picture obtained by the image pickup in the direction and distance indicated by the window center vector (−47, +87).
  • The window X-axis coordinates, the window Y-axis coordinates and the window center vectors generated at the times indicated by the field numbers 18 and 19 are shown on the target object log table similarly to those described above and hence need not be described.
  • Consider the data of the window center vectors stored at the field number 17, the field number 18 and the field number 19. No considerable difference is found among the values indicated by these three window center vectors. This results not from the fact that the target object is not moving but from the fact that the window center vector indicates the movement vector of the target object relative to its position at the previous field.
  • the CPU 4 controls the pan/tilt drive mechanism 16 so that the center position of the window indicating the moving target object should be located at the center of the picture obtained by the image pickup. Therefore, since the window center vector is defined as the vector indicative of the direction and distance of displacement from the center of the picture obtained by the image pickup, the window center vector indicates the movement vector of the target object relative to the position thereof at the previous field.
  • In step S 307 , it is determined whether or not the expected position coordinate exists in the first window W A (216 ≦ X ≦ 273, 161 ≦ Y ≦ 202) corresponding to the object A.
  • The expected position coordinate is a position coordinate obtained from the window center vector at the previous field stored in the above target object log table. For example, since the window center vector (ΔX 19 , ΔY 19 ) set at the time indicated by the field number 19 is the vector (−49, +89), it can be expected that the window center vector obtained at the time indicated by the field number 20 will be substantially equal to the vector (−49, +89).
  • Since the window center vector stored in the target object log table indicates the displacement amount of the coordinates relative to the picture center coordinates (384, 120) and the direction of that displacement, the center position coordinate of the window set for the target object at the time indicated by the field number 20 can be expected to be (335, 209).
  • This center position coordinate of the window is the expected position coordinate.
  • It is determined in step S 307 that the expected position coordinate (335, 209) obtained from the target object log table does not exist in the window W A defined as the minimum window for surrounding the object A within the range of 216 ≦ X ≦ 273 and 161 ≦ Y ≦ 202. Therefore, the CPU 4 determines that the object A is not the set target object, and the processing proceeds to step S 308 .
  • In step S 308 , the value of m is incremented, and then the processing returns to step S 307 again.
  • In step S 307 , it is then determined whether or not the expected position coordinate (335, 209) exists in the window W B defined as the minimum window for surrounding the object B within the range of 309 ≦ X ≦ 358 and 191 ≦ Y ≦ 231.
  • Since the expected position coordinate exists in the window W B , the CPU 4 determines that the object B is the set target object, and then the processing proceeds to step S 309 .
  • In step S 309 , the CPU 4 stores the coordinates of the window W B defined within the range of 309 ≦ X ≦ 358 and 191 ≦ Y ≦ 231 as a window X-axis coordinate and a window Y-axis coordinate in the area, indicated by the field number 20, of the target object log table in the RAM 4 a.
  • The CPU 4 also calculates the center coordinate of the window W B from the coordinates of the window W B and stores the displacement of this center coordinate from the picture center as the window center vector in the RAM 4 a.
  • In this example, the vector (−52, +91) is stored therein as the window center vector.
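  • The decision of steps S 306 to S 309 can be sketched as follows. The picture center (384, 120) follows the example above; the function and the table layout are illustrative assumptions.

```python
# Illustrative sketch of steps S306-S309: predict the target position from the
# previous field's window center vector and pick the window that contains it.
CENTER = (384, 120)   # picture center in the example of FIG. 22

def find_target(windows, prev_vector, log_table, field_no):
    """windows: obj_id -> (Xmin, Xmax, Ymin, Ymax); prev_vector: (dX, dY)."""
    ex = CENTER[0] + prev_vector[0]   # expected position = picture center
    ey = CENTER[1] + prev_vector[1]   # displaced by the previous field's vector
    for obj_id in sorted(windows):                     # steps S306 / S308
        x0, x1, y0, y1 = windows[obj_id]
        if x0 <= ex <= x1 and y0 <= ey <= y1:          # step S307
            cx, cy = (x0 + x1) / 2, (y0 + y1) / 2      # window center
            vector = (cx - CENTER[0], cy - CENTER[1])  # new window center vector
            log_table[field_no] = ((x0, x1), (y0, y1), vector)   # step S309
            return obj_id, vector
    return None, None    # no window contains the expected position
```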
  • In step S 310 , based on the window center vector newly stored in step S 309 , the CPU 4 controls the tilt/pan drive mechanism 16 so that the center of the window W B should agree with the center of the picture. Specifically, based on the window center vector, the CPU 4 supplies the control signal to the motor drive circuit 16 b.
  • In step S 311 , based on the window center vector, the CPU 4 supplies an offset value to the estimation value generating circuit 62 of the focus control circuit 34 .
  • The offset value is supplied to each of the counters respectively provided in the window pulse generating circuits 625 , 635 shown in FIGS. 3 and 4. In the initial state, each of the center coordinates of the windows W 1 to W 11 agrees with the center coordinate of the picture obtained by the image pickup; the offset value displaces the window centers in accordance with the window center vector.
  • When the offset values have been supplied to the focus control circuit 34 in step S 311 , the processing returns to step S 100 .
  • As described above, the present invention achieves the following effects. First, since a plurality of estimation values can be obtained by combining a plurality of filter coefficients and a plurality of window sizes, it is possible to handle various objects.
  • Since the weight data are allocated to the estimation value generating circuits, and hence the total estimation value can be obtained based on the plurality of estimation values and the weight data respectively corresponding to the estimation values, the accuracy of the finally obtained estimation value is improved.
  • Moreover, the estimation value curve describes a smooth parabola around the focus point, which allows high-speed determination of the maximum estimation value. Therefore, the autofocus operation itself can be carried out at high speed.
  • Since the estimation values determined to be improper when the total estimation value is calculated are identified among the plurality of estimation values and are not used for the determination of the total estimation value, the accuracy of the estimation values is further improved. For example, if a proper estimation value cannot be obtained with a small window, then the lens is focused on an object by using the estimation value corresponding to a window larger than that small window. Therefore, it is possible to focus the lens on some object, which prevents the autofocus operation from continuing for a long period of time.
  • Further, in order to determine whether or not the maximum point represents the maximum estimation value, the lens is moved from the maximum point by a distance which is a predetermined multiple of the focal depth. Even if the hill of the estimation values is flat, it is therefore possible to determine whether or not the maximum point represents the maximum estimation value once the lens has been moved by the predetermined distance, so that the focus point can be determined at high speed. For example, it is possible to avoid outputting an image which becomes considerably blurred and strange because the lens goes considerably out of focus while it is determined whether or not the maximum point represents the maximum estimation value.
  • Since the just focus position Xg is calculated, by barycentric calculation for example, based on a plurality of selected estimation values and the lens positions corresponding to the plurality of selected estimation values, it is possible to calculate the just focus position Xg even if the estimation value includes a noise, or constantly includes a noise when the luminance is low, and hence it is possible to carry out the focus control with high accuracy.
  • Moreover, since the just focus position Xg is calculated by barycentric calculation, for example, it suffices for the focus lens to pass the just focus position once; the just focus position can then be calculated. Therefore, it is possible to determine the just focus position at high speed to that extent.
  • Since the area detecting circuit 38 selects the area where the pixel data indicative of the same color as that of the target object exists, and the processing of determining whether or not the conditions are satisfied is carried out only for the pixel data of the selected area, it is possible to detect the position of the target object without imposing an excessive processing load on the CPU 4 .
  • Since the object mode is set in response to the color of the set target object, and the calculation of the area detecting circuit 38 for determining whether or not the conditions are satisfied and the calculation of the CPU 4 for determining whether or not the conditions are satisfied are changed in response to the set object mode, it is possible to precisely recognize the object regardless of the color of the set object.
  • Since the object information table including positional information of each object and the target object log table including information about the movement log of the target object are generated, even if a plurality of objects have the same color as that of the target object, it is possible to precisely recognize the target object.

Abstract

A focus control apparatus according to the present invention is a focus control apparatus to be used in a video camera or the like and having imaging means 1, 2 for imaging an object through a focus lens to output an electric signal corresponding to the object, including extracting means 621, 631 for extracting high-frequency components of the electric signals output from the imaging means 1, 2, estimation value generating means 62, 63 for generating estimation values indicative of focus states of the object based on the high-frequency components output from the extracting means 621, 631, a storage means 66 for storing a plurality of estimation values changed as the focus lens is moved in response to a focus lens position in order to obtain a just focus position, a selecting means 4 for selecting a plurality of estimation values to be used for calculation of the just focus position from the estimation values stored in the storage means 66, and a control means 4 for calculating the just focus position based on the plurality of estimation values selected by the selecting means and lens positions corresponding to the plurality of selected estimation values.
According to the above arrangement, since the just focus position is calculated based on a plurality of selected estimation values and the lens positions corresponding to the plurality of selected estimation values, it is possible to carry out the focus control with high accuracy even if the estimation value includes a noise, or constantly includes a noise when the luminance is low. Further, once the focus lens has passed the just focus position only once, it is possible to calculate the just focus position. Therefore, it is possible to determine the just focus position at high speed to that extent.

Description

    TECHNICAL FIELD
  • The present invention relates to a focus control apparatus and a focus control method suitable for use in a video camera or the like. [0001]
  • BACKGROUND ART
  • A consumer video camera has employed an autofocus apparatus for automatically focusing a lens on an object. [0002]
  • It is well known that, in order to discriminate whether a lens is in focus or out of focus, it is sufficient to discriminate whether the contrast of the video signal obtained by an image pickup is high or low. In other words, if the contrast is high, then the lens is in focus; if on the other hand the contrast is low, then the lens is out of focus. A high-frequency component is extracted from the video signal obtained by the image pickup, and a data obtained by integrating the high-frequency component over a predetermined set area is generated. It is possible to discriminate whether the contrast is high or low by using the integrated data, which indicates how much high-frequency component there is in the predetermined area. In general, this data is called an estimation value. Accordingly, it is possible to realize the autofocus method by driving a focus lens so that the estimation value becomes maximum (i.e., the contrast becomes maximum). [0003]
  • When an object, a background and an image pickup condition are not changed, a noise resulting from an external disturbance is seldom included in the estimation value. However, when a video camera for imaging a moving picture is used, the object, the background and the image pickup condition are changed on a real time base. As a result, the estimation value sometimes includes the noise. Therefore, it is very difficult to detect a precise estimation value from a picture of an object changed on a real time base. [0004]
  • Moreover, if the estimation value includes any noise, when the focus lens is passed by a focus position, the estimation value does not become maximum, and, conversely, when the focus lens is not located on the focus position, the estimation value becomes maximum. This may lead to misjudgment of the focus position. [0005]
  • As a result, in order to detect a lens position where the estimation value is maximum, the focus lens is reciprocated in the vicinity of the focus position where the estimation value should be maximum. Therefore, it takes a considerable time to focus the focus lens. [0006]
  • If the luminance is low, the estimation value obtained in this case constantly includes the noise, which makes it difficult to detect the focus position. Therefore, it is impossible to carry out a high-accuracy focus control. [0007]
  • A sampling point used for obtaining the estimation value does not always coincide with the position where the estimation value becomes maximum. Moreover, since the focus lens is reciprocated plural times in the vicinity of this maximum estimation value position, i.e., the focus position to determine the focus position, it takes a considerable time to determine the focus position. [0008]
  • For example, it is sometimes observed that an image picked up by a high-end video camera apparatus for use in a broadcasting station or for professional use is transmitted on the air as a live relay broadcast. If it takes a considerable time to carry out the autofocus operation when such a live relay broadcast is carried out, a video signal indicative of a blurred picture is consequently transmitted on the air. Therefore, what is required for the video camera for use in the broadcasting station or for professional use is not a simplified, inexpensive and small autofocus apparatus such as that used in a consumer video camera, but high-accuracy and high-speed focus control. [0009]
  • DISCLOSURE OF THE INVENTION
  • It is an object of the present invention to provide a focus control apparatus and a focus control method which can control a focus at high speed with high accuracy. [0010]
  • A focus control apparatus according to the present invention is a focus control apparatus having an imaging means for imaging an object through a focus lens to output an electric signal corresponding to the object, including an extracting means for extracting a high-frequency component of the electric signal output from the imaging means, an estimation value generating means for generating an estimation value indicative of a focus state of the object based on the high-frequency component output from the extracting means, a storage means for storing the plurality of estimation values changed as the focus lens is moved in response to a focus lens position in order to obtain a just focus position, a selecting means for selecting a plurality of estimation values to be used for calculation of the just focus position from the estimation values stored in the storage means, and a control means for calculating the just focus position based on the plurality of estimation values selected by the selecting means and lens positions corresponding to the plurality of selected estimation values. [0011]
  • A focus control method according to the present invention is a focus control method of moving a focus lens of a video camera to a just focus position, including a) a step of extracting a high-frequency component of an electric signal output from an imaging means, b) a step of generating an estimation value indicative of a focus state of an object based on the high-frequency component extracted in the step a), c) a step of storing the plurality of estimation values changed as the focus lens is moved in response to a focus lens position, d) a step of selecting a plurality of estimation values to be used for calculation of the just focus position from the estimation values stored in the step c), e) a step of calculating the just focus position based on the plurality of estimation values selected in the step d) and lens positions corresponding to the plurality of selected estimation values, and f) a step of moving the focus lens to the just focus position. [0012]
  • According to the present invention, since the just focus position is calculated based on a plurality of selected estimation values and the lens positions corresponding to the plurality of selected estimation values, even if the estimation value includes a noise or the estimation value constantly includes a noise when the luminance is low, it is possible to carry out the focus control with high accuracy. Even if the focus lens is passed by the just focus position only once, then it is possible to calculate the just focus position. Therefore, it is possible to determine the just focus position at high speed to that extent.[0013]
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a diagram showing an entire arrangement of an imaging apparatus formed of a video camera; [0014]
  • FIG. 2 is a diagram showing a specific arrangement of an [0015] autofocus controlling circuit 34;
  • FIG. 3 is a diagram showing a specific arrangement of a horizontal-direction estimation [0016] value generating circuit 62;
  • FIG. 4 is a diagram showing a specific arrangement of a vertical-direction estimation [0017] value generating circuit 63;
  • FIG. 5 is a table showing a filter coefficient α and a window size set for respective circuits of the horizontal-direction estimation [0018] value generating circuit 62 and the vertical-direction estimation value generating 63;
  • FIG. 6 is a diagram used to explain the respective window sizes; [0019]
  • FIG. 7 is a table showing weight data W set for respective estimation values E; [0020]
  • FIG. 8 is a diagram showing divided areas of a picture presented by an [0021] area detecting circuit 38;
  • FIG. 9 is a diagram showing a specific circuit arrangement of the [0022] area detecting circuit 38;
  • FIGS. [0023] 10 to 15 are flowcharts used to explain an autofocus operation;
  • FIG. 16 is a flowchart used to explain an operation of determining a target object; [0024]
  • FIG. 17 is a diagram showing a movement of a lens when a lens movement direction is determined in order to focus the lens on an object; [0025]
  • FIGS. 18A and 18B are diagrams showing a state that a non-target object lies in a window; [0026]
  • FIG. 19 is a diagram showing fluctuation of estimation values stored in a [0027] RAM 66 when the lens movement direction is determined;
  • FIG. 20 is a table showing data stored in the [0028] RAM 66 during the autofocus operation;
  • FIG. 21 is a graph showing change of the estimation values obtained upon the autofocus operation; [0029]
  • FIG. 22 is a diagram showing a state of image pickup of an object A and an object B having the same color; [0030]
  • FIG. 23 is a table of information about an object; and [0031]
  • FIG. 24 is a log table of a target object.[0032]
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Initially, a focus control method and a video camera employing the above focus control method according to an embodiment of the present invention will hereinafter be described with reference to FIGS. [0033] 1 to 24.
  • A total arrangement of the video camera apparatus according to the present invention will be described with reference to FIG. 1. The video camera apparatus includes a [0034] lens block 1 for optically condensing incident light to the front of an imaging device, an imaging block 2 for converting light incident from the lens block into RGB electric video signals obtained by an image pickup, a signal processing block 3 for subjecting the video signals to a predetermined signal processing, and a CPU 4 for controlling the lens block 1, the imaging block 2, and the signal processing block 3.
  • The [0035] lens block 1 is detachably provided in a video camera apparatus body. This lens block 1 includes, as optical elements, a zoom lens 11 for continuously changing a focal length, by moving along an optical axis, without changing a position of an image point, to thereby zoom an image of an object, a focus lens 12 for bringing the object into focus, and an iris mechanism 13 for adjusting an amount of light incident on the front of the imaging device by changing its aperture area.
  • The [0036] lens block 1 further includes a position detecting sensor 11 a for detecting an optical-axis direction position of the zooming lens 11, a drive motor 11 b for moving the zooming lens 11 in the optical-axis direction, a zoom-lens drive circuit 11 c for supplying a drive control signal to the drive motor 11 b, a position detecting sensor 12 a for detecting an optical-axis direction position of the focus lens 12, a drive motor 12 b for moving the focus lens 12 in the optical-axis direction, a focus-lens drive circuit 12 c for supplying a drive control signal to the drive motor 12 b, a position detecting sensor 13 a for detecting an aperture position of the iris mechanism 13, a drive motor 13 b for opening and closing the iris mechanism 13, and an iris mechanism drive circuit 13 c for supplying a drive control signal to the drive motor 13 b.
Detection signals from the position detecting sensors 11a, 12a, 13a are always supplied to the CPU 4. The zoom-lens drive circuit 11c, the focus-lens drive circuit 12c, and the iris mechanism drive circuit 13c are electrically connected to the CPU 4 so as to be supplied with control signals from the latter.
The lens block 1 has an EEPROM 15 for storing focal length data and aperture ratio data of the zoom lens 11, focal length data and aperture ratio data of the focus lens 12, and a manufacturer name and a serial number of the lens block 1. The EEPROM 15 is connected to the CPU 4 so that the respective data stored therein are read out therefrom based on a read command from the CPU 4.
The imaging block 2 has a color separation prism 21 for color-separating incident light from the lens block 1 into three primary-color lights of red (R), green (G) and blue (B), and imaging devices 22R, 22G and 22B for converting the lights of the R component, G component and B component, which are obtained by separating the light at the color separation prism 21 and are focused on the image surfaces thereof, into electric video signals (R), (G), (B) to output these signals. Each of the imaging devices 22R, 22G and 22B is formed of a CCD (Charge Coupled Device), for example.
The imaging block 2 has preamplifiers 23R, 23G, 23B for respectively amplifying the levels of the video signals (R), (G), (B) output from the imaging devices 22R, 22G, 22B and for carrying out correlated double sampling to remove a reset noise.
The imaging block 2 further has a timing signal generating circuit 24 for generating a VD signal, an HD signal and a CLK signal, each serving as a basic clock used for operation of each of the circuits in the video camera apparatus, based on a reference clock from a reference clock circuit provided therein, and a CCD drive circuit 25 for supplying a drive clock to the imaging devices 22R, 22G and 22B based on the VD signal, the HD signal and the CLK signal supplied from the timing signal generating circuit 24. The VD signal is a clock signal representing one vertical period. The HD signal is a clock signal representing one horizontal period. The CLK signal is a clock signal representing one pixel clock. The timing clock formed of these VD, HD and CLK signals is supplied to each of the circuits in the video camera apparatus through the CPU 4, though not shown.
The signal processing block 3 is a block provided in the video camera apparatus for subjecting the video signals (R), (G), (B) supplied from the imaging block 2 to a predetermined signal processing. The signal processing block 3 has A/D converter circuits 31R, 31G, 31B for respectively converting the analog video signals (R), (G), (B) into digital video signals (R), (G), (B), gain control circuits 32R, 32G, 32B for respectively controlling the gains of the digital video signals (R), (G), (B) based on a gain control signal from the CPU 4, and signal processing circuits 33R, 33G, 33B for respectively subjecting the digital video signals (R), (G), (B) to a predetermined signal processing. The signal processing circuits 33R, 33G, 33B have knee circuits 331R, 331G, 331B for compressing the video signals above a certain level, γ correction circuits 332R, 332G, 332B for correcting the levels of the video signals in accordance with a preset γ curve, and B/W clip circuits 333R, 333G, 333B for clipping a black level smaller than a predetermined level and a white level larger than a predetermined level. Each of the signal processing circuits 33R, 33G, 33B may have a known black γ correction circuit, a known contour emphasizing circuit, a known linear matrix circuit and so on, other than the knee circuit, the γ correction circuit, and the B/W clip circuit.
The signal processing block 3 has an encoder 37 for receiving the video signals (R), (G), (B) output from the signal processing circuits 33R, 33G, 33B and for generating a luminance signal (Y) and color-difference signals (R−Y), (B−Y) from the video signals (R), (G), (B).
The signal processing block 3 further has a focus control circuit 34 for receiving the video signals (R), (G), (B) respectively output from the gain control circuits 32R, 32G, 32B and for generating estimation data E and direction data Dr, both used for controlling the focus, based on the video signals (R), (G), (B), an iris control circuit 35 for receiving the video signals (R), (G), (B) respectively output from the signal processing circuits 33R, 33G, 33B and for controlling the iris based on the levels of the received signals so that the amount of light incident on each of the imaging devices 22R, 22G, 22B becomes a proper amount of light, and a white balance control circuit 36 for receiving the video signals (R), (G), (B) respectively output from the signal processing circuits 33R, 33G, 33B and for carrying out white balance control based on the levels of the received signals.
The iris control circuit 35 has an NAM circuit for selecting a signal having a maximum level from the supplied video signals (R), (G), (B), and an integrating circuit for dividing the selected signal with respect to corresponding areas of a picture to integrate each of the video signals corresponding to the areas of the picture. The iris control circuit 35 considers every illumination condition of an object, such as back lighting, front lighting, flat lighting, spot lighting or the like, to generate an iris control signal used for controlling the iris, and supplies this iris control signal to the CPU 4. The CPU 4 supplies a control signal to the iris mechanism drive circuit 13c based on the iris control signal.
The white balance control circuit 36 generates a white balance control signal from the supplied video signals (R), (G), (B) so that the generated signal satisfies (R−Y)=0 and (B−Y)=0, and supplies this white balance control signal to the CPU 4. The CPU 4 supplies a gain control signal to the gain control circuits 32R, 32G, 32B based on the white balance control signal.
The signal processing block 3 further has an area detecting circuit 38 and a frame memory 39.
The area detecting circuit 38 is a circuit for receiving the luminance signal (Y) and the color difference signals (R−Y), (B−Y) from the encoder 37 and for, based on the luminance signal and the color difference signals, selecting an area, where pixel data having the same color as that of an object designated as a target object exists, from the areas set in the whole picture. The area detecting circuit will be described in detail later on.
The frame memory 39 is a memory for receiving the luminance signal (Y) and the color difference signals (R−Y), (B−Y) from the encoder 37 and for temporarily storing them. The frame memory 39 is formed of three memories: a frame memory for the luminance signal (Y), a frame memory for the color difference signal (R−Y), and a frame memory for the color difference signal (B−Y). The luminance signal and the color difference signals stored in the respective frame memories are read out therefrom based on read addresses supplied from the CPU 4, and the read luminance and color difference signals are supplied to the CPU 4.
The focus control circuit 34 will hereinafter be described in detail with reference to FIG. 2.
The focus control circuit 34 has a luminance signal generating circuit 61, a horizontal-direction estimation value generating circuit 62, a vertical-direction estimation value generating circuit 63, and a microcomputer 64.
The luminance signal generating circuit 61 is a circuit for generating a luminance signal from the supplied video signals R, G, B. In order to determine whether the lens is in focus or out of focus, it is sufficient to determine whether the contrast is high or low. Since change of the contrast has no relation to change of the level of a color difference signal, it is possible to determine whether the contrast is high or low by detecting only the change of the level of the luminance signal.
The luminance signal generating circuit 61 can generate the luminance signal Y by subjecting the supplied video signals R, G, B to the known calculation

Y = 0.3R + 0.59G + 0.11B  (1)
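As a minimal illustration of equation (1), the per-pixel calculation might be sketched as follows in Python; the array-based formulation and the function name are illustrative assumptions, not part of the embodiment.

import numpy as np

def luminance(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Equation (1): Y = 0.3R + 0.59G + 0.11B, applied per pixel.
    return 0.3 * r + 0.59 * g + 0.11 * b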
The horizontal-direction estimation value generating circuit 62 is a circuit for generating a horizontal-direction estimation value. The horizontal-direction estimation value is data indicating how much the level of the luminance signal changes when the luminance signal is sampled in the horizontal direction, i.e., data indicating how much contrast there is in the horizontal direction.
The horizontal-direction estimation value generating circuit 62 has a first horizontal-direction estimation value generating circuit 62a for generating a first horizontal-direction estimation value E1, a second horizontal-direction estimation value generating circuit 62b for generating a second horizontal-direction estimation value E2, a third horizontal-direction estimation value generating circuit 62c for generating a third horizontal-direction estimation value E3, a fourth horizontal-direction estimation value generating circuit 62d for generating a fourth horizontal-direction estimation value E4, a fifth horizontal-direction estimation value generating circuit 62e for generating a fifth horizontal-direction estimation value E5, a sixth horizontal-direction estimation value generating circuit 62f for generating a sixth horizontal-direction estimation value E6, a seventh horizontal-direction estimation value generating circuit 62g for generating a seventh horizontal-direction estimation value E7, an eighth horizontal-direction estimation value generating circuit 62h for generating an eighth horizontal-direction estimation value E8, a ninth horizontal-direction estimation value generating circuit 62i for generating a ninth horizontal-direction estimation value E9, a tenth horizontal-direction estimation value generating circuit 62j for generating a tenth horizontal-direction estimation value E10, an eleventh horizontal-direction estimation value generating circuit 62k for generating an eleventh horizontal-direction estimation value E11, and a twelfth horizontal-direction estimation value generating circuit 62l for generating a twelfth horizontal-direction estimation value E12.
A detailed arrangement of the horizontal-direction estimation value generating circuit 62 will hereinafter be described with reference to FIG. 3.
The first horizontal-direction estimation value generating circuit 62a of the horizontal-direction estimation value generating circuit 62 has a high-pass filter 621 for extracting a high-frequency component of the luminance signal, an absolute-value calculating circuit 622 for converting the extracted high-frequency component into an absolute value to thereby obtain data having positive values only, a horizontal-direction integrating circuit 623 for integrating the absolute-value data in the horizontal direction to thereby cumulatively add the data of the high-frequency component in the horizontal direction, a vertical-direction integrating circuit 624 for integrating, in the vertical direction, the data integrated in the horizontal direction, and a window pulse generating circuit 625 for supplying enable signals used for allowing the integrating operations of the horizontal-direction integrating circuit 623 and the vertical-direction integrating circuit 624.
The high-pass filter 621 is formed of a one-dimensional infinite impulse response filter for filtering the high-frequency component of the luminance signal in response to the one sample clock CLK from the window pulse generating circuit 625. The high-pass filter 621 has a cutoff frequency characteristic expressed by

(1 − Z⁻¹)/(1 − αZ⁻¹)  (2)

The first horizontal-direction estimation value generating circuit 62a has a value of α = 0.5 and has a frequency characteristic corresponding to this value of α.
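Under the assumption that the characteristic (2) is realized by the recursive difference equation y[n] = x[n] − x[n−1] + α·y[n−1], a sketch of the filter might look as follows; the function name and signature are illustrative.

import numpy as np

def high_pass(x: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Difference equation for (1 - Z^-1)/(1 - alpha * Z^-1):
    # y[n] = x[n] - x[n-1] + alpha * y[n-1], with x[-1] and y[-1] taken as 0.
    y = np.zeros(len(x))
    prev_x = 0.0
    for n in range(len(x)):
        y[n] = x[n] - prev_x + (alpha * y[n - 1] if n > 0 else 0.0)
        prev_x = x[n]
    return y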
The window pulse generating circuit 625 has a plurality of counters operated based on the clock signal VD representing one vertical period, the clock signal HD representing one horizontal period, and the clock signal CLK representing one sample clock. The window pulse generating circuit 625 supplies the enable signal to the horizontal-direction integrating circuit 623 at every sample clock signal CLK and supplies the enable signal to the vertical-direction integrating circuit 624 at every horizontal period, based on the counted values of the counters. The window pulse generating circuit 625 of the first horizontal-direction estimation value generating circuit 62a has a counter whose initial count value is set so that the size of the window is 192 pixels×60 pixels. Therefore, the first horizontal-direction estimation value E1 output from the horizontal-direction estimation value generating circuit 62 indicates data obtained by integrating all the high-frequency components in the window of 192 pixels×60 pixels. The counter is connected to the CPU 4 so as to be supplied with an offset value from the latter. The initial count value is a count value set so that the window center coincides with the center of the picture obtained by image pickup. The offset value supplied from the CPU 4 means a count value to be added to the initial count value. Therefore, when the offset value is supplied from the CPU 4, the count value of the counter is changed and consequently the center position of the window is changed.
Similarly to the first horizontal-direction estimation value generating circuit 62a, each of the second to twelfth horizontal-direction estimation value generating circuits 62b to 62l has a high-pass filter 621, an absolute-value calculating circuit 622, a horizontal-direction integrating circuit 623, a vertical-direction integrating circuit 624, and a window pulse generating circuit 625. The difference among the respective circuits 62a to 62l lies in that they have different combinations of filter coefficients α and window sizes.

Therefore, the estimation values E1 to E12 generated by the respective circuits are different from one another.
FIG. 5A shows the filter coefficients α and the window sizes which are respectively set for the first horizontal-direction estimation value generating circuit 62a to the twelfth horizontal-direction estimation value generating circuit 62l. The reason for setting such different filter coefficients will hereinafter be described.
For example, the high-pass filter having a high cutoff frequency is very suitable for use when the lens is substantially in a just focus state (which means a state in which the lens is in focus). The reason for this is that, in the vicinity of the just focus point, the estimation value changes at a considerably large rate as the lens is moved. Since the estimation value changes at only a small rate when the lens is considerably out of focus, the high-pass filter having the high cutoff frequency is not suitable for use when the lens is considerably out of focus.

On the other hand, the high-pass filter having a low cutoff frequency is suitable for use when the lens is considerably out of focus. The reason for this is that when the lens is moved while being considerably out of focus, the estimation value changes at a considerably large rate. Since the estimation value changes at only a small rate when the lens is moved in the substantial just focus state, the high-pass filter having the low cutoff frequency is not suitable for use in the substantial just focus state.

In short, each of the high-pass filter having the high cutoff frequency and the high-pass filter having the low cutoff frequency has both an advantage and a disadvantage, and it is difficult to determine which of the high-pass filters is more suitable. Therefore, preferably, a plurality of high-pass filters having different filter coefficients are used to generate a plurality of estimation values so that the most proper estimation value can be selected.
The horizontal-direction estimation value generating circuit 62 according to this embodiment has plural kinds of preset windows shown in FIG. 6A. When the offset value is not supplied from the CPU 4 to the counter provided in the window pulse generating circuit 625, the centers of these windows coincide with the center of the picture obtained by image pickup. A window W1 is a window of 192 pixels×60 pixels. A window W2 is a window of 132 pixels×60 pixels. A window W3 is a window of 384 pixels×120 pixels. A window W4 is a window of 264 pixels×120 pixels. A window W5 is a window of 768 pixels×240 pixels. A window W6 is a window of 548 pixels×240 pixels.
It is possible to generate different estimation values corresponding to the respective windows by setting a plurality of windows as described above. Therefore, regardless of the size of the object to be brought into focus, it is possible to obtain a proper estimation value from any of the first horizontal-direction estimation value generating circuit 62a to the twelfth horizontal-direction estimation value generating circuit 62l. A sketch of how one such estimation value might be computed is given below.
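The following Python sketch combines the high_pass filter above with window-limited integration to produce one horizontal-direction estimation value; the centered-window formulation and names are illustrative assumptions. Under these assumptions, E1 would correspond to horizontal_estimation(frame, 0.5, 192, 60).

import numpy as np

def horizontal_estimation(y_frame: np.ndarray, alpha: float,
                          win_w: int, win_h: int) -> float:
    # Integrate |high-pass output| over a window centered on the picture,
    # line by line (horizontal integration), then accumulate the line sums
    # over the window height (vertical integration).
    h, w = y_frame.shape
    top, left = (h - win_h) // 2, (w - win_w) // 2
    total = 0.0
    for row in y_frame[top:top + win_h]:
        hp = high_pass(row[left:left + win_w], alpha)
        total += np.abs(hp).sum()
    return total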
An arrangement of the vertical-direction estimation value generating circuit 63 will be described with reference to FIGS. 2 and 4.
The vertical-direction estimation value generating circuit 63 is a circuit for generating an estimation value in the vertical direction. The estimation value in the vertical direction is data indicating how much the level of the luminance signal changes when the luminance signal is sampled in the vertical direction, i.e., data indicating how much contrast there is in the vertical direction.
The vertical-direction estimation value generating circuit 63 has a first vertical-direction estimation value generating circuit 63a for generating a first vertical-direction estimation value E13, a second vertical-direction estimation value generating circuit 63b for generating a second vertical-direction estimation value E14, a third vertical-direction estimation value generating circuit 63c for generating a third vertical-direction estimation value E15, a fourth vertical-direction estimation value generating circuit 63d for generating a fourth vertical-direction estimation value E16, a fifth vertical-direction estimation value generating circuit 63e for generating a fifth vertical-direction estimation value E17, a sixth vertical-direction estimation value generating circuit 63f for generating a sixth vertical-direction estimation value E18, a seventh vertical-direction estimation value generating circuit 63g for generating a seventh vertical-direction estimation value E19, an eighth vertical-direction estimation value generating circuit 63h for generating an eighth vertical-direction estimation value E20, a ninth vertical-direction estimation value generating circuit 63i for generating a ninth vertical-direction estimation value E21, a tenth vertical-direction estimation value generating circuit 63j for generating a tenth vertical-direction estimation value E22, an eleventh vertical-direction estimation value generating circuit 63k for generating an eleventh vertical-direction estimation value E23, and a twelfth vertical-direction estimation value generating circuit 63l for generating a twelfth vertical-direction estimation value E24.
A detailed arrangement of the vertical-direction estimation value generating circuit 63 will hereinafter be described with reference to FIG. 4.
The first vertical-direction estimation value generating circuit 63a of the vertical-direction estimation value generating circuit 63 has a horizontal-direction mean value generating circuit 631 for generating mean value data of the levels of the luminance signals in the horizontal direction, a high-pass filter 632 for extracting a high-frequency component of the mean value data of the luminance signals, an absolute-value calculating circuit 633 for converting the extracted high-frequency component into an absolute value to thereby obtain data having positive values only, a vertical-direction integrating circuit 634 for integrating the absolute-value data in the vertical direction to thereby cumulatively add the data of the high-frequency component in the vertical direction, and a window pulse generating circuit 635 for supplying enable signals used for allowing the integrating operations of the horizontal-direction mean value generating circuit 631 and the vertical-direction integrating circuit 634.
The high-pass filter 632 is formed of a one-dimensional infinite impulse response filter for filtering the high-frequency component of the luminance signal in response to the one horizontal period signal HD from the window pulse generating circuit 635. The high-pass filter 632 has the same cutoff frequency characteristic as that of the high-pass filter 621 of the first horizontal-direction estimation value generating circuit 62a. The first vertical-direction estimation value generating circuit 63a has a value of α = 0.5 and has a frequency characteristic corresponding to this value of α.
The window pulse generating circuit 635 has a plurality of counters operated based on the clock signal VD representing one vertical period, the clock signal HD representing one horizontal period, and the clock signal CLK representing one sample clock supplied from the CPU 4. The window pulse generating circuit 635 supplies the enable signal to the horizontal-direction mean value generating circuit 631 based on the counted value of the counter at every sample clock signal CLK and supplies the enable signal to the vertical-direction integrating circuit 634 at every horizontal period. The window pulse generating circuit 635 of the first vertical-direction estimation value generating circuit 63a has a counter whose initial count value is set so that the size of the window is 120 pixels×80 pixels. Therefore, the first vertical-direction estimation value E13 output from the vertical-direction estimation value generating circuit 63 indicates data obtained by integrating all the high-frequency components in the window of 120 pixels×80 pixels. The counter is connected to the CPU 4 so as to be supplied with an offset value from the latter. The initial count value is a count value set so that the window center coincides with the center of the picture obtained by image pickup. The offset value supplied from the CPU 4 means a count value to be added to the initial count value. Therefore, when the offset value is supplied from the CPU 4, the count value of the counter in the window pulse generating circuit 635 is changed and consequently the center position of the window is changed.
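A vertical-direction estimation value differs from the horizontal one in that each line inside the window is first averaged, and the high-pass filter then runs down the resulting column of line means. A minimal sketch, under the same centered-window assumption and reusing the illustrative high_pass function above, might be:

import numpy as np

def vertical_estimation(y_frame: np.ndarray, alpha: float,
                        win_w: int, win_h: int) -> float:
    # Mean of each line inside the window (horizontal-direction mean value),
    # high-pass filtered along the vertical direction, then integrated as
    # absolute values (vertical-direction integration).
    h, w = y_frame.shape
    top, left = (h - win_h) // 2, (w - win_w) // 2
    line_means = y_frame[top:top + win_h, left:left + win_w].mean(axis=1)
    return float(np.abs(high_pass(line_means, alpha)).sum())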
Similarly to the first vertical-direction estimation value generating circuit 63a, each of the second to twelfth vertical-direction estimation value generating circuits 63b to 63l has a horizontal-direction mean value generating circuit 631, a high-pass filter 632, an absolute-value calculating circuit 633, a vertical-direction integrating circuit 634, and a window pulse generating circuit 635. The difference among the respective circuits lies in that they have different combinations of filter coefficients α and window sizes, similarly to those of the horizontal-direction estimation value generating circuit 62.

Therefore, the estimation values E13 to E24 generated by the respective circuits are different from one another.
FIG. 5B shows the filter coefficients α and the window sizes, both of which are respectively set for the first vertical-direction estimation value generating circuit 63a to the twelfth vertical-direction estimation value generating circuit 63l.
The vertical-direction estimation value generating circuit 63 according to this embodiment has plural kinds of preset windows shown in FIG. 6B. When the offset value is not supplied from the CPU 4 to the counter provided in the window pulse generating circuit 635, the centers of these windows coincide with the center of the picture obtained by image pickup. A window W7 is a window of 120 pixels×80 pixels. A window W8 is a window of 120 pixels×60 pixels. A window W9 is a window of 240 pixels×160 pixels. A window W10 is a window of 240 pixels×120 pixels. A window W11 is a window of 480 pixels×320 pixels. A window W12 is a window of 480 pixels×240 pixels.
It is possible to generate different estimation values corresponding to the respective combinations of filter coefficients and windows by providing circuits having a plurality of filter characteristics and a plurality of windows as described above. Therefore, since a total estimation is generated from a plurality of estimation values regardless of the image pickup state of the object to be brought into focus, it is possible to obtain a precise total estimation value even if any one of the estimation values is not proper.
Therefore, according to this embodiment, since the focus control circuit has twenty-four estimation value generating circuits for generating twenty-four kinds of estimation values obtained from the combinations of twelve window sizes and two filter coefficients, it is possible to obtain plural kinds of estimation values. Moreover, since the total estimation is obtained based on the respective estimation values, it is possible to improve the accuracy of the estimation.
The microcomputer 64 will be described with reference to FIGS. 2 and 7.
The microcomputer 64 is a circuit for receiving the twenty-four estimation values E1 to E24 generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 and for calculating, based on these twenty-four estimation values, the direction in which the lens is to be moved and the lens position where the estimation value is maximum, i.e., the lens position where the lens is in focus.
The microcomputer 64 has a ROM 65 which stores a program used for processing the twenty-four estimation values in accordance with predetermined flowcharts. As shown in FIG. 7, the ROM 65 stores twenty-four weight data Wi corresponding to the respective twenty-four estimation values Ei (i=1, 2, . . . , 24) output from the twenty-four estimation value generating circuits (62a to 62l and 63a to 63l). These weight data Wi are data used for giving priority to the twenty-four estimation values Ei. The higher value a weight data Wi has, the higher priority the corresponding estimation value Ei has. The weight data Wi have fixed values preset upon shipment from a factory.
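The document does not spell out the exact combining rule here, but one plausible reading of the weight data is a weighted total of the estimation values; the following sketch is an assumption for illustration only.

def total_estimation(E: list[float], W: list[float]) -> float:
    # Weighted total of the 24 estimation values E1..E24; a larger
    # weight Wi gives the corresponding Ei a higher priority.
    return sum(w * e for w, e in zip(W, E))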
The microcomputer 64 has a RAM 66 for storing the twenty-four estimation values Ei (i=1, 2, . . . , 24) respectively supplied from the twenty-four estimation value generating circuits (62a to 62l and 63a to 63l) in connection with the position of the focus lens. It is assumed that the estimation values generated when the lens is located at a position X1 are represented by E1(X1) to E24(X1). Initially, the estimation values E1(X1) to E24(X1) generated when the lens is located at the position X1 are stored in the RAM 66. Further, when the lens is moved from the position X1 to a position X2, the estimation values E1(X2) to E24(X2) generated when the lens is moved to the position X2 are stored in the RAM 66. Since the RAM 66 stores data in a ring buffer system, the previously stored estimation values E1(X1) to E24(X1) are not erased until the RAM becomes full of stored data. The estimation values Ei are stored in the RAM 66 at locations designated by a pointer of the microcomputer 64.
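A minimal sketch of such a ring-buffer store, assuming a fixed capacity and one (position, values) record per lens position, might be:

class EstimationLog:
    # Ring-buffer store of the 24 estimation values keyed by lens position.
    def __init__(self, capacity: int = 16):
        self.capacity = capacity
        self.entries = []   # (lens_position, [E1..E24]) records
        self.pointer = 0    # next slot to overwrite once the buffer is full

    def store(self, lens_position: float, estimations: list[float]) -> None:
        record = (lens_position, list(estimations))
        if len(self.entries) < self.capacity:
            self.entries.append(record)         # earlier records are kept
        else:
            self.entries[self.pointer] = record  # overwrite the oldest
            self.pointer = (self.pointer + 1) % self.capacity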
The area detecting circuit 38 will be described with reference to FIGS. 8 and 9.
The area detecting circuit 38 is a circuit for dividing a picture into one hundred and twenty-eight areas and for determining in which of the divided areas pixel data having the same color as that of the object set as the target object exists. The area detecting circuit 38 has a logic circuit for judging all the pixel data supplied from the encoder. Specifically, as shown in FIG. 8, when one area is set so as to have a size of 48 pixels×30 pixels, the picture is divided into 16 portions in the horizontal direction and into 8 portions in the vertical direction, and consequently one hundred and twenty-eight areas can be defined in one picture. As shown in FIG. 8, the one hundred and twenty-eight areas are given area numbers A000 to A127 in that order.
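With this 16×8 grid of 48×30-pixel areas, the area number of any pixel follows directly from its coordinates; a small illustrative helper (names assumed) might be:

def area_number(x: int, y: int) -> int:
    # Map a pixel coordinate to one of the 128 areas A000-A127:
    # 16 areas of 48 pixels across, 8 areas of 30 pixels down.
    return (y // 30) * 16 + (x // 48)

# For example, area_number(0, 0) -> 0 (A000) and
# area_number(767, 239) -> 127 (A127).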
A specific arrangement of the area detecting circuit 38 will be described with reference to FIG. 9. The encoder 37 supplies the luminance signal Y, the color difference signal |R−Y| and the color difference signal |B−Y| of every pixel data to the area detecting circuit 38. The area detecting circuit 38 is supplied with an upper limit luminance signal |Y0|U, a lower limit luminance signal |Y0|L, an upper limit color difference signal |R0−Y0|U, a lower limit color difference signal |R0−Y0|L, an upper limit color difference signal |B0−Y0|U, and a lower limit color difference signal |B0−Y0|L from the CPU 4. Further, the area detecting circuit is supplied from the CPU 4 with multiplication coefficients α3, α4, α5, α6 to be multiplied with the luminance signal Y supplied from the encoder 37.
The upper limit luminance signal |Y0|U, the lower limit luminance signal |Y0|L, the upper limit color difference signal |R0−Y0|U, the lower limit color difference signal |R0−Y0|L, the upper limit color difference signal |B0−Y0|U, and the lower limit color difference signal |B0−Y0|L supplied from the CPU 4 will be described. The luminance signal Y0, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| are signals obtained based on the luminance signal and the color difference signals obtained by image pickup of the object which a cameraman determines as the target object. Therefore, once the target object is set, the values of these signals will never be changed. In order to allow the judgement that, if the luminance signal of certain pixel data has a value between the upper limit luminance signal |Y0|U and the lower limit luminance signal |Y0|L, then that luminance signal has substantially the same level as the luminance signal Y0 of the target object, the upper limit luminance signal |Y0|U and the lower limit luminance signal |Y0|L are set so as to have values approximate to that of the luminance signal |Y0| of the target object. Similarly, in order to allow the judgement that, if the color difference signal |R−Y| of certain pixel data has a value between the upper limit color difference signal |R0−Y0|U and the lower limit color difference signal |R0−Y0|L, then that color difference signal |R−Y| has substantially the same level as the color difference signal |R0−Y0| of the target object, the upper limit color difference signal |R0−Y0|U and the lower limit color difference signal |R0−Y0|L are set so as to have values approximate to that of the color difference signal |R0−Y0| of the target object. Likewise, in order to allow the judgement that, if the color difference signal |B−Y| of certain pixel data has a value between the upper limit color difference signal |B0−Y0|U and the lower limit color difference signal |B0−Y0|L, then that color difference signal |B−Y| has substantially the same level as the color difference signal |B0−Y0| of the target object, the upper limit color difference signal |B0−Y0|U and the lower limit color difference signal |B0−Y0|L are set so as to have values approximate to that of the color difference signal |B0−Y0| of the target object.
The area detecting circuit 38 has a multiplier circuit 71a for multiplying the multiplication coefficient α4 with the luminance signal Y supplied from the encoder 37, a multiplier circuit 71b for multiplying the multiplication coefficient α3 with the luminance signal Y, a multiplier circuit 71c for multiplying the multiplication coefficient α6 with the luminance signal Y, and a multiplier circuit 71d for multiplying the multiplication coefficient α5 with the luminance signal Y. The area detecting circuit 38 further has a switch circuit 72a for selecting either of the multiplied output signal from the multiplier circuit 71a and the upper limit color difference signal |R0−Y0|U, a switch circuit 72b for selecting either of the multiplied output signal from the multiplier circuit 71b and the lower limit color difference signal |R0−Y0|L, a switch circuit 72c for selecting either of the multiplied output signal from the multiplier circuit 71c and the upper limit color difference signal |B0−Y0|U, and a switch circuit 72d for selecting either of the multiplied output signal from the multiplier circuit 71d and the lower limit color difference signal |B0−Y0|L. The area detecting circuit 38 has a comparator 73a supplied with the luminance signal Y and the upper limit luminance signal |Y0|U, a comparator 73b supplied with the luminance signal Y and the lower limit luminance signal |Y0|L, a comparator 73c supplied with a signal from the switch circuit 72a and the color difference signal |R−Y|, a comparator 73d supplied with a signal from the switch circuit 72b and the color difference signal |R−Y|, a comparator 73e supplied with a signal from the switch circuit 72c and the color difference signal |B−Y|, and a comparator 73f supplied with a signal from the switch circuit 72d and the color difference signal |B−Y|. The area detecting circuit 38 also has a gate circuit 74a supplied with the output signal from the comparator 73a and the output signal from the comparator 73b, a gate circuit 74b supplied with the output signal from the comparator 73c and the output signal from the comparator 73d, a gate circuit 74c supplied with the output signal from the comparator 73e and the output signal from the comparator 73f, and a gate circuit 75 supplied with the output signal from the gate circuit 74a, the output signal from the gate circuit 74b, and the output signal from the gate circuit 74c.
Further, the area detecting circuit 38 has a flag signal generating circuit 76 formed of one hundred and twenty-eight chip circuits. The one hundred and twenty-eight chip circuits are provided so as to correspond to the one hundred and twenty-eight areas A000 to A127 shown in FIG. 8. Each of the chip circuits is supplied with the output signal from the gate circuit 75, a pixel clock signal CLK and a chip select signal CS. The pixel clock signal CLK and the chip select signal CS are supplied from the CPU 4 so as to correspond to the luminance signal and the color difference signals of every pixel data supplied from the encoder 37. The pixel clock signal CLK is a clock signal corresponding to the timing of the processing of each pixel data. If the timing of the pixel data processed by the logic circuit at the preceding stage coincides with the processing timing of the pixel data, a "Low"-level pixel clock signal is supplied to the chip circuit, and in other cases, a "High"-level pixel clock signal is supplied thereto. A "Low"-level chip select signal CS is supplied only to the chip circuit selected from the 128 chip circuits, and a "High"-level chip select signal is supplied to the other chip circuits which are not selected.
Each of the chip circuits provided in the flag signal generating circuit 76 has a gate circuit 76a and a counter 76b. Therefore, the flag signal generating circuit 76 has one hundred and twenty-eight gate circuits 76a and one hundred and twenty-eight counters 76b. The gate circuit 76a outputs a "Low"-level signal only when all of the output signal supplied from the gate circuit 75, the pixel clock signal CLK and the chip select signal CS are at "Low" level. The counter 76b responds to the clock timing of the pixel clock signal CLK and counts up only when it is supplied with a "Low"-level signal from the gate circuit 76a. The counter generates a flag signal when its count value becomes a predetermined number or greater (5 counts or greater in this embodiment). The generated flag signal is supplied to a multiplexer 77.
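In software terms, the per-area counting and thresholding that the 128 chip circuits perform in hardware could be mimicked as below; frame_pixels and matches_target are assumed interfaces, and the threshold of 5 counts follows this embodiment.

FLAG_THRESHOLD = 5  # counts needed before an area raises its flag

def area_flags(frame_pixels, matches_target) -> list[int]:
    # frame_pixels yields (x, y, pixel) tuples in raster-scan order;
    # matches_target plays the role of the gate circuit 75 output.
    counts = [0] * 128
    for x, y, pixel in frame_pixels:
        if matches_target(pixel):
            counts[area_number(x, y)] += 1
    # Flag every area whose count reached the threshold.
    return [i for i, c in enumerate(counts) if c >= FLAG_THRESHOLD]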
The multiplexer 77 receives the flag signal output from each of the chip circuits of the flag signal generating circuit 76 and supplies it to the CPU 4. At this time, the multiplexer 77 supplies the number of the chip circuit outputting the flag signal to the CPU 4. With reference to this number, the CPU 4 can select the area where the pixel data having the same color component as that obtained by image pickup of the target object exists.
Before the area detecting circuit 38 carries out the processing for detecting an area, the switch circuits 72a, 72b, 72c and 72d provided in the area detecting circuit 38 must carry out their switching operations. These switching operations will therefore be described.
For switching the switch circuits 72a, 72b, 72c and 72d, it is necessary to select an object mode based on the luminance signal |Y0|, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| obtained from image pickup of the object which a cameraman sets as the target object. There are four object modes. Modes 0 to 3 will hereinafter be described successively.
The mode 0 is a mode selected when the object set as the target object has color information to some degree. Specifically, it means that both of the values |R0−Y0|/Y0 and |B0−Y0|/Y0 indicative of the color components obtained by image pickup of the object have a predetermined level or higher. In other words, this mode is selected when the selected target object has strong tint. Accordingly, the mode 0 is selected when the relationship among the luminance signal |Y0|, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| obtained by image pickup of the set target object satisfies the conditions expressed by

α3 × |Y0| ≦ |R0 − Y0| ≦ α4 × |Y0|

and

α5 × |Y0| ≦ |B0 − Y0| ≦ α6 × |Y0|  (70)
When the CPU 4 selects the mode 0 as the object mode, the CPU 4 supplies control signals to the switch circuits 72a, 72b, 72c and 72d to thereby respectively set the switch states of the switch circuits 72a, 72b, 72c and 72d to "Up", "Up", "Up", and "Up". Once the switch states are set, these switch states will never be changed until the object mode is changed.
The mode 1 is a mode selected when the object set as the target object has color components including red color components exceeding a predetermined level and blue color components which do not exceed a predetermined level. Specifically, it means that the value of |R0−Y0|/Y0 indicative of the color components obtained by image pickup of the object has a predetermined level or higher, but the value of |B0−Y0|/Y0 does not. Accordingly, the mode 1 is selected when the relationship between the luminance signal |Y0| and the color difference signals obtained by image pickup of the set target object satisfies the conditions expressed by

α3 × |Y0| ≦ |R0 − Y0| ≦ α4 × |Y0|

and

|B0 − Y0| ≦ α5 × |Y0|  (71)
When the CPU 4 selects the mode 1 as the object mode, the CPU 4 supplies control signals to the switch circuits 72a, 72b, 72c and 72d to thereby respectively set the switch states of the switch circuits 72a, 72b, 72c and 72d to "Up", "Up", "Down", and "Down".
The mode 2 is a mode selected when the object set as the target object has color components including blue color components exceeding a predetermined level and red color components which do not exceed a predetermined level. Specifically, it means that the value of |B0−Y0|/Y0 indicative of the color components obtained by image pickup of the object has a predetermined level or higher, but the value of |R0−Y0|/Y0 does not. Accordingly, the mode 2 is selected by the CPU 4 when the relationship between the luminance signal |Y0| and the color difference signals obtained by image pickup of the set target object satisfies the conditions expressed by

|R0 − Y0| ≦ α3 × |Y0|

and

α5 × |Y0| ≦ |B0 − Y0| ≦ α6 × |Y0|  (72)
When the CPU 4 selects the mode 2 as the object mode, the CPU 4 supplies control signals to the switch circuits 72a, 72b, 72c and 72d to thereby respectively set the switch states of the switch circuits 72a, 72b, 72c and 72d to "Down", "Down", "Up", and "Up".
The mode 3 is a mode selected when the object set as the target object has color components in which both the blue color components and the red color components do not exceed a predetermined level. Specifically, it means that neither of the values |R0−Y0|/Y0 and |B0−Y0|/Y0 indicative of the color components obtained by image pickup of the object has a predetermined level or higher. In other words, this mode is selected when the selected target object has no tint. Accordingly, the mode 3 is selected by the CPU 4 when the relationship among the luminance signal |Y0|, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| obtained by image pickup of the set target object satisfies none of the above conditions (70), (71) and (72). When the CPU 4 selects the mode 3 as the object mode, the CPU 4 supplies control signals to the switch circuits 72a, 72b, 72c and 72d to thereby respectively set the switch states of the switch circuits 72a, 72b, 72c and 72d to "Down", "Down", "Down", and "Down". A sketch of this mode selection is given below.
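Taken together, conditions (70) to (72) amount to the following selection logic; this Python sketch assumes scalar magnitudes for |Y0|, |R0−Y0| and |B0−Y0| and, in line with the correction above, uses α3 for the mode 2 red test.

def select_object_mode(y0: float, r_y0: float, b_y0: float,
                       a3: float, a4: float, a5: float, a6: float) -> int:
    # red_strong / blue_strong mirror conditions (70)-(72).
    red_strong = a3 * y0 <= r_y0 <= a4 * y0
    blue_strong = a5 * y0 <= b_y0 <= a6 * y0
    if red_strong and blue_strong:
        return 0  # strong tint in both components: condition (70)
    if red_strong and b_y0 <= a5 * y0:
        return 1  # red only: condition (71)
    if blue_strong and r_y0 <= a3 * y0:
        return 2  # blue only: condition (72)
    return 3      # no strong tint: none of (70)-(72) holds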
When the above switching operation is finished, the area detecting circuit 38 carries out the processing of detecting an object. This detecting processing will subsequently be described with reference to FIG. 9 in correspondence with each of the object modes.
1) When the mode 0 is selected
Regardless of the selected mode, the comparator 73a compares the upper limit luminance signal |Y0|U with the luminance signal Y supplied from the encoder 37. When

Y ≦ |Y0|U

is established, the comparator outputs a "High"-level signal, and when

|Y0|U ≦ Y

is established, the comparator outputs a "Low"-level signal. The comparator 73b compares the lower limit luminance signal |Y0|L with the luminance signal Y supplied from the encoder. When

|Y0|L ≦ Y

is established, the comparator outputs a "High"-level signal, and when

Y ≦ |Y0|L

is established, the comparator outputs a "Low"-level signal. The gate circuit 74a receives the output signals from the comparator 73a and the comparator 73b, and, when both of the output signals from the comparators 73a, 73b are at "High" level, outputs a "Low"-level signal to the gate circuit 75 at the succeeding stage.

Specifically, the calculation carried out by the comparators 73a, 73b and the gate circuit 74a is defined as

|Y0|L ≦ Y ≦ |Y0|U  (74)
In other words, if the luminance signal Y supplied from the encoder satisfies the condition defined by the equation (74), then the gate circuit 74a outputs a "Low"-level signal.
Since the switch states of the switch circuits 72a, 72b are respectively set to "Up" and "Up" when the mode 0 is selected, the comparator 73c is supplied with the data Y×α4 and |R−Y| and the comparator 73d is supplied with the data Y×α3 and |R−Y|. The luminance signal Y and the color difference signal |R−Y| are data supplied from the encoder 37. The comparator 73c compares the data Y×α4 with the data |R−Y|. When

|R−Y| ≦ Y×α4

is established, the comparator outputs a "High"-level signal, and when

Y×α4 ≦ |R−Y|

is established, the comparator outputs a "Low"-level signal. The comparator 73d compares the data Y×α3 with the data |R−Y|. When

Y×α3 ≦ |R−Y|

is established, the comparator outputs a "High"-level signal, and when

|R−Y| ≦ Y×α3

is established, the comparator outputs a "Low"-level signal. The gate circuit 74b receives the signals output from the comparator 73c and the comparator 73d, and, when both of the output signals from the comparators 73c, 73d are at "High" level, outputs a "Low"-level signal to the gate circuit 75 at the succeeding stage.

Specifically, the calculation carried out by the comparators 73c, 73d and the gate circuit 74b is defined as

Y×α3 ≦ |R−Y| ≦ Y×α4  (75)
In other words, if the color difference signal |R−Y| supplied from the encoder satisfies the condition defined by the equation (75), then the gate circuit 74b outputs a "Low"-level signal.
Further, since the switch states of the switch circuits 72c, 72d are respectively set to "Up" and "Up" when the mode 0 is selected, the comparator 73e is supplied with the data Y×α6 and |B−Y| and the comparator 73f is supplied with the data Y×α5 and |B−Y|. The luminance signal Y and the color difference signal |B−Y| are data supplied from the encoder 37. The comparator 73e compares the data Y×α6 with the data |B−Y|. When

|B−Y| ≦ Y×α6

is established, the comparator outputs a "High"-level signal, and when

Y×α6 ≦ |B−Y|

is established, the comparator outputs a "Low"-level signal. The comparator 73f compares the data Y×α5 with the data |B−Y|. When

Y×α5 ≦ |B−Y|

is established, the comparator outputs a "High"-level signal, and when

|B−Y| ≦ Y×α5

is established, the comparator outputs a "Low"-level signal. The gate circuit 74c receives the signals output from the comparator 73e and the comparator 73f, and, when both of the output signals from the comparators 73e, 73f are at "High" level, outputs a "Low"-level signal to the gate circuit 75 at the succeeding stage.

Specifically, the calculation carried out by the comparators 73e, 73f and the gate circuit 74c is defined as

Y×α5 ≦ |B−Y| ≦ Y×α6  (76)
In other words, if the color difference signal |B−Y| supplied from the encoder satisfies the condition defined by the equation (76), then the gate circuit 74c outputs a "Low"-level signal.
The gate circuit 75 receives the output signals from the gate circuits 74a, 74b and 74c, and, only when all of the output signals from the gate circuits 74a, 74b and 74c are at "Low" level, outputs a "Low"-level signal to the respective chip circuits of the flag signal generating circuit 76.
Specifically, since the switch states are all set to "Up" when the mode 0 is selected as the object mode, the calculations carried out by the comparators 73a to 73f, the gate circuits 74a to 74c and the gate circuit 75 can be defined by

|Y0|L ≦ Y ≦ |Y0|U

and

Y×α3 ≦ |R−Y| ≦ Y×α4

and

Y×α5 ≦ |B−Y| ≦ Y×α6  (700)
When the mode 0 is selected, satisfaction of the conditions of the equation (700) means that the luminance signal Y, the color difference signal |R−Y| and the color difference signal |B−Y| of the pixel data supplied from the encoder 37 are substantially equal to the luminance signal Y0, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| obtained by image pickup of the object set as the target object. Only when the color of the target object is equal to the color indicated by the pixel data as described above does the gate circuit 75 output the "Low"-level signal.
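Expressed in software, the mode 0 matching condition (700) checked by the comparator and gate hardware could be sketched as the following predicate; the pixel representation is an assumption for illustration.

def matches_mode0(pixel: tuple[float, float, float],
                  y0_l: float, y0_u: float,
                  a3: float, a4: float, a5: float, a6: float) -> bool:
    # pixel = (Y, |R-Y|, |B-Y|); True mirrors a "Low" output of gate circuit 75.
    y, r_y, b_y = pixel
    return (y0_l <= y <= y0_u
            and a3 * y <= r_y <= a4 * y
            and a5 * y <= b_y <= a6 * y)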
2) When the mode 1 is selected
The operations carried out when the mode 1 is selected are substantially similar to those carried out when the mode 0 is selected, and hence need not be described in detail.

Since the operations are similar to those carried out when the mode 0 is selected, when the mode 1 is selected as the object mode, the calculations carried out by the comparators 73a to 73f, the gate circuits 74a to 74c and the gate circuit 75 can be defined by

|Y0|L ≦ Y ≦ |Y0|U

and

Y×α3 ≦ |R−Y| ≦ Y×α4

and

|B0−Y0|L ≦ |B−Y| ≦ |B0−Y0|U  (701)
When the mode 1 is selected, satisfaction of the conditions of the equation (701) means that the luminance signal Y, the color difference signal |R−Y| and the color difference signal |B−Y| of the pixel data supplied from the encoder 37 are substantially equal to the luminance signal Y0, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| obtained by image pickup of the object set as the target object. Similarly to the operations carried out when the mode 0 is selected, only when the color of the target object is equal to the color indicated by the pixel data as described above, the gate circuit 75 outputs the "Low"-level signal.
3) When the mode 2 is selected
The operations carried out when the mode 2 is selected are substantially similar to those carried out when the mode 0 or the mode 1 is selected, and hence need not be described in detail.

Since the operations are similar to those carried out when the mode 0 is selected, when the mode 2 is selected as the object mode, the calculations carried out by the comparators 73a to 73f, the gate circuits 74a to 74c and the gate circuit 75 can be defined by

|Y0|L ≦ Y ≦ |Y0|U

and

|R0−Y0|L ≦ |R−Y| ≦ |R0−Y0|U

and

Y×α5 ≦ |B−Y| ≦ Y×α6  (702)
When the mode 2 is selected, satisfaction of the conditions of the equation (702) means that the luminance signal Y, the color difference signal |R−Y| and the color difference signal |B−Y| of the pixel data supplied from the encoder 37 are substantially equal to the luminance signal Y0, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| obtained by image pickup of the object set as the target object. Similarly to the operations carried out when the mode 0 is selected, only when the color of the target object is equal to the color indicated by the pixel data as described above, the gate circuit 75 outputs the "Low"-level signal.
4) When the mode 3 is selected
The operations carried out when the mode 3 is selected are substantially similar to those carried out when the mode 0, the mode 1 or the mode 2 is selected, and hence need not be described in detail.

Since the operations are similar to those carried out when the mode 0 is selected, when the mode 3 is selected as the object mode and all the switch circuits are set to "Down", the calculations carried out by the comparators 73a to 73f, the gate circuits 74a to 74c and the gate circuit 75 can be defined by

|Y0|L ≦ Y ≦ |Y0|U

and

|R0−Y0|L ≦ |R−Y| ≦ |R0−Y0|U

and

|B0−Y0|L ≦ |B−Y| ≦ |B0−Y0|U  (703)
When the mode 3 is selected, satisfaction of the conditions of the equation (703) means that the luminance signal Y, the color difference signal |R−Y| and the color difference signal |B−Y| of the pixel data supplied from the encoder 37 are substantially equal to the luminance signal Y0, the color difference signal |R0−Y0| and the color difference signal |B0−Y0| obtained by image pickup of the object set as the target object. Similarly to the operations carried out when the mode 0 is selected, only when the color of the target object is equal to the color indicated by the pixel data as described above, the gate circuit 75 outputs the "Low"-level signal.
Further, a total operation carried out by the area detecting circuit 38 when the mode 0 is selected as the object mode and a plurality of pixel data indicating the same color as that of the target object exist only in the thirty-sixth area A035 shown in FIG. 8 will be described with reference to FIG. 9.
The luminance signal Y, the color difference signal |R−Y| and the color difference signal |B−Y| are successively supplied from the encoder 37 to the area detecting circuit 38 so as to correspond to the raster scan. Specifically, the area detecting circuit 38 is supplied with all the pixel data from the encoder 37 and determines whether or not each of the pixel data satisfies the conditions of the equation (700).
Although the area detecting circuit 38 is supplied with all the pixel data, a hardware circuit formed of the switch circuits 72a to 72d, the comparators 73a to 73f and the gate circuits 74a to 74c determines whether or not each of the pixel data satisfies the conditions of the equation (700). Therefore, it is possible to carry out the determination on a real-time basis without any processing load on the CPU 4.
Since no pixel data indicative of the same color as that of the set target object exists in areas other than the area A035 in this example, even if pixel data of the areas other than the area A035 are supplied to the area detecting circuit 38, the gate circuit 75 outputs a "High"-level signal. When the area detecting circuit 38 is supplied with the pixel data indicative of the same color as that of the target object in the area A035, the gate circuit 75 outputs the "Low"-level signal. At this time, the "Low"-level chip select signal CS is supplied only to the 36th chip circuit corresponding to the area A035, while the "High"-level chip select signal is supplied to the other chip circuits. The "Low"-level pixel clock signal is supplied to the chip circuits at the timing at which the pixel data indicative of the same color as that of the target object is supplied. Therefore, only when the gate circuit 76a035 of the 36th chip circuit is supplied with the "Low"-level signal from the gate circuit 75, the "Low"-level pixel clock signal CLK and the "Low"-level chip select signal, does the gate circuit 76a035 supply a "Low"-level signal to the counter 76b035. When supplied with the "Low"-level signal from the gate circuit 76a035, the counter 76b035 counts up and, when the count value becomes 5, outputs a flag signal to the multiplexer 77. This operation means that if there are one thousand four hundred and forty pixel data in the area A035 and five or more pixel data among them indicate the same color as that of the set target object, then the chip circuit corresponding to the area A035 outputs the flag signal. As a result, in this example, only the counter 76b035 of the chip circuit corresponding to the area A035 outputs the flag signal, and the counters 76b corresponding to the areas other than the area A035 are prevented from outputting the flag signal.
The multiplexer 77 outputs the flag signal output from each of the chip circuits to the CPU 4 in correspondence with the area. In this case, the multiplexer outputs to the CPU 4 the flag signal output from the 36th chip circuit corresponding to the area A035.
Since the area detecting circuit 38 formed of hardware is operated as described above, the CPU 4 can recognize on a real-time basis in which area the pixels indicative of the same color as that of the set target object exist.
An operation of the video camera apparatus will be described with reference to FIGS. 10 to 16, which are flowcharts therefor.
A focus control operation will initially be described.
An autofocus operation will be described with reference to FIGS. 10 to 15, which are flowcharts therefor, and FIG. 17.
The focus mode is shifted from a manual focus mode to an autofocus mode when a cameraman presses an autofocus button included in the operation buttons 5. The autofocus mode includes a continuous mode, in which the autofocus operation is continued after the button is pressed until a command of mode shift to the manual focus mode is issued, and a non-continuous mode, in which, after an object is brought into focus, the autofocus operation is stopped and the mode is automatically shifted to the manual focus mode. The continuous mode will be described in the following explanation with reference to the flowcharts. In the processing of steps S100 to S131, the direction in which the lens is to be moved is determined. In the processing of steps S201 to S221, the lens position where the estimation value becomes maximum is calculated.
  • As shown in FIG. 17, in steps S100 to S104, based on a command from the CPU 4, the focus lens is moved to a position X1 which is distant in the Far direction from an initial lens position X0 by a distance of D/2, subsequently moved to a position X2 which is distant in the Near direction from the position X1 by a distance of D, and then moved by a distance of D/2 in the Far direction from the position X2, i.e., returned to the initial lens position X0. The Near direction is the direction in which the lens is moved toward the imaging devices, and the Far direction is the direction in which the lens is moved away from the imaging devices. Reference symbol D depicts the focal depth. The microcomputer 64 stores in the RAM 66 the estimation values Ei(X0), Ei(X1), and Ei(X2) generated in the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63.
  • The reason the focus lens is never moved from the position X0 by a distance exceeding D/2 will be described. The focal depth indicates the range around a focus point within which the lens remains in focus. Therefore, as long as the focus lens is moved within the range of the focal depth, a person cannot perceive any deviation of focus resulting from the movement. Conversely, when the lens is moved from the position X1 to the position X2, if the lens is moved by a distance exceeding the focal depth, the resulting deviation of focus influences the video signal obtained by image pickup. Specifically, when the maximum movement amount of the lens is kept within the focal depth, the deviation of focus cannot be recognized.
  • The processing in each of steps S100 to S104 will be described in detail with reference to FIG. 17.
  • In step S100, the microcomputer 64 stores in the RAM 66 the estimation values E1(X0) to E24(X0) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63. After finishing storing the above estimation values, the microcomputer 64 issues to the CPU 4 a command to move the focus lens in the Far direction by a distance of D/2.
  • In step S101, the CPU 4 outputs a command to the focus-lens motor drive circuit 12c to move the focus lens in the Far direction by a distance of D/2.
  • In step S102, the microcomputer 64 stores in the RAM 66 the estimation values E1(X1) to E24(X1) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63. After finishing storing the above estimation values, the microcomputer 64 issues to the CPU 4 a command to move the focus lens in the Near direction by a distance of D.
  • In step S103, the CPU 4 outputs a command to the focus-lens motor drive circuit 12c to move the focus lens in the Near direction by a distance of D.
  • In step S104, the microcomputer 64 stores in the RAM 66 the estimation values E1(X2) to E24(X2) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63. After finishing storing the above estimation values, the microcomputer 64 issues to the CPU 4 a command to move the focus lens in the Far direction by a distance of D/2, returning it to the position X0.
  • Therefore, when the processing in step S104 is finished, the estimation values E1(X0) to E24(X0) generated when the lens is located at the position X0, the estimation values E1(X1) to E24(X1) generated when the lens is located at the position X1, and the estimation values E1(X2) to E24(X2) generated when the lens is located at the position X2 are stored in the RAM 66 of the microcomputer 64.
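  • The sequence of steps S100 to S104 can be summarized by the following sketch. move_lens and sample_estimations are hypothetical helpers standing in for the CPU 4 command path and the two estimation value generating circuits; positive displacements are taken to be the Far direction, as in the text.
```python
def initial_wobble(move_lens, sample_estimations, depth_d):
    """Collect the 24 estimation values at X0, X1 = X0 + D/2, and
    X2 = X0 - D/2 while never leaving the focal depth D, so the wobble
    stays invisible in the output video."""
    e_x0 = sample_estimations()      # step S100: values at X0
    move_lens(+depth_d / 2)          # step S101: Far by D/2 -> X1
    e_x1 = sample_estimations()      # step S102: values at X1
    move_lens(-depth_d)              # step S103: Near by D -> X2
    e_x2 = sample_estimations()      # step S104: values at X2
    move_lens(+depth_d / 2)          # return to X0
    return e_x0, e_x1, e_x2
```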
  • The processings in steps S105 to S115 select the improper estimation values from among the twenty-four estimation values.
  • The basic concept of the operations in steps S105 to S115 will be described with reference to FIGS. 18A and 18B.
  • FIGS. 18A and 18B show a case in which a target object A to be brought into focus is imaged in a window W2 and a non-target object B having high contrast and located in front of the target object A is imaged in the window W1 but outside of the window W2. At this time, since the object B exists within the window W1, the estimation value E1 generated by the first horizontal-direction estimation value generating circuit 62a, whose preset window size is that of the window W1, inevitably includes high-frequency components resulting from the object B and hence is improper as an estimation value of the object A. Therefore, the estimation value E1 inevitably becomes considerably large as compared with the estimation value E2 generated by the second horizontal-direction estimation value generating circuit 62b, whose preset window size is that of the window W2. Similarly, the estimation value E7 generated by the seventh horizontal-direction estimation value generating circuit 62g, whose preset window size is that of the window W1, inevitably includes high-frequency components resulting from the object B and hence is improper as an estimation value of the object A. Therefore, the estimation value E7 inevitably becomes considerably large as compared with the estimation value E8 generated by the eighth horizontal-direction estimation value generating circuit 62h, whose preset window size is that of the window W2.
  • However, the estimation values E2 and E8 cannot be judged proper merely because the non-target object B does not exist in the window W2. The reason will be described with reference to FIG. 18B, which shows the windows obtained when the lens is moved so as to be focused on the object A. The more closely the lens is focused on the object A, the more it goes out of focus with respect to the object B. When the lens becomes considerably out of focus with respect to the object B, the image of the object B becomes considerably blurred, and the blurred image enters the window W2. Therefore, in the state shown in FIGS. 18A and 18B, the estimation value E2 generated by the second horizontal-direction estimation value generating circuit 62b is not always proper, and, similarly, the estimation value E8 generated by the eighth horizontal-direction estimation value generating circuit 62h is not always proper.
  • As described above, in order to determine whether or not the estimation values E1 and E7 obtained from the window W1 and the estimation values E2 and E8 obtained from the window W2 are proper, it is sufficient to discriminate whether or not
  • |E1 − E2| ≦ E1 × β and |E7 − E8| ≦ E7 × β  (3)
  • are satisfied. Here, β is a coefficient previously set based on experimental results; in this embodiment, it is set to β = 0.01. If predetermined values obtained from experiments are used instead of (E1 × β) and (E7 × β), the same result can be obtained without (E1 × β) and (E7 × β) being used in the equation (3).
  • In the determination based on the equation (3), if both |E1 − E2| and |E7 − E8| are not larger than the respective thresholds, it can be determined that there is almost no difference between the estimation values E1 and E2 nor between the estimation values E7 and E8, and hence that no object such as the non-target object B shown in FIG. 18A exists. If both values exceed the thresholds, it can be determined that there is some difference between the estimation values E1 and E2 and between the estimation values E7 and E8, and hence that an object such as the non-target object B shown in FIG. 18A exists. Specifically, if the equation (3) is satisfied, the estimation values E1, E2, E7, and E8 are proper; if it is not satisfied, none of them is proper.
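  • The test of the equation (3) amounts to the following check, here written as a small function. This is a sketch under the stated assumption that the first argument is the larger-window value appearing on the right-hand side of the inequality.
```python
BETA = 0.01  # coefficient from the text

def pair_is_proper(e_outer, e_inner, beta=BETA):
    """True when the large-window value (e.g. E1 from W1) and the
    small-window value (e.g. E2 from W2) nearly agree, i.e. no
    high-contrast non-target object lies between the two windows."""
    return abs(e_outer - e_inner) <= e_outer * beta

# Both pairs of equation (3) must pass:
# proper = pair_is_proper(E1, E2) and pair_is_proper(E7, E8)
```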
  • In consideration of the above basic concept, the processings in steps S105 to S115 will specifically be described with reference to FIGS. 10 and 11.
  • In step S105, it is determined, by using the estimation values E1(X0) to E24(X0) obtained when the lens is located at the position X0, whether or not
  • |E1(X0) − E2(X0)| ≦ E1(X0) × β1 and |E7(X0) − E8(X0)| ≦ E7(X0) × β1  (105)
  • are satisfied. If the estimation values E1, E2, E7, and E8 satisfy the equation (105), then they are determined to be proper values and the processing proceeds to step S117. If, on the other hand, they do not satisfy the equation (105), then at least the estimation values E1, E2, E7, and E8 are determined to be improper values and the processing proceeds to step S106.
  • Since it is determined from the result of step S105 that the estimation values E1, E2, E7, and E8 are improper, in step S106 the estimation values E3 and E9 obtained from the window W3, which is the next larger window after the window W1, are used, together with the estimation values E4 and E10 obtained from the window W4, which is the next larger window after the window W2.
  • In step S106, similarly to step S105, it is determined, by using the estimation values E1(X0) to E24(X0) obtained when the lens is located at the position X0, whether or not
  • |E3(X0) − E4(X0)| ≦ E3(X0) × β1 and |E9(X0) − E10(X0)| ≦ E9(X0) × β1  (106)
  • are satisfied. If the estimation values E3, E4, E9, and E10 satisfy the equation (106), then they are determined to be proper values and the processing proceeds to step S107. If, on the other hand, they do not satisfy the equation (106), then at least the estimation values E3, E4, E9, and E10 are determined to be improper values and the processing proceeds to step S108.
  • The reason for employing the larger windows W3 and W4 will be described. As described above, since the estimation values E1, E2, E7, and E8 are improper in the state shown in FIG. 18A, it is impossible to bring either the target object A or the non-target object B into focus. However, when the windows W3 and W4, which are larger than the windows W1 and W2, are used, the non-target object B may lie within the range of the window W4. If the whole non-target object B lies within the window W4, then the difference between the estimation values E3 and E4 becomes small and the difference between the estimation values E9 and E10 becomes small; that is, the estimation values E3, E4, E9, and E10 satisfy the equation (106). As a result, since the estimation values E3, E4, E9, and E10 become proper values, the non-target object B is brought into focus. Ideally, the lens should be focused on the target object A. However, if the lens is adjusted so as to be focused on the object A, it is impossible to obtain proper estimation values; the autofocus control circuit 34 then repeatedly executes its control loop and keeps the focus lens moving for a long time, during which a video signal indicative of a blurred image is continuously output. If, instead, the lens is focused on the non-target object B, such continuous output of a blurred video signal caused by repeating the control loop for a long period of time is prevented.
  • In step S107, the numbers i = 1, 2, 7, 8 are defined as non-use numbers based on the result in step S105 that the estimation values E1, E2, E7, and E8 are improper values and on the result in step S106 that the estimation values E3, E4, E9, and E10 are proper values. Then, the processing proceeds to step S117. Since the numbers i = 1, 2, 7, 8 are defined as the non-use numbers, the estimation values E1, E2, E7, and E8 will not be used in the succeeding steps.
  • In step S108, since it is determined from the result of the calculation in step S106 that the estimation values E3, E4, E9, and E10 are improper, the estimation values E5 and E11 obtained from the window W5, which is the next larger window after the window W3, are used, together with the estimation values E6 and E12 obtained from the window W6, which is the next larger window after the window W4.
  • In step S108, similarly to step S106, it is determined, by using the estimation values E1(X0) to E24(X0) generated when the lens is located at the position X0, whether or not
  • |E5(X0) − E6(X0)| ≦ E5(X0) × β1 and |E11(X0) − E12(X0)| ≦ E11(X0) × β1  (108)
  • are satisfied. If the estimation values E5, E6, E11, and E12 satisfy the equation (108), then they are determined to be proper values and the processing proceeds to step S109. If, on the other hand, they do not satisfy the equation (108), then at least the estimation values E5, E6, E11, and E12 are determined to be improper values and the processing proceeds to step S110.
  • In step S109, the numbers i = 1, 2, 3, 4, 7, 8, 9, 10 are defined as non-use numbers based on the result in step S105 that the estimation values E1, E2, E7, and E8 are improper values, on the result in step S106 that the estimation values E3, E4, E9, and E10 are improper values, and on the result in step S108 that the estimation values E5, E6, E11, and E12 are proper values. Then, the processing proceeds to step S117. Since these numbers are defined as the non-use numbers, the estimation values E1, E2, E3, E4, E7, E8, E9, and E10 will not be used in the succeeding steps.
  • When the processing reaches step S110, it has been determined from the result of the calculation in step S108 that the estimation values E5, E6, E11, and E12 are also improper. Therefore, the estimation values E13, E14, E19, and E20 are used next, compared by using the coefficient β2.
  • In step S110, similarly to step S108, it is determined, by using the estimation values E1(X0) to E24(X0) generated when the lens is located at the position X0, whether or not
  • |E13(X0) − E14(X0)| ≦ E13(X0) × β2 and |E19(X0) − E20(X0)| ≦ E19(X0) × β2  (110)
  • are satisfied. If the estimation values E13, E14, E19, and E20 satisfy the equation (110), then they are determined to be proper values and the processing proceeds to step S111. If, on the other hand, they do not satisfy the equation (110), then at least the estimation values E13, E14, E19, and E20 are determined to be improper values and the processing proceeds to step S112.
  • In step S111, the numbers i = 1 to 12 are defined as non-use numbers based on the results in steps S105, S106, and S108 that the estimation values E1 to E12 are improper values and on the result in step S110 that the estimation values E13, E14, E19, and E20 are proper values. Then, the processing proceeds to step S117. Since the numbers i = 1 to 12 are defined as the non-use numbers, the estimation values E1 to E12 will not be used in the succeeding steps.
  • In step S112, similarly to step S110, it is determined, by using the estimation values E1(X0) to E24(X0) generated when the lens is located at the position X0, whether or not
  • |E15(X0) − E16(X0)| ≦ E15(X0) × β2 and |E21(X0) − E22(X0)| ≦ E21(X0) × β2  (112)
  • are satisfied. If the estimation values E15, E16, E21, and E22 satisfy the equation (112), then they are determined to be proper values and the processing proceeds to step S113. If, on the other hand, they do not satisfy the equation (112), then at least the estimation values E15, E16, E21, and E22 are determined to be improper values and the processing proceeds to step S114.
  • In step S113, the numbers i = 1 to 14, 19, and 20 are defined as non-use numbers based on the results in steps S105, S106, and S108 that the estimation values E1 to E12 are improper values, on the result in step S110 that the estimation values E13, E14, E19, and E20 are improper values, and on the result in step S112 that the estimation values E15, E16, E21, and E22 are proper values. Then, the processing proceeds to step S117. Since the numbers i = 1 to 14, 19, and 20 are defined as the non-use numbers, the estimation values E1 to E14, E19, and E20 will not be used in the succeeding steps.
  • In step S114, similarly to step S110, it is determined, by using the estimation values E1(X0) to E24(X0) generated when the lens is located at the position X0, whether or not
  • |E17(X0) − E18(X0)| ≦ E17(X0) × β2 and |E23(X0) − E24(X0)| ≦ E23(X0) × β2  (114)
  • are satisfied. If the estimation values E17, E18, E23, and E24 satisfy the equation (114), then they are determined to be proper values and the processing proceeds to step S115. If, on the other hand, they do not satisfy the equation (114), then at least the estimation values E17, E18, E23, and E24 are determined to be improper values and the processing proceeds to step S116.
  • In step S115, the numbers i = 1 to 16 and 19 to 22 are defined as non-use numbers based on the results in steps S105, S106, and S108 that the estimation values E1 to E12 are improper values, on the results in steps S110 and S112 that the estimation values E13 to E16 and E19 to E22 are improper values, and on the result in step S114 that the estimation values E17, E18, E23, and E24 are proper values. Then, the processing proceeds to step S117. Since the numbers i = 1 to 16 and 19 to 22 are defined as the non-use numbers, the estimation values E1 to E16 and E19 to E22 will not be used in the succeeding steps.
  • When the processing reaches step S116, all the estimation values E1 to E24 have inevitably been determined to be improper. Therefore, it is determined that the autofocus operation cannot be carried out, the mode is shifted to the manual focus mode, and the processing is ended.
  • This completes the processing for selecting the improper estimation values from among the twenty-four estimation values.
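  • The whole cascade of steps S105 to S116 can be sketched as a table-driven loop. The pairing table mirrors the pairs tested above; the value of β2 is not given in this excerpt, so it is assumed equal to β1 here purely for illustration.
```python
BETA1 = 0.01
BETA2 = 0.01  # assumed equal to BETA1; the text does not give its value

# (first pair, second pair, beta,
#  non-use numbers assigned when this level is the first to pass)
LEVELS = [
    ((1, 2), (7, 8), BETA1, []),                                  # S105
    ((3, 4), (9, 10), BETA1, [1, 2, 7, 8]),                       # S106/S107
    ((5, 6), (11, 12), BETA1, [1, 2, 3, 4, 7, 8, 9, 10]),         # S108/S109
    ((13, 14), (19, 20), BETA2, list(range(1, 13))),              # S110/S111
    ((15, 16), (21, 22), BETA2, list(range(1, 15)) + [19, 20]),   # S112/S113
    ((17, 18), (23, 24), BETA2,
     list(range(1, 17)) + list(range(19, 23))),                   # S114/S115
]

def select_non_use(e):
    """e maps estimation number i -> Ei(X0). Returns the non-use numbers
    of the first level whose two pairs both pass, or None when every
    level fails (step S116: autofocus impossible)."""
    for (a, b), (c, d), beta, non_use in LEVELS:
        if (abs(e[a] - e[b]) <= e[a] * beta and
                abs(e[c] - e[d]) <= e[c] * beta):
            return non_use
    return None
```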
  • As shown in FIGS. 12 and 13, steps S117 to S131 constitute the specific operation for determining the lens movement direction. The processings in steps S117 to S131 are carried out by the microcomputer 64.
  • In step S117, the number i is set to 1, and a count-up value Ucnt, a count-down value Dcnt, and a flat-count value Fcnt are reset.
  • In step S118, it is determined whether or not the number i is defined as a non-use number. If it is not, the processing proceeds to step S120. If it is, then in step S119 the number i is incremented and the next number i is examined.
  • The processing in step S120 is carried out to determine whether the estimation value Ei(X0) is larger than Ei(X2) to some degree, rather than substantially equal to it, and whether the estimation value Ei(X1) is likewise larger than Ei(X0). Put more simply, the processing determines whether, when the focus lens is moved in the Far direction from the position X2 through the position X0 to the position X1, the estimation values increase in the order Ei(X2), Ei(X0), Ei(X1). Specifically, it is determined whether or not
  • Ei(X2) × β3 < Ei(X0) and Ei(X0) × β3 < Ei(X1)  (120)
  • are satisfied, where β3 is an experimentally obtained coefficient set to β3 = 1.03 in this embodiment. If the estimation values satisfy the equation (120), it means that as the focus lens is moved from the position X2 through the position X0 to the position X1, the corresponding estimation values increase in that order, and the processing proceeds to step S121. If they do not satisfy the equation (120), the processing proceeds to step S122.
  • In step S121, the weight data Wi is added to the count-up value Ucnt, and then the processing proceeds to step S126.
  • The processing in step S122 is carried out to determine whether the estimation value Ei(X0) is larger than Ei(X1) to some degree, rather than substantially equal to it, and whether the estimation value Ei(X2) is likewise larger than Ei(X0). Put more simply, the processing determines whether, when the focus lens is moved in the Far direction from the position X2 through the position X0 to the position X1, the estimation values decrease in the order Ei(X2), Ei(X0), Ei(X1). Specifically, it is determined whether or not
  • Ei(X1) × β3 < Ei(X0) and Ei(X0) × β3 < Ei(X2)  (122)
  • are satisfied. If the estimation values satisfy the equation (122), it means that as the focus lens is moved from the position X2 through the position X0 to the position X1, the corresponding estimation values decrease in that order, and the processing proceeds to step S123. If they do not satisfy the equation (122), the processing proceeds to step S124.
  • In step S123, the weight data Wi is added to the count-down value Dcnt, and then the processing proceeds to step S126.
  • The processing in step S124 is carried out to determine whether the estimation value Ei(X0) is larger than both Ei(X1) and Ei(X2) to some degree, rather than substantially equal to them. Put more simply, the processing determines whether, when the focus lens is moved in the Far direction from the position X2 through the position X0 to the position X1, the peak of the estimation values lies at Ei(X0). Specifically, it is determined whether or not
  • Ei(X1) × β3 < Ei(X0) and Ei(X2) × β3 < Ei(X0)  (124)
  • are satisfied. If the estimation values satisfy the equation (124), it means that when the focus lens is moved from the position X2 through the position X0 to the position X1, the peak of the estimation values is the estimation value Ei(X0), and the processing proceeds to step S125. If they do not satisfy the equation (124), the processing proceeds to step S126.
  • In step S125, the weight data Wi is added to the flat-count value Fcnt, and then the processing proceeds to step S126.
  • In step S126, the number i is incremented, and then the processing proceeds to step S127.
  • In step S127, it is determined whether or not the number i has reached 24, because the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63 generate twenty-four estimation values. If i is 24, it is determined that calculations for all the estimation values are finished, and the processing proceeds to step S128. If i is not 24, the processing loop formed of steps S118 to S127 is carried out again.
  • In step S128, the count-up value Ucnt, the count-down value Dcnt, and the flat-count value Fcnt are compared to determine which is the largest. If the count-up value Ucnt is the largest, the processing proceeds to step S129. If the count-down value Dcnt is the largest, the processing proceeds to step S130. If the flat-count value Fcnt is the largest, the processing proceeds to step S131.
  • In step S129, the microcomputer 64 determines that the direction toward the position X1 is the hill-climbing direction of the estimation value, i.e., the direction in which the lens comes into focus, and supplies to the CPU 4 a signal designating the Far direction as the lens movement direction.
  • In step S130, the microcomputer 64 determines that the direction toward the position X2 is the hill-climbing direction of the estimation value, i.e., the direction in which the lens comes into focus, and supplies to the CPU 4 a signal designating the Near direction as the lens movement direction.
  • In step S131, the microcomputer 64 determines that the position X0 is the position at which the lens is in focus, and then the processing proceeds to step S218.
  • The operations in steps S118 to S131 will now be described plainly with reference to the example shown in FIG. 19, which is a diagram showing, by way of example, the transition of the estimation values Ei(X2), Ei(X0), and Ei(X1) obtained when the lens is located at the lens positions X2, X0, and X1.
  • Initially, it is determined in step S118 whether or not the number i is a non-use number. In this example, it is assumed that all the numbers i correspond to estimation values which can be used.
  • In the first processing loop, the estimation values E1 are examined. Since E1(X2) < E1(X0) < E1(X1) holds, this relationship satisfies the condition in step S120 and the processing proceeds to step S121, where the calculation Ucnt = 0 + W1 is carried out.
  • In the second processing loop, the estimation values E2 are examined. Since E2(X2) < E2(X0) < E2(X1) holds, this relationship satisfies the condition in step S120 and the processing proceeds to step S121, where the calculation Ucnt = W1 + W2 is carried out.
  • In the third, fourth, and fifth processing loops, calculations similar to those in the first and second processing loops are carried out. In step S121 of the fifth processing loop, the calculation Ucnt = W1 + W2 + W3 + W4 + W5 is carried out.
  • In the sixth processing loop, the estimation values E6 are examined. Since E6(X2) < E6(X0) > E6(X1) holds, this relationship satisfies the condition in step S124 and the processing proceeds to step S125, where the calculation Fcnt = 0 + W6 is carried out.
  • After the processing loop has been carried out twenty-four times as described above, the following calculations have finally been carried out:
  • Ucnt = W1 + W2 + W3 + W4 + W5 + W7 + W8 + W9 + W11 + W13 + W14 + W15 + W17 + W18 + W21 + W24
  • Dcnt = W10 + W16 + W22
  • Fcnt = W6 + W12 + W19
  • If the values of the weight data Wi shown by way of example in FIG. 7 are substituted into the above count-up value Ucnt, count-down value Dcnt, and flat-count value Fcnt, the following results are obtained:
  • Ucnt = 124
  • Dcnt = 13
  • Fcnt = 18
  • Therefore, since the count-up value Ucnt is the largest of the three at the time of the determination in step S128, the processing proceeds to step S129 in the example shown in FIG. 19. As a result, the direction toward X1 is determined as the focus direction.
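  • The weighted vote of steps S117 to S131 can be condensed into the following sketch. decide_direction is a hypothetical name, and ties are broken in the order up, down, flat, which the flowchart leaves unspecified.
```python
BETA3 = 1.03  # coefficient from the text

def decide_direction(e_x2, e_x0, e_x1, weights, non_use):
    """Each e_* maps i -> Ei at one lens position; weights maps i -> Wi.
    Returns 'far', 'near', or 'focused' (peak already at X0)."""
    ucnt = dcnt = fcnt = 0
    for i, w in weights.items():
        if i in non_use:
            continue
        if e_x2[i] * BETA3 < e_x0[i] and e_x0[i] * BETA3 < e_x1[i]:
            ucnt += w   # rising toward Far (steps S120/S121)
        elif e_x1[i] * BETA3 < e_x0[i] and e_x0[i] * BETA3 < e_x2[i]:
            dcnt += w   # rising toward Near (steps S122/S123)
        elif e_x1[i] * BETA3 < e_x0[i] and e_x2[i] * BETA3 < e_x0[i]:
            fcnt += w   # peak at X0 (steps S124/S125)
    if ucnt >= dcnt and ucnt >= fcnt:
        return 'far'      # step S129
    if dcnt >= fcnt:
        return 'near'     # step S130
    return 'focused'      # step S131
```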
  • Steps S200 to S221 determine the lens position at which the estimation value becomes maximum. These processings are carried out by the microcomputer 64 and will specifically be described with reference to FIGS. 13 to 15.
  • For a clear explanation of the processing in step S200 and the succeeding steps, the following equations are defined.
  • X1 = X0 + ΔX
  • X2 = X0 + 2 × ΔX
  • X3 = X0 + 3 × ΔX
  • ...
  • Xk = X0 + k × ΔX
  • Xk+1 = X0 + (k+1) × ΔX
  • ...
  • Xk+j = X0 + (k+j) × ΔX  (200)
  • Since the estimation value is sampled in every field in this embodiment, the distance ΔX is defined as the distance by which the focus lens is moved in one field period. The distance ΔX also has a polarity determined by the lens movement direction obtained in the processings in steps S100 to S130: if the lens movement direction is the Far direction, ΔX is set to a positive value, and if the lens movement direction is the Near direction, ΔX is set to a negative value.
  • The sampling frequency is not limited to that of this embodiment. For example, the sampling may be carried out twice per field, and such a change can properly be effected.
  • In step S200, k = 1 is set.
  • In step S201, the microcomputer 64 issues to the CPU 4 a command to move the lens to the position Xk, which is defined by equation (200) as Xk = X0 + k × ΔX.
  • In step S[0259] 202, the microcomputer 64 stores in the RAM 66 the estimation values E1(Xk) to the estimation values E24(Xk) newly generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63. The twenty-four estimation values Ei are stored as a table shown in FIG. 16.
  • In step S[0260] 203, i=1 and j=1 are set, and the count-up value Ucnt, the count-down value Dcnt and the flat count value Fcnt are reset.
  • In step S[0261] 204, it is determined whether or not the number of i is defined as the non-use number. If the number of i is not defined as the non-use number, then the processing proceeds to step S206. If the number of i is defined as the non-use number, then in step S205 the value of i is incremented and the processing returns to step S204 again.
  • In step S[0262] 206, it is determined whether or not the estimation values Ei(Xk) obtained when the focus lens is moved from a position Xk−1 to a position Xk are increased to a certain degree or more as compared with the estimation values Ei(Xk−). Specifically, it is determined based on a calculation of
  • E i(X k−1)×β4 <E i(X k)  (206)
  • is satisfied, where β4 is an experimentally obtained coefficient set to β4 = 1.05 in this embodiment. Satisfaction of the condition of the equation (206) means that the estimation values Ei(Xk) have increased to a certain degree or more as compared with the estimation values Ei(Xk−1); in this case, the processing proceeds to step S207. If the condition of the equation (206) is not satisfied, the processing proceeds to step S209.
  • In step S207, since the estimation values Ei(Xk) have increased to a certain degree or more as compared with the estimation values Ei(Xk−1), 2-bit data "01" indicative of an increase of the estimation value is stored in the RAM 66 as U/D information (up/down information) in connection with the estimation value Ei(Xk).
  • In step S208, similarly to step S121, the weight data Wi is added to the count-up value Ucnt, and then the processing proceeds to step S214.
  • In step S209, it is determined whether or not the estimation values Ei(Xk) obtained when the focus lens is moved from the position Xk−1 to the position Xk have decreased to a certain degree or more as compared with the estimation values Ei(Xk−1). Specifically, it is determined whether or not
  • Ei(Xk) × β4 < Ei(Xk−1)  (209)
  • is satisfied. Satisfaction of the condition of the equation (209) means that the estimation values Ei(Xk) have decreased to a certain degree or more as compared with the estimation values Ei(Xk−1); in this case, the processing proceeds to step S210. If the condition of the equation (209) is not satisfied, the processing proceeds to step S212.
  • In step S210, since the estimation values Ei(Xk) have decreased to a certain degree or more as compared with the estimation values Ei(Xk−1), 2-bit data "10" indicative of a decrease of the estimation value is stored in the RAM 66 as the U/D information (up/down information) in connection with the estimation value Ei(Xk).
  • In step S211, similarly to step S123, the weight data Wi is added to the count-down value Dcnt, and then the processing proceeds to step S214.
  • In consideration of the conditions in steps S206 and S209, the fact that the processing reaches step S212 means that the estimation values Ei(Xk) obtained when the focus lens is moved from the position Xk−1 to the position Xk have not changed to a certain degree or more relative to the estimation values Ei(Xk−1).
  • Therefore, in step S212, 2-bit data "00" indicative of flatness of the estimation value is stored in the RAM 66 as the U/D information (up/down information) in connection with the estimation value Ei(Xk).
  • In step S213, similarly to step S125, the weight data Wi is added to the flat-count value Fcnt, and then the processing proceeds to step S214.
  • In step S214, the value of i is incremented, and then the processing proceeds to step S215.
  • In step S215, it is determined whether or not the value of i is 24. If it is, it is determined that calculations for all the estimation values are finished, and the processing proceeds to step S216. If it is not, the processing loop from step S204 to step S215 is carried out repeatedly until the value of i reaches 24.
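  • One pass of the loop from step S204 to step S215 can be sketched as follows; classify_step is a hypothetical helper that returns both the weighted counts compared in step S216 and the 2-bit U/D codes stored in the RAM 66.
```python
BETA4 = 1.05  # coefficient from the text

def classify_step(e_prev, e_curr, weights, non_use):
    """Compare Ei(Xk) with Ei(Xk-1) for every usable i. Returns the
    weighted up/down/flat counts and the per-value 2-bit U/D codes
    ('01' increased, '10' decreased, '00' flat)."""
    ucnt = dcnt = fcnt = 0
    ud = {}
    for i, w in weights.items():
        if i in non_use:
            continue
        if e_prev[i] * BETA4 < e_curr[i]:    # step S206: increased
            ud[i] = '01'
            ucnt += w                        # step S208
        elif e_curr[i] * BETA4 < e_prev[i]:  # step S209: decreased
            ud[i] = '10'
            dcnt += w                        # step S211
        else:                                # step S212: flat
            ud[i] = '00'
            fcnt += w                        # step S213
    return ucnt, dcnt, fcnt, ud
```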
  • The processing in step S216 determines whether or not the count-down value Dcnt is the largest among the count values. This processing will be described by using the example shown in FIG. 20, which is a table showing the respective estimation values and the corresponding up/down information stored in the RAM 66. As shown in FIG. 20, the microcomputer 64 stores in the RAM 66 the respective estimation values and the up/down information set in connection with them so that these values correspond to the position Xk to which the lens has been moved.
  • When the lens is located at the position Xk, if the processing loop from step S204 to step S215 is carried out repeatedly, the count-up value Ucnt, the count-down value Dcnt, and the flat-count value Fcnt are as follows:
  • Ucnt = W1 + W2 + W4 + W5 + W8 + W9 + W11 + W14 + W15 + W16 + W19 + W23
  • Dcnt = W7 + W10 + W17 + W18 + W20 + W21 + W24
  • Fcnt = W3 + W6 + W12 + W13 + W22
  • If the values of the weight data Wi shown by way of example in FIG. 7 are substituted into the above count values, the following results are obtained:
  • Ucnt = 95
  • Dcnt = 34
  • Fcnt = 31
  • Specifically, although each individual value may increase, decrease, or remain unchanged, it can be judged, in consideration of all the estimation values, that the estimation value as a whole has increased.
  • The estimation value obtained by such a synthetic judgement in step S216 will hereinafter be referred to as "the total estimation value". In other words, the processing in step S216 determines whether or not the total estimation value has decreased.
  • It will now be described how the estimation values generated when the lens is located at the position Xk+1 are judged, as shown in FIG. 20. When the lens is located at the position Xk+1, if the processing loop from step S204 to step S215 is carried out repeatedly, the count-up value Ucnt, the count-down value Dcnt, and the flat-count value Fcnt are as follows:
  • Ucnt = W5 + W11 + W12 + W17 + W18 + W20 + W23
  • Dcnt = W1 + W2 + W3 + W6 + W7 + W8 + W10 + W13 + W14 + W15 + W16 + W19 + W21 + W22 + W24
  • Fcnt = W4 + W9
  • If the values of the weight data Wi shown by way of example in FIG. 7 are substituted into the above count values, the following results are obtained:
  • Ucnt = 29
  • Dcnt = 113
  • Fcnt = 18
  • Study of the above results leads to the determination that the total estimation value has decreased. If it is determined in step S216 that the total estimation value has decreased, the processing proceeds to step S217.
  • In step S217, the value of j is incremented, and then the processing proceeds to step S218. The value of j indicates how many times in succession the determination result in step S216 has been YES, i.e., how many times the total estimation value has decreased continuously.
  • Assuming that the first lens position at which the total estimation value starts continuously decreasing is the position Xk+1, it is determined in step S218 whether or not the lens movement distance from the position Xk to the position Xk+j is larger than D × n. The equation actually used for the determination is
  • ΔX × j ≧ D × n  (218)
  • where D depicts the focal depth of the focus lens and n depicts a previously set coefficient. Study of experimental results reveals that when the value of n is set within the range of 1 ≦ n ≦ 10, the autofocus operation can be realized at an optimum speed.
  • The determination carried out in step S218 will be described with reference to FIG. 21. The abscissa of the graph shown in FIG. 21 represents the lens position X, and the ordinate represents the estimation value E(X) corresponding to the lens position.
  • When j = 1, the total estimation value is that obtained at the lens position where the total estimation value decreased for the first time, and hence the lens position corresponding to j = 1 is the lens position Xk+1. Therefore, the left side (ΔX × j) of the equation (218) represents the distance between the lens position Xk, located immediately before the total estimation value decreased, and the first lens position Xk+1 where the total estimation value starts decreasing. Study of FIG. 21 reveals that the result of the determination in step S218 is NO when j = 1.
  • When j = 2, the total estimation value is that obtained at the lens position where the total estimation value has decreased continuously twice, and hence the lens position corresponding to j = 2 is the lens position Xk+2. Therefore, as shown in FIG. 21, the left side (ΔX × j) of the equation (218) represents the distance between the lens position Xk, located immediately before the total estimation value decreased, and the lens position Xk+2 where the total estimation value has decreased continuously twice. Study of FIG. 21 reveals that the result of the determination in step S218 is NO when j = 2.
  • If, on the other hand, it is determined in step S216 that the count-down value Dcnt does not have the largest value, then it is determined that the total estimation value has not decreased, and the processing proceeds to step S219.
  • In step S219, the value of j is set to 0. This processing resets the value of j, which indicates how many times the total estimation value has decreased continuously. Since the fact that the processing reaches step S219 means that it is determined in step S216 that the total estimation value has not decreased, the continuous decrease of the total estimation value has stopped at the time of this determination. Accordingly, in step S219 the value of j is reset.
  • Since the value of j is reset whenever the continuous decrease of the total estimation value stops, even if a certain estimation value E(Xk) takes a maximum value produced merely by noise, the value of j is reset in the processing loop for the estimation values E(Xk+1), E(Xk+2), or E(Xk+3), and hence the estimation value E(Xk) is prevented from being judged the largest value.
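  • The decrease-counting and reset logic of steps S216 to S220 then reduces to the following sketch, with hypothetical argument names; the absolute value of ΔX is used because ΔX carries the movement polarity.
```python
def update_decrease_count(total_decreased, j, delta_x, depth_d, n):
    """total_decreased: result of step S216 (Dcnt is the largest).
    Returns (stop, new_j): stop becomes True once the lens has moved
    n focal depths past the peak (step S218); an interrupted decrease,
    e.g. a single noise-induced peak, resets j instead (step S219)."""
    if total_decreased:
        j += 1                                   # step S217
        if abs(delta_x) * j >= depth_d * n:      # equation (218)
            return True, j                       # go interpolate (S221)
    else:
        j = 0                                    # decrease interrupted
    return False, j
```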
  • In step S220, the value of k is incremented in order to move the focus lens further. Then, the processing returns to step S201.
  • If the result of the determination in step S218 is YES, the processing proceeds to step S221, in which a lens position Xg where the estimation value becomes maximum is calculated by interpolation. In the following description, the position Xg is referred to as the just focus position. The interpolation is a barycentric calculation:
  • Xg = ∫[Xstart, Xend] E(X) · X dX / ∫[Xstart, Xend] E(X) dX  (10)
  • The reason for calculating the just focus position Xg by the barycentric calculation is that, even if there is noise, the noise seldom influences the calculation result of the focus position, and that, unlike a method of least squares, it is unnecessary to determine the shape of the estimation value curve E(X) (since the number of shapes of objects is infinite, the estimation value curve E(X) cannot be modeled).
  • Xstart and Xend in the above equation depict the start and end positions of the integration range. X depicts a lens position, and E(X) depicts the estimation value obtained when the focus lens is located at the lens position X.
  • This barycentric calculation (the calculation according to the equation (10)) permits the lens position where the estimation value becomes maximum to be calculated even if the lens position Xg and the sampling points of the estimation value do not coincide.
  • The integration range must be set properly to increase the accuracy of the barycentric calculation according to the equation (10).
  • A method of setting the integration start position Xstart and end position Xend of the equation (10) will be described with reference to FIG. 21, which shows an example of the estimation value curve. In the example shown in FIG. 21, it is determined in step S218, when the focus lens has been moved to the position Xk+2, that the estimation value E(Xk) sampled at the lens position Xk two fields before is the maximum among all the sampled estimation values. In this embodiment, the focus lens position Xk+2 at which the maximum sampled estimation value is determined is employed as the end position Xend of the integration range in the equation (10). As shown in FIG. 21, let X′ be the focus lens position at which the estimation value curve takes the same value as the estimation value E(Xk+2) sampled at the integration end position, on the opposite side of the integration end position Xk+2 relative to the lens position Xk of the maximum sampled estimation value E(Xk). In this embodiment, the focus lens position X′ is employed as the integration start position Xstart in the equation (10). Specifically, the integration range in the equation (10) extends from the lens position X′ to the lens position Xk+2.
  • However, since the sampled estimation values are not continuous in the X-axis direction (the lens movement direction) but are discrete values obtained in every field, the continuous integration of the equation (10) cannot be carried out directly. Therefore, the values between adjacent sampling points are obtained by interpolation.
  • In this embodiment, the discrete calculation according to the following equation (11) is carried out to realize a high-accuracy interpolation:
  • Xg = Σ[X = Xstart to Xend] E(X) · X · ΔX / Σ[X = Xstart to Xend] E(X) · ΔX  (11)
  • If the lens position X′ obtained as the integration start position Xstart coincides with one of the sampled lens positions (X0 to Xk−1), that lens position is employed as the integration start position in the equation (11). If, on the other hand, the lens position X′ does not coincide with any of the sampled lens positions, then a lens position whose estimation value is smaller than E(Xk+2) and which is closest to the lens position X′ is selected from the sampled lens positions (X0 to Xk−1) stored in the RAM 66. In the example shown in FIG. 21, the lens position Xk−3, whose estimation value is smaller than E(Xk+2) and which is closest to the lens position X′, is selected as the integration start position Xstart. Therefore, the integration range in the equation (11) extends from the lens position Xk−3 to the lens position Xk+2.
  • Since it is sufficient to obtain a position close to the lens position X′ as the integration start position Xstart, the present invention is not limited to this method.
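  • Under these definitions, the discrete barycentric calculation of the equation (11) is a weighted average of the sampled lens positions; with a uniform step, the ΔX factors cancel. A minimal sketch:
```python
def barycentric_focus(samples):
    """samples: list of (lens position X, estimation value E(X)) over
    the integration range, e.g. from Xk-3 to Xk+2 in FIG. 21.
    Returns the just focus position Xg of equation (11)."""
    numerator = sum(e * x for x, e in samples)
    denominator = sum(e for _, e in samples)
    return numerator / denominator
```
  • For example, barycentric_focus([(0, 1), (1, 4), (2, 1)]) returns 1.0, locating the peak of a symmetric curve even though the peak need not lie on a sample point.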
  • In this case, although the integration range is not balanced about the maximum-estimation-value position Xg, in practice such imbalance hardly influences the calculation accuracy, for the following reasons.
  • (1) If the movement speed of the focus lens is large in the vicinity of the peak (maximum) estimation value, then the estimation value E(Xk−3) is sufficiently small as compared with an ordinary estimation value peak and hence does not contribute largely to the integration. Therefore, the imbalance hardly influences the calculation accuracy of the just focus position Xg.
  • (2) If the movement speed of the focus lens is small in the vicinity of the peak (maximum) estimation value, then the estimation value E(Xk−3) contributes largely to the integration. However, since the interval between adjacent estimation values (data) is small, the calculation accuracy of the just focus position Xg is improved, which cancels the lowering of the calculation accuracy resulting from that large contribution.
  • The estimation value used in the interpolation calculation of the equation (11) will be described. It is selected from the twenty-four estimation values E1 to E24 generated by the horizontal-direction estimation value generating circuit 62 and the vertical-direction estimation value generating circuit 63. Specifically, from among the estimation values whose curves increase and decrease in the same manner as the total estimation value, the estimation value having the largest weight data Wi is selected.
  • This selection will be described with reference to the examples shown in FIGS. 20 and 21. When the lens is located at the lens positions Xk−3, Xk−2, Xk−1, and Xk, it is determined in step S216 that the total estimation value increases; when the lens is located at the lens positions Xk+1 and Xk+2, it is determined in step S216 that the total estimation value decreases. The estimation values that, like the total estimation value, increase at the lens positions Xk−3, Xk−2, Xk−1, and Xk and decrease at the lens positions Xk+1 and Xk+2 are the estimation values E1, E2, E14, and E16 in the example shown in FIG. 20. Study of the relationship between the estimation values E and the weight data Wi shown in FIG. 7 reveals that the values of the weight data W1, W2, W14, and W16 set for these selected estimation values are 20, 15, 5, and 3, respectively. Since the estimation value having the largest weight data Wi is the estimation value E1, the barycentric calculation according to the equation (11) is carried out by using E1(Xk−3), E1(Xk−2), E1(Xk−1), E1(Xk), E1(Xk+1), and E1(Xk+2). Thus, the lens position Xg where the estimation value becomes maximum is obtained with high accuracy. The barycentric interpolation allows the lens position Xg to be calculated with high accuracy even if, as shown in FIG. 21, no sample point coincides with the lens position Xg.
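  • That selection rule can be sketched as follows; the argument names are hypothetical, with ud_table holding, per estimation number, the sequence of U/D codes recorded in steps S207, S210, and S212.
```python
def pick_interpolation_value(ud_table, total_pattern, weights, non_use):
    """Among the usable estimation values whose up/down pattern matches
    that of the total estimation value, return the number with the
    largest weight Wi (E1 in the FIG. 20 example), or None."""
    candidates = [i for i, pattern in ud_table.items()
                  if i not in non_use and pattern == total_pattern]
    return max(candidates, key=lambda i: weights[i]) if candidates else None
```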
  • Since the lens position Xg where the estimation value becomes maximum is approximately the position that divides the area under the curve over the above integration range into two equal halves, the position dividing the area into two equal halves may be employed as the lens position Xg. Since the lens position Xg is also approximately the middle point of the above integration range, the middle point may likewise be employed as the lens position Xg.
  • While in the above embodiment the calculation according to the equation (11) is carried out by using only one estimation value selected from the twenty-four estimation values generated by the estimation value generating circuits, a plurality of estimation values selected from the twenty-four may instead be weighted with their weight data and combined, the weighted result being employed as the estimation value used in the equation (11).
  • When the maximum estimation value Eg(Xk) is defined, the lower-limit estimation value corresponding to it is defined as Eg(Xk+1). The maximum estimation value Eg(Xk) is updated in every field even after the lens has been fixed at the focus position, while the lower-limit estimation value remains fixed at Eg(Xk+1).
  • In step S222, the microcomputer 64 supplies a control signal to the CPU 4 so that the focus lens is moved to the lens position Xg.
  • In step S223, it is determined whether or not a command to track the object has been issued to the CPU 4. The command to track the object controls the tilt/pan operation of the video camera so as to track the movement of the object and also changes the position of the estimation value detection window used for the autofocus operation. This track command is issued to the CPU 4, for example, when the cameraman presses a track command button provided in the operation unit 5. If the track command is supplied from the operation unit 5, the processing proceeds to step S300. If it is not supplied, the processing proceeds to step S224.
  • In step S224, it is determined whether or not a command to stop the autofocus operation has been issued. If the cameraman operates a button to cancel the autofocus mode, the processing proceeds to step S225, wherein the mode is shifted to the manual focus mode.
  • If it is determined in step S224 that the command to stop the autofocus mode has not been issued, the processing proceeds to step S226, wherein the maximum estimation value Eg(Xk) and the lower-limit estimation value Eg(Xk+1) are compared. If the maximum estimation value Eg(Xk) becomes smaller than the lower-limit estimation value Eg(Xk+1) due to a change of the object or the like, the processing proceeds to step S227, wherein the autofocus operation is restarted and the processing returns to step S100.
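  • The restart condition of step S226 is a single comparison; a sketch with hypothetical names:
```python
def must_restart_autofocus(eg_current, eg_lower_limit):
    """eg_current: the maximum estimation value Eg(Xk), re-sampled every
    field after focusing; eg_lower_limit: Eg(Xk+1), fixed at focusing
    time. True when the object has changed enough to require refocusing
    (step S227: return to step S100)."""
    return eg_current < eg_lower_limit
```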
  • An operation of recognizing a set object will be described with reference to FIG. 16, which is a flowchart therefor starting from step S300. The flowchart starting from step S300 shows processing carried out by the CPU 4. The processing will also be described with reference to the example shown in FIG. 22 to make the flowchart easier to follow.
  • FIG. 22 shows a state in which a round object A and a rectangular object B are imaged. It is assumed in this example that both the object A and the object B have the same color as that of the set target object. As shown in FIG. 22, with the raster scan start point (the uppermost left position of the picture screen) as the origin, the horizontal and vertical scanning directions are defined as the X-axis and Y-axis directions, respectively. Therefore, the coordinates of the raster scan start point, of the raster scan end point, and of the center of the picture screen are (0, 0), (768, 240), and (384, 120), respectively.
  • If in step S223 the CPU 4 receives the automatic track command from the operation unit 5, the processing proceeds to step S300 in the flowchart shown in FIG. 16.
  • In step S300, it is determined whether or not the object to be the target object has been set by the cameraman's operation of the operation unit 5. A method of setting the target object will be described. The cameraman carries out image pickup so that the desired object is positioned at the center of the picture screen. When the cameraman presses a target object confirmation button provided in the operation unit 5 while the target object is located at the center of the picture screen, the CPU 4 recognizes the object located at the center of the picture screen as the desired target object and stores color information on this object in the RAM 4a. The method of setting the target object is not limited to the above; for example, an object having a predetermined color (e.g., a flesh color or the like) may be set as the target object. After the color information of the object set as the target object is stored in the RAM 4a in step S300, the processing proceeds to step S301.
  • In step S301, the CPU 4 selects the object mode most suitable for the target object set in step S300 from the four object modes (modes 0, 1, 2, 3) described above. The CPU 4 controls the switching operations of the switch circuits 72a, 72b, 72c, and 72d in response to the selected object mode.
  • When in step S301 the CPU 4 selects the object mode and controls the switching operations of the switch circuits 72a, 72b, 72c, and 72d, the area detecting circuit 38 detects the areas where pixel data indicating the same color component as that of the object set as the target object exist. An important point of the present invention is that this detection is carried out not by the CPU 4 but by the area detecting circuit 38 provided as a hardware circuit. In other words, since the area detecting circuit 38 is formed of a hardware circuit as described above, it can determine satisfaction of the conditions for all the pixel data from the encoder 37 in real time. Since the operation of the area detecting circuit has been described with reference to FIG. 9, it will not be described again.
• In step S302, the CPU 4 recognizes, based on the signal indicative of the chip circuit number supplied from the area detecting circuit 38, in which areas the pixel data indicative of the same color as that of the target object exist. Thus, the CPU 4 can select only the areas where data indicative of the same color as that of the object set as the target object exist. In the example shown in FIG. 22, eight areas A068, A069, A084, A085, A086, A087, A102 and A103 are selected as the areas having the same color as that of the target object.
• In step S303, the CPU 4 reads out all the pixel data of the areas selected in step S302 from the frame memory 39 in raster scan order, as sketched below. Pixel data of the areas which were not selected in step S302 are not read out. Since the same address is supplied to the frame memory 39, each pixel data item formed of Y data, (R−Y) data and (B−Y) data is read out. In the example shown in FIG. 22, only the pixel data of the eight areas A068, A069, A084, A085, A086, A087, A102 and A103 are read out from the frame memory 39 by the CPU 4; the pixel data of all other areas are not. Since the CPU 4 determines the areas whose pixel data are to be read out from the frame memory 39 based on the detection result of the area detecting circuit 38 as described above, the amount of pixel data which the CPU 4 receives from the frame memory 39 is reduced. Therefore, the CPU 4 can process all the pixel data supplied from the frame memory 39 on a real-time basis.
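As a rough illustration of this selective readout, the following sketch walks the picture in raster-scan order and yields only pixels whose area number belongs to the selected set; the area grid geometry is an assumption, since the specification identifies areas only by number.

AREA_W, AREA_H, AREAS_PER_ROW = 48, 16, 16   # assumed grid geometry

def read_selected_areas(frame_memory, selected_areas):
    """Yield (x, y, (Y, R-Y, B-Y)) in raster-scan order, skipping pixels
    whose area was not flagged by the area detecting circuit, e.g.
    selected_areas = {68, 69, 84, 85, 86, 87, 102, 103} for FIG. 22."""
    for y in range(240):
        row = y // AREA_H
        for x in range(768):
            if row * AREAS_PER_ROW + x // AREA_W in selected_areas:
                yield x, y, frame_memory[y][x]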
• In step S304, based on the read pixel data formed of the Y data, the (R−Y) data and the (B−Y) data, the CPU 4 determines whether or not the conditions shown in equation (700) are satisfied when the mode 0 is selected as the object mode. When the mode 1 is selected, the CPU 4 determines whether or not the conditions shown in equation (701) are satisfied; when the mode 2 is selected, those of equation (702); and when the mode 3 is selected, those of equation (703). When the CPU 4 carries out the calculation for determining whether or not the conditions shown in equation (700), (701), (702) or (703) are satisfied, the luminance signal Y, the color difference signal |R−Y| and the color difference signal |B−Y| defined in those equations are the signals read out from the frame memory 39. Each of the programs for determining whether or not the conditions are satisfied is previously stored in the RAM 4a. A sketch of this mode-dependent determination follows.
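Since equations (700) to (703) are defined earlier in the specification and are not reproduced here, the sketch below shows only the dispatch on the object mode; its inequalities and thresholds are placeholders, not the patent's values.

def satisfies_conditions(mode, lum, ry, by):
    """True if a (Y, R-Y, B-Y) sample matches the target color under the
    condition set of the selected object mode (thresholds hypothetical)."""
    if mode == 0:
        return lum > 60 and abs(ry) < 20 and abs(by) < 20    # cf. (700)
    if mode == 1:
        return lum > 60 and abs(ry) >= 20 and abs(by) < 20   # cf. (701)
    if mode == 2:
        return lum > 60 and abs(ry) < 20 and abs(by) >= 20   # cf. (702)
    return abs(ry) >= 20 and abs(by) >= 20                   # cf. (703)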
• If all the pixel data stored in the frame memory 39 were subjected to the above processing of determining whether or not the conditions are satisfied, the processing would have to be carried out too many times to be completed on a real-time basis. However, according to this embodiment, since only the pixel data of the selected areas are subjected to this processing, it is possible for the CPU 4 to determine on a real-time basis whether or not the conditions are satisfied.
• Therefore, the CPU 4 obtains, for each pixel data item of the selected areas, a result of the determination of whether or not it satisfies the conditions defined by equation (700), (701), (702) or (703). That pixel data satisfy these conditions means that the color indicated by the pixel data is similar to the color of the object set as the target object.
• Further, in step S304, the CPU 4 generates an object information table, which will be described later on, while carrying out the above condition determination, and stores it in the RAM 4a. In the object information table, there are recorded coordinate information indicating on which line and from and to which pixel positions the color of the object set as the target exists, and an object identification number indicating which number is allocated to the object having the same color as that of the target object.
• The object information table will be further described in detail with reference to FIG. 23, which shows the object information table obtained from the example shown in FIG. 22. A line position indicates, with a Y-axis coordinate, on which line an object having the same color as that of the target object exists. A start pixel position indicates, with an X-axis coordinate, the first pixel data of the object having the same color as that of the target object, and an end pixel position similarly indicates, with an X-axis coordinate, its last pixel data. An object identification number indicates which number the object recognized as having the same color as that of the target object has.
• For example, since the object A shown in FIG. 22 exists in an area ranging from the 245th pixel to the 246th pixel of the 161st line, as shown in FIG. 23, data "161", "245", "246" are respectively stored as the line number, the start pixel position and the end pixel position in the object information table, and the data "1" is further stored therein as the object identification number indicative of the object A. Data of the rest of the object existing from the 162nd line to the 190th line are stored similarly and hence need not be described. Since on the 191st line the object A exists within the range from the 221st pixel to the 258th pixel and the object B exists between the 318th pixel and the 319th pixel as shown in FIG. 23, data "191", "221", "258" and "1" as information indicative of the object A are respectively stored as the line number, the start pixel position, the end pixel position and the object identification number in the object information table. Further, data "191", "318", "319" and "2" as information indicative of the object B are respectively stored therein. A sketch of how such a table can be constructed is given below.
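The sketch below collapses the matching pixels of each line into runs and lets a run inherit the identification number of any run it overlaps on the previous line. This is a simplified illustration (it does not, for instance, merge identification numbers when two separate runs later join), not the patent's own bookkeeping.

from itertools import groupby

def build_object_table(matching_pixels):
    """matching_pixels: (x, y) pairs of pixels that satisfied the color
    conditions, in raster-scan order. Returns rows of
    (line, start_pixel, end_pixel, object_id), e.g. (191, 221, 258, 1)
    for the object A of FIG. 23."""
    table, prev_runs, next_id = [], [], 1
    for y, group in groupby(matching_pixels, key=lambda p: p[1]):
        xs = [x for x, _ in group]
        runs, start = [], xs[0]
        for a, b in zip(xs, xs[1:] + [None]):   # split into contiguous runs
            if b != a + 1:
                runs.append((start, a))
                start = b
        cur_runs = []
        for s, e in runs:
            # inherit the id of a run overlapping on the previous line,
            # otherwise open a new object
            oid = next((i for ps, pe, i in prev_runs
                        if s <= pe and e >= ps), None)
            if oid is None:
                oid, next_id = next_id, next_id + 1
            table.append((y, s, e, oid))
            cur_runs.append((s, e, oid))
        prev_runs = cur_runs
    return table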
• In step S305, a window of a minimum size surrounding each object having the same color as that of the target object is set. In the example shown in FIG. 22, a window WA defined within the range of 216≦X≦273 and 161≦Y≦202 is set as the minimum window surrounding the object A, and a window WB defined within the range of 309≦X≦358 and 191≦Y≦231 is set as the minimum window surrounding the object B. Deriving these windows from the object information table is sketched below.
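Given such a table, each minimum window is simply the bounding box of the rows carrying one identification number; a minimal sketch:

def minimum_windows(table):
    """From rows (line, start, end, object_id), return
    {object_id: (x_min, x_max, y_min, y_max)} - the minimum window
    surrounding each object, such as WA and WB in FIG. 22."""
    windows = {}
    for line, start, end, oid in table:
        x0, x1, y0, y1 = windows.get(oid, (start, end, line, line))
        windows[oid] = (min(x0, start), max(x1, end),
                        min(y0, line), max(y1, line))
    return windows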
• In step S306, a value of m is initially set to the minimum number among the object identification numbers stored in the object information table. The symbol m is a variable ranging from the minimum to the maximum object identification number stored in the object information table.
• In step S307, it is determined, based on a window center vector stored in a target object log table described later on, whether or not an expected position coordinate exists within the mth window set in step S305. This determination is carried out to decide which of the object A and the object B is the target object.
• The target object log table will be described with reference to FIG. 24, which shows an example of the target object log table. Information about the coordinate position of the object determined as the target object in every field is stored in this table. A field number is a temporarily allocated number which is reset every 30 fields and is allocated sequentially in every field. A window X coordinate is data indicating, with X-axis coordinates, the X-axis direction range of the window set in step S305. A window Y coordinate is data indicating, with Y-axis coordinates, the Y-axis direction range of the window set in step S305. The window center vector (ΔX, ΔY) is a vector indicating in which direction and by how much the center position of the window set in step S305 is displaced from the center position (X=384, Y=120) of the picture obtained by the image pickup.
• For example, the target object log table shown in FIG. 24 shows that at the time indicated by the field number 17 the window area set for the target object is defined by 312≦X≦362 and 186≦Y≦228. It also shows that the center position of the window is displaced from the center position of the picture obtained by the image pickup in the direction and by the distance indicated by the window center vector (−47, +87). The window X-axis coordinates, the window Y-axis coordinates and the window center vectors generated at the times indicated by the field numbers 18 and 19 are shown on the target object log table similarly and hence need not be described.
• Consider the window center vectors stored at the field numbers 17, 18 and 19. No considerable difference is found among the values indicated by these three window center vectors. This does not mean that the target object is not moving; rather, the window center vector indicates the movement vector of the target object relative to its position at the previous field. In this embodiment, the CPU 4 controls the pan/tilt drive mechanism 16 so that the center position of the window indicating the moving target object is located at the center of the picture obtained by the image pickup. Therefore, since the window center vector is defined as the vector indicative of the direction and distance of displacement from the center of the picture obtained by the image pickup, the window center vector indicates the movement vector of the target object relative to its position at the previous field.
• The processing in step S307 will be described again with reference to the above-mentioned target object log table. In step S307, it is determined whether or not the expected position coordinate exists in the first window WA (216≦X≦273, 161≦Y≦202) corresponding to the object A. The expected position coordinate is a position coordinate obtained from the window center vector of the previous field stored in the target object log table. For example, since the window center vector (ΔX19, ΔY19) set at the time indicated by the field number 19 is the vector (−49, +89), it can be expected that the window center vector obtained at the time indicated by the field number 20 will be substantially equal to the vector (−49, +89). Since the window center vector stored in the target object log table indicates the displacement amount and direction of the coordinates relative to the picture center coordinates (384, 120), the center position coordinate of the window set for the target object at the time indicated by the field number 20 can be expected to be (335, 209). This center position coordinate of the window is the expected position coordinate, as sketched below.
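A sketch of the expected-position computation and of the containment test of step S307, using the numbers from the text:

PICTURE_CENTER = (384, 120)

def expected_position(window_center_vector):
    """Apply the previous field's window center vector to the picture
    center; the vector (-49, +89) of field number 19 yields (335, 209)."""
    dx, dy = window_center_vector
    return PICTURE_CENTER[0] + dx, PICTURE_CENTER[1] + dy

def window_contains(window, point):
    """window: (x_min, x_max, y_min, y_max); the test of step S307."""
    x0, x1, y0, y1 = window
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1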
• Specifically, it is determined in step S307 that the expected position coordinate (335, 209) obtained from the target object log table does not exist in the window WA defined as the minimum window surrounding the object A within the range of 216≦X≦273 and 161≦Y≦202. Therefore, the CPU 4 determines that the object A is not the set target object, and the processing proceeds to step S308.
• In step S308, the value of m is incremented, and then the processing returns to step S307 again.
• Returning to step S307, it is determined therein whether or not the expected position coordinate (335, 209) exists in the window WB defined as the minimum window surrounding the object B within the range of 309≦X≦358 and 191≦Y≦231. In this step, since the expected position coordinate (335, 209) exists in the window WB, the CPU 4 determines that the object B is the set target object, and then the processing proceeds to step S309.
• In step S309, the CPU 4 stores the coordinates of the window WB defined within the range of 309≦X≦358 and 191≦Y≦231 as the window X-axis coordinate and the window Y-axis coordinate in the area, indicated by the field number 20, of the target object log table in the RAM 4a. The CPU 4 calculates the center coordinate of the window WB from the coordinates of the window WB and stores the displacement of this center coordinate from the picture center as the window center vector in the RAM 4a. In the above example, the vector (−52, +91) is stored therein as the window center vector. This update is sketched below.
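A sketch of this update; the integer rounding of the window center here is an assumption, which is why it yields (−51, +91) rather than the (−52, +91) recorded in the text:

PICTURE_CENTER = (384, 120)

def update_log_table(log_table, field_no, window):
    """Append the recognized window and its center vector (displacement
    of the window center from the picture center) to the log table."""
    x0, x1, y0, y1 = window                    # WB: (309, 358, 191, 231)
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    vector = (cx - PICTURE_CENTER[0], cy - PICTURE_CENTER[1])
    log_table.append((field_no, (x0, x1), (y0, y1), vector))
    return vector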
• In step S310, based on the window center vector newly stored in step S309, the CPU 4 controls the pan/tilt drive mechanism 16 so that the center of the window WB coincides with the center of the picture. Specifically, based on the window center vector, the CPU 4 supplies the control signal to the motor drive circuit 16b.
• In step S311, based on the window center vector, the CPU 4 supplies offset values to the estimation value generating circuit 62 of the focus control circuit 34. The offset values are supplied to the counters respectively provided in the window pulse generating circuits 625 and 635 shown in FIGS. 3 and 4. When no offset values are supplied to these counters, as shown in FIGS. 6A and 6B, the center coordinate of each of the windows W1 to W11 coincides with the center coordinate of the picture obtained by image pickup. However, when the offset values are supplied to the counters of the window pulse generating circuits 625 and 635 from the CPU 4, the count values of the respective counters are changed based on the offset values, and the center coordinates of the windows W1 to W11 are therefore shifted accordingly, as sketched below. When in step S311 the offset values have been supplied to the focus control circuit 34, the processing returns to step S100.
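Reduced to code, the offset supply hands the two components of the window center vector to the counters; the set_counter_offsets interface below is hypothetical:

def supply_window_offsets(focus_control, vector):
    """Shift the centers of the focus windows W1 to W11 from the picture
    center onto the tracked target by offsetting the counters of the
    window pulse generating circuits (interface assumed)."""
    dx, dy = vector
    focus_control.set_counter_offsets(h_offset=dx, v_offset=dy)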
• The present invention achieves the following effects. First, since a plurality of estimation values can be obtained by combining a plurality of filter coefficients and a plurality of window sizes, it is possible to handle various objects.
• Since weight data are allocated to the estimation value generating circuits and hence a total estimation value can be obtained based on the plurality of estimation values and the weight data respectively corresponding to them, the accuracy of the finally obtained estimation value is improved. As the accuracy of the estimation value is improved, the estimation-value curve describes a smooth parabola around the focus point, which allows high-speed determination of the maximum estimation value. Therefore, the autofocus operation itself can be carried out at high speed.
• Since the estimation values determined to be improper when the total estimation value is calculated are identified among the plurality of estimation values and excluded from the determination of the total estimation value, the accuracy of the estimation values is further improved. For example, if a proper estimation value cannot be obtained with a small window, then the lens is focused on an object by using the estimation value corresponding to a window larger than that small window. Therefore, it is possible to focus the lens on some object, which prevents the autofocus operation from continuing for a long period of time.
• Moreover, when the lens movement direction is determined in order to focus the lens on an object, the changes of the plurality of estimation values are evaluated by majority decision together with the weight data, as sketched below. Therefore, it is possible to precisely determine the focus direction with a small number of sampling points and a fine movement within the focal depth of the lens.
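A hedged sketch of such a weighted majority decision (the sign convention and the tie rule are assumptions):

def focus_direction(deltas, weights):
    """deltas[i]: change of the i-th estimation value for a forward
    nudge within the focal depth (negative if it decreased);
    weights[i]: the weight data of that estimation value generating
    circuit. Returns +1 to move forward, -1 to move backward."""
    score = sum(w * (1 if d > 0 else -1)
                for d, w in zip(deltas, weights) if d != 0)
    return 1 if score >= 0 else -1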
• When it is determined whether or not a maximum point of the estimation values represents the true maximum estimation value, the lens is moved from the maximum point by a distance which is a predetermined multiple of the focal depth. As a result, even if the hill of the estimation values is flat, it is possible to determine whether or not the maximum point represents the maximum estimation value once the lens has been moved by this predetermined distance. Therefore, the focus point can be determined at high speed. For example, it is possible to avoid outputting an image which becomes considerably blurred and strange because the lens goes considerably out of focus while it is determined whether or not the maximum point represents the maximum estimation value.
• When the maximum estimation value obtained when the lens is located at the focus point is calculated, the estimation value for which the up/down state of the total estimation value and the up/down information stored in the RAM 66 agree with each other and which has the largest weight data is selected as the maximum estimation value. Therefore, a precise value of the maximum estimation value can be obtained.
• According to this embodiment, since the just focus position Xg is calculated by barycentric calculation, for example, based on a plurality of selected estimation values and the lens positions corresponding to them, it is possible to calculate the just focus position Xg even if the estimation values include noise, or constantly include noise when the luminance is low, and hence it is possible to carry out the focus control with high accuracy.
• Since the just focus position Xg is calculated by barycentric calculation, for example, it can be calculated once the focus lens has passed the just focus position at least once. Therefore, it is possible to determine the just focus position at correspondingly high speed. A minimal sketch of the calculation follows.
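In discrete form the barycentric calculation is a weighted mean of the sampled lens positions with the estimation values as weights:

def just_focus_position(positions, estimations):
    """Xg = sum(E(X) * X) / sum(E(X)) over the selected range of
    samples taken while the lens passed the peak."""
    num = sum(e * x for x, e in zip(positions, estimations))
    return num / sum(estimations)

# Samples symmetric about the peak put Xg on the peak even though no
# sample falls exactly on it:
xs = [10, 11, 12, 13, 14]
es = [2.0, 5.0, 8.0, 5.0, 2.0]
assert abs(just_focus_position(xs, es) - 12.0) < 1e-9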
• Since the area detecting circuit 38 selects the areas where pixel data indicative of the same color as that of the target object exist, and the condition determination is carried out only on the pixel data of the selected areas, it is possible to detect the position of the target object without imposing the corresponding processing load on the CPU 4.
• Since the object mode is set in response to the color of the set target object and both the condition determination of the area detecting circuit 38 and that of the CPU 4 are changed in response to the set object mode, it is possible to precisely recognize the object regardless of the color of the set object.
• Since all the condition determinations of the area detecting circuit 38 are carried out by the hardware circuit, it is possible to determine on a real-time basis whether or not each of the pixel data supplied from the encoder 37 satisfies the conditions.
• Since the object information table including positional information of each object and the target object log table including information about the movement log of the target object are generated, it is possible to precisely recognize the target object even if a plurality of objects have the same color as that of the target object.
• Since the position to which the target object has moved is calculated and the offset values obtained based on this position are supplied to the window pulse generating circuits, the center coordinates of the respective windows W1 to W11 are changed so as to correspond to the target object. Therefore, even if the target object moves, it is possible to precisely set the windows for the moved target object, which allows precise estimation values with respect to the moved target object to be obtained. As a result, it is possible to carry out the autofocus control with high accuracy.

Claims (38)

1. In a focus control apparatus having an imaging means for imaging an object through a focus lens to output an electric signal corresponding to said object,
said focus control apparatus being characterized by comprising:
an extracting means for extracting a high-frequency component of the electric signal output from said imaging means;
an estimation value generating means for generating an estimation value indicative of a focus state of said object based on said high-frequency component output from said extracting means;
a storage means for storing a plurality of estimation values changed as said focus lens is moved in response to a focus lens position in order to obtain a just focus position;
a selecting means for selecting a plurality of estimation values to be used for calculation of the just focus position from said estimation values stored in said storage means; and
a control means for calculating the just focus position based on the plurality of estimation values selected by said selecting means and lens positions corresponding to said plurality of selected estimation values.
2. In a focus control apparatus as claimed in claim 1,
said focus control apparatus, characterized in that said estimation value generating means generates said estimation value at a predetermined time interval.
3. In a focus control apparatus as claimed in claim 2,
said focus control apparatus, characterized in that when said focus lens is moved to said just focus position, said focus lens is passed by said just focus position once.
4. In a focus control apparatus as claimed in claim 2,
said focus control apparatus, characterized in that said selecting means selects estimation values at a start position and an end position from the estimation values stored in said storage means, and said just focus position is obtained by calculation of a range from said focus lens position corresponding to said estimation value at the start position to said focus lens position corresponding to said estimation value at the end position, based on a value indicative of said focus lens position and said estimation value corresponding to said focus lens position.
5. In a focus control apparatus as claimed in claim 4,
said focus control apparatus, characterized in that said calculation is a calculation of a barycenter of said range.
6. In a focus control apparatus as claimed in claim 4,
said focus control apparatus, characterized in that said calculation is to divide an area obtained by integration of said range into two substantially equal halves.
7. In a focus control apparatus as claimed in claim 4,
said focus control apparatus, characterized in that said calculation is to obtain a substantial middle point of said range.
8. In a focus control apparatus as claimed in claim 4,
said focus control apparatus, characterized in that when said selecting means reads out a plurality of estimation values stored in said storage means and said estimation values have continuously decreased a predetermined number of times, said selecting means selects the last estimation value of the estimation values thus continuously decreased as the estimation value of the end position.
9. In a focus control apparatus as claimed in claim 8,
said focus control apparatus, characterized in that said selecting means selects as said estimation value at the start position an estimation value substantially equal to said estimation value at the end position and located on the opposite side of a hill of said estimation values with respect to said estimation value at the end position.
10. In a focus control apparatus as claimed in claim 8,
said focus control apparatus, characterized in that said selecting means selects as said estimation value at the start position an estimation value which is located on the opposite side of a hill of said estimation values with respect to said estimation value at the end position and which is substantially equal to said estimation value at the end position or an estimation value closest to and smaller than said estimation value at the end position.
11. In a focus control method of moving a focus lens of a video camera to a just focus position,
said focus control method being characterized by comprising:
a) a step of extracting a high-frequency component of an electric signal output from an imaging means;
b) a step of generating an estimation value indicative of a focus state of an object based on said high-frequency component extracted in said step a);
c) a step of storing a plurality of said estimation values changed as said focus lens is moved in response to a focus lens position;
d) a step of selecting a plurality of estimation values to be used for calculation of the just focus position from said estimation values stored in said step c);
e) a step of calculating the just focus position based on the plurality of estimation values selected in said step d) and lens positions corresponding to said plurality of selected estimation values; and
f) a step of moving said focus lens to said just focus position.
12. In a focus control method as claimed in claim 11,
said focus control method, characterized in that in said step b), said estimation value is generated at a predetermined time interval.
13. In a focus control method as claimed in claim 12,
said focus control method, characterized in that in said step c), when a plurality of said estimation values are stored in response to said focus lens position, said focus lens is passed by said just focus position once.
14. In a focus control method as claimed in claim 12,
said focus control method, characterized in that in said step d), estimation values at a start position and an end position are selected from the estimation values stored in said step c), and in said step e), said just focus position is obtained by calculation of a range from said focus lens position corresponding to said estimation value at the start position selected in said step d) to said focus lens position corresponding to said estimation value at the end position selected in said step d) based on a value indicative of said focus lens position and said estimation value corresponding to said focus lens position.
15. In a focus control method as claimed in claim 14,
said focus control method, characterized in that in said step e), a barycenter of said range is calculated.
16. In a focus control method as claimed in claim 14,
said focus control method, characterized in that in said step e), an area obtained by integration of said range is divided into two substantially equal halves.
17. In a focus control method as claimed in claim 14,
said focus control method, characterized in that in said step e), a substantial middle point of said range is calculated.
18. In a focus control method as claimed in claim 14,
said focus control method, characterized in that in said step d), when a plurality of estimation values stored in said step c) are read out and said estimation values have continuously decreased a predetermined number of times, the last estimation value of the estimation values thus continuously decreased is selected as the estimation value of the end position.
19. In a focus control method as claimed in claim 18,
said focus control method, characterized in that in said step d), an estimation value substantially equal to said estimation value at the end position and located on the opposite side of a hill of said estimation values with respect to said estimation value at the end position is selected as said estimation value at the start position.
20. In a focus control apparatus for controlling a focus of a video camera,
said focus control apparatus comprising:
an estimation value generating means for generating an estimation value indicative of a focus state of an object by extracting a high-frequency component of an image pickup signal output from an imaging means as a focus lens is moved; and
a control means for detecting a focus lens position where the estimation value generated by said estimation value generating means becomes maximum and calculating a just focus position by interpolation based on a plurality of estimation values generated when a focus lens is located at a lens position in the vicinity of said detected focus lens position.
21. In a focus control apparatus as claimed in claim 20,
said focus control apparatus, characterized in that said control means comprises a storage means for storing the estimation values generated by said estimation value generating means so that said estimation values should correspond to lens positions of the focus lens which is continuously moved.
22. In a focus control apparatus as claimed in claim 21,
said focus control apparatus, characterized in that said control means calculates said just focus position by barycentric calculation based on said plurality of estimation values and said plurality of lens positions corresponding to said plurality of estimation values.
23. In a focus control apparatus as claimed in claim 21,
said focus control apparatus, characterized in that said just focus position is calculated in accordance with
X_g = \frac{\int E(X) \cdot X \, dX}{\int E(X) \, dX}
where Xg depicts said just focus position, X depicts a lens position in the vicinity of the focus lens position where an estimation value becomes maximum, and E(X) depicts an estimation value obtained when a focus lens is located at a lens position X.
24. In a focus control apparatus as claimed in claim 21,
said focus control apparatus, characterized in that said control means further comprises a selecting means for selecting a plurality of estimation values to be used for calculation of said just focus position by interpolation from a plurality of estimation values stored in said storage means.
25. In a focus control apparatus as claimed in claim 24,
said focus control apparatus, characterized in that when said focus lens is moved within a predetermined field from a first lens position, where the estimation value generated by said estimation value generating means becomes maximum, to a second lens position, if the estimation values generated by said estimation value generating means continuously decrease, then said control means determines the estimation value generated when said focus lens is located at the first lens position to be the maximum estimation value among the estimation values generated by said estimation value generating means.
26. In a focus control apparatus as claimed in claim 25,
said focus control apparatus, characterized in that said selecting means selects estimation values generated while said focus lens is moved to said second lens position from a third lens position located such that said first lens position lies between said second lens position and said third lens position.
27. In a focus control apparatus as claimed in claim 25,
said focus control apparatus, characterized in that said selecting means selects estimation values generated while said focus lens is moved to said second lens position from a third lens position, the third lens position being a lens position where an estimation value having a level substantially equal to a level of a second estimation value generated when said focus lens is located at said second lens position is generated, and being located such that said first lens position lies between said second lens position and said third lens position.
28. In a focus control apparatus as claimed in claim 27,
said focus control apparatus, characterized in that said just focus position is calculated in accordance with
X_g = \frac{\sum_{X = X_3}^{X_2} E(X) \cdot X \cdot \Delta X}{\sum_{X = X_3}^{X_2} E(X) \cdot \Delta X}
where Xg depicts said just focus position, X2 depicts the second lens position, X3 depicts the third lens position, and ΔX depicts a distance by which the lens is moved in one field.
29. In a focus control apparatus as claimed in claim 28,
said focus control apparatus, characterized in that said estimation value generating means is formed of a plurality of estimation value generating circuits respectively having different conditions for generating said estimation values, and said control means determines fluctuations of a plurality of estimation values obtained from said plurality of estimation value generating circuits to thereby obtain said first lens position.
30. In a focus control apparatus as claimed in claim 27,
said focus control apparatus, characterized in that said estimation value generating means is formed of a plurality of estimation value generating circuits respectively having different conditions for generating said estimation values, and said control means determines fluctuations of a plurality of estimation values by utilizing weight coefficients respectively corresponding to said plurality of estimation values obtained from said plurality of estimation value generating circuits.
31. In a focus control apparatus as claimed in claim 30,
said focus control apparatus, characterized in that a condition of generating estimation values set for said plurality of estimation value generating circuits is determined based on a condition for determining a filter characteristic for extracting a high-frequency component of said video signal and on a condition for determining a size of a detection window for said video signal.
32. In a focus control apparatus as claimed in claim 30,
said focus control apparatus, characterized in that said control means carries out a lens movement direction discriminating processing for discriminating a direction in which said estimation value is increased when said focus lens is moved forward or backward by a distance ranging within a focal depth of said focus lens, and then carries out a lens position detecting processing for detecting said first lens position while said focus lens is being moved in said discriminated direction.
33. In a focus control apparatus as claimed in claim 32,
said focus control apparatus, characterized in that said lens movement direction discriminating processing is carried out by determining a focus lens movement direction based on a plurality of estimation values obtained when the focus lens is located at an initial lens position where it is to be located initially, a plurality of estimation values obtained when the focus lens is located at a first movement position located away from said initial lens position by a predetermined distance in a direction toward an object, and a plurality of estimation values obtained when the focus lens is located at a second movement position located away from said initial lens position by a predetermined distance in a direction toward an imaging device.
34. In a focus control apparatus as claimed in claim 33,
said focus control apparatus, characterized in that said predetermined distance is equal to or shorter than the focal depth of said focus lens.
35. In a focus control apparatus as claimed in claim 34,
said focus control apparatus, characterized in that said control means further comprises an estimation value determining means for comparing a first estimation value obtained from the estimation value generating circuit having a first detection window and a second estimation value obtained from the estimation value generating circuit having a second detection window with a size larger than that of the first detection window to thereby determine whether or not said first estimation value is a proper estimation value indicative of a focus degree with respect to a desired object.
36. In a focus control apparatus as claimed in claim 20,
said focus control apparatus, characterized in that said control means discriminates a direction in which said estimation value is increased when said focus lens is moved forward and backward by a distance which does not exceed a focal depth, and then controls said estimation value generating means to continuously generate an estimation value at every field while said focus lens is being moved at a speed at which it moves by a distance longer than said focal depth within one field.
37. In a focus control apparatus as claimed in claim 30,
said focus control apparatus, characterized in that said control means discriminates, by totally determining a plurality of estimation values from said plurality of estimation value generating circuits, a direction in which said estimation value is increased when said focus lens is moved forward and backward by a distance which does not exceed a focal depth, and then controls said plurality of estimation value generating circuits to continuously generate an estimation value at every field while said focus lens is being moved in the discriminated direction at a speed at which it moves by a distance longer than said focal depth within one field, whereby said first lens position is detected by totally judging fluctuations of a plurality of estimation values from said plurality of estimation value generating circuits at every field.
38. In a focus control method of controlling a focus of a video camera,
said focus control method, comprising:
a) a step of generating an estimation value by extracting a high-frequency component of an image pickup signal output from an imaging means at every field as a focus lens is moved;
b) a step of detecting a lens position where the maximum estimation value among the estimation values generated in said step a) is generated;
c) a step of calculating a just focus position where the estimation value becomes maximum by interpolation based on a plurality of estimation values generated when the focus lens is located at a lens position in the vicinity of said detected lens position; and
d) a step of moving said focus lens to said just focus position.
US08/913,209 1996-01-11 1997-01-10 Focus control apparatus and method for use with a video camera or the like Expired - Fee Related US6362852B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JPP8-003311 1996-01-11
JP331196 1996-01-11
JP8-003311 1996-01-11
PCT/JP1997/000034 WO1997025812A1 (en) 1996-01-11 1997-01-10 Device and method for controlling focus

Publications (2)

Publication Number Publication Date
US20010050718A1 true US20010050718A1 (en) 2001-12-13
US6362852B2 US6362852B2 (en) 2002-03-26

Family

ID=11553822

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/913,209 Expired - Fee Related US6362852B2 (en) 1996-01-11 1997-01-10 Focus control apparatus and method for use with a video camera or the like

Country Status (3)

Country Link
US (1) US6362852B2 (en)
JP (1) JP3791012B2 (en)
WO (1) WO1997025812A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010010556A1 (en) * 2000-01-24 2001-08-02 Masahiko Sugimoto Imaging device, automatic focusing method and recording medium on which a program is recorded
US20030071911A1 (en) * 2001-10-15 2003-04-17 Junichi Shinohara Photographing object image adjusting apparatus and method and photographing apparatus
US20050018907A1 (en) * 2001-12-26 2005-01-27 Isao Kawanishi Image pickup apparatus and method
US20050275745A1 (en) * 2004-06-09 2005-12-15 Premier Image Technology Corporation Quick focusing method for a digital camera
US20080252744A1 (en) * 2007-04-12 2008-10-16 Hidekazu Suto Auto-focus apparatus, image-pickup apparatus, and auto-focus method
EP2083321A1 (en) * 2008-01-22 2009-07-29 Canon Kabushiki Kaisha Imaging apparatus and lens apparatus
US20130321693A1 (en) * 2012-06-05 2013-12-05 Kuo-Hung Lin Quick auto-focus method
US20150195458A1 (en) * 2012-07-12 2015-07-09 Sony Corporation Image shake correction device and image shake correction method and image pickup device
WO2023136736A1 (en) * 2022-01-12 2023-07-20 Mosst Merrick Anthony Preset-based automated image capture system

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11271600A (en) * 1998-03-25 1999-10-08 Minolta Co Ltd Distance detecting device
JP2001281529A (en) * 2000-03-29 2001-10-10 Minolta Co Ltd Digital camera
JP2002296493A (en) * 2001-03-30 2002-10-09 Fuji Photo Optical Co Ltd Focus state detector
JP2005352459A (en) * 2004-05-13 2005-12-22 Matsushita Electric Ind Co Ltd Focus adjusting device, focus adjusting method and digital camera
US7813579B2 (en) * 2004-05-24 2010-10-12 Hamamatsu Photonics K.K. Microscope system
US8035721B2 (en) * 2004-08-05 2011-10-11 Panasonic Corporation Imaging apparatus
JP4371074B2 (en) 2005-04-15 2009-11-25 ソニー株式会社 Control device and method, program, and camera
JP4419084B2 (en) 2005-04-15 2010-02-24 ソニー株式会社 Control device and method, program, and camera
JP4419085B2 (en) * 2005-04-15 2010-02-24 ソニー株式会社 Control apparatus and method, and program
JP2007102061A (en) * 2005-10-07 2007-04-19 Olympus Corp Imaging apparatus
US20080291314A1 (en) * 2007-05-25 2008-11-27 Motorola, Inc. Imaging device with auto-focus
JP4959535B2 (en) 2007-12-13 2012-06-27 株式会社日立製作所 Imaging device
US8274596B2 (en) * 2008-04-30 2012-09-25 Motorola Mobility Llc Method and apparatus for motion detection in auto-focus applications
KR101320350B1 (en) * 2009-12-14 2013-10-23 한국전자통신연구원 Secure management server and video data managing method of secure management server
US8941743B2 (en) 2012-09-24 2015-01-27 Google Technology Holdings LLC Preventing motion artifacts by intelligently disabling video stabilization
US9554042B2 (en) 2012-09-24 2017-01-24 Google Technology Holdings LLC Preventing motion artifacts by intelligently disabling video stabilization

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2554051B2 (en) 1986-03-04 1996-11-13 キヤノン株式会社 Autofocus device
KR940011885B1 (en) * 1987-02-18 1994-12-27 상요덴기 가부시기가이샤 Automatic focusing circuit for automatically matching focus in response to video signal
US4922346A (en) * 1987-06-30 1990-05-01 Sanyo Electric Co., Ltd. Automatic focusing apparatus having a variable focusing speed and particularly suited for use with interlaced scanning
US5619264A (en) * 1988-02-09 1997-04-08 Canon Kabushiki Kaisha Automatic focusing device
US4928170A (en) * 1988-06-21 1990-05-22 Visualtek, Inc. Automatic focus control for an image magnification system
US5083150A (en) * 1989-03-03 1992-01-21 Olympus Optical Co., Ltd. Automatic focusing apparatus
JP2786894B2 (en) 1989-08-09 1998-08-13 三洋電機株式会社 Auto focus camera
US5249058A (en) * 1989-08-08 1993-09-28 Sanyo Electric Co., Ltd. Apparatus for automatically focusing a camera lens
WO1991002428A1 (en) 1989-08-08 1991-02-21 Sanyo Electric Co., Ltd Automatically focusing camera
US5629735A (en) * 1989-08-20 1997-05-13 Canon Kabushiki Kaisha Image sensing device having a selectable detecting area
DE69030345T2 (en) * 1989-09-10 1997-08-21 Canon Kk Automatic focusing process
JP2974339B2 (en) * 1989-09-20 1999-11-10 キヤノン株式会社 Automatic focusing device
DE69127112T2 (en) * 1990-02-28 1998-03-05 Sanyo Electric Co Automatic focusing device for automatic focus adjustment depending on video signals
US5235375A (en) * 1990-04-12 1993-08-10 Olympus Optical Co., Ltd. Focusing position detecting and automatic focusing apparatus with optimal focusing position calculation method
JP3103587B2 (en) * 1990-04-25 2000-10-30 オリンパス光学工業株式会社 Automatic focusing device
EP0473462B1 (en) * 1990-08-31 1997-01-15 Victor Company Of Japan, Ltd. Imaging device with automatic focusing function
JPH04330411A (en) * 1991-05-02 1992-11-18 Olympus Optical Co Ltd Automatic focusing device
JP3162100B2 (en) 1991-05-08 2001-04-25 オリンパス光学工業株式会社 Focus detection device
US5475429A (en) * 1991-07-25 1995-12-12 Olympus Optical Co., Ltd. In-focus sensing device for sensing an in-focus condition using a ratio of frequency components at different positions
JP3209761B2 (en) * 1991-09-24 2001-09-17 キヤノン株式会社 Focus adjustment device
US5345264A (en) * 1992-02-27 1994-09-06 Sanyo Electric Co., Ltd. Video signal processing circuit for a video camera using a luminance signal
JP2996806B2 (en) * 1992-06-11 2000-01-11 キヤノン株式会社 Camera, automatic focus adjustment device and focus adjustment method
US5604537A (en) * 1992-09-10 1997-02-18 Canon Kabushiki Kaisha Imaging apparatus having an automatic focusing means
US5432552A (en) * 1992-11-04 1995-07-11 Sanyo Electric Co., Ltd. Automatic focusing apparatus including improved digital high-pass filter
US6236431B1 (en) * 1993-05-27 2001-05-22 Canon Kabushiki Kaisha Video camera apparatus with distance measurement area adjusted based on electronic magnification
US6222588B1 (en) * 1993-05-28 2001-04-24 Canon Kabushiki Kaisha Automatic focus adjusting device with driving direction control
US5757429A (en) * 1993-06-17 1998-05-26 Sanyo Electric Co., Ltd. Automatic focusing apparatus which adjusts the speed of focusing based on a change in the rate of the focus evaluating value
JPH09243906A (en) * 1996-03-05 1997-09-19 Eastman Kodak Japan Kk Automatic focusing device and method

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7821568B2 (en) 2000-01-24 2010-10-26 Fujifilm Corporation Imaging device, automatic focusing method and recording medium on which a program is recorded
US20070070238A1 (en) * 2000-01-24 2007-03-29 Masahiko Sugimoto Imaging device, automatic focusing method and recording medium on which a program is recorded
US7242434B2 (en) * 2000-01-24 2007-07-10 Fujifilm Corporation Imaging device, automatic focusing method and recording medium on which a program is recorded
US20010010556A1 (en) * 2000-01-24 2001-08-02 Masahiko Sugimoto Imaging device, automatic focusing method and recording medium on which a program is recorded
US20030071911A1 (en) * 2001-10-15 2003-04-17 Junichi Shinohara Photographing object image adjusting apparatus and method and photographing apparatus
US7372486B2 (en) * 2001-10-15 2008-05-13 Ricoh Company, Ltd. Photographing apparatus and method of adjusting an image of a photographing object
US20050018907A1 (en) * 2001-12-26 2005-01-27 Isao Kawanishi Image pickup apparatus and method
US7433531B2 (en) * 2001-12-26 2008-10-07 Sony Corporation Image pickup apparatus and method
US20050275745A1 (en) * 2004-06-09 2005-12-15 Premier Image Technology Corporation Quick focusing method for a digital camera
US20080252744A1 (en) * 2007-04-12 2008-10-16 Hidekazu Suto Auto-focus apparatus, image-pickup apparatus, and auto-focus method
EP2065741A3 (en) * 2007-04-12 2009-06-10 Sony Corporation Auto-focus apparatus, image- pickup apparatus, and auto- focus method
US8373790B2 (en) 2007-04-12 2013-02-12 Sony Corporation Auto-focus apparatus, image-pickup apparatus, and auto-focus method
US9066004B2 (en) 2007-04-12 2015-06-23 Sony Corporation Auto-focus apparatus, image pick-up apparatus, and auto-focus method for focusing using evaluation values
EP2083321A1 (en) * 2008-01-22 2009-07-29 Canon Kabushiki Kaisha Imaging apparatus and lens apparatus
US20130321693A1 (en) * 2012-06-05 2013-12-05 Kuo-Hung Lin Quick auto-focus method
US8730379B2 (en) * 2012-06-05 2014-05-20 Hon Hai Precison Industry Co., Ltd. Quick auto-focus method
US20150195458A1 (en) * 2012-07-12 2015-07-09 Sony Corporation Image shake correction device and image shake correction method and image pickup device
US9225903B2 (en) * 2012-07-12 2015-12-29 Sony Corporation Image blur correction apparatus, method of correcting image blur, and imaging apparatus
WO2023136736A1 (en) * 2022-01-12 2023-07-20 Mosst Merrick Anthony Preset-based automated image capture system

Also Published As

Publication number Publication date
WO1997025812A1 (en) 1997-07-17
US6362852B2 (en) 2002-03-26
JP3791012B2 (en) 2006-06-28

Similar Documents

Publication Publication Date Title
US6362852B2 (en) Focus control apparatus and method for use with a video camera or the like
US5877809A (en) Method of automatic object detection in image
US7382411B2 (en) Method for focus adjusting and camera
US7593054B2 (en) Focusing apparatus
US6236431B1 (en) Video camera apparatus with distance measurement area adjusted based on electronic magnification
KR0147572B1 (en) Method &amp; apparatus for object tracing
US7098954B2 (en) Interchangeable lens video camera system
US20020114015A1 (en) Apparatus and method for controlling optical system
US8213784B2 (en) Focus adjusting apparatus and focus adjusting method
US8502912B2 (en) Focusing apparatus and method for controlling the same
US7365790B2 (en) Autofocus system for an image capturing apparatus
US9800776B2 (en) Imaging device, imaging device body, and lens barrel
US7262804B2 (en) Autofocus camera adjusting focus lens position based on illumination characteristics
JP7271188B2 (en) Control device, imaging device, control method, and program
US7864239B2 (en) Lens barrel and imaging apparatus
US20030169363A1 (en) Image pickup apparatus and method, and image-pickup control computer program
US6275262B1 (en) Focus control method and video camera apparatus
US6222587B1 (en) Focus control method and video camera
EP0572163B1 (en) Auto focus apparatus
JP2008185823A (en) Focus detector and camera
US20190094656A1 (en) Imaging apparatus and control method of the same
EP0774863B1 (en) Focus controlling method and video camera device
US7046289B2 (en) Automatic focusing device, camera, and automatic focusing method
JPH05219418A (en) Focusing detector
KR100213888B1 (en) Auto-focusing control method for video camera

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ITO, YUJIRO;REEL/FRAME:008942/0355

Effective date: 19971107

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20140326