USRE41196E1 - Method and apparatus for detecting motion between odd and even video fields - Google Patents


Info

Publication number
USRE41196E1
USRE41196E1 (application US11/250,140)
Authority
US
United States
Prior art keywords
pixels
motion
field
fields
parity
Prior art date
Legal status
Expired - Lifetime, expires
Application number
US11/250,140
Inventor
Steve Selby
Current Assignee
Genesis Microchip Inc
Original Assignee
Genesis Microchip Inc
Priority date
Filing date
Publication date
Application filed by Genesis Microchip Inc filed Critical Genesis Microchip Inc
Priority to US11/250,140
Application granted
Publication of USRE41196E1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
    • H04N7/012 Conversion between an interlaced and a progressive signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/16 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter for a given display mode, e.g. for interlaced or progressive display mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H04N5/145 Movement estimation

Definitions

  • This invention relates in general to digital video signal processing and more particularly to a method and apparatus whereby motion between odd and even video fields may be reliably measured despite the presence of high vertical spatial frequencies.
  • The NTSC and PAL video standards are in widespread use throughout the world today. Both of these standards make use of interlacing in order to maximize the vertical refresh rate, thereby reducing wide-area flicker, while minimizing the bandwidth required for transmission.
  • With an interlaced video format, half of the lines that make up a picture are displayed during one vertical period (i.e. the even field), while the other half are displayed during the next vertical period (i.e. the odd field) and are positioned halfway between the lines displayed during the first period. While this technique has the benefits described above, the use of interlacing can also lead to the appearance of artifacts such as line flicker and visible line structure.
  • A computer graphics sequence created at 30 frames per second is converted to interlaced video at 60 fields per second using a pull down ratio of 2:2, where 2 fields are derived from each CG frame.
  • Video transmission formats do not include explicit information about the type of source material being carried, such as whether the material was derived from a progressive source.
  • In order for a video-processing device to exploit the progressive nature of such material, it is first necessary to determine whether the material originates from a progressive source. If it is determined that the material originates from such a source, it is furthermore necessary to determine precisely which video fields originate from which source frames. Such determination can be made by measuring the motion between successive fields of an input video sequence.
  • A second mode of motion that can be measured is the motion between successive fields which are of opposite parity (one odd and one even).
  • Although this mode of measurement overcomes the limitations of the above, it is inherently a more difficult measurement to make, since a spatial offset exists between fields that are of opposite parity. Thus, even if there is no actual motion, a finite difference between the fields may exist owing to the spatial offset. This tends to increase the measured difference when there is no motion, making it more difficult to reliably discriminate between when there is motion and when there is not. This is particularly true in the presence of noise and/or limited motion.
  • A number of methods have been proposed in the prior art for the measurement of motion between fields of opposite parity. It is an objective of the present invention to provide a method for the measurement of motion between fields of opposite parity with greater ability to discriminate between the presence of motion and the lack thereof than those of the prior art.
  • The level of motion between the two fields at a specific position is determined by comparing the values of four vertically adjacent pixels, each having the same horizontal position, where the first and third pixels are taken from vertically adjacent lines in one field, and the second and fourth pixels are taken from vertically adjacent lines in the other field, such that the vertical position of the second pixel is halfway between the first and third pixels and the vertical position of the third pixel is halfway between the second and fourth pixels.
  • If the value of either the second or the third pixel lies between the values of its immediate neighbours, the local motion is taken as zero. Otherwise, the local motion is taken as the minimum of the absolute differences between the first and second pixels, the second and third pixels, and the third and fourth pixels.
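The rule just described can be sketched as a short Python function (an illustrative sketch only, not the patented circuit; pixel values are plain numbers and the between-neighbours test is taken inclusively):

```python
def local_motion(p1, p2, p3, p4):
    """Local motion between two fields of opposite parity.

    p1 and p3 come from vertically adjacent lines of one field;
    p2 and p4 come from the other field, interleaved so that p2 lies
    halfway between p1 and p3, and p3 halfway between p2 and p4.
    """
    # If either middle pixel lies between its immediate neighbours,
    # the differences are attributed to vertical detail, not motion.
    if min(p1, p3) <= p2 <= max(p1, p3) or min(p2, p4) <= p3 <= max(p2, p4):
        return 0
    # Otherwise the local motion is the smallest adjacent difference.
    return min(abs(p1 - p2), abs(p2 - p3), abs(p3 - p4))
```

For example, a static sinusoid at half the vertical frame Nyquist frequency sampled at its zero crossings, 0, A, 0, -A, yields zero, while two flat fields offset by B yield B.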
  • This technique has the benefit that false detection of motion arising from the presence of high vertical spatial frequencies is minimized, while actual motion is still readily detected.
  • Using four pixels, false detection is completely avoided for vertical spatial frequencies less than one half of the vertical frame Nyquist frequency. Utilizing more than four pixels extends the range of vertical spatial frequencies for which false detection is completely avoided toward the vertical frame Nyquist frequency.
  • If the method of the present invention is scaled to utilize n pixels, where n is greater than or equal to four, then false detection of motion is avoided for frequencies up to and including (n−3)/(n−2) of the vertical frame Nyquist frequency.
  • The resulting local measurement of motion can either be used directly or summed over an entire field in order to provide a global motion signal that is useful for determining whether an input sequence derives from a film source.
  • Preferably, the contributing pixels are chosen such that their spatial positions remain constant regardless of whether the most recent of the two fields is even or odd. In this way, any motion that is falsely detected in a static image remains constant from one field to the next, thereby improving the ability to distinguish between falsely detected motion and actual motion that arises as a result of a sequence that was generated in accordance with a 2:2 pull down ratio.
  • FIG. 1 is a schematic representation showing how motion may be measured between successive fields of opposite parity, according to the prior art.
  • FIG. 2 is a schematic representation showing how motion may be measured between successive fields of opposite parity using a second method, according to the prior art.
  • FIG. 3 is a schematic representation showing how motion may be measured between successive fields of opposite parity using a third method, according to the prior art.
  • FIG. 4 is a schematic representation showing how motion may be measured between successive fields of opposite parity using a fourth method, according to the prior art.
  • FIG. 5 is a schematic representation of a first example showing how motion may be measured between successive fields of opposite parity, according to the method of the present invention.
  • FIG. 6 is a schematic representation of a second example showing how motion may be measured between successive fields of opposite parity, according to the method of the present invention.
  • FIG. 7 is a schematic representation showing how motion may be measured between successive fields of opposite parity, according to an alternative embodiment of the method of the present invention.
  • FIG. 8 is a block diagram of an apparatus for implementing the method of the present invention.
  • FIG. 9 is a schematic representation of a third example to show how motion may be measured between successive fields of opposite parity where the most recent of the two fields is even, according to the method of the present invention.
  • FIG. 10 is a schematic representation of a fourth example to show how motion may be measured between successive fields of opposite parity where the most recent of the two fields is odd, according to the method of the present invention.
  • FIG. 11 is a schematic representation showing how the contributing pixels are selected to have a particular spatio-temporal relationship to one another depending on whether the most recent field is even or odd, according to a further aspect of the present invention.
  • FIG. 12 is a schematic representation of a further example to show how motion may be measured between successive fields of opposite parity where the most recent of the two fields is odd, according to the method of the present invention.
  • FIG. 13 is a block diagram of an apparatus for implementing the method as set forth in FIGS. 9, 11 and 12, according to a preferred embodiment of the present invention.
  • Referring to FIG. 1, a first example is shown of how motion may be measured between successive fields of opposite parity, according to one technique known in the prior art.
  • The left half of FIG. 1 shows the spatio-temporal relationship between a set of vertically and temporally adjacent pixels at a given horizontal position. It is clearly shown in FIG. 1 that the vertical position of each pixel in the even field is halfway between the two nearest pixels in the odd field.
  • The right half of FIG. 1 shows the value of each pixel relative to its vertical position.
  • A curved line is shown connecting the pixels and is intended to represent an image detail, the intensity of which varies vertically within the image in a sinusoidal fashion, with alternating bright and dark image portions.
  • The curved line is drawn continuously through the pixels of both the odd and the even fields to represent the fact that both fields are part of an image in which there is no motion.
  • Two pixels, P 1 and P 2 , are highlighted showing their spatio-temporal relationship to one another and their values within the image.
  • The image detail has a vertical spatial frequency that is exactly equal to one half of the vertical frame Nyquist frequency and a peak amplitude equal to quantity A.
  • The formula at the bottom of FIG. 1 shows how a local measurement of motion is made using a first prior art technique. The motion is simply taken as the absolute difference between the two pixels P 1 and P 2 , as depicted in FIG. 1 .
  • Referring to FIG. 2, a somewhat enhanced measurement technique is shown, as fully disclosed in U.S. Pat. No. 5,291,280 (Faroudja).
  • The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields, while the right half shows the value of each pixel relative to its vertical position for a particular image detail.
  • The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency.
  • The formula for calculating the motion according to this second method is shown at the bottom of FIG. 2 .
  • The measured motion is taken as the lesser of the absolute differences between pixels P 1 and P 2 , and between pixels P 2 and P 3 , as depicted in FIG. 2 .
  • This technique fails to reject as motion the difference between the pixels that arises owing to their different vertical positions.
  • Referring to FIG. 3, a further enhanced measurement technique is shown, as disclosed in U.S. Pat. No. 6,014,182 (Swartz).
  • The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields, while the right half shows the value of each pixel relative to its vertical position for a particular image detail.
  • The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency.
  • The formula for calculating the motion according to this third method is shown at the bottom of FIG. 3 .
  • The measured motion is taken as the lesser of the absolute differences between pixels P 1 and P 2 , and between pixels P 2 and P 3 , unless the absolute difference between pixels P 1 and P 3 is greater than the lesser of the absolute differences between pixels P 1 and P 2 , and between pixels P 2 and P 3 , in which case the motion value is taken as zero.
  • Although the pixel values used in this example are intended to represent samples of an image in which there is no motion, application of this technique results in a measured motion value equal to quantity A.
  • Thus this technique fails to reject as motion the difference between the pixels that arises owing to their different vertical positions.
  • Referring to FIG. 4, another enhanced measurement technique is shown, as disclosed in U.S. Pat. No. 5,689,301 (Christopher).
  • The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields, while the right half shows the value of each pixel relative to its vertical position for a particular image detail.
  • The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency.
  • The formula for calculating the motion according to this fourth method is shown at the bottom of FIG. 4 .
  • The measured motion is taken as the lesser of the absolute differences between pixels P 1 and P 2 , and between pixels P 2 and P 3 , unless the value of pixel P 2 is between the values of pixels P 1 and P 3 , in which case the motion value is taken as zero.
  • Although the pixel values used in this example are intended to represent samples of an image in which there is no motion, application of this technique results in a measured motion value equal to quantity A.
  • Thus this technique fails to reject as motion the difference between the pixels that arises owing to their different vertical positions.
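For comparison, the four prior-art measures of FIGS. 1 to 4 can be sketched as follows (a hedged reconstruction from the formulas described above, not the patented circuits). On a static image detail at one half of the vertical frame Nyquist frequency, sampled here as P 1 = 0, P 2 = A, P 3 = 0, every one of them reports a motion value of A even though nothing is moving:

```python
def method1(p1, p2):
    # FIG. 1: absolute difference between the two pixels
    return abs(p1 - p2)

def method2(p1, p2, p3):
    # FIG. 2 (Faroudja): lesser of the two adjacent differences
    return min(abs(p1 - p2), abs(p2 - p3))

def method3(p1, p2, p3):
    # FIG. 3 (Swartz): zero when the outer difference exceeds the lesser one
    m = min(abs(p1 - p2), abs(p2 - p3))
    return 0 if abs(p1 - p3) > m else m

def method4(p1, p2, p3):
    # FIG. 4 (Christopher): zero when P2 lies between P1 and P3
    if min(p1, p3) <= p2 <= max(p1, p3):
        return 0
    return min(abs(p1 - p2), abs(p2 - p3))

A = 8
p1, p2, p3 = 0, A, 0  # static half-Nyquist detail, sampled at its zero crossings
results = [method1(p1, p2), method2(p1, p2, p3),
           method3(p1, p2, p3), method4(p1, p2, p3)]  # every entry equals A
```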
  • Referring to FIG. 5, an enhanced measurement technique is shown according to the present invention.
  • The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields, while the right half shows the value of each pixel relative to its vertical position for a particular image detail.
  • The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency.
  • The formula for calculating the motion according to one aspect of the present invention is shown at the bottom of FIG. 5 .
  • The measured motion is taken as the lesser of the absolute differences between pixels P 1 and P 2 , pixels P 2 and P 3 , and pixels P 3 and P 4 , unless the value of either pixel P 2 or pixel P 3 is between the values of its immediate neighbours, in which case the motion value is taken as zero.
  • The motion value generated in the example is zero, since the value of pixel P 3 is between that of P 2 and P 4 .
  • This is the desired result, since the pixel values in the example are intended to represent samples of an image in which there is no motion.
  • In this way, false detection of motion is completely avoided for vertical spatial frequencies less than one half of the vertical frame Nyquist frequency.
  • Referring to FIG. 6, another example is provided in which the present invention is applied to an image in which motion exists.
  • The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields, while the right half shows the value of each pixel relative to its vertical position within the image.
  • A continuous line has been drawn through pixels P 1 and P 3 from the odd field, and a separate line has been drawn through pixels P 2 and P 4 from the even field, to represent the fact that there is no direct correlation between the samples from the odd field and those from the even field.
  • Pixel values P 2 and P 4 differ from pixel values P 1 and P 3 by quantity B.
  • The motion value is given as quantity B, which is the desired result since it correctly indicates the presence of motion between the fields.
  • A four-pixel aperture in the present invention may result in a lower measured motion value near the edges of moving objects than would otherwise be obtained using a two- or three-pixel aperture as in the prior art methods. When summed over an entire field, this may tend to produce a slightly lower total than would otherwise be obtained.
  • However, the present technique produces significantly lower false motion values for fields between which there is no motion. For typical video sources, the present technique results in a significantly higher ratio between the values measured where motion exists and the values measured where there is none. Hence, the ability to discriminate between motion and the lack thereof is enhanced.
  • Utilizing more than four pixels extends the range of vertical spatial frequencies for which false detection is avoided.
  • Referring to FIG. 7, an example is provided which is similar to that of FIG. 5 except that the method has been generalized to make use of n pixels.
  • The formula for calculating the motion is shown at the bottom of the figure.
  • The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency.
  • Application of the formula in this case yields a motion value of zero, which is the desired result since there is no motion between the fields. It will be understood from FIG. 7 that for higher frequencies as well, in particular those frequencies up to and including (n−3)/(n−2) of the vertical frame Nyquist frequency, false detection of motion is completely avoided.
  • FIG. 8 shows an apparatus implementing the method of the present invention as shown in FIG. 5 where a motion value is calculated based on four pixels.
  • An input video signal is applied to the input of a memory controller 10 , a line delay element 12 and a first input of a differencing circuit 14 .
  • The pixel that is present at the video input at any given time corresponds to that designated as pixel P 4 in FIG. 5 .
  • The memory controller stores incoming video data into a DRAM array 11 and later retrieves it so as to produce a version of the input video signal that is delayed (e.g. by 263 lines in the case of an NTSC input).
  • The memory controller 10 may also concurrently retrieve other versions of the input video signal that are delayed by different amounts to be used for other purposes that are not relevant to the present invention.
  • The pixel that is output from the memory controller 10 at any given time corresponds to that designated as pixel P 3 in FIG. 5 , which is subsequently applied to the input of a second line delay element 13 , a first input of a differencing circuit 15 and the second input of differencing circuit 14 referred to hereinabove.
  • Line delay element 12 provides a version of the input video signal that is delayed by one vertical line, and corresponds to pixel P 2 in FIG. 5 .
  • Pixel P 2 is applied to a first input of a differencing circuit 16 and the second input of differencing circuit 15 described earlier.
  • Line delay element 13 provides a version of the delayed video signal from the memory controller that is further delayed by one vertical line and corresponds to pixel P 1 in FIG. 5 .
  • Pixel P 1 is applied to the second input of differencing circuit 16 .
  • Each of the differencing circuits 14-16 generates both the sign and the magnitude of the difference between its input signals.
  • The three signals representing the signs of the differences are applied to the inputs of override logic block 17 .
  • The three signals representing the magnitudes of the differences are applied to the inputs of the keep-smallest-value block 18 , which propagates only the smallest of the three values at its input.
  • A multiplexor 19 selects either the output of the keep-smallest-value block or zero, depending on the output of override logic block 17 .
  • The value at the output of multiplexor 19 is forced to zero if the signs at the outputs of differencing circuits 14 and 15 are the same, or if the signs at the outputs of differencing circuits 15 and 16 are the same.
  • The value at the output of multiplexor 19 provides a measure of the motion in the vicinity of pixels P 1 -P 4 according to one aspect of the present invention.
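The datapath of FIG. 8 can be modelled in software roughly as follows (a behavioural sketch of the blocks described above, not a gate-level description; variable names follow the reference numerals):

```python
def sign(x):
    return (x > 0) - (x < 0)

def motion_fig8(p1, p2, p3, p4):
    """Behavioural model of the FIG. 8 datapath."""
    d14 = p4 - p3  # differencing circuit 14: video input minus delayed field
    d15 = p3 - p2  # differencing circuit 15
    d16 = p2 - p1  # differencing circuit 16
    smallest = min(abs(d14), abs(d15), abs(d16))  # keep-smallest-value block 18
    # Override logic 17: equal adjacent signs indicate that a middle pixel
    # lies between its neighbours, i.e. vertical detail rather than motion.
    if sign(d14) == sign(d15) or sign(d15) == sign(d16):
        return 0         # multiplexor 19 selects zero
    return smallest      # multiplexor 19 selects the smallest magnitude
```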
  • The local motion value may be integrated over a complete field in order to provide an overall measure of the motion between two fields for the purpose of determining whether the input sequence derives from a film source. Alternatively, the local motion value may be used to advantage without subsequent integration for the conversion from interlaced to progressive format of material that has not been derived from film.
  • In the embodiments described above, the spatio-temporal relationship of the contributing pixels relative to one another is fixed irrespective of whether the most recent field is even or odd.
  • According to a further aspect of the invention, the spatio-temporal relationship is chosen depending on whether the most recent field is even or odd, so as to generate a measure of the motion that does not change unduly from one field to the next. Referring now to FIG. 9 , an example of the present invention is provided which is similar to the examples above, except that the image detail includes a vertical frequency component that is greater than half the vertical frame Nyquist frequency. Note that in this example, the most recent of the two fields is even. Application of the method in this case results in a measured motion value of zero, since the value of P 3 clearly lies between that of P 2 and P 4 . It should be noted that the inventive method produces a value of zero even though in this case the image detail contains a frequency component outside of the range where false detection is guaranteed to be avoided. This is coincidental and may occur depending on the phase of the image signal with respect to the sample points.
  • Referring to FIG. 10, an example is set forth in which the method is applied to the same image detail set forth in FIG. 9 but where the most recent field is odd.
  • Here, the spatio-temporal relationship between pixels P 1 to P 4 has been maintained, as in the prior art methods described earlier. Due to the half-line offset between the odd and even fields, the four contributing pixels have moved along the contour of the static image detail, relative to FIG. 9 .
  • Application of the method in this case results in a measured motion value equal to quantity C, since the value of P 2 does not lie between that of P 1 and P 3 , nor does the value of P 3 lie between that of P 2 and P 4 .
  • Thus, the measured motion value may alternate from one field to the next depending on whether the most recent field is even or odd, despite the fact that there may be no actual motion at all within the image.
  • The inventor has realized that this is a detrimental result, since alternating high and low motion values is exactly the pattern that would be produced by an actual motion sequence produced in accordance with a 2:2 pull down ratio, thereby hampering the ability to distinguish motion from static images in accordance with the present invention. Consequently, the inventor has concluded that the spatio-temporal relationship between the contributing pixels should preferably be chosen depending on whether the most recent field is even or odd, as shown in FIG. 11 . Essentially, the pixels are chosen such that for a static image, the same image samples are always used. Thus, if P 1 represents a sample from an odd field, then P 1 is always taken from an odd field, regardless of whether the most recent field is odd or even.
  • Referring to FIG. 12, an example is provided of the preferred method for choosing the spatio-temporal relationship between the contributing pixels, as applied to the example of FIGS. 9 and 10 for the case where the most recent field is odd.
  • Application of the formula according to the method of the present invention yields a measured motion value of zero, which is the same result as in FIG. 9 where the most recent field is even. Thus, undue modulation of the motion value from field to field is effectively avoided.
  • In the examples above, pixel P 1 has consistently been taken from the odd field. It will be apparent to one of ordinary skill in the art that pixel P 1 could have consistently been taken from the even field instead, with results equal in overall performance.
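The parity-dependent selection can be sketched as follows (an illustrative sketch under an assumed memory layout: each field is a list of samples at one horizontal position, and odd-field lines sit half a line above the even-field lines they interleave with; the function and argument names are hypothetical):

```python
def select_pixels(odd_field, even_field, line):
    """Choose the four contributing pixels so that, for a static image,
    the same image samples are used no matter which field arrived last.

    P1 and P3 are always drawn from the odd field and P2 and P4 from
    the even field, mirroring the rule that P1 is consistently taken
    from the odd field.
    """
    p1 = odd_field[line]
    p2 = even_field[line]
    p3 = odd_field[line + 1]
    p4 = even_field[line + 1]
    return p1, p2, p3, p4
```

Because the selection ignores which field is the most recent, a static image produces the same four samples, and hence the same (possibly falsely detected) motion value, on every field, so a genuine 2:2 cadence still stands out.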
  • FIG. 13 shows an apparatus for implementing the method of the present invention as shown in FIGS. 9 , 11 and 12 .
  • the same numbers have been used to designate those items that are in common with the apparatus shown in FIG. 8 .
  • The refinement of appropriately selecting the pixels so as to avoid modulation of the motion signal from one field to the next is achieved by the addition of four multiplexors 20-23 and through manipulation of the delay provided by the memory controller 10 . It will be apparent from inspection of FIGS. 10 and 12 that the less desirable spatio-temporal relationship between the contributing pixels for the case in which the most recent field is odd, as shown in FIG. 10 , can be transformed to the more desirable case as shown in FIG. 12 .
  • Multiplexors 20 and 21 are used to interchange pixels P 3 and P 4 , and multiplexors 22 and 23 are used to interchange pixels P 1 and P 2 , for the case when the field that is currently being inputted is odd.

Abstract

A method for measuring motion at a horizontal and vertical position between video fields of opposite parity comprising the steps of measuring the signal values of at least two vertically adjacent pixels from a video field of one parity and at least two vertically adjacent pixels from a video field of the opposite parity such that when taken together, the pixels represent contiguous samples of an image at said horizontal and vertical position, and determining whether the signal value of any of the pixels lies between the signal values of adjacent pixels in the field of opposite parity and in response outputting a zero motion value, otherwise, outputting a motion value equal to the lowest absolute difference between any of the pixels and its closest adjacent pixel in the field of opposite parity.

Description

FIELD OF THE INVENTION
This invention relates in general to digital video signal processing and more particularly to a method and apparatus whereby motion between odd and even video fields may be reliably measured despite the presence of high vertical spatial frequencies.
BACKGROUND OF THE INVENTION
The NTSC and PAL video standards are in widespread use throughout the world today. Both of these standards make use of interlacing in order to maximize the vertical refresh rate thereby reducing wide area flicker, while minimizing the bandwidth required for transmission. With an interlaced video format, half of the lines that make up a picture are displayed during one vertical period (i.e. the even field), while the other half are displayed during the next vertical period (i.e. the odd field) and are positioned halfway between the lines displayed during the first period. While this technique has the benefits described above, the use of interlacing can also lead to the appearance of artifacts such as line flicker and visible line structure.
It is well known in the prior art that the appearance of an interlaced image can be improved by converting it to non-interlaced (progressive) format and displaying it as such. Moreover, many newer display technologies, for example Liquid Crystal Displays (LCDs), are non-interlaced by nature, therefore conversion from interlaced to progressive format is necessary before an image can be displayed at all.
Numerous methods have been proposed for converting an interlaced video signal to progressive format. For example, linear methods have been used, where pixels in the progressive output image are generated as a linear combination of spatially and/or temporally neighbouring pixels from the interlaced input sequence.
Although this approach may produce acceptable results under certain conditions, the performance generally represents a trade off between vertical spatial resolution and motion artifacts. Instead of accepting a compromise, it is possible to optimize performance by employing a method that is capable of adapting to the type of source material. For instance, it is well known that conversion from interlaced to progressive format can be accomplished with high quality for sources that originate from motion picture film or from computer graphics (CG). Such sources are inherently progressive in nature, but are transmitted in interlaced format in accordance with existing video standards. For example, motion picture film created at 24 frames per second is converted to interlaced video at 60 fields per second using a process known as 3:2 pull down, where 3 fields are derived from one frame and 2 are derived from the next, so as to provide the correct conversion ratio. Similarly, a computer graphics sequence created at 30 frames per second is converted to interlaced video at 60 fields per second using a pull down ratio of 2:2, where 2 fields are derived from each CG frame. By recognizing that a video sequence originates from a progressive source, it is possible for a format converter to reconstruct the sequence in progressive format exactly as it was before its conversion to interlaced format.
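The two cadences can be illustrated with a small sketch that maps source frames to output fields (illustrative only; it returns the source-frame index each field is derived from):

```python
def pulldown_fields(num_frames, pattern=(3, 2)):
    """Map progressive source frames to interlaced output fields.

    pattern=(3, 2) models 3:2 pull down (24 fps film -> 60 fields/s);
    pattern=(2, 2) models 2:2 pull down (30 fps CG -> 60 fields/s).
    """
    fields = []
    for i in range(num_frames):
        # Each frame contributes 3 or 2 fields in alternation (3:2),
        # or exactly 2 fields each (2:2).
        fields.extend([i] * pattern[i % len(pattern)])
    return fields
```

For example, `pulldown_fields(4)` returns `[0, 0, 0, 1, 1, 2, 2, 2, 3, 3]`: four film frames become ten fields, matching the 24-to-60 ratio, while `pulldown_fields(2, (2, 2))` returns `[0, 0, 1, 1]`.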
Unfortunately, video transmission formats do not include explicit information about the type of source material being carried, such as whether the material was derived from a progressive source. Thus, in order for a video-processing device to exploit the progressive nature of film or CG sources, it is first necessary to determine whether the material originates from a progressive source. If it is determined that the material originates from such a source, it is furthermore necessary to determine precisely which video fields originate from which source frames. Such determination can be made by measuring the motion between successive fields of an input video sequence.
It is common to measure at least two different modes of motion in determining the presence of a film source. Firstly, it is common to measure the motion between a given video field and that which preceded it by two fields. In this case, motion can be measured as the absolute difference between two pixels at the same spatial position in the two fields. A measure of the total difference between the two fields can be generated by summing the absolute differences at the pixel level over the entire field. The quality of the motion signal developed in this way will be fairly high, since the two fields being compared have the same parity (both odd or both even) and therefore corresponding samples from each field have the same position within the image. Thus any difference that is measured between two pixels will largely be the result of motion. Although the quality of measurement made in this way is high, unfortunately it is of limited value. For an input sequence derived from film in accordance with a 3:2 pull down ratio, only one out of five successive measurements made in this way will differ significantly from the rest. The measure of motion between the first and third fields of the three fields that are derived from the same motion picture frame will be substantially lower than the measurements obtained during the other four fields, since the two fields being compared are essentially the same and differ only in their noise content. This does not provide sufficient information to avoid artifacts under certain conditions when a film sequence is interrupted. Also, in the case of an input sequence derived from film or CG in accordance with a 2:2 pull down ratio, no useful information is provided whatsoever.
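A minimal sketch of the same-parity measurement just described (illustrative only; the field contents are toy values, not real video data):

```python
def same_parity_motion(curr, two_back):
    """Global motion between a field and the like-parity field two
    fields earlier: the sum of absolute pixel differences at identical
    spatial positions.  Fields are given as lists of rows of values.
    """
    return sum(
        abs(a - b)
        for row_a, row_b in zip(curr, two_back)
        for a, b in zip(row_a, row_b)
    )

field_1 = [[10, 20, 30], [40, 50, 60]]
field_3_static = [[10, 20, 30], [40, 50, 60]]   # 1st and 3rd fields from the
                                                # same film frame (3:2 pull down)
field_3_moving = [[20, 30, 10], [50, 60, 40]]   # content shifted: real motion

assert same_parity_motion(field_3_static, field_1) == 0
assert same_parity_motion(field_3_moving, field_1) > 0
```

Because both fields sample the same spatial grid, a nonzero sum here is attributable to motion (or noise) rather than to any spatial offset between the fields.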
A second mode of motion that can be measured is the motion between successive fields which are of opposite parity (one odd and one even). Although this mode of measurement overcomes the limitations of the above, it is inherently more difficult to make since a spatial offset exists between fields of opposite parity. Thus, even if there is no actual motion, a finite difference between the fields may exist owing to the spatial offset. This tends to increase the measured difference when there is no motion, making it more difficult to discriminate reliably between the presence and absence of motion. This is particularly true in the presence of noise and/or limited motion. A number of methods have been proposed in the prior art for the measurement of motion between fields of opposite parity. It is an objective of the present invention to provide a method for the measurement of motion between fields of opposite parity with greater ability to discriminate between the presence of motion and lack thereof than those of the prior art.
Various techniques besides the linear methods described above have also been proposed for converting video material not derived from film from interlaced to progressive format. For example, if it can be determined whether specific parts of an image are in motion, then each part can be processed accordingly to achieve better results. This requires measuring motion locally and is akin to the problem of measuring motion globally as required to determine the presence of film sources. The same elemental operations may be used to measure differences at a pixel level; in the global case the differences are summed over an entire field to produce a global measurement, whereas in the local case the difference may be used as a measure of local motion without further summation. As with the global case, the local case may involve various modes of measurement. One of the modes that can be used to advantage is the local measurement of motion between successive fields of opposite parity. It is a further objective of the present invention to provide such a method.
The following patents are relevant as prior art relative to the present invention:
U.S. Patent Documents
5,689,301 (Christopher), Nov. 18, 1997: Method and apparatus for identifying video fields produced by film sources
6,014,182 (Swartz), Jan. 11, 2000: Film source video detection
4,932,280 (Lyon), Jan. 1, 1991: Motion sequence pattern detector for video
5,291,280 (Faroudja), Mar. 1, 1994: Motion detection between even and odd fields within 2:1 interlaced television standard
SUMMARY OF THE INVENTION
According to the present invention, a method and apparatus are provided whereby the motion between two fields of opposite parity may be measured with greater ability to discriminate between the presence of motion and lack thereof than with those techniques of the prior art. According to the present invention, the level of motion between the two fields at a specific position is determined by comparing the values of four vertically adjacent pixels, each having the same horizontal position, where the first and third pixels are taken from vertically adjacent lines in one field and the second and fourth pixels are taken from vertically adjacent lines in the other field, such that the vertical position of the second pixel is halfway between the first and third pixels and the vertical position of the third pixel is halfway between the second and fourth pixels. If the value of the second pixel lies between the values of the first and third pixels, or if the value of the third pixel lies between the values of the second and fourth pixels, then the local motion is taken as zero. Otherwise, the local motion is taken as the minimum of the absolute differences between the first and second pixels, between the second and third pixels, and between the third and fourth pixels.
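The four-pixel rule just described can be sketched in software as follows; the function name and numeric values are illustrative, not a definitive implementation:

```python
def local_motion_4(p1, p2, p3, p4):
    """Local motion from four vertically adjacent pixels at the same
    horizontal position: p1 and p3 from one field, p2 and p4 from the
    field of opposite parity, interleaved in vertical order.

    If p2 lies between p1 and p3, or p3 lies between p2 and p4, the
    samples are consistent with a static image detail and motion is
    taken as zero.  Otherwise motion is the smallest absolute
    difference between vertically adjacent pixels.
    """
    if min(p1, p3) <= p2 <= max(p1, p3):
        return 0
    if min(p2, p4) <= p3 <= max(p2, p4):
        return 0
    return min(abs(p1 - p2), abs(p2 - p3), abs(p3 - p4))

# Static detail at half the vertical frame Nyquist frequency: crests in
# one field, zero crossings in the other; correctly reports no motion.
assert local_motion_4(0, 100, 0, -100) == 0
# Opposite-parity fields offset by B = 50: motion B is reported.
assert local_motion_4(10, 60, 10, 60) == 50
```

The first assertion corresponds to the static example of FIG. 5 and the second to the moving example of FIG. 6, discussed below.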
This technique has the benefit that false detection of motion arising from the presence of high vertical spatial frequencies is minimized, while actual motion is still readily detected. Using this technique, false detection is completely avoided for vertical spatial frequencies less than one half of the vertical frame Nyquist frequency. Utilizing more than four pixels extends the range of vertical spatial frequencies for which false detection is completely avoided. In general, if the method of the present invention is scaled to utilize n pixels, where n is greater than or equal to four, then false detection of motion is avoided for frequencies up to and including (n−3)/(n−2) of the vertical frame Nyquist frequency. In any case, the resulting local measurement of motion can either be used directly or summed over an entire field in order to provide a global motion signal that is useful for determining whether an input sequence derives from a film source.
According to a further aspect of the present invention, the contributing pixels are chosen such that their spatial positions remain constant regardless of whether the most recent of the two fields is even or odd. In this way, any motion that is falsely detected in a static image remains constant from one field to the next, thereby improving the ability to distinguish between falsely detected motion and actual motion that arises as a result of a sequence that was generated in accordance with a 2:2 pull down ratio.
BRIEF DESCRIPTION OF THE DRAWINGS
A description of the prior art and of the preferred embodiments of the present invention is provided hereinbelow with reference to the following drawings in which:
FIG. 1 is a schematic representation showing how motion may be measured between successive fields of opposite parity, according to the prior art.
FIG. 2 is a schematic representation showing how motion may be measured between successive fields of opposite parity using a second method, according to the prior art.
FIG. 3 is a schematic representation showing how motion may be measured between successive fields of opposite parity using a third method, according to the prior art.
FIG. 4 is a schematic representation showing how motion may be measured between successive fields of opposite parity using a fourth method, according to the prior art.
FIG. 5 is a schematic representation of a first example showing how motion may be measured between successive fields of opposite parity, according to the method of the present invention.
FIG. 6 is a schematic representation of a second example showing how motion may be measured between successive fields of opposite parity, according to the method of the present invention.
FIG. 7 is a schematic representation showing how motion may be measured between successive fields of opposite parity, according to an alternative embodiment of the method of the present invention.
FIG. 8 is a block diagram of an apparatus for implementing the method of the present invention.
FIG. 9 is a schematic representation of a third example to show how motion may be measured between successive fields of opposite parity where the most recent of the two fields is even, according to the method of the present invention.
FIG. 10 is a schematic representation of a fourth example to show how motion may be measured between successive fields of opposite parity where the most recent of the two fields is odd, according to the method of the present invention.
FIG. 11 is a schematic representation showing how the contributing pixels are selected to have a particular spatio-temporal relationship to one another depending on whether the most recent field is even or odd, according to a further aspect of the present invention.
FIG. 12 is a schematic representation of a further example to show how motion may be measured between successive fields of opposite parity where the most recent of the two fields is odd, according to the method of the present invention.
FIG. 13 is a block diagram of an apparatus for implementing the method as set forth in FIGS. 9, 11 and 12, according to a preferred embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring now to FIG. 1, a first example is shown of how motion may be measured between successive fields of opposite parity, according to one technique known in the prior art. The left half of FIG. 1 shows the spatio-temporal relationship between a set of vertically and temporally adjacent pixels at a given horizontal position. It is clearly shown in FIG. 1 that the vertical position of each pixel in the even field is halfway between the two nearest pixels in the odd field. The right half of FIG. 1 shows the value of each pixel relative to its vertical position. A curved line is shown connecting the pixels and is intended to represent an image detail, the intensity of which varies vertically within the image in a sinusoidal fashion with the bright and dark image portions (i.e. signal crests and troughs) occurring in the even video field, and intermediate intensity image portions occurring in the odd field. The curved line is drawn continuously through the pixels of both the odd and the even fields to represent the fact that both fields are part of an image in which there is no motion. Two pixels, P1 and P2 are highlighted showing their spatio-temporal relationship to one another and their values within the image. In this example, the image detail has a vertical spatial frequency that is exactly equal to one half of the vertical frame Nyquist frequency and a peak amplitude equal to quantity A. The formula at the bottom of FIG. 1 shows how a local measurement of motion is made using a first prior art technique. The motion is simply taken as the absolute difference between the two pixels P1 and P2, as depicted in FIG. 1. Note that although the pixel values used in this example are intended to represent samples of an image in which there is no motion, application of this prior art technique will result in a measured motion value equal to quantity A. 
Thus, this technique fails to reject as motion the difference between pixels P1 and P2 that arises owing to their different vertical positions.
Referring now to FIG. 2, a somewhat enhanced measurement technique is shown as fully disclosed in U.S. Pat. No. 5,291,280 (Faroudja). The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields while the right half shows the value of each pixel relative to its vertical position for a particular image detail. The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency. The formula for calculating the motion according to this second method is shown at the bottom of FIG. 2. The measured motion is taken as the lesser of the absolute differences between pixels P1 and P2, and between pixels P2 and P3, as depicted in FIG. 2. As before, although the pixel values used in this example are intended to represent samples of an image in which there is no motion, application of this technique will result in a measured motion value equal to quantity A. Thus, as with the previous method, this technique fails to reject as motion the difference between the pixels that arises owing to their different vertical positions.
Referring now to FIG. 3, a further enhanced measurement technique is shown, as disclosed in U.S. Pat. No. 6,014,182 (Swartz). The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields while the right half shows the value of each pixel relative to its vertical position for a particular image detail. The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency. The formula for calculating the motion according to this third method is shown at the bottom of FIG. 3. The measured motion is taken as the lesser of the absolute differences between pixels P1 and P2, and between pixels P2 and P3, unless the absolute difference between pixels P1 and P3 is greater than the lesser of the absolute differences between pixels P1 and P2, and between pixels P2 and P3, in which case the motion value is taken as zero. As before, although the pixel values used in this example are intended to represent samples of an image in which there is no motion, application of this technique results in a measured motion value equal to quantity A. Thus, as with the previous method, this technique fails to reject as motion the difference between the pixels that arises owing to their different vertical positions.
Referring now to FIG. 4, another enhanced measurement technique is shown, as disclosed in U.S. Pat. No. 5,689,301 (Christopher). The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields while the right half shows the value of each pixel relative to its vertical position for a particular image detail. The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency. The formula for calculating the motion according to this fourth method is shown at the bottom of FIG. 4. The measured motion is taken as the lesser of the absolute differences between pixels P1 and P2, and between pixels P2 and P3, unless the value of pixel P2 is between the values of pixels P1 and P3, in which case the motion value is taken as zero. As before, although the pixel values used in this example are intended to represent samples of an image in which there is no motion, application of this technique results in a measured motion value equal to quantity A. Thus, as with the previous method, this technique fails to reject as motion the difference between the pixels that arises owing to their different vertical positions.
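The four prior-art formulas of FIGS. 1 through 4 can be compared directly on this static example. In the following sketch the pixel values are illustrative, with peak amplitude A = 100: P2 samples a crest of the half-Nyquist detail while P1 and P3 sample the zero crossings in the opposite-parity field.

```python
A = 100
p1, p2, p3 = 0, A, 0  # static detail at half the vertical frame Nyquist frequency

fig1 = abs(p1 - p2)                                   # FIG. 1: two-pixel difference
fig2 = min(abs(p1 - p2), abs(p2 - p3))                # FIG. 2 (Faroudja)

m = min(abs(p1 - p2), abs(p2 - p3))
fig3 = 0 if abs(p1 - p3) > m else m                   # FIG. 3 (Swartz)
fig4 = 0 if min(p1, p3) <= p2 <= max(p1, p3) else m   # FIG. 4 (Christopher)

# Every prior-art measure reports motion A for this static image detail.
assert (fig1, fig2, fig3, fig4) == (A, A, A, A)
```

With only a three-pixel aperture straddling one field, P2 at a crest can never lie between two zero-crossing neighbours, so each of these tests misreads the spatial offset as motion.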
Referring now to FIG. 5, an enhanced measurement technique is shown according to the present invention. The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields while the right half shows the value of each pixel relative to its vertical position for a particular image detail. The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency. The formula for calculating the motion according to one aspect of the present invention is shown at the bottom of FIG. 5. The measured motion is taken as the lesser of the absolute differences between pixels P1 and P2, pixels P2 and P3, and between pixels P3 and P4, unless the value of either pixel P2 or pixel P3 is between the values of its immediate neighbours, in which case the motion value is taken as zero. Using this technique, the motion value generated in the example is zero, since the value of pixel P3 is between that of P2 and P4. This is the desired result, since the pixel values in the example are intended to represent samples of an image in which there is no motion. In fact, it can be shown that by using this technique, false detection of motion is completely avoided for vertical spatial frequencies less than one half of the vertical frame Nyquist frequency. Although some of the prior art techniques may avoid false motion under certain conditions, there is no vertical spatial frequency below which any of the four prior art techniques discussed above are guaranteed to avoid all false motion, as provided by the present invention.
Referring now to FIG. 6, another example is provided in which the present invention is applied to an image in which motion exists. The left half of the figure shows the spatio-temporal relationship between pixels in two successive video fields while the right half shows the value of each pixel relative to its vertical position within the image. In this example, a continuous line has been drawn through pixels P1 and P3 from the odd field, and a separate line has been drawn through pixels P2 and P4 from the even field to represent the fact that there is no direct correlation between the samples from the odd field and those from the even field. Pixel values P2 and P4 differ from pixel values P1 and P3 by quantity B. According to the method of the present invention, the motion value is given as quantity B, which is the desired result since it correctly indicates the presence of motion between the fields. The use of a four-pixel aperture in the present invention may result in a lower measured motion value near the edges of moving objects than would otherwise be obtained using a two- or three-pixel aperture as in the prior art methods. When summed over an entire field, this may tend to produce a slightly lower total than would otherwise be obtained. However, the present technique produces significantly lower false motion values for fields between which there is no motion. For typical video sources, the present technique results in a significantly higher ratio between the values measured where motion exists and the values measured where there is none. Hence, the ability to discriminate between motion and lack thereof is enhanced.
In another aspect of the present invention, utilizing greater than four pixels extends the range of vertical spatial frequencies for which false detection is avoided. Referring now to FIG. 7, an example is provided which is similar to that of FIG. 5 except that the method has been generalized to make use of n pixels. The formula for calculating the motion is shown at the bottom of the figure. The example used is that of an image detail that has a vertical spatial frequency exactly equal to one half of the vertical frame Nyquist frequency. Application of the formula in this case yields a motion value of zero, which is the desired result since there is no motion between the fields. It will be understood from FIG. 7 that for higher frequencies as well, in particular those frequencies up to and including (n−3)/(n−2) of the vertical frame Nyquist frequency, false detection of motion is completely avoided.
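A sketch of this generalization to n pixels (again illustrative, not the patent's definitive formula) simply applies the between-neighbours test to every interior pixel:

```python
def local_motion_n(pixels):
    """Generalized measure over n >= 4 vertically adjacent pixels that
    alternate between the two fields of opposite parity.  Motion is
    zero if any interior pixel lies between its two vertical
    neighbours (both of which come from the other field); otherwise it
    is the smallest absolute difference between adjacent pixels.
    False detection is then avoided for vertical spatial frequencies
    up to (n-3)/(n-2) of the vertical frame Nyquist frequency.
    """
    n = len(pixels)
    if n < 4:
        raise ValueError("at least four pixels are required")
    for i in range(1, n - 1):
        lo, hi = sorted((pixels[i - 1], pixels[i + 1]))
        if lo <= pixels[i] <= hi:
            return 0
    return min(abs(a - b) for a, b in zip(pixels, pixels[1:]))

# With n = 4 this reduces to the four-pixel rule of FIG. 5.
assert local_motion_n([0, 100, 0, -100]) == 0
# Six pixels spanning opposite-parity fields offset by 50: motion detected.
assert local_motion_n([10, 60, 10, 60, 10, 60]) == 50
```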
FIG. 8 shows an apparatus implementing the method of the present invention as shown in FIG. 5 where a motion value is calculated based on four pixels. An input video signal is applied to the input of a memory controller 10, a line delay element 12 and a first input of a differencing circuit 14. The pixel that is present at the video input at any given time corresponds to that designated as pixel P4 in FIG. 5. The memory controller stores incoming video data into a DRAM array 11 and later retrieves it so as to produce a version of the input video signal that is delayed (e.g. by 263 lines in the case of an NTSC input). The memory controller 10 may also concurrently retrieve other versions of the input video signal that are delayed by different amounts to be used for other purposes that are not relevant to the present invention. The pixel that is output from the memory controller 10 at any given time corresponds to that designated as pixel P3 in FIG. 5, which is subsequently applied to the input of a second line delay element 13, a first input of a differencing circuit 15 and the second input of differencing circuit 14 referred to herein above. Line delay element 12 provides a version of the input video signal that is delayed by one vertical line, and corresponds to pixel P2 in FIG. 5. Pixel P2 is applied to a first input of a differencing circuit 16 and the second input of differencing circuit 15 described earlier. Line delay element 13 provides a version of the delayed video signal from the memory controller that is further delayed by one vertical line and corresponds to pixel P1 in FIG. 5. Pixel P1 is applied to the second input of differencing circuit 16. Each of the differencing circuits 14-16 generates both the sign and the magnitude of the differences between their input signals. The three signals representing the signs of the differences are applied to the inputs of override logic block 17. 
The three signals representing the magnitudes of the differences are applied to the inputs of the keep smallest value block 18 which propagates only the smallest of the three values at its input. A multiplexor 19 selects either the output of the keep smallest value block or zero, depending on the output of override logic block 17. The value at the output of multiplexor 19 is forced to zero if the signs at the outputs of differencing circuits 14 and 15 are the same, or if the signs at the outputs of differencing circuits 15 and 16 are the same. The value at the output of multiplexor 19 provides a measure of the motion in the vicinity of pixels P1-P4 according to one aspect of the present invention. The local motion value may be integrated over a complete field in order to provide an overall measure of the motion between two fields for the purpose of determining whether the input sequence derives from a film source. Alternatively, the local motion value may be used to advantage without subsequent integration for the conversion from interlaced to progressive format of material that has not been derived from film.
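A behavioural software model of this datapath may clarify the block diagram; the variable names mirror the block numbers of FIG. 8, and the handling of zero-valued differences is an assumption, since the text leaves that edge case to the implementation:

```python
def shares_sign(a, b):
    """True when two differences have the same sign.  A zero difference
    is assumed here to match either sign (an edge case the description
    leaves to the implementation)."""
    return a * b >= 0

def fig8_datapath(p1, p2, p3, p4):
    d16 = p2 - p1                                # differencing circuit 16
    d15 = p3 - p2                                # differencing circuit 15
    d14 = p4 - p3                                # differencing circuit 14
    smallest = min(abs(d14), abs(d15), abs(d16)) # keep smallest value block 18
    override = shares_sign(d14, d15) or shares_sign(d15, d16)  # override logic 17
    return 0 if override else smallest           # multiplexor 19

assert fig8_datapath(0, 100, 0, -100) == 0   # static half-Nyquist detail
assert fig8_datapath(10, 60, 10, 60) == 50   # fields offset by 50: motion
```

Two successive differences sharing a sign is equivalent to the middle pixel lying between its vertical neighbours, so the sign-based override reproduces the four-pixel rule of FIG. 5 using only the outputs of the three differencing circuits.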
In order to fully determine the motion sequence, it is necessary to measure a new motion value for each and every field that is received. In half of the cases, the most recent of the two fields is even, while in the other half the most recent field is odd. In all of the prior art methods described above, the spatio-temporal relationship of the contributing pixels relative to one another is fixed irrespective of whether the most recent field is even or odd. In a further aspect of the present invention, the spatio-temporal relationship is chosen depending on whether the most recent field is even or odd, so as to generate a measure of the motion that does not change unduly from one field to the next. Referring now to FIG. 9, an example of the present invention is provided which is similar to that shown in FIG. 5, except that the image detail includes a vertical frequency component that is greater than half the vertical frame Nyquist frequency. Note that in this example, the most recent of the two fields is even. Application of the method in this case results in a measured motion value of zero, since the value of P3 clearly lies between that of P2 and P4. It should be noted that the inventive method produces a value of zero even though in this case the image detail contains a frequency component outside of the range where false detection is guaranteed to be avoided. This is coincidental and may occur depending on the phase of the image signal with respect to the sample points.
Referring to FIG. 10, an example is set forth in which the method is applied to the same image detail set forth in FIG. 9 but where the most recent field is odd. In this example, the spatio-temporal relationship between pixels P1 to P4 has been maintained, as in the prior art methods described earlier. Due to the half line offset between the odd and even fields, the four contributing pixels have moved along the contour of the static image detail, relative to FIG. 9. Application of the method in this case results in a measured motion value equal to quantity C, since the value of P2 does not lie between that of P1 and P3, nor does the value of P3 lie between that of P2 and P4. Thus, it can be seen that the measured motion value may alternate from one field to the next depending on whether the most recent field is even or odd, despite the fact that there may be no actual motion at all within the image. The inventor has realized that this is a detrimental result, since alternating high and low motion values is exactly the pattern that would be produced by an actual motion sequence generated in accordance with a 2:2 pull down ratio, thereby hampering the ability to distinguish motion from static images. Consequently, the inventor has concluded that the spatio-temporal relationship between the contributing pixels should preferably be chosen depending on whether the most recent field is even or odd, as shown in FIG. 11. Essentially, the pixels are chosen such that for a static image, the same image samples are always used. Thus, if P1 represents a sample from an odd field, then P1 is always taken from an odd field, regardless of whether the most recent field is odd or even.
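A hedged sketch of this parity-consistent selection rule follows. The line indexing, the convention that the odd field holds the upper line of each vertical pair, and the function name are all assumptions made for illustration, not taken from the patent:

```python
def select_pixels(current_field, previous_field, current_is_odd, line, col):
    """Return (P1, P2, P3, P4) in vertical order for the measurement at
    (line, col).  Fields are lists of rows.  Whichever of the two fields
    is odd supplies P1 and P3, so for a static image the same samples
    are used whether the most recent field is odd or even.
    """
    if current_is_odd:
        odd, even = current_field, previous_field
    else:
        odd, even = previous_field, current_field
    # P1, P3 from the odd field; P2, P4 from the even field, interleaved
    # vertically (odd field assumed to hold the upper line of each pair).
    return (odd[line][col], even[line][col],
            odd[line + 1][col], even[line + 1][col])

# A static 8-line frame split into its two fields.
frame = [[i * 10 + c for c in range(4)] for i in range(8)]
odd_field = frame[0::2]
even_field = frame[1::2]

# The same four samples are chosen whichever field arrived most recently.
a = select_pixels(even_field, odd_field, False, 1, 2)
b = select_pixels(odd_field, even_field, True, 1, 2)
assert a == b == (22, 32, 42, 52)
```

Because the selected samples are identical for both field phases, any residual false motion in a static image stays constant from field to field instead of alternating like a 2:2 pull down cadence.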
Referring now to FIG. 12, an example is provided of the preferred method for choosing the spatio-temporal relationship between the contributing pixels as applied to the example of FIGS. 9 and 10 for the case where the most recent field is odd. Application of the formula according to the method of the present invention yields a measured motion value of zero, which is the same result as in FIG. 9 where the most recent field is even. Thus, undue modulation of the motion value from field to field is effectively avoided. It should be noted that in the examples of FIGS. 9, 11 and 12, pixel P1 has consistently been taken from the odd field. It will be apparent to one of ordinary skill in the art that pixel P1 could have consistently been taken from the even field instead, with results equal in overall performance.
FIG. 13 shows an apparatus for implementing the method of the present invention as shown in FIGS. 9, 11 and 12. For convenience, the same numbers have been used to designate those items that are in common with the apparatus shown in FIG. 8. The refinement of appropriately selecting the pixels so as to avoid modulation of the motion signal from one field to the next is achieved by the addition of four multiplexors 20-23 and through manipulation of the delay provided by the memory controller 10. It will be apparent from inspection of FIGS. 10 and 12 that the less desirable spatio-temporal relationship between the contributing pixels for the case in which the most recent field is odd, as shown in FIG. 10, can be transformed to the more desirable case as shown in FIG. 12, by delaying the even field by one less line and by subsequently interchanging pixel P1 with P2 and pixel P3 with P4. In the apparatus of FIG. 13, multiplexors 20 and 21 are used to interchange pixels P3 and P4, while multiplexors 22 and 23 are used to interchange pixels P1 and P2, for the case when the field that is currently being input is odd.
A person understanding the present invention may conceive of other embodiments and variations thereof without departing from the sphere and scope of the invention as defined by the claims appended hereto.

Claims (12)

1. A method for measuring motion at a horizontal and vertical position between video fields of opposite parity of a video signal comprising the steps of:
measuring the video signal values of at least two vertically adjacent pixels from a video field of one parity and at least two vertically adjacent pixels from a video field of the opposite parity such that when taken together, the pixels represent contiguous samples of an image at said horizontal and vertical position; and
determining, using differencing circuitry, whether the signal value of any of said pixels lies between the signal values of adjacent pixels in the field of opposite parity and in response outputting a zero motion value, otherwise, outputting a motion value equal to the lowest absolute difference between any of said pixels and its closest adjacent pixel in the field of opposite parity; and
converting the video signal from interlaced to progressive format using the motion value.
2. The method of claim 1 wherein said pixels are measured from the same vertical positions in fields of like parity, irrespective of the order in which the fields were received.
3. The method of claim 1 wherein two vertically adjacent pixels are taken from an even video field and two vertically adjacent pixels are taken from an odd video field.
4. The method of claim 1 wherein motion values produced from each of a plurality of sets of vertically adjacent pixels are summed substantially over an entire field to produce an overall measure of the motion between said fields of opposite parity.
5. Apparatus for measuring motion at a horizontal and vertical position between video fields of opposite parity comprising:
register means for selecting at least two vertically adjacent pixels from a video field of one parity and at least two vertically adjacent pixels from a video field of the opposite parity such that when taken together, the pixels represent contiguous samples of an image at said horizontal and vertical position; and
differencing circuitry for determining whether the signal value of any of said pixels lies between the signal values of adjacent pixels in the field of opposite parity and in response outputting a zero motion value, otherwise, outputting a motion value equal to the lowest absolute difference between any of said pixels and its closest adjacent pixel in the field of opposite parity.
6. The apparatus of claim 5 wherein said pixels are measured from the same vertical positions in fields of like parity, irrespective of the order in which the fields were received.
7. The apparatus of claim 5 wherein two vertically adjacent pixels are taken from an even video field and two vertically adjacent pixels are taken from an odd video field.
8. The apparatus of claim 5 wherein motion values produced from each of a plurality of sets of vertically adjacent pixels are summed substantially over an entire field to produce an overall measure of the motion between said fields of opposite parity.
9. Apparatus for measuring motion at a horizontal and vertical position between video fields of opposite parity comprising:
means for measuring the signal values of at least two vertically adjacent pixels from a video field of one parity and at least two vertically adjacent pixels from a video field of the opposite parity such that when taken together, the pixels represent contiguous samples of an image at said horizontal and vertical position; and
differencing circuitry for determining whether the signal value of any of said pixels lies between the signal values of adjacent pixels in the field of opposite parity and in response outputting a zero motion value, otherwise, outputting a motion value equal to the lowest absolute difference between any of said pixels and its closest adjacent pixel in the field of opposite parity.
10. The apparatus of claim 9 wherein said pixels are measured from the same vertical positions in fields of like parity, irrespective of the order in which the fields were received.
11. The apparatus of claim 9 wherein two vertically adjacent pixels are taken from an even video field and two vertically adjacent pixels are taken from an odd video field.
12. The apparatus of claim 9 wherein motion values produced from each of a plurality of sets of vertically adjacent pixels are summed substantially over an entire field to produce an overall measure of the motion between said fields of opposite parity.
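The per-pixel measure recited in claims 1, 5 and 9 can be illustrated in code. Below is a minimal sketch, not the patented circuit: it assumes integer luma samples, takes two vertically adjacent pixels from each field in their interleaved vertical order (e0, o0, e1, o1), returns zero when any interior pixel lies between its opposite-parity neighbours, and otherwise returns the smallest absolute difference between adjacent opposite-parity pixels. The function name `field_motion` is hypothetical.

```python
def field_motion(e0: int, o0: int, e1: int, o1: int) -> int:
    """Sketch of the claimed inter-field motion measure.

    e0, e1 -- vertically adjacent samples from the field of one parity
    o0, o1 -- vertically adjacent samples from the field of opposite parity
    Interleaved, the four samples are contiguous vertically: e0, o0, e1, o1.
    """
    # If an interior sample lies between its opposite-parity neighbours,
    # the column is monotonic: this is vertical detail, not inter-field
    # motion, so a zero motion value is output.
    if min(e0, e1) <= o0 <= max(e0, e1):
        return 0
    if min(o0, o1) <= e1 <= max(o0, o1):
        return 0
    # Otherwise output the lowest absolute difference between any sample
    # and its closest adjacent sample in the field of opposite parity.
    return min(abs(e0 - o0), abs(o0 - e1), abs(e1 - o1))
```

Per claims 4, 8 and 12, these per-position values would then be summed over substantially the entire field to yield an overall motion measure between the two fields.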
US11/250,140 2000-12-13 2005-10-12 Method and apparatus for detecting motion between odd and even video fields Expired - Lifetime USRE41196E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/250,140 USRE41196E1 (en) 2000-12-13 2005-10-12 Method and apparatus for detecting motion between odd and even video fields

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/734,745 US6633612B2 (en) 2000-12-13 2000-12-13 Method and apparatus for detecting motion between odd and even video fields
US11/250,140 USRE41196E1 (en) 2000-12-13 2005-10-12 Method and apparatus for detecting motion between odd and even video fields

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/734,745 Reissue US6633612B2 (en) 2000-12-13 2000-12-13 Method and apparatus for detecting motion between odd and even video fields

Publications (1)

Publication Number Publication Date
USRE41196E1 true USRE41196E1 (en) 2010-04-06

Family

ID=24952912

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/734,745 Ceased US6633612B2 (en) 2000-12-13 2000-12-13 Method and apparatus for detecting motion between odd and even video fields
US09/741,825 Expired - Lifetime US6647062B2 (en) 2000-12-13 2000-12-22 Method and apparatus for detecting motion and absence of motion between odd and even video fields
US11/250,140 Expired - Lifetime USRE41196E1 (en) 2000-12-13 2005-10-12 Method and apparatus for detecting motion between odd and even video fields

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/734,745 Ceased US6633612B2 (en) 2000-12-13 2000-12-13 Method and apparatus for detecting motion between odd and even video fields
US09/741,825 Expired - Lifetime US6647062B2 (en) 2000-12-13 2000-12-22 Method and apparatus for detecting motion and absence of motion between odd and even video fields

Country Status (2)

Country Link
US (3) US6633612B2 (en)
TW (1) TW518884B (en)

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7215376B2 (en) 1997-10-06 2007-05-08 Silicon Image, Inc. Digital video system and methods for providing same
US6587158B1 (en) * 1998-07-23 2003-07-01 Dvdo, Inc. Method and apparatus for reducing on-chip memory in vertical video processing
US6489998B1 (en) * 1998-08-11 2002-12-03 Dvdo, Inc. Method and apparatus for deinterlacing digital video images
US6515706B1 (en) * 1998-09-15 2003-02-04 Dvdo, Inc. Method and apparatus for detecting and smoothing diagonal features in video images
US20070211167A1 (en) * 1998-10-05 2007-09-13 Adams Dale R Digital video system and methods for providing same
US6697519B1 (en) * 1998-10-29 2004-02-24 Pixar Color management system for converting computer graphic images to film images
US6909469B2 (en) * 1999-08-11 2005-06-21 Silicon Image, Inc. Interlace motion artifact detection using vertical frequency detection and analysis
WO2001080559A2 (en) 2000-04-18 2001-10-25 Silicon Image Method, system and apparatus for identifying the source type and quality level of a video sequence
US6822691B1 (en) * 2000-12-20 2004-11-23 Samsung Electronics Co., Ltd. Method of detecting motion in an interlaced video sequence utilizing region by region motion information and apparatus for motion detection
US7095445B2 (en) * 2000-12-20 2006-08-22 Samsung Electronics Co., Ltd. Method of detecting motion in an interlaced video sequence based on logical operation on linearly scaled motion information and motion detection apparatus
DE10140695C2 (en) * 2001-08-24 2003-10-09 Ps Miro Holdings Inc & Co Kg Method and device for detecting movement in an image
US6784942B2 (en) * 2001-10-05 2004-08-31 Genesis Microchip, Inc. Motion adaptive de-interlacing method and apparatus
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
US20040001546A1 (en) 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US7154952B2 (en) 2002-07-19 2006-12-26 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
CA2414378A1 (en) * 2002-12-09 2004-06-09 Corel Corporation System and method for controlling user interface features of a web application
GB2396505B (en) 2002-12-20 2006-01-11 Imagination Tech Ltd 3D Vector method of inter-field motion detection
US9330060B1 (en) 2003-04-15 2016-05-03 Nvidia Corporation Method and device for encoding and decoding video image data
US8660182B2 (en) * 2003-06-09 2014-02-25 Nvidia Corporation MPEG motion estimation based on dual start points
US8064520B2 (en) 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US7292283B2 (en) * 2003-12-23 2007-11-06 Genesis Microchip Inc. Apparatus and method for performing sub-pixel vector estimations using quadratic approximations
US7542095B2 (en) * 2005-01-20 2009-06-02 Samsung Electronics Co., Ltd. Method and system of noise-adaptive motion detection in an interlaced video sequence
JP4655218B2 (en) * 2005-09-16 2011-03-23 ソニー株式会社 Signal processing apparatus and method, program, and recording medium
US7995141B2 (en) * 2005-10-18 2011-08-09 Broadcom Corporation System, method, and apparatus for displaying pictures on an interlaced display
US7924345B2 (en) * 2005-10-20 2011-04-12 Broadcom Corp. Method and system for deinterlacing using polarity change count
US7916784B2 (en) * 2005-10-20 2011-03-29 Broadcom Corporation Method and system for inverse telecine and field pairing
US7791673B2 (en) * 2005-10-20 2010-09-07 Broadcom Corporation Method and system for polarity change count
US8731071B1 (en) 2005-12-15 2014-05-20 Nvidia Corporation System for performing finite input response (FIR) filtering in motion estimation
US8724702B1 (en) 2006-03-29 2014-05-13 Nvidia Corporation Methods and systems for motion estimation used in video coding
US8660380B2 (en) * 2006-08-25 2014-02-25 Nvidia Corporation Method and system for performing two-dimensional transform on data value array with reduced power consumption
US8582032B2 (en) * 2006-09-07 2013-11-12 Texas Instruments Incorporated Motion detection for interlaced video
US7961256B2 (en) * 2006-10-17 2011-06-14 Broadcom Corporation System and method for bad weave detection for inverse telecine
TWI318535B (en) * 2006-11-14 2009-12-11 Novatek Microelectronics Corp Method and system for video de-interlace strategy determination
EP1990991A1 (en) * 2007-05-09 2008-11-12 British Telecommunications Public Limited Company Video signal analysis
US20080291209A1 (en) * 2007-05-25 2008-11-27 Nvidia Corporation Encoding Multi-media Signals
US8756482B2 (en) * 2007-05-25 2014-06-17 Nvidia Corporation Efficient encoding/decoding of a sequence of data frames
US9118927B2 (en) * 2007-06-13 2015-08-25 Nvidia Corporation Sub-pixel interpolation and its application in motion compensated encoding of a video signal
US8254455B2 (en) 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
US8873625B2 (en) * 2007-07-18 2014-10-28 Nvidia Corporation Enhanced compression in representing non-frame-edge blocks of image frames
CN102138323A (en) * 2008-09-01 2011-07-27 三菱数字电子美国公司 Picture improvement system
TW201021547A (en) * 2008-11-25 2010-06-01 Novatek Microelectronics Corp Apparatus and method for motion adaptive interlacing
US8666181B2 (en) * 2008-12-10 2014-03-04 Nvidia Corporation Adaptive multiple engine image motion detection system and method
US8189666B2 (en) 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
US20120051442A1 (en) * 2010-08-31 2012-03-01 Cristarella Sarah J Video Processor Configured to Correct Field Placement Errors in a Video Signal
US9171370B2 (en) * 2010-10-20 2015-10-27 Agency For Science, Technology And Research Method, an apparatus and a computer program product for deinterlacing an image having a plurality of pixels

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4998287A (en) 1988-10-14 1991-03-05 General Instrument Corporation Determination of sequential positions of video fields derived from film
US4982280A (en) 1989-07-18 1991-01-01 Yves C. Faroudja Motion sequence pattern detector for video
US5394196A (en) 1991-04-05 1995-02-28 Thomson-Csf Method of classifying the pixels of an image belonging to a sequence of moving images and method of temporal interpolation of images using the said classification
US5274442A (en) * 1991-10-22 1993-12-28 Mitsubishi Denki Kabushiki Kaisha Adaptive blocking image signal coding system
US5594813A (en) * 1992-02-19 1997-01-14 Integrated Information Technology, Inc. Programmable architecture and methods for motion estimation
US5901248A (en) * 1992-02-19 1999-05-04 8X8, Inc. Programmable architecture and methods for motion estimation
US5291280A (en) 1992-05-05 1994-03-01 Faroudja Y C Motion detection between even and odd fields within 2:1 interlaced television standard
US5317398A (en) 1992-08-17 1994-05-31 Rca Thomson Licensing Corporation Video/film-mode (3:2 pulldown) detector using patterns of two-field differences
US5398071A (en) 1993-11-02 1995-03-14 Texas Instruments Incorporated Film-to-video format detection for digital television
US5521644A (en) 1994-06-30 1996-05-28 Eastman Kodak Company Mechanism for controllably deinterlacing sequential lines of video data field based upon pixel signals associated with four successive interlaced video fields
EP0690617A2 (en) 1994-06-30 1996-01-03 Eastman Kodak Company Method for controllably deinterlacing sequential lines of video data fields based upon pixel signals associated with four successive interlaced video fields
US5563651A (en) 1994-12-30 1996-10-08 Thomson Consumer Electronics, Inc. Method and apparatus for identifying video fields produced by film sources employing 2-2 and 3-2 pull down sequences
US5689301A (en) 1994-12-30 1997-11-18 Thomson Consumer Electronics, Inc. Method and apparatus for identifying video fields produced by film sources
US5818968A (en) * 1995-03-20 1998-10-06 Sony Corporation High-efficiency coding method, high-efficiency coding apparatus, recording and reproducing apparatus, and information transmission system
US6014182A (en) 1997-10-10 2000-01-11 Faroudja Laboratories, Inc. Film source video detection
US6130723A (en) 1998-01-15 2000-10-10 Innovision Corporation Method and system for improving image quality on an interlaced video display
US6157412A (en) 1998-03-30 2000-12-05 Sharp Laboratories Of America, Inc. System for identifying video fields generated from film sources
US6340990B1 (en) * 1998-03-31 2002-01-22 Applied Intelligent Systems Inc. System for deinterlacing television signals from camera video or film
US20030052996A1 (en) 1998-09-15 2003-03-20 Dvdo, Inc. Method and apparatus for detecting and smoothing diagonal features in video images
US6421698B1 (en) * 1998-11-04 2002-07-16 Teleman Multimedia, Inc. Multipurpose processor for motion estimation, pixel processing, and general processing
US6519005B2 (en) * 1999-04-30 2003-02-11 Koninklijke Philips Electronics N.V. Method of concurrent multiple-mode motion estimation for digital video
US6563550B1 (en) 2000-03-06 2003-05-13 Teranex, Inc. Detection of progressive frames in a video field sequence
US6473460B1 (en) * 2000-03-31 2002-10-29 Matsushita Electric Industrial Co., Ltd. Method and apparatus for calculating motion vectors
US6545719B1 (en) 2000-03-31 2003-04-08 Matsushita Electric Industrial Co., Ltd. Apparatus and method for concealing interpolation artifacts in a video interlaced to progressive scan converter
US20020149703A1 (en) 2000-04-18 2002-10-17 Adams Dale R. Method, system and article of manufacture for identifying the source type and quality level of a video sequence

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Delogne et al., "Conversion from interlaced to progressive formats by means of motion compensation based techniques", IEEE International Conference on Image Processing and its Applications, pp. 409-412, Apr. 1992.
Sugiyama et al., "A method of de-interlacing with motion compensated interpolation", IEEE Transactions on Consumer Electronics, vol. 45, Iss. 3, pp. 611-616, Aug. 1999.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080030613A1 (en) * 2004-12-09 2008-02-07 Thebault Cedric Method And Apparatus For Generating Motion Compensated Pictures
US8355441B2 (en) * 2004-12-09 2013-01-15 Thomson Licensing Method and apparatus for generating motion compensated pictures
WO2019058173A1 (en) 2017-09-22 2019-03-28 Interblock D.D. Electronic-field communication for gaming environment amplification
US10417857B2 (en) 2017-09-22 2019-09-17 Interblock D.D. Electronic-field communication for gaming environment amplification

Also Published As

Publication number Publication date
TW518884B (en) 2003-01-21
US6647062B2 (en) 2003-11-11
US20020109790A1 (en) 2002-08-15
US20020105596A1 (en) 2002-08-08
US6633612B2 (en) 2003-10-14

Similar Documents

Publication Publication Date Title
USRE41196E1 (en) Method and apparatus for detecting motion between odd and even video fields
US6421090B1 (en) Motion and edge adaptive deinterlacing
US7499103B2 (en) Method and apparatus for detecting frequency in digital video images
US6784942B2 (en) Motion adaptive de-interlacing method and apparatus
US6262773B1 (en) System for conversion of interlaced video to progressive video using edge correlation
US6829013B2 (en) Method and apparatus for detecting and smoothing diagonal features in video images
US7420618B2 (en) Single chip multi-function display controller and method of use thereof
US20030098924A1 (en) Method and apparatus for detecting the source format of video images
US7633559B2 (en) Interlace motion artifact detection using vertical frequency detection and analysis
US7477319B2 (en) Systems and methods for deinterlacing video signals
US8497937B2 (en) Converting device and converting method of video signals
US20080007614A1 (en) Frame rate conversion device, image display apparatus, and method of converting frame rate
EP0830018A2 (en) Method and system for motion detection in a video image
US8189105B2 (en) Systems and methods of motion and edge adaptive processing including motion compensation features
US20030098925A1 (en) Method of edge based interpolation
EP2188978A2 (en) Method and apparatus for line-based motion estimation in video image data
US20060209957A1 (en) Motion sequence pattern detection
US6614485B2 (en) Deinterlacing apparatus
US20040246546A1 (en) Scan line interpolation device, image processing device, image display device, and scan line interpolation method
US7616693B2 (en) Method and system for detecting motion between video field of same and opposite parity from an interlaced video source
US7430014B2 (en) De-interlacing device capable of de-interlacing video fields adaptively according to motion ratio and associated method
US20050163401A1 (en) Display image enhancement apparatus and method using adaptive interpolation with correlation
US20130044259A1 (en) System And Method For Vertical Gradient Detection In Video Processing
KR100252943B1 (en) scan converter
WO2010004468A1 (en) Reducing de-interlacing artifacts

Legal Events

Date Code Title Description
FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12