CA2146884A1 - Motion adaptive scan-rate conversion using directional edge interpolation - Google Patents
Motion adaptive scan-rate conversion using directional edge interpolation
- Publication number
- CA2146884A1
- Authority
- CA
- Canada
- Prior art keywords
- data
- signal
- motion signal
- motion
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0117—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
- H04N7/012—Conversion between an interlaced and a progressive signal
Abstract
A method for processing video data to produce a progressively scanned signal from an input of conventional interlaced video. The data is received at a processor (1) and used to determine a motion signal (26) over time between fields of the data. The motion signal is filtered to reduce errors caused by noise-corrupted video sources and then further filtered to spread out the determined motion signal. Edge information (30) is located and combined with the motion signal to produce an integrated progressive-scan signal (36) for display on a video display device, producing images with sharper edges and motion signals which have a lower susceptibility to noise.
Description
MOTION ADAPTIVE SCAN-RATE CONVERSION
USING DIRECTIONAL EDGE INTERPOLATION
BACKGROUND OF THE INVENTION
1. Field of the Invention This invention relates to digital video systems, more particularly to the display of motion video sequences on digital display systems.
2. Background of the Invention As television moves from an analog system to a digital system, several problems arise. One such problem occurs in the depiction of moving objects across the field of display.
When an object moves across the display in an analog system, the edges, or the object boundaries, remain true to life, with no real difficulties in portraying curves, diagonals and other features of the objects in motion. One example of an edge would be the curve of a red ball against a blue background. However, in a pixelated display with individual cells instead of lines of continuous images, the edge integrity becomes harder to maintain.
An additional problem is that most conventional televisions use an interlaced format, where the display device draws every other line during one interval, then draws the missing lines in the second interval. In a digital television using such techniques as progressive scan, where every line is "drawn"
during the same interval, the missing data from the second interval must be interpolated. Interpolation of moving objects creates artifacts, or visual images that have errors in them.
TI-189~7 Page 1
Rounded edges on an object such as a ball present no real problems when stationary. The curves smooth out through the use of prior field data of the stationary object. Without the use of prior field data, a curve would have a jagged edge, looking much like a stair step or serrated edge. When the object moves, however, previous field data can no longer be used due to the lack of correlation between the present field and the past field. Hence, line averaging techniques using the current field are often employed for the interpolation process. Simple line averaging techniques suffer from a lack of perceived resolution, which is evidenced by blurring and serrated edges. These visual artifacts are due to an interpolation process that does not take into consideration the actual edge content of the data.
The adaptive techniques used have been unsatisfactory in resolving moving edges. The resulting picture has artifacts such as the serrated edges mentioned above, that detract from the advantages of high-definition television (HDTV) or the generally sharper picture possible in digital televisions.
Some method is needed that allows the display to portray moving edges in keeping with the clarity and sharpness available in digital television, that also is usable in the higher speed environment of progressive scan without a huge increase in the processing requirements.
SUMMARY OF THE INVENTION
An interlaced-to-progressive-scan conversion process is disclosed herein.
The process employs motion-compensated interpolation. It performs motion detection with the use of median-filtered inter-frame difference signals and uses a fast median filtering procedure. The process is edge adaptive, using edge orientations from the original interlaced picture. The process provides for motion detection that has a low susceptibility to noise, while also providing for an adaptive interpolation process which preserves the integrity of edges found in the original interlaced picture. The process results in a picture with sharper moving edges and lower noise, overall resulting in a better scene presentation for the viewer.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention and for further advantages thereof, reference is now made to the following Detailed Description taken in conjunction with the accompanying Drawings in which:
Figure 1 shows a flow chart of a motion adaptive interlace-to-progressive-scan conversion.
Figure 2 shows an exploded view of a flow chart of a motion adaptive interlace-to-progressive-scan conversion with a more detailed description of the process of generating motion and edge detection signals.
Figures 3a -3b show graphical examples of a process to determine a motion signal.
Figures 4a - 4d show graphical representations of a median filtering process.
Figure 5 shows a graphical example of edge detection.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Motion adaptive interlaced-to-progressive-scan conversion (IPC) is used to eliminate artifacts attributed to the interlaced scanning format, where each field contains every other line and the two are interlaced into a complete frame. IPC techniques use the weighted sum of a motion signal, k, along with inter- and intra-field values.
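The weighted sum described above can be sketched as follows. This is an illustrative sketch only: the function name, the choice of a normalized k in [0, 1], and the use of a simple clamp are assumptions for clarity, not details taken from the disclosure.

```python
def ipc_mix(intra: float, inter: float, k: float) -> float:
    """Blend an intra-field estimate (line average from the current field)
    with an inter-field estimate (pixel from the previous field) by the
    motion signal k. k = 0 means no detected motion (trust the previous
    field); k = 1 means full motion (trust the current-field average)."""
    k = min(max(k, 0.0), 1.0)  # clamp the motion signal to [0, 1]
    return k * intra + (1.0 - k) * inter

# For a static region the previous-field pixel is passed through:
print(ipc_mix(100.0, 50.0, 0.0))  # -> 50.0
# For a fully moving region the line average is used instead:
print(ipc_mix(100.0, 50.0, 1.0))  # -> 100.0
```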
One example of an IPC method is shown in Figure 1. The discussion of Figure 1 merely points out which signals travel where and at what times. The functionality of the signals and the purposes behind the delays will be discussed with Figure 2.
The luminance signal, Y, travels to Scan-line Video Processor (SVP) #1 along path 10. The same signal is passed unchanged to SVP #2, along path 12. Path 14 takes Y and delays it one field at 'Field DL' 16. This delayed field is passed directly to SVP #2 along path 18. Before travelling to SVP #1, however, the already-once-delayed signal goes through a one-line horizontal delay at 'lH' 20, and another field delay at 'Field DL' 22.
The twice-delayed signal now travels to SVP #1 along path 24.
SVP #1 produces three signals. The first signal is k' at line 26. It is delayed one field and reprocessed to aid in the production of the motion signal k at line 28. The edge information exits SVP #1 on line 30 and enters SVP #2. SVP #2 has the following inputs: the original luminance signal Y on line 12, a once-delayed Y signal on line 18, a motion signal k on line 28, an edge information signal on line 30; and two color difference signals, R-Y and B-Y, on lines 32 and 34, respectively. SVP #2 has output signals Yp for luminance proscan on line 36, and color difference signals R-Y' and B-Y' on line 38.
It must be understood that, if the SVP is big enough and fast enough, the processes performed in SVP #2 could possibly be performed in a different section of SVP #1. However, for ease of discussion, it is more understandable to use two SVPs. Additionally, the type of processor used does not have to be an SVP at all.
It is possible that other processors could be adapted to operate in substantially the same manner as needed to implement these algorithms.
Motion Signal Processing Looking at the various signals and their function in producing the proscan output Yp, turn now to Figure 2. Figure 2 shows a more detailed schematic of the internal processes of each SVP. The area enclosed in the upper dashed-line box is SVP #1 from Figure 1. The original Y signal again resides on line 10. As it enters SVP #1, lines 14 and 40 tap off of it. Line 14 delays the signal for one field at 'Field DL' 16 because of the manner in which motion is detected. In order to determine the magnitude of motion, a comparison must be made between the current field and the twice-delayed field. This once-delayed luminance field is then passed to SVP #2 along path 18. Path 14 continues to 'lH' delay 20 to prevent any odd/even line mismatch between the delayed fields. It delays the field one horizontal line. The field is then delayed again at 'Field DL' 22. This twice-delayed field passes along path 24. The current field entering the system on path 10 then subtracts the twice-delayed field on path 24, giving a comparison value of the two fields.
A graphical representation of this motion signal determination is shown in Figures 3a and 3b. The field difference is found by comparing the current field with the twice-delayed field at the difference sign in Figure 3a. The interpolated pixel X is determined using the motion signal k, in conjunction with the spatial neighbor pixels of X, as well as pixel Z from the previous field, in Figure 3b. This diagram brings together the concepts of edge information and the motion signal, which will be discussed further in reference to Figure 5.
Because the comparison value is a signed number, it has nine bits. By taking the absolute value of the value at 'ABS' 42, this is reduced to an eight-bit number. The nonlinear function 'NL' 44 then reduces the eight bits to four for passage into the median filter 45.
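The bit-width reduction above can be sketched as follows. The exact shape of the nonlinear function 'NL' is not given in the text, so a simple shift-and-clamp stands in for it here; the function name and 8-bit pixel assumption are likewise illustrative only.

```python
def motion_sample(curr: int, prev2: int) -> int:
    """Reduce a field difference to a 4-bit motion value.

    For 8-bit video the signed difference fits in 9 bits; 'ABS' folds it
    to an 8-bit magnitude, and a stand-in for 'NL' keeps the top 4 bits.
    """
    diff = curr - prev2          # signed 9-bit field difference
    mag = abs(diff)              # 'ABS': 9 signed bits -> 8-bit magnitude
    return min(mag >> 4, 15)     # 'NL' stand-in: reduce to 4 bits (0..15)

print(motion_sample(10, 10))   # identical fields -> 0 (no motion)
print(motion_sample(255, 0))   # maximal change -> 15 (full motion)
```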
The median filtering process is shown in Figure 4a. By median filtering the motion signal, any point noise sources can be eliminated, thus adding reliability to the motion signal. To find the lowest-noise target data, the median filter uses the values of neighboring data points to find the target data, as shown in Figure 4a.
The median filtering technique, used merely as a specific example of this process, represents a fast and efficient method of performing a 5-tap median calculation. Fast and efficient processes are necessary for many digital-signal-processing (DSP) applications where execution time and program instruction space are at a premium. Real-time implementations, such as this process, place an even higher price on execution time and instruction space.
The 5-tap median filter process used in this procedure requires a total of 141 instructions, where a more conventional approach requires approximately 217 instructions out of a possible 910 instructions in the current configuration of scan-line video processors, such as SVP #1. The use of this fast median filter algorithm represents an approximate 35% savings in instruction space compared to the conventional algorithm.
The conventional approach to performing a 5-tap median filter involves either of the following:
MED(a,b,c,d,e) = MAX[min(a,b,c), min(a,b,d), min(a,b,e), min(a,c,d), min(a,c,e), min(a,d,e), min(b,c,d), min(b,c,e), min(b,d,e), min(c,d,e)]
or
MED(a,b,c,d,e) = MIN[max(a,b,c), max(a,b,d), max(a,b,e), max(a,c,d), max(a,c,e), max(a,d,e), max(b,c,d), max(b,c,e), max(b,d,e), max(c,d,e)].
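The first conventional formula above can be written directly in code: the maximum of the minima over all C(5,3) = 10 three-element subsequences. The function name is illustrative; the combinatorial structure follows the formula.

```python
from itertools import combinations

def med5_conventional(a, b, c, d, e):
    """5-tap median as MAX of MIN over all 10 three-element subsets,
    per the conventional MED(a,b,c,d,e) formula."""
    vals = (a, b, c, d, e)
    return max(min(t) for t in combinations(vals, 3))

print(med5_conventional(3, 1, 4, 1, 5))  # -> 3 (median of 1,1,3,4,5)
```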
In general, for an L-element sequence, the conventional method involves taking the minimum or maximum of the maximum or minimum of L!/[((L+1)/2)! ((L-1)/2)!]
subsequences. This factorial expression implies that as the length of the L-element sequence increases, the complexity increases in a factorial manner.
Conversely, the complexity of the present fast median filter increases in a linear manner as the length of the L-element sequence increases. Therefore, higher complexity median filter implementations could be achieved using the instant median filter process, while keeping the execution time and instruction space at a minimum. Variables V0, V1, V2, V3, and V4 represent the data points surrounding and including the point being interpolated, as shown in Figure 4b. Two of the given values are compared and the extreme of these is removed in step 46. This filtering process can either process the maximum values and use the lowest value for median determination, or it can process the minimum values and use the highest value for median determination.
The sought-after result is to find the middle value of the five inputs. Rather than restrict the discussion, these maximum or minimum values will be referred to as 'extreme' values.
If this were the implementation where the maximum values are processed, step 46 functions as shown in Figure 4c. If V1 is greater than or equal to V0, then S0 equals 0; otherwise, S0 equals 1. The output of step 50, on line 52, is V0 if S0 is 0, as shown in Figure 4d. This means that V0 is smaller than V1.
This process continues until, at step 54, the set of extremes D0 through D3 represents the four highest or lowest values. This set is further reduced to a set of the three highest or lowest values, C0 through C2, by step 56. In steps 58 and 60 the three highest or lowest values are sorted to find the opposite extreme of those three variables. For example, if the values C0 through C2 are the largest three values, steps 58 and 60 determine which is the minimum of the three. This then becomes the median value.
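The discard-the-extremes procedure above can be sketched compactly: twice remove the lowest extreme (leaving the four, then three, highest values), then take the opposite extreme of the three survivors. This sketch mirrors the maximum-processing variant only; the hardware comparator network of Figures 4b-4d is abstracted into list operations for clarity.

```python
def med5_fast(v0, v1, v2, v3, v4):
    """Tournament-style 5-tap median per the described steps."""
    vals = [v0, v1, v2, v3, v4]
    vals.remove(min(vals))   # after step 54: four highest remain (D0..D3)
    vals.remove(min(vals))   # after step 56: three highest remain (C0..C2)
    return min(vals)         # steps 58/60: opposite extreme is the median

print(med5_fast(3, 1, 4, 1, 5))  # -> 3, matching the conventional formula
```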
The above example is for a 5-tap median filter. More or fewer taps can be used. At some point, the number of SVP instructions will rise to a point where the extra precision advantage is no longer greater than the number of instructions required. That point must be determined by each designer. However, at this point a 5-tap filter has been determined to be the best compromise between instruction count and precision obtained.
Returning now to Figures 1 and 2, the output of the median filter on line 26 is then sent in a feedback loop as signal k', as shown on the same line in Figure 1.
Signal k' is the motion signal used as an input to the temporal filter. The remaining processing done on motion signal k' is shown in more detail in Figure 2.
Dashed line 27 represents the processes done on motion signal k' from line 26.
This step temporally filters the signal by using a series of field delay lines 'FIELD DL' and horizontal delay 'lH' in conjunction with the values determined from the median filtering process.
Dashed line 29 encompasses the spatial filtering performed after the temporal filtering. The spatial filtering step comprises a vertical low pass filter 'VLPF' and a horizontal low pass filter 'HLPF', both of which serve to spread out the motion in the final motion signal, k, which is output from SVP #1 on line 28.
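The spreading effect of the low pass filters can be sketched as below for the horizontal direction (the vertical filter works identically along columns). The (1, 2, 1)/4 kernel and border clamping are illustrative assumptions; the patent does not specify the filter coefficients.

```python
def spread_motion(k_row, kernel=(1, 2, 1)):
    """Horizontal low pass filter that spreads motion values to
    neighboring pixels, as the 'HLPF' step does for the k signal."""
    n, s = len(k_row), sum(kernel)
    out = []
    for i in range(n):
        acc = 0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - 1, 0), n - 1)  # clamp at row borders
            acc += w * k_row[idx]
        out.append(acc // s)
    return out

# A single motion spike spreads to its neighbors:
print(spread_motion([0, 0, 8, 0, 0]))  # -> [0, 2, 4, 2, 0]
```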
The temporal filter 27 and the spatial filter 29 have a tendency to spread the motion signal outward in a spatial manner. Therefore, any noise or errors in that signal tend to propagate. The heretofore unknown advantage of using the median filter before these filters is that the median filter eliminates the noise and prevents its propagation to the neighboring pixels, resulting in a much clearer picture.
Another problem that was previously mentioned is the detection of edges in conjunction with motion. While edge detection and motion signal processing are two separate topics, and can be implemented separately, edge information really only takes effect in the presence of motion. Therefore, the motion signal processing can influence the amount of edge information used for interpolation of the missing lines.
Edge Detection Referring back now to Figures 1 and 2, the discussion moves to the edge information output on line 30 in Figure 1. A more detailed diagram of the process is shown in Figure 2. The edge detector 43 uses the inputs of line 10, the original luminance signal, and the luminance signal that has been delayed one horizontal line, along path 40. This process is shown graphically in Figure 5.
Similar to Figure 3b, the pixel X has neighbors A-F. The direction of the edge could be determined to be any combination of the above neighbors and the below neighbors, not including redundant edge directions. For example, AD, BE, and CF are all vertical edges and do not require more than one designation of direction. Therefore the possible edge directions are AE, AF, BE, CD and CE.
Note that AE and BF are the same edge, as are CE and BD. These designations of AF, etc., designate the absolute value of the difference between the two variables.
AF is the absolute value of A - F. If the maximum of the 5 values AE, AF, BE, CD, and CE minus the minimum of these values is greater than a predetermined threshold value, then the edge direction is selected to be the minimum of the values. Otherwise, the edge is determined to be BE, or vertical. One way to implement this is to assign each possible edge direction a value that is passed to the second SVP, thereby informing SVP #2 which interpolation to use.
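The direction-selection rule above can be sketched as follows. The candidate directions and the max-minus-min test come from the text; the default threshold of 16 and the use of string labels in place of the per-direction values passed to SVP #2 are assumptions for illustration.

```python
def edge_direction(a, b, c, d, e, f, threshold=16):
    """Pick the interpolation direction for pixel X from its neighbors
    A, B, C (line above) and D, E, F (line below)."""
    diffs = {
        'AE': abs(a - e), 'AF': abs(a - f), 'BE': abs(b - e),
        'CD': abs(c - d), 'CE': abs(c - e),
    }
    if max(diffs.values()) - min(diffs.values()) > threshold:
        return min(diffs, key=diffs.get)  # direction of least difference
    return 'BE'                           # default: vertical edge

# A strong diagonal along A-E is detected; a flat patch defaults to BE:
print(edge_direction(100, 0, 200, 0, 100, 0))   # -> 'AE'
print(edge_direction(50, 50, 50, 50, 50, 50))   # -> 'BE'
```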
Returning to Figure 2, this edge information is transmitted on path 30 to SVP #2. There the edge information on path 30 is combined with the original luminance signal and a horizontally line-delayed luminance signal at the 'PIXEL SELECT' step 70. Here the SVP performs a process where it selects one pixel from A, B, or C (see Figure 3b), and one from D, E, or F. These two pixels will then be used to compute the line average component of the weighted-mean interpolation. The two signals are weighted equally at ½ and combined. This resulting signal is input to the 'MIX' step on line 76, where it is processed with a field-delayed luminance signal from line 18, and the motion signal, k, on line 28.
The resulting output on line 36 is the interpolated proscan output signal, Yp.
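The pixel-select and mix steps can be sketched together as below. The equal ½ weighting of the edge pair and the blend by k follow the text; the function name and the normalization of a 4-bit k to [0, 1] are illustrative assumptions.

```python
def mix_pixel(up: float, down: float, z: float, k: int) -> float:
    """'MIX' step: weighted mean of the equally weighted edge pair
    (up, down) from the current field and pixel Z from the previous
    field, controlled by a 4-bit motion signal k (0..15)."""
    line_avg = (up + down) / 2.0   # edge pixels weighted 1/2 each
    w = k / 15.0                   # normalize k; scaling is an assumption
    return w * line_avg + (1.0 - w) * z

# No motion: the previous-field pixel Z passes through unchanged.
print(mix_pixel(100, 50, 200, 0))   # -> 200.0
# Full motion: the directional line average is used instead.
print(mix_pixel(100, 50, 200, 15))  # -> 75.0
```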
Additionally, the color difference signals R-Y and B-Y are output from SVP #2 on lines 38 and 39, which are calculated using line average. As was previously mentioned, all of the above could possibly be performed in the same SVP, such as SVP #1. In that case, the edge information and motion signal would be considered to be made available for further processing instead of transmitted to SVP #2.
The above process allows an interlaced signal to be converted to progressive scan, or proscan, with sharper edges and motion. The objects in motion in the scene have cleaner edges, giving a better picture to the viewer. Additionally, these processes can be installed in the scan-line video processors already necessary to the IPC process, since the overall process uses a minimum number of instructions that can be implemented in the SVPs' unused portions. It is possible that all of the above steps could be performed in different active areas of one processor.
Thus, although there has been described to this point a particular embodiment for an interlace-to-progressive scan process, it is not intended that such specific references be considered as limitations upon the scope of this invention except in-so-far as set forth in the following claims.
Conversely, the complexity of the present fast median filter increases in a linear manner as the length of the L-element sequence increases. Therefore, higher y complexity median filter implem.ont~tions could be achieved using the instant median filter process, while keeping the execution time and instruction space at a minim~1m Variables V0, Vl, V2, V3, and V4 represent the data points surrounding and including the point being interpolated, as shown in Figure 4b. Two of the given values arecompared and the extreme of these is removed in step 46. This filtering process can process either the m~imllm values and use the lowest value for median determination, or it can process the minimllm values and use the highest value for median determination.
The sought after result is to find the middle value of the five inputs. Rather than restrict the ~iccllccion~ these m~rimllm or minimllm values will be referred to as 'extreme' values.
If this were the implement~tion where the m~imllm values are processed, step 46 functions as shown in Figure 4c. If Vl is greater than or equal to V0, then S0 equals 0, otherwise, S0 equals 1. The output of step 50, on line 52, is V0 if S0 is 0, as shown in Figure 4d. This means that V0 is smaller than Vl.
This process continues until at step 54, the set of extremes Do through D3 represent the four highest or lowest values. This set is further reduced to a set of the three highest or lowest values, C0 through C2, by step 56. In steps 58 and 60 the three highest or lowest values are sorted to find the opposite extreme of those three variables. For example if the values C0 through C2 are the largest three values, steps 58 and 60 determine which is the minimllm of the three. This then becomes the median value.
8 ~ S~
The above example is for a ~-tap median filter. More or less taps can be used. At some point, the number of ~VP instructions will rise to a point that the extra precision advantage is no longer greater than the number of instr~ctions required. That point must be determined by each ~3esigner. However, at this point a 5-tap filter has been determined to be the best compromise between instruction count and precision obtained.
Returning now to Figures 1 and 2, the output of the median filter on line 26 is then sent in a feedback loop as signal k', as shown on the same line in Figure 1.
Signal k' is the motion signal used as an input to the temporal filter. The remaining processing done on motion signal k' is shown in more detail in Figure 2.
Dashed line 27 represents the processes done on motion signal k' from line 26.
This step temporally filters the signal by using a series of field delay lines 'FIELD DL' and a horizontal delay '1H' in conjunction with the values determined from the median filtering process.
Dashed line 29 encompasses the spatial filtering performed after the temporal filtering. The spatial filtering step comprises a vertical low-pass filter 'VLPF' and a horizontal low-pass filter 'HLPF', both of which serve to spread out the motion in the final motion signal, k, which is output from SVP #1 on line 28.
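The spreading performed by these low-pass stages can be pictured with a one-dimensional sketch. The 1-2-1 kernel and the edge replication are illustrative assumptions, not taken from the source; the VLPF applies the same idea vertically across lines:

```python
def spread_motion(k_row, taps=(1, 2, 1)):
    # Illustrative horizontal low-pass stage: a small symmetric FIR kernel
    # smears each motion value onto its neighbours, so detected motion also
    # covers pixels adjacent to where it was measured.
    norm = sum(taps)
    r = len(taps) // 2
    padded = [k_row[0]] * r + list(k_row) + [k_row[-1]] * r
    return [sum(t * padded[i + j] for j, t in enumerate(taps)) / norm
            for i in range(len(k_row))]
```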
The temporal filter 27 and the spatial filter 29 have a tendency to spread the motion signal outward in a spatial manner. Therefore, any noise or errors in that signal tend to propagate. The heretofore unknown advantage of using the median filter before these filters is that the median filter eliminates the noise and prevents its propagation to the neighboring pixels, resulting in a much clearer picture.
Another problem that was previously mentioned is the detection of edges in conjunction with motion. While edge detection and motion signal processing are two separate topics, and can be implemented separately, edge information really only takes effect in the presence of motion. Therefore, the motion signal processing can influence the amount of edge information used for interpolation of the missing lines.
Edge Detection

Referring back now to Figures 1 and 2, the discussion moves to the edge information output on line 30 in Figure 1. A more detailed diagram of the process is shown in Figure 2. The edge detector 43 uses as inputs line 10, the original luminance signal, and the luminance signal that has been delayed one horizontal line, along path 40. This process is shown graphically in Figure 6.
Similar to Figure 3b, the pixel X has neighbors A-F. The direction of the edge could be determined to be any combination of the above neighbors and the below neighbors, not including redundant edge directions. For example, AD, BE, and CF are all vertical edges and do not require more than one designation of direction. Therefore the possible edge directions are AE, AF, BE, CD and CE.
Note that AE and BF are the same edge, as are CE and BD. These designations of AF, etc., designate the absolute value of the difference between the two variables.
AF is the absolute value of A - F. If the maximum of the 5 values AE, AF, BE, CD, and CE minus the minimum of these values is greater than a predetermined threshold value, then the edge direction is selected to be that of the minimum of the values. Otherwise, the edge is determined to be BE, or vertical. One way to implement this is to assign each possible edge direction a value that is passed to the second SVP, thereby informing SVP #2 which interpolation to use.
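The selection rule just described can be sketched as follows. Variable names and the string encoding of directions are illustrative; the source passes a numeric designation to SVP #2:

```python
def edge_direction(a, b, c, d, e, f, threshold):
    # Compute the five line-difference magnitudes, and commit to the
    # direction of the smallest one only when the spread (max - min)
    # exceeds the threshold; otherwise fall back to the vertical
    # direction BE.
    diffs = {
        'AE': abs(a - e), 'AF': abs(a - f), 'BE': abs(b - e),
        'CD': abs(c - d), 'CE': abs(c - e),
    }
    if max(diffs.values()) - min(diffs.values()) > threshold:
        return min(diffs, key=diffs.get)
    return 'BE'  # no dominant direction: treat the edge as vertical
```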
Returning to Figure 2, this edge information is transmitted on path 30 to SVP #2. There the edge information on path 30 is combined with the original luminance signal and a horizontally line-delayed luminance signal at the 'PIXEL SELECT' step 70. Here the SVP performs a process where it selects one pixel from A, B, or C (see Figure 3b), and one from D, E, or F. These two pixels will then be used to compute the line-average component of the weighted-mean interpolation. The two signals are weighted equally at 1/2 and combined. This resulting signal is input to the 'MIX' step on line 76, where it is processed with a field-delayed luminance signal from line 18, and the motion signal, k, on line 28.
The resulting output on line 36 is the interpolated proscan output signal, Yp.
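The 'MIX' step just described is a motion-adaptive blend; a minimal sketch follows, assuming k is normalized to [0, 1], which is a convention adopted here rather than stated in the source:

```python
def mix(upper, lower, field_delayed, k):
    # Sketch of the 'MIX' step: blend the intra-field line average (used
    # where motion is present) with the field-delayed sample (used where
    # the scene is still), under control of the motion signal k.
    # Assumed convention: k = 0 means no motion, k = 1 means full motion.
    line_avg = 0.5 * (upper + lower)   # equal 1/2 weights, per the text
    return k * line_avg + (1.0 - k) * field_delayed
```

With no motion the interpolated pixel is simply the field-delayed sample; with full motion it is the edge-directed line average.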
Additionally, the color difference signals R-Y and B-Y are output from SVP #2 on lines 38 and 39; these are calculated using line average. As was previously mentioned, all of the above could possibly be performed in the same SVP, such as SVP #1. In that case, the edge information and motion signal would simply be made available for further processing instead of being transmitted to SVP #2.
The above process allows an interlaced signal to be converted to progressive scan, or proscan, with sharper edges and motion. The objects in motion in the scene have cleaner edges, giving a better picture to the viewer. Additionally, these processes can be installed in the scan-line video processors already necessary to the IPC process, since the overall process uses a minimum number of instructions and can be implemented in the SVPs' unused portions. It is possible that all of the above steps could be performed in different active areas of one processor.
Thus, although there has been described to this point a particular embodiment for an interlace-to-progressive scan process, it is not intended that such specific references be considered as limitations upon the scope of this invention except insofar as set forth in the following claims.
Claims (5)
1. A system for improved processing of video data, including:
a. a circuit for receiving said data;
b. a processor within said circuit operable to:
i. detect motion and edge signals in said data;
ii. filter said motion signal to reduce noise, wherein said filtering is performed by a fast median filter;
iii. spread said data through the use of temporal and spatial filters; and iv. reduce serrations along moving edges through the use of an edge correlator for directional interpolation; and c. a display device for receiving and displaying said filtered data.
2. A method of processing video data, comprising:
a. receiving said data into a processor;
b. determining a motion signal from said data;
c. finding edge information in said data;
d. filtering said motion signal;
e. passing said edge information and said filtered motion signal to a video processor; and f. using said filtered motion signal and said edge information to produce a progressive-scan signal for use on a video display device.
3. A method producing improved motion signals, comprising:
a. determining a motion signal from the magnitude of motion between fields of data in a video signal at a processor;
b. using a fast median filter to eliminate errors from said motion signal, producing a filtered motion signal;
c. temporally and spatially filtering said filtered motion signal, producing a thrice-filtered motion signal; and d. making available said thrice-filtered motion signal for further processing.
4. A method for improved edge detection comprising:
a. finding the minimum difference value between a set of first discrete edge directions at a processor;
b. determining if the said difference value is greater than a predetermined threshold value;
c. selecting a final edge direction based upon said determination, wherein said final edge direction is selected to be said difference value if said difference value is greater than said threshold value, otherwise, wherein said final edge direction is set to be vertical; and d. producing a signal from said processor that transmits said final edge direction.
5. A method for fast median filtering comprising:
a. performing a series of extreme comparisons between pairs of data values, for a set of data samples;
b. updating a current most extreme value of said set of data as each one of said series of comparisons is performed;
c. comparing each data value of said set with the current extreme value, then storing the one closest to the opposite extreme from said current extreme value;
d. repeating said updating and said comparing steps until three of said extreme values remain; and e. finding a final most extreme value of said remaining values, wherein said final most extreme value is a median value.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US227,816 | 1994-04-14 | ||
US08/227,816 US5519451A (en) | 1994-04-14 | 1994-04-14 | Motion adaptive scan-rate conversion using directional edge interpolation |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2146884A1 true CA2146884A1 (en) | 1995-10-15 |
Family
ID=22854587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002146884A Abandoned CA2146884A1 (en) | 1994-04-14 | 1995-04-12 | Motion adaptive scan-rate conversion using directional edge interpolation |
Country Status (8)
Country | Link |
---|---|
US (3) | US5519451A (en) |
EP (1) | EP0677958B1 (en) |
JP (2) | JPH0884321A (en) |
KR (1) | KR100424600B1 (en) |
CN (1) | CN1087892C (en) |
CA (1) | CA2146884A1 (en) |
DE (1) | DE69519398T2 (en) |
TW (4) | TW348357B (en) |
Also Published As
Publication number | Publication date |
---|---|
DE69519398D1 (en) | 2000-12-21 |
US5592231A (en) | 1997-01-07 |
TW348358B (en) | 1998-12-21 |
US5638139A (en) | 1997-06-10 |
EP0677958A2 (en) | 1995-10-18 |
CN1114813A (en) | 1996-01-10 |
KR950035448A (en) | 1995-12-30 |
TW348356B (en) | 1998-12-21 |
EP0677958B1 (en) | 2000-11-15 |
JP2008136227A (en) | 2008-06-12 |
KR100424600B1 (en) | 2004-06-23 |
TW348357B (en) | 1998-12-21 |
US5519451A (en) | 1996-05-21 |
DE69519398T2 (en) | 2001-05-17 |
JPH0884321A (en) | 1996-03-26 |
EP0677958A3 (en) | 1996-01-17 |
CN1087892C (en) | 2002-07-17 |
TW326612B (en) | 1998-02-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued | | Effective date: 20030414 |