CA2407143C - Texture replacement in video sequences and images


Info

Publication number: CA2407143C
Application number: CA002407143A
Authority: CA (Canada)
Prior art keywords: texture, original, frames, synthesized, color
Legal status: Expired - Fee Related
Other languages: French (fr)
Other versions: CA2407143A1 (en)
Inventors: Adriana Dumitras, Barin Geoffry Haskell
Current Assignee: AT&T Corp
Original Assignee: AT&T Corp
Application filed by AT&T Corp


Classifications

    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T9/00 Image coding
    • G06T9/001 Model-based coding, e.g. wire frame
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/167 Position within a video image, e.g. region of interest [ROI]
    • H04N19/17 Adaptive coding where the coding unit is an image region, e.g. an object
    • H04N19/186 Adaptive coding where the coding unit is a colour or a chrominance component
    • H04N19/23 Video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
    • H04N19/27 Video object coding involving both synthetic and natural picture components, e.g. synthetic natural hybrid coding [SNHC]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

Systems and methods for reducing bit rates by replacing original texture in a video sequence with synthesized texture. Reducing the bit rate of the video sequence begins by identifying and removing selected texture from frames in a video sequence. The removed texture is analyzed to generate texture parameters. New texture is synthesized using the texture parameters in combination with a set of constraints. Then, the newly synthesized texture is mapped back into the frames of the video sequence from which the original texture was removed. The resulting frames are then encoded. The bit rate of the video sequence with the synthesized texture is less than the bit rate of the video sequence with the original texture. Also, the ability of a decoder to decode the new video sequence is not compromised because no assumptions are made about the texture synthesis capabilities of the decoder.

Description

TEXTURE REPLACEMENT IN VIDEO SEQUENCES AND IMAGES
BACKGROUND OF THE INVENTION
The Field of the Invention

The present invention relates to systems and methods for reducing a bit rate of a video sequence. More particularly, the present invention relates to systems and methods for reducing a bit rate of a video sequence by replacing original texture of the video sequence with synthesized texture at the encoder.
Background and Relevant Art

One of the goals of transmitting video sequences over computer networks is to have a relatively low bit rate while still maintaining a high quality video at the decoder.
As technology improves and becomes more accessible, more users are leaving the realm of 56K modems and moving to Digital Subscriber Lines (DSL), including VDSL and ADSL, which support a higher bit rate than 56K modems. VDSL, for example, supports bit rates up to 28 Mbits/second, but the transmission distance is limited. The maximum transmission distance for a 13 Mbits/second bit rate is 1.5 km using VDSL.
ADSL, on the other hand, can support longer distances using existing loops while providing a bit rate of approximately 500 kbits/second.
Video standards, such as MPEG-2, MPEG-4, and ITU H.263, can achieve bit rates of 3 to 9 Mbits/second, 64 kbits to 38.4 Mbits/second, and 8 kbits to 1.5 Mbits/second, respectively. Even though video sequences with bit rates of hundreds of kbits/second can be achieved using these standards, the visual quality of these video sequences is unacceptably low, especially when the content of the video sequences is complex.
Solutions to this problem use model-based analysis-synthesis compression methods.
Model-based analysis-synthesis compression methods perform both analysis and synthesis at the encoder to modify parameters in order to minimize the error between the synthesized model and the original. The resulting parameters are transmitted to the decoder, which is required to synthesize the model again for the purpose of reconstructing the video sequence.
Much of the work on model-based analysis-synthesis compression methods has focused on modeling human head-and-shoulders objects, while fewer attempts have modeled background objects. Focusing on human head-and-shoulders objects often occurs because in many applications, such as videoconferencing applications, the background is very simple.
However, background modeling may also achieve a significant reduction of the bit rate, as the bit rate of I (intra) frames is often dependent on the texture content of each picture. To a lesser extent, the bit rate of B (bi-directionally predicted) frames and P (predicted) frames is also affected by texture content as moving objects uncover additional background objects.
One proposal for reducing the bit rate is to use sprite methods on the background objects. Sprites are panoramic pictures constructed using all of the background pixels that are visible over a set of video frames. Instead of coding each frame, the sprite is compressed and transmitted. The background image can be reconstructed using the sprite and associated camera motion parameters. Sprite methods require exact object segmentation at the encoder, which is often a difficult task for complex video sequences. In addition, the motion or shape parameters that are transmitted with the sprite consume some of the available bit rate. These limitations may be addressed by filtering the textured areas.
Unfortunately, different filters must be designed for various textures.
Texture replacement has also been proposed as a method of background modeling. In one example, the original texture is replaced with another texture that is selected from a set of textures. However, this requires that the set of replacement textures be stored at the encoder. In another example, the texture of selected regions is replaced at the encoder with pixel values that represent an "illegal" color in the YUV color space. At the decoder, the processed regions are recovered using chroma keying.
There is an explicit assumption that texture synthesis, using texture parameters sent from the encoder, followed by mapping of the synthesized texture onto the decoded video sequences, is performed at the decoder. This method therefore assumes that the reconstruction is performed at the decoder using a method that is dependent on the decoder's processing capabilities. The drawbacks of these approaches are that the processing capabilities of the decoder are assumed and that the computational costs of the decoding stage are increased.
BRIEF SUMMARY OF THE INVENTION
Certain exemplary embodiments can provide a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising: selecting original texture from an initial frame within the set of frames; removing the selected original texture from the set of frames in the video sequence; determining texture parameters from an analysis of the removed texture; and inserting synthesized texture into the set of frames, wherein the synthesized texture is based on the texture parameters.
Certain exemplary embodiments can provide a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising: selecting original texture from an initial frame within the set of frames; removing the selected original texture from the set of frames in the video sequence; analyzing the removed texture to compute texture parameters; synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames.
Certain exemplary embodiments can provide in a system that distributes a video sequence over a network, a method for replacing original texture of a video sequence with synthesized texture in order to reduce a bit rate of the video sequence, the method comprising: removing original texture from a set of frames included in the video sequence, wherein the original texture is identified from a single frame included in the video sequence; analyzing the removed texture by computing texture parameters that include statistical texture descriptors; generating synthesized texture using the texture parameters and qualitative constraints; and replacing the original texture in the set of frames with the synthesized texture.
Certain exemplary embodiments can provide a method for replacing original texture in a plurality of frames that are included in a video sequence with synthesized texture, the method comprising: selecting a region of interest in an initial frame, wherein the region of interest has color characteristics; identifying an original texture in the initial frame using the color characteristics of the region of interest, wherein some of the pixels in the frame that have color characteristics that are similar to the color characteristics of the region of interest are included in the original texture; removing the original texture from the frame; computing texture parameters from the original texture; creating synthesized texture using the texture parameters, wherein the synthesized texture is distinguishable from the original texture; and inserting the synthesized texture into a set of frames that includes the initial frame.
Certain exemplary embodiments can provide a computer program product for implementing a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the computer program product comprising: a computer-readable medium having computer-executable instructions for performing the method, the method comprising:
selecting original texture from an initial frame included in the video sequence;
removing the selected original texture from a set of frames in the video sequence, wherein the set of frames includes the initial frame; analyzing the removed texture to compute texture parameters; synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames.
Certain exemplary embodiments can provide a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising: selecting original texture from an initial frame within the set of frames; removing the selected original texture from the set of frames in the video sequence; determining texture parameters from an analysis of the removed texture; and inserting synthesized texture, based on the texture parameters, into the set of frames, the inserting synthesized texture into the set of frames further comprising: synthesizing the removed texture by: applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
Certain exemplary embodiments can provide a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising: selecting original texture from an initial frame within the set of frames; removing the selected original texture from the set of frames in the video sequence; analyzing the removed texture to compute texture parameters; synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames, wherein synthesizing new texture using the texture parameters further comprises: applying qualitative constraints to the new texture, wherein the qualitative constraints include at least one of marginal statistics, coefficient correlations, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation; and synthesizing new texture using the texture parameters further comprises one of applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
Certain exemplary embodiments can provide in a system that distributes a video sequence over a network, a method for replacing original texture of a video sequence with synthesized texture in order to reduce a bit rate of the video sequence, the method comprising: removing original texture from a set of frames included in the video sequence, wherein the original texture is identified from a single frame included in the video sequence; analyzing the removed texture by computing texture parameters that include statistical texture descriptors; generating synthesized texture using the texture parameters and qualitative constraints; and replacing the original texture in the set of frames with the synthesized texture, wherein: the qualitative constraints include one or more of marginal statistics, coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color and color saturation, and wherein:
generating synthesized texture using the texture parameters and qualitative constraints further comprises: applying, in a prescribed order, coefficient magnitude correlations, cross-scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross-scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
Certain exemplary embodiments can provide a method for replacing original texture in a plurality of frames that are included in a video sequence with synthesized texture, the method comprising: selecting a region of interest in an initial frame, wherein the region of interest has color characteristics; identifying an original texture in the initial frame using the color characteristics of the region of interest, wherein some of the pixels in the frame that have color characteristics that are similar to the color characteristics of the region of interest are included in the original texture; removing the original texture from the frame; computing texture parameters from the original texture; creating synthesized texture using the texture parameters, wherein the synthesized texture is distinguishable from the original texture; and inserting the synthesized texture into a set of frames that includes the initial frame, wherein:

creating synthesized texture using the texture parameters further comprises:
applying qualitative constraints to the synthesized texture, which further comprises:
applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
Certain exemplary embodiments can provide a computer program product for implementing a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the computer program product comprising: a computer-readable medium having computer-executable instructions for performing the method, the method comprising:
selecting original texture from an initial frame included in the video sequence;
removing the selected original texture from a set of frames in the video sequence, wherein the set of frames includes the initial frame; analyzing the removed texture to compute texture parameters; synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames, wherein: synthesizing new texture using the texture parameters further comprises: applying qualitative constraints to the new texture, wherein the qualitative constraints include at least one of marginal statistics, coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color, and color saturation; and synthesizing new texture using the texture parameters further comprises one of applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
Certain exemplary embodiments can provide a method for replacing original texture from video with synthesized texture, the method comprising:
synthesizing texture associated with a set of frames of video data by applying, in a prescribed order, at least one of coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and inserting the synthesized texture into the set of frames.
Certain exemplary embodiments can provide a system that replaces original texture from video with synthesized texture, the system comprising: a module configured to synthesize texture associated with a set of frames of video data by applying, in a prescribed order, at least one of coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and a module configured to insert the synthesized texture into the set of frames.
Certain exemplary embodiments can provide a computer-readable medium storing instructions for controlling a computing device to replace original texture from video with synthesized texture, the instructions comprising: synthesizing texture associated with a set of frames of video data by applying, in a prescribed order, at least one of: coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and inserting the synthesized texture into the set of frames.
Certain exemplary embodiments can provide a method for decoding received data having synthesized texture, the method comprising: receiving data having synthesized texture associated with a set of frames of video data, wherein the texture was synthesized by applying, in a prescribed order, at least one of coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and decoding the set of frames having inserted synthesized texture.
Certain exemplary embodiments can provide a method for processing video data, the method comprising: removing texture from at least some frames of a set of frames in a video sequence; analyzing the removed texture to obtain texture parameters; synthesizing new texture based on the obtained texture parameters and at least one of the following qualitative constraints: dominant texture orientation, overall color and color saturation; and inserting the synthesized new texture into the set of frames.
Certain exemplary embodiments can provide a system for processing video data, the system comprising: a module configured to remove texture from at least some frames of a set of frames in a video sequence; a module configured to analyze the removed texture to obtain texture parameters; a module configured to synthesize new texture based on the obtained texture parameters and at least one of the following qualitative constraints: dominant texture orientation, overall color and color saturation;
and a module configured to insert the synthesized new texture into the set of frames.
Certain exemplary embodiments can provide a computer-readable medium storing instructions for controlling a computing device to process video data, the instructions comprising: removing texture from at least some frames of a set of frames in a video sequence; analyzing the removed texture to obtain texture parameters;
synthesizing new texture based on the obtained texture parameters and at least one of the following qualitative constraints: dominant texture orientation, overall color and color saturation; and inserting the synthesized new texture into the set of frames.
These and other limitations of the prior art are overcome by various embodiments which relate to systems and methods for reducing the bit rate of a video sequence through texture replacement at the encoder. The capabilities of the decoder are not assumed and the decoder is not required to perform texture analysis and synthesis. The synthesized texture that replaces the original texture has similar perceptual characteristics to the original texture. Thus, the video sequence with the synthesized texture is visually similar to the original video sequence. The bit rate of the video sequence with synthesized textures is reduced because the synthesized textures that have replaced the original textures can be coded more effectively.
Texture replacement, in accordance with various embodiments, occurs at the encoder and is therefore independent of the capabilities of the decoder.
Texture replacement begins by selecting and removing texture from some or all of the original frames in the video sequence. The removed texture is analyzed to obtain texture parameters. Then, new texture is synthesized using the texture parameters in combination with a set of qualitative constraints. The synthesized texture can be compressed more effectively than the original texture and is also similar to, yet distinguishable from, the original texture. After the new texture is synthesized, the synthesized texture is inserted back into the original frames and the video sequence that includes the synthesized texture is encoded. Advantageously, the bit rate of the compressed video sequence with the synthesized texture is lower than the bit rate of the compressed video sequence with the original texture.
Additional features and advantages of various embodiments will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of various embodiments may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of various embodiments will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Figure 1 is a flow diagram illustrating an exemplary method for reducing a bit rate of a video sequence by replacing original texture with synthesized texture;
Figure 2 illustrates a frame that includes various candidate textures which can be replaced with synthesized textures;
Figure 3 is a flow diagram of a method for removing original texture from various frames of a video sequence; and
Figure 4 illustrates a recursive transform used in image decomposition to obtain texture parameters.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention relates to systems and methods for texture replacement in video sequences and to reducing bit rates of video sequences through texture replacement.
The bit rate of a compressed video sequence is reduced because the synthesized texture can be more effectively compressed than the original texture. Because the texture replacement occurs as part of the encoding process, the computational costs of the decoder are not increased and the processing/synthesizing capabilities of the decoder are not assumed.
Figure 1 is a flow diagram that illustrates one embodiment of a method for replacing texture in a video sequence and for reducing the bit rate of the encoded video sequence.
Reducing the bit rate of a video sequence begins by removing selected texture from a frame or from a series of frames included in the video sequence (102). Typically, texture from a single frame is identified, while the identified texture from that particular frame is removed from a set of frames. This eliminates the need to identify texture in each frame, which would increase the computational costs of texture replacement. After the original texture has been removed from the selected frame, the removed texture is analyzed (104) to obtain texture parameters. The texture parameters, in combination with a set of constraints, are used to generate synthesized texture (106). The synthesized texture is then inserted into the original frames (108) from which the original texture was removed.
One advantage of texture replacement is that the synthesized texture can be compressed more effectively than the original texture. The ability to more effectively compress the synthesized texture results in a reduced bit rate for the encoded video sequence. The constraints are applied during the synthesis of the new texture to ensure that the synthesized texture is similar to the original texture. This is useful, for example, in order to retain the artistic representation of the original frames. However, the present invention does not require that the synthesized texture resemble the original texture.
Figures 2 and 3 more fully illustrate how texture is selected and removed from the original frames of a video sequence. Figure 2 illustrates an exemplary frame 200. The frame 200 is a picture that, in this example, includes a person 202, a person 203 and a background. The frame 200 includes several textures that can be selected for removal.
Texture refers to those portions or regions of a frame that have the same or similar characteristics and/or appearance. Usually, the background is the primary source of texture, although the present invention is not limited to background textures. One reason for selecting background textures is that these same textures or regions are present in multiple frames and in substantially the same locations. For example, if no movement is present in a particular set of frames, the background will remain constant across the set of frames.
However, the systems and methods described herein can be applied even when a set of frames includes moving objects.
In this example, the background includes several candidate textures that can be selected for removal. The background of the frame 200 includes the ground texture 204, the sky texture 206 and the cloud texture 208. Each of these objects in the background is an example of a texture. It is likely that the regions of the frame 200 covered by the ground texture 204, the sky texture 206, and the cloud texture 208, respectively, have similar appearances or characteristics.

When a region of texture is selected for removal, it is useful to select a texture that has a spatial region of support that covers a reasonable part of the frame area over a sufficiently large number of frames. In one embodiment, a texture that covers an area that is larger than 30% of the frame area is typically selected, although the present invention can apply to any texture, regardless of the frame area occupied by the selected texture. In addition, selecting a texture that can be replaced in a large number of frames is advantageous because it has an impact on the bit rate of the encoded video sequence.
It is also useful to select a texture that belongs to a class of textures that are amenable to replacement. Generally, candidate textures include regions of a frame that have similar characteristics and/or appearance. Exemplary textures include natural textures (foliage, grass, ground, water, sky, building facades, etc.) and the like that can be identified in video sequences or test sequences such as movie sequences. Frames with replaceable texture typically have absent or slow global motion and/or only a few moving objects.
Thus, the present invention can be applied to particular frame sequences within a video sequence and does not have to be applied to the entire video sequence.
Texture Removal and Region Segmentation

Figure 3 is a flow diagram illustrating how a texture is selected and removed from the frame illustrated in Figure 2. First, a region of interest (ROI) 210 is selected within the texture to be replaced (302). In this example, the selected ROI 210 is within the ground texture 204. The ROI 210 typically includes more than one pixel. The ROI 210, for example, may include a 7x7 array of pixels. After the ROI 210 is selected, the color characteristics of the ROI 210 are compared to the color characteristics (304) of all pixels within the frame 200. The pixels that have similar characteristics are classified or identified as belonging to the same region (306) as the ROI 210. The pixels identified in this manner are thus included in the texture that will be removed from the frame(s). The pixels that are in the identified region are removed and are temporarily replaced with pixels that have an arbitrary value. In one embodiment, the replacement pixels have a constant value equal to the mean color of the selected and identified region.
More particularly, in one embodiment, region segmentation and texture removal occur in stages. For example, let a color frame be represented by a set of two-dimensional planes in the YUV color space. Each of these image planes is represented as a matrix, and each matrix element includes a pixel value in row i and column j. More specifically, the Y frames consist of all of the pixels \{(i, j), \text{ with } 1 \le i \le M, 1 \le j \le N\}.
First, original frames from the YUV color space are converted to the RGB color space. Second, the location (i_r, j_r) is selected from a region of interest, such as the ground texture 204 shown in Figure 2, and the ROI is constructed as

ROI_r = \{ (i, j) \mid i = i_r + k_i,\; j = j_r + k_j,\; -[w_r/2] \le k_i, k_j \le [w_r/2] \}, \qquad (1)

where the operator [\,\cdot\,] denotes "the integer part of" and w_r is odd.
Alternatively, a region of interest of a size equal to w_r x w_r pixels can be selected manually. The pixel values are smoothed by applying an averaging filter to the ROI 210 in each of the R, G and B color planes, and the mean vector [\mu_R\; \mu_G\; \mu_B] is computed, where \mu_R, \mu_G, and \mu_B stand for the mean values within the ROI in the R, G, and B color planes, respectively.
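For illustration, the ROI construction of equation (1) and the mean color computation can be sketched in a few lines of numpy. This is a minimal sketch, not the patent's implementation; the function name roi_mean_color is hypothetical, and the box-filter smoothing step is folded into the mean, since averaging a patch does not change the patch mean.

```python
import numpy as np

def roi_mean_color(frame_rgb, i_r, j_r, w_r=7):
    """Build the w_r x w_r region of interest centred at (i_r, j_r), per
    equation (1), and return the mean colour vector [mu_R, mu_G, mu_B]."""
    assert w_r % 2 == 1, "w_r is odd, as required by equation (1)"
    h = w_r // 2  # [w_r / 2], the integer part
    roi = frame_rgb[i_r - h:i_r + h + 1, j_r - h:j_r + h + 1, :].astype(np.float64)
    # The averaging (smoothing) filter described in the text leaves the patch
    # mean essentially unchanged, so the mean vector is computed directly.
    return roi.reshape(-1, 3).mean(axis=0)
```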

Next, the angular map and the modulus map of the frame are computed, with respect to a reference color vector, as

\theta(i, j) = 1 - \frac{2}{\pi} \arccos\!\left( \frac{v_{(i,j)} \cdot v_{ref}}{\lVert v_{(i,j)} \rVert\, \lVert v_{ref} \rVert} \right), \qquad (2)

\eta(i, j) = 1 - \frac{\lVert v_{(i,j)} - v_{ref} \rVert}{\sqrt{3} \cdot 255}, \qquad (3)

respectively. The following works more fully describe angular and modulus maps and are hereby incorporated by reference:
(1) Dimitris Androutsos, Kostas Plataniotis, and Anastasios N. Venetsanopoulos, "A novel vector-based approach to color image retrieval using a vector angular-based distance measure," Computer Vision and Image Understanding, vol. 75, no. 1/2, pp. 46-58, July/August 1999;
(2) Adriana Dumitras and Anastasios N. Venetsanopoulos, "Angular map-driven snakes with application to object shape description in color images," accepted for publication in IEEE Transactions on Image Processing, 2001; and (3) Adriana Dumitras and Anastasios N. Venetsanopoulos, "Color image-based angular map-driven snakes," in Proceedings of IEEE International Conference on Image Processing, Thessaloniki, Greece, October 2001. The notations v_{(i,j)} and v_{ref} stand for a color vector [R(i,j)\; G(i,j)\; B(i,j)] in the RGB color space, and for the reference vector, which is selected to be equal to [\mu_R\; \mu_G\; \mu_B], respectively. The notation \theta stands for the value of the angle given by (2) between the vector v_{(i,j)} and the reference vector v_{ref}. The notation \eta stands for the value of the modulus difference given by (3) between the vector v_{(i,j)} and the reference vector v_{ref}.
In order to identify the pixels that have similar color characteristics to those of the reference vector, the distance measure is computed by

d_r(i, j) = \exp\{ -\theta(i, j)\, \eta(i, j) \}, \qquad (4)

and the mean distance is computed by

\mu_d = \bar{E}\{ d_r(i, j) \}, \qquad (5)

where the notation \bar{E} stands for a mean operator over ROI_r. All of the pixels within the frame that satisfy the constraint

[\, d_r(i, j) - \mu_d \,]^2 \le \varepsilon_C \qquad (6)

are clustered into regions.
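A compact numpy sketch of the pixel classification of equations (2) through (6), as reconstructed above, might look as follows. The function name and the threshold value eps_c are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def color_cluster_mask(frame_rgb, v_ref, roi_mask, eps_c=0.01):
    """Return a boolean mask of pixels whose colour is close to v_ref,
    following equations (2)-(6)."""
    v = frame_rgb.astype(np.float64)                  # v(i,j) = [R G B]
    norms = np.linalg.norm(v, axis=2) * np.linalg.norm(v_ref) + 1e-12
    cos_a = np.clip((v @ v_ref) / norms, -1.0, 1.0)
    theta = 1.0 - (2.0 / np.pi) * np.arccos(cos_a)    # angular map, eq. (2)
    eta = 1.0 - np.linalg.norm(v - v_ref, axis=2) / (np.sqrt(3.0) * 255.0)  # eq. (3)
    d = np.exp(-theta * eta)                          # distance measure, eq. (4)
    mu_d = d[roi_mask].mean()                         # mean over ROI_r, eq. (5)
    return (d - mu_d) ** 2 <= eps_c                   # clustering constraint, eq. (6)
```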
Next, the segmented regions that satisfy the constraint of (6) are identified and labeled. In one embodiment, all regions with a normalized area A_R (the region area divided by the frame area M x N) smaller than a threshold \varepsilon_A are discarded. The remaining regions are labeled. If a segmentation map consisting of all of the segmented regions is not considered acceptable by the user, another ROI location is selected and processed as described above. The labeled regions are removed from the frame, and texture removal of the current frame is complete. The frames having the texture and color removed using the segmentation map are obtained at the end of the texture removal stage.
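The labeling and small-region rejection step can be sketched with one connected-component pass. scipy.ndimage.label is used here as one possible labeling routine, and the threshold value eps_a is an assumed placeholder; the patent does not specify either.

```python
import numpy as np
from scipy import ndimage

def label_large_regions(mask, eps_a=0.05):
    """Label the clustered pixels into connected regions and keep only the
    regions whose normalized area (region area / (M * N)) reaches eps_a."""
    labels, count = ndimage.label(mask)
    frame_area = mask.shape[0] * mask.shape[1]
    kept = np.zeros_like(mask, dtype=bool)
    for region_id in range(1, count + 1):
        region = labels == region_id
        if region.sum() / frame_area >= eps_a:
            kept |= region
    return kept
```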
To summarize, the region segmentation and texture removal procedures described above identify pixels in the original frame that have similar characteristics in terms of color to those of the pixel (location) or ROI selected by the user. The color characteristics of the identified pixels or segmented regions are evaluated using an angular map and a modulus map of the color vectors in the RGB color space. Angular maps identify significant color changes within a picture or frame, which typically correspond to object boundaries, using snake models. To identify color changes or boundaries, simple gray-level edge detection is applied to the angular maps. Identification of major changes of direction in vector data and color variation computation by differences between inside and outside contour points can also be applied. In the present invention, however, a classification of the pixels is performed in each frame using the angular map. Despite the fact that the performance of the color-based region segmentation stage depends on the selection of the threshold values \varepsilon_C, \varepsilon_A and the size of the ROI, the values of these parameters may be maintained constant for various video sequences.
Texture Analysis

Analyzing the removed texture includes computing a set of texture parameters from the removed texture. In one embodiment, a parametric statistical model is employed. This model, which employs an overcomplete multiscale wavelet representation, makes use of steerable pyramids for image decomposition. Steerable pyramids as known in the art are more fully discussed in the following article, which is hereby incorporated by reference:
Javier Portilla and Eero P. Simoncelli, "A parametric texture model based on joint statistics of complex wavelet coefficients," International Journal of Computer Vision, vol. 40, no. 1, pp. 49-71, 2000. The statistical texture descriptors are based on pairs of wavelet coefficients at adjacent spatial locations, orientations and scales (in particular, the expected product of the raw coefficient pairs and the expected product of their magnitudes), pairs of coefficients at adjacent scales (the expected product of the fine-scale coefficient with the corresponding coarse-scale coefficient), marginal statistics and lowpass coefficients at different scales.

First, a steerable pyramid decomposition is obtained by recursively decomposing the texture image into a set of oriented subbands and a lowpass residual band. The block diagram of the transform is illustrated in Fig. 4, where the area enclosed by the dashed box 400 is inserted recursively at the point 402. Initially, the input image is decomposed into highpass and lowpass bands using the exemplary filters H_0(r, \theta) = H(r/2, \theta) and L_0(r, \theta) = L(r/2, \theta). The lowpass band is then decomposed into a lower frequency band and a set of oriented bands. The filters used in this transformation are polar-separable in the Fourier domain and are given by:

L(r, \theta) = \begin{cases} 2 \cos\!\left( \dfrac{\pi}{2} \log_2 \dfrac{4r}{\pi} \right), & \dfrac{\pi}{4} < r < \dfrac{\pi}{2} \\ 2, & r \le \dfrac{\pi}{4} \\ 0, & r \ge \dfrac{\pi}{2} \end{cases}

B_k(r, \theta) = H(r)\, G_k(\theta), \qquad k \in \{0, \ldots, K - 1\},

where

H(r) = \begin{cases} \cos\!\left( \dfrac{\pi}{2} \log_2 \dfrac{2r}{\pi} \right), & \dfrac{\pi}{4} < r < \dfrac{\pi}{2} \\ 1, & r \ge \dfrac{\pi}{2} \\ 0, & r \le \dfrac{\pi}{4} \end{cases}

G_k(\theta) = \begin{cases} \alpha_K \left[ \cos\!\left( \theta - \dfrac{\pi k}{K} \right) \right]^{K-1}, & \left| \theta - \dfrac{\pi k}{K} \right| < \dfrac{\pi}{2} \\ 0, & \text{otherwise} \end{cases} \qquad \text{with } \alpha_K = 2^{K-1} \frac{(K-1)!}{\sqrt{K\,[2(K-1)]!}}.

The notations r and \theta stand for polar coordinates in the frequency domain, and K denotes the total number of orientation bands.
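The radial and angular filters above can be evaluated on a frequency grid as in the following sketch. It assumes numpy arrays of polar coordinates as input and follows the reconstructed formulas directly, rather than any particular steerable pyramid library; the function names are illustrative.

```python
import numpy as np
from math import factorial

def L_filter(r):
    """Radial lowpass L(r): 2 below pi/4, 0 above pi/2, raised-cosine rolloff between."""
    out = np.where(r <= np.pi / 4, 2.0, 0.0)
    band = (r > np.pi / 4) & (r < np.pi / 2)
    out[band] = 2.0 * np.cos((np.pi / 2) * np.log2(4.0 * r[band] / np.pi))
    return out

def H_filter(r):
    """Radial highpass H(r): 0 below pi/4, 1 above pi/2."""
    out = np.where(r >= np.pi / 2, 1.0, 0.0)
    band = (r > np.pi / 4) & (r < np.pi / 2)
    out[band] = np.cos((np.pi / 2) * np.log2(2.0 * r[band] / np.pi))
    return out

def G_filter(theta, k, K):
    """Angular filter G_k(theta) for orientation band k out of K."""
    alpha = (2.0 ** (K - 1)) * factorial(K - 1) / np.sqrt(K * factorial(2 * (K - 1)))
    # Wrap the angular difference into (-pi, pi] before the support test.
    diff = np.mod(theta - np.pi * k / K + np.pi, 2.0 * np.pi) - np.pi
    return np.where(np.abs(diff) < np.pi / 2, alpha * np.cos(diff) ** (K - 1), 0.0)
```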
Statistical texture descriptors are computed using the image decomposition previously obtained. More specifically, marginal statistics, correlations of the coefficients, correlations of the coefficients' magnitudes and cross-scale phase statistics are computed.
In terms of marginal statistics, (a) the skewness and kurtosis of the partially reconstructed lowpass images at each scale, (b) the variance of the highpass band, (c) the mean, variance, skewness and kurtosis, and (d) the minimum and maximum values of the image pixels (range) are computed at each level of the pyramid. In terms of coefficient correlations, the autocorrelation of the lowpass images computed at each level of the pyramid decomposition is computed. In terms of magnitude correlation, the correlation of the complex magnitude of pairs of coefficients at adjacent positions, orientations and scales is computed. More specifically, (e) the autocorrelation of the magnitude of each subband, (f) the crosscorrelation of each subband's magnitudes with those of other orientations at the same scale, and (g) the crosscorrelation of subband magnitudes with all orientations at a coarser scale are obtained.
Finally, in terms of cross-scale statistics, the complex phase of the coarse-scale coefficients is doubled at all orientations and then the crosscorrelation between these coefficients and the fine-scale coefficients is computed. Doubling the phase of the coarse-scale coefficients is motivated by the fact that the local phase of the responses to local features such as edges or lines changes at a rate that is, for fine-scale coefficients, twice the rate of that of the coefficients at a coarser scale.
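As one concrete illustration, the marginal statistics and the central autocorrelation samples used among these descriptors could be computed as below. This is a sketch under the assumption that each subband is available as a 2-D numpy array; the circular FFT-based autocorrelation is a simplification of what a full implementation would use.

```python
import numpy as np

def marginal_statistics(band):
    """Mean, variance, skewness, kurtosis and pixel range of a (sub)band."""
    x = band.astype(np.float64).ravel()
    mu, var = x.mean(), x.var()
    skew = ((x - mu) ** 3).mean() / var ** 1.5
    kurt = ((x - mu) ** 4).mean() / var ** 2
    return {"mean": mu, "variance": var, "skewness": skew,
            "kurtosis": kurt, "range": (x.min(), x.max())}

def central_autocorrelation(band, half=3):
    """Central (2*half+1) x (2*half+1) samples of the circular autocorrelation
    of a band, computed through the FFT (Wiener-Khinchin theorem)."""
    f = np.fft.fft2(band - band.mean())
    ac = np.fft.fftshift(np.fft.ifft2(np.abs(f) ** 2).real) / band.size
    c0, c1 = ac.shape[0] // 2, ac.shape[1] // 2
    return ac[c0 - half:c0 + half + 1, c1 - half:c1 + half + 1]
```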
Texture Synthesis

During texture synthesis, a synthesized texture is created that can be compressed more effectively than the original texture. In one embodiment, the synthesized texture is similar to, yet distinguishable from, the original texture. Because the synthesized texture can be compressed more effectively, the bit rate of the compressed frames with synthesized texture is lower than the bit rate of the compressed frames with the original texture.
Retaining the visual similarity ensures that the frames with synthesized texture convey the same artistic message as that of the frames with the original texture.
A set of qualitative texture synthesis constraints are used to achieve visual similarity between the synthesized texture and the original texture. The texture parameters selected for texture synthesis are derived using the set of qualitative constraints.
In one embodiment, the synthesis of a new texture that is visually similar to the original texture is subject to constraints in terms of dominant texture orientation, and overall color and color saturation. Exemplary constraints include, but are not limited to:
(C1) Marginal statistics;
(C2) Coefficient correlations;
(C3) Coefficient magnitude correlations;
(C4) Cross-scale statistics;
(C5) Overall color; and
(C6) Color saturation.

If the original texture that has been removed from the original frames is structured and busy, the synthesized texture is subject to the constraints C1, C2, C3, C4, C5 and C6. If the original texture is unstructured and smooth, the synthesized texture is subject to the constraints C3, C4, C5 and C6.
Using these constraints, the new texture is synthesized by first decomposing an image containing Gaussian white noise using a complex steerable pyramid. Next, a recursive coarse-to-fine procedure imposes the statistical constraints on the lowpass and bandpass bands while simultaneously reconstructing a lowpass image. In one embodiment of the present invention, the order in which the constraints are applied is C3, C4, C2, C1, C5, and C6 for structured and busy textures, and C3, C4, C5, and C6 for unstructured and smooth textures.
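The choice of constraint schedule reduces to a simple branch on the texture class. The sketch below only encodes the prescribed orderings stated in the text; the function name and the string representation of the constraints are illustrative assumptions.

```python
# Qualitative constraints C1-C6 as named in the text.
C1, C2, C3, C4, C5, C6 = ("marginal statistics", "coefficient correlations",
                          "coefficient magnitude correlations",
                          "cross-scale statistics", "overall color",
                          "color saturation")

def constraint_schedule(structured_and_busy):
    """Prescribed order in which the qualitative constraints are imposed
    during the recursive coarse-to-fine synthesis."""
    if structured_and_busy:
        return [C3, C4, C2, C1, C5, C6]  # structured and busy textures
    return [C3, C4, C5, C6]              # unstructured and smooth textures
```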
The texture synthesis constraints are derived from the basic categories (vocabulary) and rules (grammar) used by humans when judging the similarity of color patterns. The following work, which more fully describes the vocabulary and grammar of color patterns, is hereby incorporated by reference: Aleksandra Mojsilovic, Jelena Kovacevic, Jianying Hu, Robert J. Safranek, and S. Kicha Ganapathy, "Matching and retrieval based on the vocabulary and grammar of color patterns," IEEE Transactions on Image Processing, vol. 9, no. 1, pp. 38-54, Jan. 2000. In one embodiment, the basic pattern categories used by humans for similarity evaluation of color patterns are directionality, regularity and placement, and complexity and heaviness, and the basic color categories used by humans for the same task are overall color and color purity. The following table provides the meanings for these criteria:

Overall color
  Expresses: the perception of a single dominant color, or of a multicolored image that creates the impression of a dominant color.
  Derived criterion for constrained texture synthesis: preserve the overall color in the synthesized texture.

Directionality and orientation
  Expresses: the perception of the dominant orientation in the edge distribution, or of the dominant direction in the repetition of the structural element.
  Derived criterion for constrained texture synthesis: preserve the dominant orientation in the synthesized texture.

Regularity and placement
  Expresses: the perception of the placement (and of small perturbations in placement), repetition and uniformity of the patterns.
  Derived criterion for constrained texture synthesis: not applied.

Color purity (overall saturation)
  Expresses: the perception of the degree of colorfulness in patterns.
  Derived criterion for constrained texture synthesis: preserve the color saturation in the synthesized texture.

Pattern complexity and heaviness
  Expresses: a general impression based on the type of overall color (light versus dark), the overall chroma (saturated versus unsaturated), the spatial frequency in the repetition of the structural element, or the color contrast.
  Derived criterion for constrained texture synthesis: preserve the type of overall color (light or dark) in the synthesized texture.

Because different visual pathways process patterns and colors in the human visual system, separate constraints for texture and color are derived in the texture synthesis process. Moreover, texture is typically synthesized with gray levels, i.e., luminance frames, and color constraints are imposed on the chrominance frames. In terms of texture, the synthesized texture should have similar dominant directionality as that of the original texture. In terms of color, the synthesized texture should have similar overall color and color saturation as those of the original texture.
To address the texture requirement, the magnitude correlation constraint (C3), which represents the structure in images, and the cross-scale statistics constraint (C4), which allows distinguishing lines and edges, are selected and applied. While these texture constraints are sufficient for unstructured and smooth textures, they do not allow synthesizing structured and busy textures with appearances that are similar to those of the original ones. Therefore, for structured and busy textures, the marginal statistics constraint (C1) and the coefficient correlation constraint (C2) are used. The correlation constraint characterizes the regularity of the texture as represented by periodic or globally oriented structure in the set of constraints. The overall color (C5) and color saturation (C6) are the color constraints that address the color requirements. The overall color constraint may be further expressed in terms of similar color moments (mean color and standard color deviation).
Because the pixel values in the segmented regions have been replaced with mean pixel values within the region, the overall color is easily preserved. Because preserving only the mean color would yield textures with discolored appearances in the cases of textures that exhibit large color variations, the color saturations are also preserved.
By selecting different sets of constraints for unstructured and smooth textures, and structured and busy textures, different characteristics of the background textures that are present in video sequences can be utilized. For example, by removing the marginal statistics constraint (C1), synthesized textures that differ in their first and second order statistics from the original textures can be obtained. Consequently, according to Julesz's conjecture, the synthesized and original textures would then be distinguishable in pre-attentive (undetailed) evaluation of less than 50 milliseconds (ms). At a frame rate of 24 frames per second, each frame is indeed viewed in a pre-attentive mode, i.e., for approximately 40 (< 50) ms. By removing the coefficient correlation constraint (C2) for the same class of unstructured and smooth textures, the requirement that the synthesized and the original textures have similar periodicity is relaxed. This, in turn, improves the compression effectiveness.
The synthesized texture is mapped onto the luminance frame of the video sequence. The segmented regions within the chrominance frames remain filled with pixels having the mean values obtained by the region segmentation. Thus, the color constraints are satisfied.
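The mapping step can be sketched as a masked write into the YUV planes: synthesized texture goes into Y only, while U and V keep the mean chrominance values from segmentation. The function name and the argument layout (an M x N x 3 YUV array) are assumptions made for illustration.

```python
import numpy as np

def insert_texture_yuv(frame_yuv, region_mask, synth_luma, mean_uv):
    """Write synthesized texture into the luminance plane of the segmented
    region; the chrominance planes keep their mean values, which satisfies
    the overall-color and color-saturation constraints."""
    out = frame_yuv.astype(np.float64).copy()
    out[..., 0][region_mask] = synth_luma[region_mask]  # Y: synthesized texture
    out[..., 1][region_mask] = mean_uv[0]               # U: mean chrominance
    out[..., 2][region_mask] = mean_uv[1]               # V: mean chrominance
    return out
```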
Although the present invention has been described in terms of a video sequence, it is understood by one of skill in the art that the systems and methods described herein also apply to a still image or to a single frame. Thus, removing texture from a set of frames includes the case where the set of frames is a single frame or a still image.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (101)

1. A method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising:
selecting original texture from an initial frame within the set of frames;
removing the selected original texture from the set of frames in the video sequence;
determining texture parameters from an analysis of the removed texture; and inserting synthesized texture into the set of frames, wherein the synthesized texture is based on the texture parameters.
2. A method as defined in claim 1, wherein selecting original texture from an initial frame within the set of frames further comprises:
identifying a region of interest within the texture of the initial frame;
computing a mean vector for the region of interest;
computing an angular map and a modulus map for the initial frame, wherein the angular map and the modulus map represent color characteristics;
comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame;

including, in the texture to be removed, pixels whose color characteristics are similar to the color characteristics of the region of interest;
identifying the texture to be removed from the initial frame, wherein the texture identified from the initial frame is removed from each frame in the set of frames.
3. A method as defined in claim 1, wherein determining texture parameters from an analysis of the removed texture further comprises:
decomposing the initial frame using steerable pyramids; and computing statistical texture descriptors that are included in the texture parameters from the decomposed initial frame, wherein the statistical texture descriptors include at least one of:
marginal statistics;
correlations of image coefficients;
correlations of the magnitudes of the coefficients; and cross-scale phase statistics.
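As an illustration of two of these descriptor families, the sketch below computes marginal statistics and a windowed autocorrelation (coefficient correlations) for one subband; the steerable pyramid decomposition itself is assumed to come from an external implementation, and the function names are illustrative:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def marginal_statistics(band):
    """Marginal statistics of one pyramid subband."""
    c = band.ravel()
    return {"mean": c.mean(), "variance": c.var(),
            "skewness": skew(c), "kurtosis": kurtosis(c, fisher=False)}

def coefficient_correlations(band, max_lag=3):
    """Central (2*max_lag+1)^2 window of the subband autocorrelation."""
    c = band - band.mean()
    f = np.fft.fft2(c)
    ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real) / c.size
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    return ac[cy - max_lag: cy + max_lag + 1, cx - max_lag: cx + max_lag + 1]
```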
4. A method as defined in claim 1, wherein inserting synthesized texture into the set of frames, wherein the synthesized texture is based on the texture parameters, further comprises:
synthesizing the removed texture by:

applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
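The two orderings amount to a dispatch on the texture class. In the sketch below, the individual constraint functions are placeholders (a `constraints` mapping from name to callable is assumed), not the patent's implementation:

```python
STRUCTURED_BUSY_ORDER = [
    "coefficient_magnitude_correlations", "cross_scale_statistics",
    "coefficient_correlations", "marginal_statistics",
    "overall_color", "color_saturation",
]
UNSTRUCTURED_SMOOTH_ORDER = [
    "coefficient_magnitude_correlations", "cross_scale_statistics",
    "overall_color", "color_saturation",
]

def apply_constraints(texture, params, constraints, structured_and_busy):
    """Apply the qualitative constraints in the prescribed order for the class."""
    order = STRUCTURED_BUSY_ORDER if structured_and_busy else UNSTRUCTURED_SMOOTH_ORDER
    for name in order:
        texture = constraints[name](texture, params)  # each entry: callable(texture, params)
    return texture
```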
5. A method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising:
selecting original texture from an initial frame within the set of frames;
removing the selected original texture from the set of frames in the video sequence;
analyzing the removed texture to compute texture parameters;
synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames.
6. A method as defined in claim 5, wherein selecting texture from an initial frame within the set of frames further comprises:
identifying a region of interest within the texture of the initial frame;

comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including, in the texture to be removed, pixels whose color characteristics are similar to the color characteristics of the region of interest.
7. A method as defined in claim 6, wherein selecting texture from an initial frame within the set of frames further comprises:
computing a mean vector for the region of interest;
computing an angular map and a modulus map for the initial frame; and identifying the texture to be removed.
8. A method as defined in claim 5, wherein removing the selected original texture from the set of frames in the video sequence further comprises replacing the removed texture with pixels that have a constant value equal to a mean color of the selected texture.
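A minimal sketch of this replacement step, assuming an H x W x 3 frame array and a boolean mask for the selected region (names are illustrative):

```python
import numpy as np

def replace_with_mean_color(frame, mask):
    """Fill the removed-texture region with its per-channel mean color."""
    out = frame.astype(np.float64).copy()
    mean_color = out[mask].mean(axis=0)  # mean over the region's pixels, per channel
    out[mask] = mean_color               # constant-valued region is cheap to encode
    return out, mean_color
```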
9. A method as defined in claim 5, wherein analyzing the removed texture to compute texture parameters further comprises:
decomposing the initial frame using steerable pyramids; and computing statistical texture descriptors that are included in the texture parameters from the decomposed initial frame, wherein the statistical texture descriptors include at least one of:

marginal statistics;
correlations of image coefficients;
correlations of the magnitudes of the coefficients; and cross-scale phase statistics.
10. A method as defined in claim 5, wherein synthesizing new texture using the texture parameters further comprises applying qualitative constraints to the new texture, wherein the qualitative constraints include at least one of:
marginal statistics;
coefficient correlations;
coefficient magnitude correlations;
cross scale statistics;
overall color; and color saturation.
11. A method as defined in claim 10, wherein synthesizing new texture using the texture parameters further comprises one of:
applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
12. A method as defined in claim 5, further comprising encoding the video sequence with the synthesized texture.
13. In a system that distributes a video sequence over a network, a method for replacing original texture of a video sequence with synthesized texture in order to reduce a bit rate of the video sequence, the method comprising:
removing original texture from a set of frames included in the video sequence, wherein the original texture is identified from a single frame included in the video sequence;
analyzing the removed texture by computing texture parameters that include statistical texture descriptors;
generating synthesized texture using the texture parameters and qualitative constraints; and replacing the original texture in the set of frames with the synthesized texture.
14. A method as defined in claim 13, wherein removing original texture from a set of frames included in the video sequence further comprises:
identifying a region of interest within the single frame;

comparing color characteristics of the region of interest with color characteristics of all pixels in the single frame; and segmenting those pixels whose color characteristics are similar to the color characteristics of the region of interest into regions for removal.
15. A method as defined in claim 14, wherein comparing color characteristics of the region of interest with color characteristics of all pixels in the single frame further comprises:
computing a mean vector for the region of interest;
computing an angular map and a modulus map for the single frame; and computing a distance measure and a mean distance measure, wherein all pixels that satisfy a constraint defined by the distance measure and the mean distance measure are included in the original texture to be removed.
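One plausible reading of these steps is sketched below: the angular map holds the angle between each pixel's color vector and the region-of-interest mean vector, the modulus map holds the color-vector magnitude, and the constraint keeps pixels whose map values lie near the region's own statistics. The exact formulas and the threshold `k` are assumptions for illustration, not the patent's definitions:

```python
import numpy as np

def angular_modulus_maps(frame, roi_mask):
    """Angular and modulus maps relative to the ROI mean color vector."""
    h, w, _ = frame.shape
    pix = frame.reshape(-1, 3).astype(np.float64)
    mean_vec = pix[roi_mask.ravel()].mean(axis=0)   # mean vector of the ROI
    norms = np.linalg.norm(pix, axis=1)
    cosine = pix @ mean_vec / (norms * np.linalg.norm(mean_vec) + 1e-12)
    angular = np.arccos(np.clip(cosine, -1.0, 1.0)).reshape(h, w)
    modulus = norms.reshape(h, w)
    return angular, modulus

def select_texture_pixels(angular, modulus, roi_mask, k=1.0):
    """Keep pixels whose map values fall within k std devs of the ROI means."""
    keep = np.ones_like(roi_mask, dtype=bool)
    for m in (angular, modulus):
        mu, sd = m[roi_mask].mean(), m[roi_mask].std()
        keep &= np.abs(m - mu) <= k * sd
    return keep
```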
16. A method as defined in claim 13, wherein removing original texture from a set of frames included in the video sequence further comprises replacing the removed texture with pixels that have a constant value equal to a mean color of the selected texture.
17. A method as defined in claim 13, wherein analyzing the removed texture by computing texture parameters that include statistical texture descriptors further comprises decomposing the single frame using steerable pyramids, wherein the statistical texture descriptors include marginal statistics, correlations of image coefficients, correlations of the magnitudes of the coefficients, and cross-scale phase statistics.
18. A method as defined in claim 13, wherein the qualitative constraints include one or more of marginal statistics, coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color and color saturation, and wherein generating synthesized texture using the texture parameters and qualitative constraints further comprises:
applying, in a prescribed order, coefficient magnitude correlations, cross-scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross-scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
19. A computer program product having computer-executable instructions for performing the elements of claim 13.
20. A method for replacing original texture in a plurality of frames that are included in a video sequence with synthesized texture, the method comprising:
selecting a region of interest in an initial frame, wherein the region of interest has color characteristics;
identifying an original texture in the initial frame using the color characteristics of the region of interest, wherein some of the pixels in the frame that have color characteristics that are similar to the color characteristics of the region of interest are included in the original texture;
removing the original texture from the frame;
computing texture parameters from the original texture;
creating synthesized texture using the texture parameters, wherein the synthesized texture is distinguishable from the original texture; and inserting the synthesized texture into a set of frames that includes the initial frame.
21. A method as defined in claim 20, wherein identifying an original texture in the initial frame using the color characteristics of the region of interest further comprises computing an angular map and a modulus map for the initial frame, wherein the angular map and the modulus map are used to identify regions that are similar to the region of interest.
22. A method as defined in claim 20, wherein creating synthesized texture using the texture parameters further comprises applying qualitative constraints to the synthesized texture.
23. A method as defined in claim 22, wherein applying qualitative constraints to the synthesized texture further comprises:

applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
24. A method as defined in claim 20, wherein computing texture parameters from the original texture further comprises decomposing the initial frame using steerable pyramids to compute statistical texture descriptors that are included in the texture parameters.
25. A method as defined in claim 20, further comprising encoding the video sequence that includes the synthesized texture.
26. A computer program product for implementing a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the computer program product comprising:
a computer-readable medium having computer-executable instructions for performing the method, the method comprising:

selecting original texture from an initial frame included in the video sequence;
removing the selected original texture from a set of frames in the video sequence, wherein the set of frames includes the initial frame;
analyzing the removed texture to compute texture parameters;
synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames.
27. A computer program product as defined in claim 26, wherein selecting texture from an initial frame included in the video sequence further comprises:
identifying a region of interest within the texture of the initial frame;
comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including, in the texture to be removed, pixels whose color characteristics are similar to the color characteristics of the region of interest.
28. A computer program product as defined in claim 27, wherein selecting texture from an initial frame included in the video sequence further comprises:
computing a mean vector for the region of interest;
computing an angular map and a modulus map for the initial frame; and identifying the texture to be removed.
29. A computer program product as defined in claim 26, wherein removing the selected original texture from a set of frames in the video sequence further comprises replacing the removed texture with pixels that have a constant value equal to a mean color of the selected texture.
30. A computer program product as defined in claim 26, wherein analyzing the removed texture to compute texture parameters further comprises:
decomposing the initial frame using steerable pyramids; and computing statistical texture descriptors that are included in the texture parameters from the decomposed initial frame, wherein the statistical texture descriptors include at least one of:
marginal statistics;
correlations of image coefficients;
correlations of the magnitudes of the coefficients; and cross-scale phase statistics.
31. A computer program product as defined in claim 26, wherein synthesizing new texture using the texture parameters further comprises applying qualitative constraints to the new texture, wherein the qualitative constraints include at least one of:
marginal statistics;
coefficient correlations;
coefficient magnitude correlations;
cross-scale statistics;
overall color; and color saturation.

32. A computer program product as defined in claim 31, wherein synthesizing new texture using the texture parameters further comprises one of:
applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
33. A method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising:
selecting original texture from an initial frame within the set of frames;
removing the selected original texture from the set of frames in the video sequence;
determining texture parameters from an analysis of the removed texture; and inserting synthesized texture, based on the texture parameters, into the set of frames, the inserting synthesized texture into the set of frames further comprising:
synthesizing the removed texture by:
applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
34. A method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the method comprising:
selecting original texture from an initial frame within the set of frames;
removing the selected original texture from the set of frames in the video sequence;
analyzing the removed texture to compute texture parameters;
synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames, wherein synthesizing new texture using the texture parameters further comprises:
applying qualitative constraints to the new texture, wherein the qualitative constraints include at least one of:

marginal statistics, coefficient correlations, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation; and synthesizing new texture using the texture parameters further comprises one of:
applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
35. In a system that distributes a video sequence over a network, a method for replacing original texture of a video sequence with synthesized texture in order to reduce a bit rate of the video sequence, the method comprising:
removing original texture from a set of frames included in the video sequence, wherein the original texture is identified from a single frame included in the video sequence;
analyzing the removed texture by computing texture parameters that include statistical texture descriptors;
generating synthesized texture using the texture parameters and qualitative constraints; and replacing the original texture in the set of frames with the synthesized texture, wherein:
the qualitative constraints include one or more of marginal statistics, coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color and color saturation, and wherein:

generating synthesized texture using the texture parameters and qualitative constraints further comprises:
applying, in a prescribed order, coefficient magnitude correlations, cross-scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross-scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
36. A method for replacing original texture in a plurality of frames that are included in a video sequence with synthesized texture, the method comprising:
selecting a region of interest in an initial frame, wherein the region of interest has color characteristics;
identifying an original texture in the initial frame using the color characteristics of the region of interest, wherein some of the pixels in the frame that have color characteristics that are similar to the color characteristics of the region of interest are included in the original texture;
removing the original texture from the frame;
computing texture parameters from the original texture;
creating synthesized texture using the texture parameters, wherein the synthesized texture is distinguishable from the original texture; and inserting the synthesized texture into a set of frames that includes the initial frame, wherein:
creating synthesized texture using the texture parameters further comprises:
applying qualitative constraints to the synthesized texture, which further comprises:
applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
37. A computer program product for implementing a method for replacing original texture in a set of frames included in a video sequence with synthesized texture such that a bit rate of the video sequence that includes the synthesized texture is less than a bit rate of the original video sequence, the computer program product comprising:
a computer-readable medium having computer-executable instructions for performing the method, the method comprising:
selecting original texture from an initial frame included in the video sequence;
removing the selected original texture from a set of frames in the video sequence, wherein the set of frames includes the initial frame;
analyzing the removed texture to compute texture parameters;
synthesizing new texture using the texture parameters; and inserting the synthesized texture into the set of frames, wherein:
synthesizing new texture using the texture parameters further comprises:
applying qualitative constraints to the new texture, wherein the qualitative constraints include at least one of:
marginal statistics, coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color, and color saturation; and synthesizing new texture using the texture parameters further comprises one of:
applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints if the original texture is structured and busy; and applying, in a prescribed order, coefficient magnitude correlations, cross scale statistics, overall color, and color saturation qualitative constraints if the original texture is unstructured and smooth.
38. A method for replacing original texture from video with synthesized texture, the method comprising:
synthesizing texture associated with a set of frames of video data by applying, in a prescribed order, at least one of coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and inserting the synthesized texture into the set of frames.
39. The method of claim 38, further comprising synthesizing the texture according to whether the original texture is busy or smooth.
40. The method of claim 38, further comprising:
selecting original texture from an initial frame within the set of frames;
removing the selected original texture from the set of frames in the video sequence;
determining texture parameters from an analysis of the removed texture; and inserting the synthesized texture based on the texture parameters into the set of frames.
41. The method of claim 38, wherein a bit rate of a video sequence that includes the synthesized texture is less than a bit rate of an original video sequence.
42. The method of claim 38, further comprising identifying a region of interest within texture of an initial frame.
43. The method of claim 42, further comprising computing a mean vector for the region of interest.
44. The method of claim 43, further comprising:
computing an angular map and a modulus map for the initial frame, wherein the angular map and the modulus map represent color characteristics;
comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including in the texture to be removed pixels whose color characteristics are similar to the color characteristics of the region of interest.
45. A system that replaces original texture from video with synthesized texture, the system comprising:
a module configured to synthesize texture associated with a set of frames of video data by applying, in a prescribed order, at least one of: coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and a module configured to insert the synthesized texture into the set of frames.
46. The system of claim 45, further comprising a module configured to synthesize the texture according to whether the original texture is busy or smooth.
47. The system of claim 45, further comprising:
a module configured to select original texture from an initial frame within the set of frames;
a module configured to remove the selected original texture from the set of frames in the video sequence;
a module configured to determine texture parameters from an analysis of the removed texture; and a module configured to insert the synthesized texture based on the texture parameters into the set of frames.
48. The system of claim 45, wherein a bit rate of a video sequence that includes the synthesized texture is less than a bit rate of an original video sequence.
49. The system of claim 45, further comprising a module configured to identify a region of interest within texture of an initial frame.
50. The system of claim 49, further comprising a module configured to compute a mean vector for the region of interest.
51. The system of claim 50, further comprising:
a module configured to compute an angular map and a modulus map for the initial frame, wherein the angular map and the modulus map represent color characteristics;
a module configured to compare color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and a module configured to include in the texture to be removed pixels whose color characteristics are similar to the color characteristics of the region of interest.
52. A computer-readable medium storing instructions for controlling a computing device to replace original texture from video with synthesized texture, the instructions comprising:
synthesizing texture associated with a set of frames of video data by applying, in a prescribed order, at least one of coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and inserting the synthesized texture into the set of frames.
53. The computer-readable medium of claim 52, wherein the instructions further comprise synthesizing the texture according to whether the original texture is busy or smooth.
54. The computer-readable medium of claim 52, wherein the instructions further comprise:
selecting original texture from an initial frame within the set of frames;
removing the selected original texture from the set of frames in the video sequence;
determining texture parameters from an analysis of the removed texture; and inserting the synthesized texture based on the texture parameters into the set of frames.
55. The computer-readable medium of claim 52, wherein a bit rate of a video sequence that includes the synthesized texture is less than a bit rate of an original video sequence.
56. The computer-readable medium of claim 52, wherein the instructions further comprise identifying a region of interest within texture of an initial frame.
57. The computer-readable medium of claim 56, wherein the instructions further comprise computing a mean vector for the region of interest.
58. The computer-readable medium of claim 57, wherein the instructions further comprise:
computing an angular map and a modulus map for the initial frame, wherein the angular map and the modulus map represent color characteristics;
comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including in the texture to be removed pixels whose color characteristics are similar to the color characteristics of the region of interest.
59. A method for decoding received data having synthesized texture, the method comprising:
receiving data having synthesized texture associated with a set of frames of video data, wherein the texture was synthesized by applying, in a prescribed order, at least one of coefficient magnitude correlations, cross scale statistics, coefficient correlations, marginal statistics, overall color, and color saturation qualitative constraints according to whether an original texture associated with the set of frames is structured or unstructured; and decoding the set of frames having inserted synthesized texture.
60. The method of claim 59, wherein the texture is synthesized according to whether the original texture is busy or smooth.
61. The method of claim 59, wherein the texture is synthesized and inserted according to a method comprising:
selecting original texture from an initial frame within the set of frames;
removing the selected original texture from the set of frames in the video sequence;
determining texture parameters from an analysis of the removed texture; and inserting the synthesized texture based on the texture parameters into the set of frames.
62. The method of claim 59, wherein a bit rate of a video sequence that includes the synthesized texture is less than a bit rate of an original video sequence.
63. The method of claim 59, wherein a region of interest is identified within texture of an initial frame.
64. The method of claim 63, wherein a mean vector is computed for the region of interest.
65. The method of claim 64, wherein the texture is synthesized and inserted further according to a method comprising:
computing an angular map and a modulus map for the initial frame, wherein the angular map and the modulus map represent color characteristics;

comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including in the texture to be removed pixels whose color characteristics are similar to the color characteristics of the region of interest.
66. A method for processing video data, the method comprising:
removing texture from at least some frames of a set of frames in a video sequence;
analyzing the removed texture to obtain texture parameters;
synthesizing new texture based on the obtained texture parameters and at least one of the following qualitative constraints: dominant texture orientation, overall color and color saturation; and inserting the synthesized new texture into the set of frames.
67. The method of claim 66, wherein the synthesized new texture is visually similar to the original texture.
68. The method of claim 66, wherein the list of qualitative constraints further comprises: coefficient magnitude correlations, cross scale statistics, coefficient correlations and marginal statistics.
69. The method of claim 66, wherein the color saturation qualitative constraint is further based on whether an original texture associated with the set of frames is structured or unstructured.
70. The method of claim 66, wherein the texture is synthesized according to whether the original texture is structured and busy or unstructured and smooth.
71. The method of claim 70, wherein if the removed texture is structured and busy, then synthesizing the new texture is based on at least the constraints:
marginal statistics, coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color and color saturation.
72. The method of claim 70, wherein if the removed texture is unstructured and smooth, then synthesizing the new texture is based on at least the constraints:
coefficient magnitude correlations, cross-scale statistics, overall color and color saturation.
73. The method of claim 66, wherein the synthesized new texture is similar to but distinguishable from the removed texture.
74. The method of claim 66, wherein a bit rate of a video sequence that includes the synthesized texture is less than a bit rate of an original video sequence.
75. The method of claim 66, wherein a region of interest is identified within texture of an initial frame.
76. The method of claim 75, wherein a mean vector is computed for the region of interest.
77. The method of claim 76, wherein the texture is synthesized and inserted further according to a method comprising:
computing an angular map and a modulus map for an initial frame, wherein the angular map and the modulus map represent color characteristics;
comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including in the texture to be removed pixels whose color characteristics are similar to the color characteristics of the region of interest.
78. A system for processing video data, the system comprising:
a module configured to remove texture from at least some frames of a set of frames in a video sequence;
a module configured to analyze the removed texture to obtain texture parameters;

a module configured to synthesize new texture based on the obtained texture parameters and at least one of the following qualitative constraints: dominant texture orientation, overall color and color saturation; and a module configured to insert the synthesized new texture into the set of frames.
79. The system of claim 78, wherein the synthesized new texture is visually similar to the original texture.
80. The system of claim 78, wherein the list of qualitative constraints further comprises: coefficient magnitude correlations, cross scale statistics, coefficient correlations and marginal statistics.
81. The system of claim 78, wherein the color saturation qualitative constraint is further based on whether an original texture associated with the set of frames is structured or unstructured.
82. The system of claim 78, wherein the texture is synthesized according to whether the original texture is structured and busy or unstructured and smooth.
83. The system of claim 82, wherein if the removed texture is structured and busy, then the module configured to synthesize further synthesizes the new texture based on at least the constraints: marginal statistics; coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color and color saturation.
84. The system of claim 82, wherein if the removed texture is unstructured and smooth, then the module configured to synthesize further synthesizes the new texture based on at least the constraints: coefficient magnitude correlations, cross-scale statistics, overall color and color saturation.
85. The system of claim 78, wherein the synthesized new texture is similar to but distinguishable from the removed texture.
86. The system of claim 78, wherein a bit rate of a video sequence that includes the synthesized texture is less than a bit rate of an original video sequence.
87. The system of claim 78, wherein a region of interest is identified within texture of an initial frame.
88. The system of claim 87, wherein a mean vector is computed for the region of interest.
89. The system of claim 88, wherein the texture is synthesized and inserted by:
computing an angular map and a modulus map for an initial frame, wherein the angular map and the modulus map represent color characteristics;
comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including in the texture to be removed pixels whose color characteristics are similar to the color characteristics of the region of interest.
90. A computer-readable medium storing instructions for controlling a computing device to process video data, the instructions comprising:
removing texture from at least some frames of a set of frames in a video sequence;
analyzing the removed texture to obtain texture parameters;
synthesizing new texture based on the obtained texture parameters and at least one of the following qualitative constraints: dominant texture orientation, overall color and color saturation; and inserting the synthesized new texture into the set of frames.
91. The computer-readable medium of claim 90, wherein synthesized new texture is visually similar to the original texture.
92. The computer-readable medium of claim 90, wherein the list of qualitative constraints further comprises: coefficient magnitude correlations, cross scale statistics, coefficient correlations and marginal statistics.
93. The computer-readable medium of claim 90, wherein the color saturation qualitative constraint is further based on whether an original texture associated with the set of frames is structured or unstructured.
94. The computer-readable medium of claim 90, wherein the texture is synthesized according to whether the original texture is structured and busy or unstructured and smooth.
95. The computer-readable medium of claim 94, wherein if the removed texture is structured and busy, then synthesizing the new texture is based on at least the constraints: marginal statistics, coefficient correlations, coefficient magnitude correlations, cross-scale statistics, overall color and color saturation.
96. The computer-readable medium of claim 94, wherein if the removed texture is unstructured and smooth, then synthesizing the new texture is based on at least the constraints: coefficient magnitude correlations, cross-scale statistics, overall color and color saturation.
97. The computer-readable medium of claim 90, wherein the synthesized new texture is similar to but distinguishable from the removed texture.
98. The computer-readable medium of claim 90, wherein a bit rate of a video sequence that includes the synthesized texture is less than a bit rate of an original video sequence.
99. The computer-readable medium of claim 90, wherein a region of interest is identified within texture of an initial frame.
100. The computer-readable medium of claim 99, wherein a mean vector is computed for the region of interest.
101. The computer-readable medium of claim 100, wherein the texture is synthesized and inserted further according to a method comprising:
computing an angular map and a modulus map for an initial frame, wherein the angular map and the modulus map represent color characteristics;
comparing color characteristics of the region of interest with color characteristics of all pixels in the initial frame; and including in the texture to be removed pixels whose color characteristics are similar to the color characteristics of the region of interest.
CA002407143A 2001-10-11 2002-10-04 Texture replacement in video sequences and images Expired - Fee Related CA2407143C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US32862701P 2001-10-11 2001-10-11
US60/328,627 2001-10-11
US10/237,489 2002-09-09
US10/237,489 US6977659B2 (en) 2001-10-11 2002-09-09 Texture replacement in video sequences and images

Publications (2)

Publication Number Publication Date
CA2407143A1 CA2407143A1 (en) 2003-04-11
CA2407143C true CA2407143C (en) 2006-08-29

Family

ID=26930735

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002407143A Expired - Fee Related CA2407143C (en) 2001-10-11 2002-10-04 Texture replacement in video sequences and images

Country Status (2)

Country Link
US (7) US6977659B2 (en)
CA (1) CA2407143C (en)

Families Citing this family (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7023447B2 (en) * 2001-05-02 2006-04-04 Eastman Kodak Company Block sampling based method and apparatus for texture synthesis
US6977659B2 (en) 2001-10-11 2005-12-20 At & T Corp. Texture replacement in video sequences and images
US7606435B1 (en) * 2002-02-21 2009-10-20 At&T Intellectual Property Ii, L.P. System and method for encoding and decoding using texture replacement
JP4305071B2 (en) * 2003-06-30 2009-07-29 株式会社ニコン Signal correction method
EP1683360A1 (en) * 2003-10-31 2006-07-26 Koninklijke Philips Electronics N.V. Method of encoding video signals
US7567715B1 (en) * 2004-05-12 2009-07-28 The Regents Of The University Of California System and method for representing and encoding images
US8422546B2 (en) 2005-05-25 2013-04-16 Microsoft Corporation Adaptive video encoding using a perceptual model
US7605821B1 (en) * 2005-09-29 2009-10-20 Adobe Systems Incorporated Poisson image-editing technique that matches texture contrast
US7995649B2 (en) 2006-04-07 2011-08-09 Microsoft Corporation Quantization adjustment based on texture level
US8503536B2 (en) 2006-04-07 2013-08-06 Microsoft Corporation Quantization adjustments for DC shift artifacts
US8059721B2 (en) 2006-04-07 2011-11-15 Microsoft Corporation Estimating sample-domain distortion in the transform domain with rounding compensation
US8711925B2 (en) 2006-05-05 2014-04-29 Microsoft Corporation Flexible quantization
EP1926321A1 (en) * 2006-11-27 2008-05-28 Matsushita Electric Industrial Co., Ltd. Hybrid texture representation
KR101381600B1 (en) * 2006-12-20 2014-04-04 삼성전자주식회사 Method and apparatus for encoding and decoding using texture synthesis
US8238424B2 (en) 2007-02-09 2012-08-07 Microsoft Corporation Complexity-based adaptive preprocessing for multiple-pass video compression
US8498335B2 (en) 2007-03-26 2013-07-30 Microsoft Corporation Adaptive deadzone size adjustment in quantization
US20080240257A1 (en) * 2007-03-26 2008-10-02 Microsoft Corporation Using quantization bias that accounts for relations between transform bins and quantization bins
US8243797B2 (en) 2007-03-30 2012-08-14 Microsoft Corporation Regions of interest for quality adjustments
US8442337B2 (en) 2007-04-18 2013-05-14 Microsoft Corporation Encoding adjustments for animation content
US8331438B2 (en) 2007-06-05 2012-12-11 Microsoft Corporation Adaptive selection of picture-level quantization parameters for predicted video pictures
US8208556B2 (en) 2007-06-26 2012-06-26 Microsoft Corporation Video coding using spatio-temporal texture synthesis
DE102007036215B4 (en) * 2007-08-02 2009-09-10 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Detection of defects in pictures / videos
US20090033674A1 (en) * 2007-08-02 2009-02-05 Disney Enterprises, Inc. Method and apparatus for graphically defining surface normal maps
JP4659005B2 (en) * 2007-08-17 2011-03-30 日本電信電話株式会社 Moving picture encoding method, decoding method, encoding apparatus, decoding apparatus based on texture synthesis, program thereof, and recording medium thereof
US8013862B2 (en) * 2007-11-16 2011-09-06 Microsoft Corporation Texture codec
US8189933B2 (en) * 2008-03-31 2012-05-29 Microsoft Corporation Classifying and controlling encoding quality for textured, dark smooth and smooth video content
WO2009155089A1 (en) * 2008-05-29 2009-12-23 Telcordia Technologies, Inc. Method and system for generating and presenting mobile content summarization
US8584048B2 (en) * 2008-05-29 2013-11-12 Telcordia Technologies, Inc. Method and system for multi-touch-based browsing of media summarizations on a handheld device
US8897359B2 (en) 2008-06-03 2014-11-25 Microsoft Corporation Adaptive quantization for enhancement layer video coding
US8577158B2 (en) * 2008-06-27 2013-11-05 Thomson Licensing Methods and apparatus for texture compression using patch-based sampling texture synthesis
JP5680283B2 (en) * 2008-09-19 2015-03-04 株式会社Nttドコモ Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program
US8296675B2 (en) * 2009-03-09 2012-10-23 Telcordia Technologies, Inc. System and method for capturing, aggregating and presenting attention hotspots in shared media
EP2583460A1 (en) * 2010-06-15 2013-04-24 Thomson Licensing Method for coding and decoding a video picture
US8762890B2 (en) 2010-07-27 2014-06-24 Telcordia Technologies, Inc. System and method for interactive projection and playback of relevant media segments onto the facets of three-dimensional shapes
US9712847B2 (en) * 2011-09-20 2017-07-18 Microsoft Technology Licensing, Llc Low-complexity remote presentation session encoder using subsampling in color conversion space
US9324133B2 (en) * 2012-01-04 2016-04-26 Sharp Laboratories Of America, Inc. Image content enhancement using a dictionary technique
GB2522012B (en) * 2014-01-06 2017-04-19 Sony Interactive Entertainment Inc Apparatus and method of texture encoding and decoding
CN105261001A (en) * 2014-07-14 2016-01-20 王科 Image processing method and device
US9753625B2 (en) * 2015-03-17 2017-09-05 Adobe Systems Incorporated Image selection control
DE102015009981A1 (en) * 2015-07-31 2017-02-02 Eberhard Karls Universität Tübingen Method and apparatus for image synthesis
US11409791B2 (en) 2016-06-10 2022-08-09 Disney Enterprises, Inc. Joint heterogeneous language-vision embeddings for video tagging and search
CN106682424A (en) * 2016-12-28 2017-05-17 上海联影医疗科技有限公司 Medical image adjusting method and medical image adjusting system
US10311326B2 (en) * 2017-03-31 2019-06-04 Qualcomm Incorporated Systems and methods for improved image textures
WO2019110149A1 (en) * 2017-12-08 2019-06-13 Huawei Technologies Co., Ltd. Cluster refinement for texture synthesis in video coding
EP3662669A1 (en) 2017-12-08 2020-06-10 Huawei Technologies Co., Ltd. Frequency adjustment for texture synthesis in video coding
WO2019110125A1 (en) * 2017-12-08 2019-06-13 Huawei Technologies Co., Ltd. Polynomial fitting for motion compensation and luminance reconstruction in texture synthesis
US10791343B2 (en) * 2018-03-13 2020-09-29 Google Llc Mixed noise and fine texture synthesis in lossy image compression
US10558761B2 (en) * 2018-07-05 2020-02-11 Disney Enterprises, Inc. Alignment of video and textual sequences for metadata analysis
US11315256B2 (en) * 2018-12-06 2022-04-26 Microsoft Technology Licensing, Llc Detecting motion in video using motion vectors
US10810782B1 (en) 2019-04-01 2020-10-20 Snap Inc. Semantic texture mapping system
US10951902B2 (en) * 2019-06-12 2021-03-16 Rovi Guides, Inc. Systems and methods for multiple bit rate content encoding
US11551385B1 (en) * 2021-06-23 2023-01-10 Black Sesame Technologies Inc. Texture replacement system in a multimedia

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5097427A (en) 1988-07-06 1992-03-17 Hewlett-Packard Company Texture mapping for computer graphics display controller system
US5097437A (en) 1988-07-17 1992-03-17 Larson Ronald J Controller with clocking device controlling first and second state machine controller which generate different control signals for different set of devices
DE3930147A1 (en) 1989-09-09 1991-03-21 Danfoss As HYDRAULIC STEERING UNIT
US5204920A (en) 1990-01-12 1993-04-20 U.S. Philips Corporation Method and apparatus for region and texture coding
DE69228983T2 (en) 1991-12-18 1999-10-28 Koninkl Philips Electronics Nv System for transmitting and / or storing signals from textured images
FR2695743A1 (en) 1992-09-16 1994-03-18 Philips Electronique Lab Textured image processing system, texture analyzer and texture synthesizer.
JP3030485B2 (en) * 1994-03-17 2000-04-10 富士通株式会社 Three-dimensional shape extraction method and apparatus
US5926576A (en) * 1994-03-30 1999-07-20 Newton; Dale C. Imaging method and system concatenating image data values to form an integer, partition the integer, and arithmetically encode bit position counts of the integer
DE69636695T2 (en) * 1995-02-02 2007-03-01 Matsushita Electric Industrial Co., Ltd., Kadoma Image processing device
US5872867A (en) 1995-08-04 1999-02-16 Sarnoff Corporation Method and apparatus for generating image textures
JPH09135447A (en) * 1995-11-07 1997-05-20 Tsushin Hoso Kiko Intelligent encoding/decoding method, feature point display method and interactive intelligent encoding supporting device
US5764814A (en) 1996-03-22 1998-06-09 Microsoft Corporation Representation and encoding of general arbitrary shapes
JP3510733B2 (en) 1996-04-03 2004-03-29 ペンタックス株式会社 Video signal processing device connectable to electronic endoscope
US6381364B1 (en) * 1996-12-31 2002-04-30 Intel Corporation Content texture sensitive picture/video encoder/decoder
US6850249B1 (en) * 1998-04-03 2005-02-01 Da Vinci Systems, Inc. Automatic region of interest tracking for a color correction system
US6215526B1 (en) * 1998-11-06 2001-04-10 Tivo, Inc. Analog video tagging and encoding system
EP1777658B1 (en) * 1999-02-05 2008-01-23 Samsung Electronics Co., Ltd. Image texture retrieving method and apparatus thereof
JP3325239B2 (en) 1999-06-09 2002-09-17 日本テレビ放送網株式会社 Caption material creation system, caption material creation method and recording medium storing caption material creation program
US6647132B1 (en) 1999-08-06 2003-11-11 Cognex Technology And Investment Corporation Methods and apparatuses for identifying regions of similar texture in an image
US6594391B1 (en) * 1999-09-03 2003-07-15 Lucent Technologies Inc. Method and apparatus for texture analysis and replicability determination
US6512846B1 (en) * 1999-11-29 2003-01-28 Eastman Kodak Company Determining orientation of images containing blue sky
US6421386B1 (en) 1999-12-29 2002-07-16 Hyundai Electronics Industries Co., Ltd. Method for coding digital moving video including gray scale shape information
US6642940B1 (en) 2000-03-03 2003-11-04 Massachusetts Institute Of Technology Management of properties for hyperlinked video
GB2362533A (en) 2000-05-15 2001-11-21 Nokia Mobile Phones Ltd Encoding a video signal with an indicator of the type of error concealment used
KR100450793B1 (en) 2001-01-20 2004-10-01 삼성전자주식회사 Apparatus for object extraction based on the feature matching of region in the segmented images and method therefor
WO2002063883A1 (en) 2001-02-06 2002-08-15 Koninklijke Philips Electronics N.V. Preprocessing method applied to textures of arbitrarily shaped objects
US6919903B2 (en) * 2001-03-02 2005-07-19 Mitsubishi Electric Research Laboratories, Inc. Texture synthesis and transfer for pixel images
US7023447B2 (en) * 2001-05-02 2006-04-04 Eastman Kodak Company Block sampling based method and apparatus for texture synthesis
US7136072B2 (en) * 2001-08-03 2006-11-14 Hewlett-Packard Development Company, L.P. System and method for performing texture synthesis
US6977659B2 (en) 2001-10-11 2005-12-20 At & T Corp. Texture replacement in video sequences and images
US7606435B1 (en) 2002-02-21 2009-10-20 At&T Intellectual Property Ii, L.P. System and method for encoding and decoding using texture replacement
WO2008156437A1 (en) 2006-04-10 2008-12-24 Avaworks Incorporated Do-it-yourself photo realistic talking head creation system and method
CN1757240A (en) 2003-03-03 2006-04-05 皇家飞利浦电子股份有限公司 Video encoding
US7426285B2 (en) 2004-09-21 2008-09-16 Euclid Discoveries, Llc Apparatus and method for processing video data
US7308443B1 (en) 2004-12-23 2007-12-11 Ricoh Company, Ltd. Techniques for video retrieval based on HMM similarity
US8200033B2 (en) 2005-11-30 2012-06-12 Koninklijke Philips Electronics N.V. Encoding method and apparatus applying coefficient reordering
US8207989B2 (en) 2008-12-12 2012-06-26 Microsoft Corporation Multi-video synthesis
US8320666B2 (en) 2009-08-14 2012-11-27 Genesis Group Inc. Real-time image and video matting
JP5634111B2 (en) 2010-04-28 2014-12-03 キヤノン株式会社 Video editing apparatus, video editing method and program
US9067320B2 (en) 2010-06-09 2015-06-30 Disney Enterprises, Inc. Robotic texture
EP2661881A4 (en) 2010-12-29 2016-10-12 Nokia Technologies Oy Depth map coding
US8712122B2 (en) 2011-03-31 2014-04-29 International Business Machines Corporation Shape based similarity of continuous wave doppler images
US20130176390A1 (en) 2012-01-06 2013-07-11 Qualcomm Incorporated Multi-hypothesis disparity vector construction in 3d video coding with depth
US20130329987A1 (en) 2012-06-11 2013-12-12 Genesis Group Inc. Video segmentation method

Also Published As

Publication number Publication date
US20070268301A1 (en) 2007-11-22
CA2407143A1 (en) 2003-04-11
US7307639B1 (en) 2007-12-11
US10282866B2 (en) 2019-05-07
US20050243099A1 (en) 2005-11-03
US20080055332A1 (en) 2008-03-06
US20110292993A1 (en) 2011-12-01
US7564465B2 (en) 2009-07-21
US7545989B1 (en) 2009-06-09
US20030076334A1 (en) 2003-04-24
US8896616B2 (en) 2014-11-25
US7995072B2 (en) 2011-08-09
US6977659B2 (en) 2005-12-20
US20150077427A1 (en) 2015-03-19

Similar Documents

Publication Publication Date Title
CA2407143C (en) Texture replacement in video sequences and images
US10445903B2 (en) System and method for encoding and decoding using texture replacement
EP1570413B1 (en) Region-of-interest tracking method and device for wavelet-based video coding
Bradley et al. Visual attention for region of interest coding in JPEG 2000
JP2000508488A (en) System and method for multi-resolution conversion of digital image information
JPH07203435A (en) Method and apparatus for enhancing distorted graphic information
Stoffels et al. Object‐oriented image analysis for very‐low‐bitrate video‐coding systems using the CNN universal machine
WO2006131866A2 (en) Method and system for image processing
Chai et al. Automatic face location for videophone images
Clarke Image and video compression: a survey
Mohsenian et al. Subband coding of video using an edge-based vector quantization technique for compression of the upper bands
Giunta et al. Image sequence coding using oriented edges
Dumitraş et al. An encoder-only texture replacement method for effective compression of entertainment movie sequences
Luo et al. Three dimensional subband video analysis and synthesis with adaptive clustering in high frequency subbands
Hsu et al. WaveNet processing brassboards for live video via radio
Faura et al. MMJPEG2000: A video compression scheme based on JPEG2000
Dumitras et al. A background modeling method by texture replacement and mapping with application to content-based movie coding
WO1999010838A1 (en) System and method for processing digital image information
JPH07184208A (en) Moving picture coder and decoder
Chen et al. VISUAL COMMUNICATIONS AND IMAGE-PROCESSING IV

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20151005