CA2108338C - Adaptive video encoder for two-layer encoding of video signals on ATM (asynchronous transfer mode) networks

Adaptive video encoder for two-layer encoding of video signals on ATM (asynchronous transfer mode) networks

Info

Publication number
CA2108338C
CA2108338C
Authority
CA
Canada
Prior art keywords
cell loss
representation
level
low
stream
Prior art date
Legal status
Expired - Fee Related
Application number
CA002108338A
Other languages
French (fr)
Other versions
CA2108338A1 (en)
Inventor
Caspar Horne
Amy Ruth Reibman
Current Assignee
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
Priority date
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc filed Critical American Telephone and Telegraph Co Inc
Publication of CA2108338A1 publication Critical patent/CA2108338A1/en
Application granted granted Critical
Publication of CA2108338C publication Critical patent/CA2108338C/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/146 Data rate or code amount at the encoder output
    • H04N 19/152 Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/187 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H04Q 11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q 11/0428 Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q 11/0478 Provisions for broadband connections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 2012/5614 User Network Interface
    • H04L 2012/5616 Terminal equipment, e.g. codecs, synch.
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/146 Data rate or code amount at the encoder output
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Abstract

The quality of video images received at the remote end of an ATM network capable of transmitting data at high and low priorities is greatly improved at high cell loss levels by employing a two-layered video encoding technique that adapts the method for encoding information transmitted in the low-priority digital information stream to the rate of cell loss on the network so that compression efficiency and image quality are high when the network load is low and resiliency to cell loss is high when the network load is high.
The encoder adapts its encoding method in response to a cell loss information signal generated by the remote decoder by selecting the prediction mode used to encode the low-priority digital information stream, and by changing the frequency at which slice-start synchronization codes are placed within the low-priority digital information stream (FIG. 1).

Description

ADAPTIVE VIDEO ENCODER FOR TWO-LAYER ENCODING OF VIDEO SIGNALS ON ATM (ASYNCHRONOUS TRANSFER MODE) NETWORKS

Technical Field

This invention relates to video image processing, and more particularly, to adapting video encoding parameters to ATM network load conditions for limiting the effect of lost cells on video quality.
Background of the Invention

The asynchronous transfer mode (ATM) environment is now widely recognized as the preferred way of implementing Broadband Integrated Services Digital Network (B-ISDN) multiservice networks for simultaneously carrying voice, data, and video on the network. ATM networks transmit an encoded video signal in short, fixed-size cells of information using statistical multiplexing.
An ATM network can transmit data using multiple priorities because it allows the terminal to mark each cell as either high or low-priority. If congestion develops, the ATM network drops low-priority cells before high-priority cells are dropped. Video can be encoded to take advantage of multiple priorities by partitioning the video image into more and less important parts. The more important part, known as the base layer, typically includes enough basic video information for the decoder to reconstruct a minimally acceptable image, and is transmitted by the ATM network in the high-priority bit-stream. The less important part, known as the enhancement layer, is used to enhance the quality of the image, and is transmitted in the low-priority bit-stream. The partitioning of video data into high and low priorities is described in detail in the Motion Picture Experts Group Phase 2 Test Model 5 Draft Version 2, Doc. MPEG93/225, April, 1993 (MPEG-2 TM5). Such methods include spatial scalability, frequency scalability, signal-to-noise ratio (SNR) scalability, and data partitioning.
One problem with ATM networks is that each network source is allocated less bandwidth than its peak requirement, which results in a nonzero probability that cells will be lost or delayed during transmission. Such probability of loss or delay increases as the load on the network increases. In addition, cells may be effectively lost as random bit errors are introduced into the cell header during transmission. A lost or delayed cell has the potential to significantly affect the image quality of the received video signal because real-time video cannot wait for retransmission of errored cells. Lost cells in a given frame cause errors in decoding which can propagate into subsequent frames, or into a larger spatial area. An encoding method that provides for high video image quality at the remote end, even when there are cell losses on the network, is said to be resilient to cell loss. Cell loss resiliency, however, is less significant when there are no cell losses on the network, such as when the network load is low. Thus, it is desirable to encode video with good compression efficiency when network load is low, but with good resiliency to cell loss when network traffic becomes congested.
Prior art video encoding systems with resiliency to cell loss using the high and low-priority transmission capabilities of ATM include adaptive encoders that dynamically modify encoding in response to information fed back to the encoder from the remote end.
For example, one prior art system adjusts the partition between data encoded into high and low priorities in response to cell loss, while using a fixed encoding algorithm, to improve the efficiency of statistical multiplexing. This prior art system is not entirely satisfactory because it requires that all sources on the ATM network adapt using the same partitioning scheme, which complicates the call admission (i.e., connection) process. This results because the network needs to ascertain that a source will implement the adaptation prior to making the admission.
Another prior art system provides resiliency to cell loss by decoding the received signal to determine the number and addresses of the blocks contained in lost cells at the remote end. Then this determination is relayed to the encoder, which calculates the affected picture area in the locally decoded image to allow encoding from the point of the errored blocks up to the currently encoded frame without using the errored area. This system requires that the decoder completely decode and process the transmitted bit-stream before any feedback can be relayed to the encoder. While this system provides for a measure of compression efficiency at low network loads, as the network load increases the feedback delay inherent in such a system can potentially defeat any advantage gained from adaptive encoding when the delay exceeds the real-time encoding requirements of the encoder.

Summary of the Invention

The quality of video images received at the remote end of an ATM network with high and low-priority transmission capability is greatly improved at high cell loss levels by employing a two-layered video encoding technique that adapts the algorithm used for encoding information transmitted in the low-priority bit-stream to the level of cell loss on the network so that compression efficiency and image quality are high when the network load is low and resiliency to cell loss is high when the network load is high.
Specifically, the encoder encodes the prediction error blocks of the enhancement layer using either spatial or temporal prediction, or a combination of both spatial and temporal prediction, in response to a cell loss information signal indicative of the level of cell losses on the ATM network. This cell loss information signal is transmitted to the encoder from the remote decoder. In the no or low cell loss situation, to encode the current block, the encoder selects either temporal or spatial prediction, depending on which will produce the best compression efficiency. However, as the average level of cell loss over a predetermined number of frames increases, the encoder uses spatial prediction more often to prevent any decoding error resulting from cell loss from propagating into subsequent frames. If the cell loss level subsequently decreases, the encoder selects the best prediction as before.
The encoder also adapts its encoding method in response to the cell loss information signal by changing the frequency of slice-start synchronization codes inserted in the low-priority bit-stream. The encoder inserts more frequent slice-start synchronization codes as the level of cell loss increases on the ATM network to allow the decoder to recover more rapidly from losses.
The invention provides a number of technical advantages in addition to improved video quality at high levels of cell loss. For example, the call admission process is simplified; the feedback delay to the encoder is minimized, allowing for more rapid adaptation to changing network loading conditions; the encoder adapts to the average network characteristics rather than to individual cell losses to improve adaptation response; and compression efficiency is high when the network is lightly loaded.
In accordance with one aspect of the present invention there is provided a method used by an encoder to encode an original video signal including frames, each frame containing at least one image representation, as an output for transmission on an ATM (asynchronous transfer mode) network, the method comprising the steps of: encoding each of said frames into high-priority and low-priority bit-streams; packetizing said encoded frames into cells and outputting said cells for transmission on said ATM network; depacketizing said transmitted cells to receive said encoded frames; monitoring a level of cell loss occurring in said received encoded frames; generating a representation of said level of cell loss for each of said received encoded frames; transmitting said representation of said level of cell loss to said encoder on said ATM network; and adapting encoding parameters used to encode said low-priority bit-stream in response to said representation of said level of cell loss, where said means for adapting is independent of said means for encoding said high-priority bit-stream.
In accordance with another aspect of the present invention there is provided apparatus used by an encoder to encode an original video signal including frames, each frame containing at least one image representation, as an output for transmission on an ATM (asynchronous transfer mode) network, said apparatus comprising: means for encoding each of said frames into high-priority and low-priority bit-streams; means for packetizing said encoded frames into cells, said packetizing means outputting said cells for transmission on said ATM network; means for depacketizing said cells to receive said encoded frames; means for monitoring a level of cell loss occurring in each of said received encoded frames; means for generating a representation of said level of cell loss for each of said received encoded frames; means for transmitting said representation of said level of cell loss to said encoder on said ATM network; and means for adapting encoding parameters used to encode said low-priority bit-stream in response to said representation of said level of cell loss, where said means for adapting is independent of said means for encoding said high-priority bit-stream.
Brief Description of the Drawings

Shown in FIG. 1, in simplified block diagram form, is an illustrative encoder unit and decoder unit embodying aspects of the invention, and an ATM network with high and low-priority transmission capability;
FIG. 2 shows the method of encoding the enhancement layer used in the illustrative embodiment of FIG. 1;
FIG. 3 shows, in flowchart form, the principles behind the determination of the spatio-temporal weighting parameter and slice-start synchronization parameter in accordance with an aspect of the invention; and
FIG. 4 shows the combinations of residual error energy for which spatial or temporal prediction will be used to encode the enhancement layer used by the illustrative embodiment of FIG. 1.

Detailed Description

FIG. 1 is a simplified block diagram of encoder unit 15, decoder unit 95, and ATM network 70 with high and low-priority transmission capability, incorporating the principles of the invention. In overall view, an original video signal, VIDIN, including frames, is supplied as an input to two-layer video encoder 10 in encoder unit 15. Such video signals are well known in the art. Two-layer video encoder 10 partitions and encodes the video signal into two bit-streams. One bit-stream includes the encoded base layer and the other includes the encoded enhancement layer. These bit-streams are indicated as base layer bit-stream BL and enhancement layer bit-stream EL in FIG. 1. Base layer bit-stream BL is transmitted over ATM network 70 at high-priority, and enhancement layer bit-stream EL is transmitted at low-priority.
In the illustrative embodiment of FIG. 1, spatial scalability is used as the basis for generating base and enhancement layer bit-streams BL and EL from input signal VIDIN. It will be apparent to those skilled in the art that it might be advantageous to use SNR or frequency scalability in some applications. Base layer bit-stream BL is generated by encoding a low resolution base layer image using, for example, the Motion Picture Experts Group Phase 1 standard (MPEG-1) set forth in the International Standards Organization Committee Draft 11172-2, "Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to 1.5 Mbits/s," November 1991. MPEG-1 encodes video using a combination of transform and predictive coding. The base layer is encoded using a constant bit-rate through the use of encoder base layer smoothing buffer 20. Base layer rate controller 30 generates, as an input to two-layer video encoder 10, quantization step size QBL for each macroblock in the base layer in response to an input signal from encoder base layer smoothing buffer 20 that is representative of the fullness of the buffer. Quantization step size QBL is the only variable parameter used for encoding the base layer. All other parameters used by the encoding algorithm to encode the base layer are fixed. Advantageously, the constant rate of base layer encoding significantly simplifies the call admission process, as a constant rate channel can be allocated to base layer bit-stream BL, which is transmitted at high-priority over ATM network 70.
Enhancement layer bit-stream EL is generated by encoding the difference between the original video signal VIDIN and the upsampled base layer image, where the base layer image is produced by locally decoding base layer bit-stream BL by two-layer video encoder 10. The enhancement layer is encoded at a constant bit-rate through the use of encoder enhancement layer smoothing buffer 40. Enhancement layer rate controller 50 generates, as an input to two-layer video encoder 10, quantization step size QEL for each macroblock in the enhancement layer in response to an input signal from encoder enhancement layer smoothing buffer 40 that is representative of the fullness of the buffer. Quantization step size QEL is the first variable encoding parameter used for encoding the enhancement layer. The enhancement layer is also encoded using second and third variable encoding parameters generated by enhancement layer adaptation device 60 in accordance with an aspect of the invention. The second variable encoding parameter is the spatio-temporal weighting parameter, w. The third variable encoding parameter is the number of slice-start synchronization codes inserted within each encoded frame, Nstart. The remaining parameters used to encode the enhancement layer are fixed. Spatio-temporal weighting parameter w and slice-start synchronization parameter Nstart are generated by enhancement layer adaptation device 60 as an input to two-layer video encoder 10.
Attention is directed to FIG. 2, which shows the method of encoding the enhancement layer used in the illustrative embodiment of FIG. 1. As is disclosed in MPEG-2 TM5 noted above, there is a loose coupling between the base and enhancement layers, that is, the coding algorithms used to code the layers are independent, but the enhancement coding algorithm can make use of the decoded images produced by the base layer algorithm. In FIG. 2, the predicted image on line 18 is subtracted from the original image on line 11 to produce the error image on line 12, which is to be coded onto line 13.
The predicted image is obtained from a weighted average of the enhancement layer image from the previous frame and the base layer image from the current frame, where the enhancement layer image is produced after decoding the enhancement layer bit-stream and adding the result to the upsampled base layer image. The predicted image is added back to the locally decoded error image to produce an error free version of the decoded enhancement layer image on line 15. Spatio-temporal weighting parameter w determines whether the enhancement layer encoding algorithm uses spatial prediction from the base layer image in the current frame, temporal prediction from the enhancement layer image from the previous frame, or a combination of both. Spatio-temporal weighting parameter w is generated by enhancement layer adaptation device 60 in encoder unit 15 (FIG. 1), with such generation method described in greater detail below. For all pels in the macroblock, the prediction pel at the same location is determined using:

x = w * xb + (1 - w) * xe    (1)

where x is the prediction pel, xb is the pel from the base layer, and xe is the pel from the enhancement layer. Thus, if spatio-temporal weighting parameter w = 0, the prediction block is obtained purely through temporal prediction, while if spatio-temporal weighting parameter w = 1, the prediction block is obtained purely through spatial prediction. If spatio-temporal weighting parameter w is a value other than 1 or 0, then the prediction block is obtained through a combination of spatial and temporal prediction.
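As an illustration only, here is a minimal sketch of the weighted prediction of equation (1) applied to a whole macroblock; the function name, the 16x16 block shape, and the NumPy dependency are assumptions made for this sketch, not part of the patent.

```python
import numpy as np

def predict_block(xb: np.ndarray, xe: np.ndarray, w: float) -> np.ndarray:
    """Equation (1): x = w * xb + (1 - w) * xe, applied to every pel.

    xb -- spatial prediction pels (upsampled base layer image, current frame)
    xe -- temporal prediction pels (enhancement layer image, previous frame)
    w  -- spatio-temporal weighting parameter, 0 (temporal) .. 1 (spatial)
    """
    return w * xb + (1.0 - w) * xe

base = np.full((16, 16), 128.0)  # hypothetical macroblock from the base layer
enh = np.full((16, 16), 120.0)   # hypothetical macroblock from the previous enhancement image
assert np.allclose(predict_block(base, enh, 0.0), enh)   # w = 0: purely temporal
assert np.allclose(predict_block(base, enh, 1.0), base)  # w = 1: purely spatial
```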
In the prior art, a typical choice of spatio-temporal weighting parameter w in ATM networks with no cell losses is the use of the prediction mode, either temporal or spatial, that produces the smallest residual error energy, to produce the best compression efficiency. This will usually result in the encoder using temporal prediction with spatio-temporal weighting parameter w = 0. However, any prediction that uses a spatio-temporal weighting parameter w ≠ 1 is not resilient to cell losses, because any lost data in the enhancement layer bit-stream can potentially cause the error to propagate into many subsequent enhancement layer images, which degrades the video image quality.
Alternatively, spatial only prediction with spatio-temporal weighting parameter w = 1, while providing resiliency to cell losses, will not take the maximum advantage of the information previously transmitted and received in a loss free condition, which results in a loss of compression efficiency and a corresponding reduction in video image quality. Thus, choosing a spatio-temporal weighting parameter w according to the prior art, as discussed above, does not allow the encoder to dynamically select between compression efficiency and resilience to cell loss as network conditions change.
However, spatio-temporal weighting parameter w is adapted to varying cell loss levels and ATM network load conditions to provide for high compression and video image quality when the ATM network load is low and cell loss is rare, and improved resilience to cell loss when the ATM network load and the level of cell loss increase. For a given macroblock, spatio-temporal weighting parameter w is determined as a function of the number of lost cells in a recent time interval, and as a function of the number of frames since the macroblock was last transmitted with spatial-only prediction. Advantageously, the adaptation performed by enhancement layer adaptation device 60 in encoder unit 15 (FIG. 1) provides for high video image quality at the remote end even if it is the only source on the ATM network using the adaptation. In addition, the adaptation does not influence the performance of other sources, either positively or negatively, nor does the network need to know that any adaptation is taking place.
FIG. 3 shows, in flowchart form, the principles behind the determination of spatio-temporal weighting parameter w performed by enhancement layer adaptation device 60 (FIG. 1). Accordingly, the routine is entered via step 300 upon the arrival of the current frame of original video signal VIDIN at two-layer video encoder 10 (FIG. 1). In steps 301 to 303, enhancement layer adaptation device 60 initializes the current frame number, f = 0, the previous received losses, L[f-F] ... L[f] = 0, and the most recent update of all macroblocks, Nup[0] ... Nup[Nmblks] = 0, where Nmblks is the number of macroblocks in the frame.
In step 304, enhancement layer adaptation device 60 receives the number of cell loss events D frames ago from decoder AAL 75, L[f-D], where D corresponds to the delay imposed by transmission and buffering, and computes a running average of the number of cell loss events in the last F frames, as:

Lavg = (1/F) * Σ L[i-D], summed over i = f-F to f-1    (2)

As F increases, so does the latency in reacting to changing network conditions. However, if F is too short, enhancement layer adaptation device 60 may be adapting to individual losses rather than to actual network conditions. Although not a limitation on the invention, for purposes of this example, values of F in the range of ten to fifteen have been shown to be effective to allow enhancement layer adaptation device 60 to adapt to average network conditions rather than to individual cell losses. Therefore, in a heavily congested network, enhancement layer adaptation device 60 does not need to wait until it receives information about particular cell losses before it adapts, because it has already received information that cell losses will likely occur, and it can adjust its generation of encoding parameters accordingly.
In step 305, enhancement layer adaptation device 60 (FIG. 1) determines how frequently two-layer video encoder 10 (FIG. 1) inserts slice-start synchronization codes in the enhancement layer bit-stream. In this illustrative embodiment, slice-start synchronization codes are evenly distributed throughout the frame, with slice-start synchronization parameter Nstart being equal to the number of macroblocks between each slice-start synchronization code determined according to:

Nstart = Max(2, Min(1 + Lavg * 3, v))    (3)

where v is the number of macroblocks contained vertically in the frame. Accordingly, the minimum number of slices is equal to the number of macroblocks vertically in the frame, while the maximum number is half the total number of macroblocks in the frame. When there are no cell losses on the network, the best video image quality can be obtained when two-layer encoder 10 (FIG. 1) inserts a single slice-start synchronization code per frame, as each new slice consumes at least 40 bits of overhead in the enhancement layer bit-stream.
However, more slices provide more immunity to error because if two-layer video decoder 90 (FIG. 1) becomes lost in decoding a bit-stream, whether due to a random bit error or a lost cell, it can recover by waiting for the next slice-start synchronization code. Therefore, as cell losses on the network increase, the spatial extent of cell losses is reduced and video image quality improved when a greater number of slice-start synchronization codes are inserted in the low-priority bit-stream according to equation (3).
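A one-line sketch of equation (3) follows; truncating 1 + Lavg * 3 to an integer is an assumption, since the patent does not say how a fractional value is rounded.

```python
def slice_start_parameter(l_avg: float, v: int) -> int:
    """Equation (3): Nstart = Max(2, Min(1 + Lavg * 3, v)).

    l_avg -- running average of cell loss events, equation (2)
    v     -- number of macroblocks contained vertically in the frame
    The result grows with the average cell loss and is clamped to [2, v].
    """
    return max(2, min(int(1 + l_avg * 3), v))
```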
In step 306, enhancement layer adaptation device 60 initializes the macroblock number i = 0 to begin coding the frame. In step 307, enhancement layer adaptation device 60 determines the spatial prediction and the best temporal prediction, using the available enhancement layer images, according to methods known to those skilled in the art, for example, the MPEG-1 video encoding standard noted above.
In step 308, enhancement layer adaptation device 60 computes the maximum number of frames, Nmax[i], that can elapse before macroblock i is sent using spatial prediction from the base layer image:

N""~[i] = 15 - Lo"8 * 3 - r (4) where r is a random integer between -2 and 2 inclusive. The random ~lçmçnt is incorporated so that all predictions from the base layer image, produced in accordance with equation (4), do not occur in just one frame, but are distributed randomly among several frames.
In step 309, enhancement layer adaptation device 60 compares the two values Nup[i] and Nmax[i]. If Nup[i] ≥ Nmax[i], enhancement layer adaptation device 60 will set w = 1 in step 313, resulting in two-layer video encoder 10 encoding using spatial only prediction; otherwise it continues with step 310.
In step 310, enhancement layer adaptation device 60 computes the residual energy in the error after the temporal prediction from the enhancement layer image from the previous frame, Eenh, and computes the residual energy of the error after spatial prediction from the current base layer image, Ebase. These computations are well known to those skilled in the art and are not discussed in detail here.
In step 311, enhancement layer adaptation device 60 computes an adjusted error, Eadj:

Eadj = Eoffset + Eenh * Eslope    (5)

where, for purposes of this illustrative embodiment,

Eoffset = (Nup[i] + 1) * Lavg * 3 * F    (6)

and

Eslope = 1 + Lavg * (Nup[i] + 1) / 4    (7)

In step 312, enhancement layer adaptation device 60 (FIG. 1) compares the two values Eadj and Ebase. If Eadj > Ebase, enhancement layer adaptation device 60 sets w = 1 in step 313 so that two-layer video encoder 10 uses only spatial prediction to encode the enhancement layer image, and resets the most recent update of the current macroblock, Nup[i] = 0, in step 314. Otherwise, enhancement layer adaptation device 60 sets w = 0 in step 315 so that two-layer video encoder 10 uses temporal prediction, and increments the most recent update of the current macroblock, Nup[i] += 1, in step 316. As will be appreciated by those skilled in the art, the use of a more general spatio-temporal weighting parameter w with a value other than 0 or 1 in the foregoing process is readily apparent.
For example, the methodology discussed in appendix G.1 of MPEG-2 TM5, noted above, is appropriately used by the invention in that spatio-temporal weighting parameters may be selected for each field in the enhancement layer image. For example, it is also possible to use a spatio-temporal weighting parameter value w = 0.5 for both fields.
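Pulling steps 308 through 316 together, the following is a sketch of the per-macroblock decision under the binary w of this embodiment; the function signature and the returned tuple are assumptions made for illustration.

```python
import random

def choose_weighting(n_up: int, l_avg: float, e_enh: float, e_base: float,
                     F: int = 12) -> tuple[int, int]:
    """Steps 308-316 of FIG. 3 for one macroblock, equations (4)-(7).

    n_up   -- frames since this macroblock was last sent with spatial prediction
    l_avg  -- running average of cell loss events, equation (2)
    e_enh  -- residual energy after temporal prediction (previous enhancement image)
    e_base -- residual energy after spatial prediction (current base layer image)
    F      -- number of frames in the running average
    Returns (w, updated n_up), with w = 1 for spatial and w = 0 for temporal.
    """
    r = random.randint(-2, 2)        # equation (4): spreads forced refreshes over frames
    n_max = 15 - l_avg * 3 - r       # max frames before a forced spatial refresh
    if n_up >= n_max:                # step 309: refresh is overdue
        return 1, 0                  # steps 313/314
    e_offset = (n_up + 1) * l_avg * 3 * F   # equation (6)
    e_slope = 1 + l_avg * (n_up + 1) / 4    # equation (7)
    e_adj = e_offset + e_enh * e_slope      # equation (5)
    if e_adj > e_base:               # step 312: spatial prediction wins
        return 1, 0                  # steps 313/314
    return 0, n_up + 1               # steps 315/316: stay temporal
```

Note that with no recent losses (l_avg = 0), e_offset is 0 and e_slope is 1, so the comparison reduces to picking whichever prediction has the smaller residual energy, matching the prior-art behavior described above.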
FIG. 4 is helpful in understanding the operations in steps 311 to 316 in the flowchart shown in FIG. 3. The shaded region in FIG. 4 graphically shows the combinations of Ebase and Eenh for which spatial prediction is used. The unshaded region shows the combinations of Ebase and Eenh for which temporal prediction is used. Points on the line dividing the two regions are regarded as belonging to the spatial prediction region.
The line dividing the two regions varies as a function of the number of frames since the last spatial prediction for the current macroblock i, and as a function of the average number of lost cells in the last F frames, according to the values of Eoffset and Eslope as determined in step 311 in the flowchart shown in FIG. 3. Thus, FIG. 4 shows that the likelihood of using spatial prediction increases when there are many cell losses on the network. In this illustrative embodiment, the value of Eoffset is determined so that flat areas in the frame are encoded using spatial prediction to gain resiliency to cell loss, since the prediction errors in these areas are not so large that much compression efficiency is lost over temporal prediction. The value of Eslope is determined such that the slope increases from unity as the number of cell losses in the last F frames increases, or as the number of frames since the last spatial prediction increases. Accordingly, Eoffset = 0 and Eslope = 1 when there have been no cell losses in the last F frames. It will be apparent to those skilled in the art that regions with shapes other than those shown in FIG. 4 may also be advantageous to use in some applications of the invention.
Returning to FIG. 3, in step 317, the current macroblock is encoded by two-layer video encoder 10 (FIG. 1) using the value generated by enhancement layer adaptation device 60 (FIG. 1) in step 313 or 315 for spatio-temporal weighting parameter w. In step 318, the macroblock number i is incremented, i += 1. In step 319, the value of the macroblock number i is compared with the number of macroblocks in a frame, Nmblks. If i ≥ Nmblks, the process continues by going to step 320. If i < Nmblks, the process repeats by returning to step 307. In step 320, the frame number f gets incremented, f += 1. In step 321, two-layer video encoder 10 determines if there are any more frames from video signal VIDIN to encode. If there are still frames to encode, the process repeats by returning to step 304. If there are no more frames to encode, two-layer video encoder 10 stops encoding in step 321.
Returning to FIG. 1, after original video input signal VIDIN is partitioned and encoded by two-layer encoder 10 as described above, base layer bit-stream BL and enhancement layer bit-stream EL are transmitted as an input to encoder base layer smoothing buffer 20 and encoder enhancement layer smoothing buffer 40, respectively, where the bit-streams are stored on a first-in-first-out basis for output to encoder ATM adaptation layer (AAL) device 65. Encoder AAL device 65 packetizes base layer bit-stream BL and enhancement layer bit-stream EL into fixed-length cells as bit-streams BLPACK and ELPACK for transmission across ATM network 70. An indication of the fullness of encoder base layer smoothing buffer 20 is received by base layer rate controller 30 for determining quantization step size QBL for encoding the base layer as described above. Similarly, an indication of the fullness of encoder enhancement layer smoothing buffer 40 is received by enhancement layer rate controller 50 for determining quantization step size QEL for encoding the enhancement layer. Buffers and AAL devices and the functions employed therein are well known in the art.
At decoder unit 95, decoder AAL device 75 depacketizes packetized base layer bit-stream BLPACKR and packetized enhancement layer bit-stream ELPACKR and counts the number of lost cells in each frame of video (with the R subscript denoting that the bit-streams within decoder unit 95 may contain errors due to cell loss during transmission on ATM network 70). During each frame period, decoder AAL device 75 in decoder unit 95 transmits the number of cell loss events in the frame in bit-stream LOSS to enhancement layer adaptation device 60 across ATM network 70. Enhancement layer adaptation device 60 can thus adapt spatio-temporal weighting parameter w and slice-start synchronization parameter Nstart to the cell loss rate on ATM network 70 that is monitored by decoder AAL device 75. Advantageously, bit-stream LOSS consists only of a simple count of lost cells per frame, rather than consisting of the exact location of errored macroblocks, thereby reducing the feedback delay to the two-layer encoder 10. The feedback delay is limited to the round-trip transmission delay plus the coding buffer delay because the cell losses are detected prior to decoding. Therefore, the time it takes decoder unit 95 to decode the incoming bit-stream is not a factor in the adaptation response time of encoder unit 15.
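As a sketch of what a per-frame report in bit-stream LOSS might carry: the dataclass and the expected/received counting interface below are assumptions for illustration (a real AAL would detect gaps from cell sequence numbering); the point is that only a count is fed back, never macroblock addresses.

```python
from dataclasses import dataclass

@dataclass
class LossReport:
    """One report per frame period, carried in bit-stream LOSS."""
    frame_number: int
    cells_lost: int   # a bare count; not the locations of errored macroblocks

def make_report(frame_number: int, cells_expected: int, cells_received: int) -> LossReport:
    """Counting missing cells needs no video decoding, which keeps the feedback
    delay down to round-trip transmission plus buffering."""
    return LossReport(frame_number, max(0, cells_expected - cells_received))
```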
Decoder AAL device 75 supplies depacketized base layer bit-stream BLR and enhancement layer bit-stream ELR as an input to decoder base layer smoothing buffer 80 and decoder enhancement layer smoothing buffer 85, respectively, where the bit-streams are stored on a first-in-first-out basis for output to two-layer video decoder 90 (again, with the R subscript denoting that the bit-streams within decoder unit 95 may contain errors due to cell loss during transmission on ATM network 70). Two-layer video decoder 90 decodes base layer bit-stream BLR and enhancement layer bit-stream ELR at a constant bit-rate through the use of decoder base layer smoothing buffer 80 and decoder enhancement layer smoothing buffer 85. Two-layer video decoder 90 supplies as an output video signal VIDOUT, a reconstructed version of original video signal VIDIN. Two-layer video decoders and the techniques employed therein are well known in the art.
The foregoing merely illustrates the principles of the present invention. Although particular applications using the adaptive encoder of the present invention are disclosed, video signals have only been used herein in an exemplary manner, and therefore, the scope of the invention is not limited to the use of video signals. The present invention can be used whenever two-way communication is possible using any signal capable of being divided into units and encoded. For example, other applications include news retrieval services and database browsing. It will be appreciated that those skilled in the art will be able to devise numerous and various alternative arrangements which, although not explicitly shown or described herein, embody the principles of the invention and are within its spirit and scope.

Claims (24)

Claims:
1. A method used by an encoder to encode an original video signal including frames, each frame containing at least one image representation, as an output for transmission on an ATM (asynchronous transfer mode) network, the method comprising the steps of:
encoding each of said frames into high-priority and low-priority bit-streams;
packetizing said encoded frames into cells and outputting said cells for transmission on said ATM network;
depacketizing said transmitted cells to receive said encoded frames;
monitoring a level of cell loss occurring in said received encoded frames;
generating a representation of said level of cell loss for each of said received encoded frames;
transmitting said representation of said level of cell loss to said encoder on said ATM network; and adapting encoding parameters used to encode said low-priority bit-stream in response to said representation of said level of cell loss, where said means for adapting is independent of said means for encoding said high-priority bit-stream.
2. The method as defined in claim 1 wherein said step of adapting includes selecting a prediction mode used to encode said low-priority bit-stream in response to said representation of said level of cell loss.
3. The method as defined in claim 1 wherein said step of adapting includes inserting synchronization information into the encoded low-priority bit-stream in response to said representation of said level of cell loss.
4. The method as defined in claim 1 wherein said step of monitoring includes counting a number of cells lost in each of said received encoded frames.
5. The method as defined in claim 4 wherein said step of transmitting includes sending said number of cells lost in each of said received encoded frames to said encoder.
6. The method as defined in claim 5 wherein said step of adapting includes inserting slice-start synchronization codes into said encoded low-priority bit-stream in response to said representation of said level of cell loss.
7. A method to encode an original video signal including frames, each frame containing at least one image representation, as an output for transmission on an ATM
(asynchronous transfer mode) network, the method comprising the steps of:
encoding each of said frames into high-priority and low-priority bit-streams;
and adapting encoding parameters used to encode said low-priority bit-stream in response to a representation of a level of cell loss on said ATM network, where said adapting is independent of said encoding of said high-priority bit-stream.
8. The method as defined in claim 7 wherein said step of adapting includes selecting a prediction mode used to encode said low-priority bit-stream in response to said representation of said level of cell loss.
9. The method as defined in claim 7 wherein said step of adapting includes inserting synchronization information into said encoded low-priority bit-stream in response to said representation of said level of cell loss.
10. The method as defined in claim 9 wherein said step of adapting includes inserting slice-start synchronization codes into said encoded low-priority bit-stream in response to said representation of said level of cell loss.
11. A method used by an encoder to encode an original video signal including frames, each frame containing at least one image representation, into high and low-priority bit-streams as an output for transmission on an ATM (asynchronous transfer mode) network in packetized cells, the method comprising the steps of:
receiving said transmitted packetized cells including said high and low-priority bit-streams, where encoding parameters used to encode said low-priority bit-stream are adapted in response to a representation of a level of cell loss on said ATM network and said adaptation is independent of said encoding of said high-priority bit-stream;

depacketizing said transmitted cells to receive said encoded frames;
monitoring a level of cell loss occurring in said received encoded frames;
generating a representation of said level of cell loss for each of said received encoded frames; and
transmitting said representation of said level of cell loss to said encoder on said ATM network.
12. The method as defined in claim 11 wherein said step of monitoring includes counting a number of cells lost in each of said received encoded frames.
13. Apparatus used by an encoder to encode an original video signal including frames, each frame containing at least one image representation, as an output for transmission on an ATM (asynchronous transfer mode) network, said apparatus comprising:
means for encoding each of said frames into high-priority and low-priority bit-streams;
means for packetizing said encoded frames into cells, said packetizing means outputting said cells for transmission on said ATM network;
means for depacketizing said cells to receive said encoded frames;
means for monitoring a level of cell loss occurring in each of said received encoded frames;
means for generating a representation of said level of cell loss for each of said received encoded frames;
means for transmitting said representation of said level of cell loss to said encoder on said ATM network; and means for adapting encoding parameters used to encode said low-priority bit-stream in response to said representation of said level of cell loss, where said means for adapting is independent of said means for encoding said high-priority bit-stream.
14. The apparatus as defined in claim 13 wherein said means for adapting includes means for selecting a prediction mode used to encode said low-priority bit-stream in response to said representation of said level of cell loss.
15. The apparatus as defined in claim 13 wherein said means for adapting includes means for inserting synchronization information into the encoded low-priority bit-stream in response to said representation of said level of cell loss.
16. The apparatus as defined in claim 13 wherein said means for monitoring includes means for counting a number of cells lost in each of said received encoded frames.
17. The apparatus as defined in claim 16 wherein said means for transmitting includes means for sending said number of cells lost in each of said received encoded frames to said encoder.
18. The apparatus as defined in claim 16 wherein said means for adapting includes means for inserting slice-start synchronization codes into the encoded low-priority bit-stream in response to said representation of said level of cell loss.
19. Apparatus to encode an original video signal including frames, each frame containing at least one image representation, as an output for transmission on an ATM (asynchronous transfer mode) network, said apparatus comprising:
means for encoding each of said frames into high-priority and low-priority bit-streams; and means for adapting encoding parameters used to encode said low-priority bit-stream in response to a representation of a level of cell loss on said ATM network, where said means for adapting is independent of said means for encoding said high-priority bit-stream.
20. The apparatus as defined in claim 19 wherein said means for adapting includes means for selecting a prediction mode used to encode said low-priority bit-stream in response to said representation of said level of cell loss.
21. The apparatus as defined in claim 19 wherein said means for adapting includes means for inserting synchronization information into said low-priority bit-stream in response to said representation of said level of cell loss.
22. The apparatus as defined in claim 19 wherein said means for adapting includes means for inserting slice start synchronization codes into said encoded low-priority bit-stream in response to said representation of said level of cell loss.
23. Apparatus used by an encoder to encode an original video signal including frames, each frame containing at least one image representation, into high and low-priority bit-streams as an output for transmission on an ATM (asynchronous transfer mode) network in packetized cells, said apparatus comprising:
means for receiving said transmitted packetized cells including said high and low-priority bit-streams, where encoding parameters used to encode said low-priority bit-stream are adapted in response to a representation of a level of cell loss on said ATM
network and means for adaptation is independent of said means for encoding said high-priority bit-stream;
means for depacketizing said transmitted cells to receive said encoded frames;
means for monitoring a level of cell loss occurring in said received encoded frames;
means for generating a representation of said level of cell loss for each of said received encoded frames; and means for transmitting said representation of said level of cell loss to said encoder on said ATM network.
24. The apparatus as defined in claim 23 wherein said means for monitoring includes means for counting a number of cells lost in each of said received encoded frames.
CA002108338A 1993-09-02 1993-10-13 Adaptive video encoder for two-layer encoding of video signals on atm (asynchronous transfer mode) networks Expired - Fee Related CA2108338C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/113,788 US5515377A (en) 1993-09-02 1993-09-02 Adaptive video encoder for two-layer encoding of video signals on ATM (asynchronous transfer mode) networks
US113,788 1993-09-02

Publications (2)

Publication Number Publication Date
CA2108338A1 CA2108338A1 (en) 1995-03-03
CA2108338C true CA2108338C (en) 1999-07-13

Family

ID=22351529

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002108338A Expired - Fee Related CA2108338C (en) 1993-09-02 1993-10-13 Adaptive video encoder for two-layer encoding of video signals on atm (asynchronous transfer mode) networks

Country Status (3)

Country Link
US (1) US5515377A (en)
JP (1) JP3027492B2 (en)
CA (1) CA2108338C (en)

Families Citing this family (86)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4408974C2 (en) * 1994-03-16 1996-07-11 Siemens Ag Modularly structured ATM communication system with communication modules
GB2306867B (en) * 1995-10-26 1998-06-03 Bosch Gmbh Robert Method of optimizing the transmission of signals
US6075768A (en) 1995-11-09 2000-06-13 At&T Corporation Fair bandwidth sharing for video traffic sources using distributed feedback control
DE19547707A1 (en) * 1995-12-20 1997-07-03 Thomson Brandt Gmbh Process, encoder and decoder for the transmission of hierarchical digital signals divided into several parts
JP3335069B2 (en) * 1996-04-11 2002-10-15 富士通株式会社 Fixed-length cell multiplex transmission device, fixed-length cell multiplex transmission method, fixed-length cell transmission device, and fixed-length cell multiplex device
CA2208950A1 (en) * 1996-07-03 1998-01-03 Xuemin Chen Rate control for stereoscopic digital video encoding
JPH10117353A (en) * 1996-10-09 1998-05-06 Nec Corp Data processor and receiver
US6011590A (en) * 1997-01-03 2000-01-04 Ncr Corporation Method of transmitting compressed information to minimize buffer space
US6141053A (en) * 1997-01-03 2000-10-31 Saukkonen; Jukka I. Method of optimizing bandwidth for transmitting compressed video data streams
US6078958A (en) * 1997-01-31 2000-06-20 Hughes Electronics Corporation System for allocating available bandwidth of a concentrated media output
US6084910A (en) * 1997-01-31 2000-07-04 Hughes Electronics Corporation Statistical multiplexer for video signals
US6097435A (en) * 1997-01-31 2000-08-01 Hughes Electronics Corporation Video system with selectable bit rate reduction
US6188436B1 (en) 1997-01-31 2001-02-13 Hughes Electronics Corporation Video broadcast system with video data shifting
US6005620A (en) * 1997-01-31 1999-12-21 Hughes Electronics Corporation Statistical multiplexer for live and pre-compressed video
US6091455A (en) * 1997-01-31 2000-07-18 Hughes Electronics Corporation Statistical multiplexer for recording video
DE69835388T2 (en) * 1997-03-17 2007-07-19 Sony Corp. Image encoder and image decoder
CA2255923C (en) * 1997-04-01 2005-06-07 Sony Corporation Picture coding device, picture coding method, picture decoding device, picture decoding method, and providing medium
US7197190B1 (en) * 1997-09-29 2007-03-27 Canon Kabushiki Kaisha Method for digital data compression
US5969579A (en) * 1997-10-17 1999-10-19 Ncr Corporation ECL pulse amplitude modulated encoder driver circuit
WO1999021367A1 (en) * 1997-10-20 1999-04-29 Mitsubishi Denki Kabushiki Kaisha Image encoder and image decoder
KR19990033458A (en) * 1997-10-24 1999-05-15 전주범 Heuristic Parameter Estimation Method for Variable Bit Rate Video Traffic Using Fourier Filtering
KR19990033459A (en) * 1997-10-24 1999-05-15 전주범 Hurst Parameter Estimation Method for Variable Bit Rate Video Traffic Using Fractal Area
US6731811B1 (en) 1997-12-19 2004-05-04 Voicecraft, Inc. Scalable predictive coding method and apparatus
US6215766B1 (en) * 1998-01-30 2001-04-10 Lucent Technologies Inc. Hierarchical rate control of receivers in a communication system transmitting layered video multicast data with retransmission (LVMR)
DE19804564A1 (en) * 1998-02-05 1999-08-12 Fraunhofer Ges Forschung Communication network, method for transmitting a signal, network connection unit and method for adapting the data rate of a scaled data stream
JP3558522B2 (en) * 1998-06-12 2004-08-25 三菱電機株式会社 Rate control communication device and rate control communication method
JP2001169293A (en) * 1999-12-08 2001-06-22 Nec Corp Picture transmission apparatus
US7823182B1 (en) * 1999-12-22 2010-10-26 AT & T Intellectual Property II Method and system for adaptive transmission of smoothed data over wireless channels
US6445696B1 (en) * 2000-02-25 2002-09-03 Network Equipment Technologies, Inc. Efficient variable rate coding of voice over asynchronous transfer mode
US6973501B1 (en) * 2000-06-21 2005-12-06 Adc Telecommunications, Inc. Reducing loss in transmission quality under changing network conditions
DE10033110B4 (en) * 2000-07-07 2005-06-16 Siemens AG Method and system for transmitting digitized moving pictures from a transmitter to a receiver and associated decoder
FI120125B (en) * 2000-08-21 2009-06-30 Nokia Corp Image Coding
WO2002047388A2 (en) 2000-11-14 2002-06-13 Scientific-Atlanta, Inc. Networked subscriber television distribution
US8127326B2 (en) 2000-11-14 2012-02-28 Claussen Paul J Proximity detection using wireless connectivity in a communications system
US7319667B1 (en) 2000-11-15 2008-01-15 Cisco Technology, Inc. Communication system with priority data compression
US6987728B2 (en) 2001-01-23 2006-01-17 Sharp Laboratories Of America, Inc. Bandwidth allocation system
US7958532B2 (en) * 2001-06-18 2011-06-07 At&T Intellectual Property Ii, L.P. Method of transmitting layered video-coded information
US7039113B2 (en) * 2001-10-16 2006-05-02 Koninklijke Philips Electronics N.V. Selective decoding of enhanced video stream
JP4150951B2 (en) * 2002-02-19 2008-09-17 Sony Corporation Video distribution system, video distribution apparatus and method, and program
AU2003237486A1 (en) * 2002-06-11 2003-12-22 Thomson Licensing S.A. Multimedia server with simple adaptation to dynamic network loss conditions
US20050220441A1 (en) * 2002-07-16 2005-10-06 Comer Mary L Interleaving of base and enhancement layers for HD-DVD
US7516470B2 (en) 2002-08-02 2009-04-07 Cisco Technology, Inc. Locally-updated interactive program guide
US7908625B2 (en) 2002-10-02 2011-03-15 Robertson Neil C Networked multimedia system
US8046806B2 (en) 2002-10-04 2011-10-25 Wall William E Multiroom point of deployment module
US7545935B2 (en) * 2002-10-04 2009-06-09 Scientific-Atlanta, Inc. Networked multimedia overlay system
US7360235B2 (en) 2002-10-04 2008-04-15 Scientific-Atlanta, Inc. Systems and methods for operating a peripheral record/playback device in a networked multimedia system
KR100552169B1 (en) * 2002-10-15 2006-02-13 SK Telecom Co., Ltd. Video streaming signal compression device for a mobile telecommunication system
US8204079B2 (en) * 2002-10-28 2012-06-19 Qualcomm Incorporated Joint transmission of multiple multimedia streams
US8130831B2 (en) * 2002-11-25 2012-03-06 Thomson Licensing Two-layer encoding for hybrid high-definition DVD
US7852406B2 (en) * 2002-12-06 2010-12-14 Broadcom Corporation Processing high definition video data
US7487532B2 (en) * 2003-01-15 2009-02-03 Cisco Technology, Inc. Optimization of a full duplex wideband communications system
US8094640B2 (en) 2003-01-15 2012-01-10 Robertson Neil C Full duplex wideband communications system for a local coaxial network
CN1860791A (en) * 2003-09-29 2006-11-08 Koninklijke Philips Electronics N.V. System and method for combining advanced data partitioning and fine granularity scalability for efficient spatio-temporal-SNR scalability video coding and streaming
US7599002B2 (en) * 2003-12-02 2009-10-06 Logitech Europe S.A. Network camera mounting system
US20050120128A1 (en) * 2003-12-02 2005-06-02 Wilife, Inc. Method and system of bandwidth management for streaming data
JP2007513565A (en) * 2003-12-03 2007-05-24 Koninklijke Philips Electronics N.V. System and method with improved scalability support in MPEG-2 system
EP1555821A1 (en) * 2004-01-13 2005-07-20 Sony International (Europe) GmbH Method for pre-processing digital data, digital to analog and analog to digital conversion system
US20060171453A1 (en) * 2005-01-04 2006-08-03 Rohlfing Thomas R Video surveillance system
KR100723403B1 (en) * 2005-02-28 2007-05-30 Samsung Electronics Co., Ltd. A prediction image generating method and apparatus using a single coding mode among color components, and an image and video encoding/decoding method and apparatus using it
US20070143776A1 (en) * 2005-03-01 2007-06-21 Russ Samuel H Viewer data collection in a multi-room network
US20060218581A1 (en) * 2005-03-01 2006-09-28 Barbara Ostrowska Interactive network guide with parental monitoring
US7725799B2 (en) * 2005-03-31 2010-05-25 Qualcomm Incorporated Power savings in hierarchically coded modulation
US8619860B2 (en) * 2005-05-03 2013-12-31 Qualcomm Incorporated System and method for scalable encoding and decoding of multimedia data using multiple layers
US7974341B2 (en) * 2005-05-03 2011-07-05 Qualcomm Incorporated Rate control for multi-layer video design
US20060255931A1 (en) * 2005-05-12 2006-11-16 Hartsfield Andrew J Modular design for a security system
DE102005032080A1 (en) * 2005-07-08 2007-01-11 Siemens AG A method for transmitting a media data stream and a method for receiving and creating a reconstructed media data stream, and associated transmitting device and receiving device
US8345768B1 (en) * 2005-07-28 2013-01-01 Teradici Corporation Progressive block encoding using region analysis
US20070030833A1 (en) * 2005-08-02 2007-02-08 Pirzada Fahd B Method for managing network content delivery using client application workload patterns and related systems
US7876998B2 (en) 2005-10-05 2011-01-25 Wall William E DVD playback over multi-room by copying to HDD
JP4967020B2 (en) * 2006-07-21 2012-07-04 Vidyo, Inc. System and method for jitter buffer reduction in scalable coding
JP5059862B2 (en) * 2006-08-21 2012-10-31 Trident Microsystems, Inc. Method and apparatus for motion compensation processing of video signal
FR2919976B1 (en) * 2007-08-09 2009-11-13 Alcatel Lucent SAS Method of transmitting, to heterogeneous terminals and via TDM/TDMA-type multiplexing infrastructure, layered multimedia content, and processing device and decoder therefor
JP5339697B2 (en) * 2007-08-14 2013-11-13 Canon Kabushiki Kaisha Transmission device, transmission method, and computer program
US8588583B2 (en) * 2007-08-22 2013-11-19 Adobe Systems Incorporated Systems and methods for interactive video frame selection
US10607454B2 (en) * 2007-12-20 2020-03-31 Ncr Corporation Device management portal, system and method
US8531961B2 (en) 2009-06-12 2013-09-10 Cygnus Broadband, Inc. Systems and methods for prioritization of data for intelligent discard in a communication network
US8627396B2 (en) 2009-06-12 2014-01-07 Cygnus Broadband, Inc. Systems and methods for prioritization of data for intelligent discard in a communication network
WO2010144833A2 (en) 2009-06-12 2010-12-16 Cygnus Broadband Systems and methods for intelligent discard in a communication network
EP2452501B1 (en) * 2009-07-10 2020-09-02 Samsung Electronics Co., Ltd. Spatial prediction method and apparatus in layered video coding
US8750370B2 (en) * 2009-09-04 2014-06-10 Brocade Communications Systems, Inc. Congestion-adaptive compression
US8411743B2 (en) * 2010-04-30 2013-04-02 Hewlett-Packard Development Company, L.P. Encoding/decoding system using feedback
JP2013526795A (en) * 2010-05-10 2013-06-24 Samsung Electronics Co., Ltd. Method and apparatus for transmitting and receiving layered coded video
US8711928B1 (en) 2011-10-05 2014-04-29 CSR Technology, Inc. Method, apparatus, and manufacture for adaptation of video encoder tuning parameters
US20130208809A1 (en) * 2012-02-14 2013-08-15 Microsoft Corporation Multi-layer rate control
US20150334389A1 (en) * 2012-09-06 2015-11-19 Sony Corporation Image processing device and image processing method
US9357211B2 (en) * 2012-12-28 2016-05-31 Qualcomm Incorporated Device and method for scalable and multiview/3D coding of video information

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5140417A (en) * 1989-06-20 1992-08-18 Matsushita Electric Co., Ltd. Fast packet transmission system of video data
US5029164A (en) * 1990-04-13 1991-07-02 Digital Equipment Corporation Congestion avoidance in high-speed network carrying bursty traffic
JPH0490645A (en) * 1990-08-03 1992-03-24 Mitsubishi Electric Corp Voice packet assembly/disassembly equipment
JP3241716B2 (en) * 1990-08-31 2001-12-25 Toshiba Corp ATM exchange method
JPH04220839A (en) * 1990-12-21 1992-08-11 Nippon Telegr & Teleph Corp <NTT> Packet transmitter
JPH04257145A (en) * 1991-02-12 1992-09-11 Hitachi Ltd Method and device for packet flow rate control
GB2261798B (en) * 1991-11-23 1995-09-06 Dowty Communications Ltd Packet switching networks

Also Published As

Publication number Publication date
JPH07107096A (en) 1995-04-21
CA2108338A1 (en) 1995-03-03
JP3027492B2 (en) 2000-04-04
US5515377A (en) 1996-05-07

Similar Documents

Publication Publication Date Title
CA2108338C (en) Adaptive video encoder for two-layer encoding of video signals on atm (asynchronous transfer mode) networks
EP1454452B1 (en) Method, system and device for data transmission
US7095782B1 (en) Method and apparatus for streaming scalable video
WO1995028684A1 (en) Device, method and system for variable bit-rate packet video communications
Kieu et al. Cell-loss concealment techniques for layered video codecs in an ATM network
KR100601615B1 (en) Apparatus for compressing video according to network bandwidth
JP3439361B2 (en) Image encoding device and moving image transmission system
Huang Source modelling for packet video
Reibman DCT-based embedded coding for packet video
Ghanbari et al. Effect of bit rate variation of the base layer on the performance of two-layer video codecs
Nomura et al. Layered coding for ATM based video distribution systems
Zheng et al. TSFD: two stage frame dropping for scalable video transmission over data networks
US6947387B1 (en) Video data resending method
Rose et al. Impact of MPEG video traffic on an ATM multiplexer
Leduc et al. Universal VBR videocodecs for ATM networks in the Belgian broadband experiment
Ogino et al. ATM video signal multiplexer with congestion control function
JPH043684A (en) Variable rate moving image encoder
Mitchell et al. Issues in video transmission over broadband ATM networks
Frossard et al. MPEG-2 over Lossy Packet Networks: QoS Analysis and Improvement
Chien et al. Automatic network-adaptive ultra-low-bit-rate video coding
Murthy et al. Impact of QOS requirements on video coding for ATM networks
Shaffer et al. Improving perceptual quality and network performance for transmission of H.263 video over ATM
Ghanbari A motion vector replenishment video codec for ATM networks
Blake Optimized two-layer DCT-based video compression algorithm for packet-switched network transmission
Benelli et al. Controlled degradation of video images over ATM networks

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20091013