US20070230583A1 - System and method for video error concealment - Google Patents

Info

Publication number
US20070230583A1
Authority
US
United States
Prior art keywords
macroblock
video
macroblocks
data
lost
Prior art date
Legal status
Abandoned
Application number
US11/753,465
Inventor
Michael Horowitz
Rick Flott
Current Assignee
Polycom Inc
Original Assignee
Polycom Inc
Application filed by Polycom Inc
Priority to US11/753,465
Publication of US20070230583A1

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/59: predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N 7/12: systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
    • H04N 19/107: selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N 19/129: scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H04N 19/146: data rate or code amount at the encoder output
    • H04N 19/159: prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/166: feedback from the receiver or from the transmission channel concerning the amount of transmission errors, e.g. bit error rate [BER]
    • H04N 19/172: adaptive coding characterised by the coding unit, the unit being a picture, frame or field
    • H04N 19/176: adaptive coding characterised by the coding unit, the unit being a block, e.g. a macroblock
    • H04N 19/46: embedding additional information in the video signal during the compression process
    • H04N 19/51: motion estimation or motion compensation
    • H04N 19/517: processing of motion vectors by encoding
    • H04N 19/52: processing of motion vectors by predictive encoding
    • H04N 19/61: transform coding in combination with predictive coding
    • H04N 19/66: error resilience involving data partitioning, i.e. separation of data into packets or partitions according to importance
    • H04N 19/70: syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/895: detection of transmission errors at the decoder in combination with error concealment
    • H04N 19/93: run-length coding

Definitions

  • The present invention relates generally to video communication, and more particularly to video error concealment.
  • Video images have become an increasingly important part of global communication.
  • video conferencing and video telephony have a wide range of applications such as desktop and room-based conferencing, video over the Internet and over telephone lines, surveillance and monitoring, telemedicine, and computer-based training and education.
  • video and accompanying audio information is transmitted across telecommunication links, including telephone lines, ISDN, DSL, and radio frequencies.
  • CIF: Common Intermediate Format
  • ITU: International Telecommunications Union
  • FCIF: Full CIF
  • FIG. 1 is a table of the resolution and bit rate requirements for various video formats under the assumption that 12 bits are required to represent one pixel, according to the prior art.
  • The bit rates (in megabits per second, Mbps) shown are for uncompressed color frames.
  • Video compression coding is a method of encoding digital video data such that it requires less memory to store the video data and reduces required transmission bandwidth.
  • Certain compression/decompression (CODEC) schemes are frequently used to compress video frames to reduce required transmission bit rates.
  • CODEC hardware and software allow digital video data to be compressed into a smaller binary format than required by the original (i.e., uncompressed) digital video format.
  • a macroblock is a unit of information containing four 8×8 blocks of luminance data and two corresponding 8×8 blocks of chrominance data in accordance with a 4:2:0 sampling structure, where the chrominance data is subsampled 2:1 in both vertical and horizontal directions.
  • H.320: ISDN-based video conferencing
  • H.324: POTS-based video telephony
  • H.323: LAN or IP-based video conferencing
  • H.263 (or its predecessor, H.261) provides the video coding part of these standards groups.
  • a motion estimation and compensation scheme is one conventional method typically used for reducing transmission bandwidth requirements for a video signal. Because the macroblock is the basic data unit, the motion estimation and compensation scheme may compare a given macroblock in a current video frame with the given macroblock's surrounding area in a previously transmitted video frame, and attempt to find a close data match. Typically, a closely matched macroblock in the previously transmitted video frame is spatially offset from the given macroblock by less than a width of the given macroblock. If a close data match is found, the scheme subtracts the given macroblock in the current video frame from the closely matched, offset macroblock in the previously transmitted video frame so that only a difference (i.e., residual) and the spatial offset needs to be encoded and transmitted.
  • the spatial offset is commonly referred to as a motion vector. If the motion estimation and compensation process is efficient, the remaining residual macroblock should contain only an amount of information necessary to describe data associated with pixels that change from the previous video frame to the current video frame and a motion vector. Thus, areas of a video frame that do not change (e.g., the background) are not encoded and transmitted.
  • the H.263 standard specifies that the motion vectors used for motion estimation and motion compensation be differentially encoded. Although differential encoding reduces data amounts required for transmission, any error in which motion vector data is lost or corrupted for one macroblock negatively impacts adjacent macroblocks. The result is a propagation of error due to the corrupted data which leads to lower video quality.
  • video data may be transmitted on heterogeneous communications networks in which one of the endpoints is associated with a circuit-switched network and a gateway or other packet-switched to circuit switched network bridging device is used.
  • the present system and method overcome or substantially alleviate prior problems associated with packet loss of video data.
  • the present invention provides a system and method that encodes, reorders, and packetizes video information for transmission across a packet switched network with a capability to conceal video error caused by video data packet loss.
  • video signals are encoded into sets of macroblocks.
  • a macroblock reordering engine then assigns integer labels called macroblock group identifiers (MBGIDs) to each macroblock.
  • MBGIDs: macroblock group identifiers
  • adjacent macroblocks are not assigned identical MBGIDs in one exemplary embodiment.
  • a macroblock packetization engine then enables packetizing of the macroblocks, such that macroblocks assigned identical MBGIDs are packetized together. For embodiments of the invention in which adjacent macroblocks are not assigned identical MBGIDs, it follows that spatially adjacent macroblocks are not packetized together.
  • corresponding data such as an intra-macroblock map, may be incorporated in a picture header or conveyed by some other mechanism to facilitate a corresponding decoding process.
  • an image processing engine when an image processing engine receives data packets containing encoded macroblocks, the data packets are depacketized, and the encoded macroblocks are ordered and decoded.
  • the image processing engine depacketizes the received data packets, then decodes the macroblocks in an order in which they were received to reduce processing delay. If one or more data packets are lost, data accompanying the macroblocks of successfully transmitted data packets are used to attenuate effects of the lost data packets.
  • Various methods based on whether the lost macroblocks were intra-coded or inter-coded compensate for the missing macroblocks. Upon compensation, the video signal may then be displayed. As a result, the present system and method is capable of concealing video errors resulting from data packet loss.
  • FIG. 1 is a table of the resolution and bit rate requirements for various video formats, according to the prior art
  • FIG. 2 is a block diagram of an exemplary video conferencing system, according to the present invention.
  • FIG. 3 is a block diagram of an exemplary video conference station of the video conferencing system of FIG. 2 ;
  • FIG. 4 is a block diagram of an exemplary embodiment of the image processing engine of FIG. 3 ;
  • FIG. 5 is an exemplary diagram of a macroblock reorder pattern for a QCIF formatted video frame, where each number is an MBGID assigned to a macroblock in a corresponding spatial location;
  • FIG. 6 is an exemplary diagram of the QCIF frame macroblock reorder pattern of FIG. 5 , where a data packet containing coded macroblock data for macroblocks with MBGID=5 is lost;
  • FIG. 7 is a block diagram of a two-dimensional interpolation scheme using data associated with pixels located in adjacent macroblocks, according to one embodiment of the present invention.
  • FIG. 8 is an exemplary block diagram of adjacent macroblocks used to estimate the motion vector of lost macroblock m, according to the present invention.
  • FIG. 9 is an exemplary flowchart of method steps for video data processing, according to one embodiment of the present invention.
  • FIG. 10 is an exemplary flowchart of method steps for video error concealment when receiving video data, according to the present invention.
  • the present invention conceals errors in video signals caused by data packet loss.
  • the present system and method departs from existing technologies by packetizing macroblocks in a flexible (e.g., non-raster-scan) order in a video frame.
  • macroblocks are packetized in an order specified by a macroblock reorder pattern.
  • motion vectors for each macroblock may be non-differentially encoded.
  • FIG. 2 illustrates an exemplary video conferencing system 200 .
  • the video conferencing system 200 includes a local video conference station 202 and a remote video conference station 204 connected through a network 206 .
  • FIG. 2 only shows two video conference stations 202 and 204 , those skilled in the art will recognize that more video conference stations may be coupled to the video conferencing system 200 .
  • the network may be any type of electronic transmission medium, such as, but not limited to, POTS, cable, fiber optic, and radio transmission media.
  • FIG. 3 is a block diagram of an exemplary video conference station 300 .
  • the video conference station 300 will be described as the local video conference station 202 ( FIG. 2 ), although the remote video conference station 204 ( FIG. 2 ) may contain a similar configuration.
  • the video conference station 300 includes a display device 302 , a CPU 304 , a memory 306 , at least one video capture device 308 , an image processing engine 310 , and a communication interface 312 .
  • other devices may be provided in the video conference station 300 , or not all above named devices provided.
  • the at least one video capture device 308 may be implemented as a charge coupled device (CCD) camera, a complementary metal oxide semiconductor (CMOS) camera, or any other type of image capture device.
  • CCD: charge coupled device
  • CMOS: complementary metal oxide semiconductor
  • the at least one video capture device 308 captures images of a user, conference room, or other scenes, and sends the images to the image processing engine 310 .
  • the image processing engine 310 processes the video image into data packets before the communication interface 312 transmits the data packets to the remote video conference station 204 .
  • the image processing engine 310 will be discussed in more detail in connection with FIG. 4 .
  • the image processing engine 310 also transforms received data packets from the remote video conference station 204 into a video signal for display on the display device 302 .
  • FIG. 4 is an exemplary embodiment of the image processing engine 310 of FIG. 3 .
  • the image processing engine 310 includes a coding engine 402 , a macroblock reordering engine 404 , a macroblock packetization engine 406 , and a communication buffer 408 .
  • a video signal from the video capture device 308 enters the coding engine 402 , which converts each frame of video into a desired format, and transforms each frame of the video signal into a set of macroblocks.
  • a macroblock is a data unit that contains blocks of data comprising luminance and chrominance components associated with picture elements (also referred to as pixels).
  • a macroblock consists of four 8×8 blocks of luminance data and two corresponding 8×8 blocks of chrominance data in a 4:2:0 chroma sampling format.
  • An 8×8 block of data is an eight-column by eight-row matrix of data, where each data value corresponds to a pixel of the video frame.
  • a 4:2:0 chroma formatted macroblock comprises data covering a 16 pixel by 16 pixel section of the video frame.
  • the present invention is not limited to macroblocks as conventionally defined, but may be extended to any data unit comprising luminance and/or chrominance data.
  • sampling formats such as a 4:2:2 chroma sampling format comprising four 8×8 blocks of luminance data and four corresponding 8×8 blocks of chrominance data, or a 4:4:4 chroma sampling format comprising four 8×8 blocks of luminance data and eight corresponding 8×8 blocks of chrominance data.
  • each macroblock may be “intra-coded” or “inter-coded,” and a frame may be comprised of any combination of intra-coded and inter-coded macroblocks.
  • Inter-coded macroblocks are encoded using temporal similarities (i.e., similarities that exist between a macroblock from one frame and a closely matched macroblock from a previous frame).
  • a given inter-coded macroblock comprises encoded differences between the given macroblock and a closely matched macroblock from a previous video frame.
  • the closely matched macroblock from the previous video frame may comprise data associated with pixels that are offset from the pixels associated with the given macroblock.
  • intra-coded macroblocks are encoded without use of information from other video frames in a manner similar to that employed by the JPEG still image encoding standard.
  • the coding engine 402 computes differences between data of the given macroblock of a current video frame with data of a macroblock from a previous video frame (referred to as an offset macroblock), where the differences may be realized, for example, by a mean-absolute error or a mean-squared error between data corresponding to pixels located at co-located positions within the macroblocks. For the given macroblock, the coding engine 402 computes errors for a plurality of offset macroblocks. If the coding engine 402 only finds errors greater than a predetermined difference threshold value, then significant similarities do not exist between data from the given macroblock and data from the previous frame, and the macroblock is intra-coded. However, if one error is found to be less than the predetermined difference threshold value for the given macroblock and a given offset macroblock from the previous frame, then the given macroblock is inter-coded.
  • the coding engine 402 subtracts the given macroblock's data from the offset macroblock's data (i.e., luminance and chrominance data associated with a pixel of the given macroblock is subtracted from luminance and chrominance data associated with a corresponding pixel of the offset macroblock for every pixel) to give difference data, encodes the difference data using standard coding techniques such as Discrete Cosine Transforms and quantization methods among others, determines an offset vector from the given macroblock to the offset macroblock (referred to as a motion vector), and encodes the motion vector.
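To make the mode decision and motion search described above concrete, here is a minimal Python sketch that compares a 16×16 macroblock against spatially offset candidates in the previous frame using a mean-absolute error, and inter-codes only when a sufficiently close match is found. The search range, threshold value, and function names are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def choose_mode(curr, prev, top, left, search=7, threshold=6.0):
    """Decide intra vs. inter coding for the 16x16 macroblock whose
    top-left corner is at (top, left) in the current frame.

    curr, prev: 2-D numpy arrays of luminance samples (current frame and
    previous reconstructed frame). Returns ("intra", None) or
    ("inter", (dy, dx)), where (dy, dx) is the motion vector."""
    block = curr[top:top + 16, left:left + 16].astype(np.int32)
    best_err, best_mv = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + 16 > prev.shape[0] or x + 16 > prev.shape[1]:
                continue  # candidate falls outside the previous frame
            cand = prev[y:y + 16, x:x + 16].astype(np.int32)
            err = np.abs(block - cand).mean()  # mean-absolute error
            if best_err is None or err < best_err:
                best_err, best_mv = err, (dy, dx)
    if best_err is None or best_err > threshold:
        return "intra", None          # no sufficiently close match found
    return "inter", best_mv           # residual = block - matched block
```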
  • video coding standards such as H.261 and H.263, specify that motion vectors of inter-coded macroblocks be differentially encoded to improve coding efficiency.
  • differential encoding causes errors created by lost or corrupted motion vector data to propagate to adjacent macroblocks that would otherwise be decoded without error, since encoded motion vector data associated with a given macroblock is, in general, not independent of the motion vector data of neighboring macroblocks.
  • the effects of the motion vector data of a given macroblock are not spatially localized to the given macroblock.
  • the motion vectors of each inter-coded macroblock are non-differentially encoded, then the effects of the motion vector data are localized to the given macroblock, resulting in a significant increase in error resilience.
  • a change in motion vector coding method from a differential to a non-differential technique results in a small loss in overall coding efficiency (typically less than a few percent).
  • the motion vector components associated with each inter-coded macroblock are not differentially encoded, according to one embodiment of the present invention.
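The contrast between differential and non-differential motion vector coding can be sketched as follows. The median predictor in the differential branch is the H.263-style choice and serves only as an illustration; the point is that the non-differential form keeps each macroblock's motion data independent of its neighbors, so a lost packet cannot corrupt the motion vectors of macroblocks it did not carry.

```python
import statistics

def encode_mv_differential(mv, left, above, above_right):
    """H.263-style differential coding: transmit mv minus a median
    predictor formed from neighboring macroblocks' motion vectors.
    Losing a neighbor also corrupts the reconstruction of this vector."""
    pred = (statistics.median([left[0], above[0], above_right[0]]),
            statistics.median([left[1], above[1], above_right[1]]))
    return (mv[0] - pred[0], mv[1] - pred[1])

def encode_mv_non_differential(mv):
    """Non-differential coding: transmit the components directly, so a
    lost packet affects only the macroblocks it actually carried."""
    return mv
```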
  • the coding engine 402 may intra-code macroblocks of a frame using a “walk-around-refresh” mechanism.
  • the “walk-around-refresh” mechanism is a deterministic mechanism to clean up reference frame mismatches, called data drift, by intra-coding a specific pattern of macroblocks for each frame.
  • the coding engine 402 uses macroblocks of a reference frame as offset macroblocks in decoding inter-coded macroblocks of a current frame.
  • the “walk-around-refresh” mechanism is enabled to intra-code a pattern of macroblocks using an integer walk-around interval w selected from a set of predetermined integer walk-around intervals.
  • the coding engine 402 intra-codes every w-th macroblock.
  • the walk-around interval may be selected based upon video data transmission rates and error rates.
  • these “walk-around-refresh” intra-coded macroblocks replace corresponding macroblocks from previous frames that may be corrupted due to video data transmission errors. Any macroblock that may be corrupted due to video data transmission errors (and is not replaced) further propagates and possibly magnifies data drift when the coding engine of the remote video conference station 204 uses the corrupted macroblocks as reference macroblocks for decoding other received macroblocks.
  • the “walk-around-refresh” intra-coded macroblocks provide the coding engine of the remote video conference station 204 with a “fresh” set of intra-coded macroblocks to be used as reference macroblocks, thereby reducing the propagation of data drift.
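A minimal sketch of one possible walk-around-refresh schedule appears below. The patent does not spell out the exact traversal, so the policy here of advancing the starting position by one macroblock per frame is an assumption.

```python
def walk_around_refresh(num_macroblocks, w, frame_index):
    """Return the set of macroblock indices to intra-code in this frame.

    Every w-th macroblock is refreshed, and the starting offset walks
    forward by one macroblock per frame so that, over w consecutive
    frames, every macroblock is intra-coded at least once."""
    start = frame_index % w
    return set(range(start, num_macroblocks, w))

# Example: a QCIF frame has 99 macroblocks; with w = 9 the whole frame
# is refreshed over 9 consecutive frames.
refresh_this_frame = walk_around_refresh(99, 9, frame_index=3)
```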
  • the coding engine 402 may generate an intra-macroblock map that identifies which macroblocks in a coded video frame are intra-coded. After the intra-macroblock map is generated, the image processing engine 310 sends the map to the remote video conference station 204 .
  • the map may be sent as part of a picture header field associated with the coded video frame, for example, although other fields may be used.
  • the coding engine 402 may generate the intra-macroblock map in one of two ways.
  • the coding engine 402 uses run-length encoding to describe locations of intra-coded macroblocks within the frame. Run-length encoding is a technique to reduce the size of a repeating string of characters.
  • the coding engine 402 generates a bitmap, where each bit in the bitmap corresponds to one macroblock of the frame. A bit's value identifies a corresponding macroblock's coding type. For example, in one embodiment of the invention, a “1” bit signifies that a corresponding macroblock is intra-coded. In another embodiment of the invention, a “1” bit signifies that the corresponding macroblock is inter-coded. Other methods for generating the intra-macroblock map may be contemplated for use in the present invention.
  • the coding engine 402 selects the intra-macroblock map coding method that produces the fewest number of bits.
  • A 352×288 pixel (i.e., a 352 pixel horizontal resolution by 288 pixel vertical resolution) FCIF video frame comprises 396 macroblocks configured as a 22×18 macroblock matrix.
  • the bitmap encoding method requires 396 bits (one bit for each macroblock).
  • 396 bits are used to transmit the bitmap encoded intra-macroblock map, independent of the number of intra-coded macroblocks within the FCIF frame.
  • the number of bits utilized to transmit the run-length encoded intra-macroblock map is dependent upon the number of intra-coded macroblocks within the FCIF frame.
  • the cost of transmitting a run-length encoded intra-macroblock map is eight bits per intra-coded macroblock (i.e., eight bits per run value), where the run value identifies a location of the intra-coded macroblock within the FCIF frame. Therefore, if the FCIF frame contains n intra-coded macroblocks, then 8n bits are required to transfer the run-length encoded intra-macroblock map.
  • If the run-length encoded intra-macroblock map requires fewer bits than the bitmap encoded map (i.e., if 8n is less than 396 for an FCIF frame), the coding engine 402 selects the run-length encoding method; otherwise, the coding engine 402 selects the bitmap encoding method.
  • The selection of an intra-macroblock map encoding method depends upon the video format, of which the FCIF video frame is one example.
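The choice between the two intra-macroblock map encodings can be made concrete with the following sketch, which follows the FCIF example above (one bit per macroblock for the bitmap, eight bits per intra-coded macroblock for run-length). How the chosen representation is actually serialized into the picture header is not specified here and is left out.

```python
def encode_intra_map(intra_flags):
    """intra_flags: list of booleans, one per macroblock in scan order,
    True where the macroblock is intra-coded.

    Returns ("bitmap", bits) or ("run-length", bits), choosing whichever
    representation needs fewer bits, as the coding engine does."""
    bitmap_bits = len(intra_flags)              # one bit per macroblock
    runs = [i for i, flag in enumerate(intra_flags) if flag]
    rle_bits = 8 * len(runs)                    # eight bits per intra-coded macroblock
    if rle_bits < bitmap_bits:
        return "run-length", rle_bits
    return "bitmap", bitmap_bits

# FCIF: 396 macroblocks, so run-length wins only when fewer than
# 396 / 8 = 49.5 (i.e., at most 49) macroblocks are intra-coded.
```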
  • each macroblock is assigned a macroblock group identifier (MBGID) from a plurality of MBGIDs.
  • MBGID: macroblock group identifier
  • the macroblocks are numbered one to six according to an exemplary macroblock assignment pattern illustrated in FIG. 5 for a QCIF formatted frame having nine rows of eleven macroblocks per row.
  • the maximum MBGID is referred to as a maximum group identifier (MGID).
  • MGID: maximum group identifier
  • In the FIG. 5 example, the MGID is 6. As shown, the MBGIDs are assigned in a manner so as to minimize adjacent macroblocks being assigned the same MBGID. Alternatively, other assignment patterns may assign the same MBGID to adjacent macroblocks or in any other assignment order.
  • the assigning of macroblocks advantageously minimizes a concentration of errors in one region of a frame because macroblocks of a lost data packet are spatially distributed across the frame. Since errors due to lost packets are less likely to be concentrated in one region of the frame, lost data associated with lost macroblocks may be more accurately reconstructed using data from neighboring macroblocks. In other words, spatial interpolation of data from neighboring macroblocks or an estimation of a missing macroblock's motion vectors are more accurately determined, if the loss of data is not spatially localized within the frame.
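The exact reorder pattern of FIG. 5 is not reproduced in this text, so the sketch below uses an assumed cyclic assignment with a per-row offset; with MGID = 6 on an 11-by-9 QCIF macroblock grid it keeps horizontally and vertically adjacent macroblocks in different groups, which is the property the text relies on.

```python
def assign_mbgids(cols=11, rows=9, mgid=6):
    """Return a rows x cols grid of MBGIDs in 1..mgid. For the default
    mgid = 6, no two horizontally or vertically adjacent macroblocks
    share an MBGID (neighbors differ by 1 or 2 modulo 6)."""
    return [[(col + 2 * row) % mgid + 1 for col in range(cols)]
            for row in range(rows)]

def group_by_mbgid(grid):
    """Group macroblock (row, col) coordinates by MBGID; each group is
    later carried in its own data packet(s)."""
    groups = {}
    for r, row in enumerate(grid):
        for c, gid in enumerate(row):
            groups.setdefault(gid, []).append((r, c))
    return groups
```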
  • the coding engine 402 ( FIG. 4 ) of the image processing engine 310 ( FIG. 3 ) of the remote video conference station 204 ( FIG. 2 ) may use a variety of error concealment techniques in conjunction with the reordering of macroblocks to improve video quality. For example, in one embodiment of the invention, the coding engine 402 decodes the neighboring macroblocks of a lost inter-coded macroblock, estimates a motion vector of the lost macroblock, and then uses the estimated motion vector to reconstruct data of the lost macroblock. In another embodiment of the invention, the coding engine 402 may decode the neighboring macroblocks of a lost intra-coded macroblock, and spatially interpolate the decoded neighboring data to reconstruct the lost data. The scope of the present invention covers other error concealment techniques used in conjunction with macroblock reordering to improve video quality due to lost or corrupted macroblocks.
  • the macroblock reordering engine 404 selects an MGID based on video data rates and/or video format.
  • the packetization engine 406 places the macroblocks into six data packets per QCIF frame. However, the packetization engine 406 may use more than one packet with a given MBGID to transport macroblocks with the given MBGID.
  • the splitting of the packets in this manner is typically governed by a maximum transfer unit size (MTU) associated with the network 206 ( FIG. 2 ).
  • MTU: maximum transfer unit size
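A simplified packetizer that respects an MTU limit might look like the following. The byte-level payload format (packet headers, macroblock addressing) is not detailed in the patent and is omitted; only the splitting behavior described above is illustrated.

```python
def packetize_group(coded_macroblocks, mtu=1400):
    """coded_macroblocks: list of byte strings for one MBGID group.
    Returns a list of packet payloads, starting a new packet whenever
    adding the next macroblock would exceed the MTU."""
    packets, current = [], b""
    for mb in coded_macroblocks:
        if current and len(current) + len(mb) > mtu:
            packets.append(current)
            current = b""
        current += mb
    if current:
        packets.append(current)
    return packets
```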
  • the data packets and picture header are forwarded to the communication buffer 408 for transmission across the network 206 ( FIG. 2 ) by the communication interface 312 ( FIG. 3 ).
  • the picture header may be transmitted more than once per frame.
  • the picture header may include the intra-macroblock map.
  • the image processing engine 310 also processes video data packets received from a remote location and provides video signals for display. Initially, video data packets are received by the communication interface 312 ( FIG. 3 ), and forwarded to the communication buffer 408 . The video data packets are then sent to the macroblock packetization engine 406 , which unpacks the macroblocks. Next, the macroblock reordering engine 404 orders the macroblocks back into their original, ordered pattern (i.e., pattern prior to macroblock reordering at the remote video conference station 204 , which is typically raster-scan ( FIG. 2 )).
  • In FIG. 6 , the lost macroblocks are marked with an “x”. It should be noted that the lost macroblocks are advantageously spatially distributed across the QCIF frame, according to one embodiment of the present invention, thus allowing for accurate, low complexity error concealment techniques employing such methods as spatial interpolation or motion vector estimation and compensation.
  • the coding engine 402 determines whether the lost macroblock is intra-coded or inter-coded. For example, the coding engine 402 may examine the intra-macroblock map to determine whether the lost macroblock is intra-coded. As mentioned above, the intra-macroblock map may be sent in picture header fields or as side information conveyed outside a video stream, and may be compressed using a run-length encoding algorithm, configured as a bitmap which identifies intra-coded macroblocks, or some other efficient coding method.
  • If the lost macroblocks are intra-coded, then several error concealment techniques may be utilized. For example, if the lost macroblock is intra-coded as part of a “walk-around-refresh” mechanism, the coding engine 402 replaces the lost macroblock with contents of a “corresponding” macroblock from a previous frame, where two “corresponding” macroblocks cover the same spatial area of their respective frames. According to the present invention, the “walk-around-refresh” mechanism's clean up rate is a function of the data and error rates.
  • FIG. 7 illustrates an exemplary interpolation scheme using data associated with pixels located in adjacent macroblocks.
  • FIG. 7 includes a lost macroblock 705 , a left adjacent macroblock 710 , an upper adjacent macroblock 715 , and a right adjacent macroblock 720 .
  • the coding engine 402 ( FIG. 4 ) interpolates data in a last column of data 730 (indicated by x's) from an 8×8 upper right-hand block 735 of the left adjacent macroblock 710 , and data in a last row of data 740 (indicated by x's) from an 8×8 lower left-hand block 745 of the upper adjacent macroblock 715 .
  • the coding engine 402 interpolates data in a first column of data 755 from an 8×8 upper left-hand block 760 of the right adjacent macroblock 720 , and data in a last row of data 765 from an 8×8 lower right-hand block 770 of the upper adjacent macroblock 715 .
  • Other forms of interpolation may also be applied and other blocks of adjacent macroblocks may be utilized, and are within the scope of the present invention.
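The patent describes interpolating particular rows and columns of the lost macroblock from specific 8×8 blocks of its neighbors but does not give a complete formula, so the sketch below substitutes a simple distance-weighted blend of the nearest available boundary samples as one assumed form of two-dimensional interpolation.

```python
import numpy as np

def conceal_intra_macroblock(left, above, right, size=16):
    """Rough 2-D interpolation for a lost intra-coded macroblock.

    left, above, right: 16x16 arrays of the decoded adjacent macroblocks
    (any may be None if unavailable). Each missing pixel is a
    distance-weighted blend of the nearest available boundary samples:
    the left neighbor's last column, the right neighbor's first column,
    and the upper neighbor's last row."""
    out = np.zeros((size, size), dtype=np.float64)
    for r in range(size):
        for c in range(size):
            samples, weights = [], []
            if left is not None:
                samples.append(left[r, -1]);  weights.append(1.0 / (c + 1))
            if right is not None:
                samples.append(right[r, 0]);  weights.append(1.0 / (size - c))
            if above is not None:
                samples.append(above[-1, c]); weights.append(1.0 / (r + 1))
            if samples:
                out[r, c] = np.average(samples, weights=weights)
    return out.round().astype(np.uint8)
```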
  • FIG. 8 is a block diagram of adjacent macroblocks used to estimate the motion vector of lost macroblock m, according to one embodiment of the present invention. For the lost macroblock m, a median of motion vectors of three neighboring macroblocks a, b, and c is computed.
  • Although the FIG. 8 embodiment of the invention uses motion vectors from adjacent macroblocks a, b, and c to compute an estimated motion vector for macroblock m, any number and any combination of adjacent macroblocks may be used to estimate a lost macroblock's motion vector.
  • the coding engine 402 motion compensates the lost macroblock by using the estimated motion vector to rebuild the lost macroblock's data content. After the data content of all lost macroblocks of a given frame is rebuilt, the coding engine 402 transforms the macroblocks into a video signal for display on the display device 302 ( FIG. 3 ). Although illustrated with only one lost data packet, the present invention may be utilized to conceal errors with multiple lost data packets.
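A minimal sketch of the median-based motion vector estimation and compensation described above; the clamping of the motion-compensated block to the frame boundary is an added safeguard, not something the patent specifies.

```python
import numpy as np

def conceal_inter_macroblock(prev_frame, top, left, neighbor_mvs):
    """Estimate a lost inter-coded macroblock's motion vector as the
    component-wise median of its neighbors' motion vectors, then copy
    the motion-compensated 16x16 block from the previous frame.

    neighbor_mvs: list of (dy, dx) vectors from adjacent macroblocks
    (e.g. a, b, and c in FIG. 8)."""
    dy = int(np.median([mv[0] for mv in neighbor_mvs]))
    dx = int(np.median([mv[1] for mv in neighbor_mvs]))
    y = min(max(top + dy, 0), prev_frame.shape[0] - 16)   # clamp to frame
    x = min(max(left + dx, 0), prev_frame.shape[1] - 16)
    return prev_frame[y:y + 16, x:x + 16].copy(), (dy, dx)
```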
  • FIG. 9 is an exemplary flowchart 900 of method steps for video error concealment when transmitting video data over packet switched networks, according to one embodiment of the present invention.
  • The video capture device 308 ( FIG. 3 ) captures a video signal, and the coding engine 402 ( FIG. 4 ) encodes each frame of the video signal into a set of macroblocks.
  • a video frame may comprise inter-coded macroblocks, intra-coded macroblocks, or any combination of intra-coded and inter-coded macroblocks.
  • a “walk-around-refresh” mechanism is enabled to intra-code a pattern of macroblocks using a walk-around interval selected from a set of predetermined walk-around intervals.
  • the walk-around interval may be selected based upon video data rates and error rates.
  • the coding engine 402 computes a non-differentially encoded motion vector for each inter-coded macroblock.
  • the coding engine 402 generates an intra-macroblock map that identifies locations of the intra-coded macroblocks.
  • the intra-macroblock map is coded using either a run-length encoding method or a bitmap encoding method based upon total number of bits required to code the intra-macroblock map.
  • a macroblock reordering engine 404 assigns each macroblock an MBGID in step 920 .
  • the macroblocks may be assigned MBGIDs in a pattern such as that shown in FIG. 5 .
  • the macroblocks are assigned so as to minimize adjacent macroblocks being assigned the same MBGIDs.
  • other embodiments may contemplate assigning adjacent macroblocks the same MBGIDs.
  • the macroblock packetization engine 406 ( FIG. 4 ) creates discrete data packets and places the macroblocks into the discrete data packets according to their MBGIDs in step 925 .
  • the macroblock packetization engine 406 may be a transport engine for placing macroblocks into a particular format for transport on a circuit-switched network.
  • the data packets and a picture header (including the intra-macroblock map) are sent to the communication buffer 408 ( FIG. 4 ) for transmission to the remote video conference station 204 ( FIG. 2 ).
  • FIG. 10 is an exemplary flowchart 1000 of method steps for video error concealment when receiving video data, according to the present invention.
  • the communication buffer 408 receives transmitted data packets from the remote video conference station 204 ( FIG. 2 ) via the network 206 ( FIG. 2 ).
  • the macroblock packetization engine 406 de-packetizes the received data packets into macroblocks.
  • the macroblock reordering engine 404 ( FIG. 4 ) orders the macroblocks and places the macroblocks in proper spatial configuration within a video frame.
  • the coding engine 402 decodes the macroblocks in step 1020 .
  • the coding engine 402 (functioning as a decoder) or some other mechanism related to a video data packet transform (e.g., RTP sequence numbers) determines if any macroblocks comprising the video frame are missing in step 1025 . Macroblocks are lost if one or more video data packets are lost or corrupted via transmission of the video data packets over the network 206 . If, in step 1025 , it is determined that no macroblocks are missing, then the macroblocks are displayed by the display device 302 ( FIG. 3 ) in step 1030 .
  • step 1025 it is determined that one or more macroblocks are missing, then the data associated with the one or more missing macroblocks are reconstructed, based on macroblock coding type, in step 1035 .
  • the coding engine 402 may use the intra-macroblock map to determine the coding type of each lost macroblock.
  • the coding engine 402 replaces the lost macroblock's content with the data content of a corresponding macroblock from a previous frame.
  • the lost macroblock's content is spatially interpolated from nearest-neighbor adjacent macroblocks.
  • the coding engine 402 uses a two-dimensional interpolation to interpolate data from adjacent macroblocks ( FIG. 7 ).
  • the coding engine 402 estimates the lost macroblock's motion vector by examining the motion vectors of adjacent macroblocks. In one embodiment of the invention, the motion vector is computed as a median of three neighboring macroblocks' motion vectors ( FIG. 8 ). The coding engine 402 then uses the estimated motion vector to compensate for the data content of the lost macroblock by reconstructing an estimate of the lost macroblock's data content. Once the data contents of the missing macroblocks have been reconstructed, the macroblocks are displayed by the display device 302 in step 1030.
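The receive-side logic of FIG. 10 can be summarized by a small helper that identifies which macroblocks were lost and splits them by coding type, so that the intra concealment (previous-frame copy or spatial interpolation) and the inter concealment (estimated motion vector) can each be applied. The data structures are assumptions made for illustration.

```python
def classify_lost_macroblocks(expected_positions, received_positions, intra_map):
    """Steps 1025 and 1035 of FIG. 10 in miniature: find which macroblocks
    are missing from the frame and split them by coding type.

    expected_positions: all (row, col) macroblock positions in the frame.
    received_positions: positions successfully depacketized and decoded.
    intra_map: set of positions the intra-macroblock map marks intra-coded.
    Returns (lost_intra, lost_inter) as lists of positions."""
    lost = [p for p in expected_positions if p not in received_positions]
    lost_intra = [p for p in lost if p in intra_map]
    lost_inter = [p for p in lost if p not in intra_map]
    return lost_intra, lost_inter
```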

Abstract

The present invention provides, in one embodiment, a system and method for concealing video errors. The system encodes, reorders, and packetizes video information into video data packets for transmission over a communication network such that the system conceals errors caused by lost video data packets when the system receives, depacketizes, orders, and decodes the data packets. In one embodiment, the system and method encodes and packetizes video information, such that adjacent macroblocks are not placed in the same video data packets. Additionally, the system and method may provide information accompanying the video data packets to facilitate the decoding process. An advantage to such a scheme is that errors due to video data packet loss are spatially distributed over a video frame. Thus, if regions of data surrounding a lost macroblock are successfully decoded, the decoder may predict motion vectors and spatial content with a higher degree of accuracy, which leads to higher video quality.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application is a continuation application of U.S. patent application Ser. No. 10/226,504, filed Aug. 23, 2002, which is incorporated by reference in its entirety, and to which priority is claimed. This application also claims the benefit of Provisional Patent Application Ser. No. 60/314,413, filed Aug. 23, 2001, which is also incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates generally to video communication, and more particularly to video error concealment.
  • 2. Description of Related Art
  • Video images have become an increasingly important part of global communication. In particular, video conferencing and video telephony have a wide range of applications such as desktop and room-based conferencing, video over the Internet and over telephone lines, surveillance and monitoring, telemedicine, and computer-based training and education. In each of these applications, video and accompanying audio information is transmitted across telecommunication links, including telephone lines, ISDN, DSL, and radio frequencies.
  • A standard video format used in video conferencing is Common Intermediate Format (CIF), which is part of the International Telecommunications Union (ITU) H.261 videoconferencing standard. The primary CIF format is also known as Full CIF or FCIF.
  • Additional formats with resolutions higher and lower than FCIF have also been established. FIG. 1 is a table of the resolution and bit rate requirements for various video formats under the assumption that 12 bits are required to represent one pixel, according to the prior art. The bit rates (in megabits per second, Mbps) shown are for uncompressed color frames.
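FIG. 1 itself is not reproduced here, but the bit rates it tabulates follow directly from the stated 12 bits per pixel. The 30 frames-per-second figure in the sketch below is an assumed illustration value.

```python
def uncompressed_mbps(width, height, fps=30, bits_per_pixel=12):
    """Raw bit rate in Mbps for uncompressed color frames, assuming
    12 bits represent one pixel as stated for FIG. 1."""
    return width * height * bits_per_pixel * fps / 1_000_000

print(uncompressed_mbps(176, 144))   # QCIF: about 9.1 Mbps
print(uncompressed_mbps(352, 288))   # FCIF: about 36.5 Mbps
```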
  • Presently, efficient transmission and reception of video signals may require encoding and compression of video and accompanying audio data. Video compression coding is a method of encoding digital video data such that it requires less memory to store the video data and reduces required transmission bandwidth. Certain compression/decompression (CODEC) schemes are frequently used to compress video frames to reduce required transmission bit rates. Thus, CODEC hardware and software allow digital video data to be compressed into a smaller binary format than required by the original (i.e., uncompressed) digital video format.
  • Several conventional approaches and standards to encoding and compressing source video signals exist. Some standards are designed for a particular application such as JPEG (Joint Photographic Experts Group) for still images, and H.261, H.263, MPEG (Moving Pictures Experts Group), MPEG-2 and MPEG-4 for moving images. These coding standards typically use block-based motion-compensated prediction on 16×16 pixels, commonly referred to as macroblocks. A macroblock is a unit of information containing four 8×8 blocks of luminance data and two corresponding 8×8 blocks of chrominance data in accordance with a 4:2:0 sampling structure, where the chrominance data is subsampled 2:1 in both vertical and horizontal directions.
  • As a practicality, audio data also must be compressed, transmitted, and synchronized along with the video data. Synchronization, multiplexing, and protocol issues are covered by standards such as H.320 (ISDN-based video conferencing), H.324 (POTS-based video telephony), and H.323 (LAN or IP-based video conferencing). H.263 (or its predecessor, H.261) provides the video coding part of these standards groups.
  • A motion estimation and compensation scheme is one conventional method typically used for reducing transmission bandwidth requirements for a video signal. Because the macroblock is the basic data unit, the motion estimation and compensation scheme may compare a given macroblock in a current video frame with the given macroblock's surrounding area in a previously transmitted video frame, and attempt to find a close data match. Typically, a closely matched macroblock in the previously transmitted video frame is spatially offset from the given macroblock by less than a width of the given macroblock. If a close data match is found, the scheme subtracts the given macroblock in the current video frame from the closely matched, offset macroblock in the previously transmitted video frame so that only a difference (i.e., residual) and the spatial offset needs to be encoded and transmitted. The spatial offset is commonly referred to as a motion vector. If the motion estimation and compensation process is efficient, the remaining residual macroblock should contain only an amount of information necessary to describe data associated with pixels that change from the previous video frame to the current video frame and a motion vector. Thus, areas of a video frame that do not change (e.g., the background) are not encoded and transmitted.
  • Conventionally, the H.263 standard specifies that the motion vectors used for motion estimation and motion compensation be differentially encoded. Although differential encoding reduces data amounts required for transmission, any error in which motion vector data is lost or corrupted for one macroblock negatively impacts adjacent macroblocks. The result is a propagation of error due to the corrupted data which leads to lower video quality.
  • When preparing video frame information for transmission over a packet switched communication network, encoding schemes transform the video frame information, compressed by motion estimation and compensation techniques, into data packets for transmission across a communication network. Although data packets allow for greater transmission efficiency, lost, corrupted, or delayed data packets can also introduce errors resulting in video quality degradation. Alternatively, video data may be transmitted on heterogeneous communications networks in which one of the endpoints is associated with a circuit-switched network and a gateway or other packet-switched to circuit switched network bridging device is used.
  • Currently, lost or corrupted data packets often cause reduced video quality. Therefore, there is a need for a system and method which organizes and transmits data packets in order to conceal errors caused by data packet loss.
  • SUMMARY OF THE INVENTION
  • The present system and method overcome or substantially alleviate prior problems associated with packet loss of video data. In general, the present invention provides a system and method that encodes, reorders, and packetizes video information for transmission across a packet switched network with a capability to conceal video error caused by video data packet loss.
  • In an exemplary embodiment, video signals are encoded into sets of macroblocks. A macroblock reordering engine then assigns integer labels called macroblock group identifiers (MBGIDs) to each macroblock. Advantageously, adjacent macroblocks are not assigned identical MBGIDs in one exemplary embodiment. A macroblock packetization engine then enables packetizing of the macroblocks, such that macroblocks assigned identical MBGIDs are packetized together. For embodiments of the invention in which adjacent macroblocks are not assigned identical MBGIDs, it follows that spatially adjacent macroblocks are not packetized together. Additionally, corresponding data, such as an intra-macroblock map, may be incorporated in a picture header or conveyed by some other mechanism to facilitate a corresponding decoding process.
  • In yet another embodiment of the invention, when an image processing engine receives data packets containing encoded macroblocks, the data packets are depacketized, and the encoded macroblocks are ordered and decoded. In an alternate embodiment, the image processing engine depacketizes the received data packets, then decodes the macroblocks in an order in which they were received to reduce processing delay. If one or more data packets are lost, data accompanying the macroblocks of successfully transmitted data packets are used to attenuate effects of the lost data packets. Various methods based on whether the lost macroblocks were intra-coded or inter-coded compensate for the missing macroblocks. Upon compensation, the video signal may then be displayed. As a result, the present system and method is capable of concealing video errors resulting from data packet loss.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a table of the resolution and bit rate requirements for various video formats, according to the prior art;
  • FIG. 2 is a block diagram of an exemplary video conferencing system, according to the present invention;
  • FIG. 3 is a block diagram of an exemplary video conference station of the video conferencing system of FIG. 2;
  • FIG. 4 is a block diagram of an exemplary embodiment of the image processing engine of FIG. 3;
  • FIG. 5 is an exemplary diagram of a macroblock reorder pattern for a QCIF formatted video frame, where each number is an MBGID assigned to a macroblock in a corresponding spatial location;
  • FIG. 6 is an exemplary diagram of the QCIF frame macroblock reorder pattern of FIG. 5, where a data packet containing coded macroblock data for macroblocks with MBGID=5 is lost;
  • FIG. 7 is a block diagram of a two-dimensional interpolation scheme using data associated with pixels located in adjacent macroblocks, according to one embodiment of the present invention;
  • FIG. 8 is an exemplary block diagram of adjacent macroblocks used to estimate the motion vector of lost macroblock m, according to the present invention;
  • FIG. 9 is an exemplary flowchart of method steps for video data processing, according to one embodiment of the present invention; and
  • FIG. 10 is an exemplary flowchart of method steps for video error concealment when receiving video data, according to the present invention.
  • DETAILED DESCRIPTION OF THE DRAWINGS
  • The present invention conceals errors in video signals caused by data packet loss. The present system and method departs from existing technologies by packetizing macroblocks in a flexible (e.g., non-raster-scan) order in a video frame. In contrast to existing video coding standards, macroblocks are packetized in an order specified by a macroblock reorder pattern. In addition, motion vectors for each macroblock may be non-differentially encoded. These improvements seek to attenuate the disturbances caused by data packet loss across a communication link. The scope of the present invention covers a variety of video standards, including, but not limited to, H.261, H.263, H.264, MPEG, MPEG-2, and MPEG-4.
  • FIG. 2 illustrates an exemplary video conferencing system 200. The video conferencing system 200 includes a local video conference station 202 and a remote video conference station 204 connected through a network 206. Although FIG. 2 only shows two video conference stations 202 and 204, those skilled in the art will recognize that more video conference stations may be coupled to the video conferencing system 200. It should be noted that the present system and method may be utilized in any communication system where video data is transmitted over a network. The network may be any type of electronic transmission medium, such as, but not limited to, POTS, cable, fiber optic, and radio transmission media.
  • FIG. 3 is a block diagram of an exemplary video conference station 300. For simplicity, the video conference station 300 will be described as the local video conference station 202 (FIG. 2), although the remote video conference station 204 (FIG. 2) may contain a similar configuration. In one embodiment, the video conference station 300 includes a display device 302, a CPU 304, a memory 306, at least one video capture device 308, an image processing engine 310, and a communication interface 312. Alternatively, other devices may be provided in the video conference station 300, or not all of the above-named devices may be provided. The at least one video capture device 308 may be implemented as a charge coupled device (CCD) camera, a complementary metal oxide semiconductor (CMOS) camera, or any other type of image capture device. The at least one video capture device 308 captures images of a user, conference room, or other scenes, and sends the images to the image processing engine 310. Typically, the image processing engine 310 processes the video image into data packets before the communication interface 312 transmits the data packets to the remote video conference station 204. The image processing engine 310 will be discussed in more detail in connection with FIG. 4. Conversely, the image processing engine 310 also transforms received data packets from the remote video conference station 204 into a video signal for display on the display device 302.
  • FIG. 4 is an exemplary embodiment of the image processing engine 310 of FIG. 3. The image processing engine 310 includes a coding engine 402, a macroblock reordering engine 404, a macroblock packetization engine 406, and a communication buffer 408. Initially, a video signal from the video capture device 308 (FIG. 3) enters the coding engine 402, which converts each frame of video into a desired format, and transforms each frame of the video signal into a set of macroblocks. A macroblock is a data unit that contains blocks of data comprising luminance and chrominance components associated with picture elements (also referred to as pixels). For example, in H.263, a macroblock consists of four 8×8 blocks of luminance data and two corresponding 8×8 blocks of chrominance data in a 4:2:0 chroma sampling format. An 8×8 block of data is an eight-column by eight-row matrix of data, where each entry corresponds to a pixel of the video frame. A 4:2:0 chroma formatted macroblock comprises data covering a 16 pixel by 16 pixel section of the video frame. However, the present invention is not limited to macroblocks as conventionally defined, but may be extended to any data unit comprising luminance and/or chrominance data. In addition, the scope of the present invention covers other sampling formats, such as a 4:2:2 chroma sampling format comprising four 8×8 blocks of luminance data and four corresponding 8×8 blocks of chrominance data, or a 4:4:4 chroma sampling format comprising four 8×8 blocks of luminance data and eight corresponding 8×8 blocks of chrominance data.
  • In addition, the coding engine 402 encodes (i.e., compresses) each macroblock to reduce the number of bits used to represent data content. Each macroblock may be “intra-coded” or “inter-coded,” and a frame may be comprised of any combination of intra-coded and inter-coded macroblocks. Inter-coded macroblocks are encoded using temporal similarities (i.e., similarities that exist between a macroblock from one frame and a closely matched macroblock from a previous frame). Specifically, a given inter-coded macroblock comprises encoded differences between the given macroblock and a closely matched macroblock from a previous video frame. The closely matched macroblock from the previous video frame may comprise data associated with pixels that are offset from the pixels associated with the given macroblock. Alternatively, intra-coded macroblocks are encoded without use of information from other video frames in a manner similar to that employed by the JPEG still image encoding standard.
  • For example, to determine if a given macroblock may be encoded as an inter-coded macroblock, the coding engine 402 computes differences between data of the given macroblock of a current video frame and data of a macroblock from a previous video frame (referred to as an offset macroblock), where the differences may be realized, for example, by a mean-absolute error or a mean-squared error between data corresponding to pixels at co-located positions within the macroblocks. For the given macroblock, the coding engine 402 computes errors for a plurality of offset macroblocks. If the coding engine 402 only finds errors greater than a predetermined difference threshold value, then significant similarities do not exist between data from the given macroblock and data from the previous frame, and the macroblock is intra-coded. However, if one error is found to be less than the predetermined difference threshold value for the given macroblock and a given offset macroblock from the previous frame, then the given macroblock is inter-coded.
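The inter/intra decision just described can be summarized with a short sketch. This is an illustrative implementation only; the block-matching search range, the mean-absolute error measure, the threshold value, and all function and parameter names are assumptions rather than values specified by the present description.

```python
import numpy as np

def mode_decision(current_mb, prev_frame, mb_row, mb_col, mb_size=16,
                  search_range=7, threshold=8.0):
    """Decide whether a macroblock can be inter-coded.

    Compares the given macroblock against offset macroblocks taken from the
    previous frame; if any candidate's mean-absolute error falls below the
    threshold, the macroblock is inter-coded with that offset as its motion
    vector, otherwise it is intra-coded.
    """
    top, left = mb_row * mb_size, mb_col * mb_size
    best_err, best_mv = None, None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if (y < 0 or x < 0 or
                    y + mb_size > prev_frame.shape[0] or
                    x + mb_size > prev_frame.shape[1]):
                continue  # candidate falls outside the previous frame
            candidate = prev_frame[y:y + mb_size, x:x + mb_size]
            err = np.mean(np.abs(current_mb.astype(int) - candidate.astype(int)))
            if best_err is None or err < best_err:
                best_err, best_mv = err, (dx, dy)
    if best_err is not None and best_err < threshold:
        return "inter", best_mv   # a sufficiently close match exists
    return "intra", None          # no candidate below the threshold
```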
  • To inter-code the given macroblock, the coding engine 402 subtracts the given macroblock's data from the offset macroblock's data (i.e., luminance and chrominance data associated with a pixel of the given macroblock is subtracted from luminance and chrominance data associated with a corresponding pixel of the offset macroblock for every pixel) to give difference data, encodes the difference data using standard coding techniques such as Discrete Cosine Transforms and quantization methods among others, determines an offset vector from the given macroblock to the offset macroblock (referred to as a motion vector), and encodes the motion vector.
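Continuing the sketch, inter-coding a macroblock once a suitable offset has been found might look like the following. The uniform quantizer merely stands in for the transform-plus-quantization step described above, and the data layout is an assumption.

```python
import numpy as np

def inter_code_macroblock(current_mb, prev_frame, mb_row, mb_col, mv,
                          mb_size=16, qstep=8):
    """Inter-code one macroblock given its motion vector (dx, dy).

    Forms the prediction from the offset macroblock in the previous frame,
    quantizes the difference data, and returns the quantized residual
    together with the non-differentially coded motion vector.  A real
    encoder would apply an 8x8 DCT to the residual before quantization;
    that step is omitted here for brevity.
    """
    dx, dy = mv
    top, left = mb_row * mb_size + dy, mb_col * mb_size + dx
    prediction = prev_frame[top:top + mb_size, left:left + mb_size].astype(int)
    residual = current_mb.astype(int) - prediction
    quantized = np.round(residual / qstep).astype(int)
    # The motion vector is stored as-is (non-differentially), so a lost
    # neighboring macroblock cannot corrupt this macroblock's vector.
    return {"mv": mv, "residual": quantized}
```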
  • Presently, video coding standards, such as H.261 and H.263, specify that motion vectors of inter-coded macroblocks be differentially encoded to improve coding efficiency. However, differential encoding causes errors created by lost or corrupted motion vector data to propagate to adjacent macroblocks that would otherwise be decoded without error, since encoded motion vector data associated with a given macroblock is, in general, not independent of the motion vector data of neighboring macroblocks. Thus, the effects of the motion vector data of a given macroblock are not spatially localized to the given macroblock. However, if the motion vectors of each inter-coded macroblock are non-differentially encoded, then the effects of the motion vector data are localized to the given macroblock, resulting in a significant increase in error resilience. In most cases, a change in motion vector coding method from a differential to a non-differential technique results in a small loss in overall coding efficiency (typically less than a few percent). Advantageously, the motion vector components associated with each inter-coded macroblock, contrary to conventional methods, are not differentially encoded, according to one embodiment of the present invention.
  • In another embodiment of the invention, the coding engine 402 may intra-code macroblocks of a frame using a “walk-around-refresh” mechanism. The “walk-around-refresh” mechanism is a deterministic mechanism to clean up reference frame mismatches, called data drift, by intra-coding a specific pattern of macroblocks for each frame. The coding engine 402 uses macroblocks of a reference frame as offset macroblocks in decoding inter-coded macroblocks of a current frame. In one embodiment of the invention, the “walk-around-refresh” mechanism is enabled to intra-code a pattern of macroblocks using an integer walk-around interval w selected from a set of predetermined integer walk-around intervals. For example, if w=47, then the coding engine 402 intra-codes every wth (i.e., every 47th) macroblock. The walk-around interval may be selected based upon video data transmission rates and error rates. When the “walk-around-refresh” intra-coded macroblocks are received by the coding engine of the remote video conference station 204 (FIG. 2), they replace corresponding macroblocks from previous frames that may be corrupted due to video data transmission errors. Any macroblock that is corrupted due to video data transmission errors (and is not replaced) further propagates and possibly magnifies data drift when the coding engine of the remote video conference station 204 uses the corrupted macroblock as a reference macroblock for decoding other received macroblocks. Thus, the “walk-around-refresh” intra-coded macroblocks provide the coding engine of the remote video conference station 204 with a “fresh” set of intra-coded macroblocks to be used as reference macroblocks, thereby reducing the propagation of data drift.
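A minimal sketch of how such a walk-around schedule could be generated follows, assuming the walk position is simply carried over from one frame to the next; the function name and the framing of the return values are illustrative.

```python
def walk_around_refresh(num_macroblocks, start, w=47):
    """Return the macroblock indices (in coding order) to intra-code in the
    current frame, plus the position at which the walk resumes next frame.

    Every w-th macroblock is forced to intra mode, so over successive
    frames the refresh "walks" through the entire frame and limits how
    long decoder-side drift can survive.
    """
    refresh, pos = [], start
    while pos < num_macroblocks:
        refresh.append(pos)
        pos += w
    return refresh, pos - num_macroblocks
```

For a QCIF frame of 99 macroblocks and w=47, successive frames would intra-code macroblocks {0, 47, 94}, then {42, 89}, then {37, 84}, and so on, until every macroblock has been refreshed.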
  • Furthermore, the coding engine 402 may generate an intra-macroblock map that identifies which macroblocks in a coded video frame are intra-coded. After the intra-macroblock map is generated, the image processing engine 310 sends the map to the remote video conference station 204. The map may be sent as part of a picture header field associated with the coded video frame, for example, although other fields may be used.
  • According to the present invention, the coding engine 402 may generate the intra-macroblock map in one of two ways. In one embodiment of the invention, the coding engine 402 uses run-length encoding to describe locations of intra-coded macroblocks within the frame. Run-length encoding is a technique to reduce the size of a repeating string of characters. In another embodiment of the invention, the coding engine 402 generates a bitmap, where each bit in the bitmap corresponds to one macroblock of the frame. A bit's value identifies a corresponding macroblock's coding type. For example, in one embodiment of the invention, a “1” bit signifies that a corresponding macroblock is intra-coded. In another embodiment of the invention, a “1” bit signifies that the corresponding macroblock is inter-coded. Other methods for generating the intra-macroblock map may be contemplated for use in the present invention.
  • In yet another embodiment of the invention, the coding engine 402 selects the intra-macroblock map coding method that produces the fewest number of bits. For example, a 352×288 pixel (i.e., a 352 pixel horizontal resolution by 288 pixel vertical resolution) FCIF video frame comprises 396 macroblocks configured as a 22×18 macroblock matrix. Not including any bit overhead that may be required, the bitmap encoding method requires 396 bits (one bit for each macroblock). Thus, 396 bits are used to transmit the bitmap encoded intra-macroblock map, independent of the number of intra-coded macroblocks within the FCIF frame. In contrast, however, the number of bits utilized to transmit the run-length encoded intra-macroblock map is dependent upon the number of intra-coded macroblocks within the FCIF frame. The cost of transmitting a run-length encoded intra-macroblock map is eight bits per intra-coded macroblock (i.e., eight bits per run value), where the run value identifies a location of the intra-coded macroblock within the FCIF frame. Therefore, if the FCIF frame contains n intra-coded macroblocks, then 8n bits are required to transfer the run-length encoded intra-macroblock map.
  • Thus, if the FCIF frame contains fewer than 50 intra-coded macroblocks (n&lt;50), then the coding engine 402 selects the run-length encoding method; otherwise, the coding engine 402 selects the bitmap encoding method. The selection of an intra-macroblock map encoding method depends upon the video format, of which the FCIF video frame is one example.
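The choice between the two map encodings reduces to a bit-count comparison, as the following sketch illustrates. The eight-bits-per-run figure matches the example above, but the exact payload layout and the run-value convention are assumptions.

```python
def encode_intra_map(intra_flags):
    """Encode the intra-macroblock map, choosing whichever of the two
    methods described above costs fewer bits.

    intra_flags -- list of booleans, one per macroblock in coding order.
    Returns (method, payload, bit_cost).
    """
    n = len(intra_flags)
    # Bitmap method: one bit per macroblock, regardless of content.
    bitmap_bits = n
    # Run-length method: eight bits per intra-coded macroblock, each run
    # value here giving the gap since the previous intra-coded macroblock.
    runs, last = [], -1
    for i, is_intra in enumerate(intra_flags):
        if is_intra:
            runs.append(i - last - 1)
            last = i
    rle_bits = 8 * len(runs)
    if rle_bits < bitmap_bits:
        return "run-length", runs, rle_bits
    return "bitmap", [1 if f else 0 for f in intra_flags], bitmap_bits
```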
  • Subsequently, the encoded macroblocks are forwarded to the macroblock reordering engine 404. The macroblock reordering engine 404 reorders the encoded macroblocks. Specifically, each macroblock is assigned a macroblock group identifier (MBGID) from a plurality of MBGIDs. In an exemplary embodiment, the macroblocks are numbered one to six according to an exemplary macroblock assignment pattern illustrated in FIG. 5 for a QCIF formatted frame having nine rows of eleven macroblocks per row. The maximum MBGID is referred to as a maximum group identifier (MGID). In the FIG. 5 exemplary embodiment, MGID=6. As shown, the MBGIDs are assigned in a manner so as to minimize adjacent macroblocks being assigned the same MBGID. Alternatively, other assignment patterns may assign the same MBGID to adjacent macroblocks or in any other assignment order.
  • As will be discussed further below in conjunction with FIG. 6, the assigning of macroblocks, whereby adjacent macroblocks are not assigned the same MBGID, advantageously minimizes a concentration of errors in one region of a frame because macroblocks of a lost data packet are spatially distributed across the frame. Since errors due to lost packets are less likely to be concentrated in one region of the frame, lost data associated with lost macroblocks may be more accurately reconstructed using data from neighboring macroblocks. In other words, spatial interpolation of data from neighboring macroblocks or an estimation of a missing macroblock's motion vectors are more accurately determined, if the loss of data is not spatially localized within the frame.
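One way to generate an assignment with the property described above is sketched below for the QCIF geometry of FIG. 5 (nine rows of eleven macroblocks, MGID=6). The particular formula is an assumption and is not necessarily the pattern shown in FIG. 5; with these default parameters it simply guarantees that no two 8-connected neighbors share an MBGID.

```python
def assign_mbgids(rows=9, cols=11, mgid=6):
    """Assign an MBGID (1..mgid) to every macroblock of a frame.

    With the default parameters, horizontal neighbors differ by 1 (mod 6),
    vertical neighbors by 3 (mod 6), and diagonal neighbors by 2 or 4
    (mod 6), so no two adjacent macroblocks receive the same identifier.
    This is one possible pattern, not necessarily the pattern of FIG. 5.
    """
    return [[((c + 3 * r) % mgid) + 1 for c in range(cols)]
            for r in range(rows)]

if __name__ == "__main__":
    for row in assign_mbgids():
        print(" ".join(str(g) for g in row))
```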
  • The coding engine 402 (FIG. 4) of the image processing engine 310 (FIG. 3) of the remote video conference station 204 (FIG. 2) may use a variety of error concealment techniques in conjunction with the reordering of macroblocks to improve video quality. For example, in one embodiment of the invention, the coding engine 402 decodes the neighboring macroblocks of a lost inter-coded macroblock, estimates a motion vector of the lost macroblock, and then uses the estimated motion vector to reconstruct data of the lost macroblock. In another embodiment of the invention, the coding engine 402 may decode the neighboring macroblocks of a lost intra-coded macroblock, and spatially interpolate the decoded neighboring data to reconstruct the lost data. The scope of the present invention covers other error concealment techniques used in conjunction with macroblock reordering to improve video quality degraded by lost or corrupted macroblocks.
  • Different reorder patterns and MBGIDs may be utilized according to the present invention. In one embodiment of the invention, the macroblock reordering engine 404 selects a MGID based on video data rates and/or video format.
  • Referring back to FIG. 4, once the macroblocks have been assigned MBGIDs, the macroblock packetization engine 406 places the macroblocks into discrete data packets according to their MBGIDs. Thus, macroblocks with the same MBGID (e.g., MBGID=1) would be placed into a common, discrete data packet (e.g., data packet 1). Referring to the FIG. 5 exemplary embodiment of the invention, the packetization engine 406 places the macroblocks into six data packets per QCIF frame. However, the packetization engine 406 may use more than one packet with a given MBGID to transport macroblocks with the given MBGID. For example, packetization engine 406 may create a first data packet 1 comprising a portion of the macroblocks with MBGID=1 and a second data packet 1 comprising a remainder of the macroblocks with MBGID=1. The splitting of the packets in this manner is typically governed by a maximum transmission unit (MTU) size associated with the network 206 (FIG. 2).
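A sketch of this packetization step is shown below. The 1400-byte payload limit stands in for the network MTU, and the data-structure layout and names are assumptions.

```python
def packetize(coded_macroblocks, mtu_payload=1400):
    """Group coded macroblocks into packets by MBGID, splitting a group
    into several packets when it would exceed the assumed MTU payload.

    coded_macroblocks -- list of (mbgid, position, payload_bytes) tuples,
    in raster-scan order.  Returns a list of packet dicts.
    """
    by_gid = {}
    for mbgid, pos, payload in coded_macroblocks:
        by_gid.setdefault(mbgid, []).append((pos, payload))
    packets = []
    for mbgid in sorted(by_gid):
        current, size = [], 0
        for pos, payload in by_gid[mbgid]:
            if current and size + len(payload) > mtu_payload:
                packets.append({"mbgid": mbgid, "macroblocks": current})
                current, size = [], 0
            current.append((pos, payload))
            size += len(payload)
        if current:
            packets.append({"mbgid": mbgid, "macroblocks": current})
    return packets
```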
  • Subsequently, the data packets and picture header are forwarded to the communication buffer 408 for transmission across the network 206 (FIG. 2) by the communication interface 312 (FIG. 3). To further promote resilience against packet loss, the picture header may be transmitted more than once per frame. The picture header may include the intra-macroblock map.
  • Conversely, the image processing engine 310 also processes video data packets received from a remote location and provides video signals for display. Initially, video data packets are received by the communication interface 312 (FIG. 3), and forwarded to the communication buffer 408. The video data packets are then sent to the macroblock packetization engine 406, which unpacks the macroblocks. Next, the macroblock reordering engine 404 orders the macroblocks back into their original, ordered pattern (i.e., the pattern prior to macroblock reordering at the remote video conference station 204 (FIG. 2), which is typically raster-scan order).
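At the receiver, the spatial position of each macroblock can be recovered from the shared MBGID pattern, for example as sketched below. The assumption that macroblocks of a group are packetized in raster-scan order is illustrative; positions could equally be carried explicitly in the packet.

```python
def place_macroblocks(mbgid, packet_payloads, mbgid_pattern):
    """Recover the spatial positions of the macroblocks carried in one
    packet, assuming encoder and decoder share the MBGID assignment
    pattern and that macroblocks of a group are packetized in raster-scan
    order.  Returns a list of ((row, col), payload) pairs.
    """
    positions = [(r, c)
                 for r, row in enumerate(mbgid_pattern)
                 for c, gid in enumerate(row)
                 if gid == mbgid]
    return list(zip(positions, packet_payloads))
```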
  • Subsequently, the coding engine 402 functions as a decoder, and determines whether a video data packet was lost in transit across the network 206. FIG. 6 is a diagram of the QCIF frame macroblock reorder pattern of FIG. 5, when a data packet containing coded macroblock data for macroblocks with MBGID=5 is lost. The lost macroblocks are marked with an “x”. It should be noted that the lost macroblocks are advantageously spatially distributed across the QCIF frame, according to one embodiment of the present invention, thus allowing for accurate, low complexity error concealment techniques employing such methods as spatial interpolation or motion vector estimation and compensation. Although FIG. 6 illustrates a single missing data packet for convenience of discussion, the scope of the present invention covers error concealment when any number of data packets are corrupted or lost during transit. It should further be noted that although the same components are described herein as being used for both transmission and receiving functions, the components may be embodied in separate receiver and transmitter devices.
  • Referring back to FIG. 4, for each lost macroblock, the coding engine 402 determines whether the lost macroblock is intra-coded or inter-coded. For example, the coding engine 402 may examine the intra-macroblock map to determine whether the lost macroblock is intra-coded. As mentioned above, the intra-macroblock map may be sent in picture header fields or as side information conveyed outside a video stream, and may be compressed using a run-length encoding algorithm, configured as a bitmap which identifies intra-coded macroblocks, or some other efficient coding method.
  • If the lost macroblocks are intra-coded, then several error concealment techniques may be utilized. For example, if the lost macroblock is intra-coded as part of a “walk-around-refresh” mechanism, the coding engine 402 replaces the lost macroblock with the contents of a “corresponding” macroblock from a previous frame, where two “corresponding” macroblocks cover the same spatial area of their respective frames. According to the present invention, the “walk-around-refresh” mechanism's clean-up rate is a function of the data and error rates.
  • Alternatively, if a lost intra-coded macroblock is not coded as part of the “walk-around-refresh” mechanism, then the coding engine 402 spatially interpolates the contents of the lost macroblock from adjacent macroblocks. In one embodiment of the invention, each 8×8 block of the lost macroblock is spatially interpolated from the two nearest blocks located in adjacent macroblocks. FIG. 7 illustrates an exemplary interpolation scheme using data associated with pixels located in adjacent macroblocks. FIG. 7 includes a lost macroblock 705, a left adjacent macroblock 710, an upper adjacent macroblock 715, and a right adjacent macroblock 720. For example, to reconstruct (i.e., interpolate) data for an 8×8 upper left-hand block 725 of the lost 16×16 macroblock 705, the coding engine 402 (FIG. 4) interpolates data in a last column of data 730 (indicated by x's) from an 8×8 upper right-hand block 735 of the left adjacent macroblock 710, and data in a last row of data 740 (indicated by x's) from an 8×8 lower left-hand block 745 of the upper adjacent macroblock 715.
  • Similarly, to reconstruct data for an 8×8 upper right-hand block 750 of the lost macroblock 705, the coding engine 402 interpolates data in a first column of data 755 from an 8×8 upper left-hand block 760 of the right adjacent macroblock 720, and data in a last row of data 765 from an 8×8 lower right-hand block 770 of the upper adjacent macroblock 715. Other forms of interpolation may also be applied and other blocks of adjacent macroblocks may be utilized, and are within the scope of the present invention.
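A sketch of the interpolation for one 8×8 block of a lost macroblock follows. The inverse-distance weighting is an assumed kernel (other kernels fall within the description above), and the symmetric case using the right neighbor's first column is handled the same way with the roles mirrored.

```python
import numpy as np

def interpolate_block(left_col, top_row, size=8):
    """Reconstruct one 8x8 block of a lost intra-coded macroblock from the
    nearest column of the left neighbor and the nearest row of the upper
    neighbor, in the spirit of the FIG. 7 scheme.

    left_col -- the 8 pixels in the last column of the adjacent block to
    the left; top_row -- the 8 pixels in the last row of the adjacent
    block above.
    """
    block = np.zeros((size, size))
    for i in range(size):          # row index within the lost block
        for j in range(size):      # column index within the lost block
            d_left = j + 1         # distance to the left neighbor column
            d_top = i + 1          # distance to the upper neighbor row
            w_left, w_top = 1.0 / d_left, 1.0 / d_top
            block[i, j] = (w_left * left_col[i] + w_top * top_row[j]) / (w_left + w_top)
    return np.rint(block).astype(np.uint8)
```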
  • If the lost macroblock is inter-coded, then the coding engine 402 computes an estimate of the lost macroblock's motion vector by examining the motion vectors of adjacent macroblocks. FIG. 8 is a block diagram of adjacent macroblocks used to estimate the motion vector of lost macroblock m, according to one embodiment of the present invention. For the lost macroblock m, a median of the motion vectors of three neighboring macroblocks a, b, and c is computed. For example, the x-component of the estimated motion vector of macroblock m is MV_x^m = median(MV_x^a, MV_x^b, MV_x^c) and the y-component is MV_y^m = median(MV_y^a, MV_y^b, MV_y^c), where MV_x^a, MV_x^b, MV_x^c are the x-components of the motion vectors of macroblocks a, b, and c, respectively, and MV_y^a, MV_y^b, MV_y^c are the corresponding y-components. Although the FIG. 8 embodiment of the invention uses motion vectors from adjacent macroblocks a, b, and c to compute an estimated motion vector for macroblock m, any number and any combination of adjacent macroblocks may be used to estimate a lost macroblock's motion vector.
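The median estimate itself is straightforward, as the following sketch shows; the function names are illustrative.

```python
def estimate_motion_vector(mv_a, mv_b, mv_c):
    """Estimate the motion vector of a lost inter-coded macroblock as the
    component-wise median of three neighboring macroblocks' vectors.

    Each argument is an (x, y) tuple; returns the estimated (x, y) vector.
    """
    def med3(a, b, c):
        return sorted((a, b, c))[1]
    return (med3(mv_a[0], mv_b[0], mv_c[0]),
            med3(mv_a[1], mv_b[1], mv_c[1]))

# Example: two neighbors move right by 3-4 pixels; the single outlier does
# not drag the estimate, which is the point of using the median:
# estimate_motion_vector((3, 0), (4, -1), (-7, 0)) -> (3, 0)
```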
  • Once the lost macroblock's motion vector is estimated, the coding engine 402 (FIG. 4) motion compensates the lost macroblock by using the estimated motion vector to rebuild the lost macroblock's data content. After the data content of all lost macroblocks of a given frame is rebuilt, the coding engine 402 transforms the macroblocks into a video signal for display on the display device 302 (FIG. 3). Although illustrated with only one lost data packet, the present invention may be utilized to conceal errors with multiple lost data packets.
  • FIG. 9 is an exemplary flowchart 900 of method steps for video error concealment when transmitting video data over packet switched networks, according to one embodiment of the present invention. In step 905, the video capture device 308 (FIG. 3) captures a video image and generates a video signal. Next, in step 910, the coding engine 402 (FIG. 4) (also referred to as an encoder when processing data for transmission) receives the video signal and transforms the video signal into one or more intra-coded and inter-coded macroblocks. A video frame may comprise inter-coded macroblocks, intra-coded macroblocks, or any combination of intra-coded and inter-coded macroblocks. In one embodiment of the invention, a “walk-around-refresh” mechanism is enabled to intra-code a pattern of macroblocks using a walk-around interval selected from a set of predetermined walk-around intervals. The walk-around interval may be selected based upon video data rates and error rates. In addition, the coding engine 402 computes a non-differentially encoded motion vector for each inter-coded macroblock.
  • Subsequently, in step 915, the coding engine 402 generates an intra-macroblock map that identifies locations of the intra-coded macroblocks. In one embodiment of the present invention, the intra-macroblock map is coded using either a run-length encoding method or a bitmap encoding method based upon total number of bits required to code the intra-macroblock map.
  • Next, a macroblock reordering engine 404 (FIG. 4) assigns each macroblock an MBGID in step 920. For example, the macroblocks may be assigned MBGIDs in a pattern such as that shown in FIG. 5. In one embodiment, the macroblocks are assigned so as to minimize adjacent macroblocks being assigned the same MBGIDs. Alternatively, other embodiments may contemplate assigning adjacent macroblocks the same MBGIDs.
  • Subsequently, the macroblock packetization engine 406 (FIG. 4) creates discrete data packets and places the macroblocks into the discrete data packets according to their MBGIDs in step 925. For example, macroblocks with the same MBGID would be placed into a common discrete data packet. Alternatively, the macroblock packetization engine 406 may be a transport engine for placing macroblocks into a particular format for transport on a circuit-switched network. Finally, in step 930, the data packets and a picture header (including the intra-macroblock map) are sent to the communication buffer 408 (FIG. 4) for transmission to the remote video conference station 204 (FIG. 2).
  • FIG. 10 is an exemplary flowchart 1000 of method steps for video error concealment when receiving video data, according to the present invention. In step 1005, the communication buffer 408 (FIG. 4) receives transmitted data packets from the remote video conference station 204 (FIG. 2) via the network 206 (FIG. 2). Then, in step 1010, the macroblock packetization engine 406 (FIG. 4) de-packetizes the received data packets into macroblocks. Subsequently in step 1015, the macroblock reordering engine 404 (FIG. 4) orders the macroblocks and places the macroblocks in proper spatial configuration within a video frame.
  • Next, the coding engine 402 (FIG. 4) decodes the macroblocks in step 1020. The coding engine 402 (functioning as a decoder) or some other mechanism related to a video data packet transform (e.g., RTP sequence numbers) determines if any macroblocks comprising the video frame are missing in step 1025. Macroblocks are lost if one or more video data packets are lost or corrupted via transmission of the video data packets over the network 206. If, in step 1025, it is determined that no macroblocks are missing, then the macroblocks are displayed by the display device 302 (FIG. 3) in step 1030. However, if in step 1025, it is determined that one or more macroblocks are missing, then the data associated with the one or more missing macroblocks are reconstructed, based on macroblock coding type, in step 1035. The coding engine 402 may use the intra-macroblock map to determine the coding type of each lost macroblock.
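Detecting the loss itself can be as simple as scanning the transport sequence numbers for gaps, as in the sketch below; 16-bit RTP wrap-around handling is omitted for clarity, and the mapping from a lost packet to its macroblocks then follows from the lost packet's MBGID.

```python
def find_lost_packets(received_seq_numbers):
    """Identify missing sequence numbers among those received for a frame.

    Assumes at least one packet of the frame arrived and ignores 16-bit
    wrap-around, which a real receiver would handle.
    """
    received = sorted(set(received_seq_numbers))
    if not received:
        return []
    expected = set(range(received[0], received[-1] + 1))
    return sorted(expected - set(received))

# Example: packets 100..105 were sent but 102 and 104 never arrived.
# find_lost_packets([100, 101, 103, 105]) -> [102, 104]
```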
  • For example, if the lost macroblock is intra-coded as part of the “walk-around-refresh” mechanism, then the coding engine 402 replaces the lost macroblock's content with the data content of a corresponding macroblock from a previous frame. Alternatively, if a lost intra-coded macroblock is not coded as part of the “walk-around-refresh” mechanism, then the lost macroblock's contents are spatially interpolated from nearest-neighbor adjacent macroblocks. In one embodiment of the present invention, the coding engine 402 uses a two-dimensional interpolation to interpolate data from adjacent macroblocks (FIG. 7).
  • Alternatively, if the lost macroblock is inter-coded, then the coding engine 402 estimates the lost macroblock's motion vector by examining the motion vectors of adjacent macroblocks. In one embodiment of the invention, the motion vector is computed as a median of three neighboring macroblocks' motion vectors (FIG. 8). The coding engine 402 then uses the estimated motion vector to reconstruct an estimate of the lost macroblock's data content. Once the data contents of the missing macroblocks have been reconstructed, the macroblocks are displayed by the display device 302 in step 1030.
  • The invention has been explained above with reference to exemplary embodiments. It will be evident to those skilled in the art that various modifications may be made thereto without departing from the broader spirit and scope of the invention. Further, although the invention has been described in the context of its implementation in particular environments and for particular applications, those skilled in the art will recognize that the present invention's usefulness is not limited thereto and that the invention can be beneficially utilized in any number of environments and implementations. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (1)

1. A system for processing video data, comprising:
a coding engine for processing each frame of a video signal to generate macroblocks and to encode the macroblocks;
a macroblock reordering engine for assigning a macroblock group identifier (MBGID) from a plurality of MBGIDs to each encoded macroblock; and
a macroblock packetization engine for placing each of the encoded macroblocks into a particular data packet according to the MBGID.
US11/753,465 2001-08-23 2007-05-24 System and method for video error concealment Abandoned US20070230583A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/753,465 US20070230583A1 (en) 2001-08-23 2007-05-24 System and method for video error concealment

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US31441301P 2001-08-23 2001-08-23
US10/226,504 US7239662B2 (en) 2001-08-23 2002-08-23 System and method for video error concealment
US11/753,465 US20070230583A1 (en) 2001-08-23 2007-05-24 System and method for video error concealment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/226,504 Continuation US7239662B2 (en) 2001-08-23 2002-08-23 System and method for video error concealment

Publications (1)

Publication Number Publication Date
US20070230583A1 true US20070230583A1 (en) 2007-10-04

Family

ID=23219857

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/226,504 Active 2024-05-19 US7239662B2 (en) 2001-08-23 2002-08-23 System and method for video error concealment
US11/753,465 Abandoned US20070230583A1 (en) 2001-08-23 2007-05-24 System and method for video error concealment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/226,504 Active 2024-05-19 US7239662B2 (en) 2001-08-23 2002-08-23 System and method for video error concealment

Country Status (15)

Country Link
US (2) US7239662B2 (en)
EP (1) EP1421787A4 (en)
JP (2) JP4881543B2 (en)
KR (1) KR100691307B1 (en)
CN (1) CN100581238C (en)
AU (1) AU2002326713B2 (en)
BR (2) BRPI0212000B1 (en)
CA (1) CA2457882C (en)
IL (2) IL160476A0 (en)
MX (1) MXPA04001656A (en)
NO (2) NO339116B1 (en)
NZ (1) NZ531863A (en)
RU (1) RU2291586C2 (en)
WO (1) WO2003019939A1 (en)
ZA (1) ZA200401377B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9542611B1 (en) * 2011-08-11 2017-01-10 Harmonic, Inc. Logo detection for macroblock-based video processing
CN107888931A (en) * 2017-11-28 2018-04-06 上海大学 A kind of method using video statistics feature prediction error susceptibility

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965644B2 (en) * 1992-02-19 2005-11-15 8×8, Inc. Programmable architecture and methods for motion estimation
US8780970B2 (en) * 2001-12-21 2014-07-15 Polycom, Inc. Motion wake identification and control mechanism
BR0317943A (en) * 2003-01-10 2005-11-29 Thomson Licensing Sa Spatial error concealment based on intrapreview modes transmitted in a coded stream
KR20050098240A (en) * 2003-01-10 2005-10-11 톰슨 라이센싱 소시에떼 아노님 Technique for defining concealment order to minimize error propagation
US7827458B1 (en) 2003-03-03 2010-11-02 Apple Inc. Packet loss error recovery
US7817716B2 (en) * 2003-05-29 2010-10-19 Lsi Corporation Method and/or apparatus for analyzing the content of a surveillance image
US8705613B2 (en) * 2003-06-26 2014-04-22 Sony Corporation Adaptive joint source channel coding
US7826526B2 (en) * 2003-10-20 2010-11-02 Logitech Europe S.A. Methods and apparatus for encoding and decoding video data
US8582640B2 (en) * 2003-12-16 2013-11-12 Sony Corporation Adaptive joint source channel coding
US20050281339A1 (en) * 2004-06-22 2005-12-22 Samsung Electronics Co., Ltd. Filtering method of audio-visual codec and filtering apparatus
US20060013315A1 (en) * 2004-07-19 2006-01-19 Samsung Electronics Co., Ltd. Filtering method, apparatus, and medium used in audio-video codec
JP2006060813A (en) 2004-08-20 2006-03-02 Polycom Inc Error concealment in video decoder
EP1638337A1 (en) 2004-09-16 2006-03-22 STMicroelectronics S.r.l. Method and system for multiple description coding and computer program product therefor
US7543064B2 (en) * 2004-09-30 2009-06-02 Logitech Europe S.A. Multiplayer peer-to-peer connection across firewalls and network address translators using a single local port on the local host
US7463755B2 (en) * 2004-10-10 2008-12-09 Qisda Corporation Method for correcting motion vector errors caused by camera panning
US20060262860A1 (en) * 2005-02-23 2006-11-23 Chou Jim C Macroblock adaptive frame/field coding architecture for scalable coding
US7738468B2 (en) * 2005-03-22 2010-06-15 Logitech Europe S.A. Method and apparatus for packet traversal of a network address translation device
US9749655B2 (en) * 2005-05-11 2017-08-29 Qualcomm Incorporated Method and apparatus for unified error concealment framework
US9661376B2 (en) * 2005-07-13 2017-05-23 Polycom, Inc. Video error concealment method
US9055298B2 (en) 2005-07-15 2015-06-09 Qualcomm Incorporated Video encoding method enabling highly efficient partial decoding of H.264 and other transform coded information
US7916796B2 (en) * 2005-10-19 2011-03-29 Freescale Semiconductor, Inc. Region clustering based error concealment for video data
US9516326B1 (en) 2005-12-09 2016-12-06 Nvidia Corporation Method for rotating macro-blocks of a frame of a video stream
US9794593B1 (en) * 2005-12-09 2017-10-17 Nvidia Corporation Video decoder architecture for processing out-of-order macro-blocks of a video stream
US8238442B2 (en) * 2006-08-25 2012-08-07 Sony Computer Entertainment Inc. Methods and apparatus for concealing corrupted blocks of video data
US8375304B2 (en) * 2006-11-01 2013-02-12 Skyfire Labs, Inc. Maintaining state of a web page
US8711929B2 (en) * 2006-11-01 2014-04-29 Skyfire Labs, Inc. Network-based dynamic encoding
US8443398B2 (en) * 2006-11-01 2013-05-14 Skyfire Labs, Inc. Architecture for delivery of video content responsive to remote interaction
US9247260B1 (en) * 2006-11-01 2016-01-26 Opera Software Ireland Limited Hybrid bitmap-mode encoding
CN101202923B (en) * 2006-12-15 2010-09-01 扬智科技股份有限公司 Method for detecting code stream error of image decoder
CN102176751B (en) * 2006-12-27 2013-12-25 松下电器产业株式会社 Moving picture decoding apparatus and method
US20080184128A1 (en) * 2007-01-25 2008-07-31 Swenson Erik R Mobile device user interface for remote interaction
US7957307B2 (en) * 2007-03-14 2011-06-07 Microsoft Corporation Reducing effects of packet loss in video transmissions
KR101125846B1 (en) * 2007-03-23 2012-03-28 삼성전자주식회사 Method for transmitting image frame data based on packet system and apparatus thereof
US8582656B2 (en) * 2007-04-13 2013-11-12 Apple Inc. Method and system for video encoding and decoding
US8605779B2 (en) 2007-06-20 2013-12-10 Microsoft Corporation Mechanisms to conceal real time video artifacts caused by frame loss
DE102007058033A1 (en) * 2007-11-30 2009-06-04 Paterok, Peter, Dr. Method and apparatus for improved video output
FR2929466A1 (en) * 2008-03-28 2009-10-02 France Telecom DISSIMULATION OF TRANSMISSION ERROR IN A DIGITAL SIGNAL IN A HIERARCHICAL DECODING STRUCTURE
US20100104003A1 (en) * 2008-10-24 2010-04-29 Manufacturing Resources International Inc. System and method for securely transmitting video data
US9812047B2 (en) 2010-02-25 2017-11-07 Manufacturing Resources International, Inc. System and method for remotely monitoring the operating life of electronic displays
US8648858B1 (en) 2009-03-25 2014-02-11 Skyfire Labs, Inc. Hybrid text and image based encoding
CN102036061B (en) 2009-09-30 2012-11-21 华为技术有限公司 Video data transmission and sending processing method, device and network system
KR101457418B1 (en) * 2009-10-23 2014-11-04 삼성전자주식회사 Method and apparatus for video encoding and decoding dependent on hierarchical structure of coding unit
US20110249127A1 (en) * 2010-04-07 2011-10-13 Cisco Technology, Inc. Estimating Video Quality Corruption in Lossy Networks
JP5485851B2 (en) * 2010-09-30 2014-05-07 日本電信電話株式会社 Video encoding method, video decoding method, video encoding device, video decoding device, and programs thereof
CN103179468B (en) * 2011-12-22 2018-03-30 海尔集团公司 Multi-medium data transmitting device, system and method
GB2499831B (en) * 2012-03-02 2015-08-05 Canon Kk Method and device for decoding a bitstream
RU2485592C1 (en) * 2012-03-07 2013-06-20 Федеральное государственное унитарное предприятие "Государственный научно-исследовательский институт авиационных систем" Method of forming integer non-orthogonal decorrelating matrices of given dimensions and apparatus for realising said method
US9386326B2 (en) 2012-10-05 2016-07-05 Nvidia Corporation Video decoding error concealment techniques
US9479788B2 (en) * 2014-03-17 2016-10-25 Qualcomm Incorporated Systems and methods for low complexity encoding and background detection
JP6481457B2 (en) * 2015-03-26 2019-03-13 富士通株式会社 Moving picture coding apparatus, moving picture coding method, moving picture decoding apparatus, and moving picture decoding method
US10319408B2 (en) 2015-03-30 2019-06-11 Manufacturing Resources International, Inc. Monolithic display with separately controllable sections
US10922736B2 (en) 2015-05-15 2021-02-16 Manufacturing Resources International, Inc. Smart electronic display for restaurants
US10269156B2 (en) 2015-06-05 2019-04-23 Manufacturing Resources International, Inc. System and method for blending order confirmation over menu board background
JP6639653B2 (en) 2015-09-10 2020-02-05 マニュファクチャリング・リソーシズ・インターナショナル・インコーポレーテッド System and method for system detection of display errors
CN105611290B (en) * 2015-12-28 2019-03-26 惠州Tcl移动通信有限公司 A kind of processing method and system of the wireless transmission picture based on mobile terminal
US10319271B2 (en) 2016-03-22 2019-06-11 Manufacturing Resources International, Inc. Cyclic redundancy check for electronic displays
KR102204132B1 (en) 2016-05-31 2021-01-18 매뉴팩처링 리소시스 인터내셔널 인코포레이티드 Electronic display remote image verification system and method
US10510304B2 (en) 2016-08-10 2019-12-17 Manufacturing Resources International, Inc. Dynamic dimming LED backlight for LCD array
US10908863B2 (en) 2018-07-12 2021-02-02 Manufacturing Resources International, Inc. System and method for providing access to co-located operations data for an electronic display
CN109936624B (en) * 2019-01-31 2022-03-18 平安科技(深圳)有限公司 Adaptation method and device for HTTP request message header and computer equipment
US11402940B2 (en) 2019-02-25 2022-08-02 Manufacturing Resources International, Inc. Monitoring the status of a touchscreen
WO2020176416A1 (en) 2019-02-25 2020-09-03 Manufacturing Resources International, Inc. Monitoring the status of a touchscreen
US11921010B2 (en) 2021-07-28 2024-03-05 Manufacturing Resources International, Inc. Display assemblies with differential pressure sensors
US11895362B2 (en) 2021-10-29 2024-02-06 Manufacturing Resources International, Inc. Proof of play for images displayed at electronic displays

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5400076A (en) * 1991-11-30 1995-03-21 Sony Corporation Compressed motion picture signal expander with error concealment
US5481297A (en) * 1994-02-25 1996-01-02 At&T Corp. Multipoint digital video communication system
US5535275A (en) * 1993-07-08 1996-07-09 Sony Corporation Apparatus and method for producing scrambled digital video signals
US5557479A (en) * 1993-05-24 1996-09-17 Sony Corporation Apparatus and method for recording and reproducing digital video signal data by dividing the data and encoding it on multiple coding paths
US5583573A (en) * 1992-04-28 1996-12-10 Mitsubishi Denki Kabushiki Kaisha Video encoder and encoding method using intercomparisons of pixel values in selection of appropriation quantization values to yield an amount of encoded data substantialy equal to nominal amount
US5585930A (en) * 1993-05-10 1996-12-17 Matsushita Electric Inudstrial Co., Ltd. Apparatus for recording and reproducing a digital video signal
US5719724A (en) * 1995-09-07 1998-02-17 Sony Corporation Rotary head apparatus for recording and/or reproducing an information track to/from a magnetic tape having predetermined orientation characteristics to prevent unbalance of performance between the channel
US5724369A (en) * 1995-10-26 1998-03-03 Motorola Inc. Method and device for concealment and containment of errors in a macroblock-based video codec
US5754553A (en) * 1993-09-30 1998-05-19 Kabushiki Kaisha Toshiba Packet conversion apparatus and system
US5835144A (en) * 1994-10-13 1998-11-10 Oki Electric Industry Co., Ltd. Methods of coding and decoding moving-picture signals, using self-resynchronizing variable-length codes
US6115076A (en) * 1999-04-20 2000-09-05 C-Cube Semiconductor Ii, Inc. Compressed video recording device with non-destructive effects addition
US6124995A (en) * 1994-07-26 2000-09-26 Samsung Electronics Co., Ltd. Fixed bit-rate encoding method and apparatus therefor, and tracking method for high-speed search using the same
US6154495A (en) * 1995-09-29 2000-11-28 Kabushiki Kaisha Toshiba Video coding and video decoding apparatus for changing a resolution conversion according to a reduction ratio setting information signal
US6154780A (en) * 1996-12-18 2000-11-28 Intel Corporation Method and apparatus for transmission of a flexible and error resilient video bitstream
US6163868A (en) * 1997-10-23 2000-12-19 Sony Corporation Apparatus and method for providing robust error recovery for errors that occur in a lossy transmission environment
US6178289B1 (en) * 1996-06-29 2001-01-23 Samsung Electronics Co., Ltd. Video data shuffling method and apparatus
US6233392B1 (en) * 1997-02-19 2001-05-15 Thomson Consumer Electronics Constrained encoded motion vectors
US20010050955A1 (en) * 2000-03-24 2001-12-13 Cha Zhang Methods and arrangements for handling concentric mosaic image data
US6421385B1 (en) * 1997-10-01 2002-07-16 Matsushita Electric Industrial Co., Ltd. Apparatus and method for efficient conversion of DV (digital video) format encoded video data into MPEG format encoded video data by utilizing motion flag information contained in the DV data
US6639945B2 (en) * 1997-03-14 2003-10-28 Microsoft Corporation Method and apparatus for implementing motion detection in video compression
US6721362B2 (en) * 2001-03-30 2004-04-13 Redrock Semiconductor, Ltd. Constrained discrete-cosine-transform coefficients for better error detection in a corrupted MPEG-4 bitstreams
US6754271B1 (en) * 1999-04-15 2004-06-22 Diva Systems Corporation Temporal slice persistence method and apparatus for delivery of interactive program guide

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0286241A (en) * 1988-09-21 1990-03-27 Nippon Telegr & Teleph Corp <Ntt> Variable rate image hierarchy coding transmission system
US5576902A (en) * 1993-01-13 1996-11-19 Hitachi America, Ltd. Method and apparatus directed to processing trick play video data to compensate for intentionally omitted data
JPH0730896A (en) 1993-06-25 1995-01-31 Matsushita Electric Ind Co Ltd Moving vector coding and decoding method
JPH08256333A (en) * 1995-03-16 1996-10-01 Matsushita Electric Ind Co Ltd Method and device for image coding decoding
JP3400428B2 (en) * 1996-05-17 2003-04-28 松下電器産業株式会社 Image transmission method
JPH11298878A (en) * 1998-04-08 1999-10-29 Nec Corp Image scrambling method and device therefor
GB2347038A (en) * 1999-02-18 2000-08-23 Nokia Mobile Phones Ltd A video codec using re-transmission
JP2001078042A (en) * 1999-09-03 2001-03-23 Fuji Xerox Co Ltd Picture expansion processing device and picture compression processing device
JP3976975B2 (en) * 1999-12-22 2007-09-19 キヤノン株式会社 Image processing apparatus and method, and storage medium

Also Published As

Publication number Publication date
KR100691307B1 (en) 2007-03-12
ZA200401377B (en) 2005-07-27
CA2457882C (en) 2009-06-02
NO339116B1 (en) 2016-11-14
IL160476A0 (en) 2004-07-25
US7239662B2 (en) 2007-07-03
JP2012070391A (en) 2012-04-05
IL160476A (en) 2009-02-11
CA2457882A1 (en) 2003-03-06
AU2002326713B2 (en) 2006-12-14
RU2291586C2 (en) 2007-01-10
CN1679330A (en) 2005-10-05
RU2004105598A (en) 2005-07-20
CN100581238C (en) 2010-01-13
NZ531863A (en) 2005-10-28
NO343205B1 (en) 2018-12-03
WO2003019939A1 (en) 2003-03-06
NO20161599A1 (en) 2004-04-23
US20030039312A1 (en) 2003-02-27
MXPA04001656A (en) 2004-11-22
BR0212000A (en) 2004-09-28
NO20040754L (en) 2004-04-23
KR20040027982A (en) 2004-04-01
JP4881543B2 (en) 2012-02-22
EP1421787A1 (en) 2004-05-26
EP1421787A4 (en) 2008-10-08
JP2005501488A (en) 2005-01-13
BRPI0212000B1 (en) 2017-12-12

Similar Documents

Publication Publication Date Title
US7239662B2 (en) System and method for video error concealment
US7020203B1 (en) Dynamic intra-coded macroblock refresh interval for video error concealment
AU2002326713A1 (en) System and method for video error concealment
US9661376B2 (en) Video error concealment method
US8780970B2 (en) Motion wake identification and control mechanism
JP4494789B2 (en) Coding dynamic filters
CA2409499C (en) Video coding using the sequence numbers of reference pictures for error correction
US20110026592A1 (en) Intra block walk around refresh for h.264
US6614845B1 (en) Method and apparatus for differential macroblock coding for intra-frame data in video conferencing systems
EP1127467A1 (en) Error concealment in a video signal
Shiu et al. A DCT-domain H. 263 based video combiner for multipoint continuous presence video conferencing
Karlekar Content based robust video coding for videoconferencing

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION