US20100158130A1 - Video decoding method - Google Patents

Video decoding method

Info

Publication number
US20100158130A1
Authority
US
United States
Prior art keywords
video frame
video
frame
missing
undecodable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/340,872
Inventor
Ying-Jui Chen
Chung-Bin Wu
Ya-Ting Chuang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xueshan Technologies Inc
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MediaTek Inc
Priority to US12/340,872
Assigned to MEDIATEK INC. (Assignors: CHEN, YING-JUI; CHUANG, YA-TING; WU, CHUNG-BIN)
Priority to TW098107807A (TWI495344B)
Priority to CN2009101295216A (CN101765016B)
Priority to CN201210393821.7A (CN102905139B)
Publication of US20100158130A1
Priority to US13/343,593 (US9264729B2)
Priority to US14/991,830 (US10075726B2)
Assigned to XUESHAN TECHNOLOGIES INC. (Assignor: MEDIATEK INC.)
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: using predictive coding
    • H04N19/503: involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/10: using adaptive coding
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/172: the region being a picture, frame or field
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/85: using pre-processing or post-processing specially adapted for video compression
    • H04N19/89: involving methods or arrangements for detection of transmission errors at the decoder
    • H04N19/895: in combination with error concealment

Definitions

  • the invention relates in general to video decoding, and in particular, to a video decoding method capable of detecting and correcting missing or undecodable video frames.
  • Video CODECs typically comply with video coding standards such as MPEG 1/2/4 and H.26x to perform digital data manipulation and compression.
  • These compression techniques achieve relatively high compression ratios by discrete cosine transform (DCT) techniques and motion compensation (MC) techniques, so that the compressed video streams can be transmitted across various digital networks or stored in various storage media in an efficient manner.
  • Because MPEG 1/2/4 and H.26x video encoding employs compression schemes which encode later video frames based on earlier video frames, unrecoverable errors introduced into a video bitstream during transmission can render all the later dependent frames undecodable.
  • Typically, the video decoder skips the undecodable video frame and repeats an earlier decodable frame, resulting in abrupt scene changes or discontinuous movement due to a number of video frames being skipped, and thus in unpleasant viewing experiences for users. Therefore, there exists a need for video decoding methods capable of detecting unrecoverable errors in video bitstreams and reducing motion jerkiness to alleviate degradation of video quality due to these unrecoverable errors.
  • a video decoding method comprising providing a historical syntax element of a previous video frame, receiving a current video frame to determine a current syntax element therein, determining whether a high-level syntax error is present in the current syntax element, and upon detection of the high-level syntax error, determining a replaced syntax element according to the historical syntax element to replace the current error syntax element.
  • the high-level syntax error is a syntax error above a Macroblock layer.
  • a method for detecting a missing video frame comprising a demultiplexer receiving a Transport Stream to recover video Packetized Elementary Stream (PES) to determine a presentation time stamp (PTS) and a decoding time stamp (DTS) in a PES header of the PES, a decoder retrieving a video frame from the video PES to determine temporal reference of the video frame, and a controller receiving the PTS, the DTS, and the temporal reference to determine whether there is a missing video frame.
  • a video decoding method comprising decoding a current video frame to detect a missing video frame or an undecodable video frame, and in response to detecting the missing video frame or the undecodable video frame, replacing the missing video frame or the undecodable video frame with a closest decodable video frame in display order.
  • a video decoding method comprising decoding a current video frame to detect a missing video frame or an undecodable video frame, and upon detecting the missing video frame or the undecodable video frame, generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame, and in response to detecting the missing video frame or the undecodable video frame, replacing the missing video frame or the undecodable video frame with the replacement video frame.
  • a video decoding method comprising providing a historical syntax element of a previous video frame, receiving a current video frame to determine a current syntax element therein, determining whether a high-level syntax error is present in the current syntax element, upon detection of the high-level syntax error, determining a replaced syntax element according to the historical syntax element to replace the current error syntax element, decoding the syntax-replaced current video frame to detect whether there is a missing video frame or an undecodable video frame, and upon detection of the missing video frame or the undecodable video frame, generating a replacement video frame to substitute the missing or the undecodable video frame.
  • the high-level syntax error is a syntax error above a Macroblock layer.
  • FIG. 1 a shows an example of groups of pictures in a display order.
  • FIG. 1 b shows an example of groups of pictures in a decoding order.
  • FIG. 2 a shows another example of groups of pictures in a display order.
  • FIG. 2 b shows another example of groups of pictures in a decoding order.
  • FIG. 3 a illustrates effect of a corrupted video frame in groups of pictures comprising I-frames, P-frames, and B frames.
  • FIG. 3 b illustrates effect of a corrupted video frame in groups of pictures comprising only I-frames and P-frames.
  • FIG. 4 is a flowchart of a conventional video decoding method.
  • FIG. 5 is a flowchart of an exemplary video decoding method according to the invention.
  • FIG. 6 illustrates an embodiment of a high-level bitstream organization for parsing the high-level syntax elements according to the video decoding method in FIG. 5.
  • FIG. 7 shows data format of an elementary stream ES, a packetized elementary stream PES, and a transport stream TS.
  • FIG. 8 is a block diagram of an exemplary MPEG decoder according to the invention.
  • FIG. 9 is a flowchart of an exemplary detection method 9 for a missing video frame according to the invention, incorporating the MPEG decoder in FIG. 8.
  • the detection method 9 may be incorporated in step S 514 in FIG. 5 to detect a missing video frame in a group of pictures.
  • FIG. 10 is a flowchart of an exemplary video decoding method to substitute a missing video frame or an undecodable video frame according to the invention.
  • FIG. 11 illustrates an embodiment of substituting a replacement video frame for the missing or undecodable video frame according to the video decoding method in FIG. 10 .
  • FIG. 12 is a flowchart of another exemplary video decoding method to substitute a missing video frame or an undecodable video frame according to the invention.
  • FIG. 13 is a flowchart of a detailed video frame generation method incorporated in steps S 1208 or S 1212 in FIG. 12 .
  • FIG. 14 is a flowchart of a detailed MV extrapolation method incorporated in steps S 1304 in FIG. 13 .
  • FIG. 15 a illustrates an embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14 .
  • FIG. 15 b illustrates an embodiment for generating a forward motion vector for a P-frame according to the embodiment in FIG. 15 a.
  • FIG. 16 illustrates another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14 .
  • FIG. 17 illustrates yet another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14 .
  • FIG. 18 a illustrates an embodiment for generating a replacement video frame according to the pseudo-direct method in FIG. 13 .
  • FIG. 18 b illustrates an embodiment for generating a forward motion vector for a B-frame according to the method in FIG. 13 .
  • the video decoding scheme embodiments disclosed herein fully comply with the Moving Picture Experts Group (MPEG) standards.
  • FIG. 1 a shows groups of pictures (GOP) in a display order
  • FIG. 1 b shows groups of pictures in a decoding order.
  • the groups of pictures of FIGS. 1 a and 1 b comprise frame sequences of intra (I-frame), prediction (P-frame), or bidirectional (B-frame) frames, wherein each video frame is displayed at a fixed interval, and each video frame is represented by a letter indicating the type of frame and a number indicating the displaying order or the decoding order.
  • An I-frame is usually the first frame of a GOP, and is intra-coded or intra-prediction coded without temporal motion compensation.
  • a P-frame is predicted from an immediately preceding I-frame or P-frame.
  • a B-frame is predicted bidirectionally from preceding and succeeding I-frames or P-frames.
  • I-frames and P-frames are known as reference frames as they are used to predict future frames in the decoding order. Note that in more recent video coding standards it is also possible for a P-frame to reference multiple frames, for a B-frame to reference multiple preceding and succeeding frames, and for B-frames themselves to be used as reference frames.
  • the video decoder decodes sets of video frames GOP 0 and GOP 1 in a decoding order {I0, P1, B0, B1, P2, B2, B3, I1, B4, B5, P3, B6, B7} as shown in FIG. 1 b, while playing in a different display order {I0, B0, B1, P1, B2, B3, P2, B4, B5, I1, B6, B7, P3} as shown in FIG. 1 a.
  • Predictive coding and decoding of P-frames and B-frames are dependent on preceding and/or succeeding video frames, as decoding a later predictive video frame typically requires decoded data derived from decoding one or more earlier reference frames; thus, when an earlier reference frame is missing or undecodable, the later predictive video frames can no longer be decoded and displayed, resulting in motion jerkiness in the video and an unpleasant viewing experience for users.
  • FIGS. 2 a and 2 b show groups of pictures comprising only I-frames and P-frames in a display order and decoding order.
  • the video decoder decodes sets of video frames GOP 0 and GOP 1 in a decoding order {I0, P1, P2, P3, P4, P5, P6, I1, P7, P8, P9, P10, P11} as shown in FIG. 2 b, while playing in an identical display order {I0, P1, P2, P3, P4, P5, P6, I1, P7, P8, P9, P10, P11} as shown in FIG. 2 a.
  • Predictive coding and decoding of P-frames are dependent on preceding reference frames, as decoding a later predictive video frame typically requires decoded data derived from decoding one or more earlier reference frames; thus, when an earlier reference frame is missing or undecodable, the later predictive video frames can no longer be decoded and displayed, causing an unpleasant viewing experience for users.
  • FIG. 3 a illustrates effect of a corrupted video frame in a group of pictures comprising I-frames, P-frames, and B-frames.
  • FIG. 3 b illustrates effect of a corrupted video frame in a group of pictures comprising only I-frames and P-frames.
  • In FIG. 3 a, if the first I-frame I0 is undecodable, its dependent frames P1, B0, and B1 become undecodable, subsequently causing the remaining video frames of the entire GOP {P2, B2, B3} to be undecodable.
  • Video frames B 2 and B 3 may also be undecodable if they use forward reference.
  • If the first P-frame P1 is undecodable, its dependent frames P2, B2, and B3 are also undecodable, which in turn causes the subsequent video frames B4 and B5 to be undecodable if they use forward reference; i.e., a corrupted or missing prior reference frame may render all later video frames in decoding order undecodable.
  • In FIG. 3 b, if the first I-frame I0 is undecodable, all video frames in the same GOP are undecodable; if the second P-frame P2 is undecodable, the remaining frames of the GOP {P3, P4, P5, P6} are also undecodable.
  • the video decoder drops any undecodable video frame.
  • In the cases of FIGS. 3 a and 3 b, the video decoder drops the corrupted reference frame and all subsequent frames affected thereby, resulting in abrupt scene changes due to a number of pictures being skipped, causing an unpleasant viewing experience for viewers.
  • FIG. 4 is a flowchart of a conventional video decoding method.
  • a video decoder receives a video bitstream and locates a picture start thereof (S 402 ), parses high-level syntax elements (S 404 ) to determine whether the video frame is decodable (S 406 ), skips decoding of the video frame if the video frame is undecodable (S 408 ), or continues video decoding and processing of the decoded video data if otherwise (S 410 ).
  • the high-level syntax elements are syntax elements above the macroblock layer, and may be the syntax elements in a sequence header, group of picture header, or picture header.
  • the video frame is said to be undecodable when the reference frame is missing, high-level syntax error occurs, or when part or all of the picture data is corrupted. Because the video decoder skips all undecodable video frames in step S 408 , there might be abrupt scene changes due to a large number of pictures being left out, causing unpleasant viewing experience for viewers.
  • FIG. 5 shows a flowchart of an exemplary video decoding method incorporated by a video decoder.
  • the video decoder retrieves a video bitstream from a video buffer and locates the start of a video frame (S 502 ), provides a historical syntax element of a previous video frame (S 504 ), receives a current video frame to determine a current syntax element (S 506 ), determines whether a high-level syntax error is present in the current syntax element (S 508 ), and determines a replaced syntax element according to the historical syntax element to replace the current syntax element upon detection of the high-level syntax error (S 510 ).
  • each video frame contains a synchronization sequence at the beginning to indicate the start of a video frame, known as the picture start.
  • the timing relationship of the previous and current video frames is in terms of the decoding order, i.e., the previous video frame is decoded before a current one.
  • the high-level syntax error is a syntax error above a Macroblock layer, including the syntax elements in a sequence header, group of picture header, or picture header. For example, when a syntax element in the picture header exceeds a legal value boundary, the video decoder may identify it as a picture-layer syntax error. In step S 510 , the video decoder may assign the historical syntax element to be the replaced syntax element, or determine a likely value for the replaced syntax element based on the historical syntax elements if the syntax element has a periodic property.
  • When there is no error detected in the syntax elements or the high-level syntax element has been fixed by the replaced syntax element, the video decoder continues to perform MPEG decoding on the picture data in the current video frame in step S 512.
  • the video decoder decodes the current video frame to detect whether there is a missing video frame or the current video frame is undecodable, and upon detection of the missing video frame or undecodable video frame, generates a replacement video frame to substitute the missing or undecodable video frame (S 516 ).
  • the video decoder determines the video frame to be undecodable when the reference frame is missing, high-level syntax error occurs, or when part or all of the picture data is corrupted.
  • the generation of the replacement video frame comprises assigning a closest decodable video frame in display order to be the replacement video frame, or generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame.
  • FIG. 6 illustrates an embodiment of a high-level bitstream organization for parsing the high-level syntax elements according to the video decoding method in FIG. 5.
  • the video bitstream received by the video decoder undergoes various levels of data checks including the syntax checks for a sequence header (S 600 ), sequence extension (S 602 ), extension and user (S 604 ), group of picture header (S 606 ), user data (S 608 ), picture header (S 610 ), picture coding extension (S 612 ), sequence extension and user (S 614 ), and finally the video decoder performs predictive decoding on the picture data at the macroblock level (S 616 ). All syntax checks prior to the video decoding in step S 616 are within the scope of the high-level syntax check in the invention.
  • the video decoder parses the syntax elements in the sequence header (S 600 ), and two of the syntax elements aspect_ratio and frame_rate contain illegal value 0000.
  • the video decoder of the invention corrects these errors by assigning the value derived from the historical syntax element as the current syntax element (S 510 in FIG. 5) and continues video decoding of the picture data, using the available information to reduce the number of undecodable video frames in the picture set and enhance video quality.
  • the video decoder parses the syntax elements in the picture header (S 610 ), and detects an illegal value “100” for a syntax element “picture_coding_type”.
  • the video decoder carries out an estimation for the picture coding type of the current video frame to be either I, P, or B-frame based on the picture types of the historical syntax elements (S 510 ), and continues to perform video decoding.
  • FIG. 7 shows data format of an elementary stream ES, a packetized elementary stream PES, and a transport stream TS.
  • the video elementary data ES including encoded image data are packetized into an appropriate size to thereby generate a packetized ES (PES).
  • the packetized elementary stream PES is a specification defined by the MPEG communication protocol that allows the elementary stream ES to be divided into packets for data transmission.
  • the video or audio elementary stream ES are passed to a video or audio encoder to be converted to video or audio PES packets, and then be encapsulated inside the transport stream TS or program stream.
  • the video, audio and system TS packets can then be multiplexed and transmitted using broadcasting techniques, such as those used in an ATSC and DVB, which can be picked up by a receiver to perform demultiplex and decoding operations thereon to recover the video or audio elementary stream ES.
  • FIG. 8 is a block diagram of an exemplary receiver implementing the decoding method according to the invention, comprising demultiplexer 800 , video buffer 810 , video decoder 812 , video controller 814 , audio buffer 820 , audio decoder 822 , system buffer 830 , and system decoder 832 .
  • the demultiplexer 800 is coupled to the video buffer 810 , audio buffer 820 , and system buffer 830 .
  • the video buffer 810 is coupled to the video decoder 812 , and subsequently coupled to the video controller 814 .
  • the audio buffer 820 is coupled to the audio decoder 822
  • the system buffer 830 is coupled to the system decoder 832 .
  • the input transport stream TS is demultiplexed into video, audio, or system TS packets by the demultiplexer 800 according to a selection signal Sel, and the video, audio and system TS packets are passed to the video buffer 810 , audio buffer 820 , and system buffer 830 respectively.
  • a TS header is removed from the video TS packet to provide a video PES VPES, and then the video PES VPES is stored in the video buffer 810. Meanwhile, the TS headers are stripped off from the audio and system TS packets to provide an audio PES APES and system data D sys to be stored in the audio buffer 820 and system buffer 830, respectively.
  • the video decoder 812 obtains a video frame F V from the video buffer 810 by removing a PES header at the beginning of the video PES data.
  • the PES header contains Presentation Time Stamp (PTS) and Decoding Time Stamp (DTS) information in an optional field thereof, which can be used to identify if there is a missing video frame.
  • audio frame F A and system data may be obtained by removing the PES headers at the audio decoder 822 and system decoder 832 .
  • After locating the picture start of the video frame, the video decoder 812 also acquires temporal reference information T ref successive thereto, and transfers the PTS, the DTS, and the temporal reference T ref to the video controller 814.
  • the video controller 814 receives the PTS, the DTS, and the temporal reference T ref to determine whether there is a missing video frame, and informs the video decoder 812 of the determination of a missing frame by signal D miss .
  • Upon detection of a missing frame, the video decoder 812 performs video generation of a replacement video frame and performs video predictive decoding based on the replacement video frame.
  • the Decoding Time Stamp (DTS) indicates the time at which a video frame F V should be instantaneously removed from the video buffer 810 and decoded by the video decoder 812.
  • the Presentation Time Stamp indicates the instant at which the decoded video frame F V should be removed from the receiver buffer, and presented for display.
  • the PTS or DTS is required to occur in the bitstream at intervals not exceeding 700 ms.
  • the temporal reference T ref is reset to 0 after a GOP header, and is incremented by one for each video frame in display order.
  • the video controller 814 may determine that there is no missing picture when the current temporal reference is consecutive to the previous temporal reference in display order, the current DTS of the current video frame does not exceed the previous DTS of the previous video frame by 700 ms, and the current PTS of the current video frame does not exceed the previous PTS of the previous video frame by 700 ms; otherwise the video controller 814 can indicate a missing video frame by signal D miss .
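  • As an illustration of this check, a minimal sketch is given below; the dictionary representation, field names, and the consecutive/wrap handling are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of the missing-frame check described above. Field names and
# the dict representation are illustrative assumptions, not the patent's API.
MAX_GAP_MS = 700  # interval bound taken from the 700 ms figure above

def frame_missing(prev, cur):
    """prev/cur: dicts with 'pts' and 'dts' in milliseconds and 'temporal_ref'."""
    # The temporal reference restarts from 0 after each GOP header, so a wrap
    # back to 0 is treated as consecutive as well.
    consecutive = (cur["temporal_ref"] == prev["temporal_ref"] + 1
                   or cur["temporal_ref"] == 0)
    dts_ok = cur["dts"] - prev["dts"] <= MAX_GAP_MS
    pts_ok = cur["pts"] - prev["pts"] <= MAX_GAP_MS
    return not (consecutive and dts_ok and pts_ok)
```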
  • FIG. 9 is a flowchart of an exemplary detection method for a missing video frame according to the invention, incorporating the receiver in FIG. 8.
  • the detection method 9 may be incorporated in step S 514 in FIG. 5 to detect a missing video frame in a group of pictures.
  • Upon start of the detection method 9 for a missing video frame (S 900), the demultiplexer 800 receives the input Transport Stream TS to recover the video Packetized Elementary Stream VPES and determines the presentation time stamp PTS and the decoding time stamp DTS in a PES header of the video PES (S 902).
  • the video decoder 812 retrieves the video frame F V from the video PES VPES and determines the temporal reference T ref of the video frame (S 904 ), and the video controller 814 receives the PTS, the DTS, and the temporal reference to determine whether there is a missing video frame (S 906 ).
  • Upon detection of a missing video frame, the video decoder 812 generates a replacement video frame to substitute for the missing video frame as provided in step S 516 in FIG. 5.
  • the generation of the replacement video frame may be by assigning a closest decodable video frame in display order to be the replacement video frame, or generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame.
  • FIG. 10 is a flowchart of an exemplary video decoding method to substitute for a missing video frame or an undecodable video frame according to the invention.
  • Upon start of the video decoding method 10 (S 1000), the video decoder retrieves a current video frame from a video buffer by locating the picture start of a video bitstream (S 1002), and parses a current syntax element (S 1004). In some embodiments, the video decoder checks and corrects the high-level syntax error before step S 1004. The detailed description of the detection and correction of the high-level syntax error is provided in the embodiment in FIG. 5, and is not repeated here.
  • In step S 1006, the video decoder determines whether there is a missing video frame in the group of pictures, and if so, replaces the missing video frame with a closest decodable video frame in display order (S 1008); otherwise it continues to determine whether the current video frame is decodable (S 1010), and replaces the undecodable video frame with a closest decodable video frame in display order upon the detection of an undecodable video frame (S 1012).
  • the video decoder then carries out the video predictive decoding on the current video frame to recover video data D V and performs relevant data processing thereon (S 1014 ), and exits the video decoding process 10 for the current video frame (S 1016 ).
  • the missing picture may be identified according to the detection method disclosed in FIG. 9 , i.e., determining whether there is a missing video frame using the PTS, the DTS, and the temporal reference information.
  • the video frame is undecodable when the reference frame is missing, or when part or all of the picture data is corrupted.
  • FIG. 11 illustrates an embodiment of substituting a replacement video frame for the missing or undecodable video frame according to the frame generation method in FIG. 10 .
  • a reference video frame P 2 is missing or undecodable, and the video decoder therefore generates a replacement for the missing or undecodable P-frame P 2 according to a closest decodable video frame in display order, e.g., B-frame B 3, so that the video decoder can carry out the predictive decoding for dependent video frames of P-frame P 2, reducing the number of undecodable video frames and preventing a serious degradation of viewing quality.
  • the replacement for P-frame P 2 can be either a reference frame (e.g. P 1) or a non-reference frame that only has forward reference (e.g. B 3).
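  • A minimal sketch of this substitution strategy is shown below; the frame list representation and the decodable flag are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch (not the patent's implementation) of picking the closest
# decodable frame in display order as the substitute for a missing or
# undecodable frame.
def closest_decodable(frames, bad_index):
    """frames: list of frames in display order, each with a .decodable flag."""
    best = None
    for i, frame in enumerate(frames):
        if i == bad_index or not frame.decodable:
            continue
        if best is None or abs(i - bad_index) < abs(best - bad_index):
            best = i
    return best  # index of the replacement frame, or None if nothing is decodable
```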
  • FIG. 12 is a flowchart of another exemplary video decoding method to substitute for a missing video frame or an undecodable video frame according to the invention.
  • Upon start of the video decoding method 12 (S 1200), the video decoder retrieves a video bitstream from a video buffer to locate the start of a picture (S 1202), and parses high-level syntax elements (S 1204). In some embodiments, the video decoder checks and corrects the high-level syntax error before step S 1204. The detailed description of the detection and correction of the high-level syntax error is provided in the embodiment in FIG. 5, and is not repeated here.
  • In step S 1206, the video decoder determines whether there is a missing video frame in the group of pictures; if so, it generates a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame, and replaces the missing video frame with the replacement video frame (S 1208); otherwise it continues to determine whether the current video frame is decodable (S 1210).
  • the current video frame is determined as undecodable if a reference frame of the current video frame is missing, or a picture-layer syntax error is detected.
  • If the current video frame is undecodable, the video decoder generates a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame, and replaces the undecodable video frame with the replacement video frame (S 1212). When neither a missing nor an undecodable video frame is detected, the video decoder carries out the video predictive decoding on the current video frame to recover video data D V and performs relevant data processing thereon (S 1214), and exits the video decoding process for the current video frame in step S 1216.
  • the missing picture may be identified according to the detection method disclosed in FIG. 9 , i.e., determining whether there is a missing video frame using the PTS, the DTS, and the temporal reference information.
  • the video frame is undecodable when the reference frame is missing, or when part or all of the picture data is corrupted.
  • the replacement video frames are generated according to the motion vectors of the decodable video frame, the temporal distance (time difference in display order) between the current video frame and the decodable video frame, and also the frame type of the missing or undecodable video frames.
  • FIG. 13 is a flowchart of an exemplary method for generating the replacement video frame, incorporated in steps S 1208 or S 1212 in FIG. 12 .
  • In step S 1300, the video decoder detects the missing or undecodable video frame as in steps S 1206 and S 1210, and determines whether the missing or undecodable video frame is a reference video frame (e.g. an I-frame or P-frame) in step S 1302. If so, the video decoder carries out motion vector (MV) extrapolation to generate the replacement video frame (S 1304); if not, the video decoder performs motion vector interpolation (pseudo-direct mode) to generate the replacement video frame (S 1306). After the replacement video frame is produced, the video decoder replaces the missing video frame or the undecodable video frame with the replacement video frame and the generation method 13 is exited (S 1308).
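  • The frame-type decision of FIG. 13 can be sketched as follows; the two helper names are placeholders for the extrapolation and interpolation routines sketched in the later embodiments.

```python
# Sketch of the frame-type decision in FIG. 13. The helper names below are
# placeholders, not functions defined by the patent.
def generate_replacement(frame):
    if frame.type in ("I", "P"):                    # reference frame
        return mv_extrapolate(frame)                 # step S1304
    return mv_interpolate_pseudo_direct(frame)       # step S1306 (B-frame)
```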
  • FIG. 14 is a flowchart of a detailed MV extrapolation method incorporated in steps S 1304 in FIG. 13 .
  • Upon start of the MV extrapolation method 14, the video decoder first determines whether the missing or undecodable video frame has a dependent non-reference frame (e.g. a B-frame), and goes to step S 1404 if so, and otherwise to step S 1406.
  • the replacement video frame comprises only forward motion vectors.
  • the replacement video frame is generated by extrapolating motion vectors of the preceding decodable dependent B-frame according to the temporal distances between the preceding decodable dependent B-frame and the current video frame, and between the other preceding decodable reference video frame and the current video frame, to generate the forward motion vectors of the replacement video frames.
  • the preceding decodable dependent B-frame precedes the current video frame in display order.
  • the replacement video frame comprises only forward motion vectors.
  • the replacement video frame is generated by extrapolating motion vectors of a preceding decodable reference frame according to the temporal distances between the preceding decodable reference frame, another preceding decodable frame, and the current video frame, to generate the forward motion vectors of the replacement video frames.
  • the preceding decodable frames precede the current video frame in display order.
  • MV extrapolation method 14 exits in step S 1408 .
  • FIG. 15 a illustrates an embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14 .
  • a reference frame P 2 is missing or undecodable (S 1404 )
  • the video decoder generates the replacement video frame by extrapolating motion vectors of a preceding decodable dependent B-frame B 3 according to the temporal distances between the preceding decodable dependent B-frame B 3 and the current video frame, and between the other preceding decodable video frame P 1 and the current video frame, to generate the forward motion vectors of the replacement video frames.
  • FIG. 15 b illustrates an embodiment for generating a forward motion vector for a P-frame according to the embodiment in FIG. 15 a.
  • the video decoder uses only the forward motion vector MV F of the B-frame B 3 and the temporal distances between frame B 3 and frame P 2, and between frame P 1 and frame P 2, to perform the MV extrapolation for generating the forward motion vector MV F of the replacement video frame, as shown at the MB in frame P 2 in the right-hand side illustration in FIG. 15 b. If the resulting forward motion vector MV F of the replacement video frame exceeds the frame boundary, an MV clipping technique can be applied to the resulting forward motion vector MV F.
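  • A hedged sketch of this extrapolation and clipping step is given below; the distance parameters, macroblock geometry, and function name are illustrative assumptions rather than the patent's implementation.

```python
# Hedged sketch of the extrapolation above: the forward motion vector of a
# decodable frame is rescaled by the ratio of temporal distances and clipped
# so the compensated block stays inside the picture. All names are illustrative.
def extrapolate_mv(mv, d_ref, d_target, mb_pos, frame_w, frame_h, mb_size=16):
    """mv spans d_ref frame intervals; return a vector spanning d_target
    intervals for the macroblock at mb_pos = (x, y)."""
    scale = d_target / d_ref
    dx, dy = mv[0] * scale, mv[1] * scale
    # MV clipping: keep the motion-compensated block inside the frame boundary.
    x, y = mb_pos
    dx = max(-x, min(dx, frame_w - mb_size - x))
    dy = max(-y, min(dy, frame_h - mb_size - y))
    return dx, dy
```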
  • FIG. 16 illustrates another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14 .
  • a reference frame P 2 is missing or undecodable (S 1404 )
  • the video decoder generates the replacement video frame by extrapolating motion vectors of the preceding decodable B-frame B 2 according to the temporal distances between the preceding decodable B-frame B 2 and the current video frame P 2 , and between the other preceding decodable P-frame P 1 and the preceding decodable B-frame B 2 , to generate the forward motion vectors of the replacement video frames.
  • the B-frame B 2 backward refers to the missing or undecodable frame P 2
  • a backward motion vector MV B of B 2 is properly scaled according to the temporal distance; for example, MV B of B 2 can be just negated and used to reference B 1 , or it can be halved, negated and then used to reference P 1 .
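  • The re-use of a backward motion vector described above can be sketched as follows; this is an assumed helper, with the negate and halve options taken from the example.

```python
# Sketch of re-using the backward motion vector of B2 as a forward vector for
# the replacement frame (assumed helper, not from the patent).
def forward_from_backward(mv_b, halve=False):
    dx, dy = mv_b
    if halve:                     # halve first when referencing the nearer frame
        dx, dy = dx / 2.0, dy / 2.0
    return -dx, -dy               # negate so the vector points backward in time
```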
  • FIG. 17 illustrates yet another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14 .
  • a reference frame P 3 is missing or undecodable (S 1406 )
  • the video decoder generates the replacement video frame by extrapolating motion vectors of a preceding decodable reference frame P 2 according to the temporal distance between the preceding decodable reference frame P 2 and another preceding decodable reference frame P 1 , and the temporal distance between the preceding decodable reference frame P 2 and the current video frame, to generate the forward motion vectors of the replacement video frame.
  • FIG. 18 a illustrates an embodiment for generating a replacement video frame according to the pseudo-direct method S 1306 in FIG. 13 .
  • the replacement video frame comprises forward and backward motion vectors.
  • the replacement video frame is generated by proportionally scaling the (forward) motion vectors of the succeeding decodable reference frame of the missing or undecodable bidirectional frame according to the temporal distance T F between the preceding decodable reference frame and the undecodable video frame, and the temporal distance T B between the succeeding decodable reference frame and the undecodable video frame, to generate the forward and backward motion vectors of the replacement video frame.
  • FIG. 18 a illustrates that when a bidirectional frame B 3 is missing or undecodable, the video decoder computes the forward motion vector MV F (B 3 ) and backward MV B (B 3 ) according to the following equation:
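  • The referenced equation is not reproduced in this text; a reconstruction consistent with the surrounding description (temporal scaling of the forward motion vector of the succeeding reference frame P 2 by the distances T F and T B) is:

```latex
MV_F(B_3) = \frac{T_F}{T_F + T_B}\,MV_F(P_2),
\qquad
MV_B(B_3) = -\,\frac{T_B}{T_F + T_B}\,MV_F(P_2)
\qquad (1)
```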
  • the preceding decodable reference frame is frame P 1
  • the succeeding decodable reference frame is frame P 2
  • the temporal distance T F is a time difference between the preceding decodable reference frame P 1 and the undecodable video frame B 3
  • the temporal distance T B is a time difference between the succeeding decodable reference frame P 2 and the undecodable video frame B 3 .
  • FIG. 18 b illustrates an embodiment for generating a forward motion vector for a B-frame according to the method in FIG. 13 and equation (1).
  • the replacement video frame for video frame B 3 comprises the forward motion vector MV F and the backward motion vector MV B , computed according to the forward motion vector of a succeeding reference frame P 2 and an equation (1).

Abstract

Video decoding methods are disclosed. The video decoding method comprises providing a historical syntax element of a previous video frame, receiving a current video frame to determine a current syntax element therein, determining whether a high-level syntax error is present in the current syntax element, wherein upon detection of the high-level syntax error, determining a replacement syntax element according to the historical syntax element to replace the current syntax element, decoding the replaced current video frame to detect whether there is a missing video frame or undecodable video frame, and upon detection of the missing video frame or undecodable video frame, generating a replacement video frame to substitute for the missing or the undecodable video frame.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The invention relates in general to video decoding, and in particular, to a video decoding method capable of detecting and correcting missing or undecodable video frames.
  • 2. Description of the Related Art
  • Various functionalities are implemented in video apparatuses in order to conveniently manipulate video data. Video CODECs (encoders/decoders) typically comply with video coding standards such as MPEG 1/2/4 and H.26x to perform digital data manipulation and compression. These compression techniques achieve relatively high compression ratios by discrete cosine transform (DCT) techniques and motion compensation (MC) techniques, so that the compressed video streams can be transmitted across various digital networks or stored in various storage media in an efficient manner.
  • However, since MPEG 1/2/4 and H.26x video encoding employs compression schemes which encode later video frames based on earlier video frames, when unrecoverable errors are introduced into a video bitstream during transmission, these errors found in earlier video frames can render all the later dependent frames undecodable. Typically, the video decoder skips the undecodable video frame and repeats an earlier decodable frame, resulting in abrupt scene changes or discontinuous movement due to a number of video frames being skipped, thus resulting in unpleasant viewing experiences for users. Therefore, there exists a need for video decoding methods capable of detecting unrecoverable errors in video bitstreams and reducing motion jerkiness to alleviate degradation of video quality due to these unrecoverable errors.
  • BRIEF SUMMARY OF THE INVENTION
  • A detailed description is given in the following embodiments with reference to the accompanying drawings.
  • A video decoding method is provided, comprising providing a historical syntax element of a previous video frame, receiving a current video frame to determine a current syntax element therein, determining whether a high-level syntax error is present in the current syntax element, and upon detection of the high-level syntax error, determining a replaced syntax element according to the historical syntax element to replace the current error syntax element. The high-level syntax error is a syntax error above a Macroblock layer.
  • According to another aspect of the invention, a method for detecting a missing video frame is disclosed, comprising a demultiplexer receiving a Transport Stream to recover video Packetized Elementary Stream (PES) to determine a presentation time stamp (PTS) and a decoding time stamp (DTS) in a PES header of the PES, a decoder retrieving a video frame from the video PES to determine temporal reference of the video frame, and a controller receiving the PTS, the DTS, and the temporal reference to determine whether there is a missing video frame.
  • According to another aspect of the invention, a video decoding method is disclosed, comprising decoding a current video frame to detect a missing video frame or an undecodable video frame, and in response to detecting the missing video frame or the undecodable video frame, replacing the missing video frame or the undecodable video frame with a closest decodable video frame in display order.
  • According to yet another aspect of the invention, a video decoding method is disclosed, comprising decoding a current video frame to detect a missing video frame or an undecodable video frame, and upon detecting the missing video frame or the undecodable video frame, generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame, and in response to detecting the missing video frame or the undecodable video frame, replacing the missing video frame or the undecodable video frame with the replacement video frame.
  • According to still another aspect of the invention, a video decoding method is disclosed, comprising providing a historical syntax element of a previous video frame, receiving a current video frame to determine a current syntax element therein, determining whether a high-level syntax error is present in the current syntax element, upon detection of the high-level syntax error, determining a replaced syntax element according to the historical syntax element to replace the current error syntax element, decoding the syntax-replaced current video frame to detect whether there is a missing video frame or an undecodable video frame, and upon detection of the missing video frame or the undecodable video frame, generating a replacement video frame to substitute the missing or the undecodable video frame. The high-level syntax error is a syntax error above a Macroblock layer.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
  • FIG. 1 a shows an example of groups of pictures in a display order.
  • FIG. 1 b shows an example of groups of pictures in a decoding order.
  • FIG. 2 a shows another example of groups of pictures in a display order.
  • FIG. 2 b shows another example of groups of pictures in a decoding order.
  • FIG. 3 a illustrates effect of a corrupted video frame in groups of pictures comprising I-frames, P-frames, and B frames.
  • FIG. 3 b illustrates effect of a corrupted video frame in groups of pictures comprising only I-frames and P-frames.
  • FIG. 4 is a flowchart of a conventional video decoding method.
  • FIG. 5 is a flowchart of an exemplary video decoding method according to the invention.
  • FIG. 6 illustrates an embodiment of a high-level bitstream organization for parsing the high-level syntax elements according to the video decoding method in FIG. 5.
  • FIG. 7 shows data format of an elementary stream ES, a packetized elementary stream PES, and a transport stream TS.
  • FIG. 8 is a block diagram of an exemplary MPEG decoder according to the invention.
  • FIG. 9 is a flowchart of an exemplary detection method 9 for a missing video frame according to the invention, incorporating the MPEG decoder in FIG. 8. The detection method 9 may be incorporated in step S514 in FIG. 5 to detect a missing video frame in a group of pictures.
  • FIG. 10 is a flowchart of an exemplary video decoding method to substitute a missing video frame or an undecodable video frame according to the invention.
  • FIG. 11 illustrates an embodiment of substituting a replacement video frame for the missing or undecodable video frame according to the video decoding method in FIG. 10.
  • FIG. 12 is a flowchart of another exemplary video decoding method to substitute a missing video frame or an undecodable video frame according to the invention.
  • FIG. 13 is a flowchart of a detailed video frame generation method incorporated in steps S1208 or S1212 in FIG. 12.
  • FIG. 14 is a flowchart of a detailed MV extrapolation method incorporated in steps S1304 in FIG. 13.
  • FIG. 15 a illustrates an embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14.
  • FIG. 15 b illustrates an embodiment for generating a forward motion vector for a P-frame according to the embodiment in FIG. 15 a.
  • FIG. 16 illustrates another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14.
  • FIG. 17 illustrates yet another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14.
  • FIG. 18 a illustrates an embodiment for generating a replacement video frame according to the pseudo-direct method in FIG. 13.
  • FIG. 18 b illustrates an embodiment for generating a forward motion vector for a B-frame according to the method in FIG. 13.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
  • The video decoding scheme embodiments disclosed herein fully comply with the Moving Picture Experts Group (MPEG) standards.
  • When a video decoder decodes and plays video data, the decoding order and the display order may be different, as shown in FIGS. 1 a and 1 b. FIG. 1 a shows groups of pictures (GOP) in a display order and FIG. 1 b shows groups of pictures in a decoding order. The groups of pictures of FIGS. 1 a and 1 b comprise frame sequences of intra (I-frame), prediction (P-frame), or bidirectional (B-frame) frames, wherein each video frame is displayed at a fixed interval, and each video frame is represented by a letter indicating the type of frame and a number indicating the display order or the decoding order. An I-frame is usually the first frame of a GOP, and is intra-coded or intra-prediction coded without temporal motion compensation. A P-frame is predicted from an immediately preceding I-frame or P-frame. A B-frame is predicted bidirectionally from preceding and succeeding I-frames or P-frames. I-frames and P-frames are known as reference frames as they are used to predict future frames in the decoding order. Note that in more recent video coding standards it is also possible for a P-frame to reference multiple frames, for a B-frame to reference multiple preceding and succeeding frames, and for B-frames themselves to be used as reference frames. The video decoder decodes sets of video frames GOP0 and GOP1 in a decoding order {I0, P1, B0, B1, P2, B2, B3, I1, B4, B5, P3, B6, B7} as shown in FIG. 1 b, while playing in a different display order {I0, B0, B1, P1, B2, B3, P2, B4, B5, I1, B6, B7, P3} as shown in FIG. 1 a. Predictive coding and decoding of P-frames and B-frames are dependent on preceding and/or succeeding video frames, as decoding a later predictive video frame typically requires decoded data derived from decoding one or more earlier reference frames; thus, when an earlier reference frame is missing or undecodable, the later predictive video frames can no longer be decoded and displayed, resulting in motion jerkiness in the video and an unpleasant viewing experience for users.
  • FIGS. 2 a and 2 b show groups of pictures comprising only I-frames and P-frames in a display order and a decoding order. The video decoder decodes sets of video frames GOP0 and GOP1 in a decoding order {I0, P1, P2, P3, P4, P5, P6, I1, P7, P8, P9, P10, P11} as shown in FIG. 2 b, while playing in an identical display order {I0, P1, P2, P3, P4, P5, P6, I1, P7, P8, P9, P10, P11} as shown in FIG. 2 a. Predictive coding and decoding of P-frames are dependent on preceding reference frames, as decoding a later predictive video frame typically requires decoded data derived from decoding one or more earlier reference frames; thus, when an earlier reference frame is missing or undecodable, the later predictive video frames can no longer be decoded and displayed, causing an unpleasant viewing experience for users.
  • FIG. 3 a illustrates the effect of a corrupted video frame in a group of pictures comprising I-frames, P-frames, and B-frames. FIG. 3 b illustrates the effect of a corrupted video frame in a group of pictures comprising only I-frames and P-frames. In FIG. 3 a, if the first I-frame I0 is undecodable, its dependent frames P1, B0, and B1 become undecodable, subsequently causing the remaining video frames of the entire GOP {P2, B2, B3} to be undecodable. Video frames B2 and B3 may also be undecodable if they use forward reference. If the first P-frame P1 is undecodable, its dependent frames P2, B2, and B3 are also undecodable, which in turn causes the subsequent video frames B4 and B5 to be undecodable if they use forward reference; i.e., a corrupted or missing prior reference frame may render all later video frames in decoding order undecodable. In FIG. 3 b, if the first I-frame I0 is undecodable, all video frames in the same GOP are undecodable, and if the second P-frame P2 is undecodable, the remaining frames of the GOP {P3, P4, P5, P6} are also undecodable. In the conventional video decoding approach, the video decoder drops any undecodable video frame. In the cases in FIGS. 3 a and 3 b, the video decoder drops the corrupted reference frame and all subsequent frames affected thereby, resulting in abrupt scene changes due to a number of pictures being skipped, causing an unpleasant viewing experience for viewers.
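  • To make this propagation concrete, the following sketch (illustrative only, not from the patent) marks every frame whose reference chain reaches a corrupted frame as undecodable:

```python
# Illustrative sketch (not from the patent) of how one corrupted reference
# frame propagates: every frame whose reference chain reaches the corrupted
# frame becomes undecodable.
def undecodable_set(refs, corrupted):
    """refs: dict mapping frame name -> list of reference-frame names."""
    bad = {corrupted}
    changed = True
    while changed:
        changed = False
        for name, deps in refs.items():
            if name not in bad and any(d in bad for d in deps):
                bad.add(name)
                changed = True
    return bad

# Example for FIG. 3a: losing I0 takes down the whole GOP.
gop = {"I0": [], "P1": ["I0"], "B0": ["I0", "P1"], "B1": ["I0", "P1"],
       "P2": ["P1"], "B2": ["P1", "P2"], "B3": ["P1", "P2"]}
# undecodable_set(gop, "I0") == {"I0", "P1", "B0", "B1", "P2", "B2", "B3"}
```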
  • FIG. 4 is a flowchart of a conventional video decoding method. Upon start of video decoding, a video decoder receives a video bitstream and locates a picture start thereof (S402), parses high-level syntax elements (S404) to determine whether the video frame is decodable (S406), skips decoding of the video frame if the video frame is undecodable (S408), or otherwise continues video decoding and processing of the decoded video data (S410). The high-level syntax elements are syntax elements above the macroblock layer, and may be the syntax elements in a sequence header, group of picture header, or picture header. The video frame is said to be undecodable when a reference frame is missing, a high-level syntax error occurs, or when part or all of the picture data is corrupted. Because the video decoder skips all undecodable video frames in step S408, there might be abrupt scene changes due to a large number of pictures being left out, causing an unpleasant viewing experience for viewers.
  • To counter this problem, a video decoding scheme according to an embodiment of the present invention is provided in FIG. 5, showing a flowchart of an exemplary video decoding method incorporated by a video decoder.
  • Upon start of the video decoding method 5 (S500), the video decoder retrieves a video bitstream from a video buffer and locates the start of a video frame (S502), provides a historical syntax element of a previous video frame (S504), receives a current video frame to determine a current syntax element (S506), determines whether a high-level syntax error is present in the current syntax element (S508), and determines a replaced syntax element according to the historical syntax element to replace the current syntax element upon detection of the high-level syntax error (S510). In MPEG systems, each video frame contains a synchronization sequence at the beginning to indicate the start of a video frame, known as the picture start. The timing relationship of the previous and current video frames is in terms of the decoding order, i.e., the previous video frame is decoded before a current one. The high-level syntax error is a syntax error above a Macroblock layer, including the syntax elements in a sequence header, group of picture header, or picture header. For example, when a syntax element in the picture header exceeds a legal value boundary, the video decoder may identify it as a picture-layer syntax error. In step S510, the video decoder may assign the historical syntax element to be the replaced syntax element, or determine a likely value for the replaced syntax element based on the historical syntax elements if the syntax element has a periodic property.
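  • A minimal sketch of the syntax check and replacement in steps S504 to S510 is given below; the legal-value table and field names are illustrative assumptions rather than normative MPEG ranges.

```python
# Minimal sketch of steps S504-S510, assuming a simple table of legal values
# (the table contents and field names are illustrative, not normative MPEG
# ranges).
LEGAL = {
    "aspect_ratio": range(1, 16),        # 0000 is a forbidden value
    "frame_rate": range(1, 15),          # 0000 is a forbidden value
    "picture_coding_type": (1, 2, 3),    # I-, P-, B-picture
}

def fix_high_level_syntax(current, historical):
    """Replace illegal header fields with the values of the previous frame."""
    repaired = dict(current)
    for field, legal in LEGAL.items():
        if repaired.get(field) not in legal:
            repaired[field] = historical[field]    # step S510
    return repaired
```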
  • When no error is detected in the syntax elements, or the high-level syntax error has been fixed by the replacement syntax element, the video decoder continues to perform MPEG decoding on the picture data in the current video frame in step S512. Next, in step S514, the video decoder decodes the current video frame to detect whether there is a missing video frame or the current video frame is undecodable, and upon detection of the missing video frame or undecodable video frame, generates a replacement video frame to substitute for the missing or undecodable video frame (S516). The video decoder determines the video frame to be undecodable when a reference frame is missing, a high-level syntax error occurs, or part or all of the picture data is corrupted. The generation of the replacement video frame comprises assigning a closest decodable video frame in display order to be the replacement video frame, or generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame.
  • FIG. 6 illustrates an embodiment of a high-level bitstream organization for parsing the high-level syntax elements according to the video decoding method in FIG. 5. The video bitstream received by the video decoder undergoes various levels of data checks, including the syntax checks for a sequence header (S600), sequence extension (S602), extension and user (S604), group of picture header (S606), user data (S608), picture header (S610), picture coding extension (S612), and sequence extension and user (S614), and finally the video decoder performs predictive decoding on the picture data at the macroblock level (S616). All syntax checks prior to the video decoding in step S616 are within the scope of the high-level syntax check of the invention.
  • In one embodiment, the video decoder parses the syntax elements in the sequence header (S600), and two of the syntax elements, aspect_ratio and frame_rate, contain the illegal value 0000. Instead of outputting the error response “forbidden value in 13818-2” to indicate the syntax errors and stopping decoding and data processing as in the conventional art, the video decoder of the invention corrects these errors by assigning the value derived from the historical syntax element as the current syntax element (S510 in FIG. 5) and continues video decoding of the picture data, using the available information to reduce the number of undecodable video frames in the picture set and enhance video quality. In another embodiment, the video decoder parses the syntax elements in the picture header (S610) and detects an illegal value “100” for the syntax element “picture_coding_type”. Instead of dropping the undecodable data as in the conventional art, the video decoder estimates the picture coding type of the current video frame as an I-, P-, or B-frame based on the picture types of the historical syntax elements (S510), and continues to perform video decoding.
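  • The periodic-property estimate mentioned for picture_coding_type can be pictured as a search for the repeating picture-type pattern of the GOP; the heuristic below is an illustrative assumption, not an estimator prescribed by the invention.

```python
# Sketch of estimating an illegal picture_coding_type from the periodic pattern of
# previously decoded frames (assumed heuristic for illustration only).

def estimate_picture_coding_type(history_types):
    """history_types: picture types of prior frames in decoding order, e.g.
    ['P', 'B', 'B', 'P', ...]; returns the most likely type of the current frame."""
    if not history_types:
        return "I"                                   # safest default: an intra frame
    for period in range(1, len(history_types)):      # shortest period explaining the history
        if all(history_types[i] == history_types[i % period]
               for i in range(len(history_types))):
            return history_types[len(history_types) % period]
    return history_types[-1]                         # no periodicity: repeat the last type

print(estimate_picture_coding_type(["P", "B", "B", "P", "B", "B"]))   # -> 'P'
```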
  • FIG. 7 shows the data format of an elementary stream ES, a packetized elementary stream PES, and a transport stream TS. Under MPEG transmission standards, the video elementary data ES, including encoded image data, is packetized into an appropriate size to thereby generate a packetized ES (PES). The packetized elementary stream PES is a specification defined by the MPEG communication protocol that allows the elementary stream ES to be divided into packets for data transmission. Typically, the video or audio elementary stream ES is packetized into video or audio PES packets, which are then encapsulated inside the transport stream TS or program stream. The video, audio, and system TS packets can then be multiplexed and transmitted using broadcasting techniques, such as those used in ATSC and DVB, and picked up by a receiver that performs demultiplexing and decoding operations thereon to recover the video or audio elementary stream ES.
  • FIG. 8 is a block diagram of an exemplary receiver implementing the decoding method according to the invention, comprising demultiplexer 800, video buffer 810, video decoder 812, video controller 814, audio buffer 820, audio decoder 822, system buffer 830, and system decoder 832. The demultiplexer 800 is coupled to the video buffer 810, audio buffer 820, and system buffer 830. The video buffer 810 is coupled to the video decoder 812, which in turn is coupled to the video controller 814. The audio buffer 820 is coupled to the audio decoder 822, and the system buffer 830 is coupled to the system decoder 832. The input transport stream TS is demultiplexed into video, audio, or system TS packets by the demultiplexer 800 according to a selection signal Sel, and the video, audio, and system TS packets are passed to the video buffer 810, audio buffer 820, and system buffer 830, respectively. A TS header is removed from the video TS packet to provide a video PES VPES, and the video PES VPES is then stored in the video buffer 810. Meanwhile, the TS headers are stripped from the audio and system TS packets to provide an audio PES APES and system data Dsys, which are stored in the audio buffer 820 and system buffer 830, respectively.
  • The video decoder 812 obtains a video frame FV from the video buffer 810 by removing a PES header at the beginning of the video PES data. The PES header contains Presentation Time Stamp (PTS) and Decoding Time Stamp (DTS) information in an optional field thereof, which can be used to identify whether there is a missing video frame. Similarly, an audio frame FA and system data may be obtained by removing the PES headers at the audio decoder 822 and system decoder 832. After locating the picture start of the video frame, the video decoder 812 also acquires the temporal reference information Tref that follows it, and transfers the PTS, the DTS, and the temporal reference Tref to the video controller 814. The video controller 814 receives the PTS, the DTS, and the temporal reference Tref to determine whether there is a missing video frame, and informs the video decoder 812 of the determination of a missing frame by a signal Dmiss. Upon detection of a missing frame, the video decoder 812 generates a replacement video frame and performs video predictive decoding based on the replacement video frame.
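  • For illustration, the sketch below pulls the PTS and DTS out of a PES packet's optional header, following the MPEG-2 systems PES syntax; it assumes a well-formed header and omits error handling and all other optional fields, so it should be read as an illustrative parser rather than a complete one.

```python
# Illustrative extraction of PTS/DTS from a PES optional header (assumes a
# well-formed packet; all other optional fields and error handling are omitted).

def parse_timestamp(b):
    """b: 5 bytes carrying a 33-bit PTS or DTS with interleaved marker bits."""
    return (((b[0] >> 1) & 0x07) << 30) | (b[1] << 22) | (((b[2] >> 1) & 0x7F) << 15) \
           | (b[3] << 7) | ((b[4] >> 1) & 0x7F)

def parse_pes_timestamps(pes):
    """pes: bytes of a PES packet starting at the 0x000001 start-code prefix.
    Returns (pts, dts); each is None when not signalled by PTS_DTS_flags."""
    assert pes[0:3] == b"\x00\x00\x01"
    flags = pes[7] >> 6              # PTS_DTS_flags: '10' = PTS only, '11' = PTS and DTS
    pts = parse_timestamp(pes[9:14]) if flags & 0b10 else None
    dts = parse_timestamp(pes[14:19]) if flags == 0b11 else None
    return pts, dts

# A video PES header (stream_id 0xE0) carrying only a PTS of 90000 (1 s at 90 kHz).
pes = b"\x00\x00\x01\xe0\x00\x00\x80\x80\x05" + bytes([0x21, 0x00, 0x05, 0xBF, 0x21])
print(parse_pes_timestamps(pes))     # -> (90000, None)
```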
  • The Decoding Time Stamp (DTS) indicates the time at which a video frame FV should be instantaneously removed from the video buffer 810 and decoded by the video decoder 812. The Presentation Time Stamp (PTS) indicates the instant at which the decoded video frame FV should be removed from the receiver buffer and presented for display. The PTS or DTS is required to appear in the bitstream at intervals not exceeding 700 ms. The temporal reference Tref is reset to 0 after a GOP header and is incremented by one for each video frame in display order. The video controller 814 may determine that there is no missing picture when the current temporal reference is consecutive to the previous temporal reference in display order, the current DTS of the current video frame does not exceed the previous DTS of the previous video frame by more than 700 ms, and the current PTS of the current video frame does not exceed the previous PTS of the previous video frame by more than 700 ms; otherwise, the video controller 814 can indicate a missing video frame by the signal Dmiss.
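  • The controller's decision can be summarized by the rule-based test below, a minimal sketch assuming timestamps in 90 kHz system-clock ticks and a simplified handling of the Tref reset at a GOP boundary.

```python
# Sketch of the missing-frame test: the sequence is intact only if Tref advances by
# one in display order and neither the DTS nor the PTS jumps by more than 700 ms.
# Timestamps are assumed to be 90 kHz system-clock ticks.

MAX_GAP_TICKS = 700 * 90             # 700 ms at 90 ticks per millisecond (90 kHz clock)

def frame_is_missing(prev, cur):
    """prev/cur: dicts with 'tref', 'dts', 'pts' of two consecutive frames; returns
    True when the controller should assert Dmiss."""
    tref_ok = cur["tref"] == prev["tref"] + 1 or cur["tref"] == 0   # Tref resets after a GOP header
    dts_ok = (cur["dts"] - prev["dts"]) <= MAX_GAP_TICKS
    pts_ok = (cur["pts"] - prev["pts"]) <= MAX_GAP_TICKS
    return not (tref_ok and dts_ok and pts_ok)

prev = {"tref": 3, "dts": 180_000, "pts": 183_000}
cur  = {"tref": 5, "dts": 187_200, "pts": 190_200}   # Tref jumps from 3 to 5: a frame is missing
print(frame_is_missing(prev, cur))                    # -> True
```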
  • FIG. 9 is a flowchart of an exemplary detection method for a missing video frame according to the invention, using the receiver in FIG. 8. The detection method 9 may be incorporated in step S514 in FIG. 5 to detect a missing video frame in a group of pictures.
  • Upon start of the detection method 9 for a missing video frame (S900), the demultiplexer 800 receives the input transport stream TS to recover the video packetized elementary stream VPES and determines the presentation time stamp PTS and the decoding time stamp DTS in a PES header of the video PES (S902). The video decoder 812 retrieves the video frame FV from the video PES VPES and determines the temporal reference Tref of the video frame (S904), and the video controller 814 receives the PTS, the DTS, and the temporal reference to determine whether there is a missing video frame (S906). Upon detection of a missing video frame, the video decoder 812 generates a replacement video frame to substitute for the missing video frame, as provided in step S516 in FIG. 5. The generation of the replacement video frame may comprise assigning a closest decodable video frame in display order to be the replacement video frame, or generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame.
  • FIG. 10 is a flowchart of an exemplary video decoding method to substitute for a missing video frame or an undecodable video frame according to the invention.
  • Upon start of the video decoding method 10 (S1000), the video decoder retrieves a current video frame from a video buffer by locating the picture start of a video bitstream (S1002), and parses a current syntax element (S1004). In some embodiments, the video decoder checks and corrects the high-level syntax error before step S1004. The detailed description of the detection and correction of the high-level syntax error is provided in the embodiment in FIG. 5 and is not repeated here. In step S1006, the video decoder determines whether there is a missing video frame in the group of pictures, and if so, replaces the missing video frame with a closest decodable video frame in display order (S1008); otherwise it continues to determine whether the current video frame is decodable (S1010), and replaces the undecodable video frame with a closest decodable video frame in display order upon detection of an undecodable video frame (S1012). When no missing or undecodable video frame is detected, the video decoder carries out the video predictive decoding on the current video frame to recover video data DV, performs relevant data processing thereon (S1014), and exits the video decoding process 10 for the current video frame (S1016).
  • In step S1006, the missing picture may be identified according to the detection method disclosed in FIG. 9, i.e., determining whether there is a missing video frame using the PTS, the DTS, and the temporal reference information. In step S1010, the video frame is undecodable when the reference frame is missing, or when part or all of the picture data is corrupted.
  • FIG. 11 illustrates an embodiment of substituting a replacement video frame for the missing or undecodable video frame according to the frame generation method in FIG. 10. In one embodiment, a reference video frame P2 is missing or undecodable, and the video decoder therefore generates a replacement for the missing or undecodable P-frame P2 according to a closest decodable video frame in display order, e.g., B-frame B3, so that the video decoder can carry out the predictive decoding for dependent video frames of P-frame P2, reducing the number of undecodable video frames and preventing serious degradation of viewing quality. The replacement for P-frame P2 can be either a reference frame (e.g., P1) or a non-reference frame that only has forward reference (e.g., B3).
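  • A minimal sketch of this closest-frame substitution (steps S1008/S1012) follows; the (display_order, decodable, name) bookkeeping is an assumed structure used only for illustration.

```python
# Sketch of picking the closest decodable frame in display order as the replacement
# (assumed bookkeeping: one (display_order, decodable, name) tuple per frame).

def closest_decodable(target_order, frames):
    """Returns the name of the decodable frame nearest to target_order, or None."""
    candidates = [(abs(order - target_order), name)
                  for order, decodable, name in frames if decodable]
    return min(candidates)[1] if candidates else None

# GOP of FIG. 11 in display order; P2 (order 6) is missing or undecodable.
frames = [(0, True, "I0"), (1, True, "B0"), (2, True, "B1"), (3, True, "P1"),
          (4, True, "B2"), (5, True, "B3"), (6, False, "P2")]
print(closest_decodable(6, frames))   # -> 'B3', the nearest decodable frame
```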
  • FIG. 12 is a flowchart of another exemplary video decoding method to substitute for a missing video frame or an undecodable video frame according to the invention.
  • Upon start of the video decoding method 12 (S1200), the video decoder retrieves a video bitstream from a video buffer to locate the start of a picture (S1202), and parses high-level syntax elements (S1204). In some embodiments, the video decoder checks and corrects the high-level syntax error before step S1204. The detailed description of the detection and correction of the high-level syntax error is provided in the embodiment in FIG. 5 and is not repeated here. In step S1206, the video decoder determines whether there is a missing video frame in the group of pictures; if so, it generates a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame, and replaces the missing video frame with the replacement video frame (S1208); otherwise it continues to determine whether the current video frame is decodable (S1210). In some embodiments, the current video frame is determined to be undecodable if a reference frame of the current video frame is missing or a picture-layer syntax error is detected. If the current video frame is undecodable, the video decoder generates a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame, and replaces the undecodable video frame with the replacement video frame (S1212). When no missing or undecodable video frame is detected, the video decoder carries out the video predictive decoding on the current video frame to recover video data DV, performs relevant data processing thereon (S1214), and exits the video decoding process for the current video frame in step S1216.
  • In step S1206, the missing picture may be identified according to the detection method disclosed in FIG. 9, i.e., determining whether there is a missing video frame using the PTS, the DTS, and the temporal reference information. In step S1212, the video frame is undecodable when the reference frame is missing, or when part or all of the picture data is corrupted.
  • In steps S1208 and S1212, the replacement video frames are generated according to the motion vectors of the decodable video frame, the temporal distance (time difference in display order) between the current video frame and the decodable video frame, and the frame type of the missing or undecodable video frame. FIG. 13 is a flowchart of an exemplary method for generating the replacement video frame, incorporated in step S1208 or S1212 in FIG. 12.
  • In step S1300, the video decoder detects the missing or undecodable video frame as in steps S1206 and S1210, and determines whether the missing or undecodable video frame is a reference video frame (e.g. I-frame and P-frame) in step S1302. If so, the video decoder then carries out motion vector (MV) extrapolation to generate the replacement video frame (S1304), and if not, the video decoder performs motion vector interpolation (pseudo direct mode) to generate the replacement video frame (S1306). After the replacement video frame is produced, the video decoder replaces the missing video frame or the undecodable video frame with the replacement video frame and the generation method 13 is exited (S1308).
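  • The branch in FIG. 13 can be expressed as the small dispatch below; the two generator callables stand for the extrapolation and pseudo-direct routines sketched later, and are passed in as parameters purely for illustration.

```python
# Sketch of the FIG. 13 branch: reference frames (I/P) use MV extrapolation,
# non-reference frames (B) use MV interpolation (pseudo direct mode). The generator
# callables are illustrative stand-ins for the routines sketched further below.

def generate_replacement(frame_type, mv_extrapolate, mv_interpolate):
    """frame_type: 'I', 'P', or 'B'; each callable builds a replacement frame."""
    if frame_type in ("I", "P"):      # S1302: the lost frame is a reference frame
        return mv_extrapolate()       # S1304: MV extrapolation
    return mv_interpolate()           # S1306: MV interpolation (pseudo direct mode)

print(generate_replacement("P", lambda: "extrapolated", lambda: "interpolated"))  # -> extrapolated
```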
  • FIG. 14 is a flowchart of a detailed MV extrapolation method incorporated in step S1304 in FIG. 13. Upon start of the MV extrapolation method 14, the video decoder first determines whether the missing or undecodable video frame has a dependent non-reference frame (e.g., a B-frame), and goes to step S1404 if so, and otherwise to step S1406.
  • In step S1404, when the missing video frame or the undecodable video frame is a reference frame with a dependent B-frame, the replacement video frame comprises only forward motion vectors. The replacement video frame is generated by extrapolating motion vectors of the preceding decodable dependent B-frame according to the temporal distances between the preceding decodable dependent B-frame and the current video frame, and between the other preceding decodable reference video frame and the current video frame, to generate the forward motion vectors of the replacement video frames. The preceding decodable dependent B-frame precedes the current video frame in display order.
  • In step S1406, when the missing video frame or the undecodable video frame is a reference frame without a dependent B-frame, the replacement video frame comprises only forward motion vectors. The replacement video frame is generated by extrapolating motion vectors of a preceding decodable reference frame according to the temporal distances between the preceding decodable reference frame, another preceding decodable frame, and the current video frame, to generate the forward motion vectors of the replacement video frames. The preceding decodable frames precede the current video frame in display order.
  • After the replacement video frame is reconstructed, MV extrapolation method 14 exits in step S1408.
  • FIG. 15 a illustrates an embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14. In FIG. 15 a, a reference frame P2 is missing or undecodable (S1404), and the video decoder generates the replacement video frame by extrapolating motion vectors of a preceding decodable dependent B-frame B3 according to the temporal distances between the preceding decodable dependent B-frame B3 and the current video frame, and between the other preceding decodable video frame P1 and the current video frame, to generate the forward motion vectors of the replacement video frame. FIG. 15 b illustrates an embodiment for generating a forward motion vector for a P-frame according to the embodiment in FIG. 15 a. The video decoder uses only the forward motion vector MVF of the B-frame B3 and the temporal distances between frame B3 and frame P2, and between frame P1 and frame P2, to perform the MV extrapolation for generating the forward motion vector MVF of the replacement video frame, as shown for the macroblock (MB) in frame P2 in the right-hand illustration of FIG. 15 b. If the resulting forward motion vector MVF of the replacement video frame exceeds the frame boundary, an MV clipping technique can be applied to the resulting forward motion vector MVF.
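  • A minimal numeric sketch of this extrapolation and clipping follows; the linear scaling of B3's forward motion vector and the per-macroblock clipping bounds are assumptions made for illustration, not a prescribed implementation.

```python
# Sketch of the FIG. 15 a/15 b extrapolation: B3's forward MV (referencing P1) is
# scaled to span the longer P1->P2 interval, then clipped so the 16x16 reference
# block stays inside the picture. Scaling and clipping rules are illustrative.

def extrapolate_forward_mv(mv_b3, d_p1_p2, d_b3_p2, frame_w, frame_h, mb_x, mb_y):
    """mv_b3: (dx, dy) forward MV of B3 in pixels; d_*: temporal distances in frame
    periods (display order); (mb_x, mb_y): top-left corner of the macroblock in P2."""
    scale = d_p1_p2 / (d_p1_p2 - d_b3_p2)          # stretch P1->B3 motion to P1->P2
    dx, dy = mv_b3[0] * scale, mv_b3[1] * scale
    dx = max(-mb_x, min(dx, frame_w - 16 - mb_x))  # MV clipping against the frame boundary
    dy = max(-mb_y, min(dy, frame_h - 16 - mb_y))
    return dx, dy

# B3's forward MV is (-8, 4); P1->P2 spans 3 frame periods, B3->P2 spans 1.
print(extrapolate_forward_mv((-8, 4), 3, 1, 720, 480, 0, 64))   # -> (0, 6.0): x clipped at the left edge
```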
  • FIG. 16 illustrates another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14. In FIG. 16, a reference frame P2 is missing or undecodable (S1404), and the video decoder generates the replacement video frame by extrapolating motion vectors of the preceding decodable B-frame B2 according to the temporal distances between the preceding decodable B-frame B2 and the current video frame P2, and between the other preceding decodable P-frame P1 and the preceding decodable B-frame B2, to generate the forward motion vectors of the replacement video frames. In this embodiment, the B-frame B2 backward refers to the missing or undecodable frame P2, and a backward motion vector MVB of B2 is properly scaled according to the temporal distance; for example, MVB of B2 can be just negated and used to reference B1, or it can be halved, negated and then used to reference P1.
  • FIG. 17 illustrates yet another embodiment for generating a replacement video frame according to the MV extrapolation method in FIG. 14. In FIG. 17, a reference frame P3 is missing or undecodable (S1406), thus the video decoder generates the replacement video frame by extrapolating motion vectors of a preceding decodable reference frame P2 according to the temporal distance between the preceding decodable reference frame P2 and another preceding decodable reference frame P1, and the temporal distance between the preceding decodable reference frame P2 and the current video frame, to generate the forward motion vectors of the replacement video frame.
  • FIG. 18 a illustrates an embodiment for generating a replacement video frame according to the pseudo-direct method S1306 in FIG. 13. When the missing video frame or the undecodable video frame is a bidirectional frame with decodable reference frames, the replacement video frame comprises forward and backward motion vectors. The replacement video frame is generated by proportionating the (forward) motion vectors of the backward decodable reference frame of the missing or undecodable bidirectional frame according to the temporal distance TF between the preceding decodable reference frame and the undecodable video frame, and the temporal distance TB between the succeeding decodable reference frame and the undecodable video frame, to generate the forward and backward motion vectors of the replacement video frame. FIG. 18 a illustrates that when a bidirectional frame B3 is missing or undecodable, the video decoder computes the forward motion vector MVF(B3) and the backward motion vector MVB(B3) according to the following equation:
  • $$MV_F(B_3) = \frac{T_F}{T_F + T_B} \cdot MV_F(P_2), \qquad MV_B(B_3) = \frac{-T_B}{T_F + T_B} \cdot MV_F(P_2) \qquad (1)$$
  • where the preceding decodable reference frame is frame P1, the succeeding decodable reference frame is frame P2, the temporal distance TF is a time difference between the preceding decodable reference frame P1 and the undecodable video frame B3, and the temporal distance TB is a time difference between the succeeding decodable reference frame P2 and the undecodable video frame B3.
  • FIG. 18 b illustrates an embodiment for generating a forward motion vector for a B-frame according to the method in FIG. 13 and equation (1). The replacement video frame for video frame B3 comprises the forward motion vector MVF and the backward motion vector MVB, computed from the forward motion vector of the succeeding reference frame P2 according to equation (1).
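  • A numeric sketch of equation (1) follows; the example motion vector and temporal distances are illustrative values only.

```python
# Sketch of the pseudo direct mode of FIG. 18 a/18 b: the forward MV of the
# succeeding reference frame P2 is split in proportion to the temporal distances
# T_F and T_B to form the forward and backward MVs of the replacement B-frame.

def pseudo_direct_mvs(mv_f_p2, t_f, t_b):
    """mv_f_p2: (dx, dy) forward MV of the succeeding reference frame P2; t_f/t_b:
    temporal distances of the lost B-frame to the preceding and succeeding
    reference frames. Returns (forward MV, backward MV) per equation (1)."""
    fwd = tuple(c * t_f / (t_f + t_b) for c in mv_f_p2)
    bwd = tuple(-c * t_b / (t_f + t_b) for c in mv_f_p2)
    return fwd, bwd

# P2's forward MV (referencing P1) is (9, -6); B3 lies 2 frame periods after P1
# (T_F = 2) and 1 period before P2 (T_B = 1).
print(pseudo_direct_mvs((9, -6), 2, 1))   # -> ((6.0, -4.0), (-3.0, 2.0))
```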
  • While the invention has been described by way of examples and in terms of preferred embodiments, it is to be understood that the invention is not limited thereto. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (20)

1. A video decoding method, comprising:
providing a historical syntax element of a previous video frame;
receiving a current video frame to determine a current syntax element therein;
determining whether a high-level syntax error is present in the current syntax element; and
upon detection of the high-level syntax error, determining a replacement syntax element according to the historical syntax element to replace the current syntax element,
wherein the high-level syntax error is a syntax error above a Macroblock layer.
2. The video decoding method of claim 1, wherein the high-level syntax error is a picture-layer syntax error.
3. The video decoding method of claim 1, wherein the previous video frame precedes the current video frame in video decoding order.
4. The video decoding method of claim 1, wherein the determination of the replacement syntax element comprises using the historical syntax element to estimate the correct syntax element and assigning the estimated result to be the replacement syntax element.
5. The video decoding method of claim 1, further comprising:
decoding the current video frame to detect whether there is a missing video frame or undecodable video frame; and
upon detection of the missing video frame or undecodable video frame, generating a replacement video frame to substitute for the missing or the undecodable video frame.
6. The video decoding method of claim 5, wherein the generation of the replacement video frame comprises assigning a closest decodable video frame in a display order to be the replacement video frame.
7. The video decoding method of claim 5, wherein the generation of the replacement video frame comprises generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame.
8. A video decoding method capable of detecting a missing video frame, comprising:
a demultiplexer receiving a Transport Stream to recover video Packetized Elementary Stream (PES) to determine a presentation time stamp (PTS) and a decoding time stamp (DTS) in a PES header of the PES;
a decoder retrieving a video frame from the video PES to determine temporal reference of the video frame; and
a controller receiving the PTS, the DTS, and the temporal reference to determine whether there is a missing video frame.
9. The method of claim 8, further comprising:
upon detection of the missing video frame, generating a replacement video frame to substitute for the missing video frame.
10. The method of claim 9, wherein the generation of the replacement video frame comprises assigning a closest decodable video frame in a display order to be the replacement video frame.
11. The method of claim 9, wherein the generation of the replacement video frame comprises generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame.
12. A video decoding method, comprising:
decoding a current video frame to detect a missing video frame or an undecodable video frame; and
in response to detecting the missing video frame or undecodable video frame, replacing the missing video frame or undecodable video frame with a closest decodable video frame in display order.
13. The video decoding method of claim 12, further comprising:
providing a historical syntax element of a previous video frame;
receiving the current video frame to determine a current syntax element therein;
determining whether a high-level syntax error is present in the current syntax element; and
upon detection of the high-level syntax error, determining a replaced syntax element according to the historical syntax element to replace the current syntax element,
wherein the high-level syntax error is a syntax error above a Macroblock layer.
14. The video decoding method of claim 12, further comprising:
a demultiplexer receiving a Transport Stream to recover video Packetized Elementary Stream (PES) to determine a presentation time stamp (PTS) and a decoding time stamp (DTS) in a PES header of the PES;
a decoder retrieving the current video frame from the video PES to determine temporal reference of the video frame; and
a controller receiving the PTS, the DTS, and the temporal reference to determine whether there is a missing video frame.
15. A video decoding method, comprising:
decoding a current video frame to detect a missing video frame or an undecodable video frame;
upon detection of the missing video frame or undecodable video frame, generating a replacement video frame according to motion vectors of a decodable video frame and a temporal distance between the current video frame and the decodable video frame; and
in response to detecting the missing video frame or undecodable video frame, replacing the missing video frame or undecodable video frame with the replacement video frame.
16. The video decoding method of claim 15, wherein when the missing video frame or undecodable video frame is a reference frame with a dependent B-frame, the replacement video frame comprises only forward motion vectors, the decodable video frame is a preceding non-reference frame in display order, and the generation comprises extrapolating motion vectors of the preceding non-reference frame and another preceding video frame according to the temporal distance to generate the forward motion vectors of the replacement video frame.
17. The video decoding method of claim 15, wherein when the missing video frame or undecodable video frame is a reference frame with reference to a decodable reference frame, the replacement video frame comprises only forward motion vectors, the decodable video frame is a preceding decodable reference frame in display order, and the generation comprises extrapolating motion vectors of the preceding decodable reference frame according to the temporal distance to generate the forward motion vectors of the replacement video frames.
18. The video decoding method of claim 15, wherein when the missing video frame or undecodable video frame is a non-reference frame with a decodable reference frame, the replacement video frame comprises forward and backward motion vectors, and the generation comprises proportionating motion vectors of backward decodable reference frame of the missing video frame or undecodable non-reference frame according to the temporal distances between the preceding decodable reference frame and the undecodable video frame and between the succeeding decodable reference frame and the undecodable video frame, to generate the forward and backward motion vectors of the replacement video frame.
19. The video decoding method of claim 15, further comprising:
providing a historical syntax element of a previous video frame;
receiving the current video frame to determine a current syntax element therein;
determining whether a high-level syntax error is present in the current syntax element; and
upon detection of the high-level syntax error, determining a replaced syntax element according to the historical syntax element to replace the current syntax element,
wherein the high-level syntax error is a syntax error above a Macroblock layer.
20. The video decoding method of claim 15, further comprising:
a demultiplexer receiving a Transport Stream to recover video Packetized Elementary Stream (PES) to determine a presentation time stamp (PTS) and a decoding time stamp (DTS) in a PES header of the PES;
a decoder retrieving the current video frame from the video PES to determine temporal reference of the video frame; and
a controller receiving the PTS, the DTS, and the temporal reference to determine whether there is a missing video frame.
US12/340,872 2008-12-22 2008-12-22 Video decoding method Abandoned US20100158130A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US12/340,872 US20100158130A1 (en) 2008-12-22 2008-12-22 Video decoding method
TW098107807A TWI495344B (en) 2008-12-22 2009-03-11 Video decoding method
CN2009101295216A CN101765016B (en) 2008-12-22 2009-03-20 Video decoding method
CN201210393821.7A CN102905139B (en) 2008-12-22 2009-03-20 Video decoding method
US13/343,593 US9264729B2 (en) 2008-12-22 2012-01-04 Video decoding method/device of detecting a missing video frame
US14/991,830 US10075726B2 (en) 2008-12-22 2016-01-08 Video decoding method/device of detecting a missing video frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/340,872 US20100158130A1 (en) 2008-12-22 2008-12-22 Video decoding method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/343,593 Division US9264729B2 (en) 2008-12-22 2012-01-04 Video decoding method/device of detecting a missing video frame

Publications (1)

Publication Number Publication Date
US20100158130A1 true US20100158130A1 (en) 2010-06-24

Family

ID=42266059

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/340,872 Abandoned US20100158130A1 (en) 2008-12-22 2008-12-22 Video decoding method
US13/343,593 Expired - Fee Related US9264729B2 (en) 2008-12-22 2012-01-04 Video decoding method/device of detecting a missing video frame
US14/991,830 Active 2029-05-23 US10075726B2 (en) 2008-12-22 2016-01-08 Video decoding method/device of detecting a missing video frame

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/343,593 Expired - Fee Related US9264729B2 (en) 2008-12-22 2012-01-04 Video decoding method/device of detecting a missing video frame
US14/991,830 Active 2029-05-23 US10075726B2 (en) 2008-12-22 2016-01-08 Video decoding method/device of detecting a missing video frame

Country Status (3)

Country Link
US (3) US20100158130A1 (en)
CN (2) CN101765016B (en)
TW (1) TWI495344B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110037590A (en) * 2009-10-07 2011-04-13 삼성전자주식회사 P2p network system and data transmission and reception method thereof
TWI450538B (en) * 2011-03-22 2014-08-21 System and method for decrypting multi-media stream data
CN110401848A (en) * 2018-04-24 2019-11-01 北京视联动力国际信息技术有限公司 A kind of video broadcasting method and device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5875192A (en) 1996-12-12 1999-02-23 Pmc-Sierra Ltd. ATM inverse multiplexing system
KR100247978B1 (en) * 1997-08-08 2000-03-15 윤종용 Picture decoding synchronizing circuit and method therefor
US6662329B1 (en) * 2000-03-23 2003-12-09 International Business Machines Corporation Processing errors in MPEG data as it is sent to a fixed storage device
GB2362531A (en) * 2000-05-15 2001-11-21 Nokia Mobile Phones Ltd Indicating the temporal order of reference frames in a video sequence
JP2004336405A (en) * 2003-05-08 2004-11-25 Ricoh Co Ltd Dynamic image processing apparatus, program, storage medium, and dynamic image processing method
US7302385B2 (en) * 2003-07-07 2007-11-27 Electronics And Telecommunications Research Institute Speech restoration system and method for concealing packet losses
WO2005046167A1 (en) * 2003-11-07 2005-05-19 Matsushita Electric Industrial Co., Ltd. System and method for time based digital content access
US9560367B2 (en) * 2004-09-03 2017-01-31 Nokia Technologies Oy Parameter set and picture header in video coding
CN101288315B (en) * 2005-07-25 2012-05-30 汤姆森特许公司 Method and apparatus for the concealment of missing video frames
JP4730183B2 (en) 2006-04-17 2011-07-20 株式会社日立製作所 Video display device
JP2008017351A (en) * 2006-07-07 2008-01-24 Toshiba Corp Packet stream receiver
US8958486B2 (en) * 2007-07-31 2015-02-17 Cisco Technology, Inc. Simultaneous processing of media and redundancy streams for mitigating impairments
CN101188771B (en) * 2007-12-18 2010-11-10 北京中星微电子有限公司 Method and device for detecting and eliminating video decoding error

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050226336A1 (en) * 1996-03-18 2005-10-13 Renesas Technology Corp. Method of coding and decoding image
US5784527A (en) * 1996-03-22 1998-07-21 Cirrus Logic, Inc. System and method for error handling during playback of an audio/video data stream
US6052415A (en) * 1997-08-26 2000-04-18 International Business Machines Corporation Early error detection within an MPEG decoder
US7006576B1 (en) * 1999-07-19 2006-02-28 Nokia Mobile Phones Limited Video coding
US6990151B2 (en) * 2001-03-05 2006-01-24 Intervideo, Inc. Systems and methods for enhanced error concealment in a video decoder
US20030053546A1 (en) * 2001-07-10 2003-03-20 Motorola, Inc. Method for the detection and recovery of errors in the frame overhead of digital video decoding systems
US20050166123A1 (en) * 2004-01-20 2005-07-28 Sony Corporation Transmission/reception system, transmitter and transmitting and method, receiver and receiving method, recording medium, and program
US20090213938A1 (en) * 2008-02-26 2009-08-27 Qualcomm Incorporated Video decoder error handling

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100231797A1 (en) * 2009-03-10 2010-09-16 Broadcom Corporation Video transition assisted error recovery for video data delivery
US10405048B2 (en) * 2009-11-30 2019-09-03 Time Warner Cable Enterprises Llc Methods and apparatus for supporting VOD requests in a system with hierarchical content stores
US20160261920A1 (en) * 2009-11-30 2016-09-08 Time Warner Cable Enterprises Llc Methods and apparatus for supporting vod requests in a system with hierarchical content stores
KR101539312B1 (en) * 2011-05-27 2015-07-24 미디어텍 인크. Method and apparatus for line buffer reduction for video processing
US9762918B2 (en) 2011-05-27 2017-09-12 Hfi Innovation Inc. Method and apparatus for line buffer reduction for video processing
US9866848B2 (en) 2011-05-27 2018-01-09 Hfi Innovation Inc. Method and apparatus for line buffer reduction for video processing
US9986247B2 (en) 2011-05-27 2018-05-29 Hfi Innovation Inc. Method and apparatus for line buffer reduction for video processing
CN103563380A (en) * 2011-05-27 2014-02-05 联发科技股份有限公司 Method and apparatus for line buffer reduction for video processing
US9736489B2 (en) 2011-09-17 2017-08-15 Qualcomm Incorporated Motion vector determination for video coding
US20140161198A1 (en) * 2012-12-12 2014-06-12 Lsi Corporation Multi-layer approach for frame-missing concealment in a video decoder
US9510022B2 (en) * 2012-12-12 2016-11-29 Intel Corporation Multi-layer approach for frame-missing concealment in a video decoder
US20150264297A1 (en) * 2014-03-11 2015-09-17 Taeko Ishizu Information processing apparatus, information processing system, and storage medium
US20160227257A1 (en) * 2015-01-31 2016-08-04 Yaniv Frishman REPLAYING OLD PACKETS FOR CONCEALING VIDEO DECODING ERRORS and VIDEO DECODING LATENCY ADJUSTMENT BASED ON WIRELESS LINK CONDITIONS
US10158889B2 (en) * 2015-01-31 2018-12-18 Intel Corporation Replaying old packets for concealing video decoding errors and video decoding latency adjustment based on wireless link conditions
US10714101B2 (en) * 2017-03-20 2020-07-14 Qualcomm Incorporated Target sample generation

Also Published As

Publication number Publication date
US10075726B2 (en) 2018-09-11
TWI495344B (en) 2015-08-01
CN101765016B (en) 2012-12-12
CN102905139A (en) 2013-01-30
US9264729B2 (en) 2016-02-16
TW201026078A (en) 2010-07-01
CN102905139B (en) 2016-04-13
US20120099654A1 (en) 2012-04-26
US20160127740A1 (en) 2016-05-05
CN101765016A (en) 2010-06-30

Similar Documents

Publication Publication Date Title
US10075726B2 (en) Video decoding method/device of detecting a missing video frame
JP4575357B2 (en) Fast channel change for staggercast in robust mode
US8761162B2 (en) Systems and methods for applications using channel switch frames
US8958486B2 (en) Simultaneous processing of media and redundancy streams for mitigating impairments
US8804845B2 (en) Non-enhancing media redundancy coding for mitigating transmission impairments
US8229983B2 (en) Channel switch frame
US7839930B2 (en) Signaling valid entry points in a video stream
EP1811788A2 (en) Picture encoding method and apparatus and picture decoding method and apparatus
KR100967731B1 (en) Channel switch frame
JP2008010997A (en) Information processing apparatus and method, and semiconductor integrated circuit

Legal Events

Date Code Title Description
AS Assignment

Owner name: MEDIATEK INC.,TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, YING-JUI;WU, CHUNG-BIN;CHUANG, YA-TING;REEL/FRAME:022013/0984

Effective date: 20081216

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: XUESHAN TECHNOLOGIES INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEDIATEK INC.;REEL/FRAME:055486/0870

Effective date: 20201223