CA2650663A1 - Dvc delta commands - Google Patents

Dvc delta commands

Info

Publication number
CA2650663A1
CA2650663A1 (application number CA002650663A)
Authority
CA
Canada
Prior art keywords
pixel
bit
color
current
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002650663A
Other languages
French (fr)
Inventor
Gary W. Shelton
William Lazenby
Michael Potter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vertiv IT Systems Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2650663A1 publication Critical patent/CA2650663A1/en
Abandoned legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/93: Run-length coding
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a pixel
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component

Abstract

A video compression system compresses video frames comprising pixels defined by n-bit color values. The encoder of the video compression system determines the difference between a current pixel value and one of a plurality of reference pixel values. The encoder sends this difference (delta) value to the decoder. The decoder determines the current pixel value by adjusting the reference pixel's color value by the delta value.

Description

DVC DELTA COMMANDS

CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. provisional application number 60/795,577, the entire contents of which are incorporated herein by reference.
[0002] This application is also related to the following co-pending U.S. patent application, which is commonly owned with the present application and the contents of which are incorporated herein by reference:
1. U.S. Application No. 10/260,534, entitled "Video Compression System," filed on October 1, 2002.

FIELD OF THE DISCLOSURE
[0003] This disclosure relates to a computer video compression system.
INTRODUCTION
[0004] A Video Compression Unit is disclosed herein that uses a compression scheme based on the directional algorithm concepts previously disclosed in Application No. 10/260,534. That algorithm, so-called "DVC encoding," is employed herein with some newly added extensions. The present application reduces the bandwidth used in transmitting a video frame buffer across an extension link. The contents of U.S. Application No. 10/260,534 are assumed to be known to the reader. Products employing the "DVC encoding" of U.S. Application No. 10/260,534 have been commercialized and should be considered prior art.
[0005] One of the aspects of the "DVC encoding" algorithm is that each side of the link always has a complete version of the previous frame to use as a reference.
This allows each pixel in subsequent frames to be defined by one of the following commands:
1. No change from pixel in previous frame (NO CHANGE)
2. Same as pixel in line above (COPY ABOVE)
3. Same as pixel to the left (COPY LEFT)
4. Series of pixels from a preceding known subset (MAKE SERIES)
5. New pixel (NEW_PIXEL)
[0006] Only the NEW PIXEL option requires that a complete pixel be sent across the link.
The first three require only that a short command message be sent indicating which type of encoding is used and how many consecutive pixels are encoded according to that encoding type. During encoding, the pixel data for both the current frame being compressed and, if applicable, the previous frame are read from memory. The current pixel is then compared against three reference pixels: PreviousPixel (akin to COPY LEFT), PreviousLine (akin to COPY ABOVE), and PreviousFrame (akin to NO CHANGE).
For each of the three directional commands, if the command is active and the associated comparison matches, then the command remains active and the prospective set increases by one more pixel. When all directional commands have terminated, due to either failures or end conditions, then the last active command is chosen as the encoding for that set of pixels.
[0007] In the event of a tie, then priority can be assigned in the following order:
NO CHANGE, COPY LEFT, COPY ABOVE, for example. This is the order historically used by previous DVC-encoding products, where it was arranged in terms of ease of decoding. However, other orders can be used. With double or triple buffering on each end, all three commands require similar effort by the decoder.
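The run-selection logic of paragraphs [0006] and [0007] can be illustrated with the following Python sketch: all directional comparisons run for each pixel, the prospective run grows while at least one command stays active, and ties are broken in the historical priority order. This is a behavioral model for exposition only; the function and variable names are hypothetical and it is not a description of the claimed hardware.

```python
# Tie-break priority from the text: NO CHANGE, then COPY LEFT, then COPY ABOVE.
PRIORITY = ["NO_CHANGE", "COPY_LEFT", "COPY_ABOVE"]

def encode_line(current, prev_line, prev_frame_line):
    """Encode one line of pixels into (command, count) runs, falling back
    to NEW_PIXEL when no directional comparison matches."""
    runs = []
    i = 0

    def matches(j):
        # Which directional comparisons are true at pixel j?
        m = set()
        if current[j] == prev_frame_line[j]:
            m.add("NO_CHANGE")
        if j > 0 and current[j] == current[j - 1]:
            m.add("COPY_LEFT")
        if current[j] == prev_line[j]:
            m.add("COPY_ABOVE")
        return m

    while i < len(current):
        active = matches(i)
        if not active:
            # No directional command applies: emit the pixel itself.
            runs.append(("NEW_PIXEL", current[i]))
            i += 1
            continue
        # Extend the run while at least one command remains active.
        count = 1
        while i + count < len(current):
            nxt = active & matches(i + count)
            if not nxt:
                break
            active = nxt
            count += 1
        # When all commands have terminated, the last active command wins,
        # with ties resolved by the priority order above.
        cmd = next(c for c in PRIORITY if c in active)
        runs.append((cmd, count))
        i += count
    return runs
```

For example, a line [5, 5, 5, 9] against a previous frame line [5, 2, 2, 9] yields a NO CHANGE run, a COPY LEFT run, then another NO CHANGE run.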
[0008] A single copy of the previous pixel (PreviousPixel) is kept for doing the COPY LEFT comparison, and a full line of pixel data (PreviousLine) is kept for doing COPY ABOVE comparisons. PreviousFrame pixels are supplied by the memory subsystem along with the CurrentPixel.
[0009] Because NEW PIXEL is the least efficient compression method, it is least favored and used only when the other compression types do not apply to a current pixel. Thus, a NEW PIXEL determination always terminates a pixel encoding stream and sends the preceding command string for transmission and decoding. Then, NEW PIXEL commands are issued on a pixel-by-pixel basis until another encoding type will again apply to a current pixel.
[0010] The MAKE SERIES encoding type takes advantage of a sequence of pixels all being from a subset of preceding unique pixel colors. The standard mode is to use a two-color subset, which is ideal for text windows. This can be expanded to a four-color subset or more (powers of two), depending upon the number of series comparators and registers the hardware implementation chooses to incorporate. A series comparator is required in the hardware for each pixel in the subset. As each pixel is processed (read from memory), it is compared against each of the pixels in the current series subset registers.
All the comparisons are done in parallel, with the results being sent to the command process. As long as any one (and it should be no more than one) of the subset comparators is true, then the Series command is valid and can continue.
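In software the parallel subset comparison reduces to a membership test per pixel; in hardware each test is a dedicated comparator running in parallel. A hypothetical sketch for a two-color subset, where the returned bits are the per-pixel data bits of a MAKE SERIES command:

```python
def make_series_bits(pixels, palette):
    """Return per-pixel palette indices if every pixel is in the current
    subset (e.g. a two-color palette), else None, meaning the series
    command would terminate at the first non-matching pixel."""
    bits = []
    for p in pixels:
        # Hardware compares p against every subset register at once;
        # here we simply look it up in the palette list.
        if p not in palette:
            return None
        bits.append(palette.index(p))
    return bits
```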
[0011] These first five command types are referred to as the original DVC-based commands, which are described in greater detail in Application No. 10/260,534.
[0012] This disclosure uses the original DVC-based commands in conjunction with more complex encoding commands. The more complex encoding commands are as follows:
6. Delta from the same pixel in the previous frame (DELTA NC)
7. Delta from the pixel immediately above (DELTA CA)
8. Delta from the pixel immediately preceding (DELTA CL)
[0013] Delta commands are an alternative to sending the full-precision color, instead sending a much smaller value which is the difference (delta) between the real color and one of the neighboring colors used as a reference. In the case of 24-bit color, the full-precision color is 24 bits, whereas the delta is either 4 bits or 12 bits.
There are several different types of delta commands that could be implemented. Some include a unique delta value for each color channel, while others contain a uniform delta that is applied to all color channels. The size (in terms of bits) of each delta can also vary according to the configuration.
[0014] The video compression unit can also use "Color reduction" to reduce the bandwidth usage. Color reduction removes some number of least significant bits from each color channel, thus increasing the likelihood that neighboring pixels will appear "identical" to the comparators, and also reducing the size of data required to send an uncompressed pixel.
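Color reduction as described (removing least significant bits from each channel) might look like the following sketch for a packed 24-bit RGB pixel; the function name and packing order are illustrative assumptions:

```python
def reduce_color(rgb24, bits_per_channel):
    """Keep only the `bits_per_channel` most significant bits of each
    8-bit channel of a packed 24-bit pixel, zeroing the dropped LSBs so
    that near-identical neighbors compare as equal."""
    drop = 8 - bits_per_channel          # LSBs removed per channel
    mask = (0xFF << drop) & 0xFF         # e.g. 5 bits kept -> 0b11111000
    out = 0
    for shift in (16, 8, 0):             # assumed R, G, B packing
        ch = (rgb24 >> shift) & 0xFF
        out |= (ch & mask) << shift
    return out
```

With 5 bits kept per channel, `0x123456` reduces to `0x103050`, and pixels differing only in the dropped bits become identical to the comparators.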

BRIEF DESCRIPTION OF THE PRESENTLY PREFERRED EXEMPLARY EMBODIMENTS
[0015] The following description, given with respect to the attached drawings, may be better understood with reference to the non-limiting examples of the drawing, wherein the drawings show:

[0016] Figure 1: An exemplary Video Compression System;
[0017] Figure 2: An exemplary Input Message Cell Format;
[0018] Figure 3: An exemplary Read Command Cell Format;
[0019] Figure 4: An exemplary Encoding Header Cell Format;
[0020] Figure 5: Exemplary 8-bit Cells for the original DVC commands;
[0021] Figure 6: Exemplary 8-bit Cells for the Delta Commands;
[0022] Figure 7: Exemplary Delta Modes;
[0023] Figure 8: Cells incorporating exemplary Delta Modes;
[0024] Figure 9: An exemplary Comparison block;
[0025] Figure 10: Exemplary Command Cell Formats and Table of Commands;
[0026] Figure 11: Exemplary Color Depth Command Cell and Color Depth Mode Table;
[0027] Figure 12: Exemplary Command Cells;
[0028] Figure 13: Exemplary multi-byte Command Cells;
[0029] Figure 14: An exemplary Command Data Cell; and
[0030] Figure 15: Exemplary table of Core Clock Rates.

THE PRESENTLY PREFERRED EXEMPLARY EMBODIMENTS
[0031] Figure 1 shows an exemplary Video Compression System 100 of a video system.
[0032] The Video Compression System 100 includes, in part, Command Process 102 and an Output Process 103. The Command Process 102 is the main process that responds to input from the Digital Video Input (DVI) Unit 104 and then starts processing a frame by making a request to the Memory Controller 106. Output Process 103 generates the output for the rest of the system. The Command Process 102 comprises a Message Interface 105 for receiving messages from the DVI Unit 104 and a Memory Interface 107 for communicating with Memory Controller 106.
[0033] The Message Interface 105 of the Command Process 102 initially receives an Input Message 101 from the DVI Unit 104. The message is an 8-bit FIFO message written by the DVI Unit 104. The most common messages fit into a single byte, while the rather infrequent timing values take multiple bytes and therefore multiple clocks.

[0034] The basic message header (the first byte of any message) of the Input Message 101 is shown in Fig. 2. The message header has a four-bit Type field (7:4) and a four-bit Data field (3:0). The Type field specifies one of the following message types: Start Frame (0001), End Frame (0010), Horizontal Timing (0100), Vertical Timing (0101), and Pixel Clock Rate (0110). It should be noted that although five message types are currently defined, more message types can be defined using the "Reserved" bit values.
[0035] The Data field of the header provides message information in accordance with the type of message. If the message type is a Start Frame or End Frame message, the message is contained within the Data field of the header.
[0036] Video Timing messages (Horizontal Timing, Vertical Timing, or Pixel Clock Rate) require 5 or 6 bytes of data after the message header. For these messages, the Data field of the header specifies the number of bytes to follow. The data structures for each type of Video Timing message are shown in Figure 2. The bytes are transmitted least significant byte and bit first; using the Horizontal Timing as a reference, bit 0 of ActiveData maps to bit 0 of byte 0, ActiveData bit 7 maps to bit 7 of byte 0, and ActiveData bit 8 maps to bit 0 of byte 1.
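The least-significant-byte-first ordering just described can be sketched as follows (hypothetical helper name):

```python
def pack_le(value, nbytes):
    """Pack a timing value least significant byte first: bit 0 of the
    value maps to bit 0 of byte 0, bit 8 maps to bit 0 of byte 1, etc."""
    return [(value >> (8 * i)) & 0xFF for i in range(nbytes)]
```

So a 16-bit value `0x1234` is transmitted as byte 0 = `0x34`, byte 1 = `0x12`.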
[0037] The Memory Interface 107 handles two types of data: a read command 108 sent to memory and the subsequently returned data (pixel data). The Command Process uses a read command 108 to send a request to start reading pixel data from the current and previous frames and specify the size of the frame via width and height in pixels. The read command 108 is shown in Fig. 3 and contains Command type, FrameWidth, FrameHeight and two frame ID fields (one for the identification of the oldest previous frame to access and the other for the identification of the most current frame to access).
These fields specify which pixels should be sent to the Command Process 102.
[0038] Once the Memory Controller 106 has received a read command 108 identifying pixels, the Memory Controller 106 should start filling the current data FIFO
110 and previous data FIFO 112 with pixel data from the two source frames. The data returned is assumed to be full 24-bit pixel data, and will be subsequently reduced by the Video Compression System 100 if necessary for lower color depths. The Memory Controller 106 writes the returned data directly to the current data FIFO 110 and previous data FIFO
112. The Command Process 102 monitors the empty flags of both the current data FIFO 110 and the previous data FIFO 112 to determine when each has pixel data to process.
The flags may be combined using a Boolean OR operation to form one signal.
[0039] Once the Command Process 102 determines that there is pixel data to process, it then begins processing a frame of data (a set number of pixels) as it is available. The Command Process 102 is the core that makes the decisions as to which encoding to perform and sends the resulting decision, per active clock, to the Output Process 103.
This decision may be any of the following:
1. Non-decision, in the case where the pixel has been encoded into an actively-running command (nothing is sent to the Output Process 103 at this point)
2. A request to store a value in the output buffer (pixel, delta, or series)
3. A request to generate command output directly (typically completion of a directional command)
4. A request to generate a command, copying data from one of the output buffers (completion of a pixel-, delta-, or series-based command).
[0040] Two types of encoding the Command Process 102 can perform are shown in Figs. 5-6. As in DVC-based encoding, the video packets are based on 8-bit cells, but other cell sizes can be substituted. There are several types of header cells defined in accordance with the commands shown in Figs. 5-6, but each has the following basic format for the first byte (the header), which is shown in Fig. 4.
[0041] The first three bits (7:5) of the header are Command bits that specify the command. The fourth bit (4) is the Ext bit. When a header has the Ext bit set, it is guaranteed that the command has at least one more byte to follow. The main purpose of the Ext bit is to extend the size of the count field for commands that contain a count, i.e., allowing a command to be applied to a greater number of pixels than can be specified with four bits. The basic format of an Extension byte is shown in Fig. 4. The byte has an additional Ext bit and 7 count bits. For these commands, subsequent cell(s) contain additional most significant bits of the count.
[0042] Thus, a single-byte command, which is limited to 16 (2^4) pixels, can be extended with a two-byte command to 2048 (2^11) pixels, with a three-byte command to 262,144 (2^18) pixels, and with four bytes (the most that should be required) to over 33 million pixels. Since four bytes is the most that should be required with current video coloration and video resolutions, the command is typically limited to a maximum of four bytes. Another reason for limiting the command to a maximum of 4 bytes is the convenience of being able to write the command to a 32-bit wide FIFO in one cycle. It should be noted that although the command is typically limited to four bytes, the command can be extended to more bytes if necessary without departing from the scope of the present invention.
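The capacities quoted above follow directly from 4 count bits in the header plus 7 more count bits per extension byte; a small helper (hypothetical name) reproduces them:

```python
def max_count(num_bytes):
    """Maximum pixel count representable by a command of `num_bytes`:
    4 count bits in the header byte, plus 7 more per extension byte."""
    return 1 << (4 + 7 * (num_bytes - 1))
```

This gives 16 for one byte, 2048 for two, 262,144 for three, and 33,554,432 (over 33 million) for four.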
[0043] Fig. 5 shows the packet format of the original-DVC commands.
[0044] NO CHANGE (NC) (Command = 000) specifies that the Count number of consecutive pixels have not changed since the previous frame, and thus may be copied from the previous (reference) frame.
[0045] COPY LEFT (CL) (Command = 001) specifies that the Count number of consecutive pixels are identical to the pixel immediately to their left (the previous pixel).
[0046] COPY ABOVE (CA) (Command =010) specifies that the Count number of consecutive pixels are identical to the pixel immediately above them (the previous line).
[0047] MAKE SERIES (MS) (Command = 011) represents a series of pixels of Count length that are each one of two possible colors. This is typically very useful for a window of single-color text on a solid background. One data bit for each pixel represented specifies whether that pixel is the first or the second of the designated colors.
In one embodiment, the designated colors are the two most recent and different colors encountered in a line. This can be the color of the immediately preceding pixel to the series (selected when a data bit is clear) and the color of the last pixel before it that was a different color (selected when a data bit is set). This can be easily extended to a larger depth by using multiple data bits to represent each pixel. The depth is defined by the implementation and is communicated downstream via the appropriate INFO command (Figure 10).
[0048] Due to the size of the output holding buffer, the maximum series depth is typically 384 bits, or 12 four-byte words. This permits support for up to 384 pixels at a depth of 2 colors, 192 at 4 colors, 96 at 8 colors, etc. The maximum series length is limited to 256 in this implementation. It should be noted that the maximum series depth can be extended if the size of the output holding buffer is increased.
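The capacities quoted (384 pixels at 2 colors, 192 at 4, 96 at 8) are consistent with the per-pixel index field being rounded up to a power-of-two bit width; the following sketch assumes that rounding, which is an inference from the stated figures rather than something the text spells out:

```python
def max_series_pixels(colors, buffer_bits=384):
    """Pixels that fit in the 384-bit output holding buffer for a given
    subset size, assuming the per-pixel index width is rounded up to a
    power of two (inference from the 96-at-8-colors figure)."""
    bits = (colors - 1).bit_length()  # minimum bits to index the subset
    width = 1
    while width < bits:               # round up to a power of two
        width *= 2
    return buffer_bits // width
```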

[0049] Since MakeSeries does not offer the most efficient compression, (that is the number of pixels that can be represented per byte) compared to the directional run-length encodings (NC, CL and CA), there are times when it is better to terminate a MakeSeries and switch to a directional encoding instead. It is not always easy to determine ahead of time whether it is better to terminate the MakeSeries or to stay in it. The present embodiment keeps a count of the consecutive NC, CL, and CA pixel comparisons during all operations, including MakeSeries. The counts are reset at the start of a MakeSeries, and then potentially after every 8 pixels processed by the MakeSeries command.
[0050] A directional command of 8 or more pixels is guaranteed to be no worse than another 8 pixels tacked on to the end of a MakeSeries. But, if the next pixels after those 8 can only be encoded with the same MakeSeries that was interrupted, then the interruption has actually required an extra MakeSeries header of at least one byte. For an interruption and subsequent resumption of MakeSeries, the set needs to be more than 32 pixels in length to assure that it is better to interrupt the MakeSeries. This allows for a potential restart using a 2-byte MakeSeries header for a count greater than 16.
[0051] One embodiment makes the decision based on a set size of 16. This leaves open the possibility of switching into and out of MakeSeries, but since that is expected to be a very infrequent occurrence, the simplification should be worth it.
[0052] After every 8 pixels processed by MakeSeries, the directional lengths will be checked. If any length is 16 or more, then a directional set could have processed the last 16 pixels. The MakeSeries count is reduced by 16, and the following directional command is seeded with a starting count of 16. This needs to be based on the 8-pixel boundaries within a MakeSeries. Clearing the directional check at the start of each byte if it is less than 8 takes care of this by eliminating any partial bytes.
[0053] The NEW PIXEL command specifies the complete color values for a series of pixels that could not be otherwise compressed. The amount of data is equal to the number of bytes of color per pixel times the number of pixels encoded in this command.
No attempt is made to extend this command beyond support for 16 pixels due to the amount of buffering required for the algorithm to buffer up pixels and then go back and fill in the header's count at a later time. Thus, NEW PIXEL with the Ext set is currently unused.

[0054] In one example implementation, when the Video Compression System 100 is operating in 7-bit color mode, the NewPixel command has a set count of 1 and a NewPixel header cell is required for each pixel of data. This simple header cell is shown as "Single Pixel-7" in Fig. 5. This can result in up to a 2X bandwidth impact for 8-bit data, or 33% additional overhead for 24-bit data. The protocol therefore allows for a count to be used so that an implementation can choose to buffer up or otherwise delay sending the pixel data until a count can be determined. Again, imposing a limit of 16 pixels of data alleviates the buffering requirements and latency.
[0055] It should be noted that the PreviousLine buffer 130 may provide this capability if it is designed with two independent read ports. Otherwise a separate 16x24-bit FIFO
will be required.
[0056] Due to the header format, 7-bit mode does not support NewPixel runs or the additional Delta Commands described below.
[0057] Fig. 6 shows the additional Delta commands. The command formats shown in Fig. 6 are tailored to 24-bit and 15-bit color modes.
[0058] The Delta commands attempt to limit the number of bits transmitted when the directional-based commands fail and an inefficient NEW PIXEL would otherwise have to be sent. The Delta commands operate under the assumption that the pixel may be close in color to one of its neighbors. Rather than sending the NEW PIXEL, a simple reference to that neighbor (via the command field) and a signed color delta may reduce the data to as little as 25% of what would otherwise have been required for the NEW PIXEL command.
[0059] The Delta commands as described below can either specify the difference in color between two pixels as the absolute difference (the four-bit signed difference between two 24-bit numbers) or the difference for each color channel (three four-bit differences, one per color). As shown in Fig. 6, when the Ext bit is not set (a "0" in bit 4), a single delta value is used as the signed difference between the 24-bit string of the current pixel and the previous pixel. When the Ext bit is set (a "1" in bit 4), a delta value for each color channel is provided. Delta values are added to the respective values of the reference pixel to generate the color of the current pixel.
The Delta commands can use signed 4-bit values supporting a range of [-8,7] or another range depending on the Delta Mode.
[0060] DELTA NC specifies that this pixel is only slightly different from the same pixel in the previous frame, that is, it is different only by the specified Delta.
[0061] DELTA CL specifies that this pixel is only slightly different from the previous pixel, again by the specified Delta.
[0062] DELTA CA specifies that this pixel is only slightly different from the pixel immediately above, again by the specified Delta.
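On the decoder side, each of the three Delta commands reduces to adding a small signed value to the chosen reference pixel (same pixel in the previous frame, previous pixel, or pixel above). A sketch of the per-channel, 4-bit delta case; names and the RGB packing order are illustrative assumptions:

```python
def apply_delta(ref_rgb, deltas):
    """Rebuild the current pixel by adding a signed 4-bit delta
    (range [-8, 7]) to each 8-bit channel of the reference pixel."""
    out = 0
    for shift, d in zip((16, 8, 0), deltas):  # assumed R, G, B order
        assert -8 <= d <= 7, "delta outside the 4-bit signed range"
        ch = ((ref_rgb >> shift) & 0xFF) + d
        out |= (ch & 0xFF) << shift
    return out
```

For example, reference `0x102030` with per-channel deltas (1, -2, 3) reconstructs to `0x111E33`.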
[0063] The Delta commands shown in Fig. 6 are the standard Delta Commands. As described below, there are many Delta modes. The Delta Commands are enabled and configured according to the DeltaMode. The Delta modes that are defined are shown in Fig. 7.
[0064] The Delta commands shown in Fig. 6 are Mode 0, the standard format as shown above, with a single pixel represented per command. Modes 4, 5, 8, and 9 are embodiments of delta commands employing 3-bit delta values. Modes 4, 5, 8, and 9 are distinguishable by the range of delta values (i.e., [-4,3] or [0,7]) and the type of packing scheme employed (i.e., Type 1 or Type 2). Type 1 and Type 2 are shown in Fig. 8 and are additional formats that support packing multiple pixel deltas into a single command, thus reducing overhead. Modes 4 and 5 are Type 1, and Modes 8 and 9 are Type 2.
[0065] In Type 1, the Ext (extension) field is used to identify whether another pixel follows. When a cleared Ext field is encountered (Ext = 0), the command has ended, and the unnecessary bits in the current byte (if any) are ignored. As shown in Fig. 8, using Type 1 allows two pixels to be represented by three bytes. It would require four bytes to represent two pixels if the delta command had to be sent for each pixel.
Likewise, five pixels can be represented by seven bytes instead of ten.
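The byte counts given for Type 1 (three bytes for two pixels, seven for five) are reproduced if one assumes the header contributes a 4-bit command/Ext nibble and each packed pixel contributes three 3-bit channel deltas plus a 1-bit Ext flag, i.e. 10 bits per pixel. That bit layout is an inference from the stated counts, not explicit in the text:

```python
import math

def type1_bytes(n_pixels):
    """Bytes for a Type 1 packed 3-bit delta command covering n pixels,
    assuming a 4-bit header nibble plus 10 bits per packed pixel
    (three 3-bit channel deltas + 1 Ext bit)."""
    return math.ceil((4 + 10 * n_pixels) / 8)
```

This yields 3 bytes for 2 pixels and 7 bytes for 5 pixels, versus 4 and 10 bytes respectively with one delta command per pixel.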
[0066] Type 2 takes advantage of the unused Command = 111 and Ext = 1 shown in Fig. 5. In Type 2, when Command = 111 and Ext = 1, one of Delta NC, Delta CL, or Delta CA applies. A two-bit command (Dcmd) embedded within the stream for each pixel determines which one of Delta NC, Delta CL, or Delta CA applies and replaces the single-bit extension field of the previous format. It is particularly useful for long runs of deltas that are a mix of multiple directions. Dcmd specifies one of Delta NC (00), Delta CL (01), Delta CA (10), or termination of command (11).
[0067] Type 2 Packed 3-bit Delta becomes a better compression option over a string of 4-bit non-uniform Deltas as soon as two consecutive Deltas are encountered.
With Delta data being written to the DeltaBuffer, the only additional information necessary is whether any of the current consecutive Deltas has exceeded 3-bit format. The decision to switch to packed deltas is made when the second consecutive Delta is encountered. At this time, the type of all subsequent Deltas will need to be written, along with the 3-bit data fields, to the 12-bit DeltaBuffer. The Output Process 103 can then handle using the current DeltaType and ConsecutiveDeltas count to parse and output the DeltaBuffer accordingly.
[0068] Fig. 7 also shows Mode 1 which is similar to Mode 0 except that the range is [0,15] instead of [-8,7]. Mode 1 is useful for adding a delta value to previously truncated colors.
[0069] The Command Process 102 determines which of the DVC or Delta commands are applicable through the three directional comparison blocks 114, 116 and 118, which send information to the Command Process 102. The comparison blocks 114,116, and 118 operate in parallel, each comparing the current pixel (cPixel) with their respective reference pixel and sending a TRUE signal to the Command Process 102 if the pixels are determined to be equivalent. The comparison blocks also compute deltas of each channel and send that information packed into a 12-bit value, for example, for use by the Command Process 102. A separate Delta flag is passed in addition to the 12-bit value in order to indicate whether the observed deltas were within the current range or not. This keeps the Command Process 102 from having to decode a delta value of all zeros.
[0070] Fig. 9 shows an exemplary Delta comparison block. Each Delta comparison block receives two 24-bit strings (DataA[24] and DataB[24]), one representing a present pixel and another representing a previous pixel. The bit strings are compared using a 24-bit comparator 902. The 24-bit comparator 902 outputs a true value if the bit strings are equal. The exemplary Delta comparison block also includes three 8-bit subtraction blocks 904, 906, and 908 for subtracting the 8-bit color values of the 24-bit number representing the current pixel from the respective 8-bit color values of the 24-bit number representing the previous pixel. The four most significant bits from the subtraction blocks 904, 906, and 908 are sent to 12-bit NOR gate 910, which determines whether a delta condition exists, i.e., the four most significant bits of each color channel's subtraction result are 0000. The subtraction blocks 904, 906, and 908 also transmit the delta value (i.e., the four least significant bits of the difference) for each color channel to the Command Process 102. It should be noted that although the exemplary Delta comparison block is configured for a 24-bit color implementation and a delta mode where a four-bit delta value is generated for each color channel, this is simply for explanatory purposes and not intended to be limiting. One of ordinary skill in the art would appreciate that other hardware configurations could be used for the current implementation and that other hardware configurations may be necessary for other implementations. Block 912 is optional logic that is used for the make series determination.
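A behavioral (not gate-level) model of the Fig. 9 comparison block: it reports exact equality, whether every channel difference fits the current delta range, and the per-channel deltas themselves. The direct signed range test here stands in for the hardware's NOR over the subtractors' high-order bits, and the names are hypothetical:

```python
def compare_block(data_a, data_b):
    """Model of one comparison block: (equal, delta_ok, deltas).
    data_a is the current pixel, data_b the reference pixel, both
    packed 24-bit RGB (assumed R, G, B from MSB to LSB)."""
    equal = data_a == data_b          # the 24-bit comparator
    deltas = []
    delta_ok = True
    for shift in (16, 8, 0):
        diff = ((data_a >> shift) & 0xFF) - ((data_b >> shift) & 0xFF)
        # In hardware a 12-bit NOR checks the high bits of each 8-bit
        # subtraction; here we test the signed 4-bit range directly.
        if not (-8 <= diff <= 7):
            delta_ok = False
        deltas.append(diff)
    return equal, delta_ok, deltas
```

The separate `delta_ok` flag plays the role of the Delta flag the text describes, sparing the Command Process from decoding an all-zero delta.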
[0071] The Video Compression System 100 also has some number of series comparison blocks 120 and 122 (with no delta computation) operating in parallel to determine whether a make-series condition occurs. Comparison blocks 120 and 122 compare the current pixel with the most recent unique pixels and send a TRUE signal to the Command Process 102 if the pixels are equivalent. It should be noted that although the present embodiment uses only two series comparators for the two most recent pixels, more comparators could be used.
[0072] The Command Process 102 tracks which commands (signals) are permissible (based on position within a frame, etc.), which ones still have active runs, and how many pixels are in the current run. When the Command Process 102 determines that a run ends, the end of the line is reached, or the end of a frame is reached, the appropriate command 124 and data 126 are generated and sent to the Output Process 103. The Command Process 102 also updates PreviousPixel 128 and the current location in the PreviousLine buffer 130.
[0073] Position is tracked by XPos variable 132 and YPos variable 134 to count horizontal and vertical positions within a frame in units of pixels. These positions are used to address the current location within the PreviousLine buffer 130 and to determine line and frame boundaries.


[0074] The Output Process 103 generates the output 136 to the rest of the system. If the Output Process 103 cannot write, or is busy writing multiple pieces of data, and cannot accept input, then everything else in the unit must stall waiting. The reason for this separate process is the buffered nature of the outgoing commands.
The commands are buffered so that the header can be filled in once the size (count) is known.
Without this separate process, the main Command Process 102 would take as much as twice as long to process pixels. With this dual process implementation, both processes operate in parallel in a pipelined fashion.
[0075] The Output Process 103 tracks the compression performance during each frame in terms of the number of bytes required to compress a number of pixels. This ratio can be tracked after each command, and when it exceeds a threshold (defined via an applet by the user), the Output Process 103 will inform the Command Process 102 that it needs to reduce the color depth in an effort to conserve bandwidth. The Output Process 103 will define and track BytesPerFrame and PixelsPerFrame, both of which are reset at the end of each frame. To avoid erroneous decisions, the values are not actually used until PixelsPerFrame indicates that the process is at least one line into the frame.
[0076] The most likely threshold is a limit on the number of Bytes/second.
This can be translated to a ratio of bytes/pixel (BP) as follows:
X bytes/second >= (width * height) * BP * fps
BP = X / (width * height * fps)

[0077] For a maximum bandwidth allocation of, say, 8MBps, using 1024x768 at 30fps, this would equate to a Byte/Pixel ratio of

[0078] BP ratio = 8M / (1024 * 768 * 30) = 0.34

[0079] The straightforward implementation would multiply the current pixel count by this value and then see if the result is still greater than the current byte count. If it is not, then the bandwidth is being exceeded and the Command Process 102 needs to throttle back if possible.

[0080] Including a floating-point multiplier, however, is likely unnecessary.
Reversing the ratio and defining it as pixels per byte allows for integer multiplication with a little less accuracy on the desired MBps rate.
[0081] PB ratio = 1024 * 768 * 30 / 8M = 2.94, rounded to 3 (which is actually 7.86MBps)

[0082] Rounding this ratio up to the nearest integer is the conservative approach, but it could potentially give up a significant amount of bandwidth.
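The integer throttle of paragraphs [0080]-[0082] can be sketched as below. This is an illustrative sketch, not the patented implementation; the function names are assumptions. Rounding PB up is the conservative choice the text describes, so the budget check throttles slightly earlier than the exact ratio would.

```c
#include <stdint.h>
#include <stdbool.h>

/* Pixels-per-byte ratio, rounded up to the nearest integer so that
 * integer multiplication can replace a floating-point bytes/pixel
 * multiply. */
static uint32_t pb_ratio(uint32_t width, uint32_t height,
                         uint32_t fps, uint32_t bytes_per_sec)
{
    uint64_t pixels_per_sec = (uint64_t)width * height * fps;
    return (uint32_t)((pixels_per_sec + bytes_per_sec - 1) / bytes_per_sec);
}

/* TRUE when the frame so far has used more bandwidth than allowed:
 * bytes/pixel <= BP is equivalent to pixels >= bytes * PB. */
static bool over_budget(uint32_t pixels_so_far, uint32_t bytes_so_far,
                        uint32_t pb)
{
    return (uint64_t)bytes_so_far * pb > pixels_so_far;
}
```

For the paper's example, `pb_ratio(1024, 768, 30, 8000000)` yields 3, matching the rounded [0081] figure.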
[0083] The Video Compression System 100 can also use "Color reduction" to reduce the bandwidth usage. This is separate from the operating system based color depth setting that the user will have configured for the computer. Regardless of the operating system setting, the Video Compression System 100 is storing 24-bit color data.
The Video Compression System 100 will likewise read 24-bit color data from memory, but color reduction will remove some number of least significant bits from each color channel (e.g., setting them to zero), thus increasing the likelihood that neighboring pixels will appear "identical" to the comparators 114, 116, and 118, and also reducing the size of data required to send an uncompressed pixel.
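Color reduction as described in [0083] amounts to masking the low bits of each channel. A minimal sketch, assuming a 0xRRGGBB layout (the patent does not fix a byte order) and a hypothetical `drop` parameter for the number of bits removed per channel:

```c
#include <stdint.h>

/* Clear the low `drop` bits of each 8-bit channel of a 24-bit pixel.
 * Neighboring pixels that differ only in those bits then compare as
 * identical, improving run-length opportunities. */
static uint32_t reduce_color(uint32_t pixel, unsigned drop)
{
    uint32_t chan_mask = (0xFFu << drop) & 0xFFu;  /* drop=3 -> 0xF8 */
    uint32_t mask = (chan_mask << 16) | (chan_mask << 8) | chan_mask;
    return pixel & mask;
}
```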
[0084] The Video Compression System 100 could be designed to operate on 8- or 16-bit color throughout, but since it needs to handle 24-bit color at high resolution and frame rates, there is nothing to be gained by trying to handle the lower color depths natively.
This would simply complicate the implementation for something that is expected to be rare. If it were the only way to support the higher resolutions, it may be desirable.
[0085] It also may be desirable for an embodiment which has no intention of supporting 24-bit color, but such implementation would need to have global changes made throughout the Video Compression System 100 to best optimize the performance and minimize gate count and cost.
[0086] Besides the operating system and native color depths, there are a couple of other terms related to color depth. "Comparison depth" is the term used to describe the number of most significant color bits used (per color channel) when comparing two colors. There is also the current color depth (not the native depth) of the subsystem, which is what is referred to when the Video Compression System 100 references "Color Depth."
The comparison depth does not necessarily have to be the same as the ColorDepth.
For instance, comparisons can be done with 15-bit color (5 bits per channel), and yet 24-bit pixels can be sent via the Delta and NEW PIXEL commands. This will result in somewhat more than 2^15 colors potentially displayed on the screen, but will reduce the color fidelity by slightly changing some neighboring colors. The advantage is that compression is potentially improved via the relaxed equality requirement.

[0087] The valid ColorDepth values are limited by the byte nature of the video protocol to the following: 24-bit, 15-bit, and 7-bit. The comparison depths that can be used to improve comparisons are: 3, 6, 9, 12, 15, 18, 21, and 24. The depth could be varied per channel.
[0088] The comparison depth to be used by the comparison logic is held in the ComparisonDepth register. This specifies the depth of a single channel and is used to create a mask that is applied to each pixel being fed into the comparator source registers.
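The mask implied by the ComparisonDepth register might be built as follows. This is a sketch under the assumption of 8-bit channels in a 0xRRGGBB word; the helper names are illustrative, and `masked_equal` mirrors how the Appendix's CmpDelta applies ComparisonMask before comparing.

```c
#include <stdint.h>
#include <stdbool.h>

/* Keep the `depth` most significant bits of each 8-bit channel.
 * depth = 5 gives the 15-bit comparisons mentioned in [0086];
 * depth = 8 compares full 24-bit color. */
static uint32_t comparison_mask(unsigned depth) /* 1..8 bits/channel */
{
    uint32_t chan = (0xFFu << (8 - depth)) & 0xFFu;
    return (chan << 16) | (chan << 8) | chan;
}

/* Equality test as seen by the comparators: both pixels are masked
 * before the 24-bit compare. */
static bool masked_equal(uint32_t a, uint32_t b, uint32_t mask)
{
    return (a & mask) == (b & mask);
}
```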
[0089] The ColorDepth register controls how the Output Process 103 constructs NEW PIXEL commands, and how logic in the rest of the system operates. If the ColorDepth is set to 24-bit or 15-bit modes, then the NEW PIXEL command is constructed with an 8-bit header (supporting up to 16 pixels) and 24-bits (or 15-bits) of color data per pixel.
[0090] For a ColorDepth of 7, DVC algorithms have typically used a special mapping of the color ranges. That is not done in the current implementation, which favors color masking. Instead, 7 bits are used, with 2 bits for each of red and blue and 3 bits for green. Green is commonly favored in digital imaging color formats because the human eye more readily detects green and its subtle shading variations than it does either red or blue. This results in a 7-bit protocol with a NEW PIXEL command that is constructed with a single-bit header and 7 bits of color data per pixel.
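A sketch of the 2-3-2 packing in [0090]: take the top 2 bits of red and blue and the top 3 bits of green from a 24-bit 0xRRGGBB pixel. The field order within the 7-bit value is an assumption here; the patent only fixes the per-channel widths.

```c
#include <stdint.h>

/* Pack a 24-bit pixel into the assumed 7-bit R2-G3-B2 format. */
static uint8_t pack_7bit(uint32_t pixel)
{
    uint8_t r = (pixel >> 22) & 0x3; /* top 2 bits of red   */
    uint8_t g = (pixel >> 13) & 0x7; /* top 3 bits of green */
    uint8_t b = (pixel >>  6) & 0x3; /* top 2 bits of blue  */
    return (uint8_t)((r << 5) | (g << 2) | b);
}
```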
[0091] Both ColorDepth and ComparisonDepth registers are part of the state that is configurable by the user. The defaults can be controlled via jumpers or hard-coded into the hardware. The user should be able to adjust these either directly or indirectly via an applet that lets them adjust for bandwidth, etc. This applet communicates with the hardware through the Control Unit 102. When these values are being set, it must be assured that the video subsystem is either idle, or the values must be staged so that the changes are not made until the video subsystem is at a point where it can deal with the change.
[0092] The Video Compression System 100 needs to know the pixel depth (number of bits) and frame size (height and width in terms of pixels) to account for the line width and any padding that may be necessary at the end of a line or frame. It also must know the start of each frame since COPY ABOVE may be turned off for the first line of a frame, and COPY LEFT may be turned off for the first pixel of each line.
[0093] Note that this particular implementation does not suppress COPY ABOVE
or COPY LEFT, but instead compares to the last row of the previous frame or the last pixel of the previous line, respectively. In the event of a timing change which results in starting with a clean slate after an EndFrame:Clear message, left and above are compared against black pixel values of 0x000000.
[0094] Fig. 10 shows control commands that are generated at the Command Process 102. Control commands need to be distinguished from the encoding commands.
Control commands are recognized by the fact that the least significant 5 bits are all 0, indicating a zero count, and the most significant bit is clear indicating that it is a command that requires a count (versus a Delta or New Pixel command with a color).
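The recognition rule in [0094] reduces to two bit tests on the header byte, sketched below (the function name is illustrative). Note that INFO HEADER (0x20 in the Appendix) satisfies both conditions: its low five count bits are zero and its most significant bit is clear.

```c
#include <stdint.h>
#include <stdbool.h>

/* A header byte is a control command when its least significant 5 bits
 * (the count field) are all zero and its most significant bit is clear
 * (the bit that would mark a Delta or New Pixel command with a color). */
static bool is_control_command(uint8_t header)
{
    return (header & 0x1F) == 0 && (header & 0x80) == 0;
}
```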
[0095] The group of control commands shown in Fig. 10 is grouped under the INFO HEADER with sub-header byte that specifies Type and a Value for the Type.
Types are defined in the table shown in Fig. 10. Value is specific to each Type, but in general is the number of bytes following this two-byte header (not including the two-byte header) for commands longer than this header. For commands consisting only of the two bytes of this header, value is a 4-bit data value.
[0096] Fig. 11 shows the INFO HEADER for each Type.
[0097] The ColorDepth command is used to specify the color mode. Currently, 24-bit, 15-bit, and 7-bit color modes are supported in this implementation. ColorDepth modes are defined in Fig. 11. Color Depth is provided to the Video Compression System 100 via a configuration register, and does not come in through the message stream.
It should be noted that although the present embodiment only defines three color modes, more color modes could be defined in the future without departing from the scope of the present invention.
[0098] The SERIES DEPTH command shown in Fig. 11 is used to specify the width of the bit-field used to identify the color of each pixel in a Series command.
The default depth is 1, which allows for series composed of 2 colors. The maximum depth supported varies by implementation, but in the preferred embodiment, a depth of 4 bits (16 colors) is the maximum desirable.

[0099] DELTA MODE specifies the delta mode to be used for all subsequent commands. There are no data bytes following.
[00100] The CLEAR FRAME command is used to indicate that a previous frame is to be cleared to zeros, or at least that the next frame should be compared to all zeros rather than the contents of the previous frame.
[00101] The COMPARISON DEPTH command is used to control the number of bits that are used when comparing pixels to each other. This is specified in terms of bits/channel. Comparison Depth is provided to the Video Compression System 100 via a configuration register, and does not come in through the message stream.
[00102] The FRAMESTATUS command is used to convey the status of a frame, signifying either successful completion or the need to abort further processing of the current frame. A Status field of 0 indicates success, while 1 indicates the need to abort.
[00103] Fig. 12 shows the FRAME SIZE command. FrameWidth and FrameHeight specify the dimensions of the subsequent frames in terms of pixels, ranging from 0 to 65,535. Implementations will generally have a smaller maximum supported value on the order of 4,000 or less depending upon the application. BitsPerPixel specifies the number of bits used to define the color per pixel. This typically is in the range of 24 bits or less, although larger depths are possible in the future. This is fixed to 24-bits for this implementation.
[00104] The only intent for this Frame Size message is to more neatly convey the Frame Size information to the Video Decompression Unit (VDU) (not shown) on the other side of the link. This unit typically gathers the separate pieces of data from the timing messages, which are destined for the DVI Output Unit on the far side of the link.
Packaging all the information in this message (which is bound for the VDU) allows the VDU to ignore the timing messages. The fields of the FRAME SIZE command are defined as follows:
[00105] Pixel Clock: This command is used to transmit the values necessary to regenerate a pixel clock for the display on the other side of the link.
[00106] Horizontal timing: This command is used to transmit the horizontal display timing values to the other side of the link.

[00107] Vertical timing: This command is used to transmit the vertical display timing values to the other side of the link.
[00108] Fig. 13 shows a general data packet. This general purpose data packet is used to send data across the link. It is assumed that the other side of the link can parse the data.
[00109] It should be noted that maximum compression is achieved when an entire frame can be encoded as one NoChange message. However, in the double-buffered architecture of the present embodiment such encoding would result in a full frame-time of latency (on the order of 16ms) between the times that the Video Compression System 100 starts processing a frame and when the VDU starts to receive the results of that processing.
Instead of letting a run-length command build throughout the entire frame, the commands are limited in size and sent periodically throughout the frame. For simplicity, this is done at the end of each line. If absolute performance becomes more important than latency, this could be fine-tuned in numerous ways.
[00110] It should be noted that due to the architecture of the Digital Input Unit (DIU) (not shown), the Video Compression System 100 may already be processing a frame when it receives information indicating that the resolution is changing. This information will be in the form of an EndFrame message with the Clear bit set.
[00111] When the Video Compression System 100 finishes a frame, it should see an EndFrame message with a 0 Clear bit in its incoming FIFO. If it instead sees an EndFrame with Clear bit or any Timing message at the front of its FIFO, then it knows that the previous frame (the one it just finished processing) was invalid. The Video Compression System 100 then sends a FrameAbort message to the VDU to indicate that the preceding frame should not be displayed. If there is no message in the incoming FIFO, the Video Compression System 100 waits until one is present.
[00112] If the resolution has decreased, then the Video Compression System 100 will have attempted a much larger read to memory than what memory will actually be able to supply. The Memory Interface Unit (MIU) 107 assures that the Video Compression
System 100 read pointer trails behind the DIU write pointer so that the Video Compression System 100 does not overtake the data being written. The early termination of DVI 104 data going to memory 106 will mean that the DIU write pointer will stall for the timeout period, causing the Video Compression System 100 to stall as well waiting on data. When the Memory Interface Unit 107 decides to terminate the request, it will simply abort the processing of the Video Compression System's 100 read request.
[00113] The Video Compression System 100 must continually monitor its incoming command FIFO while processing pixels coming from memory 106. If the Video Compression System 100 sees an EndFrame, Clear or Vertical Timing message from the DIU, it immediately terminates processing of the current frame and sends the FrameAbort message to the VDU. The Video Compression System 100 also immediately sends a Purge Frame message to the MIU 107. This message tells the MIU
107 to stop sending further data to the Video Compression System 100 for the frame currently in progress. The MIU 107 subsequently will flush any data remaining to be read from the DIU. The Video Compression System 100 is responsible for flushing the Current and Previous FIFOs after notifying the MIU 107 to stop adding to them.
[00114] The Video Compression System 100 needs to process a pixel per clock with per-frame latency on the order of only a few pixels and no latency between pixels. This depends heavily on the memory controller 106 being able to provide input data at the necessary rate, and also on the output not backing up and forcing a stall.
[00115] Assuming that the above criteria is met, and that a pixel can be processed per clock, then the clock rate is based on the number of pixels per frame and the desired frame rate. The table in Fig. 15 shows the rates required for various combinations.
[00116] Allowing for some overhead, the table shows that an 80MHz core clock would be sufficient for 30 fps HDTV. A more likely target would be something on the order of 100-125MHz, which would handle even QXGA at 30 fps, and would support HDTV
at over 40 fps. While very high resolutions at very high frame rates may require too high a clock rate to be commercially/economically feasible, advances in processing rates and costs will eventually enable even those very high resolutions and frame rates to be implemented with economically feasible off-the-shelf products. Likewise, clock rates that are borderline for commercial/economic reasonableness today will be economically feasible with off-the-shelf products even sooner.
[00117] Being able to compress a single frame at a 60fps rate improves the latency perceived by the end user, and is therefore a worthwhile goal even if frames are intentionally dropped or are otherwise not transmitted 60 times per second.

[00118] While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

APPENDIX

#define MAX_FRAME_WIDTH 2048
#define MAX_FRAME_HEIGHT 1536
#define MAX_LINE_BUFFER_SIZE MAX_FRAME_WIDTH
#define MAX_SERIES_LENGTH 360
#define MAX_SERIES_DEPTH 2
#define MAX_PIXEL_RUN 15
#define OUTPUT_FIFO_SIZE 17
#define MESSAGE_START_FRAME 1
#define MESSAGE_END_FRAME 2
#define MESSAGE_HORIZONTAL_TIMING 4
#define MESSAGE_VERTICAL_TIMING 5
#define MESSAGE_PIXEL_CLOCK_RATE 6
#define COLOR_DEPTH_MODE_24 8
#define COLOR_DEPTH_MODE_7 7
#define COLOR_DEPTH_MODE_15 5

//
// Compression Commands
//
#define CMD_NC 0x0
#define CMD_CL 0x1
#define CMD_CA 0x2
#define CMD_SERIES 0x3
#define CMD_DELTA_NC 0x4
#define CMD_DELTA_CL 0x5
#define CMD_DELTA_CA 0x6
#define CMD_NEW_PIXEL 0x7
#define CMD_DELTA_NC_UNIFORM 0xc
#define CMD_DELTA_CL_UNIFORM 0xd
#define CMD_DELTA_CA_UNIFORM 0xe
#define INFO_HEADER 0x20
#define FRAME_SIZE_HEADER 0x85
#define FRAME_STATUS 0x50
#define CLEAR_FRAME 0x30
#define FRAME_STATUS_OK 0
#define FRAME_STATUS_ABORT 1
#define MEMORY_TIMEOUT 100000000 // on the order of one second?
#define MAX_ENCODE_COUNT MAX_FRAME_WIDTH

uint24 LineBuffer[MAX_LINE_BUFFER_SIZE];
uint24 CPixelFIFO[MEMORY_OUTPUT_FIFO_DEPTH];
uint24 PPixelFIFO[MEMORY_OUTPUT_FIFO_DEPTH];
uint8 InFIFO[8];

struct MemFIFO_s {
    uint13 width;
    uint13 height;
    uint2 previousBuffer;
    uint2 currentBuffer;
    uint2 command;
} MemFIFO[4];

struct CmdFIFO_s {
    uint8 command;
    uint24 data;
} CmdFIFO[4];

uint32 OutFIFO[OUTPUT_FIFO_SIZE];

uint24 DataFIFO[15]; // This provides space for up to 15 pixels, or 360 bits worth of series data.

//
// Do we need to stage frame parameters (height, width, series depth, etc.)
// or are the processes in sync enough to just stall the whole VCU pipe
// momentarily when a parameter changes?
//
struct Header {
    uint4 data;
    uint1 extend;
    uint3 type;
};

uint16 FrameHeight;
uint16 FrameWidth;
uint1 ClearFrame;
uint4 ColorDepthMode; // current color depth output
uint4 Feedback;       // register used to provide feedback from the
                      // output process to the command process

//
// The following are configuration registers
//
uint4 InitialColorDepthMode; // controls color depth output
uint24 ComparisonMask;       // controls comparison depth
uint16 PBRatio;              // controls color depth throttling (0 disables)

//
// Command Process
//
uint2 CurrentBuffer;
uint2 PreviousBuffer;
uint1 ClearFrame;
uint13 FrameWidth;
uint13 FrameHeight;
uint24 ComparisonMask;
uint4 ComparisonDepth;
uint2 SeriesBitWidth;
uint2 CurrentSeriesMatch;
uint2 PreviousSeriesMatch;
uint4 ConsecutivePixels;
uint10 EncodeCount;
uint8 SeriesDataCount;
uint24 SeriesData;
uint4 SeriesRun;
bool FrameStatusPending;
bool FrameSizeChanged;

typedef struct {
    boolean equal;
    boolean delta;
    boolean uniform;
    uint12 value;
} cmpStruct;

void OnReset() {
    PreviousBuffer = 0;
    CurrentBuffer = 0;
    ColorDepthMode = COLOR_DEPTH_MODE_24;
    ComparisonMask = 0xffffff;
    ComparisonDepth = 8;
    ClearFrame = TRUE;
    FrameWidth = 0;
    FrameHeight = 0;
    SeriesBitWidth = 1;
    FrameStatusPending = FALSE;
    FrameSizeChanged = FALSE;
    Feedback = 0;
}
void Forever() {
    uint8 message;
    while (InFIFO empty) ; // spin
    Read message from InFIFO;
    switch ((message >> 4) & 0xf) {
    case MESSAGE_START_FRAME:
        if (FrameStatusPending) SendFrameStatus(FRAME_STATUS_OK);
        if (ColorDepthModeChanged | ComparisonMaskChanged) SendModeInfo();
        if (FrameSizeChanged) SendFrameSize();
        ProcessFrame(message);
        break;

    case MESSAGE_HORIZONTAL_TIMING:
    case MESSAGE_VERTICAL_TIMING:
    case MESSAGE_PIXEL_CLOCK_RATE:
        if (FrameStatusPending) SendFrameStatus(FRAME_STATUS_ABORT);
        ForwardMessage(message);
        FrameSizeChanged = TRUE;
        break;

    case MESSAGE_END_FRAME:
        if (message & 0xf) ClearFrame = TRUE;
        else ClearFrame = FALSE;

        if (ClearFrame & FrameStatusPending)
            SendFrameStatus(FRAME_STATUS_ABORT);
        else if (FrameStatusPending)
            SendFrameStatus(FRAME_STATUS_OK);
        Flush byte from InFIFO;

        // Send a ClearFrame command over the link
        if (ClearFrame) {
            while (CmdFIFO is full) ; // spin
            Write CLEAR_FRAME to CmdFIFO;
        }
        break;

    default:
        // unrecognized header, Error
        Flush byte from InFIFO;
        break;
    }
}

void ForwardMessage(uint8 message) {
    uint8 data;
    uint16 snoop = 0;
    int count = message & 0xf;
    int type = (message & 0xf0) >> 4;
    Flush byte from InFIFO;

    for (int b = 0; b < count; b++) {
        Read data from InFIFO;
        while (CmdFIFO is full) ; // spin
        Write data to CmdFIFO;
        snoop = (snoop << 8) | data;
        if (b == 1) {
            // snooped frame size ready
            if (type == MESSAGE_HORIZONTAL_TIMING) FrameWidth = snoop >> 3;
            else if (type == MESSAGE_VERTICAL_TIMING) FrameHeight = snoop >> 3;
        }
        Flush byte from InFIFO;
    }
}
bool ResolutionChange() {
    if (InFIFO not Empty) {
        uint8 data;
        uint4 peek;

        Read data from InFIFO; // do not flush!
        peek = (data >> 4) & 0xf;
        if (((peek == MESSAGE_END_FRAME) && (data & 0x1)) ||
            (peek == MESSAGE_VERTICAL_TIMING)) {
            // Stop! Resolution has changed.
            // Forward an Abort message to the VDU
            // and abort the current frame processing.
            SendFrameStatus(FRAME_STATUS_ABORT);
            return(TRUE);
        }
    }
    return(FALSE);
}

void ProcessFrame(message) {
    MemRequest request;

    //
    // Init some frame specific variables
    //
    CurrentSeriesMatch = 0;
    PreviousSeriesMatch = 0;
    ConsecutivePixels = 0;
    EncodeCount = 0;
    SeriesDataCount = 0;
    SeriesData = 0;
    SeriesRun = 0;

    // Reset mode to that selected initially.
    if (ColorDepthMode != InitialColorDepthMode) {
        ChangeColorDepth(0);
        ColorDepthMode = InitialColorDepthMode;
    }

    CurrentBuffer = (message & 0xf0) >> 4;
    PreviousBuffer = 1 - CurrentBuffer; // 2-buffer specific

    // Issue a read request to memory for the relevant frame information.
    while (MemFIFO.FULL) ; // spin
    request.currentBuffer = CurrentBuffer;
    request.previousBuffer = PreviousBuffer;
    request.width = FrameWidth;
    request.height = FrameHeight;
    Write request to MemFIFO;

    //
    // Now wait for the requested data to show up and then
    // proceed to process it.
    // Note: This does incur a startup delay on each frame of
    // compression waiting for memory FIFOs to be "primed".
    //
    uint16 x, y;

    for (y = 0; y < FrameHeight; y++) {
        for (x = 0; x < FrameWidth; ) {
            if (Feedback != 0) PendingChange = TRUE;
            if (ResolutionChange()) return;

            // Spin waiting for data from memory. Keep an eye out for
            // incoming messages indicating a resolution change and the
            // need to abort the current frame processing.
            uint32 time = 0;
            while (CurFIFO.empty || PrevFIFO.empty) {
                if (ResolutionChange()) {
                    return;
                }
            }
            CurPixel = data from CurFIFO;
            OldPixel = data from PrevFIFO;
            LinePixel = LineBuffer[x];
            if (ProcessPixel(x, y)) {
                //
                // update line buffer and previous pixel
                //
                LineBuffer[x] = CurPixel;
                PrevPixel = CurPixel;

                // Update the Series Pixels if series is not active.
                // CurrentSeriesMatch will have been set to zero if series
                // is still active, otherwise CurrentSeriesMatch will be the
                // index of the matching entry. If no match, it will be the
                // index of the oldest entry which will cause all entries
                // to shift.
                //
                if (CurrentSeriesMatch != 0) {
                    for (i = CurrentSeriesMatch; i > 1; i--) {
                        sPixel[i] = sPixel[i-1];
                    }
                    sPixel[0] = CurPixel;
                }
                x++;
                Flush pixel from CurFIFO and PrevFIFO;
            }

            if (PendingChange) {
                ChangeColorDepth(Feedback);
                Feedback = 0;
            }
        }
    }

    ClearFrame = FALSE;
    FrameStatusPending = TRUE;
}

//
// Global inputs to ProcessPixel are:
//   CurPixel  - the output of the CurPixel FIFO from Memory
//   OldPixel  - the output of the OldPixel FIFO from Memory
//   PrevPixel - the previous pixel register
//   LinePixel - the pixel read from Line Buffer
//
bool ProcessPixel(uint16 x, uint16 y) {
    uint12 deltaValue;
    boolean currentPixelProcessed = TRUE;
    unsigned command;
    bool endHere = FALSE;
    uint6 startRL = 0;
    int MSTerminate = 0;

    cmpStruct frameCmp, lineCmp, pixelCmp;
    bool seriesCmp[MAX_SERIES_DEPTH];

    if (ClearFrame) {
        if (y == 0) LinePixel = 0;
        if ((x == 0) && (y == 0)) PrevPixel = 0;
        OldPixel = 0;
    }

    // The following Cmp* logic blocks all operate in parallel,
    // processing a pixel each clock
    pixelCmp = CmpDelta(CurPixel, PrevPixel);
    lineCmp = CmpDelta(CurPixel, LinePixel);
    frameCmp = CmpDelta(CurPixel, OldPixel);
    for (uint5 i = 0; i < MAX_SERIES_DEPTH; i++)
        seriesCmp[i] = Cmp(CurPixel, SerPixel[i]);
    if (ClearFrame) pFActive = FALSE;

    //
    // Initialize the default command as the leading contender of
    // active commands in case run terminates here.
    //
    if (pFActive) command = CMD_NC;
    else if (pPActive) command = CMD_CL;
    else if (pLActive) command = CMD_CA;
    else if (sActive) command = CMD_SERIES;
    else command = NULL;

    // Adjust current state of active encodings based on inputs
    pFActive = pFActive & frameCmp.equal;
    pLActive = pLActive & lineCmp.equal;
    pPActive = pPActive & pixelCmp.equal;
    sActive = sActive & (seriesCmp[0] | ... | seriesCmp[SeriesDepth]);

    // If Series is still active, determine which pixel in the series
    // matches the current pixel.
    if (sActive) {
        uint2 serNum;
        for (serNum = 0;
             (seriesCmp[serNum] == 0) && serNum < SeriesDepth;
             serNum++) ;
        if (serNum == SeriesDepth) CurrentSeriesMatch = SeriesDepth - 1;
        else CurrentSeriesMatch = serNum;
    }

    // Determine the leading delta contender, if any. Uniform deltas
    // take precedence, followed by non-uniform in the order:
    // frame, line, pixel.
    if (frameCmp.uniform) {
        deltaType = CMD_DELTA_NC | 0x8;
        deltaValue = frameCmp.value;
    } else if (pixelCmp.uniform) {
        deltaType = CMD_DELTA_CL | 0x8;
        deltaValue = pixelCmp.value;
    } else if (lineCmp.uniform) {
        deltaType = CMD_DELTA_CA | 0x8;
        deltaValue = lineCmp.value;
    } else if (frameCmp.delta) {
        deltaType = CMD_DELTA_NC;
        deltaValue = frameCmp.value;
    } else if (pixelCmp.delta) {
        deltaType = CMD_DELTA_CL;
        deltaValue = pixelCmp.value;
    } else if (lineCmp.delta) {
        deltaType = CMD_DELTA_CA;
        deltaValue = lineCmp.value;
    } else {
        deltaType = NULL;
        deltaValue = 0;
    }

    // See if any of the directional encodings are still active and legal.
    if (pFActive ||
        (pPActive && ((x != (FrameWidth - 1)) || CopyLeftWrap)) ||
        (pLActive && (y != 0))) {

        EncodeCount++;

        // We're de-activating series if ANY runs are possible to avoid
        // having to track the order of pixels within a series command.
        sActive = FALSE;

        if (EncodeCount == MAX_ENCODE_COUNT) endHere = TRUE;
    } else if (sActive && (PendingChange == 0)) {
        // Series is the only active command.
        // Accumulate series bits internally until they exceed internal
        // limits, then write to the output series buffer. Series count
        // should always be internal, so therefore has a max of 2^25
        // pixels...more than enough.
        EncodeCount++;
        SeriesData = (SeriesData << SeriesBitWidth) | CurrentSeriesMatch;
        SeriesDataCount += SeriesBitWidth;

        // Keep track of RL counts while in series
        if (frameCmp.equal) MSRunNC++;
        else MSRunNC = 0;
        if (lineCmp.equal) MSRunCA++;
        else MSRunCA = 0;
        if (pixelCmp.equal) MSRunCL++;
        else MSRunCL = 0;

        //
        // When series is at a boundary, check runs
        //
        if ((EncodeCount & 0x7) == 0) {
            MSTerminate = 0;
            if (MSRunNC < 8) MSRunNC = 0;
            if (MSRunCL < 8) MSRunCL = 0;
            if (MSRunCA < 8) MSRunCA = 0;
            if (MSRunNC == 16) MSTerminate |= 0x1;
            if (MSRunCL == 16) MSTerminate |= 0x2;
            if (MSRunCA == 16) MSTerminate |= 0x4;
        }

        if (MSTerminate) {
            // It's beneficial to terminate the series at this point and
            // replace with an active Run Length command
            endHere = TRUE;
            EncodeCount -= 16;
            SeriesDataCount = 0;
            startRL = 16;
        } else if (EncodeCount == MAX_SERIES_LENGTH) {
            endHere = TRUE;
        }
        if (SeriesDataCount == 24) {
            // write 24-bit SeriesData out
            while (DataFIFO.FULL) ; // spin
            Write SeriesData to DataFIFO;
            SeriesData = 0;
            SeriesDataCount = 0;
        }
        CurrentSeriesMatch = 0;
    } else {
        endHere = TRUE;
    }

    if (endHere || PendingChange) {

        // Run is terminating (or never started)
        if (EncodeCount == 0) {
            //
            // Run never started, output pixel or delta
            //
            if (deltaType != NULL) {
                FinishPendingPixels();
                WriteCommand(deltaType, deltaValue);
            } else {
                while (DataFIFO.FULL) ; // spin
                Write CurPixel to DataFIFO;
                ConsecutivePixels++;
                if (ConsecutivePixels >= MAX_PIXEL_RUN) {
                    WriteCommand(CMD_NEW_PIXEL, ConsecutivePixels);
                    ConsecutivePixels = 0;
                }
            }

        } else {
            //
            // Run has terminated, either because encoding failed, or
            // end condition reached.
            //
            if (not about to do a NewPixel)
                FinishPendingPixels(); // flush any DataFIFO pixels
            if (SeriesDataCount && (command == CMD_SERIES)) {
                //
                // flush any pending series data to the DataFIFO
                //
                while (DataFIFO.FULL) ; // spin
                Write SeriesData to DataFIFO;
                SeriesData = 0;
                SeriesDataCount = 0;
            }
            currentActive = pFActive | pLActive | pPActive | sActive;
            if (currentActive == 0) {
                // Encoding failed on this pixel, and therefore the
                // current command does not include this pixel.
                // Output command for previous encoding which was
                // initialized on entry to this function
                WriteCommand(command, EncodeCount);
                currentPixelProcessed = FALSE;

            } else {
                //
                // run terminated due to end of line, frame, etc.
                // Pick command based on current active encoders.
                //
                if (pFActive) command = CMD_NC;
                else if (pPActive) command = CMD_CL;
                else if (pLActive) command = CMD_CA;
                else if (sActive) command = CMD_SERIES;
                WriteCommand(command, EncodeCount);
            }
        }
        if (startRL != 0) {
            pFActive = pLActive = pPActive = sActive = FALSE;
            if (MSTerminate & 0x1) pFActive = TRUE;
            if (MSTerminate & 0x2) pPActive = TRUE;
            if (MSTerminate & 0x4) pLActive = TRUE;
            EncodeCount = startRL;
        } else {
            pFActive = pLActive = pPActive = sActive = TRUE;
        }
    }

    return(currentPixelProcessed);
}

void ChangeColorDepth(uint4 feedback) {
    uint8 data[2];

    if (feedback == 0) ColorDepthMode = COLOR_DEPTH_MODE_24;
    else if (feedback == 1) ColorDepthMode = COLOR_DEPTH_MODE_15;
    else ColorDepthMode = COLOR_DEPTH_MODE_7;
    data[0] = INFO_HEADER; // 40
    data[1] = COLOR_DEPTH_HEADER | ColorDepthMode;
    for (int index = 0; index < 2; index++) {
        while (CmdFIFO FULL) ; // spin
        Write data[index] to CmdFIFO;
    }
}

void SendFrameSize() {
    uint8 data[7];
    data[0] = INFO_HEADER;       // 40
    data[1] = FRAME_SIZE_HEADER; // 85
    data[2] = FrameHeight & 0xff;
    data[3] = (FrameHeight & 0xff00) >> 8;
    data[4] = FrameWidth & 0xff;
    data[5] = (FrameWidth & 0xff00) >> 8;
    data[6] = ColorDepth; //????

    for (int index = 0; index < 7; index++) {
        while (CmdFIFO FULL) ; // spin
        Write data[index] to CmdFIFO;
    }
}

void SendFrameStatus(uint4 status) {
    uint8 data[2];
    data[0] = INFO_HEADER; // 40
    data[1] = FRAME_STATUS | status;

    for (int index = 0; index < 2; index++) {
        while (CmdFIFO FULL) ; // spin
        Write data[index] to CmdFIFO;
    }
    FrameStatusPending = FALSE;
}

void SendModeInfo() {
    uint8 data[6];
    data[0] = INFO_HEADER; // 40
    data[1] = COLOR_DEPTH_HEADER | ColorDepthMode;
    data[2] = INFO_HEADER; // 40
    data[3] = SERIES_DEPTH_HEADER | SeriesDepthMode;
    data[4] = INFO_HEADER; // 40
    data[5] = COMPARISON_DEPTH_HEADER | ComparisonDepth;
    for (int index = 0; index < 6; index++) {
        while (CmdFIFO FULL) ; // spin
        Write data[index] to CmdFIFO;
    }
}

cmpStruct CmpDelta(PIXEL cur, PIXEL ref) {
cmpStruct cd;
uint8 diffred, diffgreen, diffblue;
cur = cur & ComparisonMask;
ref = ref & ComparisonMask;
diffred = cur.red - ref.red;
diffgreen = cur.green - ref.green;
diffblue = cur.blue - ref.blue;
if (cur == ref) cd.equal = TRUE;
else cd.equal = FALSE;

if ((diffred > MAX_DELTA) || (diffred < MIN_DELTA) ||
(diffgreen > MAX_DELTA) || (diffgreen < MIN_DELTA) ||
(diffblue > MAX_DELTA) || (diffblue < MIN_DELTA)) {
cd.delta = FALSE;
cd.uniform = FALSE;
} else {
cd.delta = TRUE;

if ((diffred == diffgreen) &&
(diffgreen == diffblue)) cd.uniform = TRUE;
else cd.uniform = FALSE;
}

cd.value = (diffred & 0xf) |
((diffgreen & 0xf) << 4) |
((diffblue & 0xf) << 8);
return(cd);
}
boolean Cmp(PIXEL cur, PIXEL ref) {
cur = cur & ComparisonMask;
ref = ref & ComparisonMask;
if (cur == ref) return(TRUE);
else return(FALSE);
}
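The per-channel delta test above can be sketched in compilable C. The 4-bit packed delta value and the MAX_DELTA/MIN_DELTA window mirror the pseudocode; the struct layout, 8-bit channels, and the concrete ±8 window are illustrative assumptions (the pseudocode derives its window from the configured comparison depth).

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative assumptions: 8-bit channels, 4-bit signed delta window. */
#define MAX_DELTA  7
#define MIN_DELTA -8

typedef struct { uint8_t red, green, blue; } pixel_t;

typedef struct {
    bool equal;     /* pixels identical                          */
    bool delta;     /* every channel difference fits in 4 signed bits */
    bool uniform;   /* all three channel differences are the same     */
    uint16_t value; /* three 4-bit deltas packed: blue | green | red  */
} cmp_t;

static cmp_t cmp_delta(pixel_t cur, pixel_t ref) {
    cmp_t cd = {0};
    int dr = (int)cur.red   - ref.red;
    int dg = (int)cur.green - ref.green;
    int db = (int)cur.blue  - ref.blue;

    cd.equal = (dr == 0 && dg == 0 && db == 0);
    cd.delta = dr <= MAX_DELTA && dr >= MIN_DELTA &&
               dg <= MAX_DELTA && dg >= MIN_DELTA &&
               db <= MAX_DELTA && db >= MIN_DELTA;
    cd.uniform = cd.delta && (dr == dg) && (dg == db);

    /* Pack the three deltas as 4-bit two's-complement nibbles,
     * exactly as CmpDelta builds cd.value. */
    cd.value = (uint16_t)((dr & 0xf) | ((dg & 0xf) << 4) | ((db & 0xf) << 8));
    return cd;
}
```

A uniform result means all channels moved by the same amount, which is what lets OutputDelta emit the short one-byte form.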

void FinishPendingPixels() {
if (ConsecutivePixels) {
WriteCommand(CMD_NEW_PIXEL, ConsecutivePixels);
ConsecutivePixels = 0;
}
}

void WriteCommand(uint8 command, uint24 count) {
if ((command != CMD_NEW_PIXEL) && ConsecutivePixels) FinishPendingPixels();

if ((command == CMD_SERIES) && SeriesDataCount) {
// flush any pending series data to the DataFIFO
while (DataFIFO FULL) ; // spin
Write SeriesData to DataFIFO;
SeriesData = 0;
SeriesDataCount = 0;
}

// Write command to Command FIFO for output processing
uint32 data = (command << 24) | count;
while (CmdFIFO FULL) ; // spin
Write data to CmdFIFO;
}

void ForwardMessage(uint8 header) {
uint8 data;
uint4 count = header.count;
while (CmdFIFO FULL) ; // spin
Write header to CmdFIFO;
while (count) {
Flush byte from InFIFO;
while (InFIFO empty) ; // spin
Read data from InFIFO;
while (CmdFIFO FULL) ; // spin
Write data to CmdFIFO;
count--; // advance past each forwarded byte
}
Flush byte from InFIFO;
}

// Output Process

// Minimum number of pixels that have to be processed for a frame
// before contemplating throttling back on the color depth.
#define MINIMUM_PIXEL_THROTTLE 1024

uint4 Mode;
void OnReset() {
Mode = InitialColorDepthMode;
BytesPerFrame = 0;
PixelsPerFrame = 0;
}
void Forever() {
uint32 message; // the CmdFIFO's next entry

while (CmdFIFO is empty) ; // spin
Read message from CmdFIFO;

if (message == INFO_HEADER) {
//
// For informational messages, process the second byte.
//
Flush message from CmdFIFO;
while (CmdFIFO is empty) ; // spin
Read message from CmdFIFO;
ProcessInfo(message);
return;
}
switch (message.command) {
case CMD_NC:
case CMD_CA:
case CMD_CL:
//
// Output an encoded compression command at up to 32-bits per clock.
// The entire command is included in this message.
//
OutputCommand(message.command, message.data);
break;

case CMD_DELTA*:
//
// Output a single delta command, contained entirely in this message.
//
OutputDelta(message.command, message.data);
break;

case CMD_NEW_PIXEL:
//
// Output a pixel per clock. The first output consists of the command
// byte and the first pixel. The subsequent data is pulled from the
// Pixel Buffer.
//
OutputPixels(message.data);
break;

case CMD_SERIES:
//
// Output up to 32-bits per clock consisting of up to 32-bits of
// incoming command followed by data from the Series Buffer.
//
OutputSeries(message.data);
break;

default:
break;
}

Flush message from CmdFIFO;

if ((PBRatio != 0) && (PixelsPerFrame > MINIMUM_PIXEL_THROTTLE) &&
((BytesPerFrame * PBRatio) > PixelsPerFrame) &&
(Feedback == 0)) {
//
// Need to try and throttle back the color depth
//
if (Mode == COLOR_DEPTH_MODE_24) Feedback = 1;
else if (Mode == COLOR_DEPTH_MODE_15) Feedback = 2;
}
}
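The throttle decision above can be isolated as a small C function. The comparison `(BytesPerFrame * PBRatio) > PixelsPerFrame` and the stepped feedback values are taken from the pseudocode; the mode constants, function name, and parameter types are illustrative stand-ins.

```c
#include <stdint.h>

/* Colour-depth throttle sketch: when a frame costs more than the target
 * bytes-per-pixel budget, request a shallower colour mode via a feedback
 * value (0 = no change, 1 = drop 24-bit to 15-bit, 2 = drop 15-bit to
 * 7-bit). Constants are illustrative stand-ins for the pseudocode's names. */
enum { MODE_24 = 0, MODE_15 = 1, MODE_7 = 2 };
#define MIN_PIXEL_THROTTLE 1024

static int throttle_feedback(int mode, uint32_t pixels_per_frame,
                             uint32_t bytes_per_frame, uint32_t pb_ratio) {
    if (pb_ratio == 0 || pixels_per_frame <= MIN_PIXEL_THROTTLE)
        return 0;                       /* throttling disabled or sample too small */
    if ((uint64_t)bytes_per_frame * pb_ratio <= pixels_per_frame)
        return 0;                       /* within the bytes-per-pixel budget */
    if (mode == MODE_24) return 1;      /* step 24-bit down to 15-bit */
    if (mode == MODE_15) return 2;      /* step 15-bit down to 7-bit */
    return 0;                           /* already at the shallowest depth */
}
```

The `MIN_PIXEL_THROTTLE` guard keeps a few expensive pixels at the start of a frame from triggering a premature depth reduction.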

void ProcessInfo(uint8 message) {
//
// First, output the INFO header and sub-header
//
while (OutFIFO FULL) ; // spin
Write INFO_HEADER to OutFIFO;
Write message to OutFIFO;
Flush byte from CmdFIFO;

// Then read and output each byte of info
int count = message & 0xf;
for (int index = 0; index < count; index++) {
Read data from CmdFIFO;
while (OutFIFO FULL) ; // spin
Write data to OutFIFO;
Flush byte from CmdFIFO;
}

// Now do any local processing required
switch ((message & 0xf0) >> 4) {
case FRAME_STATUS:
//
// This allows a place for the output process to see EOF
//
BytesPerFrame = 0; // reset this here
break;

case COLOR_DEPTH:
Mode = message & 0xf;
break;
}
}

void OutputCommand(uint8 header, uint24 count) {
uint32 data;
uint4 shift = 8;
PixelsPerFrame += count;

data = (header << 0x5) | (count & 0xf);
count = count >> 4;
if (count) data |= 0x10;
BytesPerFrame++;
while (count) {
data |= ((count & 0x7f) << shift);
count = count >> 7;
if (count) data |= (0x80 << shift);
shift += 8;
BytesPerFrame++;
}

//
// The following data should be written at up to 32-bits in a single clock.
//
while (OutFIFO FULL) ; // spin
Write data to the OutFIFO;
}
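The variable-length count format built by OutputCommand can be shown as a matching encoder/decoder pair in C: a 3-bit opcode and 4-bit count nibble share the first byte, bit 0x10 marks an extension, and further bytes carry 7 count bits each with 0x80 as a continuation flag. The buffer-based function signatures are illustrative assumptions; the bit layout follows the pseudocode.

```c
#include <stddef.h>
#include <stdint.h>

/* Encode opcode + repeat count: opcode in bits 7-5, extension flag in
 * bit 4, count low nibble in bits 3-0, then 7-bits-per-byte groups. */
static size_t encode_command(uint8_t opcode, uint32_t count, uint8_t out[5]) {
    size_t n = 0;
    uint8_t first = (uint8_t)((opcode << 5) | (count & 0xf));
    count >>= 4;
    if (count) first |= 0x10;          /* extension: more count bytes follow */
    out[n++] = first;
    while (count) {
        uint8_t b = count & 0x7f;
        count >>= 7;
        if (count) b |= 0x80;          /* continuation bit */
        out[n++] = b;
    }
    return n;
}

/* Inverse transform, recovering the opcode and full count. */
static uint32_t decode_command(const uint8_t *in, uint8_t *opcode) {
    *opcode = in[0] >> 5;
    uint32_t count = in[0] & 0xf;
    if (in[0] & 0x10) {
        int shift = 4;
        for (const uint8_t *p = in + 1; ; p++) {
            count |= (uint32_t)(*p & 0x7f) << shift;
            if (!(*p & 0x80)) break;
            shift += 7;
        }
    }
    return count;
}
```

Short runs (count below 16) therefore cost a single byte, which is what makes the copy-pixel commands cheap for typical screen content.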

void OutputPixels(uint4 count) {
uint24 data24;

if (DataFIFO is empty) {
//
// WHOA! Error!!! The DataFIFO should always have data by the time the
// CmdFIFO is written to with a command that expects data.
//
}
PixelsPerFrame += count;

Read data24 from DataFIFO;

if (Mode == COLOR_DEPTH_MODE_7) {
uint8 data8;
data8 = data24 & 0x7f;
data8 |= 0x80;
while (OutputFIFO FULL) ; // spin
Write data8 to OutputFIFO;
BytesPerFrame += 1;

} else {
uint4 index;
uint32 data32;

data32 = (data24 << 8) | (CMD_NEW_PIXEL << 4) | (count - 1);
while (OutputFIFO FULL) ; // spin
Write data32 to OutputFIFO;
BytesPerFrame += (Mode == COLOR_DEPTH_MODE_15) ? 3 : 4;
for (index = 1; index < count; index++) {
if (DataFIFO is empty) Error!!!
Read data from DataFIFO;
while (OutputFIFO FULL) ; // spin
Write data to OutputFIFO;
BytesPerFrame += ((Mode == COLOR_DEPTH_MODE_15) ? 2 : 3);
}
}
}

void OutputSeries(uint24 count) {
uint32 data;
int sent = 0;
int shift = 0;
uint4 command = CMD_SERIES;
uint8 bytesToSend = count >> 3;
PixelsPerFrame += count;

//
// use count to determine header size and construct header
//
if (count < 16) {
data = (command << 5) | count;
shift = 8;
bytesToSend += 1;
} else {
data = (command << 5) | HEADER_EXT | (count & 0xf);
data |= ((count & 0x7f0) << 4);
shift = 16;
bytesToSend += 2;
}

BytesPerFrame += bytesToSend;
//
// Series data stream is read from the DataFIFO in 24-bit chunks, but can
// be written to the OutFIFO in 32-bit chunks including the header
// initially.
//
while (sent < count) {
uint24 sdata;

Read sdata from DataFIFO;
data |= (sdata << shift);
shift += 24;
sent += 24; // each DataFIFO read carries 24 series bits (pixels)
if (shift >= 32) {
uint3 numbytes = (bytesToSend > 4) ? 4 : bytesToSend;
while (OutputFIFO FULL) ; // spin
Write "numbytes" of data to OutputFIFO;
bytesToSend -= numbytes;
shift = shift - 32;
data = sdata >> (24 - shift);
}
}
}
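The header sizing in OutputSeries can be sketched on its own: counts below 16 fit the short one-byte form, while larger counts set an extension bit and spill seven more count bits into a second byte. The bit positions follow the pseudocode; the `CMD_SERIES` opcode value and `HEADER_EXT` bit position are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

#define CMD_SERIES 0x7   /* illustrative opcode value  */
#define HEADER_EXT 0x10  /* assumed extension-bit mask */

/* Build the series-command header. Short form: opcode | 4-bit count.
 * Long form: opcode | extension | count low nibble, then the next
 * 7 count bits in a second byte (counts up to 2047 pixels). */
static size_t series_header(uint32_t count, uint8_t out[2]) {
    if (count < 16) {
        out[0] = (uint8_t)((CMD_SERIES << 5) | count);
        return 1;
    }
    out[0] = (uint8_t)((CMD_SERIES << 5) | HEADER_EXT | (count & 0xf));
    out[1] = (uint8_t)((count & 0x7f0) >> 4);
    return 2;
}
```

Since the payload itself costs only one bit per pixel, a long two-color run of 2047 pixels needs just 2 header bytes plus 256 data bytes.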

void OutputDelta(uint8 header, uint24 delta) {
while (OutputFIFO FULL) ; // spin
PixelsPerFrame++;

//
// Handle uniform and non-uniform deltas differently
//
if (header & 0x8) {
uint8 data;
data = ((header & 0x7) << 5) | (delta & 0xf);
Write data to OutputFIFO;
BytesPerFrame++;
} else {
uint16 data;
data = (header << 5) | 0x10 | (delta & 0xf) |
((delta << 4) & 0xff00);
Write data to OutputFIFO;
BytesPerFrame += 2;
}
}
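The two delta wire formats above can be shown as a small C packer: a uniform delta (header bit 0x8 set, all channels shifted equally) fits in one byte, while a non-uniform delta carries three 4-bit channel deltas across two bytes with bit 0x10 marking the longer form. The bit layout mirrors the pseudocode; the function name and byte-buffer interface are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Pack a delta command. Uniform: 3-bit header in bits 7-5, 4-bit delta
 * in bits 3-0. Non-uniform: same first byte plus extension bit 0x10,
 * with the remaining 8 delta bits in the second byte. */
static size_t pack_delta(uint8_t header, uint16_t delta, uint8_t out[2]) {
    if (header & 0x8) {                       /* uniform: single byte */
        out[0] = (uint8_t)(((header & 0x7) << 5) | (delta & 0xf));
        return 1;
    }
    uint16_t d = (uint16_t)((header << 5) | 0x10 | (delta & 0xf) |
                            ((delta << 4) & 0xff00));
    out[0] = (uint8_t)(d & 0xff);             /* header + low nibble */
    out[1] = (uint8_t)(d >> 8);               /* remaining delta bits */
    return 2;
}
```

This is why the uniform case matters: small brightness changes, which move all three channels together, compress to a single byte per pixel.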

Claims (19)

1. A video compression routine including:
examining pixels in selected past and present video frames;
for a given said present frame and for a current pixel thereof, making fixed bit length packets that define at least the current pixel, said packets including:
(1) pixel-copy packets having at least one packet-type identifier bit and at least one repeat count identification bit;
(2) color defining packets having at least one packet-type identifier bit and at least one color identifier bit;
(3) delta defining packets having at least one packet-type identifier bit and at least one delta identifier bit;
(4) two-color series encoded packets having at least one packet-type identifier bit and a series of binary color identifier bits corresponding to only two colors and coinciding with a color of said current pixel and a number of pixels immediately following said current pixel.
2. A video transmission system, comprising:
a video encoding routine to encode serial pixel data according to an algorithm including, for a given set of consecutive pixels, choosing the one of the following encodings that yields the higher compression ratio:
(1) copy-pixel encoding that makes data packets defining the number of consecutive pixels that can be represented by copying the color of a respective pixel with a frame location relationship;
(2) individually colored pixel encoding that makes data packets each defining the color of each of the pixels in said given set of consecutive pixels; and (3) delta value pixel encoding that makes data packets each defining the difference between the color of a current pixel and a respective pixel with a frame location relationship;
(4) two-color series pixel encoding that makes a data packet including bits each indicating which color, from a two-color set, applies to each of the pixels in said given set of consecutive pixels, wherein pixels in said given set of consecutive pixels are comprised of colors from a two-color set.
3. A video transmission system according to claim 2, wherein:
the copy-pixel and the delta value pixel encoding include encoding based on a frame location relationship between the present pixel in the present frame and another pixel in the present frame.
4. A video transmission system according to claim 2, wherein:
the copy-pixel and the delta value encoding includes encoding based on a selection of:
a relationship between the present pixel in the present frame and another pixel to the left of the present pixel in the present frame;
a relationship between the present pixel in the present frame and another pixel above the present pixel in the present frame; and a relationship between the present pixel in the present frame and another pixel at the same location but in a previous frame.
5. A video transmission system according to claim 2, wherein:
the two-color series pixel encoding includes encoding wherein a sequential series of x pixels, beginning with the current pixel, are comprised only of colors from a two-color set.
6. A method of encoding video, comprising the steps of:
predefining a set of pixel-copy commands based on frame location relationships between the present pixel and other pixels;
for the present pixel, encoding according to a hierarchy selection from one of:
(1) copy-pixel encoding that makes data packets defining the number of consecutive pixels that can be represented by copying the color of a respective pixel with a frame location relationship;

(2) individually colored pixel encoding that makes data packets each defining the color of each of the pixels in said given set of consecutive pixels;
(3) delta value pixel encoding that makes data packets each defining the difference between the color of a current pixel and a respective pixel with a frame location relationship; and (4) two-color series pixel encoding that makes a data packet including bits each indicating which color, from a two-color set, applies to each of the pixels in said given set of consecutive pixels, wherein pixels in said given set of consecutive pixels are comprised of colors from a two-color set.
7. A method according to claim 6, wherein the encoding includes making fixed-size data packets including an opcode portion that identifies said copy-pixel encoding, delta value pixel encoding, two-color series pixel encoding, and individually colored pixel encoding and a payload portion.
8. A method according to claim 7, wherein for at least some of the fixed-size data packets:
said opcode includes one bit identifying whether the corresponding data packet is associated with said individually colored pixel encoding;
said opcode of the data packet associated with said copy-pixel encoding and said two-color series pixel encoding includes two additional bits identifying the hierarchy selections associated with three different pixel copy commands and the two-color series pixels command; and said payload portion has a length of at least n-bits.
9. A method according to claim 8, wherein others of the fixed-size data packets include an extension bit linking payload of the data packet with the payload of a previous data packet and including a payload of greater than n-bits.
10. A method according to claim 9, wherein others of the fixed-size packets include an extension bit linking the payload of the current data packet with the payload of the next data packet, which next data packet then includes a payload of greater than n-bits.
11. A method of compressing a video frame comprising pixels defined by n-bit color values, comprising the steps of:
at an encoder:
for a plurality of directional relationship types of a current pixel relative to a reference pixel, determining a (n-x)-bit delta value by determining a difference between an n-bit reference pixel color value and an n-bit current color value, where x is a predetermined number of significant bits of n;
if the x significant bits of the n-bit reference pixel color and the n-bit current color value are substantially equal, sending the delta value to a decoder;
at a decoder:
generating the n-bit current color value by adjusting the n-bit reference pixel color value by the delta value.
12. The method of claim 11, wherein said plurality of directional relationship types specify the following relationships of the reference pixel to the current pixel:
a location to the left of the current pixel;
a location above the current pixel;
a location in the same location as the current pixel but in a previous frame.
13. A method of compressing a video frame comprising pixels defined by n-bit color values partitioned into three y1-bit, y2-bit, and y3-bit channels, comprising the steps of:
for the y1-bit channel:
for a plurality of directional relationship types of a current pixel relative to a reference pixel, generating a (y1-x1)-bit delta value by determining a difference between a y1-bit reference pixel color value and a respective y1-bit current color value, where x1 is a predetermined number of significant bits of y1;
for the y2-bit channel:

for a plurality of directional relationship types of a current pixel relative to a reference pixel, generating a (y2-x2)-bit delta value by determining a difference between a y2-bit reference pixel color value from a respective y2-bit current color value, where x2 is a predetermined number of significant bits of y2;
for the y3-bit channel:
for a plurality of directional relationship types of a current pixel relative to a reference pixel, generating a (y3-x3)-bit delta value by determining a difference between a y3-bit reference pixel color value from a respective y3-bit current color value, where x3 is a predetermined number of significant bits of y3;
if the x1, x2, and x3 significant bits of respective y1-bit, y2-bit, and y3-bit channels are equal for the n-bit reference pixel color and the n-bit current color value, determining the n-bit current color value by adjusting each channel of the n-bit reference pixel color value by the respective delta values.
14. A video transmission system comprising;
an encoder that for a plurality of directional relationship types between a current and reference pixels, determines a (n-x)-bit delta value by determining a difference between an n-bit reference pixel color value from an n-bit current color value, where x is a predetermined number of significant bits of n and generates a delta value if the x significant bits of the n-bit reference pixel color and the n-bit current color value are equal; and a decoder that is adapted to receive said delta value from said encoder and generate the n-bit current color value by adjusting the n-bit reference pixel color value by the delta value.
15. The video transmission system of claim 14, wherein said plurality of directional relationship types specify the following relationships between the reference pixel and the current pixel:
a location to the left of the current pixel;
a location above the current pixel;
a location in the same location as the current pixel but in a previous frame.
16. A video transmission system, where a video frame comprises pixels defined by n-bit color values partitioned into three y1-bit, y2-bit, and y3-bit channels, comprising;
an encoder that for the y1-bit channel:
for a plurality of directional relationship types of a current pixel relative to a reference pixel, generating a (y1-x1)-bit delta value by determining a difference between a y1-bit reference pixel color value and a respective y1-bit current color value, where x1 is a predetermined number of significant bits of y1;
for the y2-bit channel:
for a plurality of directional relationship types of a current pixel relative to a reference pixel, generating a (y2-x2)-bit delta value by determining a difference between a y2-bit reference pixel color value and a respective y2-bit current color value, where x2 is a predetermined number of significant bits of y2;
for the y3-bit channel:
for a plurality of directional relationship types of a current pixel relative to a reference pixel, generating a (y3-x3)-bit delta value by determining a difference between a y3-bit reference pixel color value from a respective y3-bit current color value, where x3 is a predetermined number of significant bits of y3;
and if the x1, x2, and x3 significant bits of respective y1-bit, y2-bit, and y3-bit channels are equal for the n-bit reference pixel color and the n-bit current color value, generating a delta value for each channel; and a decoder that is adapted to determine the n-bit current color value by adjusting each channel of the n-bit reference pixel color value by the respective delta values.
17. The video transmission system of claim 16, wherein said plurality of directional relationship types specify the following relationships between the reference pixel and the current pixel:
a location to the left of the current pixel;
a location above the current pixel;
a location in the same location as the current pixel but in the previous frame.
18. A method of dynamically adjusting the performance of a video compression system that compresses a video frame comprising pixels defined by n-bit color values by comparing the x most significant bits of a current pixel to the respective x most significant bits of a reference pixel, comprising the steps of:
specifying a bytes per pixel ratio (BP);
for a group of compressed pixels in a video frame, determining an actual number of bytes required to compress the group of pixels;
determining a threshold number of bytes by multiplying BP and the number of pixels in the group;
comparing the actual number of bytes to the threshold number of bytes; and if the actual number of bytes is greater than the threshold number of bytes, reducing x.
19. The method of claim 18, wherein BP is defined by the following equation:
BP = (bytes/second)/(frame width * frame height * frames per second);
and wherein the step of setting a bytes per pixel ratio (BP) further comprises setting the (bytes/second) value.
CA002650663A 2006-04-28 2007-04-30 Dvc delta commands Abandoned CA2650663A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US79557706P 2006-04-28 2006-04-28
US60/795,577 2006-04-28
PCT/US2007/010376 WO2007127452A2 (en) 2006-04-28 2007-04-30 Dvc delta commands
US11/790,994 US7782961B2 (en) 2006-04-28 2007-04-30 DVC delta commands
US11/790,994 2007-04-30

Publications (1)

Publication Number Publication Date
CA2650663A1 true CA2650663A1 (en) 2007-11-08

Family

ID=38656255

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002650663A Abandoned CA2650663A1 (en) 2006-04-28 2007-04-30 Dvc delta commands

Country Status (7)

Country Link
US (2) US7782961B2 (en)
EP (1) EP2016767A4 (en)
CA (1) CA2650663A1 (en)
IL (1) IL194952A (en)
MY (1) MY149291A (en)
TW (1) TW200814780A (en)
WO (1) WO2007127452A2 (en)


US6240481B1 (en) * 1997-12-22 2001-05-29 Konica Corporation Data bus control for image forming apparatus
US6032261A (en) * 1997-12-30 2000-02-29 Philips Electronics North America Corp. Bus bridge with distribution of a common cycle clock to all bridge portals to provide synchronization of local buses, and method of operation thereof
JP2885235B1 (en) * 1998-01-14 1999-04-19 日本電気株式会社 Data compression method and machine readable recording medium recording compression program
US6012101A (en) * 1998-01-16 2000-01-04 Int Labs, Inc. Computer network having commonly located computing systems
US6829301B1 (en) * 1998-01-16 2004-12-07 Sarnoff Corporation Enhanced MPEG information distribution apparatus and method
US6038346A (en) * 1998-01-29 2000-03-14 Seiko Epson Corporation Runs of adaptive pixel patterns (RAPP) for lossless image compression
US6360017B1 (en) * 1998-03-05 2002-03-19 Lucent Technologies Inc. Perceptual-based spatio-temporal segmentation for motion estimation
US6097368A (en) * 1998-03-31 2000-08-01 Matsushita Electric Industrial Company, Ltd. Motion pixel distortion reduction for a digital display device using pulse number equalization
GB9806767D0 (en) * 1998-03-31 1998-05-27 Philips Electronics Nv Pixel colour value encoding and decoding
JPH11308465A (en) 1998-04-17 1999-11-05 Seiko Epson Corp Encoding method for color image, encoder therefor, decoding method for color image and decoder therefor
US6060890A (en) * 1998-04-17 2000-05-09 Advanced Micro Devices, Inc. Apparatus and method for measuring the length of a transmission cable
JPH11313213A (en) 1998-04-27 1999-11-09 Canon Inc Information processing system, information processing method and medium
US6373890B1 (en) * 1998-05-05 2002-04-16 Novalogic, Inc. Video compression and playback process
US6571393B1 (en) * 1998-05-27 2003-05-27 The Hong Kong University Of Science And Technology Data transmission system
US6202116B1 (en) * 1998-06-17 2001-03-13 Advanced Micro Devices, Inc. Write only bus with whole and half bus mode operation
US6124811A (en) * 1998-07-02 2000-09-26 Intel Corporation Real time algorithms and architectures for coding images compressed by DWT-based techniques
US6567464B2 (en) * 1998-07-24 2003-05-20 Compaq Information Technologies Group, L.P. Fast retrain based on communication profiles for a digital modem
US6070214A (en) * 1998-08-06 2000-05-30 Mobility Electronics, Inc. Serially linked bus bridge for expanding access over a first bus to a second bus
US6327307B1 (en) * 1998-08-07 2001-12-04 Motorola, Inc. Device, article of manufacture, method, memory, and computer-readable memory for removing video coding errors
US6065073A (en) * 1998-08-17 2000-05-16 Jato Technologies, Inc. Auto-polling unit for interrupt generation in a network interface device
AU5688199A (en) * 1998-08-20 2000-03-14 Raycer, Inc. System, apparatus and method for spatially sorting image data in a three-dimensional graphics pipeline
US6146158A (en) 1998-09-14 2000-11-14 Tagnology, Inc. Self-adjusting shelf mounted interconnect for a digital display
JP2000125111A (en) 1998-10-20 2000-04-28 Fujitsu Ltd Picture compression method, picture restoration method, picture compression device, picture reader, picture compression program storage medium and picture restoration program storage medium
US6418494B1 (en) 1998-10-30 2002-07-09 Cybex Computer Products Corporation Split computer architecture to separate user and processor while retaining original user interface
US6233226B1 (en) * 1998-12-14 2001-05-15 Verizon Laboratories Inc. System and method for analyzing and transmitting video over a switched network
US6754241B1 (en) * 1999-01-06 2004-06-22 Sarnoff Corporation Computer system for statistical multiplexing of bitstreams
GB2350039B (en) 1999-03-17 2004-06-23 Adder Tech Ltd Computer signal transmission system
US6470050B1 (en) * 1999-04-09 2002-10-22 Matsushita Electric Industrial Co., Ltd. Image coding apparatus and its motion vector detection method
US7085319B2 (en) * 1999-04-17 2006-08-01 Pts Corporation Segment-based encoding system using segment hierarchies
US6516371B1 (en) * 1999-05-27 2003-02-04 Advanced Micro Devices, Inc. Network interface device for accessing data stored in buffer memory locations defined by programmable read pointer information
US6590930B1 (en) 1999-07-22 2003-07-08 Mysticom Ltd. Local area network diagnosis
JP2001053620A (en) 1999-08-13 2001-02-23 Canon Inc Method and device for encoding, method and device for decoding, and storage medium
US7046842B2 (en) * 1999-08-17 2006-05-16 National Instruments Corporation System and method for color characterization using fuzzy pixel classification with application in color matching and color match location
US6377313B1 (en) * 1999-09-02 2002-04-23 Techwell, Inc. Sharpness enhancement circuit for video signals
US6833875B1 (en) 1999-09-02 2004-12-21 Techwell, Inc. Multi-standard video decoder
JP4350877B2 (en) 1999-10-01 2009-10-21 パナソニック株式会社 Compressed video scene change detection device, compressed video scene change detection method, and recording medium recording the program
US7143432B1 (en) * 1999-10-01 2006-11-28 Vidiator Enterprises Inc. System for transforming streaming video data
US7031385B1 (en) * 1999-10-01 2006-04-18 Matsushita Electric Industrial Co., Ltd. Method and apparatus for detecting scene change of a compressed moving-picture, and program recording medium therefor
US6370191B1 (en) * 1999-11-01 2002-04-09 Texas Instruments Incorporated Efficient implementation of error approximation in blind equalization of data communications
US6664969B1 (en) 1999-11-12 2003-12-16 Hewlett-Packard Development Company, L.P. Operating system independent method and apparatus for graphical remote access
JP2001148849A (en) 1999-11-19 2001-05-29 Aiphone Co Ltd Video control system for multiple dwelling house
JP2001251632A (en) * 1999-12-27 2001-09-14 Toshiba Corp Motion vector detection method and system, and motion vector detection program
US6871008B1 (en) * 2000-01-03 2005-03-22 Genesis Microchip Inc. Subpicture decoding architecture and method
US6522365B1 (en) * 2000-01-27 2003-02-18 Oak Technology, Inc. Method and system for pixel clock recovery
US7158262B2 (en) * 2000-02-17 2007-01-02 Hewlett-Packard Development Company, L.P. Multi-level error diffusion apparatus and method of using same
US7013255B1 (en) * 2000-06-09 2006-03-14 Avaya Technology Corp. Traffic simulation algorithm for asynchronous transfer mode networks
JP2002043950A (en) 2000-07-21 2002-02-08 Canon Inc Coding method and device, and decoding method and device
US7689510B2 (en) 2000-09-07 2010-03-30 Sonic Solutions Methods and system for use in network management of content
US7058826B2 (en) * 2000-09-27 2006-06-06 Amphus, Inc. System, architecture, and method for logical server and other network devices in a dynamically configurable multi-server network environment
JP2002165105A (en) 2000-11-27 2002-06-07 Canon Inc Image processing device, its method, and recording medium
US7093008B2 (en) * 2000-11-30 2006-08-15 Intel Corporation Communication techniques for simple network management protocol
JP3580251B2 (en) * 2000-12-27 2004-10-20 日本電気株式会社 Data compression apparatus, compression method, and recording medium recording control program therefor
US6888893B2 (en) * 2001-01-05 2005-05-03 Microsoft Corporation System and process for broadcast and communication with very low bit-rate bi-level or sketch video
US20050249207A1 (en) 2001-01-29 2005-11-10 Richard Zodnik Repeater for locating electronic devices
US7145676B2 (en) 2001-01-31 2006-12-05 Hewlett-Packard Development Company, L.P. Compound document image compression using multi-region two layer format
WO2002071736A2 (en) * 2001-03-05 2002-09-12 Intervideo, Inc. Systems and methods of error resilience in a video decoder
AU2002305392A1 (en) * 2001-05-02 2002-11-11 Bitstream, Inc. Methods, systems, and programming for producing and displaying subpixel-optimized images and digital content including such images
TWI220036B (en) 2001-05-10 2004-08-01 Ibm System and method for enhancing broadcast or recorded radio or television programs with information on the world wide web
US6901455B2 (en) * 2001-06-29 2005-05-31 Intel Corporation Peripheral sharing device with unified clipboard memory
US6760235B2 (en) * 2001-09-13 2004-07-06 Netpower Technologies, Inc. Soft start for a synchronous rectifier in a power converter
JP3970007B2 (en) 2001-12-07 2007-09-05 キヤノン株式会社 Image processing apparatus, image processing method, program, and storage medium
JP2003244448A (en) 2002-02-15 2003-08-29 Canon Inc Encoding method and decoding method
JP4109875B2 (en) 2002-02-22 2008-07-02 キヤノン株式会社 Image encoding apparatus, image encoding method, program, and storage medium
US7221389B2 (en) * 2002-02-15 2007-05-22 Avocent Corporation Automatic equalization of video signals
GB2388504B (en) 2002-02-26 2006-01-04 Adder Tech Ltd Video signal skew
US6898313B2 (en) 2002-03-06 2005-05-24 Sharp Laboratories Of America, Inc. Scalable layered coding in a multi-layer, compound-image data transmission system
US7532808B2 (en) * 2002-03-15 2009-05-12 Nokia Corporation Method for coding motion in a video sequence
US7373008B2 (en) 2002-03-28 2008-05-13 Hewlett-Packard Development Company, L.P. Grayscale and binary image data compression
US7550870B2 (en) * 2002-05-06 2009-06-23 Cyber Switching, Inc. Method and apparatus for remote power management and monitoring
US6986107B2 (en) 2002-06-18 2006-01-10 Microsoft Corporation Dynamic generation of visual style variants for a graphical user interface
KR100472457B1 (en) 2002-06-21 2005-03-10 삼성전자주식회사 Method for encoding image differentially and apparatus thereof
US20060126718A1 (en) * 2002-10-01 2006-06-15 Avocent Corporation Video compression encoder
US7321623B2 (en) * 2002-10-01 2008-01-22 Avocent Corporation Video compression system
TW589871B (en) 2002-11-19 2004-06-01 Realtek Semiconductor Corp Method for eliminating boundary image zippers
TW569435B (en) * 2002-12-17 2004-01-01 Nanya Technology Corp A stacked gate flash memory and the method of fabricating the same
US7428587B2 (en) * 2002-12-19 2008-09-23 Microsoft Corporation Generating globally unique device identification
JP3764143B2 (en) 2003-01-10 2006-04-05 エヌ・ティ・ティ・コムウェア株式会社 Monitoring system, monitoring method and program thereof
FI114071B (en) * 2003-01-13 2004-07-30 Nokia Corp Processing images with a limited number of pieces
WO2004075556A1 (en) * 2003-02-19 2004-09-02 Ishikawajima-Harima Heavy Industries Co., Ltd. Image compression device, image compression method, image compression program, compression/encoding method, compression/encoding device, compression/encoding program, decoding method, decoding device, and decoding program
JP3791505B2 (en) * 2003-03-14 2006-06-28 コニカミノルタビジネステクノロジーズ株式会社 Image processing device
US7367514B2 (en) * 2003-07-03 2008-05-06 Hand Held Products, Inc. Reprogramming system including reprogramming symbol
US9560371B2 (en) * 2003-07-30 2017-01-31 Avocent Corporation Video compression system
US7606313B2 (en) 2004-01-15 2009-10-20 Ittiam Systems (P) Ltd. System, method, and apparatus for error concealment in coded video signals
US20050198245A1 (en) 2004-03-06 2005-09-08 John Burgess Intelligent modular remote server management system
US7613854B2 (en) 2004-04-15 2009-11-03 Aten International Co., Ltd Keyboard video mouse (KVM) switch wherein peripherals having source communication protocol are routed via KVM switch and converted to destination communication protocol
US7457461B2 (en) * 2004-06-25 2008-11-25 Avocent Corporation Video compression noise immunity
US7006700B2 (en) * 2004-06-25 2006-02-28 Avocent Corporation Digital video compression command priority
CA2574776A1 (en) * 2004-07-23 2006-02-02 Citrix Systems, Inc. Systems and methods for optimizing communications between network nodes
US7466713B2 (en) 2004-10-29 2008-12-16 Avocent Fremont Corp. Service processor gateway system and appliance
US7683896B2 (en) 2004-12-20 2010-03-23 Avocent Huntsville Corporation Pixel skew compensation apparatus and method
US7168702B1 (en) * 2005-07-19 2007-01-30 Shoemaker Stephen P Amusement device of skill and lottery
US7539795B2 (en) 2006-01-30 2009-05-26 Nokia Corporation Methods and apparatus for implementing dynamic shortcuts both for rapidly accessing web content and application program windows and for establishing context-based user environments
EP2016767A4 (en) 2006-04-28 2014-08-13 Avocent Corp Dvc delta commands
EP1927949A1 (en) * 2006-12-01 2008-06-04 Thomson Licensing Array of processing elements with local registers

Also Published As

Publication number Publication date
US8660194B2 (en) 2014-02-25
US20070253492A1 (en) 2007-11-01
US20090290647A1 (en) 2009-11-26
MY149291A (en) 2013-08-30
TW200814780A (en) 2008-03-16
EP2016767A4 (en) 2014-08-13
IL194952A (en) 2012-06-28
EP2016767A2 (en) 2009-01-21
IL194952A0 (en) 2009-08-03
WO2007127452A2 (en) 2007-11-08
WO2007127452A3 (en) 2009-04-02
US7782961B2 (en) 2010-08-24

Similar Documents

Publication Publication Date Title
CA2650663A1 (en) Dvc delta commands
CN101142821B (en) New compression format and apparatus using the new compression format for temporarily storing image data in a frame memory
CN107660280B (en) Low latency screen mirroring
US8385429B2 (en) Video compression encoder
US8718147B2 (en) Video compression algorithm
US20070030911A1 (en) Method and apparatus for skipping pictures
CN105191304A (en) Image encoding method and apparatus for performing bit-plane scanning coding upon pixel data and related image decoding method and apparatus
US8908982B2 (en) Image encoding device and image encoding method
US8339406B2 (en) Variable-length coding data transfer interface
EP1952641B1 (en) Video compression encoder
JP4609568B2 (en) Data processing apparatus, data processing method, and data processing system
US20040105497A1 (en) Encoding device and method
US7773817B2 (en) JPEG image processing circuit
US6788227B2 (en) Apparatus for integrated cascade encoding
CN110012292B (en) Method and apparatus for compressing video data
CN114339263A (en) Lossless processing method for video data
CN111372085B (en) Image decoding device and method
KR102523959B1 (en) Image processing device and method for operating image processing device
TW202218421A (en) Content display process
CN115442347A (en) Automatic driving audio and video lossless transmission method and system
KR20040042938A (en) Method converting rgb color in a motion picture codec

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued

Effective date: 20140818
