CA2398346A1 - Data compression having improved compression speed - Google Patents

Data compression having improved compression speed

Info

Publication number
CA2398346A1
Authority
CA
Canada
Prior art keywords
dictionary
data
match
current
adaptation vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002398346A
Other languages
French (fr)
Inventor
Simon Richard Jones
Jose Luis Nunez Yanez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BTG International Ltd
Original Assignee
Btg International Limited
Simon Richard Jones
Jose Luis Nunez Yanez
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Btg International Limited, Simon Richard Jones and Jose Luis Nunez Yanez
Publication of CA2398346A1 publication Critical patent/CA2398346A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3084Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method
    • H03M7/3088Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method employing the use of a dictionary, e.g. LZ78
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Abstract

A lossless data compression system comprising a dictionary (30) based on content addressable memory and a coder (40) having between them a critical path including a feedback loop forming a dictionary adaptation path, in which circuit means (42) is connected in the feedback loop so that the dictionary can be updated using data from a previous comparison cycle at the same time as the coder codes a current comparison cycle. The circuit means (42) has a current adaptation vector (58, 62, 66) and a next adaptation vector (60, 64, 68); at each search step the current adaptation vector updates data in the dictionary and also rearranges the next adaptation vector. Compression speed is increased.

Description

DATA COMPRESSION HAVING IMPROVED COMPRESSION SPEED
This invention relates to a method and apparatus for the lossless compression of data.
While lossy data compression hardware has been available for image and signal processing for some years, lossless data compression has only recently become of interest, as a result of increased commercial pressure on bandwidth and cost per bit in data storage and data transmission; also, reduction in power consumption by reducing data volume is now of importance.
The principles of searching a dictionary and encoding data by reference to a dictionary address are well known, and the apparatus applying the principle consists of a dictionary and a coder/decoder.
In Proceedings of EUROMICRO-22, 1996, IEEE, "Design and Performance of a Main Memory Hardware Data Compressor", Kjelso, Gooch and Jones describe a novel compression method, termed the X-Match algorithm, which is efficient at compressing small blocks of data and suitable for high speed hardware implementation.
The X-Match algorithm maintains a dictionary of data previously seen, and attempts to match a current data element, referred to as a tuple, with an entry in the dictionary, replacing a matched tuple with a shorter code referencing the match location. The algorithm operates on partial matching, such as 2 bytes in a 4 byte data element. In Proceedings of EUROMICRO-25, 1999, IEEE, "The X-MatchLITE
FPGA-Based Data Compressor", Nunez, Feregrino, Bateman and Jones describe the X-Match algorithm implemented in a Field Programmable Gate Array (FPGA) prototype.

It is an object of the invention to provide a lossless data compression algorithm which can compress data at a faster rate than is possible with the published arrangement.
According to the invention a lossless data compression system comprising a dictionary based on content addressable memory and a coder having between them a feedback loop forming a dictionary adaptation path, characterised by register means connected in the feedback loop whereby the dictionary can be updated using data from a previous comparison cycle at the same time as the coder codes a current comparison cycle.
Also according to the invention, a lossless method of compressing data comprising the steps of: comparing a search tuple of fixed length with a plurality of tuples of said fixed length stored in a dictionary;
indicating the location in the dictionary of a full or partial match or matches;
selecting a best match of any plurality of matches; and encoding the match location and the match type;
characterised by the further steps of: providing the dictionary with a current adaptation vector and a next adaptation vector;
and after comparison of each search tuple (a) updating the contents of the dictionary in accordance with the current adaptation vector and (b) updating the next adaptation vector in accordance with the current adaptation vector.
In the drawings, figure 1 illustrates the architecture of a compressor arrangement published by Nunez et al.
The invention will be described by way of example only with reference to figures 2 to 5, in which: figure 2 illustrates the architecture of the compressor hardware; figure 3 illustrates the inventive adaptation of the dictionary; figure 4 shows the detailed arrangement of the compressor hardware; and figure 5 illustrates the decompressor hardware.
In the prior art as shown in figure 1, a dictionary 10 is based on Content Addressable Memory (CAM) and is searched by data 12 supplied by a search register 14. In the dictionary 10 each data element is exactly 4 bytes in width and is referred to as a tuple. With data elements of standard width, there is a guaranteed input data rate during compression and output data rate during decompression, regardless of data mix.
The dictionary stores previously seen data for a current compression; when the search register 14 supplies a new entry and a match is found in the dictionary, the data is replaced by a shorter code referencing the match location. CAM is a form of associative memory which takes in a data element and gives a match address of the element as its output. The use of CAM technology allows rapid searching of the dictionary 10, because the search is implemented simultaneously at every address at which data is stored, and therefore simultaneously for every stored word.
In the X-Match algorithm, perfect matching is not essential. A partial match, which may be a match of 2 or 3 of the 4 bytes, is also replaced by the code referencing the match location and a match type code, with the unmatched byte or bytes being transmitted literally, everything prefixed by a single bit. This use of partial matching improves the compression ratio when compared with the requirement of 4 byte matching, but still maintains high throughput of the dictionary.
The match type indicates which bytes of the incoming tuple were found in the dictionary and which bytes have to be concatenated in literal form to the compressed code. There are 11 different match types that correspond to the different combinations of 2, 3 or 4 bytes being matched. For example, 0000 indicates that all the bytes were matched (full match), while 1000 indicates a partial match where bytes 0, 1 and 2 were matched but byte 3 was not; in this example byte 3 must be added as an uncompressed literal to the code. Since some match types are more frequent than others, a static Huffman code based on the statistics obtained through extensive simulation is used to code them. For example, the most popular match type is 0000 (full match) and the corresponding Huffman code is 01. On the other hand, a partial match type 0010 (bytes 3, 2 and 0 match) is more infrequent, so the corresponding Huffman code is 10110. This technique improves compression.
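This match-type coding can be modelled in software as follows (a minimal Python sketch; only the two codewords quoted in this description are reproduced, and the remaining nine match types would carry codewords fixed by the simulation statistics, which are not given here):

# Match types are 4-bit masks in which bit i set means byte i was NOT matched.
MATCH_TYPE_HUFFMAN = {
    0b0000: '01',      # full match - the most frequent match type
    0b0010: '10110',   # bytes 3, 2 and 0 matched, byte 1 sent as a literal
    # ... nine further match types (2- and 3-byte matches) not reproduced here ...
}

def match_type_code(match_type):
    # Return the static Huffman codeword for a 4-bit match type, if quoted above.
    return MATCH_TYPE_HUFFMAN[match_type]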
If, for example, the search tuple is CAT, and the dictionary contains the word SAT at position 2, the partial match will be indicated in the format (match/miss) (location) (match type) (literals required) which in this example would be 022S, binary code 0 000010 0010 1010011, i.e. the capital C is not matched and is sent literally to the coding part of the system.
The algorithm, in pseudo code, is given as:
Set the dictionary to its initial state;
DO
{ read in tuple T from the data stream;
search the dictionary for tuple T;
IF (full or partial hit)
{ determine the best match location ML and the match type MT;
output '0';
output binary code for ML;
output Huffman code for MT;
output any required literal characters of T; }
ELSE
{ output '1';
output tuple T; }
IF (full hit)
{ move dictionary entries 0 to ML-1 down by one location; }
ELSE
{ move all dictionary entries down by one location; }
copy tuple T to dictionary location 0; }
WHILE (more data is to be compressed);
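The loop above can also be expressed as a short software model (a Python sketch of the algorithm as described, not of the hardware; the tuple-of-fields output format and the tie-break between equally good matches are assumptions made for readability, and only whole 4-byte tuples are processed):

DICT_SIZE = 64      # dictionary locations, as in the Figure 4 implementation
TUPLE_BYTES = 4     # each data element is exactly 4 bytes wide

def match_mask(entry, tup):
    # 4-bit mask with bit i set when byte i of the tuple does NOT match.
    return sum((entry[i] != tup[i]) << i for i in range(TUPLE_BYTES))

def xmatch_compress(data, dictionary=None):
    dictionary = list(dictionary or [])            # location 0 is the front
    codes = []
    for pos in range(0, len(data) - len(data) % TUPLE_BYTES, TUPLE_BYTES):
        t = bytes(data[pos:pos + TUPLE_BYTES])
        hits = []
        for loc, entry in enumerate(dictionary):
            mt = match_mask(entry, t)
            if bin(mt).count('1') <= 2:            # at least 2 of the 4 bytes match
                hits.append((loc, mt))
        if hits:
            # Best match: fewest unmatched bytes, then nearest the front (assumed).
            ml, mt = min(hits, key=lambda h: (bin(h[1]).count('1'), h[0]))
            literals = bytes(t[i] for i in range(TUPLE_BYTES) if (mt >> i) & 1)
            codes.append(('0', ml, mt, literals))  # hit: location, match type, literals
        else:
            codes.append(('1', None, None, t))     # miss: the whole tuple is literal
        # Dictionary adaptation: move-to-front with a least-recently-used discard.
        if hits and mt == 0:
            del dictionary[ml]                     # full hit: entries 0..ML-1 move down
        elif len(dictionary) == DICT_SIZE:
            dictionary.pop()                       # dictionary full: last entry discarded
        dictionary.insert(0, t)                    # copy tuple T to location 0
    return codes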
The dictionary 10 is arranged on a Move-To-Front strategy, i.e. a current tuple is placed at the front of the dictionary and other tuples moved down by one location to make space. If the dictionary becomes full, a Least Recently Used (LRU) policy applies, i.e., the tuple occupying the last location is simply discarded.
The dictionary is preloaded with common data.
The coding function for a match is required to code three separate fields, i.e.
(a) the match location in the dictionary 10; a uniform binary code is used, where the codes are of the fixed length log2(DICTIONARY SIZE); a short example is given after item (c) below.
(b) a match type; i.e. which bytes of an incoming tuple match in a dictionary location; a static Huffman code is used.
(c) any extra characters which did not match the dictionary entry, transmitted in literal form.
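For the 64-location dictionary of Figure 4, for example, field (a) is simply the match address written in a fixed width of log2(64) = 6 bits, so that location 2 is coded as 000010 as in the earlier example. A minimal Python sketch:

def location_code(match_location, dict_size=64):
    # Fixed-width binary code for a match location; width = log2(DICTIONARY SIZE).
    width = dict_size.bit_length() - 1             # 6 bits for a 64-entry dictionary
    return format(match_location, '0{}b'.format(width))

# location_code(2) returns '000010'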
Referring again to Figure 1, the match, partial match or several partial matches are output by the dictionary 10 to a match decision logic circuit 16, which supplies encoding equipment 18 providing a compressed data output signal 20.
Shift control logic 22 connected between the match decision logic 16 and the dictionary 10 provides shift signals to the dictionary. The whole circuit can be provided on a single semiconductor chip.
The critical path incorporating a feedback loop which forms the dictionary adaptation path includes the search register 14, the match decision logic 16, the shift control logic 22 and the CAM array 10.
Referring now to a compressor according to the invention as illustrated in figure 2, a dictionary 30 is based on CAM technology and is supplied with data to be searched 32 by a search register 34. The dictionary searches in accordance with the X-Match algorithm, and is organised on a Move To Front strategy and Least Recently Used policy.
The dictionary output is connected to a priority logic 36 which is connected through a match decision logic 37 to an encoding circuit 40, which provides an output stream of compressed data 41.
The match decision logic circuit 37 also provides signals to a circuit 42 which will be referred to as an Out-of-Date Adaptation (ODA) register; the ODA
circuit 42 supplies a shift control logic circuit 44 which supplies "move" signals to the dictionary 30.
The arrangement is such that the dictionary 30 is updated on an out-of-date basis; a next adaptation vector t to be applied to the dictionary is transformed into a current adaptation vector t+1 and at the same time the dictionary is updated; the transformation and updating are performed by the current adaptation vector after each search step.
Figure 3 illustrates the ODA adaptation applied to the dictionary data and to the adaptation vectors. Eight steps are shown; for each step the top/front four dictionary addresses 0, 1, 2, 3, references 50, 52, 54, 56, are shown, with a current adaptation vector 58 shown on the left of the addresses and a next adaptation vector 60 shown on the right. In the adaptation vectors 58, 60, a bit set to 1 means "load data from previous position" and a bit set to 0 means "keep current data".
In each of the eight steps, a search tuple is loaded into address 0, reference 50, and the previously stored data in that address is deleted; this is indicated by the current adaptation vector on the left hand side of location 0 being set to 1 in all eight steps.
The arrows pointing downwards within the dictionary, such as the arrows A, indicate rearrangement of the dictionary at the end of each step under the control of the current adaptation vector of that step.
In step 1, the top dictionary address 50 at position 0 contains "the ", the second position at address 52 contains "at i", the third address 54, position 2, contains "hung" and the fourth address 56, position 3, contains "ry-". It will be seen that each location content is exactly 4 bytes long.
The search tuples for each step are shown above the data; in step 1 the search tuple is "at i". A full match is found at position 1, shown shaded, and this information is output as a code indicating position 1. In the next adaptation vector 60, the bits at positions 0 and 1, addresses 50, 52, are set to 1, indicating that a match was found at position 1. The current adaptation vector 58 rearranges the dictionary, i.e.
rearranges the data in positions 0, 1, 2 and 3 in accordance with the vector values; at position 0, the bit value is 1 indicating "load data" and the search tuple is loaded at position 0 as stated above. The other three bits are set to 0, so no change is made to the data in positions 1, 2 or 3, as can be seen in step 2. The current adaptation vector 58 also rearranges the next adaptation vector 60 in accordance with its bit values; the next adaptation vector 60 becomes the current adaptation vector 62 in step 2. The bits of the current adaptation vector 58 at positions 1, 2 and 3 are all set to zero, meaning "keep current data", so the next adaptation vector 60 is unchanged as it is transferred to become the current adaptation vector 62 in step 2.
In step 2, the search tuple is "at i"; a full match is detected at positions 0 and 1 in addresses 50 and 52 and both are shown shaded. The algorithm is arranged to select the address of a match (or partial match) closer to the top/front of the dictionary, so the match at position 0 is taken as the valid match and is output; the next adaptation vector 64 is set to 1 at position 0, and to 0 at all positions below the position at which the match was found. The current adaptation vector 62 rearranges the dictionary 30, loads the search tuple in position 0, and updates the next adaptation vector 64.
In step 3, the search tuple is "ry_", and a full match is found at position 3, shown shaded, the output signal indicating this position. The current adaptation vector 66 updates the dictionary and transforms the next adaptation vector 68, so that the bits in all vector positions above that at which a match is found are set to 1.
In step 4 it can be seen that the duplicate entry "at i" has been eliminated.
The search tuple is "hung" and a full match is found at position 2. The dictionary is rearranged by the current adaptation vector; but the bit value at all positions is l, so all dictionary entries move down one place. The next adaptation vector is also updated.
In step 5 the search tuple is again "hung" and full matches are found at positions 0 and 3; the match at position 0 is selected by the algorithm and the adaptation vector is set to 1 at that position with the bits at all other positions being set to 0. The duplicate entry of "hung" is eliminated.
In step 6 the search tuple is "over" and there are no matches; the miss sets all bits in the next adaptation vector to 1. There are two addresses containing the entry "hung", but only two; in the arrangement according to the invention there can never be three or more addresses with the same entry, which prevents dictionary efficiency degradation.
In step 7, the duplicate entry of "hung" has been eliminated. The search tuple is again "over" and a match is found at position 0; this new word has been added to the dictionary. The current adaptation vector reorganises the dictionary and updates the next adaptation vector.
In step 8, the search tuple is " ung" and a partial match is found in position 2.
As explained with reference to the prior art, partial matches are valid in arrangements using the previously known version of the X-Match algorithm. While the position of the partial match is output for encoding, as far as the adaptation vectors are concerned, a partial match is treated as a miss.
It will be clear that data within the dictionary is not duplicated in storage: all dictionary elements are unique at all times except the element at the top of the dictionary, which can be duplicated. Dictionary data duplication is therefore restricted to location 0. This is because the adaptation at time t is performed with an adaptation vector generated at cycle t-2 and modified according to the invention.
It will be apparent that the dictionary 30 has, in effect, lost one address because data duplication can take place between position 0 in the dictionary and any other position greater than 0 and less than the dictionary size. However, this arrangement allows duplicate entries to be eliminated quickly and efficiently.
The provision of an ODA circuit 42 in effect breaks a speed-limiting feedback loop in the system, removing it from the list of critical paths in the chip.
Thus the speed of compression can be improved with very little deleterious effect on the compression efficiency.
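This out-of-date adaptation mechanism of Figure 3 can be sketched behaviourally in software (a Python sketch under the bit convention given above, in which 1 means "load data from the previous position"; the handling of the vector's front bit during rearrangement and the collapsing of the hardware pipeline into a single step per call are simplifying assumptions):

def next_vector(size, match_location):
    # Next adaptation vector from a search result: bits 0 to the match location
    # are set after a full match; a miss (or a partial match, which is treated
    # as a miss) sets every bit so that all entries will move down one place.
    if match_location is None:
        return [1] * size
    return [1 if i <= match_location else 0 for i in range(size)]

def shift(vector, cells, front_value):
    # Apply an adaptation vector: where a bit is 1 the cell loads the value from
    # the position above it; where it is 0 the cell keeps its data.  Position 0
    # always receives front_value.
    updated = list(cells)
    for i in range(len(cells) - 1, 0, -1):
        if vector[i]:
            updated[i] = cells[i - 1]
    updated[0] = front_value
    return updated

def oda_step(dictionary, current_vec, pending_vec, previous_tuple, match_location):
    # One search step: (a) the dictionary is updated with the current adaptation
    # vector, loading the tuple searched on the previous cycle into the front
    # position, while the coder handles the current search result; (b) the same
    # current vector rearranges the pending next vector, which then becomes the
    # current vector of the following step (front bit assumed to be 1), and a
    # fresh next vector is generated from the current search result.
    dictionary = shift(current_vec, dictionary, previous_tuple)
    new_current = shift(current_vec, pending_vec, 1)
    new_pending = next_vector(len(dictionary), match_location)
    return dictionary, new_current, new_pending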
Figure 4 shows the full circuit of a compressor according to the invention based on the Figure 2 architecture. As is conventional, the number of bits on a connection is indicated adjacent to a bar crossing that connection.
The dictionary 30 is a 64 element CAM-based array, supplied with input data through a 32 bit wide search register 34. Data for search are provided directly to the dictionary 30, while a multiplexer 80 is arranged to select the search register during compression and has an additional function during decompression (see Figure 5).
The output of the dictionary 30 i.e. an indication of the dictionary address at which a match has been found, or the address of a partial match plus the unmatched bit, passes to a priority logic circuit 82, which transforms the 4 bit wide match to a 5 bit wide priority type for each location in the dictionary and supplies the priority type to the match decision logic circuit 37; circuit 37 also receives the output of the dictionary 30 directly. The circuit 37 uses the priority types to select the best match location for the compression process.
The ODA circuit 42 receives a signal from the priority logic circuit 36 through multiplexer 84; the multiplexer 84 is a 64 bit wide multiplexer arranged to select the active move vector depending on whether compression or decompression is active.
The ODA circuit 42 is a 64 bit wide register and associated multiplexer circuitry which creates the out-of-date adaptation mechanism as illustrated in Figure 3.

The output of the ODA circuit 42, which is 64 bits wide, is supplied to a move generation logic circuit 86, equivalent to the shift control logic 44 in figure 2, which propagates a 64 bit wide match vector to generate the move vector to adapt the dictionary 30. The same vector, i.e. the current adaptation vector, such as 58, 62 or 66 in Figure 3, is fed back by the control path 88 of the ODA circuit 42 to adapt the next adaptation vector, such as the vector 60, 68 in Figure 3.
Turning now to the remainder of the apparatus illustrated in figure 4, which functions in a manner similar to that described in the prior art referred to above, the match decision logic circuit 37 supplies the match location to a 64-to-6 encoder 90 which transforms the uncoded 64 bit wide match location into a 6 bit wide coded match location. The output of the encoder 90 passes to a binary code generator 92 which concatenates the miss or match bit to the match location.
The match decision logic circuit 37 also supplies a match type signal to a literal character assembler 94, which constructs the literal part of a compressed code for non-matched bytes, and to a match type code generator 96 which creates the static Huffman code for the match types. The match type code and match type width signals from the match type code generator 96, and the compressed code from the binary code generator 92, pass to a first code concatenator 98 which assembles the code for the match type and match location. A second code concatenator 100 receives output from concatenator 98, and also literal code and literal width signals from the literal character assembler 94, and provides output to a code concatenator 102 which assembles the current compressed code with previous compressed code.
Concatenator 102 outputs the signals next width, next code and next valid to a register 104, which is a 96 bit wide output register for the data and a 7 bit wide register for the length of valid data bits. The register 104 outputs compressed data 40, and also a valid signal, which is fed back to code concatenator 102 together with the current code and a current width signal from the register 104.
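The role of the code concatenators can be illustrated with a small software model that packs the variable-width fields of each compressed code, most significant bit first, into an output byte stream (a Python sketch; the byte-at-a-time emission policy and the class interface are assumptions made for the model, not a description of the register organisation of the hardware):

class BitPacker:
    # Accumulates variable-width codes MSB-first and emits complete bytes.
    def __init__(self):
        self.acc = 0            # pending bits
        self.nbits = 0
        self.out = bytearray()

    def put(self, value, width):
        self.acc = (self.acc << width) | (value & ((1 << width) - 1))
        self.nbits += width
        while self.nbits >= 8:                    # flush every complete byte
            self.nbits -= 8
            self.out.append((self.acc >> self.nbits) & 0xFF)

    def flush(self):
        if self.nbits:                            # zero-pad the final byte
            self.out.append((self.acc << (8 - self.nbits)) & 0xFF)
            self.nbits = 0
        return bytes(self.out)

# Example: a full match at dictionary location 2 of a 64-entry dictionary takes
# 1 (hit bit) + 6 (match location) + 2 (Huffman code '01') = 9 bits.
packer = BitPacker()
packer.put(0b0, 1)       # '0' marks a hit
packer.put(2, 6)         # match location, fixed-width binary
packer.put(0b01, 2)      # static Huffman code for the full-match type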

R0C, R1C and R2C, respectively references 106, 108 and 110, indicate the pipeline registers of the compression path.
Figure 5 illustrates a decompression circuit. The dictionary 30, multiplexer 80, multiplexer 84 and ODA circuit 42 and move generation logic circuit 86 are connected as for the compression circuit.
Compressed data in, reference 120, is supplied to a code concatenate and shift circuit 122 which assembles new compressed data with old compressed data and shifts out data which has been decompressed. The signals next underflow, next width (7 bits) and next code (96 bits) pass to a register 124 for temporary storage of compressed data. The register output is supplied to a main decoder 126, which decodes compressed code of a maximum 33 bits into 6 bit location address, 4 bit match type, and 32 bit literal data. Both the 6 bit location address and miss signals pass to a 6 to 64 decoder 128 which decodes a 6 bit coded dictionary address into its uncoded 64 bit equivalent.
The match type and literal data signals pass from the main decoder 126 to an output tuple assembler 130.
The 6 to 64 decoder 128 passes match location signals to the multiplexer 84.
The ODA circuit 42, the move generation logic circuit 86 and the dictionary 30 operate to decompress the compressed data, working in reverse to the compression process. The multiplexer 80 selects a newly formed tuple for application to the dictionary 30. The dictionary data is supplied to a selection multiplexer 132 which also receives a selected tuple signal from the 6-to-64 decoder 128. The selection multiplexer 132 selects one tuple out of the dictionary and supplies it to the output tuple assembler 130, which assembles the literal data and the dictionary word, depending on the type of match which has been decompressed.
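The work of the output tuple assembler 130 can be modelled compactly (a Python sketch; it assumes, consistently with the match-type examples given in the description of the compressor, that bit i of the 4-bit match type flags byte i of the tuple as unmatched):

def assemble_tuple(dictionary_entry, match_type, literal_bytes):
    # Matched bytes come from the selected dictionary entry; unmatched bytes are
    # taken in order from the decoded literal data of the compressed code.
    literals = iter(literal_bytes)
    return bytes(next(literals) if (match_type >> i) & 1 else dictionary_entry[i]
                 for i in range(4))

# Example: match type 0b1000 (byte 3 unmatched) takes bytes 0 to 2 from the
# dictionary entry and byte 3 from the literal field of the compressed code.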

The uncompressed data-out 134 is identical to the data-in 32. There has been no loss.
The present invention is likely to find application when small blocks of data are to be compressed.

Claims (11)

Claims
1. A lossless data compression system comprising a dictionary 30 based on content addressable memory and a coder 40 having between them a critical path including a feedback loop forming a dictionary adaptation path, characterised by circuit means 42 connected in the feedback loop whereby the dictionary can be updated using data from a previous comparison cycle at the same time as the coder codes a current comparison cycle.
2. A system according to claim 1 in which said previous adaptation cycle is the next but one previous cycle.
3. A system according to claim 1 or claim 2 in which the circuit means 42 is arranged to update the dictionary in accordance with a preceding data element while a current data element is being processed by the dictionary.
4. A system according to any one of Claims 1, 2 or 3 in which the circuit means 42 has a current adaptation vector (58, 62, 66) and a next adaptation vector (60, 64, 68), and is arranged so that at each search step the current adaptation vector is arranged to update data in the dictionary 30 and to rearrange the next adaptation vector.
5. A system according to any preceding claim in which the dictionary 30 is arranged so that at each step a previous search tuple is loaded into the top/front address 50 of the dictionary 30.
6. A system according to any preceding claim in which the dictionary 30 is arranged to hold data elements which are all of precisely equal length.
7. A system according to any preceding claim in which the dictionary 30 is arranged to indicate the address of a full match or a partial match to a search tuple.
8. A system according to claim 7 in which when the dictionary 30 indicates a partial match, the unmatched bytes are sent literally to the coder 38.
9. A lossless data decompression system comprising a content addressable memory dictionary 30 and a decoder 126, having between them a feedback loop forming a dictionary adaptation path, characterised by circuit means 42 connected in the feedback loop, whereby the dictionary can be updated using data from a previous comparison cycle at the same time as the decoder decodes a current comparison cycle.
10. A lossless method of compressing data comprising the steps of: comparing a search tuple of fixed length with a plurality of tuples of said fixed length stored in a dictionary;
indicating the location in the dictionary of a full or partial match or matches;
selecting a best match of any plurality of matches; and coding the match location and the match type;
characterised by the further steps of providing the dictionary with a current adaptation vector and a next adaptation vector;
and after comparison of each search tuple (a) updating the contents of the dictionary in accordance with the current adaptation vector, and (b) updating the next adaptation vector in accordance with the current adaptation vector.
11. A method according to claim 10 comprising performing the comparison of a search tuple in a first clock cycle, and storing said tuple in the front position in the dictionary in the next clock cycle.
CA002398346A 2000-01-25 2001-01-22 Data compression having improved compression speed Abandoned CA2398346A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GBGB0001711.1A GB0001711D0 (en) 2000-01-25 2000-01-25 Data compression having improved compression speed
GB0001711.1 2000-01-25
PCT/GB2001/000237 WO2001056169A1 (en) 2000-01-25 2001-01-22 Data compression having improved compression speed

Publications (1)

Publication Number Publication Date
CA2398346A1 true CA2398346A1 (en) 2001-08-02

Family

ID=9884322

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002398346A Abandoned CA2398346A1 (en) 2000-01-25 2001-01-22 Data compression having improved compression speed

Country Status (11)

Country Link
US (1) US6765509B2 (en)
EP (1) EP1262025B1 (en)
JP (1) JP2003521190A (en)
KR (1) KR20020070504A (en)
AT (1) ATE297609T1 (en)
AU (1) AU2001228635A1 (en)
CA (1) CA2398346A1 (en)
DE (1) DE60111361D1 (en)
GB (1) GB0001711D0 (en)
HK (1) HK1048900A1 (en)
WO (1) WO2001056169A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6748101B1 (en) 1995-05-02 2004-06-08 Cummins-Allison Corp. Automatic currency processing system
US6963587B2 (en) 2000-11-16 2005-11-08 Telefonaktiebolaget Lm Ericsson (Publ) Communication system and method utilizing request-reply communication patterns for data compression
AR042582A1 (en) * 2000-11-16 2005-06-29 Ericsson Telefon Ab L M SYSTEM AND METHOD OF COMMUNICATIONS USING FORMS OF REQUEST COMMUNICATION - REPLACEMENT FOR COMPRESSION OF DATA
GB0102572D0 (en) * 2001-02-01 2001-03-21 Btg Int Ltd Apparatus to provide fast data compression
US6892292B2 (en) * 2002-01-09 2005-05-10 Nec Corporation Apparatus for one-cycle decompression of compressed data and methods of operation thereof
US6674908B1 (en) 2002-05-04 2004-01-06 Edward Lasar Aronov Method of compression of binary data with a random number generator
DE10310858A1 (en) * 2003-03-11 2004-09-23 Bergische Universität Wuppertal Character string compression method for compressing computer data, whereby data on a data bus to a memory component is compared with data stored in the memory to find the longest possible match
US6900746B1 (en) * 2003-12-23 2005-05-31 Wend Llc Asynchronous, data-activated concatenator for variable length datum segments
JP2006324944A (en) * 2005-05-19 2006-11-30 Renesas Technology Corp Encoding device
US20090098050A1 (en) * 2005-09-28 2009-04-16 The Regents Of The Unversity Of California Calcium binding peptides
DE102007033146B4 (en) 2007-07-13 2012-02-02 Schwäbische Hüttenwerke Automotive GmbH & Co. KG Adjustment valve for adjusting the delivery volume of a positive displacement pump
JP5828256B2 (en) * 2011-09-27 2015-12-02 日本電気株式会社 Data transfer control device, data transfer control method, and data transfer control system
US8947270B2 (en) 2013-06-29 2015-02-03 Intel Corporation Apparatus and method to accelerate compression and decompression operations

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4876541A (en) * 1987-10-15 1989-10-24 Data Compression Corporation Stem for dynamically compressing and decompressing electronic data
US5861827A (en) * 1996-07-24 1999-01-19 Unisys Corporation Data compression and decompression system with immediate dictionary updating interleaved with string search

Also Published As

Publication number Publication date
GB0001711D0 (en) 2000-03-15
DE60111361D1 (en) 2005-07-14
US20030117299A1 (en) 2003-06-26
EP1262025A1 (en) 2002-12-04
EP1262025B1 (en) 2005-06-08
WO2001056169A1 (en) 2001-08-02
AU2001228635A1 (en) 2001-08-07
US6765509B2 (en) 2004-07-20
JP2003521190A (en) 2003-07-08
ATE297609T1 (en) 2005-06-15
KR20020070504A (en) 2002-09-09
HK1048900A1 (en) 2003-04-17

Similar Documents

Publication Publication Date Title
US6906645B2 (en) Data compression having more effective compression
US5729228A (en) Parallel compression and decompression using a cooperative dictionary
JP7031828B2 (en) Methods, devices, and systems for data compression and decompression of semantic values
US5870036A (en) Adaptive multiple dictionary data compression
US5281967A (en) Data compression/decompression method and apparatus
US5406279A (en) General purpose, hash-based technique for single-pass lossless data compression
KR100331351B1 (en) Method and apparatus for compressing and decompressing image data
EP1262025B1 (en) Data compression having improved compression speed
US7215259B2 (en) Data compression with selective encoding of short matches
US5874908A (en) Method and apparatus for encoding Lempel-Ziv 1 variants
JP2003133964A (en) Context model providing apparatus and method therefor
US5877711A (en) Method and apparatus for performing adaptive data compression
US20040022312A1 (en) Lossless data compression
US20060069857A1 (en) Compression system and method
CA2446952C (en) Character table implemented data compression method and apparatus
US6628211B1 (en) Prefix table implemented data compression method and apparatus
US20040119615A1 (en) Apparatus to provide fast data compression
Martinez et al. Rice-Marlin Codes: Tiny and Efficient Variable-to-Fixed Codes
JP3171510B2 (en) Method for compressing and decompressing data in dictionary-based memory

Legal Events

Date Code Title Description
FZDE Discontinued