US20120106621A1 - Chroma temporal rate reduction and high-quality pause system and method - Google Patents

Chroma temporal rate reduction and high-quality pause system and method

Info

Publication number
US20120106621A1
US20120106621A1 US12/955,549 US95554910A US2012106621A1 US 20120106621 A1 US20120106621 A1 US 20120106621A1 US 95554910 A US95554910 A US 95554910A US 2012106621 A1 US2012106621 A1 US 2012106621A1
Authority
US
United States
Prior art keywords
data
video
transform
information
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/955,549
Inventor
Steven E. Saunders
Krasimir D. Kolarov
William C. Lynch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Straight Path IP Group Inc
Original Assignee
Droplet Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/418,363 external-priority patent/US20030198395A1/en
Application filed by Droplet Technology Inc filed Critical Droplet Technology Inc
Priority to US12/955,549 priority Critical patent/US20120106621A1/en
Publication of US20120106621A1 publication Critical patent/US20120106621A1/en
Assigned to INNOVATIVE COMMUNICATIONS TECHNOLOGY, INC. reassignment INNOVATIVE COMMUNICATIONS TECHNOLOGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DROPLET TECHNOLOGY, INC.
Assigned to STRAIGHT PATH IP GROUP, INC. reassignment STRAIGHT PATH IP GROUP, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: INNOVATIVE COMMUNICATIONS TECHNOLOGIES, INC.
Assigned to SORYN TECHNOLOGIES LLC reassignment SORYN TECHNOLOGIES LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STRAIGHT PATH IP GROUP, INC.
Assigned to STRAIGHT PATH IP GROUP, INC. reassignment STRAIGHT PATH IP GROUP, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SORYN TECHNOLOGIES LLC
Assigned to CLUTTERBUCK CAPITAL MANAGEMENT, LLC reassignment CLUTTERBUCK CAPITAL MANAGEMENT, LLC SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DIPCHIP CORP., STRAIGHT PATH ADVANCED COMMUNICATION SERVICES, LLC, STRAIGHT PATH COMMUNICATIONS INC., STRAIGHT PATH IP GROUP, INC., STRAIGHT PATH SPECTRUM, INC., STRAIGHT PATH SPECTRUM, LLC, STRAIGHT PATH VENTURES, LLC
Assigned to STRAIGHT PATH IP GROUP, INC., DIPCHIP CORP., STRAIGHT PATH ADVANCED COMMUNICATION SERVICES, LLC, STRAIGHT PATH COMMUNICATIONS INC., STRAIGHT PATH SPECTRUM, INC., STRAIGHT PATH SPECTRUM, LLC, STRAIGHT PATH VENTURES, LLC reassignment STRAIGHT PATH IP GROUP, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CLUTTERBUCK CAPITAL MANAGEMENT, LLC
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N19/635Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by filter definition or implementation details

Definitions

  • the present invention relates to data compression, and more particularly to compressing visual data.
  • Video “codecs” are used to reduce the data rate required for data communication streams by balancing between image quality, processor requirements (i.e. cost/power consumption), and compression ratio (i.e. resulting data rate).
  • the currently available compression approaches offer a different range of trade-offs, and spawn a plurality of codec profiles, where each profile is optimized to meet the needs of a particular application.
  • Lossy digital video compression systems operate on digitized video sequences to produce much smaller digital representations.
  • the reconstructed visible result looks much like the original video but may not generally be a perfect match.
  • a typical digital video compression system operates in a sequence of stages, comprising a transform stage, a quantization stage, and an entropy-coding stage.
  • Some compression systems such as MPEG and other DCT-based codec algorithms add other stages, such as a motion compensation search, etc.
  • 2D and 3D Wavelets are current alternatives to the DCT-based codec algorithms. Wavelets have been highly regarded due to their pleasing image quality and flexible compression ratios, prompting the JPEG committee to adopt a wavelet algorithm for its JPEG2000 still image standard.
  • When using a wavelet transform as the transform stage in a video compressor, such algorithm operates as a sequence of filter pairs that split the data into high-pass and low-pass components or bands.
  • Standard wavelet transforms operate on the spatial extent of a single image, in 2-dimensional fashion. The two dimensions are handled by combining filters that work horizontally with filters that work vertically. Typically, these alternate in sequence, H-V-H-V, though strict alternation is not necessary. It is known in the art to apply wavelet filters in the temporal direction as well: operating with samples from successive images in time.
  • wavelet transforms can be applied separately to brightness or luminance (luma) and color-difference or chrominance (chroma) components of the video signal.
  • This mixed 3-D transform serves the same purpose as a 3-D wavelet transform. It is also possible to use a short DCT in the temporal direction for a 3-D DCT transform.
  • the temporal part of a 3-D wavelet transform typically differs from the spatial part in being much shorter.
  • Typical sizes for the spatial transform are 720 pixels horizontally and 480 pixels vertically; typical sizes for the temporal transform are two, four, eight, or fifteen frames. These temporal lengths are smaller because handling many frames results in long delays in processing, which are undesirable, and requires storing frames while they are processed, which is expensive.
  • a system and method are provided for compressing data.
  • luminance data of a frame is updated at a first predetermined rate, while chrominance data of the frame is updated at a second predetermined rate that is less than the first predetermined rate.
  • one or more frequency bands of the chrominance data may be omitted.
  • the one or more frequency bands may be omitted utilizing a filter.
  • Such filter may include a wavelet filter.
  • Another system and method are provided for compressing data. Such system and method involves compressing video data, and inserting pause information with the compressed data. Thus, the pause information is used when the video data is paused during the playback thereof.
  • the pause information may be used to improve a quality of the played back video data during a pause operation.
  • the pause information may include a high-resolution frame.
  • the pause information may include data capable of being used to construct a high-resolution frame.
  • FIG. 1 illustrates a framework for compressing/decompressing data, in accordance with one embodiment.
  • FIG. 2 illustrates a method for compressing data with chrominance (chroma) temporal rate reduction, in accordance with one embodiment.
  • FIG. 2A illustrates a method for compressing data with a high-quality pause capability during playback, in accordance with one embodiment.
  • FIG. 3 illustrates a method for compressing/decompressing data, in accordance with one embodiment.
  • FIG. 4 shows a data structure on which the method of FIG. 3 is carried out.
  • FIG. 5 illustrates a method for compressing/decompressing data, in accordance with one embodiment.
  • FIG. 1 illustrates a framework 100 for compressing/decompressing data, in accordance with one embodiment. Included in this framework 100 are a coder portion 101 and a decoder portion 103 , which together form a “codec.”
  • the coder portion 101 includes a transform module 102 , a quantizer 104 , and an entropy encoder 106 for compressing data for storage in a file 108 .
  • the decoder portion 103 includes a reverse transform module 114 , a de-quantizer 111 , and an entropy decoder 110 for decompressing data for use (i.e. viewing in the case of video data, etc).
  • the transform module 102 carries out a reversible transform, often linear, of a plurality of pixels (i.e. in the case of video data) for the purpose of de-correlation.
  • the quantizer 104 effects the quantization of the transform values, after which the entropy encoder 106 is responsible for entropy coding of the quantized transform coefficients.
  • the various components of the decoder portion 103 essentially reverse such process.
  • FIG. 2 illustrates a method 200 for compressing data with chrominance temporal rate reduction, in accordance with one embodiment.
  • the present method 200 may be carried out in the context of the transform module 102 of FIG. 1 and the manner in which it carries out a reversible transform. It should be noted, however, that the method 200 may be implemented in any desired context.
  • luminance (luma) data of a frame is updated at a first predetermined rate.
  • chrominance (chroma) data of the frame is updated at a second predetermined rate that is less than the first predetermined rate.
  • In a digital video compression system, it is thus possible to vary the effective rate of transmitting temporal detail for different components of the scene. For example, one may arrange the data stream so that some components of the transformed signal are sent more frequently than others. In one example of this, one may compute a three-dimensional (spatial+temporal) wavelet transform of a video sequence, and transmit the resulting luma coefficients at the full frame rate.
  • chroma rate compression is as follows: for the chroma components, one may compute an average across two frames (four fields) of the spatially transformed chroma values. This may be accomplished by applying a double Haar wavelet filter pair and discarding all but the lowest frequency component. One may transmit only this average value. On reconstruction, one can hold the received value across two frames (four fields) of chroma. It has been found that viewers do not notice this, even when they are critically examining the compression method for flaws.
  • the following stage of the video compression process discards information by grouping similar values together and transmitting only a representative value. This discards detail about exactly how bright an area is, or exactly what color it is.
  • zero is chosen for the representative value (denoting no change at a particular scale).
  • human visual sensitivity to levels is known to differ between luma and chroma.
  • FIG. 2A illustrates a method 250 for compressing data with a high-quality pause capability during playback, in accordance with one embodiment.
  • the present method 250 may be carried out in the context of the framework of FIG. 1 . It should be noted, however, that the method 250 may be implemented in any desired context.
  • video data is compressed.
  • the data compression may be carried out in the context of the coder portion 101 of the framework of FIG. 1 .
  • such compression may be implemented in any desired context.
  • pause information is inserted with the compressed data.
  • the pause information may be used to improve a quality of the played back video data.
  • the pause information may include a high-resolution frame.
  • the pause information may include data capable of being used to construct a high-resolution frame.
  • the pause information may be used when the video data is paused during the playback thereof.
  • the compressed video is equipped with a set of extra information especially for use when the video is paused.
  • This extra information may include a higher-quality frame, or differential information that when combined with a regular compressed frame results in a higher-quality frame.
  • this extra information need not be included for every frame, but rather only for some frames.
  • the extra information may be included for one frame of every 15 or so in the image, allowing a high-quality pause operation to occur at a time granularity of ½ second. This may be done in accord with observations of video pausing behavior.
  • the extra information may include a whole frame of the video, compressed using a different parameter set (for example, quantizing away less information) or using a different compression method altogether (for example, using JPEG-2000 within an MPEG stream). These extra frames may be computed when the original video is compressed, and may be carried along with the regular compressed video frames in the transmitted or stored compressed video.
  • a different parameter set for example, quantizing away less information
  • a different compression method altogether for example, using JPEG-2000 within an MPEG stream.
  • the extra information may include extra information for the use of the regular decompression process rather than a complete extra frame.
  • the extra information might consist of a filter band of data that is discarded in the normal compression but retained for extra visual sharpness when paused.
  • the extra information might include extra low-order bits of information from the transformed coefficients, and additional coefficients, resulting from using a smaller quantization setting for the chosen pausable frames.
  • the extra information may include data for the use of a decompression process that differs from the regular decompression process, and is not a complete frame. This information, after being decompressed, may be combined with one or more frames of video decompressed by the regular process to produce a more detailed still frame.
  • More information regarding an exemplary wavelet-based transformation will now be set forth, which may be employed in combination with the various features of FIGS. 1-2A. It should be noted, however, that such wavelet-based transformation is set forth for illustrative purposes only and should not be construed as limiting in any manner. For example, it is conceived that the various features of FIGS. 1-2A may be implemented in the context of a DCT-based algorithm or the like.
  • FIG. 3 illustrates a method 300 for compressing/decompressing data, in accordance with one embodiment.
  • the present method 300 may be carried out in the context of the transform module 102 of FIG. 1 and the manner in which it carries out a reversible transform. It should be noted, however, that the method 300 may be implemented in any desired context.
  • an interpolation formula is received (i.e. identified, retrieved from memory, etc.) for compressing data.
  • the data may refer to any data capable of being compressed.
  • the interpolation formula may include any formula employing interpolation (i.e. a wavelet filter, etc.).
  • It is determined whether at least one data value required by the interpolation formula is unavailable.
  • Such data value may include any subset of the aforementioned data. By being unavailable, the required data value may be non-existent, out of range, etc.
  • the extrapolation formula may include any formula employing extrapolation. By this scheme, the compression of the data is enhanced.
  • FIG. 4 shows a data structure 400 on which the method 300 is carried out.
  • a “best fit” 401 may be achieved by an interpolation formula 403 involving a plurality of data values 402 .
  • FIG. 5 illustrates a method 500 for compressing/decompressing data, in accordance with one embodiment.
  • the present method 500 may be carried out in the context of the transform module 102 of FIG. 1 and the manner in which it carries out a reversible transform. It should be noted, however, that the method 500 may be implemented in any desired context.
  • the method 500 provides a technique for generating edge filters for a wavelet filter pair.
  • a wavelet scheme is analyzed to determine local derivatives that a wavelet filter approximates.
  • a polynomial order is chosen to use for extrapolation based on characteristics of the wavelet filter and the number of available samples.
  • extrapolation formulas are derived for each wavelet filter using the chosen polynomial order. See operation 506 .
  • specific edge wavelet cases are derived utilizing the extrapolation formulas with the available samples in each case.
  • One of the transforms specified in the JPEG 2000 standard is the reversible 5-3 transform shown in Equations #1.1 and 1.2.
  • Equation #1.1.R may be used in place of Equation #1.1 when point one is right-most.
  • the apparent multiply by 3 can be accomplished with a shift and add.
  • the division by 3 is trickier.
  • the right-most index is 2N−1
  • there is no problem calculating Y2N−2 by means of Equation #1.2.
  • the index of the right-most point is even (say 2N)
  • Equation #1.2 involves missing values.
  • the object is to subtract an estimate of Y from the even X using just the previously calculated odd-indexed Ys, Y1 and Y3 in the case in point. This required estimate at index 2N can be obtained by linear extrapolation, as noted above.
  • the appropriate formula is given by Equation #1.2.R.
  • the reverse transform filters can be obtained for these extrapolating boundary filters as for the original ones, namely by back substitution.
  • the inverse transform boundary filters may be used in place of the standard filters in exactly the same circumstances as the forward boundary filters are used.
  • Such filters are represented by Equations #2.1.Rinv, 2.2.Rinv, 2.1.L.inv, and 2.2.L.inv.
  • Equations #2.1.Rinv, #2.2.Rinv, #2.1.L.inv, and #2.2.L.inv: $X_{2N-1} = -3Y_{2N-1} + \left\lfloor \frac{3X_{2N-2} - X_{2N-4} + 1}{2} \right\rfloor$ (eq 2.1.Rinv); $X_{2N} = Y_{2N} - \left\lfloor \frac{3Y_{2N-1} - Y_{2N-3} + 2}{4} \right\rfloor$ (eq 2.2.Rinv); $X_{0} = -3Y_{0} + \left\lfloor \frac{3X_{1} - X_{3} + 1}{2} \right\rfloor$ (eq 2.1.L.inv); $X_{0} = Y_{0} - \left\lfloor \frac{3Y_{1} - Y_{3} + 2}{4} \right\rfloor$ (eq 2.2.L.inv)
  • one embodiment may utilize a reformulation of the 5-3 filters that avoids the addition steps of the prior art while preserving the visual properties of the filter. See for example, Equations #3.1, 3.1R, 3.2, 3.2L.
  • JPEG-2000 inverse filters can be reformulated in the following Equations #4.2, 4.2L, 4.1, 4.1R.

Abstract

A system and method are provided for compressing data. In use, luminance data of a frame is updated at a first predetermined rate, while chrominance data of the frame is updated at a second predetermined rate that is less than the first predetermined rate. Moreover, pause information may be inserted with the compressed data, where the pause information may be used when the video data is paused during the playback thereof to increase the quality of a still frame.

Description

    RELATED APPLICATIONS
  • The present application is a divisional application of patent application Ser. No. 10/447,514, filed May 28, 2003, which is a continuation-in-part of patent application Ser. No. 10/418,363, filed on April 17, 2003, and which claims priority from a first provisional application filed Jun. 21, 2002 under Ser. No. 60/390,345, and a second provisional application filed Jun. 21, 2002 under Ser. No. 60/390,492, all of which are incorporated herein by reference in their entirety.
  • FIELD OF THE INVENTION
  • The present invention relates to data compression, and more particularly to compressing visual data.
  • BACKGROUND OF THE INVENTION
  • Video “codecs” (compressor/decompressor) are used to reduce the data rate required for data communication streams by balancing between image quality, processor requirements (i.e. cost/power consumption), and compression ratio (i.e. resulting data rate). The currently available compression approaches offer a different range of trade-offs, and spawn a plurality of codec profiles, where each profile is optimized to meet the needs of a particular application.
  • Lossy digital video compression systems operate on digitized video sequences to produce much smaller digital representations. The reconstructed visible result looks much like the original video but may not generally be a perfect match. For these systems, it is important that the information lost in the process correspond to aspects of the video that are not easily seen or not readily noticed by viewers.
  • A typical digital video compression system operates in a sequence of stages, comprising a transform stage, a quantization stage, and an entropy-coding stage. Some compression systems such as MPEG and other DCT-based codec algorithms add other stages, such as a motion compensation search, etc. 2D and 3D Wavelets are current alternatives to the DCT-based codec algorithms. Wavelets have been highly regarded due to their pleasing image quality and flexible compression ratios, prompting the JPEG committee to adopt a wavelet algorithm for its JPEG2000 still image standard.
  • When using a wavelet transform as the transform stage in a video compressor, such algorithm operates as a sequence of filter pairs that split the data into high-pass and low-pass components or bands. Standard wavelet transforms operate on the spatial extent of a single image, in 2-dimensional fashion. The two dimensions are handled by combining filters that work horizontally with filters that work vertically. Typically, these alternate in sequence, H-V-H-V, though strict alternation is not necessary. It is known in the art to apply wavelet filters in the temporal direction as well: operating with samples from successive images in time. In addition, wavelet transforms can be applied separately to brightness or luminance (luma) and color-difference or chrominance (chroma) components of the video signal.
  • One may use a DCT or other non-wavelet spatial transform for spatial 2-D together with a wavelet-type transform in the temporal direction. This mixed 3-D transform serves the same purpose as a 3-D wavelet transform. It is also possible to use a short DCT in the temporal direction for a 3-D DCT transform.
  • The temporal part of a 3-D wavelet transform typically differs from the spatial part in being much shorter. Typical sizes for the spatial transform are 720 pixels horizontally and 480 pixels vertically; typical sizes for the temporal transform are two, four, eight, or fifteen frames. These temporal lengths are smaller because handling many frames results in long delays in processing, which are undesirable, and requires storing frames while they are processed, which is expensive.
  • When one looks at a picture or a video sequence to judge its quality, or when one visually compares two pictures or two video sequences, some defects or differences are harder to detect than others. This is a consequence of the human visual system having greater sensitivity for some aspects of what one sees than for others. For instance, one may see very fine details only when they are at high contrast, but can see medium-scale details which are very subtle in contrast. These differences are important for compression. Compression processes are designed to make the differences and errors as unnoticeable as possible. Thus, a compression process may produce good fidelity in the middle sizes of brightness contrast, while allowing more error in fine details.
  • There is thus a continuing need to exploit various psychophysics opportunities to improve compression algorithms, without significantly sacrificing perceived quality.
  • The foregoing compression systems are often used in Personal Video Recorders, Digital Video Recorders, Cable Set-Top Boxes, and the like. A common feature of these applications and others is that users have the possibility of pausing the video, keeping a single frame displayed for an extended time as a still image.
  • It is known in the art to process a video sequence, or other sequence of images, to derive a single image of higher resolution than the input images. This processing is very expensive in computing, however, as it must identify or match moving objects in the scene, camera motion, lighting shifts, and other changes and compensate for each change individually and in combination. Contemporary applications, however, do not presently support such computational extravagance for a simple pause function.
  • There is thus a continuing need to exploit various psychophysics opportunities to present a paused image that is of substantially higher visual quality than would be produced by simply repeating a frame of decompressed video.
  • DISCLOSURE OF THE INVENTION
  • A system and method are provided for compressing data. In use, luminance data of a frame is updated at a first predetermined rate, while chrominance data of the frame is updated at a second predetermined rate that is less than the first predetermined rate.
  • Thus, the amount of compression is increased. To accomplish this, in one embodiment, one or more frequency bands of the chrominance data may be omitted. Moreover, the one or more frequency bands may be omitted utilizing a filter. Such filter may include a wavelet filter. Thus, upon the decompression of the video data, the omitted portions of the chrominance data may be interpolated.
  • Another system and method are provided for compressing data. Such system and method involves compressing video data, and inserting pause information with the compressed data. Thus, the pause information is used when the video data is paused during the playback thereof.
  • In one embodiment, the pause information may be used to improve a quality of the played back video data during a pause operation. Moreover, the pause information may include a high-resolution frame. Still yet, the pause information may include data capable of being used to construct a high-resolution frame.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a framework for compressing/decompressing data, in accordance with one embodiment.
  • FIG. 2 illustrates a method for compressing data with chrominance (chroma) temporal rate reduction, in accordance with one embodiment.
  • FIG. 2A illustrates a method for compressing data with a high-quality pause capability during playback, in accordance with one embodiment.
  • FIG. 3 illustrates a method for compressing/decompressing data, in accordance with one embodiment.
  • FIG. 4 shows a data structure on which the method of FIG. 3 is carried out.
  • FIG. 5 illustrates a method for compressing/decompressing data, in accordance with one embodiment.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 illustrates a framework 100 for compressing/decompressing data, in accordance with one embodiment. Included in this framework 100 are a coder portion 101 and a decoder portion 103, which together form a “codec.” The coder portion 101 includes a transform module 102, a quantizer 104, and an entropy encoder 106 for compressing data for storage in a file 108. To carry out decompression of such file 108, the decoder portion 103 includes a reverse transform module 114, a de-quantizer 111, and an entropy decoder 110 for decompressing data for use (i.e. viewing in the case of video data, etc).
  • In use, the transform module 102 carries out a reversible transform, often linear, of a plurality of pixels (i.e. in the case of video data) for the purpose of de-correlation. Next, the quantizer 104 effects the quantization of the transform values, after which the entropy encoder 106 is responsible for entropy coding of the quantized transform coefficients. The various components of the decoder portion 103 essentially reverse such process.
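The staged structure just described can be illustrated with a short sketch. The following Python example is purely illustrative and not the patent's implementation: the toy Haar lifting transform, the uniform quantizer, and the use of zlib as a stand-in entropy coder are all assumptions made for demonstration only.

```python
import zlib
import numpy as np

def forward_transform(x):
    # Toy de-correlating transform: one level of reversible Haar lifting.
    even, odd = x[::2].astype(np.int32), x[1::2].astype(np.int32)
    high = odd - even                      # detail band
    low = even + (high >> 1)               # average band
    return low, high

def inverse_transform(low, high):
    even = low - (high >> 1)
    odd = even + high
    out = np.empty(even.size + odd.size, dtype=np.int32)
    out[::2], out[1::2] = even, odd
    return out

def quantize(band, step):
    return np.round(band / step).astype(np.int32)

def dequantize(q, step):
    return q * step

def entropy_encode(bands):
    # Stand-in for a real entropy coder: pack the quantized coefficients.
    return zlib.compress(np.concatenate(bands).astype(np.int32).tobytes())

def entropy_decode(blob, sizes):
    flat = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return np.split(flat, np.cumsum(sizes)[:-1])

# Coder portion: transform -> quantize -> entropy code into a "file".
pixels = (np.arange(16) % 7) * 30
low, high = forward_transform(pixels)
q_low, q_high = quantize(low, 1), quantize(high, 4)   # coarser step for the detail band
file_blob = entropy_encode([q_low, q_high])

# Decoder portion reverses the process: entropy decode -> dequantize -> inverse transform.
d_low, d_high = entropy_decode(file_blob, [q_low.size, q_high.size])
reconstruction = inverse_transform(dequantize(d_low, 1), dequantize(d_high, 4))
```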
  • FIG. 2 illustrates a method 200 for compressing data with chrominance temporal rate reduction, in accordance with one embodiment. In one embodiment, the present method 200 may be carried out in the context of the transform module 102 of FIG. 1 and the manner in which it carries out a reversible transform. It should be noted, however, that the method 200 may be implemented in any desired context.
  • In operation 202, luminance (luma) data of a frame is updated at a first predetermined rate. In operation 204, chrominance (chroma) data of the frame is updated at a second predetermined rate that is less than the first predetermined rate.
  • In a digital video compression system, it is thus possible to vary the effective rate of transmitting temporal detail for different components of the scene. For example, one may arrange the data stream so that some components of the transformed signal are sent more frequently than others. In one example of this, one may compute a three-dimensional (spatial+temporal) wavelet transform of a video sequence, and transmit the resulting luma coefficients at the full frame rate.
  • Moreover, one may omit one or more higher-frequency bands from the chroma signal, thus in effect lowering the temporal response rate—the temporal detail fidelity—of the video for chroma information. During reconstruction of the compressed video for viewing, one may fill in or interpolate the omitted information with an approximation, rather than showing a “zero level” where there was no information transmitted. This may be done in the same way as for omitted spatial detail. Most simply it can be done by holding the most-recently received level until new information is received. More generally, it can be done by computing an inverse wavelet filter, using zero or some other default value for the omitted information, to produce a smoothed variation with the correct overall level but reduced temporal detail.
  • One particular example of this sort of chroma rate compression is as follows: for the chroma components, one may compute an average across two frames (four fields) of the spatially transformed chroma values. This may be accomplished by applying a double Haar wavelet filter pair and discarding all but the lowest frequency component. One may transmit only this average value. On reconstruction, one can hold the received value across two frames (four fields) of chroma. It has been found that viewers do not notice this, even when they are critically examining the compression method for flaws.
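As a concrete illustration of the two-frame chroma averaging just described, the sketch below (an illustration under assumed helper names, not the patent's code) keeps only the Haar low band across each pair of chroma frames and holds that value for both frames on reconstruction.

```python
import numpy as np

def encode_chroma_pairs(chroma_frames):
    """chroma_frames: list of 2-D chroma planes, assumed already spatially transformed."""
    averages = []
    for a, b in zip(chroma_frames[0::2], chroma_frames[1::2]):
        low = (a.astype(np.int32) + b.astype(np.int32)) // 2   # two-frame Haar low band
        # The Haar high band (b - a) is discarded: only the average is transmitted.
        averages.append(low)
    return averages

def decode_chroma_pairs(averages):
    frames = []
    for low in averages:
        frames.extend([low, low])   # hold the received value across two frames of chroma
    return frames

# Luma would be carried at the full frame rate; the chroma update rate is halved here.
planes = [np.full((4, 4), v, dtype=np.uint8) for v in (100, 104, 120, 118)]
transmitted = encode_chroma_pairs(planes)        # 2 planes transmitted instead of 4
reconstructed = decode_chroma_pairs(transmitted)
```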
  • The following stage of the video compression process, quantization, discards information by grouping similar values together and transmitting only a representative value. This discards detail about exactly how bright an area is, or exactly what color it is. When one finds that a transformed component is near zero, zero is chosen for the representative value (denoting no change at a particular scale). One can omit sending zero and have the receiver assume zero as its default value. This helps compression by lowering the amount of data sent. Again, human visual sensitivity to levels is known to differ between luma and chroma.
  • Thus, one can take advantage of this fact by applying different levels of quantization to luma and chroma components, discarding more information from the chroma. When this different quantization is done following a temporal or 3-D transform, the effect is to reduce the temporal detail in the chroma band along with the spatial detail. In a typical case, for ordinary video material, the temporal transform results in low frequency components that are much larger than higher frequency components. Then, applying quantization to this transformed result groups the small values with zero, in effect getting them omitted from the compressed representation and lowering the temporal resolution of the chroma components.
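The different treatment of luma and chroma described above can be sketched as follows; the specific quantization steps are illustrative assumptions, chosen only to show how small chroma temporal-detail values collapse to zero.

```python
import numpy as np

def temporal_split(frame0, frame1):
    low = (frame0 + frame1) / 2.0      # temporal average (low band)
    high = frame1 - frame0             # temporal detail (high band)
    return low, high

def quantize(band, step):
    return np.round(band / step).astype(np.int32)

rng = np.random.default_rng(0)
luma = rng.integers(0, 256, (2, 8, 8)).astype(float)
chroma0 = rng.integers(100, 140, (8, 8)).astype(float)
chroma1 = chroma0 + rng.normal(0.0, 1.5, (8, 8))    # chroma typically changes slowly in time

_, luma_high = temporal_split(luma[0], luma[1])
_, chroma_high = temporal_split(chroma0, chroma1)

q_luma_high = quantize(luma_high, step=4)        # finer step: luma temporal detail retained
q_chroma_high = quantize(chroma_high, step=16)   # coarser step: mostly zeros, effectively omitted
fraction_dropped = float(np.mean(q_chroma_high == 0))
```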
  • FIG. 2A illustrates a method 250 for compressing data with a high-quality pause capability during playback, in accordance with one embodiment. In one embodiment, the present method 250 may be carried out in the context of the framework of FIG. 1. It should be noted, however, that the method 250 may be implemented in any desired context.
  • In operation 252, video data is compressed. In one embodiment, the data compression may be carried out in the context of the coder portion 101 of the framework of FIG. 1. Of course, such compression may be implemented in any desired context.
  • In operation 254, pause information is inserted with the compressed data. In one embodiment, the pause information may be used to improve a quality of the played back video data. Moreover, the pause information may include a high-resolution frame. Still yet, the pause information may include data capable of being used to construct a high-resolution frame.
  • Thus, in operation 256, the pause information may be used when the video data is paused during the playback thereof. In the present method, the compressed video is equipped with a set of extra information especially for use when the video is paused. This extra information may include a higher-quality frame, or differential information that when combined with a regular compressed frame results in a higher-quality frame.
  • In order to keep the compression bit rate at a useful level, this extra information need not be included for every frame, but rather only for some frames. Typically, the extra information may be included for one frame of every 15 or so in the image, allowing a high-quality pause operation to occur at a time granularity of ½ second. This may be done in accord with observations of video pausing behavior. One can, however, include the extra information more often than this, at a cost in bit rate. One can also include it less often to get better compression performance, at a cost in user convenience. The tradeoff may be made over a range from two frames to 60 or more frames.
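A possible multiplexing of such pause information is sketched below; the packet structure, field names, and 15-frame interval are assumptions for illustration, not the patent's bitstream format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Packet:
    frame_index: int
    regular_payload: bytes
    pause_payload: Optional[bytes] = None   # present only for "pausable" frames

def build_stream(regular: List[bytes], high_quality: List[bytes],
                 pause_interval: int = 15) -> List[Packet]:
    stream = []
    for i, payload in enumerate(regular):
        extra = high_quality[i] if i % pause_interval == 0 else None
        stream.append(Packet(i, payload, extra))
    return stream

# 60 frames at ~30 fps: a pausable frame roughly every half second.
regular = [bytes([i]) * 10 for i in range(60)]
better = [bytes([i]) * 40 for i in range(60)]    # e.g. the same frames, finer quantization
stream = build_stream(regular, better, pause_interval=15)
pausable_frames = [p.frame_index for p in stream if p.pause_payload is not None]  # [0, 15, 30, 45]
```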
  • In one embodiment, the extra information may include a whole frame of the video, compressed using a different parameter set (for example, quantizing away less information) or using a different compression method altogether (for example, using JPEG-2000 within an MPEG stream). These extra frames may be computed when the original video is compressed, and may be carried along with the regular compressed video frames in the transmitted or stored compressed video.
  • In another embodiment, the extra information may include extra information for the use of the regular decompression process rather than a complete extra frame. For example, in a wavelet video compressor, the extra information might consist of a filter band of data that is discarded in the normal compression but retained for extra visual sharpness when paused. For another example, the extra information might include extra low-order bits of information from the transformed coefficients, and additional coefficients, resulting from using a smaller quantization setting for the chosen pausable frames.
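For the wavelet-compressor variant described in this paragraph, a minimal sketch follows; the single-level horizontal lifting split and the helper names are assumptions used only to show a detail band being discarded for ordinary frames but carried as extra information for pausable ones.

```python
import numpy as np

def split_bands(frame):
    even, odd = frame[:, ::2].astype(np.int32), frame[:, 1::2].astype(np.int32)
    high = odd - even               # finest horizontal detail band
    low = even + (high >> 1)
    return low, high

def merge_bands(low, high):
    even = low - (high >> 1)
    odd = even + high
    out = np.empty((low.shape[0], low.shape[1] * 2), dtype=np.int32)
    out[:, ::2], out[:, 1::2] = even, odd
    return out

def encode(frame, pausable):
    low, high = split_bands(frame)
    extra = high if pausable else None     # detail band kept only for pausable frames
    return low, extra

def decode(low, extra):
    high = extra if extra is not None else np.zeros_like(low)
    return merge_bands(low, high)

frame = ((np.indices((4, 8)).sum(axis=0) * 17) % 251).astype(np.int32)
low, extra = encode(frame, pausable=True)
paused_still = decode(low, extra)   # detail band restored: sharper still image
normal_view = decode(low, None)     # regular playback path, detail band omitted
```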
  • In another embodiment, the extra information may include data for the use of a decompression process that differs from the regular decompression process, and is not a complete frame. This information, after being decompressed, may be combined with one or more frames of video decompressed by the regular process to produce a more detailed still frame.
  • More information regarding an exemplary wavelet-based transformation will now be set forth, which may be employed in combination with the various features of FIGS. 1-2A. It should be noted, however, that such wavelet-based transformation is set forth for illustrative purposes only and should not be construed as limiting in any manner. For example, it is conceived that the various features of FIGS. 1-2A may be implemented in the context of a DCT-based algorithm or the like.
  • FIG. 3 illustrates a method 300 for compressing/decompressing data, in accordance with one embodiment. In one embodiment, the present method 300 may be carried out in the context of the transform module 102 of FIG. 1 and the manner in which it carries out a reversible transform. It should be noted, however, that the method 300 may be implemented in any desired context.
  • In operation 302, an interpolation formula is received (i.e. identified, retrieved from memory, etc.) for compressing data. In the context of the present description, the data may refer to any data capable of being compressed. Moreover, the interpolation formula may include any formula employing interpolation (i.e. a wavelet filter, etc.).
  • In operation 304, it is determined whether at least one data value is required by the interpolation formula, where the required data value is unavailable. Such data value may include any subset of the aforementioned data. By being unavailable, the required data value may be non-existent, out of range, etc.
  • Thereafter, an extrapolation operation is performed to generate the required unavailable data value. See operation 306. The extrapolation formula may include any formula employing extrapolation. By this scheme, the compression of the data is enhanced.
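A minimal sketch of operations 302-306 follows; the helper name and the specific linear-extrapolation rule are assumptions for illustration, not the patent's formulas.

```python
def estimate(samples, i):
    """Estimate samples[i] from its neighbours, extrapolating where a neighbour is unavailable."""
    if i + 1 >= len(samples):                  # right neighbour unavailable:
        return 2 * samples[-1] - samples[-2]   # linear extrapolation from the right edge
    if i - 1 < 0:                              # left neighbour unavailable:
        return 2 * samples[0] - samples[1]     # linear extrapolation from the left edge
    return (samples[i - 1] + samples[i + 1]) / 2.0   # ordinary interpolation ("best fit")

data = [10.0, 12.0, 13.0, 15.0]
inside = estimate(data, 2)   # both neighbours available -> interpolation -> 13.5
edge = estimate(data, 3)     # right neighbour missing -> extrapolation -> 17.0
```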
  • FIG. 4 shows a data structure 400 on which the method 300 is carried out. As shown, during the transformation, a “best fit” 401 may be achieved by an interpolation formula 403 involving a plurality of data values 402. Note operation 302 of the method 300 of FIG. 3. If it is determined that one of the data values 402 is unavailable (see 404), an extrapolation formula may be used to generate such unavailable data value. More optional details regarding one exemplary implementation of the foregoing technique will be set forth in greater detail during reference to FIG. 5.
  • FIG. 5 illustrates a method 500 for compressing/decompressing data, in accordance with one embodiment. As an option, the present method 500 may be carried out in the context of the transform module 102 of FIG. 1 and the manner in which it carries out a reversible transform. It should be noted, however, that the method 500 may be implemented in any desired context.
  • The method 500 provides a technique for generating edge filters for a wavelet filter pair. Initially, in operation 502, a wavelet scheme is analyzed to determine local derivatives that a wavelet filter approximates. Next, in operation 504, a polynomial order is chosen to use for extrapolation based on characteristics of the wavelet filter and the number of available samples. Next, extrapolation formulas are derived for each wavelet filter using the chosen polynomial order. See operation 506. Still yet, in operation 508, specific edge wavelet cases are derived utilizing the extrapolation formulas with the available samples in each case.
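The derivation steps of operations 502-508 can be sketched numerically. The helper below is an illustrative assumption: it fits a polynomial of the chosen order through the available sample positions and reads off the extrapolated value. For order 1 it returns the weights (-1, 2), i.e. an estimate of 2·Y[2N-1] − Y[2N-3] for the missing odd coefficient, which, substituted into Equation #1.2, produces the 3·Y[2N-1] − Y[2N-3] numerator of Equation #1.2.R below.

```python
import numpy as np

def extrapolation_weights(order, positions, target):
    """Weights w such that sum(w * samples) evaluates the fitted polynomial at `target`."""
    V = np.vander(np.asarray(positions, dtype=float), order + 1, increasing=True)
    poly_from_samples = np.linalg.inv(V)          # samples = V @ poly_coeffs
    t = np.array([float(target) ** k for k in range(order + 1)])
    return t @ poly_from_samples                  # value at target = t @ poly_coeffs

# Available odd coefficients sit at relative positions -3 and -1 (i.e. indices 2N-3 and 2N-1);
# the missing one sits at +1 (i.e. index 2N+1).
weights = extrapolation_weights(order=1, positions=[-3, -1], target=+1)
# weights == [-1.0, 2.0]: the missing Y is estimated as 2*Y[2N-1] - Y[2N-3].
```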
  • Moreover, additional optional information regarding exemplary extrapolation formulas and related information will now be set forth in greater detail.
  • One of the transforms specified in the JPEG 2000 standard is the reversible 5-3 transform shown in Equations #1.1 and 1.2.
  • Equations #1.1 and #1.2: $Y_{2n+1} = X_{2n+1} - \left\lfloor \frac{X_{2n} + X_{2n+2}}{2} \right\rfloor$ (eq 1.1); $Y_{2n} = X_{2n} + \left\lfloor \frac{Y_{2n-1} + Y_{2n+1} + 2}{4} \right\rfloor$ (eq 1.2)
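To make the lifting structure concrete, here is a direct transcription of Equations #1.1 and #1.2 for interior samples only; Python floor division stands in for the integer rounding of the reversible transform, and the positions that would need the boundary filters discussed next are deliberately left unset.

```python
def forward_53_interior(X):
    Y = [None] * len(X)
    # eq 1.1: odd (high-pass) coefficients, each needing X[2n] and X[2n+2]
    for i in range(1, len(X) - 1, 2):
        Y[i] = X[i] - (X[i - 1] + X[i + 1]) // 2
    # eq 1.2: even (low-pass) coefficients, each needing Y[2n-1] and Y[2n+1]
    for i in range(2, len(X) - 1, 2):
        if Y[i - 1] is not None and Y[i + 1] is not None:
            Y[i] = X[i] + (Y[i - 1] + Y[i + 1] + 2) // 4
    return Y

coeffs = forward_53_interior([10, 12, 15, 13, 9, 8, 8, 11])
# Entries that remain None are exactly the positions that require the edge filters below.
```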
  • To approximate Y2N-1 from the left, one may fit a quadratic polynomial from the left. Approximating the negative of half the 2nd derivative at 2N−1 using the available values yields Equation #1.1.R.
  • Equation #1.1.R: $Y_{2N-1} = -\frac{1}{3}\left( X_{2N-1} - \left\lfloor \frac{3X_{2N-2} - X_{2N-4} + 1}{2} \right\rfloor \right)$ (eq 1.1.R)
  • Equation #1.1.R may be used in place of Equation #1.1 when point one is right-most. The apparent multiply by 3 can be accomplished with a shift and add. The division by 3 is trickier. For this case where the right-most index is 2N−1, there is no problem calculating Y2N−2 by means of Equation #1.2. In the case where the index of the right-most point is even (say 2N), there is no problem with Equation #1.1, but Equation #1.2 involves missing values. Here the object is to subtract an estimate of Y from the even X using just the previously calculated odd-indexed Ys, Y1 and Y3 in the case in point. This required estimate at index 2N can be obtained by linear extrapolation, as noted above. The appropriate formula is given by Equation #1.2.R.
  • Equation #1.2.R: $Y_{2N} = X_{2N} + \left\lfloor \frac{3Y_{2N-1} - Y_{2N-3} + 2}{4} \right\rfloor$ (eq 1.2.R)
  • A corresponding situation applies at the left boundary. Similar edge filters apply with the required extrapolations from the right (interior) rather than from the left. In this case, the appropriate filters are represented by Equations #1.1.L and 1.2.L.
  • Equations #1.1.L and #1.2.L: $Y_{0} = -\frac{1}{3}\left( X_{0} - \left\lfloor \frac{3X_{1} - X_{3} + 1}{2} \right\rfloor \right)$ (eq 1.1.L); $Y_{0} = X_{0} + \left\lfloor \frac{3Y_{1} - Y_{3} + 2}{4} \right\rfloor$ (eq 1.2.L)
  • The reverse transform filters can be obtained for these extrapolating boundary filters as for the original ones, namely by back substitution. The inverse transform boundary filters may be used in place of the standard filters in exactly the same circumstances as the forward boundary filters are used. Such filters are represented by Equations #2.1.Rinv, 2.2.Rinv, 2.1.L.inv, and 2.2.L.inv.
  • Equations #2.1.Rinv, #2.2.Rinv, #2.1.L.inv, and #2.2.L.inv: $X_{2N-1} = -3Y_{2N-1} + \left\lfloor \frac{3X_{2N-2} - X_{2N-4} + 1}{2} \right\rfloor$ (eq 2.1.Rinv); $X_{2N} = Y_{2N} - \left\lfloor \frac{3Y_{2N-1} - Y_{2N-3} + 2}{4} \right\rfloor$ (eq 2.2.Rinv); $X_{0} = -3Y_{0} + \left\lfloor \frac{3X_{1} - X_{3} + 1}{2} \right\rfloor$ (eq 2.1.L.inv); $X_{0} = Y_{0} - \left\lfloor \frac{3Y_{1} - Y_{3} + 2}{4} \right\rfloor$ (eq 2.2.L.inv)
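Under the reconstruction of these boundary equations given above, the following short check (hypothetical helper names; exact rational arithmetic used only to sidestep the division by 3) confirms that the right-edge inverse filters undo the right-edge forward filters by back substitution.

```python
from fractions import Fraction

def forward_right_odd(x_2N_1, x_2N_2, x_2N_4):
    # eq 1.1.R: Y[2N-1] = -1/3 * (X[2N-1] - floor((3*X[2N-2] - X[2N-4] + 1) / 2))
    return Fraction(-(x_2N_1 - (3 * x_2N_2 - x_2N_4 + 1) // 2), 3)

def inverse_right_odd(y_2N_1, x_2N_2, x_2N_4):
    # eq 2.1.Rinv: X[2N-1] = -3*Y[2N-1] + floor((3*X[2N-2] - X[2N-4] + 1) / 2)
    return -3 * y_2N_1 + (3 * x_2N_2 - x_2N_4 + 1) // 2

x_2N_1, x_2N_2, x_2N_4 = 14, 12, 7
y = forward_right_odd(x_2N_1, x_2N_2, x_2N_4)
assert inverse_right_odd(y, x_2N_2, x_2N_4) == x_2N_1

# eq 1.2.R and eq 2.2.Rinv round-trip for the even right-most sample case:
x_2N, y_prev_1, y_prev_3 = 20, 5, 2
y_2N = x_2N + (3 * y_prev_1 - y_prev_3 + 2) // 4           # eq 1.2.R
assert y_2N - (3 * y_prev_1 - y_prev_3 + 2) // 4 == x_2N   # eq 2.2.Rinv
```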
  • Thus, one embodiment may utilize a reformulation of the 5-3 filters that avoids the addition steps of the prior art while preserving the visual properties of the filter. See for example, Equations #3.1, 3.1R, 3.2, 3.2L.
  • Equations #3.1, #3.1R, #3.2, and #3.2L: $Y_{2n+1} = (X_{2n+1} + \tfrac{1}{2}) - \frac{(X_{2n} + \frac{1}{2}) + (X_{2n+2} + \frac{1}{2})}{2}$ (eq 3.1); $Y_{2N+1} = (X_{2N+1} + \tfrac{1}{2}) - (X_{2N} + \tfrac{1}{2})$ (eq 3.1R); $(Y_{2n} + \tfrac{1}{2}) = (X_{2n} + \tfrac{1}{2}) + \frac{Y_{2n-1} + Y_{2n+1}}{4}$ (eq 3.2); $(Y_{0} + \tfrac{1}{2}) = (X_{0} + \tfrac{1}{2}) + \frac{Y_{1}}{2}$ (eq 3.2L)
  • In such formulation, certain coefficients are computed with an offset or bias of ½, in order to avoid the additions mentioned above. It is to be noted that, although there appear to be many additions of ½ in this formulation, these additions need not actually occur in the computation. In Equations #3.1 and 3.1R, it can be seen that the effects of the additions of ½ cancel out, so they need not be applied to the input data. Instead, the terms in parentheses (Y0+½) and the like may be understood as names for the quantities actually calculated and stored as coefficients, passed to the following level of the wavelet transform pyramid.
  • Just as in the forward case, the JPEG-2000 inverse filters can be reformulated in the following Equations #4.2, 4.2L, 4.1, 4.1R.
  • Equations #4.2, #4.2L, #4.1, and #4.1R: $(X_{2n} + \tfrac{1}{2}) = (Y_{2n} + \tfrac{1}{2}) - \frac{Y_{2n-1} + Y_{2n+1}}{4}$ (eq 4.2); $(X_{0} + \tfrac{1}{2}) = (Y_{0} + \tfrac{1}{2}) - \frac{Y_{1}}{2}$ (eq 4.2L); $(X_{2n+1} + \tfrac{1}{2}) = Y_{2n+1} + \frac{(X_{2n} + \frac{1}{2}) + (X_{2n+2} + \frac{1}{2})}{2}$ (eq 4.1); $(X_{2N+1} + \tfrac{1}{2}) = Y_{2N+1} + (X_{2N} + \tfrac{1}{2})$ (eq 4.1R)
  • As can be seen here, the values taken as input to the inverse computation are the same terms produced by the forward computation in Equations #3.1-3.2L and the corrections by ½ need never be calculated explicitly.
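The cancellation described here can be checked with a small sketch. The following is an illustration under the reconstruction of Equations #3.x and #4.x given above; exact rationals are used so that the integer rounding of the reversible transform can be ignored while showing that the stored quantities round-trip and that the ½ offsets never have to be applied to the data.

```python
from fractions import Fraction

def forward_reformulated(X):
    X = [Fraction(x) for x in X]
    n_pairs = len(X) // 2
    Yodd = [None] * n_pairs              # Y[2n+1]
    Ylow = [None] * n_pairs              # the stored quantity (Y[2n] + 1/2)
    for n in range(n_pairs - 1):                                  # eq 3.1
        Yodd[n] = X[2 * n + 1] - (X[2 * n] + X[2 * n + 2]) / 2
    Yodd[n_pairs - 1] = X[-1] - X[-2]                             # eq 3.1R (right edge)
    Ylow[0] = (X[0] + Fraction(1, 2)) + Yodd[0] / 2               # eq 3.2L (left edge)
    for n in range(1, n_pairs):                                   # eq 3.2
        Ylow[n] = (X[2 * n] + Fraction(1, 2)) + (Yodd[n - 1] + Yodd[n]) / 4
    return Ylow, Yodd

def inverse_reformulated(Ylow, Yodd):
    n_pairs = len(Ylow)
    Xlow = [None] * n_pairs              # the stored quantity (X[2n] + 1/2)
    Xlow[0] = Ylow[0] - Yodd[0] / 2                               # eq 4.2L
    for n in range(1, n_pairs):                                   # eq 4.2
        Xlow[n] = Ylow[n] - (Yodd[n - 1] + Yodd[n]) / 4
    X = []
    for n in range(n_pairs):
        X.append(Xlow[n] - Fraction(1, 2))       # the implicit +1/2 bias drops out here
        if n + 1 < n_pairs:                                       # eq 4.1
            X.append(Yodd[n] + (Xlow[n] + Xlow[n + 1]) / 2 - Fraction(1, 2))
        else:                                                     # eq 4.1R
            X.append(Yodd[n] + Xlow[n] - Fraction(1, 2))
    return X

samples = [10, 12, 15, 13, 9, 8]
low, odd = forward_reformulated(samples)
assert inverse_reformulated(low, odd) == samples   # the 1/2 offsets cancel exactly
```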
  • In this way, the total number of arithmetic operations performed during the computation of the wavelet transform is reduced.
  • While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (4)

1. A method for compressing video data, comprising:
compressing video data;
inserting pause information with the compressed data; and
wherein the pause information is used when the video data is paused during the playback thereof.
2. The method as recited in claim 1, wherein the pause information is used to improve a quality of the played back video data.
3. The method as recited in claim 2, wherein the pause information includes a high-resolution frame.
4. The method as recited in claim 2, wherein the pause information includes data capable of being used to construct a high-resolution frame.
US12/955,549 2002-06-21 2010-11-29 Chroma temporal rate reduction and high-quality pause system and method Abandoned US20120106621A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/955,549 US20120106621A1 (en) 2002-06-21 2010-11-29 Chroma temporal rate reduction and high-quality pause system and method

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US39034502P 2002-06-21 2002-06-21
US39049202P 2002-06-21 2002-06-21
US10/418,363 US20030198395A1 (en) 2002-04-19 2003-04-17 Wavelet transform system, method and computer program product
US10/447,514 US7844122B2 (en) 2002-06-21 2003-05-28 Chroma temporal rate reduction and high-quality pause system and method
US12/955,549 US20120106621A1 (en) 2002-06-21 2010-11-29 Chroma temporal rate reduction and high-quality pause system and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/447,514 Division US7844122B2 (en) 2002-04-19 2003-05-28 Chroma temporal rate reduction and high-quality pause system and method

Publications (1)

Publication Number Publication Date
US20120106621A1 true US20120106621A1 (en) 2012-05-03

Family

ID=29740855

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/447,514 Expired - Fee Related US7844122B2 (en) 2002-04-19 2003-05-28 Chroma temporal rate reduction and high-quality pause system and method
US12/955,549 Abandoned US20120106621A1 (en) 2002-06-21 2010-11-29 Chroma temporal rate reduction and high-quality pause system and method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/447,514 Expired - Fee Related US7844122B2 (en) 2002-04-19 2003-05-28 Chroma temporal rate reduction and high-quality pause system and method

Country Status (1)

Country Link
US (2) US7844122B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060072837A1 (en) * 2003-04-17 2006-04-06 Ralston John D Mobile imaging application, device architecture, and service platform architecture
US7861007B2 (en) * 2003-12-05 2010-12-28 Ati Technologies Ulc Method and apparatus for multimedia display in a mobile device
CA2583603A1 (en) * 2004-10-12 2006-04-20 Droplet Technology, Inc. Mobile imaging application, device architecture, and service platform architecture

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4819059A (en) * 1987-11-13 1989-04-04 Polaroid Corporation System and method for formatting a composite still and moving image defining electronic information signal
US6081278A (en) * 1998-06-11 2000-06-27 Chen; Shenchang Eric Animation object having multiple resolution format
US6268864B1 (en) * 1998-06-11 2001-07-31 Presenter.Com, Inc. Linking a video and an animation
US6278466B1 (en) * 1998-06-11 2001-08-21 Presenter.Com, Inc. Creating animation from a video
US20030145338A1 (en) * 2002-01-31 2003-07-31 Actv, Inc. System and process for incorporating, retrieving and displaying an enhanced flash movie
US7023924B1 (en) * 2000-12-28 2006-04-04 Emc Corporation Method of pausing an MPEG coded video stream

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4561012A (en) * 1983-12-27 1985-12-24 Rca Corporation Pre-emphasis and de-emphasis filters for a composite NTSC format video signal
US4979041A (en) * 1988-01-28 1990-12-18 Massachusetts Institute Of Technology High definition television system
US5003377A (en) * 1989-01-12 1991-03-26 Massachusetts Institute Of Technology Extended definition television systems
US6195465B1 (en) * 1994-09-21 2001-02-27 Ricoh Company, Ltd. Method and apparatus for compression using reversible wavelet transforms and an embedded codestream
US6144773A (en) * 1996-02-27 2000-11-07 Interval Research Corporation Wavelet-based data compression
US5893145A (en) * 1996-12-02 1999-04-06 Compaq Computer Corp. System and method for routing operands within partitions of a source register to partitions within a destination register
US5909572A (en) * 1996-12-02 1999-06-01 Compaq Computer Corp. System and method for conditionally moving an operand from a source register to a destination register
JPH10224789A (en) * 1997-02-07 1998-08-21 Matsushita Electric Ind Co Ltd Image data processor and its method
EP0907255A1 (en) * 1997-03-28 1999-04-07 Sony Corporation Data coding method and device, data decoding method and device, and recording medium
DE69811072T2 (en) * 1997-05-30 2004-01-15 Interval Research Corp METHOD AND DEVICE FOR DATA COMPRESSION BASED ON WAVELETS
US6125201A (en) * 1997-06-25 2000-09-26 Andrew Michael Zador Method, apparatus and system for compressing data
US6360021B1 (en) * 1998-07-30 2002-03-19 The Regents Of The University Of California Apparatus and methods of image and signal processing
US6272180B1 (en) * 1997-11-21 2001-08-07 Sharp Laboratories Of America, Inc. Compression and decompression of reference frames in a video decoder
US6396948B1 (en) * 1998-05-14 2002-05-28 Interval Research Corporation Color rotation integrated with compression of video signal
US6229929B1 (en) * 1998-05-14 2001-05-08 Interval Research Corporation Border filtering of video signal blocks
US6516030B1 (en) * 1998-05-14 2003-02-04 Interval Research Corporation Compression of combined black/white and color video signal
US6407747B1 (en) * 1999-05-07 2002-06-18 Picsurf, Inc. Computer screen image magnification system and method
GB9919805D0 (en) * 1999-08-21 1999-10-27 Univ Manchester Video cording
US6650773B1 (en) * 2000-09-29 2003-11-18 Hewlett-Packard Development Company, L.P. Method including lossless compression of luminance channel and lossy compression of chrominance channels

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4819059A (en) * 1987-11-13 1989-04-04 Polaroid Corporation System and method for formatting a composite still and moving image defining electronic information signal
US6081278A (en) * 1998-06-11 2000-06-27 Chen; Shenchang Eric Animation object having multiple resolution format
US6268864B1 (en) * 1998-06-11 2001-07-31 Presenter.Com, Inc. Linking a video and an animation
US6278466B1 (en) * 1998-06-11 2001-08-21 Presenter.Com, Inc. Creating animation from a video
US7023924B1 (en) * 2000-12-28 2006-04-04 Emc Corporation Method of pausing an MPEG coded video stream
US20030145338A1 (en) * 2002-01-31 2003-07-31 Actv, Inc. System and process for incorporating, retrieving and displaying an enhanced flash movie

Also Published As

Publication number Publication date
US20030235340A1 (en) 2003-12-25
US7844122B2 (en) 2010-11-30

Similar Documents

Publication Publication Date Title
US10931961B2 (en) High dynamic range codecs
KR100253931B1 (en) Approximate mpeg decoder with compressed reference frames
US9661340B2 (en) Band separation filtering / inverse filtering for frame packing / unpacking higher resolution chroma sampling formats
US7813564B2 (en) Method for controlling the amount of compressed data
US20070160299A1 (en) Moving image coding apparatus, moving image decoding apparatus, control method therefor, computer program, and computer-readable storage medium
US7372904B2 (en) Video processing system using variable weights and variable transmission priorities
EP2144432A1 (en) Adaptive color format conversion and deconversion
US20060002611A1 (en) Method and apparatus for encoding high dynamic range video
US20070053429A1 (en) Color video codec method and system
WO2012076646A1 (en) High-dynamic range video tone mapping
US6819801B2 (en) System and method for processing demosaiced images to reduce color aliasing artifacts
US20030002742A1 (en) Image compression method and apparatus, image expansion method and apparatus, and storage medium
US8903196B2 (en) Video presentation at fractional speed factor using time domain interpolation
US7729551B2 (en) Method for controlling the amount of compressed data
US7702161B2 (en) Progressive differential motion JPEG codec
US20120106621A1 (en) Chroma temporal rate reduction and high-quality pause system and method
JPH066829A (en) Method and apparatus for compressing image data
WO2003090028A2 (en) Wavelet transform system, method and computer program product
US20030198395A1 (en) Wavelet transform system, method and computer program product
Marcellin et al. JPEG2000 for digital cinema
Apostolopoulos et al. Video compression for digital advanced television systems
Marino et al. A DWT-based perceptually lossless color image compression architecture
KR20050018659A (en) Wavelet transform system, method and computer program product
Reuss VC-5 Video Compression for Mezzanine Compression Workflows
Lawai Scalable coding of HDTV pictures using the MPEG coder

Legal Events

Date Code Title Description
AS Assignment

Owner name: INNOVATIVE COMMUNICATIONS TECHNOLOGY, INC., VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DROPLET TECHNOLOGY, INC.;REEL/FRAME:030244/0608

Effective date: 20130410

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: STRAIGHT PATH IP GROUP, INC., VIRGINIA

Free format text: CHANGE OF NAME;ASSIGNOR:INNOVATIVE COMMUNICATIONS TECHNOLOGIES, INC.;REEL/FRAME:030442/0198

Effective date: 20130418

AS Assignment

Owner name: SORYN TECHNOLOGIES LLC, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STRAIGHT PATH IP GROUP, INC.;REEL/FRAME:032169/0557

Effective date: 20140130

AS Assignment

Owner name: STRAIGHT PATH IP GROUP, INC., VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SORYN TECHNOLOGIES LLC;REEL/FRAME:035511/0492

Effective date: 20150419

AS Assignment

Owner name: CLUTTERBUCK CAPITAL MANAGEMENT, LLC, OHIO

Free format text: SECURITY INTEREST;ASSIGNORS:STRAIGHT PATH COMMUNICATIONS INC.;DIPCHIP CORP.;STRAIGHT PATH IP GROUP, INC.;AND OTHERS;REEL/FRAME:041260/0649

Effective date: 20170206

AS Assignment

Owner name: DIPCHIP CORP., NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CLUTTERBUCK CAPITAL MANAGEMENT, LLC;REEL/FRAME:043996/0733

Effective date: 20171027

Owner name: STRAIGHT PATH ADVANCED COMMUNICATION SERVICES, LLC

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CLUTTERBUCK CAPITAL MANAGEMENT, LLC;REEL/FRAME:043996/0733

Effective date: 20171027

Owner name: STRAIGHT PATH SPECTRUM, INC., NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CLUTTERBUCK CAPITAL MANAGEMENT, LLC;REEL/FRAME:043996/0733

Effective date: 20171027

Owner name: STRAIGHT PATH SPECTRUM, LLC, NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CLUTTERBUCK CAPITAL MANAGEMENT, LLC;REEL/FRAME:043996/0733

Effective date: 20171027

Owner name: STRAIGHT PATH COMMUNICATIONS INC., NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CLUTTERBUCK CAPITAL MANAGEMENT, LLC;REEL/FRAME:043996/0733

Effective date: 20171027

Owner name: STRAIGHT PATH VENTURES, LLC, NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CLUTTERBUCK CAPITAL MANAGEMENT, LLC;REEL/FRAME:043996/0733

Effective date: 20171027

Owner name: STRAIGHT PATH IP GROUP, INC., NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CLUTTERBUCK CAPITAL MANAGEMENT, LLC;REEL/FRAME:043996/0733

Effective date: 20171027