US20100316292A1 - Remote sensing imagery accuracy analysis method and apparatus - Google Patents


Info

Publication number
US20100316292A1
US20100316292A1
Authority
US
United States
Prior art keywords
image
band
computing
quality metrics
file
Prior art date
Legal status
Abandoned
Application number
US12/802,448
Inventor
Charles G. O'Hara
Anil CHERIYADAT
Suyoung SEO
Bijay SHRESTHA
Veeraraghavan VIJAYARAJ
Nicolas H. Younan
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to US12/802,448
Publication of US20100316292A1
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/63 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10041 Panchromatic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30181 Earth observation

Definitions

  • the present invention relates generally to image sensing and image treatment methods and systems and more particularly to image accuracy analysis.
  • An aspect of the present invention is to provide a method of enhancing a resolution of an image by fusing images.
  • the method includes applying a principal component analysis to a multispectral image to obtain a plurality of principal components, and replacing a first component in the plurality of principal components by a panchromatic image.
  • the method further includes resampling remaining principal components to a resolution of the panchromatic image, and applying an inverse principal analysis to the panchromatic image and the remaining principal components to obtain a fused image of the panchromatic image and the multispectral image.
  • Another aspect of the present invention is to provide a method of pansharpening an image by fusing images.
  • the method includes applying a wavelet-based pansharpening to a plurality of bands in a multispectral image and a panchromatic image to obtain a pansharpened image, and computing quality metrics on the pansharpened image.
  • a further aspect of the present invention is to provide a method of compressing and decompressing an image.
  • the method includes preprocessing an image, applying a discrete wavelet transform on the preprocessed image to decompose the preprocessed image into a plurality of sub-bands, and applying a quantization to each sub-band in the plurality of sub-bands.
  • the method further includes partitioning the plurality of sub-bands into a plurality of code-blocks, encoding each code-block in the plurality of code-blocks independently to obtain a code-blocks stream, applying a rate control process to the code-blocks stream to obtain a bit-stream; and organizing the bit-stream to obtain a compressed image.
  • the method may further include transforming the compressed image using embedded block decoding to obtain embedded decoded block data, re-composing the embedded decoded block data using an inverse discrete wavelet decomposition process, performing a dequantization by assigning a single quantum value to a range of values to obtain a dequantized data, and performing a decoding process on the dequantized data to substantially reconstruct the image.
  • Another aspect of the present invention is to provide a method of inserting geolocation into a JP2 file.
  • the method includes inputting a GeoTIFF image file, extracting a GeoTIFF header that contains references to geographic metadata, creating a degenerated GeoTIFF image using the extracted geographic metadata, performing a geographic markup language (GML) conversion, inserting the degenerated GeoTIFF image into a universally unique identifier (UUID) box of the JP2 file, inserting the geographic markup language into an extensible markup language (XML) box of the JP2 file, and compressing the JP2 file using JPEG2000 image compression to obtain a GeoJPEG2000 image file.
  • Yet another aspect of the present invention is to provide a method for analyzing and estimating horizontal accuracy in imaging systems.
  • the method includes inputting image locations and true locations in x and y directions, calculating a root mean square error in both x and y directions, and computing a horizontal accuracy by using the root mean square error in the x and y directions.
  • FIG. 1 shows a flow diagram of the principal component analysis method for fusing a multispectral and a panchromatic image, according to an embodiment of the present invention
  • FIG. 2 shows the original multispectral image
  • FIG. 3 shows the panchromatic image
  • FIG. 4 shows the fused image resulting from the fusion process
  • FIGS. 5A-5H show histograms of four bands in both multispectral and fused images
  • FIG. 6 shows a correlation plot of NDVI values for the fused image versus the NDVI values for the multispectral image
  • FIG. 7A shows a multispectral image data
  • FIG. 7B shows a panchromatic image data
  • FIG. 7C shows a wavelet sharpened image
  • FIG. 8 shows a flow diagram of the JPEG2000 encoding process, according to an embodiment of the invention.
  • FIG. 9 shows a flow diagram of the JPEG2000 decoding process, according to an embodiment of the invention.
  • FIG. 10 shows a structure of a JP2 file, according to an embodiment of the invention.
  • FIG. 11 shows a process flow diagram for inserting geolocation information in JPEG2000 images, according to an embodiment of the present invention
  • FIG. 12 shows a process flow diagram for compressing/decompressing an image and computing quality metrics, according to an embodiment of the present invention
  • FIG. 13A depicts an original image
  • FIGS. 13B-13I depict the original image compressed at various compression ratios
  • FIG. 14A depicts the original zoomed image of the original image depicted in FIG. 13A ;
  • FIGS. 14B-14K depict the original zoomed image compressed at various compression ratios
  • FIG. 15 is a plot of mean square error values versus compression ratio for various bands in images shown in FIGS. 13A-13I and 14A-14K;
  • FIG. 16 is a plot of root mean square values versus compression ratio for various bands in the images shown in FIGS. 13A-13I and 14A-14K;
  • FIG. 17 is a plot of peak signal to noise ratios versus compression ratio for bands in the images shown in FIGS. 13A-13I and 14A-14K;
  • FIG. 18 is a plot of correlation between bands versus compression ratio
  • FIG. 19 depicts a graphical user interface of CE 90 toolkit, according to an embodiment of the present invention.
  • FIG. 20 depicts a data entry form of the CE 90 toolkit, according to an embodiment of the present invention.
  • FIG. 21 depicts a data list window allowing a user to select data points by highlighting them on the window
  • FIG. 22 shows an example with a circle representing the CE 90 radius calculated according to the horizontal accuracy standard
  • FIG. 23 shows the offset vector plot generated by the CE 90 toolkit.
  • Image quality assessment can play a role in many imaging applications, including remote sensing applications, where characterization of both spatial and spectral variations may be needed.
  • An example use of image quality metrics is to quantitatively measure the quality of an image in a way that correlates with perceived quality.
  • Image quality metrics can also be used to benchmark different image processing algorithms by comparing objective metrics.
  • Assessment measures can be divided into subjective and objective groups. Subjective measures are obtained from ratings by human observers, whose judgments may differ. Due to this and many other inherent drawbacks associated with subjective measures, quantitative or objective measures are used. A reliable image quality metric would provide a consistent, accurate and monotonic prediction of the quality of an image.
  • in remote sensing, multispectral (MS) images are widely used for land cover classification, change detection and many other applications.
  • a number of preprocessing steps may be used prior to applying any classification or segmentation algorithm.
  • the preprocessing steps may include resampling, atmospheric correction, sharpening, filtering, etc.
  • the effects of the preprocessing can degrade the performance of the segmentation or classification. Hence, assessing a quality of the image may be needed after any such preprocessing step.
  • the mean square error (MSE) may be used to evaluate the quality of an image. The MSE refers to the average of the sum of squares of the error between two images.
  • the MSE quantifies the amount of difference in the energy of signals. Although the mean square error metric may have some limitations when used as a global measure of image quality, it can be effective in predicting the image quality accurately when used as a local measure.
  • the root mean square error (RMSE) is the square root of the MSE and quantifies the average amount of distortion in each pixel of an image.
  • MSE and RMSE give an account of the spectral fidelity of an image.
  • Another metric that may be used is the correlation function.
  • the closeness between two images can be quantified in terms of a correlation function. Correlation coefficients range from −1 to +1. A correlation coefficient value of +1 indicates that the two images are highly correlated, i.e., very close to one another. A correlation coefficient of −1 indicates that the two images are exactly opposite to each other.
  • the correlation coefficient is computed as expressed in the following equation:
  • where A and B are the two images between which the correlation is computed.
  • Correlation coefficients between each band of the multispectral image before and after a processing step quantify the spectral quality of the image after processing.
  • a further metric that may be used is the mean value of the pixels in a band.
  • the mean value of pixels in a band of an image is the central value of the distribution of the pixels in that band of the image.
  • a relative shift in the mean value quantifies the changes in the histogram of the image that may be the result of processing the image.
  • the changes in the standard deviation of the distribution can also be considered in addition to the shift in mean.
  • the relative shift in the mean value can be expressed mathematically as follows:
  • the entropy is defined as the amount of information contained in a signal.
  • the entropy of an image can be evaluated as shown in the following equation:
  • where d is the number of gray levels possible and p(d_i) is the probability of occurrence of a particular gray level d_i in the image.
  • the image noise index (INI) is based on entropy.
  • An image may undergo processing to obtain a processed image.
  • the image may also be reconstructed from the processed image.
  • a reconstructed image is obtained by going through the reverse process to get the estimate of the original multispectral image from the processed image.
  • Entropy may be defined for each of the images prior to processing (entropy of the original image is x), after processing (the entropy of the processed image is y) and after reconstruction from the processed image (the entropy of the reconstructed image is z).
  • the image noise index (INI) is expressed in terms of the two quantities (y − x) and (x − z):
  • $$\mathrm{INI}=\frac{y-x}{\left|x-z\right|}-1\qquad(7)$$
  • a positive value of INI indicates an improvement in the information content and a negative value of INI indicates degradation of information in the processed image.
  • Another metric is the normalized difference vegetation index (NDVI):
  • $$\mathrm{NDVI}=\frac{NIR-R}{NIR+R}\qquad(8)$$
  • where NIR is a near infrared band pixel value and R is a red band pixel value.
  • the NDVI varies between +1 and −1. A value closer to +1 indicates dense vegetation. An NDVI value very close to zero represents water. NDVI is an important feature that is used to distinguish between many classes. This metric quantifies the variations in the NDVI due to any preprocessing; a minimal computation sketch follows.
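As an illustration only, here is a minimal NumPy sketch of equation (8); the floating-point cast and the division-by-zero guard are assumptions added for robustness, not part of the original disclosure.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, equation (8)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Avoid division by zero where NIR + R == 0 (an added safeguard).
    denom[denom == 0] = np.finfo(np.float64).eps
    return (nir - red) / denom
```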
  • image resampling is performed on an image as an example of a process that can affect the quality of an image.
  • the image resampling is applied to an image in order to study the effects of resampling on the image quality and to evaluate the image quality using various metrics.
  • Image resampling is a technique often used in remote sensing applications. Resampling techniques are used to estimate pixel values in between available samples. Many different resampling techniques such as nearest neighbor (NN), linear interpolation and cubic convolution can be used. One objective of resampling techniques is to minimize the residual difference between the actual and predicted pixel values.
  • the spectral and spatial quality of the nearest neighbor and cubic convolution methods are compared.
  • in the nearest neighbor method, the intermediate pixel values are replicated from adjacent pixel values. This introduces no spectral distortion, but sharp edges of the image are distorted and spatial artifacts can be observed.
  • the cubic convolution method fits a polynomial of degree three to compute the intermediate pixel value. This method introduces some spectral distortion, but its spatial quality is better than that of nearest neighbor.
  • the correlation among the spectral bands can also be computed in the original and resampled multispectral images. In most applications the spectral quality should be preserved after resampling. Hence, the correlation among the spectral bands for the resampled images and the original multispectral bands is expected to match; a minimal check of this expectation is sketched below.
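The following sketch, assuming SciPy is available, mimics that comparison: spline order 0 stands in for nearest neighbor and order 3 for cubic convolution (SciPy's zoom uses spline interpolation, which is only an approximation of the cubic convolution kernel), and the 1:4 zoom factor reflects the Quickbird MS-to-PAN ratio used as an example below.

```python
import numpy as np
from scipy.ndimage import zoom

def band_correlations(bands: np.ndarray) -> np.ndarray:
    """Correlation matrix among spectral bands; bands has shape (B, H, W)."""
    return np.corrcoef(bands.reshape(bands.shape[0], -1))

def resample(ms: np.ndarray, factor: int, order: int) -> np.ndarray:
    """order=0 -> nearest neighbor, order=3 -> cubic (spline) resampling."""
    return np.stack([zoom(band, factor, order=order) for band in ms])

# Example usage (ms is a (B, H, W) multispectral array):
# nn, cubic = resample(ms, 4, 0), resample(ms, 4, 3)
# Compare band_correlations(ms) with band_correlations(nn) and
# band_correlations(cubic); NN should match the original exactly.
```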
  • co-registered subsets of Quickbird multispectral (MS) and panchromatic (PAN) images were considered.
  • the original multispectral image is resampled to the resolution of the panchromatic image.
  • TABLE 1 shows the values of the correlation coefficients computed.
  • the nearest neighbor (NN) method has the same correlation as that of the original multispectral. This can be explained by the fact that the pixel values are not changed during the nearest neighbor (NN) resampling method.
  • the cubic convolution method introduces some spectral distortion.
  • the correlation between the panchromatic image (PAN) and the resampled multispectral bands is also computed.
  • the cubic convolution technique provides higher correlation with the panchromatic image, which indicates that it has good spatial quality compared to the NN resampling technique.
  • TABLE 2 indicates the correlation values computed.
  • Image fusion algorithms improve the low spatial resolution of multispectral images using the spatial information from the corresponding panchromatic image. These image fusion algorithms can be used as a preprocessing step before feature extraction or classification is done on a multispectral image. Image fusion algorithms are also called pansharpening. Pansharpening combines information from a multispectral image and spatial information from a panchromatic image into a single fused image. The single image has both high spectral and spatial resolutions. The high spatial and spectral resolutions help to enhance features, provide detailed information about targets or objects in the image and improve classification accuracy.
  • Pansharpening algorithms include intensity-hue-saturation (IHS) sharpening, principal component analysis (PCA) sharpening, Brovey sharpening, multiplicative sharpening and color normalized sharpening.
  • quality metrics are applied to an image fused using the principal component analysis (PCA) method.
  • FIG. 1 shows a flow diagram of the principal component analysis (PCA) method for fusing the multispectral and panchromatic image, according to an embodiment of the present invention.
  • PCA transform 11 can be applied to the original multispectral image MS 12 .
  • the PCA 11 transforms the image into a set of uncorrelated data PC 1 , PC 2 , PC 3 , etc., at 13 .
  • the first principal component PC 1 has the maximum variance or most of the information.
  • the first principal component PC 1 is replaced by the panchromatic image PAN 14 .
  • the remaining principal components PC 2 , PC 3 , etc. are resampled to the panchromatic resolution, at 15 , and the inverse PCA transform is applied to get back into image domain, at 16 .
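A compact NumPy sketch of this flow is given below. For brevity the multispectral bands are assumed to be already resampled to the PAN grid, so the PCA and its inverse operate at one resolution (a slight rearrangement of the flow in FIG. 1, where the remaining components are resampled after the transform); matching PAN to the mean and standard deviation of PC1 before substitution is a common refinement assumed here, not a step stated in the figure.

```python
import numpy as np

def pca_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """PCA pansharpening: ms is (B, H, W) on the PAN grid, pan is (H, W)."""
    B, H, W = ms.shape
    X = ms.reshape(B, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    # Eigenvectors of the band covariance define the PCA transform (11).
    vals, vecs = np.linalg.eigh(np.cov(X))
    E = vecs[:, np.argsort(vals)[::-1]]        # PC1 (max variance) first
    pcs = E.T @ (X - mean)                     # uncorrelated data (13)
    p = pan.reshape(-1).astype(np.float64)
    # Match PAN to PC1 statistics before substitution (assumed step).
    p = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
    pcs[0] = p                                 # replace PC1 with PAN (14)
    fused = E @ pcs + mean                     # inverse PCA transform (16)
    return fused.reshape(B, H, W)
```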
  • FIG. 2 shows the original multispectral MS image.
  • FIG. 3 shows the panchromatic PAN image and
  • FIG. 4 shows the result of the fusion process.
  • the MSE and RMSE values are computed between each band of the fused and original multispectral image. The values are shown in TABLE 4.
  • TABLE 4 shows that the DN values have changed substantially in band 4 due to sharpening.
  • the DN values in the other bands are also substantially distorted.
  • the quality of the fused image cannot be based solely on this metric, i.e., solely based on MSE and/or RMSE, because some of the newly-added information might be useful information.
  • the correlation between different band combinations is computed to quantify the spectral and spatial quality of the image.
  • TABLE 5 shows the value of the correlation coefficients computed between the four spectral bands. It is expected that the correlation for the fused bands should be as close as possible to that of the original multispectral bands to ensure preservation of spectral information. In addition, it is expected that the fusion process should not increase the correlation between the spectral bands.
  • the values in TABLE 5 indicate that there are variations in the spectral information of the fused image.
  • the fused image does not improve the performance of spectral based classification.
  • the correlation is also computed between each band of the original multispectral and panchromatic image. For this combination, the correlation of the fused image should be higher because the fused image has more spatial information compared to the panchromatic image.
  • correlation values between each band of the MS image and the PAN image are computed for both the original MS image and the fused image.
  • the correlation values are reported in TABLE 6.
  • the correlation values indicate that the fused image has more useful spatial information compared to the original multispectral MS image, as there is an increase in the corresponding values.
  • the correlation values are better indicators of the spatial and spectral quality of the image.
  • the relative shift mean (in %) for each band of the fused image is computed.
  • the relative shift in mean of each band of the fused image helps to visualize the changes in the histogram.
  • a positive shift in mean indicates the shift towards white and a negative shift indicates shift towards gray.
  • the computed values are shown in the TABLE 7.
  • the histogram of band 3 has shifted substantially compared to the other bands.
  • the histograms are also plotted to aid visual comparison.
  • FIGS. 5A-5H show the histograms of the four bands in both multispectral and PCA fused images.
  • FIGS. 5A-5D show histograms of the four bands (band 1 , band 2 , band 3 , and band 4 , respectively) in the MS image and
  • FIGS. 5E-5H show histograms of the four bands (band 1 , band 2 , band 3 , and band 4 , respectively) in the fused image.
  • the image noise index (INI) is computed for each band of the fused image as shown in TABLE 8.
  • the degradation of spectral information in all the bands is indicated by a negative INI value.
  • the information loss or unwanted information is higher in band 4 compared to other bands.
  • NDVI values are computed for a subset of vegetated area.
  • FIG. 6 shows a correlation plot of the NDVI values for the fused image versus the NDVI values for the MS image. The correlation between the NDVI values of the multispectral and fused image is found equal to approximately 0.6696.
  • the PAN image dominates the spatial information, as shown by the high correlation of the fused bands with the panchromatic image. This is because the PAN replaces the first principal component, which contains most of the information. In addition, spectral information is lost compared to the multispectral image, which is reflected in the MSE, RMSE, INI and relative shift in mean.
  • the quality of an image may not be predicted accurately by considering only one of the metrics discussed above.
  • a combination of metrics allows the quality of the image to be evaluated with more precision.
  • the combination of metrics selected can vary based on the type of preprocessing and the application of the image.
  • a pansharpening and image quality interface (PSIQI) application incorporating a wavelet-based pansharpening is implemented.
  • the PSIQI application is applied on images to sharpen images and/or compute quality metrics for sharpened data.
  • the image data (i.e., multispectral image data and panchromatic image data) is co-registered. A user is able to specify a location of the multispectral image and the corresponding panchromatic image.
  • the PSIQI application performs wavelet-based sharpening on the specified image data sets. Image size and number of bits per pixel control the block size. Block processing can be used for efficient memory handling and for increasing speed. The chosen quality metrics are then computed over the entire data set and stored in a file, for example a text file.
  • the PSIQI application can be used in two modes of operation, sharpening and quality metric modes.
  • the sharpening mode is used to sharpen the data using a wavelet-based method and compute the quality metrics.
  • the quality metric mode is used only to compute metrics on data sharpened using other methods. A user may select a band to be sharpened.
  • the application provides tunable sharpening by allowing the user to select different mother wavelets, by enabling initial and final histogram-matching steps, and by enabling filtering using a selected filter such as a Wiener filter. While using the quality metrics mode, the tunable options are switched off and the user can only choose bands that exist in the sharpened data.
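A sketch of one band of such a wavelet substitution scheme is shown below using PyWavelets with the bior4.4 mother wavelet mentioned later in this section. The two-level decomposition (matching a 1:4 resolution ratio), the mean/std matching used in place of full histogram matching, and the crop to the coefficient shape are all simplifying assumptions; this is not necessarily the PSIQI algorithm itself.

```python
import numpy as np
import pywt

def wavelet_sharpen(band: np.ndarray, pan: np.ndarray,
                    wavelet: str = "bior4.4", levels: int = 2) -> np.ndarray:
    """Sharpen one MS band with PAN detail sub-bands (illustrative)."""
    coeffs = pywt.wavedec2(pan.astype(np.float64), wavelet, level=levels)
    cA = coeffs[0]
    # Match the MS band to the approximation's mean/std so the
    # substituted coefficients sit on the right scale, then crop.
    b = band.astype(np.float64)
    matched = (b - b.mean()) / b.std() * cA.std() + cA.mean()
    sub = np.array(cA)
    h, w = min(cA.shape[0], b.shape[0]), min(cA.shape[1], b.shape[1])
    sub[:h, :w] = matched[:h, :w]
    coeffs[0] = sub                       # keep PAN detail, inject MS base
    out = pywt.waverec2(coeffs, wavelet)
    return out[:pan.shape[0], :pan.shape[1]]
```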
  • PSIQI application is used to sharpen a co-registered IKONOS image set in the sharpening mode.
  • bands 1 , 2 , and 3 of the data are sharpened using the wavelet-based method and quality metrics, such as the mean square error (MSE), root mean square error (RMSE), correlation metrics, and relative shift in the mean are computed.
  • a bi-orthogonal 4.4 wavelet is used as the mother wavelet, for example.
  • initial and final histogram match are applied.
  • a Wiener filter is applied on the sharpened data to remove noise due to sharpening.
  • FIG. 7A shows the multispectral image data.
  • FIG. 7B shows the panchromatic image data.
  • FIG. 7C shows the wavelet sharpened image.
  • the average change in the pixel value of band 3 is slightly higher when compared with the other bands.
  • the mean of band 3 is shifted by 0.13%.
  • the spectral quality of the sharpened image can be ascertained by comparing the correlation between each band in the image before and after sharpening.
  • the correlation values computed are shown in TABLE 10. These correlation values indicate a slight variation in the spectral information in the sharpened data and an increase in spatial information.
  • image quality evaluation methods are used on image data.
  • the image data can be very large. For example, the size of the multi-resolution Quickbird GeoTIFF image used in this study is 380 MB. Therefore, compression of the images may be needed for storing and transmitting, to save storage space and bandwidth and to lower transmission times.
  • Image compression can be performed in either lossy or lossless fashion. Lossless compression may be desirable in critical situations where any loss in image data and quality may lead to erroneous analysis. In various other applications, however, lossy compression may be preferred, as it provides a high compression ratio that results in smaller image sizes. The trade-off with lossy compression is that as the compression ratio increases, more of the spatial and spectral features of the image can be lost. Hence, it may be worthwhile to analyze the impact of image compression on image quality.
  • a JPEG2000 encoding and decoding process is used to compress and decompress images using wavelet transformation, as opposed to its predecessor JPEG, which uses the discrete cosine transform (DCT).
  • Wavelet transform-based image compression algorithms allow images to be retained without much distortion or loss when compared to JPEG, and hence are recognized as a superior method.
  • the JPEG2000 encoding and decoding process includes a JPEG2000 encoding process and a JPEG2000 decoding process. The encoding and decoding processes are divided into several stages, as can be seen from FIGS. 8 and 9 .
  • a GeoTIFF file is a Tagged Image File Format (TIFF) 6.0 file and inherits the file structure described in the corresponding portion of the TIFF specification.
  • GeoTIFF uses a small set of reserved TIFF tags to store a broad range of georeferencing information, catering to geographic as well as projected coordinate systems needs. The geographic data can then be used to position the image in the correct location and geometry on the screen of a geographic information display.
  • FIG. 8 shows a flow diagram of the JPEG2000 encoding process, according to an embodiment of the invention.
  • the JPEG2000 encoding process 20 includes pre-processing 22 , which is the first stage of encoding, followed by a discrete wavelet transform (DWT) 24 .
  • the DWT 24 is used to decompose each image tile into its high and low sub-bands and to filter each row and column of the pre-processed image tile with a high-pass and a low-pass filter.
  • multiple levels of the DWT 24 can be performed and the number of stages performed is implementation dependent.
  • a basic quantization is done using a quantizer, at 26 .
  • Quantization is defined as achieving compression by assigning a range of values to a single quantum value.
  • the sub-bands of each tile are further partitioned into relatively small code-blocks (for example, 64 ⁇ 64 samples, 32 ⁇ 32 samples, etc.) such that code blocks from a sub-band have the same size and then each code-block is encoded independently to obtain a code block stream, at 28 .
  • a rate control process is applied to the code block stream, at 30 .
  • Rate control is a process by which the code stream is altered to a bit stream so that a target bit rate can be reached.
  • the bit stream is then organized, at 32 , and compressed image data is obtained. The transform and quantization stages are sketched below.
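The sketch below illustrates only the transform and quantization stages ( 24 and 26 ) of FIG. 8 using PyWavelets; it is not a JPEG2000 codec (code-block entropy coding at 28 and rate control at 30 are omitted), and the step size of 8 is an arbitrary example.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
tile = rng.integers(0, 2048, (256, 256)).astype(np.float64)  # example tile

# Stage 24: multi-level DWT of the tile (bior4.4 approximates the 9/7
# wavelet used by lossy JPEG2000).
coeffs = pywt.wavedec2(tile, "bior4.4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Stage 26: uniform quantization, assigning a range of values to a
# single quantum value (step size 8 chosen arbitrarily here).
step = 8.0
q = np.sign(arr) * np.floor(np.abs(arr) / step)

# Decoder side (FIG. 9): dequantize, one value per quantum, and invert.
recon = pywt.waverec2(
    pywt.array_to_coeffs(q * step, slices, output_format="wavedec2"),
    "bior4.4")
```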
  • FIG. 9 shows a flow diagram of the JPEG2000 decoding process, according to an embodiment of the invention.
  • the decoding process 40 is the opposite of the encoding process 20 .
  • the compressed data is transformed using embedded block decoding, at 42 .
  • the embedded decoded block data is then re-composed using an inverse discrete wavelet decomposition (inverse DWT) 44 .
  • a dequantization is then performed at 46 .
  • the dequantization is the inverse of quantization.
  • dequantization is a decompression that allows assigning a single quantum value to a range of values.
  • an inverse ICT (irreversible color transform) is performed at 48 , which is a decoding step to substantially reconstruct the original image data.
  • FIG. 10 shows a structure of a JP2 file, according to an embodiment of the invention.
  • the JP2 file structure can be considered as a collection of boxes 50 . Some of the boxes 50 are independent, such as box 52 , and some of the boxes, such as box 54 , contain other boxes.
  • the binary structure of a file is a contiguous sequence of boxes. The start of the first box is the first byte of the file, and the last byte of the last box is the last byte of the file.
  • For example, in GeoTIFF, a Universally Unique Identifier (UUID) box, termed the GeoTIFF box, contains a specified UUID and a degenerated GeoTIFF file. "Degenerated" means a valid GeoTIFF file excluding image information.
  • the intent of having the UUID box (i.e., the GeoTIFF box) contain the valid GeoTIFF file is that any compliant GeoTIFF reader or writer would be able to read or write the image.
  • GML is an XML-based encoding standard for geographic information developed by the OpenGIS Consortium (OGC).
  • geo-location information coded in GML is stored in a non-proprietary way within the JPEG2000 eXtensible Markup Language (XML) box.
  • a JPEG2000_GeoLocation XML Element containing a RectifiedGrid construct contains the geographic information.
  • the RectifiedGrid includes an ID of "JPEG2000_GeoLocation_1" with a dimension equal to 2.
  • the origin element is also included and is provided an id of “JPEG2000_Origin.”
  • the Point specifies the coordinate of the bottom-left corner of the bottom-left cell in the image.
  • the srsName can be an immediate EPSG code. However, if an existing EPSG code is not available, the srsName refers to a full SpatialReferenceSystem element definition within a same JP2 XML box.
  • a pair of offsetVector elements define vertical and horizontal cell “step” vectors, and may include a rotation.
  • a conformant reader is usually set to ignore all other elements within the JPEG2000_GeoLocation element.
  • FIG. 11 shows a process flow diagram for inserting geolocation (e.g., geo-referencing metadata) into JPEG2000 images, according to an embodiment of the present invention.
  • a GeoTIFF image file or files is inputted at 60 .
  • a GeoTIFF header that contains the references to the geographic information is extracted from the GeoTIFF image files at 62 .
  • a degenerated GeoTIFF image is created using the extracted geographic metadata that follows restrictions imposed by the “GeoTIFF Box” specification, at 64 .
  • a GML conversion is performed at 65 .
  • the geographic metadata can also be used to extract the geographic information and create an XML string, at 66 , compliant with the GML provided in the second standard.
  • the degenerated GeoTIFF is inserted in the UUID box and the GML is inserted in the XML box during the image compression, at 68 such that the geolocation metadata is embedded in the JP2 image.
  • a GeoJPEG2000 image is then obtained and output at 70 .
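To make the box mechanics concrete, here is a small sketch of the JP2 box layout (a 4-byte big-endian length that includes the 8-byte header, a 4-byte type, then the payload) used to build the UUID and XML boxes of steps 64 - 68 . The UUID constant is the commonly published GeoJP2 identifier and should be verified against the specification; a real writer splices these boxes into the proper position in the JP2 box sequence rather than simply concatenating them.

```python
import struct

# Commonly published GeoJP2 UUID (assumption: verify against the spec).
GEOJP2_UUID = bytes.fromhex("b14bf8bd083d4b43a5ae8cd7d5a6ce03")

def jp2_box(box_type: bytes, payload: bytes) -> bytes:
    """One JP2 box: big-endian length (incl. 8-byte header) + type + payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

def geolocation_boxes(degenerated_geotiff: bytes, gml: str) -> bytes:
    """UUID box holding the degenerated GeoTIFF, plus the GML XML box."""
    uuid_box = jp2_box(b"uuid", GEOJP2_UUID + degenerated_geotiff)
    xml_box = jp2_box(b"xml ", gml.encode("utf-8"))
    return uuid_box + xml_box
```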
  • a toolkit for image compression and metadata insertion is developed using Java version J2SE 1.4.2.
  • Object-oriented interfaces for manipulating different formats of images can be provided by various vendors. Examples of APIs that are used are Java Advanced Imaging (JAI), Java Image I/O, LuraWave.jp2 Java and GeoTIFF-JAI.
  • JAI is a cross-platform, flexible, extensible toolkit for adding advanced image processing capabilities to applications for the Java platform.
  • Java Image I/O API provides a pluggable architecture for working with images stored in files. It offers substantially more flexibility and power than the previously-available APIs for loading and saving images.
  • LuraWave.jp2 JAVA/JNI-SDK for Windows (demo version) is a part of the LuraWave.jp2 image compression software family and is based on Algo Vision LuraTech's implementation of the JPEG2000 image compression standard and is fully compliant with the Part 1 of the JPEG2000 International Standard.
  • GeoTIFF-JAI is a "geotiff" extension to the Java Advanced Imaging component and is an open-source interface developed by Niles Ritter.
  • the front end and the code to compute image quality metrics are developed using Matlab 6.5.1 (Release 13).
  • Matlab provides a Java interface to access classes written in Java and call the objects' methods.
  • Image quality metrics are figures of merit used for the evaluation of imaging systems or processes.
  • the image quality metrics can be broadly classified into two categories, subjective and objective. In objective measures of image quality metrics, some statistical indices are calculated to indicate the reconstructed image quality.
  • the image quality metrics provide some measure of closeness between two digital images by exploiting the differences in the statistical distribution of pixel values. Examples of error metrics used for comparing compression are the mean square error (MSE) and the peak signal-to-noise ratio (PSNR).
  • $$\mathrm{PSNR}=20\log_{10}\!\left(\frac{S}{\mathrm{RMSE}}\right)\qquad(9)$$
  • where S is the maximum possible pixel value.
  • geo-referencing metadata is inserted into a JPEG2000 (jp2) file during compression, for example using the method illustrated in FIG. 11 .
  • a UUID box with specified UUID is created and a degenerated GeoTIFF is created and inserted into the data section of UUID box.
  • an XML box of the JPEG2000 file is also filled with a minimal set of GML used for geo-location. Since the JPEG2000 file created by the application meets both standards, the geo-location information of the file should be compatible with most GIS applications that support the JP2 file format of JPEG2000.
  • FIG. 12 shows a process flow diagram for compressing/decompressing an image and computing quality metrics, according to an embodiment of the present invention.
  • An image file is inputted, at 80 .
  • a JPEG2000 compression is performed on the image file, at 82 to obtain a compressed image at 84 .
  • the compression at 82 is performed at certain compression ratio and the resultant compressed image at 84 may comprise lossy or lossless information.
  • the compressed image at 84 is then decompressed to obtain a decompressed image at 86 .
  • quality metrics are then computed, at 88 , to compare the original image at 80 and the decompressed image at 86 .
  • the quality metrics obtained at 88 can be stored at 90 .
  • the reversible compressions were performed at different ratios on a test image and the JPEG2000 file was decompressed back to the TIFF file format.
  • the quality metrics were then calculated to compare the original and the reconstructed images.
  • the test image is a 1024 ⁇ 1024 pixels subset of Quickbird multi-spectral image of the Memphis, Tenn. area.
  • the image is compressed at various compression ratios and then decompressed using the JPEG2000 method, and the quality metrics of the reconstructed image are computed by using the original image as a benchmark, as sketched below.
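A sketch of this loop using Pillow is shown below; it requires an OpenJPEG-enabled Pillow build and an image mode the JPEG2000 plugin can write (e.g., 8-bit grayscale or RGB). The file name, the single-image assumption, and the 8-bit peak value are examples; quality_mode="rates" with quality_layers is the Pillow JPEG2000 plugin's way of requesting a target compression rate.

```python
import numpy as np
from PIL import Image

def rmse_psnr(a, b, peak: float):
    """RMSE and PSNR (equation (9)) between two arrays."""
    err = np.asarray(a, np.float64) - np.asarray(b, np.float64)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return rmse, float("inf") if rmse == 0 else 20 * np.log10(peak / rmse)

original = Image.open("subset.tif")        # step 80 (example file name)
ref = np.asarray(original)

for ratio in (5, 10, 20, 30, 50, 100, 150, 200):
    # Step 82: lossy JPEG2000 compression at a target ratio.
    original.save("tmp.jp2", quality_mode="rates",
                  quality_layers=[ratio], irreversible=True)
    decoded = np.asarray(Image.open("tmp.jp2"))        # steps 84-86
    # Step 88: quality metrics; use peak=2047 for 11-bit Quickbird data.
    rmse, psnr = rmse_psnr(ref, decoded, peak=255.0)
    print(f"1:{ratio}  RMSE={rmse:.2f}  PSNR={psnr:.2f} dB")
```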
  • FIG. 13A depicts the original image.
  • FIG. 13B depicts the original image compressed at the compression ratio of 1:5.
  • FIG. 13C depicts the original image compressed at the compression ratio of 1:10.
  • FIG. 13D depicts the original image compressed at a compression ratio of 1:20.
  • FIG. 13E depicts the original image compressed at a compression ratio of 1:30.
  • FIG. 13F depicts the original image compressed at a compression ratio of 1:50.
  • FIG. 13G depicts the original image compressed at a compression ratio of 1:100.
  • FIG. 13H depicts the original image compressed at a compression ratio of 1:150.
  • FIG. 13I depicts the original image compressed at a compression ratio of 1:200.
  • FIGS. 14A-K depict zoomed images with increasing compression ratios.
  • FIG. 14A depicts the original zoomed image.
  • FIG. 14B depicts the original zoomed image compressed at the compression ratio of 1:5.
  • FIG. 14C depicts the original zoomed image compressed at the compression ratio of 1:10.
  • FIG. 14D depicts the original zoomed image compressed at a compression ratio of 1:20.
  • FIG. 14E depicts the original zoomed image compressed at a compression ratio of 1:30.
  • FIG. 14F depicts the original zoomed image compressed at a compression ratio of 1:40.
  • FIG. 14G depicts the original zoomed image compressed at a compression ratio of 1:50.
  • FIG. 14H depicts the original zoomed image compressed at a compression ratio of 1:60.
  • FIG. 14I depicts the original zoomed image compressed at a compression ratio of 1:100.
  • FIG. 14J depicts the original zoomed image compressed at a compression ratio of 1:150.
  • FIG. 14K depicts the original zoomed image compressed at a compression ratio of 1:200.
  • FIG. 15 is a plot of MSE values versus compression ratio for various bands (band 1 , band 2 , band 3 , and band 4 ) in the images.
  • FIG. 16 is a plot of RMSE values versus compression ratio for various bands in the images.
  • FIG. 17 is a plot of PSNR values versus compression ratio for bands in the images.
  • FIG. 18 is a plot of correlation between bands (bands 1 and 2 , bands 1 and 3 , bands 1 and 4 , bands 2 and 3 , bands 2 and 4 , and bands 3 and 4 ) versus compression ratio.
  • the MSE and RMSE are equal to 0 and PSNR is infinity when lossless compression is performed.
  • Lossless compression reduces the size of the image by around a factor of 2. Therefore, a lossy compression ratio of 2 performs as well as lossless compression.
  • as the compression ratio increases, the MSE and RMSE values also increase, implying that the distortion in the image increases as the compressed image gets smaller in size, which agrees with theoretical expectations.
  • the fourth band (near infrared) had the maximum MSE and RMSE values, which is also understandable, as that band contains larger pixel values and is therefore distorted more than the other bands.
  • PSNR values decrease as the compression ratio increases.
  • PSNR value decreases most in the fourth band.
  • Mechanical limitations of the instrument, sensor position and orientation, curvature of the earth, and unforeseen human errors are some of the sources of mapping inaccuracies usually encountered in mapping (e.g., geospatial mapping) or imaging processes.
  • One such spatial discrepancy is the horizontal positional inaccuracy of the remotely acquired image. Due to the aforementioned sources of errors, the horizontal positional information of an object obtained from a remotely acquired image may deviate from its true real world measurement. Although some of the potential causes for spatial errors can be substantially eliminated or reduced, estimation and/or evaluation of horizontal inaccuracies may be needed to assess the reliability of the information retrieved from the image.
  • the horizontal positional error of an object can be represented by a random variable pair (x, y).
  • the random variables x and y correspond to the error encountered in the X (longitude) and Y (latitude) directions respectively.
  • the error can be considered as a deviation of the measured values from the true values.
  • the two random variables can be assumed to be independent, with a Gaussian distribution and zero mean.
  • the joint probability density distribution for these random variables (x, y) is given by the following equation:
  • when the variances (σ_x and σ_y) in the two dimensions are assumed equal (σ_x = σ_y = σ_c), equation (11) is obtained.
  • the probability density function then depends only on the square of the radius of a circle.
  • the probability for an error random variable pair (x, y) to be contained within a circle of radius R can be defined by the circular error probability function P(R).
  • the circular error probability function can be derived from equation (11).
  • a condensed form for P(R) for the case when ⁇ x and ⁇ y are equal is given by the following equation:
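The equations referenced in this passage did not survive extraction; the following reconstruction of (10) through (13) follows the standard circular-error derivation that the surrounding text describes and the 2.1460 σ_c factor quoted later, so it should be checked against the issued document.

$$p(x,y)=\frac{1}{2\pi\sigma_x\sigma_y}\exp\!\left[-\frac{1}{2}\left(\frac{x^{2}}{\sigma_x^{2}}+\frac{y^{2}}{\sigma_y^{2}}\right)\right]\qquad(10)$$

With $\sigma_x=\sigma_y=\sigma_c$, the exponent depends only on the squared radius $r^{2}=x^{2}+y^{2}$:

$$p(x,y)=\frac{1}{2\pi\sigma_c^{2}}\exp\!\left(-\frac{r^{2}}{2\sigma_c^{2}}\right)\qquad(11)$$

$$P(R)=1-\exp\!\left(-\frac{R^{2}}{2\sigma_c^{2}}\right)\qquad(12)$$

Setting $P(R)=0.9$ yields the CE 90 radius:

$$CE90=\sigma_c\sqrt{2\ln 10}\approx 2.1460\,\sigma_c\qquad(13)$$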
  • NMAS National Map Accuracy Standard
  • ⁇ x ⁇ ( x image - x realworld ) 2 n ( 14 )
  • x image and x realworld are the coordinates of the control points measured from the image and real world, respectively, and n is the number of such control points.
  • ⁇ y is calculated similar to equation (14):
  • ⁇ y ⁇ ( y image - y realworld ) 2 n ( 15 )
  • y image and y realworld are the coordinates of the control points measured from the image and real world respectively, and n is the number of such control points.
  • ⁇ min is the minimum value between ⁇ x and ⁇ y
  • ⁇ max is the maximum value between ⁇ x and ⁇ y .
  • ⁇ c is estimated by a linear combination of ⁇ x and ⁇ y as given by the following equation:
  • Equation (17) is the minimum value between ⁇ x and ⁇ y
  • ⁇ max is the maximum value between ⁇ x and ⁇ y
  • ⁇ c is estimated using an interpolated value from statistical data that relates
  • a computer algorithm (CE 90 TOOLKIT version 1.0) is developed to allow a user to automate circular error distribution analysis procedures.
  • the computer code is written in Matlab. However, it must be appreciated that other computer languages and/or computer mathematical packages may be used.
  • the coordinates for the ground control points, which are obtained from the remotely acquired image and measured using the global positioning system (GPS), can be loaded into the toolkit (code) as input files or through data entry forms.
  • the CE 90 toolkit includes a graphical user interface (GUI).
  • the graphical user interface is shown in FIG. 19 .
  • the data entry form is shown in FIG. 20 .
  • the data is displayed on the data list window as shown in FIG. 21 .
  • the root mean square errors in both directions, σ_x and σ_y , are calculated and displayed (RMSEx and RMSEy in FIG. 21 ).
  • the graphical user interface is configured to interactively tie a GPS point to a point in the imagery.
  • the results may be stored in a simple text file or other file formats.
  • the values are computed based on the points interactively chosen.
  • the images were automatically stretched to enhance contrast and to make the image data easy to visualize. This functionality is added for display purposes only.
  • a user can choose options from the tools pull-down menu.
  • the tools pull-down menu allows the user to select radial plot or vector plot options as shown in FIG. 22 .
  • the radial plot option allows the user to generate error distribution plots, where the user can visually verify where these positional errors fall with respect to the CE 90 value estimated using equation (13).
  • the value may also be computed using different parameters. For example, instead of using 2.1460 ⁇ c (when a value of 90% of well-defined points in an image is specified), a 2.4477 ⁇ c (when a value of 95% of well-defined points in an image is specified) can be used.
  • An example case is shown in FIG. 22 .
  • the user can select the data points by highlighting them on the data list window as shown in FIG. 21 .
  • the graphical user interface also allows the user to display the method that is used to calculate the CE 90 value. In this example, the circle in FIG. 22 represents the CE 90 radius calculated according to equation (13).
  • the CE 90 graphical user interface allows the user to display the offset vector plot, which represents the magnitude and direction of the error random variables (x, y). This is done by choosing the offset plot from the tools pull-down menu. The user can also input an appropriate scale value to make the error magnitude and directions more visible.
  • FIG. 23 shows the offset vector plot generated by CE 90 toolkit.

Abstract

A method of enhancing a resolution of an image by fusing images includes applying a principal component analysis to a multispectral image to obtain a plurality of principal components, and replacing a first component in the plurality of principal components by a panchromatic image. The method further includes resampling remaining principal components to a resolution of the panchromatic image, and applying an inverse principal analysis to the panchromatic image and the remaining principal components to obtain a fused image of the panchromatic image and the multispectral image.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a divisional application of U.S. patent application Ser. No. 11/279,982, filed Apr. 17, 2006, which claims priority to provisional application Ser. Nos. 60/671,508, 60/671,517 and 60/671,520, all filed on Apr. 15, 2005, the entire contents of each of which are incorporated herein by reference.
  • STATEMENT AS TO RIGHTS TO INVENTIONS MADE UNDER FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT
  • This invention was made with Government support under SBAHQ-03-1-0023 awarded by the U.S. Small Business Administration. The government has certain rights in the invention.
  • FIELD OF THE INVENTION
  • The present invention relates generally to image sensing and image treatment methods and systems and more particularly to image accuracy analysis.
  • BRIEF SUMMARY OF THE INVENTION
  • An aspect of the present invention is to provide a method of enhancing a resolution of an image by fusing images. The method includes applying a principal component analysis to a multispectral image to obtain a plurality of principal components, and replacing a first component in the plurality of principal components by a panchromatic image. The method further includes resampling remaining principal components to a resolution of the panchromatic image, and applying an inverse principal analysis to the panchromatic image and the remaining principal components to obtain a fused image of the panchromatic image and the multispectral image.
  • Another aspect of the present invention is to provide a method of pansharpening an image by fusing images. The method includes applying a wavelet-based pansharpening to a plurality of bands in a multispectral image and a panchromatic image to obtain a pansharpened image, and computing quality metrics on the pansharpened image.
  • A further aspect of the present invention is to provide a method of compressing and decompressing an image. The method includes preprocessing an image, applying a discrete wavelet transform on the preprocessed image to decompose the preprocessed image into a plurality of sub-bands, and applying a quantization to each sub-band in the plurality of sub-bands. The method further includes partitioning the plurality of sub-bands into a plurality of code-blocks, encoding each code-block in the plurality of code-blocks independently to obtain a code-blocks stream, applying a rate control process to the code-blocks stream to obtain a bit-stream; and organizing the bit-stream to obtain a compressed image. The method may further include transforming the compressed image using embedded block decoding to obtain embedded decoded block data, re-composing the embedded decoded block data using an inverse discrete wavelet decomposition process, performing a dequantization by assigning a single quantum value to a range of values to obtain a dequantized data, and performing a decoding process on the dequantized data to substantially reconstruct the image.
  • Another aspect of the present invention is to provide a method of inserting geolocation into a JP2 file. The method includes inputting a GeoTIFF image file, extracting a GeoTIFF header that contains references to geographic metadata, creating a degenerated GeoTIFF image using the extracted geographic metadata, performing a geographic markup language (GML) conversion, inserting the degenerated GeoTIFF image into a universally unique identifier (UUID) box of the JP2 file, inserting the geographic markup language into an extensible markup language (XML) box of the JP2 file, and compressing the JP2 file using JPEG2000 image compression to obtain a GeoJPEG2000 image file.
  • Yet another aspect of the present invention is to provide a method for analyzing and estimating horizontal accuracy in imaging systems. The method includes inputting image locations and true locations in x and y directions, calculating a root mean square error in both x and y directions, and computing a horizontal accuracy by using the root mean square error in the x and y directions.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a flow diagram of the principal component analysis method for fusing a multispectral and a panchromatic image, according to an embodiment of the present invention;
  • FIG. 2 shows the original multispectral image;
  • FIG. 3 shows the panchromatic image;
  • FIG. 4 shows the fused image resulting from the fusion process;
  • FIGS. 5A-5H show histograms of four bands in both multispectral and fused images;
  • FIG. 6 shows a correlation plot of NDVI values for the fused image versus the NDVI values for the multispectral image;
  • FIG. 7A shows a multispectral image data;
  • FIG. 7B shows a panchromatic image data;
  • FIG. 7C shows a wavelet sharpened image;
  • FIG. 8 shows a flow diagram of the JPEG2000 encoding process, according to an embodiment of the invention;
  • FIG. 9 shows a flow diagram of the JPEG2000 decoding process, according to an embodiment of the invention;
  • FIG. 10 shows a structure of a JP2 file, according to an embodiment of the invention;
  • FIG. 11 shows a process flow diagram for inserting geolocation information in JPEG2000 images, according to an embodiment of the present invention;
  • FIG. 12 shows a process flow diagram for compressing/decompressing an image and computing quality metrics, according to an embodiment of the present invention;
  • FIG. 13A depicts an original image;
  • FIGS. 13B-13I depict the original image compressed at various compression ratios;
  • FIG. 14A depicts the original zoomed image of the original image depicted in FIG. 13A;
  • FIGS. 14B-14K depict the original zoomed image compressed at various compression ratios;
  • FIG. 15 is a plot of mean square error values versus compression ratio for various bands in images shown in FIGS. 13A-13I and 14A-14K;
  • FIG. 16 is a plot of root mean square values versus compression ratio for various bands in the images shown in FIGS. 13A-13I and 14A-14K;
  • FIG. 17 is a plot of peak signal to noise ratios versus compression ratio for bands in the images shown in FIGS. 13A-13I and 14A-14K;
  • FIG. 18 is a plot of correlation between bands versus compression ratio;
  • FIG. 19 depicts a graphical user interface of CE90 toolkit, according to an embodiment of the present invention;
  • FIG. 20 depicts a data entry form of the CE90 toolkit, according to an embodiment of the present invention;
  • FIG. 21 depicts a data list window allowing a user to select data points by highlighting them on the window;
  • FIG. 22 shows an example with a circle representing the CE90 radius calculated according to the horizontal accuracy standard; and
  • FIG. 23 shows the offset vector plot generated by the CE90 toolkit.
  • DESCRIPTION OF EMBODIMENTS OF THE INVENTION
  • Image quality assessment can play a role in many imaging applications, including remote sensing applications, where characterization of both spatial and spectral variations may be needed. An example use of image quality metrics is to quantitatively measure the quality of an image in a way that correlates with perceived quality. Image quality metrics can also be used to benchmark different image processing algorithms by comparing objective metrics. Assessment measures can be divided into subjective and objective groups. Subjective measures are obtained from ratings by human observers, whose judgments may differ. Due to this and many other inherent drawbacks associated with subjective measures, quantitative or objective measures are used. A reliable image quality metric would provide a consistent, accurate and monotonic prediction of the quality of an image.
  • For example, in remote sensing, multispectral (MS) images are widely used for land cover classification, change detection and many other applications. In many applications, a number of preprocessing steps may be used prior to applying any classification or segmentation algorithm. The preprocessing steps may include resampling, atmospheric correction, sharpening, filtering, etc. In many applications, the effects of the preprocessing can degrade the performance of the segmentation or classification. Hence, assessing a quality of the image may be needed after any such preprocessing step.
• Many metrics can be used to evaluate the quality of an image. In an embodiment of the invention, the mean square error (MSE) is used to evaluate the quality of an image. MSE refers to the average of the sum of squared errors between two images. The MSE is defined as follows:

• $\sigma_{ms}^{2} = E\left[\,\lvert u(m,n) - v(m,n)\rvert^{2}\,\right]$  (1)
• where u(m, n) and v(m, n) are two images of size M×N and E denotes the mathematical expectation.
• An approximation of the MSE can also be used. The average least squares error metric, which is computed as shown in equation (2), is used as an approximation to the MSE:
• $\sigma_{ls}^{2} = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\lvert u(m,n) - v(m,n)\rvert^{2}$  (2)
• The MSE quantifies the difference in the energy of two signals. Even though the mean square error metric may have some limitations when used as a global measure of image quality, when used as a local measure it can be effective in predicting image quality accurately.
• Another metric that may be used to evaluate the quality of an image is the root mean square error (RMSE), which is related to the MSE: the RMSE is the square root of the MSE. RMSE quantifies the average amount of distortion in each pixel of an image. MSE and RMSE give an account of the spectral fidelity of an image.
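• As a minimal illustration (not part of the patented apparatus), the MSE of equation (2) and the RMSE can be computed with NumPy; the function names are illustrative, and u and v are assumed to be co-registered single-band arrays of equal size:

    import numpy as np

    def mse(u: np.ndarray, v: np.ndarray) -> float:
        # Average least-squares error of eq. (2), an estimate of eq. (1).
        diff = u.astype(np.float64) - v.astype(np.float64)
        return float(np.mean(diff ** 2))

    def rmse(u: np.ndarray, v: np.ndarray) -> float:
        # Root mean square error: the average per-pixel distortion.
        return float(np.sqrt(mse(u, v)))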
• Another metric that may be used is the correlation function. The closeness between two images can be quantified in terms of a correlation function, with correlation coefficients ranging from −1 to +1. A correlation coefficient of +1 indicates that the two images are highly correlated, i.e., very close to one another, while a correlation coefficient of −1 indicates that the two images are exactly opposite to each other. The correlation coefficient is computed as expressed in the following equation:
• $\mathrm{Corr}(A/B) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(A_{i,j}-\bar{A})(B_{i,j}-\bar{B})}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}(A_{i,j}-\bar{A})^{2}}\,\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}(B_{i,j}-\bar{B})^{2}}}$  (3)
  • where A and B are the two images between which the correlation is computed. Correlation coefficients between each band of the multispectral image before and after a processing step quantify the spectral quality of the image after processing.
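• For illustration, equation (3) can be evaluated as follows; this sketch assumes two equally sized band arrays and is equivalent to np.corrcoef(a.ravel(), b.ravel())[0, 1]:

    def band_correlation(a: np.ndarray, b: np.ndarray) -> float:
        # Correlation coefficient of eq. (3) between two bands A and B.
        a = a.astype(np.float64).ravel()
        b = b.astype(np.float64).ravel()
        a -= a.mean()           # subtract the band means, as in eq. (3)
        b -= b.mean()
        return float((a @ b) / np.sqrt((a @ a) * (b @ b)))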
  • A further metric that may be used is the mean value of the pixels in a band. The mean value of pixels in a band of an image is the central value of the distribution of the pixels in that band of the image. A relative shift in the mean value (RM) quantifies the changes in the histogram of the image that may be the result of processing the image. The changes in the standard deviation of the distribution can also be considered in addition to the shift in mean. The relative shift in the mean value can be expressed mathematically as follows:
• $\mathrm{RM} = \frac{\mathrm{OutputMean} - \mathrm{OriginalMean}}{\mathrm{OriginalMean}}\,\%$  (4)
  • Another metric that may be of interest for the evaluation of a quality of an image is the entropy. The entropy is defined as the amount of information contained in a signal. The entropy of an image can be evaluated as shown in the following equation:
• $H = -\sum_{i=1}^{d} p(d_i)\log_{2}\big(p(d_i)\big)$  (5)
• where d is the number of possible gray levels and p(d_i) is the probability of occurrence of a particular gray level d_i in the image.
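• A short sketch of equation (5) follows; the 2048-bin default is an assumption suited to 11-bit Quickbird data and should be adjusted to the radiometric depth of the imagery at hand:

    def entropy(img: np.ndarray, levels: int = 2048) -> float:
        # Shannon entropy of eq. (5) from the gray-level histogram.
        counts, _ = np.histogram(img, bins=levels, range=(0, levels))
        p = counts / counts.sum()
        p = p[p > 0]            # drop empty bins; log2(0) is undefined
        return float(-(p * np.log2(p)).sum())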
• Yet another metric that can be used is the image noise index (INI), which is based on entropy. An image may undergo processing to obtain a processed image, and a reconstructed image may then be obtained by applying the reverse process to the processed image to estimate the original multispectral image. An entropy may be defined for each of these images: the entropy of the original image is x, the entropy of the processed image is y, and the entropy of the reconstructed image is z.
  • The value of (y−x) then gives the increased information content of the processed image. This increased information may be useful information, noise or both. The quantity |x−z| is the unwanted information or noise. Hence, the amount of useful information is the difference between these two values:

• $\mathrm{Signal} = (y-x) - \lvert x-z\rvert$  (6)
• The image noise index (INI) is expressed as the ratio of the two values (y−x) and |x−z|, minus 1:
• $\mathrm{INI} = \frac{y-x}{\lvert x-z\rvert} - 1$  (7)
  • Therefore, it can be understood that a positive value of INI indicates an improvement in the information content and a negative value of INI indicates degradation of information in the processed image.
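• Given the entropy function sketched above, equation (7) reduces to a one-line computation; the function name is illustrative:

    def image_noise_index(x: float, y: float, z: float) -> float:
        # INI of eq. (7); x, y, z are the entropies of the original,
        # processed and reconstructed images, respectively.
        return (y - x) / abs(x - z) - 1.0

    # e.g. ini = image_noise_index(entropy(original), entropy(processed),
    #                              entropy(reconstructed))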
  • Another metric that may be used is the normalized difference vegetation index (NDVI). NDVI can be used to quantify the strength of a vegetation area. NDVI can be defined as follows:
• $\mathrm{NDVI} = \frac{\mathrm{NIR} - R}{\mathrm{NIR} + R}$  (8)
• where NIR is a near infrared band pixel value and R is a red band pixel value. The NDVI varies between +1 and −1. A value closer to +1 indicates dense vegetation, while an NDVI value very close to zero represents water. NDVI is an important feature used to distinguish between many classes. This metric quantifies the variations in the NDVI due to any preprocessing.
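• A per-pixel sketch of equation (8) is shown below; the small epsilon guarding against division by zero is an added safeguard, not part of the definition:

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        # Per-pixel NDVI of eq. (8); bands are assumed co-registered.
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / (nir + red + 1e-12)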
• In an embodiment of the invention, image resampling is performed on an image as an example of a process that can affect image quality. The image resampling is applied in order to study the effects of resampling on image quality and to evaluate the image quality using various metrics.
  • Image resampling is a technique often used in remote sensing applications. Resampling techniques are used to estimate pixel values in between available samples. Many different resampling techniques such as nearest neighbor (NN), linear interpolation and cubic convolution can be used. One objective of resampling techniques is to minimize the residual difference between the actual and predicted pixel values.
• In this resampling example, the spectral and spatial quality of the nearest neighbor and cubic convolution methods are compared. In the nearest neighbor method, intermediate pixel values are replicated from adjacent pixel values. This introduces no spectral distortion, but sharp edges of the image are distorted and spatial artifacts can be observed. The cubic convolution method fits a polynomial of degree three to compute the intermediate pixel value. This method introduces some spectral distortion, but its spatial quality is better than that of nearest neighbor.
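• One way to reproduce this comparison, assuming SciPy is available, is with scipy.ndimage.zoom, whose interpolation order selects the method (0 for nearest neighbor, 3 for cubic); the factor of 4 matches the Quickbird MS-to-PAN resolution ratio used below:

    from scipy import ndimage

    def resample(band: np.ndarray, factor: float, method: str) -> np.ndarray:
        # Upsample one band: order=0 is nearest neighbor, order=3 is cubic.
        order = {"nearest": 0, "cubic": 3}[method]
        return ndimage.zoom(band.astype(np.float64), factor, order=order)

    # e.g. bring a 2.4 m multispectral band onto a 0.6 m panchromatic grid:
    # ms_up = resample(ms_band, 4.0, "cubic")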
• The correlation among the spectral bands can also be computed for the original and resampled multispectral images. In most applications the spectral quality should be preserved after resampling. Hence, the correlations among the spectral bands of the resampled image and of the original multispectral bands are expected to match.
• In one example, co-registered subsets of Quickbird multispectral (MS) and panchromatic (PAN) images were considered. The original multispectral image is resampled to the resolution of the panchromatic image. TABLE 1 shows the values of the correlation coefficients computed. The nearest neighbor (NN) method has the same correlation as the original multispectral image, which is explained by the fact that the pixel values are not changed during nearest neighbor resampling. The cubic convolution method introduces some spectral distortion.
  • TABLE 1
    Correlation
    Band Combinations Original MS NN Cubic
    Band 1&2 0.9752 0.9752 0.9772
    Band 1&3 0.9382 0.9382 0.9407
    Band 1&4 0.5477 0.5477 0.5505
    Band 2&3 0.9760 0.9760 0.9772
    Band 2&4 0.6373 0.6373 0.6390
    Band 3&4 0.6647 0.6647 0.6660
  • The correlation between the panchromatic image (PAN) and the resampled multispectral bands is also computed. The cubic convolution technique provides higher correlation with the panchromatic image, which indicates that it has good spatial quality compared to the NN resampling technique. TABLE 2 indicates the correlation values computed.
  • TABLE 2
    Correlation
    Band Combinations NN Cubic
    Band 1&Pan 0.5122 0.5229
    Band 2&Pan 0.5293 0.5380
    Band 3&Pan 0.5317 0.5381
    Band 4&Pan 0.4808 0.4887
  • The relative shift in mean of each band is also computed. As expected, the shift for NN resampling is found to be zero. The values computed are shown in TABLE 3.
  • TABLE 3
    Relative Shift in Mean (%)
    Band NN Cubic
  Band 1 0 0.0720
    Band 2 0 0.0982
    Band 3 0 0.1232
    Band 4 0 0.0284
• Image fusion algorithms improve the low spatial resolution of multispectral images using the spatial information from a corresponding panchromatic image. These image fusion algorithms can be used as a preprocessing step before feature extraction or classification is done on a multispectral image. Image fusion is also called pansharpening. Pansharpening combines spectral information from a multispectral image with spatial information from a panchromatic image into a single fused image. The single image has both high spectral and high spatial resolution, which helps to enhance features, provide detailed information about targets or objects in the image, and improve classification accuracy. Pansharpening algorithms include intensity-hue-saturation (IHS) sharpening, principal component analysis (PCA) sharpening, Brovey sharpening, multiplicative sharpening and color normalized sharpening. Some of these pansharpening algorithms are available in commercial remote sensing packages such as ERDAS, commercialized by Leica Geosystems, and ENVI, commercialized by Research Systems Incorporated.
  • In an embodiment of the invention, quality metrics are applied to an image fused using the principal component analysis (PCA) method.
• FIG. 1 shows a flow diagram of the principal component analysis (PCA) method for fusing the multispectral and panchromatic images, according to an embodiment of the present invention. In the flow diagram of the principal component analysis method 10, a PCA transform 11 is applied to the original multispectral image MS 12. The PCA transform 11 converts the image into a set of uncorrelated components PC1, PC2, PC3, etc., at 13. The first principal component PC1 has the maximum variance, i.e., most of the information. The first principal component PC1 is replaced by the panchromatic image PAN 14. The remaining principal components PC2, PC3, etc. are resampled to the panchromatic resolution, at 15, and the inverse PCA transform is applied to get back to the image domain, at 16. Co-registered Quickbird multispectral MS and panchromatic PAN images were fused using this technique, and the quality metrics were computed on the fused image. FIG. 2 shows the original multispectral MS image, FIG. 3 shows the panchromatic PAN image, and FIG. 4 shows the result of the fusion process.
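• A simplified sketch of the FIG. 1 flow is given below. It assumes the MS cube has already been resampled to the panchromatic grid (the patent's flow resamples the remaining components after the transform), and the mean/standard-deviation matching of the PAN image to PC1 is an assumption standing in for a fuller histogram match:

    import numpy as np

    def pca_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
        # ms is (bands, h, w) on the pan grid; pan is (h, w).
        b, h, w = ms.shape
        flat = ms.reshape(b, -1).astype(np.float64)
        mean = flat.mean(axis=1, keepdims=True)
        centered = flat - mean
        cov = centered @ centered.T / centered.shape[1]
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]      # PC1 (largest variance) first
        eigvecs = eigvecs[:, order]
        pcs = eigvecs.T @ centered             # forward PCA transform
        p = pan.astype(np.float64).ravel()     # match pan to PC1 statistics
        p = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
        pcs[0] = p                             # substitute PC1 with the pan
        fused = eigvecs @ pcs + mean           # inverse PCA transform
        return fused.reshape(b, h, w)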
  • In an embodiment of the invention, the MSE and RMSE values are computed between each band of the fused and original multispectral image. The values are shown in TABLE 4.
  • TABLE 4
    MSE RMSE
  Band 1 887.9 29.797
    Band 2 4364.9 66.067
    Band 3 4541.3 67.389
    Band 4 6840.10 82.704
• TABLE 4 shows that the DN values have changed substantially in band 4 due to sharpening. The DN values in the other bands are also distorted considerably. However, the quality of the fused image cannot be judged solely on this metric, i.e., solely on MSE and/or RMSE, because some of the newly-added information might be useful information.
  • In an embodiment of the invention, the correlation between different band combinations is computed to quantify the spectral and spatial quality of the image. TABLE 5 shows the value of the correlation coefficients computed between the four spectral bands. It is expected that the correlation for the fused bands should be as close as possible to that of the original multispectral bands to ensure preservation of spectral information. In addition, it is expected that the fusion process should not increase the correlation between the spectral bands.
• The values in TABLE 5 indicate that there are variations in the spectral information of the fused image. Thus, the fused image does not improve the performance of spectral-based classification. The correlation is also computed between each band of the original multispectral image and the panchromatic image, and between each band of the fused image and the panchromatic image. For this combination, the correlation of the fused image should be higher, because the fused image contains more spatial information than the original multispectral image.
  • TABLE 5
    Correlation
    Band Combinations Original MS Fused
    Band 1&2 0.9752 0.9691
    Band 1&3 0.9382 0.9186
    Band 1&4 0.5477 0.3992
    Band 2&3 0.9760 0.9662
    Band 2&4 0.6373 0.4997
    Band 3&4 0.6647 0.5345
  • TABLE 6
    Correlation
    Band Combinations Original MS Fused
    Band 1&Pan 0.5122 0.8034
    Band 2&Pan 0.5293 0.8675
    Band 3&Pan 0.5317 0.8780
    Band 4&Pan 0.4808 0.7842
• In an embodiment of the invention, correlation values between each band of the MS image and the PAN image are computed for both the original MS image and the fused image. The correlation values are reported in TABLE 6. The values indicate that the fused image has more useful spatial information than the original multispectral MS image, as there is an increase in the corresponding values. These correlation values are better indicators of the spatial and spectral quality of the image.
• In an embodiment of the invention, the relative shift in mean (in %) for each band of the fused image is computed. The relative shift in mean of each band of the fused image helps to visualize the changes in the histogram. A positive shift in the mean indicates a shift towards white and a negative shift indicates a shift towards gray. The computed values are shown in TABLE 7. The histogram of band 3 has shifted substantially compared to the other bands. The histograms are also plotted to aid visual comparison. FIGS. 5A-5H show the histograms of the four bands in both the multispectral and the PCA fused images. FIGS. 5A-5D show histograms of the four bands (band 1, band 2, band 3, and band 4, respectively) in the MS image and FIGS. 5E-5H show histograms of the four bands (band 1, band 2, band 3, and band 4, respectively) in the fused image.
  • TABLE 7
    Band Relative Shift in Mean (%)
    Band 1 8.74
    Band 2 14.19
    Band 3 20.53
    Band 4 14.96
  • In an embodiment of the invention, the image noise index (INI) is computed for each band of the fused image as shown in TABLE 8. The degradation of spectral information in all the bands is indicated by a negative INI value. The information loss or unwanted information is higher in band 4 compared to other bands.
  • TABLE 8
    Band INI
  Band 1 −1.923
    Band 2 −1.847
    Band 3 −1.708
    Band 4 −1.961
• In an embodiment of the invention, NDVI values are computed for a subset of vegetated area. FIG. 6 shows a correlation plot of the NDVI values for the fused image versus the NDVI values for the MS image. The correlation between the NDVI values of the multispectral and fused images is found to be approximately 0.6696.
• Therefore, from the above results, it can be seen that the PAN image dominates the spatial information, as shown by the high correlation of the fused bands with the panchromatic image. This is because the PAN replaces the first principal component, which contains most of the spectral information. In addition, spectral information is lost relative to the multispectral image, which is reflected in the MSE, RMSE, INI and relative shift in mean.
• The quality of an image may not be predicted accurately by considering only one of the metrics discussed above. In an embodiment of the invention, a combination of at least two metrics allows the quality of the image to be evaluated with more precision. The combination of metrics selected can vary based on the type of preprocessing and the application of the image.
• In another embodiment, a pansharpening and image quality interface (PSIQI) application incorporating wavelet-based pansharpening is implemented. The PSIQI application is applied to images to sharpen them and/or to compute quality metrics for sharpened data. Before the image data, i.e., multispectral image data and panchromatic image data, is input, it is co-registered. A user is able to specify the location of the multispectral image and the corresponding panchromatic image.
• The PSIQI application performs a wavelet-based sharpening on the image data sets specified for sharpening. Image size and the number of bits per pixel control the block size. Block processing can be used for efficient memory handling and for increasing speed. The chosen quality metrics are then computed over the entire data set and stored in a file, for example a text file. The PSIQI application can be used in two modes of operation, a sharpening mode and a quality metric mode. The sharpening mode is used to sharpen the data using a wavelet-based method and compute the quality metrics. The quality metric mode is used only to compute metrics on data sharpened using other methods. A user may select a band to be sharpened. In addition, the application provides tunable sharpening options by allowing the user to select different mother wavelets, by enabling initial and final histogram matching steps, and by enabling filtering using a selected filter such as a Wiener filter. While using the quality metrics mode, the tunable options are switched off and the user can only choose bands that exist in the sharpened data.
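• The patent does not disclose the internals of the PSIQI sharpening engine; the following is a generic wavelet-substitution sketch under that caveat, using the PyWavelets package and the bi-orthogonal 4.4 mother wavelet mentioned below, with the histogram matching and Wiener filtering steps omitted:

    import numpy as np
    import pywt

    def wavelet_sharpen(band: np.ndarray, pan: np.ndarray,
                        levels: int = 2) -> np.ndarray:
        # Keep the pan image's detail coefficients and swap in the
        # approximation of the (already upsampled) MS band.
        coeffs = pywt.wavedec2(pan.astype(np.float64), "bior4.4", level=levels)
        approx = pywt.wavedec2(band.astype(np.float64), "bior4.4",
                               level=levels)[0]
        coeffs[0] = approx     # assumes band is already on the pan grid
        return pywt.waverec2(coeffs, "bior4.4")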
• In an embodiment of the invention, the PSIQI application is used to sharpen a co-registered IKONOS image set in the sharpening mode. In an embodiment of the invention, bands 1, 2, and 3 of the data are sharpened using the wavelet-based method, and quality metrics such as the mean square error (MSE), root mean square error (RMSE), correlation metrics, and relative shift in the mean are computed. In an embodiment of the invention, a bi-orthogonal 4.4 wavelet is used as the mother wavelet, for example. In an embodiment of the invention, initial and final histogram matching are applied. In a further embodiment of the invention, a Wiener filter is applied on the sharpened data to remove noise due to sharpening.
• FIG. 7A shows the multispectral image data. FIG. 7B shows the panchromatic image data. FIG. 7C shows the wavelet sharpened image. The mean square error (MSE), the root mean square error (RMSE), and the relative shift in the mean of the histogram (RM) are computed for each of the sharpened bands, and the values are shown in TABLE 9.
  • TABLE 9
    Band number MSE RMSE RM (%)
    Band 1 866.618955 29.438393 0.10
    Band 2 920.135026 30.333728 0.09
    Band 3 923.392627 30.387376 0.13
  • As shown in TABLE 9, the average change in the pixel value of band 3 is slightly higher when compared with the other bands. In addition, the mean of band 3 is shifted by 0.13%.
  • The spectral quality of the sharpened image can be ascertained by comparing the correlation between each band in the image before and after sharpening. The correlation values computed are shown in TABLE 10. These correlation values indicate a slight variation in the spectral information in the sharpened data and an increase in spatial information.
  • TABLE 10
    Correlation coefficient
    Band Combinations Original MS Sharpened
    Band 1&2 0.986744 0.974967
    Band 1&3 0.969066 0.954633
    Band 2&3 0.990432 0.990152
    Band 1&Pan 0.740918 0.784106
    Band 2&Pan 0.790111 0.840246
    Band 3&Pan 0.784879 0.836089
• As stated above, image quality evaluation methods are used on image data, and the image data can be large. For example, the size of the multi-resolution Quickbird GeoTIFF image used in this study is 380 MB. Therefore, compression of the images may be needed for storage and transmission, to save storage space and bandwidth and to lower transmission times. Image compression can be performed in either lossy or lossless fashion. Lossless compression may be desirable in critical situations where any loss in image data and quality may lead to erroneous analysis. However, in various other applications lossy compression may be preferred, as it provides a high compression ratio that results in smaller image sizes. The trade-off with lossy compression is that, as the compression ratio increases, more of the spatial and spectral features of the image are lost. Hence, it may be worthwhile to analyze the impact of image compression on image quality.
• In an embodiment of the invention, a JPEG2000 encoding and decoding process is used to compress and decompress images using the wavelet transform, as opposed to its predecessor JPEG, which uses the discrete cosine transform (DCT). Wavelet transform-based image compression algorithms allow images to be retained without much distortion or loss when compared to JPEG, and hence are recognized as a superior method. The JPEG2000 encoding and decoding process includes a JPEG2000 encoding process and a JPEG2000 decoding process. The encoding and decoding processes are divided into several stages, as can be seen from FIGS. 8 and 9.
• Many image formats can be used, including TIFF (Tagged Image File Format), which is used to store and transfer digital satellite imagery, scanned aerial photos, elevation models, scanned maps or the results of many types of geographic analysis. TIFF is the only full-featured raster file format in the public domain capable of supporting compression, tiling, and extension to include geographic metadata. The main strength of TIFF is that it is a highly flexible, platform-independent format supported by numerous image-processing applications. Another image format is GeoTIFF. A GeoTIFF file is a TIFF 6.0 file and inherits the file structure described in the corresponding portion of the TIFF specification. GeoTIFF uses a small set of reserved TIFF tags to store a broad range of georeferencing information, catering to the needs of geographic as well as projected coordinate systems. The geographic data can then be used to position the image in the correct location and geometry on the screen of a geographic information display.
• FIG. 8 shows a flow diagram of the JPEG2000 encoding process, according to an embodiment of the invention. The JPEG2000 encoding process 20 includes pre-processing 22, which is the first stage of encoding, followed by a discrete wavelet transform (DWT) 24. The DWT 24 is used to decompose each image tile into its high and low sub-bands by filtering each row and column of the pre-processed image tile with a high-pass and a low-pass filter. In an embodiment of the invention, multiple levels of the DWT 24 can be performed, and the number of stages performed is implementation dependent. In an embodiment of the invention, for each sub-band, a basic quantization is done using a quantizer, at 26. Quantization achieves compression by assigning a range of values to a single quantum value. The sub-bands of each tile are further partitioned into relatively small code-blocks (for example, 64×64 samples or 32×32 samples) such that code blocks from a sub-band have the same size, and each code-block is then encoded independently to obtain a code block stream, at 28. After block coding, a rate control process is applied to the code block stream, at 30. Rate control is a process by which the code stream is altered into a bit stream so that a target bit rate can be reached. The bit stream is then organized, at 32, and compressed image data is obtained.
• FIG. 9 shows a flow diagram of the JPEG2000 decoding process, according to an embodiment of the invention. The decoding process 40 is the opposite of the encoding process 20. The compressed data is transformed using embedded block decoding, at 42. The embedded decoded block data is then re-composed using an inverse discrete wavelet transform (inverse DWT) 44. A dequantization is then performed at 46. Dequantization is the inverse of quantization; thus, it is a decompression step that assigns a single quantum value to a range of values. Following the dequantization at 46, an inverse ICT is performed at 48, which is a decoding step that substantially reconstructs the original image data.
  • FIG. 10 shows a structure of a JP2 file, according to an embodiment of the invention. The JP2 file structure can be considered as a collection of boxes 50. Some of the boxes 50 are independent, such as box 52, and some of the boxes, such as box 54, contain other boxes. The binary structure of a file is a contiguous sequence of boxes. The start of the first box is the first byte of the file, and the last byte of the last box is the last byte of the file.
• There are two approaches for embedding geo-referencing data in JP2 files, each inserting the data into one of two boxes. One approach uses the UUID (Universally Unique Identifier) box, which provides a tool by which vendors can add additional data to a file without risking conflict with other vendors. The other approach uses the XML (eXtensible Markup Language) box, which provides a tool for vendors to add XML-formatted information to a JP2 file. Since both the UUID box and the XML box can be used to add vendor-specific information, two open standards can be used, each making use of either the UUID box or the XML box. For example, in the GeoTIFF approach, a UUID box termed the GeoTIFF box contains a specified UUID and a degenerated GeoTIFF file; by degenerated is meant a valid GeoTIFF file excluding image information. A UUID represents a 128-bit unique value, and the UUID for the box is static unsigned char geotiff_box={0xb1, 0x4b, 0xf8, 0xbd, 0x08, 0x3d, 0x4b, 0x43, 0xa5, 0xae, 0x8c, 0xd7, 0xd5, 0xa6, 0xce, 0x03}. The UUID box, i.e., the GeoTIFF box, contains a valid GeoTIFF file which holds the geo-referencing information about the file being compressed, together with a very simple image subject to the following constraints: the image height and width are both 1 pixel, the datatype is 8-bit, the color space is grayscale, and the (single) pixel must have a value of 0 for its (single) sample. The intent of containing a valid GeoTIFF file is that any compliant GeoTIFF reader or writer would be able to read or write the image.
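• For concreteness, a JP2 box is framed as a 4-byte big-endian length, a 4-byte type, and its contents; a sketch of building a GeoTIFF UUID box around a degenerated GeoTIFF payload follows (the helper name is illustrative, and payloads large enough to need an extended-length box are ignored):

    import struct

    GEOTIFF_BOX_UUID = bytes([0xb1, 0x4b, 0xf8, 0xbd, 0x08, 0x3d, 0x4b, 0x43,
                              0xa5, 0xae, 0x8c, 0xd7, 0xd5, 0xa6, 0xce, 0x03])

    def uuid_box(payload: bytes) -> bytes:
        # LBox (4 bytes, includes itself) + TBox 'uuid' + 16-byte UUID + data.
        length = 4 + 4 + 16 + len(payload)
        return struct.pack(">I", length) + b"uuid" + GEOTIFF_BOX_UUID + payload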
• Another format that can be used in a JP2 file is the Geographic Markup Language (GML). GML is an XML-based encoding standard for geographic information developed by the OpenGIS Consortium (OGC). In this approach, geo-location information coded in GML is stored in a non-proprietary way within the JPEG2000 XML box. For example, the JPEG2000_GeoLocation element in the GML given below refers to a JP2 file with an EPSG code of 32610 (PCS_WGS84_UTM_zone10N), an origin of 631333.108344E, 4279994.858126N, a cell size of X=4 and Y=4, and a rotation of 0.0.
• <?xml version="1.0" encoding="UTF-8"?>
  <JPEG2000_GeoLocation>
    <gml:RectifiedGrid xmlns:gml="http://www.opengis.net/gml"
        gml:id="JPEG2000_GeoLocation_1" dimension="2">
      <gml:origin>
        <gml:Point gml:id="JPEG2000_Origin" srsName="epsg:32610">
          <gml:coordinates>631333.108344, 4279994.858126</gml:coordinates>
        </gml:Point>
      </gml:origin>
      <gml:offsetVector gml:id="p1">0.0,4.0,0.0</gml:offsetVector>
      <gml:offsetVector gml:id="p2">4.0,0.0,0.0</gml:offsetVector>
    </gml:RectifiedGrid>
  </JPEG2000_GeoLocation>
• A JPEG2000_GeoLocation XML element containing a RectifiedGrid construct holds the geographic information. The RectifiedGrid includes an ID of "JPEG2000_GeoLocation_1" with a dimension equal to 2. The origin element is also included and is given an id of "JPEG2000_Origin." The Point specifies the coordinate of the bottom-left corner of the bottom-left cell in the image. The srsName can be an immediate EPSG code. However, if an existing EPSG code is not available, the srsName refers to a full SpatialReferenceSystem element definition within the same JP2 XML box. A pair of offsetVector elements defines the vertical and horizontal cell "step" vectors, and may include a rotation. A conformant reader is usually set to ignore all other elements within the JPEG2000_GeoLocation element.
• FIG. 11 shows a process flow diagram for inserting geolocation information (e.g., geo-referencing metadata) in JPEG2000 images, according to an embodiment of the present invention. A GeoTIFF image file or files is input at 60. First, a GeoTIFF header that contains the references to the geographic information is extracted from the GeoTIFF image files at 62. Then, a degenerated GeoTIFF image is created using the extracted geographic metadata, following the restrictions imposed by the "GeoTIFF Box" specification, at 64. A GML conversion is performed at 65. The geographic metadata can also be used to extract the geographic information and create an XML string, at 66, compliant with the GML provided in the second standard. The degenerated GeoTIFF is inserted in the UUID box and the GML is inserted in the XML box during the image compression, at 68, such that the geolocation metadata is embedded in the JP2 image. A GeoJPEG2000 image is then obtained and output at 70.
• In an embodiment of the invention, a toolkit for image compression and metadata insertion is developed using Java version J2SE 1.4.2. Object-oriented interfaces for manipulating different formats of images can be provided by various vendors. Examples of APIs that are used are Java Advanced Imaging (JAI), Java Image I/O, LuraWave.jp2 Java and GeoTIFF-JAI. JAI is a cross-platform, flexible, extensible toolkit for adding advanced image processing capabilities to applications for the Java platform. The Java Image I/O API provides a pluggable architecture for working with images stored in files, and offers substantially more flexibility and power than the previously available APIs for loading and saving images. LuraWave.jp2 JAVA/JNI-SDK for Windows (demo version) is a part of the LuraWave.jp2 image compression software family; it is based on Algo Vision LuraTech's implementation of the JPEG2000 image compression standard and is fully compliant with Part 1 of the JPEG2000 International Standard. GeoTIFF-JAI is a "geotiff" extension to the Java Advanced Imaging component and is an open-source interface developed by Niles Ritter.
• In an embodiment of the invention, the front end and the code to compute the image quality metrics are developed using Matlab 6.5.1, release 13. Matlab provides a Java interface to access classes written in Java and to call their methods.
• Image quality metrics are figures of merit used for the evaluation of imaging systems or processes. Image quality metrics can be broadly classified into two categories, subjective and objective. In objective measures of image quality, statistical indices are calculated to indicate the reconstructed image quality. The image quality metrics provide a measure of closeness between two digital images by exploiting the differences in the statistical distribution of pixel values. Examples of error metrics used for comparing compression are the mean square error (MSE) and the peak signal to noise ratio (PSNR). The MSE, RMSE and correlation metrics are described in the paragraphs above.
• The peak signal to noise ratio (PSNR) estimates the quality of a reconstructed image compared with the original image and is a standard way to measure image fidelity. The "signal" is the original image and the "noise" is the error in the reconstructed image due to compression and decompression. PSNR is a single number that reflects the quality of the reconstructed image and is measured in decibels (dB):
• $\mathrm{PSNR} = 20\log_{10}\left(\frac{S}{\mathrm{RMSE}}\right)$  (9)
• where S is the maximum pixel value and RMSE is the root mean square error of the image. The actual value of the PSNR is not meaningful in isolation, but a comparison between the values for different reconstructed images gives a measure of quality. As seen from the inverse relation between MSE (and RMSE) and PSNR, a low value of MSE and/or RMSE translates to a higher value of PSNR; this implies that a higher value of PSNR is better.
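• Using the rmse helper sketched earlier, equation (9) can be evaluated as follows; the peak value S is, e.g., 255 for 8-bit data or 2047 for 11-bit data:

    def psnr(original: np.ndarray, reconstructed: np.ndarray,
             peak: float = 255.0) -> float:
        # PSNR of eq. (9) in dB; infinite for a perfect reconstruction.
        r = rmse(original, reconstructed)
        return float("inf") if r == 0.0 else float(20.0 * np.log10(peak / r))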
• In an embodiment of the invention, geo-referencing metadata is inserted into a JPEG2000 (JP2) file during compression, for example using the method illustrated in FIG. 11. A UUID box with the specified UUID is created, and a degenerated GeoTIFF is created and inserted into the data section of the UUID box. An XML box of the JPEG2000 file is also filled with a minimal set of GML used for geo-location. Since the JPEG2000 file created by the application meets both standards, the geo-location information of the file should be compatible with most GIS applications that support the JP2 file format of JPEG2000.
• FIG. 12 shows a process flow diagram for compressing/decompressing an image and computing quality metrics, according to an embodiment of the present invention. An image file is input, at 80. A JPEG2000 compression is performed on the image file, at 82, to obtain a compressed image at 84. The compression at 82 is performed at a certain compression ratio, and the resultant compressed image at 84 may comprise lossy or lossless information. The compressed image at 84 is then decompressed to obtain a decompressed image at 86. Quality metrics are then computed, at 88, to compare the original image at 80 and the decompressed image at 86. The quality metrics obtained at 88 can be stored at 90.
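• The FIG. 12 flow can be approximated in a few lines, assuming Pillow built with its OpenJPEG-backed JPEG2000 plugin; the file names are placeholders, the image is reduced to a single 8-bit band for simplicity, and mse, rmse and psnr are the helpers sketched earlier:

    from PIL import Image
    import numpy as np

    def jp2_roundtrip_metrics(tiff_path: str, ratio: float) -> dict:
        # Compress to JPEG2000 at a target rate, decompress, then score
        # the reconstruction against the original (steps 80-90 of FIG. 12).
        original = np.asarray(Image.open(tiff_path).convert("L"),
                              dtype=np.float64)
        Image.fromarray(original.astype(np.uint8)).save(
            "compressed.jp2", quality_mode="rates", quality_layers=[ratio])
        reconstructed = np.asarray(Image.open("compressed.jp2"),
                                   dtype=np.float64)
        return {"mse": mse(original, reconstructed),
                "rmse": rmse(original, reconstructed),
                "psnr": psnr(original, reconstructed, peak=255.0)}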
• In an embodiment of the invention, reversible compressions were performed at different ratios on a test image and the JPEG2000 file was decompressed back to the TIFF file format. The quality metrics were then calculated to compare the original and the reconstructed images. The test image is a 1024×1024 pixel subset of a Quickbird multi-spectral image of the Memphis, Tenn. area. The image is compressed at various compression ratios and then decompressed using the JPEG2000 method, and the quality metrics of the reconstructed image are computed using the original image as a benchmark.
  • FIG. 13A depicts the original image. FIG. 13B depicts the original image compressed at the compression ratio of 1:5. FIG. 13C depicts the original image compressed at the compression ratio of 1:10. FIG. 13D depicts the original image compressed at a compression ratio of 1:20. FIG. 13E depicts the original image compressed at a compression ratio of 1:30. FIG. 13F depicts the original image compressed at a compression ratio of 1:50. FIG. 13G depicts the original image compressed at a compression ratio of 1:100. FIG. 13H depicts the original image compressed at a compression ratio of 1:150. FIG. 13I depicts the original image compressed at a compression ratio of 1:200.
• A visual comparison of the reconstructed images with the original image shows that the reconstructed images lose some fine details as the compression ratio is increased. However, the original image and the reconstructed image compressed at a ratio of 1:30 are indistinguishable on visual inspection, though the differences become more pronounced with each successive increase in compression ratio.
  • Similarly, when the above images are zoomed such that details are more visible, difference in various pixels can be seen as the encoding ratio increases. FIGS. 14A-K depict zoomed images with increasing compression ratios. FIG. 14A depicts the original zoomed image. FIG. 14B depicts the original zoomed image compressed at the compression ratio of 1:5. FIG. 14C depicts the original zoomed image compressed at the compression ratio of 1:10. FIG. 14D depicts the original zoomed image compressed at a compression ratio of 1:20. FIG. 14E depicts the original zoomed image compressed at a compression ratio of 1:30. FIG. 14F depicts the original zoomed image compressed at a compression ratio of 1:40. FIG. 14G depicts the original zoomed image compressed at a compression ratio of 1:50. FIG. 14H depicts the original zoomed image compressed at a compression ratio of 1:60. FIG. 14I depicts the original zoomed image compressed at a compression ratio of 1:100. FIG. 14J depicts the original zoomed image compressed at a compression ratio of 1:150. FIG. 14K depicts the original zoomed image compressed at a compression ratio of 1:200.
• Even though the images appear similar to the naked eye, the quality metrics show that the images are being distorted, as can be seen from the MSE and RMSE values of the images provided in TABLE 11 and the PSNR values provided in TABLE 12. Similarly, the correlation coefficients are provided in TABLE 13.
  • TABLE 11
    Compression Band 1 Band 2 Band 3 Band 4
    Ratio MSE RMSE MSE RMSE MSE RMSE MSE RMSE
  Lossless 0 0 0 0 0 0 0 0
    2 0 0 0 0 0 0 0 0
    3 11.4448 3.383 11.0364 3.3221 11.4331 3.3813 10.4425 3.2315
    4 13.5028 3.6746 12.0749 3.4749 13.6729 3.6977 12.6368 3.5548
    5 17.6446 4.2005 13.8873 3.7266 18.5525 4.3073 14.8448 3.8529
    10 38.2085 6.1813 29.0712 5.3918 43.6054 6.6034 45.2104 6.7239
    20 83.5791 9.1422 71.3339 8.4459 103.9162 10.1939 139.143 11.7959
    30 116.0723 10.7737 117.408 10.8355 156.6261 12.515 267.2953 16.3492
    40 156.4596 12.5084 167.5248 12.9431 215.1113 14.6667 374.4435 19.3505
    50 188.6118 13.7336 243.0937 15.5915 293.9313 17.1444 432.1737 20.7888
    100 315.9 17.7738 426.5 20.6514 483.6 21.9911 1003.4 31.676
    150 353.6 18.8042 578.8 24.0581 632.8 25.155 1528.3 39.0937
  200 432.9 20.8052 790.2 28.1101 836.7 28.9266 1813.7 42.5872
  • TABLE 12
  Compression
  Ratio Band 1 Band 2 Band 3 Band 4
  Lossless ∞ ∞ ∞ ∞
  2 ∞ ∞ ∞ ∞
    3 197.4314 197.7948 197.4417 198.348
    4 195.7778 196.8955 195.6526 196.4407
    5 193.1025 195.497 192.6008 194.8303
    10 185.3762 188.1093 184.055 183.6935
    20 177.5489 179.1331 175.3709 172.4518
    30 174.2647 174.1502 171.2682 165.9233
    40 171.2788 170.5955 168.0952 162.5524
    50 169.4099 166.8723 164.9733 161.1185
    100 164.2522 161.2512 159.9941 152.6956
    150 163.1252 158.1974 157.3056 148.4876
    200 161.1028 155.0842 154.5116 146.7757
  • TABLE 13
  Compression Ratio Band 1&2 Band 1&3 Band 1&4 Band 2&3 Band 2&4 Band 3&4
    Original 0.9863 0.9446 0.6589 0.9769 0.7289 0.8027
    Lossless 0.9863 0.9446 0.6589 0.9769 0.7289 0.8027
    2 0.9863 0.9446 0.6589 0.9769 0.7289 0.8027
    3 0.9858 0.9441 0.6594 0.9767 0.7295 0.8032
    4 0.9855 0.9433 0.6585 0.9765 0.7293 0.803
    5 0.9852 0.9429 0.6582 0.9766 0.7291 0.8029
    10 0.9849 0.9432 0.6594 0.9776 0.7287 0.8023
    20 0.983 0.9462 0.6649 0.9801 0.7277 0.7973
    30 0.9808 0.9468 0.6664 0.9809 0.7258 0.794
    40 0.9776 0.9463 0.6684 0.9818 0.7246 0.7896
    50 0.9756 0.9462 0.67 0.9821 0.7198 0.7833
    100 0.9674 0.946 0.6773 0.9845 0.7186 0.7744
    150 0.9667 0.9473 0.6723 0.9853 0.7173 0.7714
    200 0.964 0.9473 0.6782 0.9863 0.7172 0.7672
• FIG. 15 is a plot of MSE values versus compression ratio for the various bands (band 1, band 2, band 3, and band 4) in the images. FIG. 16 is a plot of RMSE values versus compression ratio for the various bands in the images. FIG. 17 is a plot of PSNR values versus compression ratio for the bands in the images. FIG. 18 is a plot of correlation between bands (bands 1 and 2, bands 1 and 3, bands 1 and 4, bands 2 and 3, bands 2 and 4, and bands 3 and 4) versus compression ratio.
• As expected, the MSE and RMSE are equal to 0 and the PSNR is infinite when lossless compression is performed. Lossless compression reduces the size of the image by around a factor of 2; therefore, a lossy compression ratio of 2 performs as well as lossless compression. As the encoding ratio increases, the MSE and RMSE values also increase accordingly, implying that the distortion in the image increases as the compressed image gets smaller in size, which accords with theoretical expectations.
• Another interesting observation is that the fourth band (near infrared) had the maximum values of MSE and RMSE, which is understandable because that band contains larger pixel values and is therefore distorted more than the other bands.
• Similarly, PSNR values decrease as the compression ratio increases. In band 1, as the compression ratio goes from 1:3 to 1:200, the PSNR falls from 197.43 dB to 161.1 dB, a range of 36.33 dB. Furthermore, it can be seen that the PSNR value decreases most in the fourth band.
• However, an interesting observation is that the correlation between the different bands of the images does not change by much even when the compression is performed at a ratio of 1:200. As can be seen from FIG. 18 and TABLE 13, the correlation between different bands does not exhibit much difference, i.e., it remains substantially constant. This shows that the distortion is comparable among the bands.
  • In the following paragraphs, a method and system for analyzing and estimating horizontal accuracy in imaging systems such as mapping and the like is described.
• Mechanical limitations of the instrument, sensor position and orientation, curvature of the earth, and unforeseen human errors are some of the sources of the mapping inaccuracies usually encountered in mapping (e.g., geospatial mapping) or imaging processes. One such spatial discrepancy is the horizontal positional inaccuracy of the remotely acquired image. Due to the aforementioned sources of error, the horizontal positional information of an object obtained from a remotely acquired image may deviate from its true real-world measurement. Although some of the potential causes of spatial error can be substantially eliminated or reduced, estimation and/or evaluation of horizontal inaccuracies may be needed to assess the reliability of the information retrieved from the image.
  • The horizontal positional error of an object can be represented by a random variable pair (x, y). The random variables x and y correspond to the error encountered in the X (longitude) and Y (latitude) directions respectively. The error can be considered as a deviation of the measured values from the true values. The two random variables can be assumed to be independent, with a Gaussian distribution and zero mean. The joint probability density distribution for these random variables (x, y) is given by the following equation:
• $p(x,y) = \frac{1}{2\pi\sigma_x\sigma_y}\,e^{-\frac{1}{2}\left(\frac{x^{2}}{\sigma_x^{2}} + \frac{y^{2}}{\sigma_y^{2}}\right)}$  (10)
  • By rearranging equation (10), equation (11) is obtained.
• $-2\ln\left[p(x,y)\,2\pi\sigma_x\sigma_y\right] = \frac{x^{2}}{\sigma_x^{2}} + \frac{y^{2}}{\sigma_y^{2}}$  (11)
• As can be observed from equation (11), for a given value (x, y) the probability density function represents the square of the radius of a circle, assuming that the variances (σx and σy) in the two dimensions are equal. The probability for an error random variable pair (x, y) to be contained within a circle of radius R can be defined by the circular error probability function P(R). The circular error probability function can be derived from equation (11). A condensed form of P(R) for the case when σx and σy are equal is given by the following equation:
• $P(R) = 1 - e^{-\frac{R^{2}}{2\sigma_c^{2}}}$  (12)
• where σx=σy=σc and R is the radial distance.
• The National Map Accuracy Standard (NMAS) specifies that 90% of well-defined points in an image or map should fall within a certain radial distance R. Therefore, substituting the left-hand side of equation (12) with 0.90 yields the horizontal accuracy standard as specified by NMAS, which is given by the following equation:

• $\mathrm{CE90} = 2.1460\,\sigma_c$  (13)
• where σx=σy=σc.
  • The calculation for σx is shown below:
• $\sigma_x = \sqrt{\frac{\sum\left(x_{\mathrm{image}} - x_{\mathrm{realworld}}\right)^{2}}{n}}$  (14)
• where x_image and x_realworld are the coordinates of the control points measured from the image and the real world, respectively, and n is the number of such control points. σy is calculated similarly, as shown in equation (15):
• $\sigma_y = \sqrt{\frac{\sum\left(y_{\mathrm{image}} - y_{\mathrm{realworld}}\right)^{2}}{n}}$  (15)
• where y_image and y_realworld are the coordinates of the control points measured from the image and the real world, respectively, and n is the number of such control points.
• For cases where σx and σy are not equal, the error distribution takes on a more elliptical shape rather than being truly circular. Even so, it can be shown that a Gaussian circular distribution can still be substituted for an elliptical distribution for certain σmin/σmax ratios, where σmin is the minimum value between σx and σy, and σmax is the maximum value between σx and σy.
• For cases where σx and σy are not equal and the σmin/σmax ratio is between 0.6 and 1.0, it can be shown that σc is estimated by a linear combination of σx and σy, as given by the following equation:

• $\sigma_c = 0.5222\,\sigma_{\min} + 0.4778\,\sigma_{\max}$  (16)
• where σmin is the minimum value between σx and σy, and σmax is the maximum value between σx and σy. A further approximation of equation (16) is given in equation (17), which is adopted by the NSSDA (Federal Geographic Data Committee, 1998) as the United States standard for spatial data:

• $\sigma_c = 0.5\left(\sigma_{\min} + \sigma_{\max}\right)$  (17)
• For cases where σx and σy are not equal and the σmin/σmax ratio is between 0.2 and 0.6, σc is estimated using an interpolated value from statistical data that relates σmin/σmax to σc/σmax.
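• Equations (13) through (17) can be combined into a small estimator, sketched below under the caveat that the 0.2-0.6 regime relies on tabulated interpolation data not reproduced here:

    import numpy as np

    def ce90(image_xy: np.ndarray, world_xy: np.ndarray) -> float:
        # image_xy and world_xy are (n, 2) arrays of control point (X, Y)
        # coordinates measured from the image and the real world.
        err = image_xy.astype(np.float64) - world_xy.astype(np.float64)
        sigma_x = np.sqrt(np.mean(err[:, 0] ** 2))   # eq. (14)
        sigma_y = np.sqrt(np.mean(err[:, 1] ** 2))   # eq. (15)
        s_min, s_max = sorted((sigma_x, sigma_y))
        if s_min / s_max < 0.6:
            # The toolkit interpolates sigma_c from tabulated data here.
            raise ValueError("sigma_min/sigma_max < 0.6 needs tabulated data")
        sigma_c = 0.5222 * s_min + 0.4778 * s_max    # eq. (16)
        return 2.1460 * sigma_c                      # eq. (13)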
  • A computer algorithm (CE90 TOOLKIT version 1.0) is developed to allow a user to automate circular error distribution analysis procedures. In an embodiment of the invention, the computer code is written in Matlab. However, it must be appreciated that other computer languages and/or computer mathematical packages may be used.
  • The coordinates for the ground control points (GCP), which are obtained from the remotely acquired image and measured using the global positioning system (GPS), can be loaded into the toolkit (code) as input files or through data entry forms.
• In an embodiment of the invention, the CE90 toolkit includes a graphical user interface (GUI). The graphical user interface is shown in FIG. 19. The data entry form is shown in FIG. 20. Upon loading the data using either of the data input methods, by inputting true (real-world) locations and image locations, the data is displayed on the data list window as shown in FIG. 21. Along with the data listing, the root mean square errors in both directions, σx and σy, are calculated and displayed (RMSEx and RMSEy in FIG. 21). The σmin/σmax ratio is also calculated and displayed on the user interface (RMSEmin/RMSEmax in FIG. 21).
  • In an embodiment of the invention, the graphical user interface is configured to interactively tie a GPS point to a point in the imagery. The results may be stored in a simple text file or other file formats. The values are computed based on the points interactively chosen.
• The images are automatically stretched to enhance contrast and to ease visualization of the image data. This functionality is added for display purposes only.
• In an embodiment of the invention, to perform an error distribution analysis, a user can choose options from the tools pull-down menu. The tools pull-down menu allows the user to select radial plot or vector plot options, as shown in FIG. 22. The radial plot option allows the user to generate error distribution plots, on which the user can visually verify where the positional errors fall with respect to the CE90 value estimated using equation (13). However, the value may also be computed using different parameters. For example, instead of using 2.1460σc (when 90% of well-defined points in an image is specified), 2.4477σc (when 95% of well-defined points in an image is specified) can be used.
• An example case is shown in FIG. 22. The user can select the data points by highlighting them on the data list window as shown in FIG. 21. The graphical user interface also displays the method that is used to calculate the CE90 value. In this example, the σmin/σmax ratio is found to be between 0.6 and 1.0, and hence the corresponding strategy is adopted in the estimation of σc. The circle in FIG. 22 represents the CE90 radius calculated according to equation (13).
• In addition, in an embodiment of the invention, the CE90 graphical user interface allows the user to display the offset vector plot, which represents the magnitude and direction of the error random variables (x, y). This is done by choosing the offset plot from the tools pull-down menu. The user can also input an appropriate scale value to make the error magnitudes and directions more visible. FIG. 23 shows the offset vector plot generated by the CE90 toolkit.
  • While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope of the present invention. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement the invention in alternative embodiments. Thus, the present invention should not be limited by any of the above-described exemplary embodiments.
• Moreover, the methods and systems of the present invention, which, like related systems and methods used in the imaging arts, are complex in nature, are often best practiced by empirically determining the appropriate values of the operating parameters, or by conducting computer simulations to arrive at the best design for a given application. Accordingly, all suitable modifications, combinations and equivalents should be considered as falling within the spirit and scope of the invention.
• In addition, it should be understood that the figures are presented for example purposes only. The architecture of the present invention is sufficiently flexible and configurable that it may be utilized in ways other than those shown in the accompanying figures.
  • Further, the purpose of the Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract of the Disclosure is not intended to be limiting as to the scope of the present invention in any way.

Claims (40)

1. A method of evaluating the effects of image manipulation, such as sharpening and compressing an original image, by comparing the original image to the manipulated image results through use of a collection of quality metrics, the method comprising:
applying a principal component analysis to a multispectral image to obtain a plurality of principal components;
replacing a first component in the plurality of principal components by a panchromatic image;
resampling remaining principal components to a resolution of the panchromatic image; and
applying an inverse principal analysis to the panchromatic image and the remaining principal components to obtain a fused image of the panchromatic image and the multispectral image.
2. The method of claim 1, further comprising computing quality metrics on the fused image.
3. The method of claim 2, wherein the computing of the quality metrics comprises computing a mean square error value between a band of the fused image and a band of the multispectral image.
4. The method of claim 2, wherein the computing of the quality metrics comprises computing a root mean square value between a band of the fused image and a band of the multispectral image.
5. The method of claim 2, wherein the computing of the quality metrics comprises computing a correlation between a first band in the multispectral image and a second band in the multispectral image and between a first band in the fused image and a second band in the fused image.
6. The method of claim 2, wherein the computing of the quality metrics comprises computing a correlation between a band in the multispectral image and the panchromatic image and between a band in the fused image and the panchromatic image.
7. The method of claim 2, wherein the computing of the quality metrics comprises computing a relative shift mean for each band of the fused image.
8. The method of claim 2, wherein the computing of the quality metrics comprises computing histograms of bands in the multispectral image and computing histograms of bands in the fused image.
9. The method of claim 8, further comprising comparing between a histogram of a band in the multispectral image and a histogram of a band in the fused image.
10. The method of claim 2, wherein the computing of the quality metrics comprises computing an image noise index for each band of the fused image.
11. The method of claim 10, wherein a negative value of the image noise index for a band corresponds to a degradation of spectral information for the band.
12. The method of claim 2, wherein the computing of the quality metrics comprises computing a normalized difference vegetation index (NDVI) for the fused image and computing a normalized difference vegetation index for the multispectral image and correlating the normalized difference vegetation index for the fused image and the normalized difference vegetation index for the multispectral image.
13. A method of evaluating the effects of image manipulation, such as sharpening and compressing an original image, by comparing the original image to the manipulated image results through use of a collection of quality metrics, the method comprising:
applying a wavelet-based pansharpening to a plurality of bands in a multispectral image and a panchromatic image to obtain a pansharpened image; and
computing quality metrics on the pansharpened image.
14. The method of claim 13, wherein the applying of the wavelet-based pansharpening comprises using a bi-orthogonal mother wavelet.
15. The method of claim 13, further comprising applying filtering on the pansharpened image to remove noise.
16. The method of claim 15, wherein applying the filtering comprises applying a Wiener filter on the pansharpened image.
17. The method of claim 13, wherein the computing of the quality metrics comprises computing a root mean square value for each band of the pansharpened image.
18. The method of claim 13, wherein the computing of the quality metrics comprises computing a correlation between a first band in the multispectral image and a second band in the multispectral image and between a first band in the pansharpened image and a second band in the pansharpened image.
19. The method of claim 13, wherein the computing of the quality metrics comprises computing a correlation between a band in the multispectral image and the panchromatic image and between a band in the pansharpened image and the panchromatic image.
20. The method of claim 13, wherein the computing of the quality metrics comprises computing a relative shift mean for each band of the pansharpened image.
21. A method of evaluating the effects of image manipulation, such as sharpening and compressing an original image, by comparing the original image to the manipulated image results through use of a collection of quality metrics, the method comprising:
preprocessing an image;
applying a discrete wavelet transform on the preprocessed image to decompose the preprocessed image into a plurality of sub-bands;
applying a quantization to each sub-band in the plurality of sub-bands;
partitioning the plurality of sub-bands into a plurality of code-blocks;
encoding each code-block in the plurality of code-blocks independently to obtain a code-blocks stream;
applying a rate control process to the code-blocks stream to obtain a bit-stream; and
organizing the bit-stream to obtain a compressed image.
22. The method of claim 21, further comprising:
transforming the compressed image using embedded block decoding to obtain embedded decoded block data;
re-composing the embedded decoded block data using an inverse discrete wavelet decomposition process;
performing a dequantization by assigning a single quantum value to a range of values to obtain a dequantized data; and
performing a decoding process on the dequantized data to substantially reconstruct the image.
23. The method of claim 22, wherein the image has a tagged image file format (TIFF).
24. The method of claim 22, wherein the image has a GeoTIFF format.
25. The method of claim 22, wherein the applying of the discrete wavelet transform on the preprocessed image comprises decomposing each preprocessed image tile in a plurality of preprocessed image tiles into high and low sub-bands by filtering the preprocessed image tile with a low-pass filter and a high-pass filter.
26. The method of claim 22, wherein the applying of the quantization to each sub-band in the plurality of sub-bands comprises assigning a range of values to a single quantum value in each sub-band.
27. The method of claim 22, wherein the partitioning of the plurality of sub-bands into the plurality of code-blocks comprises partitioning the plurality of sub-bands into the plurality of code-blocks such that the code-blocks from each sub-band have substantially a same size.
28. A method of evaluating the effects of image manipulation, such as sharpening and compressing an original image, by comparing the original image to the resulting manipulated image through use of a collection of quality metrics, the method comprising:
inputting a GeoTIFF image file;
extracting a GeoTIFF header that contains references to geographic metadata;
creating a degenerate GeoTIFF image using the extracted geographic metadata;
performing a Geography Markup Language (GML) conversion;
inserting the degenerate GeoTIFF image into a universally unique identifier (UUID) box of a JP2 file;
inserting the GML into an extensible markup language (XML) box of the JP2 file; and
compressing the JP2 file using JPEG2000 image compression to obtain a GeoJPEG2000 image file.
29. The method of claim 28, wherein the compressing using the JPEG2000 image compression comprises compressing with LuraWave.jp2 image compression code, JP2 Java/JNI-SDK, or GeoTIFF-JAI.
30. The method of claim 28, wherein the compressing using the JPEG2000 image compression comprises using a compression code developed using Java.
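
For claims 28-30, the GeoJP2 convention embeds the degenerate GeoTIFF in a JP2 "uuid" box. A hypothetical sketch of the box packing follows; the layout (4-byte big-endian length, 4-byte type, payload) is the standard JP2 box format, and the UUID shown is the commonly published GeoJP2 identifier, which should be verified against the GeoJP2 specification before use:

```python
# Sketch of packing a degenerate GeoTIFF into a JP2 "uuid" box.
# The UUID below is the widely cited GeoJP2 identifier -- verify it
# against the GeoJP2 specification; function names are hypothetical.
import struct
import uuid

GEOJP2_UUID = uuid.UUID("b14bf8bd-083d-4b43-a5ae-8cd7d5a6ce03")

def make_geojp2_uuid_box(degenerate_geotiff: bytes) -> bytes:
    payload = GEOJP2_UUID.bytes + degenerate_geotiff
    # LBox (total length, big-endian) + TBox ("uuid") + DBox (payload)
    return struct.pack(">I", 8 + len(payload)) + b"uuid" + payload
```
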
31. The method of claim 28, further comprising:
decompressing the GeoJPEG2000 image file to obtain a decompressed TIFF image file; and
computing quality metrics to compare the GeoTIFF image file and the decompressed TIFF image file.
32. The method of claim 31, wherein the compressing of the JP2 file comprises compressing the JP2 file at a plurality of compression ratios.
33. The method of claim 32, wherein the computing of the quality metrics comprises computing a mean square error value for each band of the decompressed image file at each compression ratio in the plurality of compression ratios.
34. The method of claim 33, wherein, as the compression ratio increases, the mean square error value increases for each band.
35. The method of claim 32, wherein the computing of the quality metrics comprises computing a root mean square value for each band of the decompressed image file at each compression ratio in the plurality of compression ratios.
36. The method of claim 35, wherein, as the compression ratio increases, the root mean square value increases for each band.
37. The method of claim 32, wherein the computing of the quality metrics comprises computing a peak signal to noise ratio (PSNR) for each band of the decompressed image file at each compression ratio in the plurality of compression ratios.
38. The method of claim 37, wherein, as the compression ratio increases, the peak signal to noise ratio decreases for each band.
39. The method of claim 32, wherein the computing of the quality metrics comprises computing a correlation between a first band and a second band in the decompressed image file at each compression ratio in the plurality of compression ratios.
40. The method of claim 39, wherein, as the compression ratio increases, the correlation between the first and second bands remains substantially constant.
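
Finally, a sketch of the per-band metrics that claims 33-40 evaluate at each compression ratio, assuming NumPy and 8-bit data; `orig` and `dec` are hypothetical (H, W, B) arrays holding the original and decompressed images, and a caller would loop this over the plurality of compression ratios. The claimed trends follow directly: MSE and RMSE grow with the compression ratio, PSNR falls, and inter-band correlation stays roughly constant:

```python
# Per-band quality metrics in the spirit of claims 33-40 (illustrative).
import numpy as np

def band_quality_metrics(orig, dec, peak=255.0):
    o = orig.astype(np.float64)
    d = dec.astype(np.float64)
    mse = ((o - d) ** 2).reshape(-1, o.shape[2]).mean(axis=0)   # claim 33
    rmse = np.sqrt(mse)                                         # claim 35
    psnr = 10.0 * np.log10(peak ** 2 / np.maximum(mse, 1e-12))  # claim 37
    # Claim 39: correlation between two bands of the decompressed image.
    corr12 = np.corrcoef(d[:, :, 0].ravel(), d[:, :, 1].ravel())[0, 1]
    return mse, rmse, psnr, corr12
```
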
US12/802,448 2005-04-15 2010-06-07 Remote sensing imagery accuracy analysis method and apparatus Abandoned US20100316292A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/802,448 US20100316292A1 (en) 2005-04-15 2010-06-07 Remote sensing imagery accuracy analysis method and apparatus

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US67150805P 2005-04-15 2005-04-15
US67151705P 2005-04-15 2005-04-15
US67152005P 2005-04-15 2005-04-15
US11/279,982 US7733961B2 (en) 2005-04-15 2006-04-17 Remote sensing imagery accuracy analysis method and apparatus
US12/802,448 US20100316292A1 (en) 2005-04-15 2010-06-07 Remote sensing imagery accuracy analysis method and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/279,982 Division US7733961B2 (en) 2005-04-15 2006-04-17 Remote sensing imagery accuracy analysis method and apparatus

Publications (1)

Publication Number Publication Date
US20100316292A1 (en) 2010-12-16

Family

ID=37115798

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/279,982 Active - Reinstated 2028-04-07 US7733961B2 (en) 2005-04-15 2006-04-17 Remote sensing imagery accuracy analysis method and apparatus
US12/802,448 Abandoned US20100316292A1 (en) 2005-04-15 2010-06-07 Remote sensing imagery accuracy analysis method and apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/279,982 Active - Reinstated 2028-04-07 US7733961B2 (en) 2005-04-15 2006-04-17 Remote sensing imagery accuracy analysis method and apparatus

Country Status (2)

Country Link
US (2) US7733961B2 (en)
WO (1) WO2006113583A2 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100315501A1 (en) * 2009-06-16 2010-12-16 Ludwig Lester F Electronic imaging flow-microscope for environmental remote sensing, bioreactor process monitoring, and optical microscopic tomography
US20110037997A1 (en) * 2007-08-31 2011-02-17 William Karszes System and method of presenting remotely sensed visual data in multi-spectral, fusion, and three-spatial dimension images
US20110229046A1 (en) * 2010-03-17 2011-09-22 Yasuhiko Muto Image processing apparatus and image processing method
US8180814B1 (en) * 2011-02-16 2012-05-15 Docbert, LLC System and method for file management
US20140028695A1 (en) * 2012-07-27 2014-01-30 Disney Enterprises, Inc. Image aesthetic signatures
WO2014081570A1 (en) * 2012-11-07 2014-05-30 Eye Drop Imaging Technology, Llc Performing and monitoring drug delivery
EP2765555A1 (en) * 2012-04-25 2014-08-13 Rakuten, Inc. Image evaluation device, image selection device, image evaluation method, recording medium, and program
WO2015100207A1 (en) * 2013-12-27 2015-07-02 Weyerhaeuser Nr Company Method and apparatus for distinguishing between types of vegetation using near infrared color photos
US9646013B2 (en) 2011-02-16 2017-05-09 Docbert Llc System and method for file management
US9709483B2 (en) 2009-06-16 2017-07-18 Lester F. Ludwig Electronic imaging flow-microscope for environmental remote sensing, bioreactor process monitoring, and optical microscopic tomography
CN109166089A (en) * 2018-07-24 2019-01-08 重庆三峡学院 A method for fusing a multispectral image and a panchromatic image
CN109785253A (en) * 2018-12-25 2019-05-21 西安交通大学 A pan-sharpening post-processing method based on enhanced back-projection
CN111027509A (en) * 2019-12-23 2020-04-17 武汉大学 Hyperspectral image target detection method based on a dual-stream convolutional neural network
CN116071640A (en) * 2023-02-17 2023-05-05 自然资源部国土卫星遥感应用中心 Hyperspectral satellite remote sensing image radiation quality evaluation method based on noise factors

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4427001B2 (en) * 2005-05-13 2010-03-03 オリンパス株式会社 Image processing apparatus and image processing program
EP2077191A1 (en) * 2006-09-15 2009-07-08 Chiyoda Gravure Corporation Grain pattern for grain pattern printing, its grain pattern creating method and program, housing material product on which grain pattern is printed, automobile interior component, home electric appliance, and information device
US7936949B2 (en) * 2006-12-01 2011-05-03 Harris Corporation Panchromatic modulation of multispectral imagery
US8249371B2 (en) * 2007-02-23 2012-08-21 International Business Machines Corporation Selective predictor and selective predictive encoding for two-dimensional geometry compression
US7889921B2 (en) * 2007-05-23 2011-02-15 Eastman Kodak Company Noise reduced color image using panchromatic image
US8896712B2 (en) * 2007-07-20 2014-11-25 Omnivision Technologies, Inc. Determining and correcting for imaging device motion during an exposure
US8260085B2 (en) * 2008-06-03 2012-09-04 Bae Systems Information Solutions Inc. Fusion of image block adjustments for the generation of a ground control network
US8350952B2 (en) * 2008-06-04 2013-01-08 Omnivision Technologies, Inc. Image sensors with improved angle response
US8094960B2 (en) * 2008-07-07 2012-01-10 Harris Corporation Spectral calibration of image pairs using atmospheric characterization
US8073279B2 (en) * 2008-07-08 2011-12-06 Harris Corporation Automated atmospheric characterization of remotely sensed multi-spectral imagery
US8078009B2 (en) * 2008-07-08 2011-12-13 Harris Corporation Optical flow registration of panchromatic/multi-spectral image pairs
US8478067B2 (en) * 2009-01-27 2013-07-02 Harris Corporation Processing of remotely acquired imaging data including moving objects
US8260086B2 (en) * 2009-03-06 2012-09-04 Harris Corporation System and method for fusion of image pairs utilizing atmospheric and solar illumination modeling
US8224082B2 (en) * 2009-03-10 2012-07-17 Omnivision Technologies, Inc. CFA image with synthetic panchromatic image
WO2011068807A1 (en) * 2009-12-01 2011-06-09 Divx, Llc System and method for determining bit stream compatibility
US20130188878A1 (en) * 2010-07-20 2013-07-25 Lockheed Martin Corporation Image analysis systems having image sharpening capabilities and methods using same
WO2012142471A1 (en) 2011-04-14 2012-10-18 Dolby Laboratories Licensing Corporation Multiple color channel multiple regression predictor
DE202012013411U1 (en) 2011-04-25 2016-11-15 Terra Bella Technologies Inc. Systems for overhead image and video display
US9218641B1 (en) * 2012-06-19 2015-12-22 Exelis, Inc. Algorithm for calculating high accuracy image slopes
US8923632B2 (en) * 2012-10-22 2014-12-30 The United States Of America, As Represented By The Secretary Of The Navy System and method for encoding standard-formatted images with information
US9251419B2 (en) * 2013-02-07 2016-02-02 Digitalglobe, Inc. Automated metric information network
US10230925B2 (en) 2014-06-13 2019-03-12 Urthecast Corp. Systems and methods for processing and providing terrestrial and/or space-based earth observation video
CN104469374B (en) * 2014-12-24 2017-11-10 广东省电信规划设计院有限公司 Method for compressing image
US10871561B2 (en) 2015-03-25 2020-12-22 Urthecast Corp. Apparatus and methods for synthetic aperture radar with digital beamforming
CN108432049B (en) 2015-06-16 2020-12-29 阿卜杜拉阿齐兹国王科技城 Efficient planar phased array antenna assembly
WO2017091747A1 (en) 2015-11-25 2017-06-01 Urthecast Corp. Synthetic aperture radar imaging apparatus and methods
EP3646054A4 (en) 2017-05-23 2020-10-28 King Abdulaziz City for Science and Technology Synthetic aperture radar imaging apparatus and methods for moving targets
CA3064735C (en) 2017-05-23 2022-06-21 Urthecast Corp. Synthetic aperture radar imaging apparatus and methods
WO2019226194A2 (en) 2017-11-22 2019-11-28 Urthecast Corp. Synthetic aperture radar apparatus and methods
CN109523497A (en) * 2018-10-30 2019-03-26 中国资源卫星应用中心 An optical remote sensing image fusion method
US10825160B2 (en) * 2018-12-12 2020-11-03 Goodrich Corporation Spatially dynamic fusion of images of different qualities
DE102019204527B4 (en) * 2019-03-29 2020-11-19 Technische Universität München Coding/decoding devices and methods for coding/decoding vibrotactile signals
JP7399646B2 (en) * 2019-08-14 2023-12-18 キヤノンメディカルシステムズ株式会社 Data compression device and data compression method
CN110930439B (en) * 2019-12-04 2022-11-29 长光卫星技术股份有限公司 Automatic production system for high-level products from high-resolution remote sensing images
CN111340743B (en) * 2020-02-18 2023-06-06 云南大学 Semi-supervised multispectral and panchromatic remote sensing image fusion method and system
CN111432189B (en) * 2020-05-07 2022-03-11 上海航天计算机技术研究所 Satellite-borne multi-channel image compression and detection integrated device and use method
CN112085684B (en) * 2020-07-23 2023-08-04 中国资源卫星应用中心 Remote sensing image fusion method and device
CN113992838A (en) * 2021-08-09 2022-01-28 中科联芯(广州)科技有限公司 Imaging focusing method and control method of silicon-based multispectral signal
CN116612391B (en) * 2023-07-21 2023-09-19 四川发展环境科学技术研究院有限公司 Illegal land occupation detection method based on spectral remote sensing and multi-feature fusion

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5325449A (en) * 1992-05-15 1994-06-28 David Sarnoff Research Center, Inc. Method for fusing images and apparatus therefor
DE69416717T2 (en) * 1993-05-21 1999-10-07 Nippon Telegraph & Telephone Moving picture encoders and decoders
US6704454B1 (en) * 1999-07-23 2004-03-09 Sarnoff Corporation Method and apparatus for image processing by generating probability distribution of images
US6934420B1 (en) * 1999-12-22 2005-08-23 Trident Systems Incorporated Wave image compression
TW550521B (en) * 2002-02-07 2003-09-01 Univ Nat Central Method for re-building 3D model of house in a semi-automatic manner using edge segments of buildings
KR100480600B1 (en) * 2002-06-12 2005-04-06 삼성전자주식회사 Method and apparatus based on grouped zero tree wavelet image coding algorithm

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110037997A1 (en) * 2007-08-31 2011-02-17 William Karszes System and method of presenting remotely sensed visual data in multi-spectral, fusion, and three-spatial dimension images
US20100315501A1 (en) * 2009-06-16 2010-12-16 Ludwig Lester F Electronic imaging flow-microscope for environmental remote sensing, bioreactor process monitoring, and optical microscopic tomography
US10656076B2 (en) 2009-06-16 2020-05-19 Nri R&D Patent Licensing, Llc Optical tomography optoelectronic arrangements for microscopy, cell cytometry, microplate array instrumentation, crystallography, and other applications
US8885035B2 (en) * 2009-06-16 2014-11-11 Lester F. Ludwig Electronic imaging flow-microscope for environmental remote sensing, bioreactor process monitoring, and optical microscopic tomography
US9709483B2 (en) 2009-06-16 2017-07-18 Lester F. Ludwig Electronic imaging flow-microscope for environmental remote sensing, bioreactor process monitoring, and optical microscopic tomography
US20110229046A1 (en) * 2010-03-17 2011-09-22 Yasuhiko Muto Image processing apparatus and image processing method
US8145006B2 (en) * 2010-03-17 2012-03-27 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method capable of reducing an increase in coding distortion due to sharpening
US9646013B2 (en) 2011-02-16 2017-05-09 Docbert Llc System and method for file management
US8180814B1 (en) * 2011-02-16 2012-05-15 Docbert, LLC System and method for file management
EP2765555A1 (en) * 2012-04-25 2014-08-13 Rakuten, Inc. Image evaluation device, image selection device, image evaluation method, recording medium, and program
EP2765555A4 (en) * 2012-04-25 2015-04-01 Rakuten Inc Image evaluation device, image selection device, image evaluation method, recording medium, and program
US20140028695A1 (en) * 2012-07-27 2014-01-30 Disney Enterprises, Inc. Image aesthetic signatures
US9013497B2 (en) * 2012-07-27 2015-04-21 Disney Enterprises, Inc. Image aesthetic signatures
CN104853701A (en) * 2012-11-07 2015-08-19 滴眼成像技术有限责任公司 Performing and monitoring drug delivery
US20150289805A1 (en) * 2012-11-07 2015-10-15 Eye Drop Imaging Technology, Llc Performing and monitoring drug delivery
US9839391B2 (en) * 2012-11-07 2017-12-12 Eye Drop Imaging Technology, Llc Performing and monitoring drug delivery
WO2014081570A1 (en) * 2012-11-07 2014-05-30 Eye Drop Imaging Technology, Llc Performing and monitoring drug delivery
WO2015100207A1 (en) * 2013-12-27 2015-07-02 Weyerhaeuser Nr Company Method and apparatus for distinguishing between types of vegetation using near infrared color photos
US9830514B2 (en) 2013-12-27 2017-11-28 Weyerhaeuser Nr Company Method and apparatus for distinguishing between types of vegetation using near infrared color photos
CN109166089A (en) * 2018-07-24 2019-01-08 重庆三峡学院 A method for fusing a multispectral image and a panchromatic image
CN109785253A (en) * 2018-12-25 2019-05-21 西安交通大学 A pan-sharpening post-processing method based on enhanced back-projection
CN111027509A (en) * 2019-12-23 2020-04-17 武汉大学 Hyperspectral image target detection method based on a dual-stream convolutional neural network
CN116071640A (en) * 2023-02-17 2023-05-05 自然资源部国土卫星遥感应用中心 Hyperspectral satellite remote sensing image radiation quality evaluation method based on noise factors

Also Published As

Publication number Publication date
US7733961B2 (en) 2010-06-08
WO2006113583A2 (en) 2006-10-26
US20060269158A1 (en) 2006-11-30
WO2006113583A3 (en) 2009-04-16

Similar Documents

Publication Publication Date Title
US7733961B2 (en) Remote sensing imagery accuracy analysis method and apparatus
Amro et al. A survey of classical methods and new trends in pansharpening of multispectral images
Fang et al. A variational approach for pan-sharpening
Choi et al. A new adaptive component-substitution-based satellite image fusion by using partial replacement
Garzelli et al. Optimal MMSE pan sharpening of very high resolution multispectral images
Afjal et al. Band reordering heuristics for lossless satellite image compression with 3D-CALIC and CCSDS
US7505608B2 (en) Methods and apparatus for adaptive foreground background analysis
US8184918B2 (en) Header-based processing of images compressed using multi-scale transforms
US20110007819A1 (en) Method and System for Compression of Hyperspectral or Multispectral Imagery with a Global Optimal Compression Algorithm (GOCA)
Yusuf et al. Spectral information analysis of image fusion data for remote sensing applications
US8538175B1 (en) System and method for representing and coding still and moving images
KR102160687B1 (en) Aviation image fusion method
EP1368972A2 (en) Scalable video coding using vector graphics
Kiema et al. Wavelet compression and the automatic classification of urban environments using high resolution multispectral imagery and laser scanning data
Loncan Fusion of hyperspectral and panchromatic images with very high spatial resolution
Zabala et al. Impact of CCSDS-IDC and JPEG 2000 compression on image quality and classification
Afjal et al. Band reordering heuristic for lossless satellite image compression with CCSDS
Gimona et al. The effect of image compression on synthetic PROBA-V images
CN113888421A (en) Fusion method of multispectral satellite remote sensing image
Marsetic et al. The effect of lossy image compression on object-based image classification – WorldView-2 case study
Blanes et al. Classification of hyperspectral images compressed through 3D-JPEG2000
Hosny et al. Effect of image compression and resampling methods on accuracy of land-cover classification
Hakami Wavelet based multimedia data compression techniques
Roman-Gonzalez Compression based analysis of image artifacts: Application to satellite images
CN114172561B (en) Remote image screening and transmitting method for microsatellite

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION