WO2012093962A1 - Intelligent and efficient computation of point spread function for high speed image processing applications


Info

Publication number
WO2012093962A1
Authority
WO
WIPO (PCT)
Application number
PCT/SG2011/000002
Other languages
French (fr)
Inventor
Wee Soon Ching
Geok See NG
Original Assignee
Nanyang Polytechnic
Application filed by Nanyang Polytechnic
Priority to SG2011075645A (SG182239A1)
Priority to PCT/SG2011/000002
Publication of WO2012093962A1

Classifications

    • G06T5/73


Abstract

This invention relates to a high speed computation technique and system for minimizing image blur caused by out-of-focus regions in a projected image. In particular, this invention makes use of sharpness values and multi-resolution interpolation of the point spread function (PSF) to enhance the computational efficiency. More particularly, this invention provides a technique to enhance the system performance by automatically determining the best focus plane of the system.

Description

INTELLIGENT AND EFFICIENT COMPUTATION OF POINT SPREAD FUNCTION FOR HIGH SPEED IMAGE PROCESSING APPLICATIONS
Field of the Invention
This invention relates to a high speed computation technique and system for minimizing out-of-focus, blurred regions in a projected image. In particular, this invention makes use of sharpness values computed for partitioned regions of a test image and multi-resolution interpolation of point spread functions (PSFs) of certain partitioned regions of the test image to enhance the computational efficiency. More particularly, this invention provides a technique to enhance the system performance by automatically determining a focus plane of the system based on the computed sharpness values and PSFs associated with certain partitioned regions of the test image.
Background of the Invention
In existing projector-camera systems, extensive computations are required to suppress blurriness of an image projected on a surface. Traditional restoration algorithms are commonly used on the blurred output image to recover the original input image. More recently, methods to correct an original image before blurring occurs during projection were studied, including geometric calibration and photometric calibration techniques for single-projector or multiple-projector configurations. These studies produced image calibration techniques that cause an out-of-focus projected image to appear nearly identical to the original image. In other words, the original input image is preconditioned before projection to compensate for out-of-focus blur due to, but not limited to, the positioning of the projector or the geometry of the display surface. These image calibration techniques have various industrial applications in microscope systems, high resolution imaging systems, high speed object analysis systems, projector de-blurring systems, etc. However, it is a problem that these image calibration techniques are usually computationally intensive. Thus, those skilled in the art are constantly striving to provide a calibration technique for minimising image blur that is less computationally intensive, to increase the speed of the calibration and reduce the amount of computational resources needed.
Summary of the Invention
The above and other problems are solved and an advance in the art is made by a system that provides a high speed computation technique for minimizing image blur in a projected image. A first advantage of a system in accordance with this invention is the provision of an efficient method to remove image blur caused by out-of-focus regions in the projected image. A second advantage of a system in accordance with this invention is the provision of a high speed computational technique to remove image blur as compared to conventional techniques, thereby reducing computational time. A third advantage of a system in accordance with this invention is that the focus plane of the system is determined by changing the focus position of the image capturing device and/or the projection device, thereby increasing system performance.
In accordance with an embodiment of this invention, the system performs the following process to correct for blurring in a projected image. The process begins by receiving a test image captured by an image capturing device (e.g. a camera). The test image is then partitioned into sub-regions. A sharpness value is then calculated for a sub-region. The process then determines whether the calculated sharpness value of the sub-region is greater than a first threshold value. The first threshold value is, preferably, in a range of sharpness value 21 to sharpness value 25. If the sharpness value is not greater than the first threshold value, the sub-region is filtered to generate a filtered sub-region by performing a filtering operation such as median filtering, Wiener filtering, etc. A corrected sharpness value is then calculated for the filtered sub-region. The corrected sharpness value is then compared to a second threshold value. The second threshold value is, preferably, in a range of sharpness value 1 to sharpness value 20. If the corrected sharpness value is not greater than the second threshold value, a point spread function (PSF) (or an inverse PSF) is calculated for the filtered sub-region. The PSF resolution is inversely proportional to the sharpness value or corrected sharpness value. The calculated PSF (or inverse PSF) and the corrected sharpness value of the filtered sub-region are then stored in calibration data stored in a memory. In accordance with some embodiments, the whole process described above is then repeated for each of the other sub-regions of the test image.
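By way of illustration only, the following Python sketch outlines the per-sub-region calibration loop described above. It assumes a two-dimensional greyscale test image and a NumPy/SciPy environment; the helper names (sharpness, gaussian_psf, calibrate), the 8 x 10 grid, the threshold values T1 = 23 and T2 = 10, and the mapping from corrected sharpness to the Gaussian width are illustrative assumptions rather than values fixed by the invention beyond the preferred ranges stated above.

    import numpy as np
    from scipy import ndimage

    def sharpness(region):
        # Mean Sobel gradient magnitude: one possible sharpness metric.
        gx = ndimage.sobel(region.astype(float), axis=1)
        gy = ndimage.sobel(region.astype(float), axis=0)
        return float(np.sqrt(gx ** 2 + gy ** 2).mean())

    def gaussian_psf(sigma):
        # Simple 2-D Gaussian blur kernel used here as a stand-in PSF estimate.
        size = max(9, int(6 * sigma) | 1)             # odd kernel size covering roughly +/- 3 sigma
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return h / h.sum()

    def calibrate(test_image, rows=8, cols=10, t1=23.0, t2=10.0):
        # Build calibration data keyed by sub-region index.
        calibration = {}
        sh, sw = test_image.shape[0] // rows, test_image.shape[1] // cols
        for r in range(rows):
            for c in range(cols):
                sub = test_image[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
                s = sharpness(sub)
                if s > t1:                                     # sharp enough: store sharpness only
                    calibration[(r, c)] = {"sharpness": s}
                    continue
                filtered = ndimage.median_filter(sub, size=3)  # remove impulsive noise
                s_corr = sharpness(filtered)
                if s_corr > t2:                                # sufficient focus depth after filtering
                    calibration[(r, c)] = {"sharpness": s_corr}
                else:                                          # blurred: estimate and store a PSF
                    sigma = 1.0 + 20.0 / (s_corr + 1.0)        # assumed mapping: lower sharpness, wider PSF
                    calibration[(r, c)] = {"sharpness": s_corr,
                                           "psf": gaussian_psf(sigma)}
        return calibration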
In accordance with an embodiment of this invention, the system further comprises a projection device (e.g. a projector) for projecting a test image onto a surface, and an image capturing device for capturing the test image and providing the captured test image to a processing unit. In addition to the above described process, the system stores the calculated sharpness value of a sub-region in calibration data stored in the memory if the calculated sharpness value is greater than the first threshold value. The system also stores the corrected sharpness value of a filtered sub-region in calibration data stored in the memory if the corrected sharpness value is greater than the second threshold value. An image is then calibrated based on the calibration data determined from each sub-region of the test image. The test image may be partitioned into equally sized sub-regions. The sharpness value and the corrected sharpness value are calculated using Sobel operators or other operators including, but not limited to, Laplacian operators and Prewitt gradient operators.
Based on the difference in sharpness values between two adjacent sub-regions of the test image, the number of interpolation points needed for estimating the PSF (or inverse PSF) for a sub-region is determined. The calculation of interpolation points for the PSF includes calculation of a percentage of change between a sharpness value of a first sub-region ("sharpness value 1") and a sharpness value of a second sub-region ("sharpness value 2") as follows:
(sharpness value 1 - sharpness value 2) / sharpness value 2 x 100%
One interpolation point is needed if the percentage of change is less than or equal to 30%. Two interpolation points are needed if the percentage of change is greater than 30% but less than 80%. Four interpolation points are needed if the percentage of change is greater than or equal to 80%.
In accordance with an embodiment of this invention, the system determines the total number of sub-regions of a test image having sharpness values greater than a predetermined threshold value. Based on the number of sub-regions having sharpness values greater than the pre-determined threshold value, the focus position of either one or both of the projection device and the image capturing device is/are changed automatically to provide a better focus position to the system. Preferably, the highest number of sub-regions with sharpness values greater than the pre-determined threshold value among several captured test images is used to provide the best focus position to the system. Further, the system also calculates the average sharpness value for sub-regions of a test image. Based on the calculated average sharpness value for sub-regions of the test image, the focus position of either one or both of the projection device and the image capturing device is/are changed automatically to provide a better focus position to the system. Preferably, the highest average sharpness value among several captured test images is used to provide the best focus position to the system.
In accordance with some embodiments of this invention, an original image is then calibrated before projection based on the computed sharpness values and PSFs (or inverse PSFs) stored in the calibration data. The purpose of image calibration is to build a correspondence between the projected image and the captured image so that the original image can be calibrated or pre-conditioned before projecting onto a surface, thereby minimising image blur.
Brief Description of Drawings
The above and other problems are solved by features and advantages of a technique for minimising image blur in a projected image in accordance with this invention described in the following detailed description and shown in the following drawings:
Figure 1 illustrating a processing system performing instructions to provide a system in accordance with an embodiment of this invention;
Figure 2 illustrating a flow diagram of a technique for minimising image blur in accordance with an embodiment of this invention; and
Figure 3 illustrating an image of a test image with a plurality of feature markers.
Detailed Description of the Invention
This invention relates to a high speed computation technique and system for minimizing image blur caused by out-of-focus, blurred regions in a projected image. In particular, this invention makes use of sharpness values computed for partitioned regions of a test image and multi-resolution interpolation of PSFs of certain partitioned regions of the image to enhance the computational efficiency. More particularly, this invention provides a technique to enhance the system performance by automatically determining a focus plane of the system based on the computed sharpness values and PSFs associated with certain partitioned regions of the test image. In this context, the PSF represents the out-of-focus blur of an image (or blurring function), which may be modelled by a Gaussian function. Thus, if the PSF is known from a test image, a preconditioned image may be estimated by performing a suitable preconditioning process (e.g. Wiener filtering, median filtering, etc.) on the original image. As it is impractical to calculate a PSF for each pixel of the projected test image, the test image is partitioned into smaller sub-regions, and for each sub-region it is determined whether a PSF calculation is necessary based on the sharpness value of that sub-region. Although the sharpness value is used in this invention, other suitable image parameters, such as brightness, may be used without departing from this invention, and the choice is left as a design choice to those skilled in the art. The calculated PSFs are stored in calibration data which is then used to pre-condition an original image such that when it is projected via the out-of-focus projector, the projected image looks similar to the original image.
A system for minimising image blur is provided by a process that is performed by hardware, firmware, or a set of software instructions stored in a memory or other media that direct a processing unit to perform the process. Figure 1 illustrates an exemplary embodiment of a processing system that may perform stored instructions to provide a system in accordance with this invention.
Processing system 100 includes Central Processing Unit (CPU) 105. CPU 105 is a processor, microprocessor, or any combination of processors and microprocessors that execute instructions to perform the processes in accordance with the present invention. CPU 105 connects to memory bus 110 and Input/Output (I/O) bus 115. Memory bus 110 connects CPU 105 to memories 120 and 125 to transmit data and instructions between the memories and CPU 105. I/O bus 115 connects CPU 105 to peripheral devices to transmit data between CPU 105 and the peripheral devices. One skilled in the art will recognize that I/O bus 115 and memory bus 110 may be combined into one bus or subdivided into many other busses and the exact configuration is left to those skilled in the art.
A non-volatile memory 120, such as a Read Only Memory (ROM), is connected to memory bus 110. Non-volatile memory 120 stores instructions and data needed to operate various sub-systems of processing system 100 and to boot the system at start-up. One skilled in the art will recognize that any number of types of memory may be used to perform this function.
A volatile memory 125, such as Random Access Memory (RAM), is also connected to memory bus 110. Volatile memory 125 stores the instructions and data needed by CPU 105 to perform software instructions for processes such as the processes for providing a system in accordance with this invention. One skilled in the art will recognize that any number of types of memory may be used to provide volatile memory and the exact type used is left as a design choice to those skilled in the art.
I/O device 130, keyboard 135, display 140, memory 145, network device 150 and any number of other peripheral devices connect to I/O bus 115 to exchange data with CPU 105 for use in applications being executed by CPU 105. I/O device 130 is any device that transmits and/or receives data from CPU 105. Keyboard 135 is a specific type of I/O device that receives user input and transmits the input to CPU 105. Display 140 receives display data from CPU 105 and displays images on a screen for a user to see. Memory 145 is a device that transmits and receives data to and from CPU 105 for storing data to a media. Network device 150 connects CPU 105 to a network for transmission of data to and from other processing systems.
Figure 2 illustrates a flow diagram of process 200 for minimising image blur when projecting an image onto a surface using a test image projected on the same surface by a projection device. The test image is an image with feature markers. An example of a test image is illustrated in Figure 3. Test image 300 includes 8 rows and 10 columns of identical crosses. Each cross is in a region of the test image that is a specified number of pixels in height and length.
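By way of illustration only, a test pattern of the kind shown in Figure 3 can be generated with a sketch such as the following; the image size, cross arm length and stroke width are hypothetical values chosen for illustration and are not specified by the described embodiment.

    import numpy as np

    def make_test_image(height=600, width=800, rows=8, cols=10,
                        arm=12, stroke=3, background=0, foreground=255):
        # Draw an identical cross at the centre of each of the rows x cols regions.
        image = np.full((height, width), background, dtype=np.uint8)
        rh, rw = height // rows, width // cols
        for r in range(rows):
            for c in range(cols):
                cy, cx = r * rh + rh // 2, c * rw + rw // 2
                image[cy - stroke // 2:cy + stroke // 2 + 1, cx - arm:cx + arm + 1] = foreground
                image[cy - arm:cy + arm + 1, cx - stroke // 2:cx + stroke // 2 + 1] = foreground
        return image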
The computational efficiency of a process in accordance with this invention is elaborated in the following example. Three-dimensional (3D) microscopy requires many layers of image imperfection compensation, which are computationally time-intensive. For example, if one direction requires 100 layers of image compensation processes, three dimensions (x, y, z) would require 300 image compensation processes. If the computational time required for an image with a resolution of 1 megapixel per image frame is 10 minutes, 300 image compensation processes could take 50 hours (300 x 10 minutes) to perform using existing systems. The time taken could be even longer as image resolution increases, for example 500 hours at a resolution of 10 megapixels. To resolve the problem, a system in accordance with the present invention improves the computational time by at least 10 times over the existing systems.
The basic procedure of this invention involves two steps: the camera-based calibration step and the image correction step. During the camera-based calibration step, a test image with special features is projected on a display surface and the projected test image is captured by a camera. The purpose of the calibration step is to build a correspondence between projected features (in projector coordinates) and captured features (in camera coordinates). After analysis of a captured test image, calibration information (e.g. sharpness values, etc.) about how to correct an original image is determined. During the image correction step, an original image is corrected according to the calibration results so that blur is removed or minimised when projecting the original image onto the display surface. The aforementioned procedure may be repeated for more than one test image for better calibration results. A process in accordance with this invention may be applied to both uniformly and non-uniformly blurred images.
Process 200 begins in step 201 by receiving a test image. The received test image may be an image of the test image projected on a surface by a projection device that is captured by an image capturing device, such as a digital camera. The image capturing device then transmits the test image to a processing unit performing the processes in accordance with this invention. The image capturing device may be configured either internal or external to the processing unit.
Once the image is received, process 200 partitions the test image into sub-regions in step 202. Preferably, the test image is partitioned into equally sized sub-regions. However, the exact number and size of sub-regions of the test image are left as a design choice to those skilled in the art.
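A partitioning step of this kind might be sketched as follows, assuming the test image is a two-dimensional array; distributing any remainder pixels across the grid is an implementation choice, not a requirement of the invention.

    def partition(image, rows=8, cols=10):
        # Yield ((row, col), sub_region) pairs over an approximately equal grid.
        h, w = image.shape[:2]
        for r in range(rows):
            for c in range(cols):
                y0, y1 = r * h // rows, (r + 1) * h // rows
                x0, x1 = c * w // cols, (c + 1) * w // cols
                yield (r, c), image[y0:y1, x0:x1]

The calibration loop sketched earlier in the summary could equally be written in terms of such a generator.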
In step 203, a sub-region is selected. In step 204, a sharpness value for the selected sub-region is calculated. In the described embodiment, the sharpness value is calculated using the Sobel operator as follows:
sharpness value = (1/n) Σ √(Gx² + Gy²)
where Gx and Gy are Sobel operators for horizontal and vertical directions respectively; and n is the number of pixels in the sub-region. One skilled in the art will recognize that sharpness values may be calculated in other manners without departing from this invention.
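The equation above is reproduced only in outline from the published text; assuming the sharpness metric is the mean Sobel gradient magnitude over the n pixels of the sub-region, it could be computed as in the following sketch, with the 3 x 3 Sobel kernels written out explicitly.

    import numpy as np
    from scipy.signal import convolve2d

    SOBEL_X = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def sobel_sharpness(sub_region):
        # sharpness = (1/n) * sum over all pixels of sqrt(Gx^2 + Gy^2)
        f = sub_region.astype(float)
        gx = convolve2d(f, SOBEL_X, mode="same", boundary="symm")
        gy = convolve2d(f, SOBEL_Y, mode="same", boundary="symm")
        return float(np.sqrt(gx ** 2 + gy ** 2).sum() / f.size)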
The computed sharpness value of the sub-region is then compared with a first threshold value, T1, in step 205. The purpose of comparing the sharpness value to T1 is to minimise the impact of impulsive noise on the sharpness value obtained. In the described embodiment, a higher sharpness value corresponds to a sharper image, i.e. less blur in the sub-region. In accordance with this embodiment, T1 is typically in the range of 21 to 25. However, T1 can be any number and in any range without departing from this invention and is left as a design choice to those skilled in the art based upon, but not limited to, the images being captured and the method used to calculate the sharpness value.
If the sharpness value calculated in step 204 is not greater than T1, a filtering operation is performed on that sub-region in step 206 to generate a filtered sub-region. The filtering operation may be any filtering operation including, but not limited to, median filtering, Wiener filtering, and any other filtering method that removes impulsive noise. A corrected sharpness value is then computed for the filtered sub-region in step 208. Preferably, the corrected sharpness value is calculated in the same manner as the sharpness value calculated in step 204. This step provides a first level of computational efficiency enhancement as filtering is only applied to sub-regions whose sharpness values are not greater than T1. On the other hand, in step 207, if the sharpness value calculated in step 204 is greater than T1, no filtering operation is required for that sub-region because that sub-region is at a sufficient focus depth. As such, the computed sharpness value is stored in the calibration data in a memory associated with the processing unit.
The computed corrected sharpness value for the filtered sub-region is then compared with a second threshold value, T2, in step 209. T2 is preferably less than T1 and more preferably substantially less than T1. In the described embodiment, T2 is in a range of 1 to 20. However, T2 can be any number and in any range depending on, but not limited to, the requirements of the system and method used to determine the corrected sharpness value. Thus, the exact T2 is left as a design choice to those skilled in the art. If the corrected sharpness value is not greater than T2, a PSF (or an inverse PSF) for the filtered sub-region is calculated in step 210. In the described embodiment, the PSF is calculated using a two-dimensional Gaussian h(x,y) of the form:
h(x,y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))
where σ is a constant > 0. However, other methods for determining the PSF may be used without departing from this invention. After the PSF is calculated, the PSF and corrected sharpness value are stored in calibration data in the memory associated with the processing unit in step 212. The resolution of PSF computation is inversely proportional to the corrected sharpness value of that sub-region. Accordingly, the higher the sharpness value of that sub-region, the lower the resolution of PSF computation for that sub-region. This provides a second level of computational efficiency enhancement.
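A discrete kernel following this Gaussian model may be built as in the sketch below; the default kernel-size rule and the renormalisation to unit sum are illustrative assumptions. Because the resolution of the PSF computation is inversely proportional to the corrected sharpness value, a blurrier (lower-sharpness) sub-region would be given a more finely resolved kernel than a sharper one; how σ itself is chosen is not fixed by this sketch.

    import numpy as np

    def gaussian_kernel(sigma, size=None):
        # h(x, y) = (1 / (2*pi*sigma^2)) * exp(-(x^2 + y^2) / (2*sigma^2)),
        # sampled on a size x size grid and renormalised to unit sum.
        if size is None:
            size = int(6 * sigma) | 1          # odd size covering roughly +/- 3 sigma
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
        return h / h.sum()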
If the corrected sharpness value is greater than T2 in step 209, no PSF computation is required for that sub-region. This is because that sub-region is at a sufficient focus depth.
Step 213 then determines whether all of the sub-regions of the test image have been tested. If not, steps 203 to 213 are repeated for each of the remaining sub-regions of the test image until all sub-regions have been processed. When all sub-regions of the test image are completely processed, the processing unit will calibrate an original image using the stored calibration data before projecting the image onto a surface. As such, an image is said to be calibrated or pre-conditioned to remove blurring before it is projected onto a surface. Thus, a substantially focused image is projected onto the surface.
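One way the stored calibration data could be applied to pre-condition a two-dimensional greyscale original image is sketched below, using Wiener-style inverse filtering of each sub-region for which a PSF was stored; the regularisation constant k, the clipping range and the reuse of the calibration dictionary from the earlier sketch are assumptions for illustration, not the only way the described correction could be realised.

    import numpy as np

    def wiener_precondition(sub_region, psf, k=0.01):
        # Apply an approximate inverse PSF in the frequency domain:
        # X = Y * conj(H) / (|H|^2 + k), where k regularises near-zero frequencies.
        h, w = sub_region.shape
        padded = np.zeros((h, w))
        ph, pw = psf.shape
        padded[:ph, :pw] = psf
        padded = np.roll(padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))  # centre the PSF at the origin
        H = np.fft.fft2(padded)
        Y = np.fft.fft2(sub_region.astype(float))
        X = Y * np.conj(H) / (np.abs(H) ** 2 + k)
        return np.clip(np.real(np.fft.ifft2(X)), 0, 255)

    def precondition_image(original, calibration, rows=8, cols=10):
        # Correct only the sub-regions for which a PSF was stored during calibration.
        corrected = original.astype(float)
        h, w = original.shape
        for (r, c), entry in calibration.items():
            if "psf" not in entry:
                continue
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            corrected[y0:y1, x0:x1] = wiener_precondition(original[y0:y1, x0:x1], entry["psf"])
        return corrected.astype(np.uint8)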
In accordance with the described embodiment, a third level of computational efficiency is provided by using the difference in sharpness values between two adjacent sub-regions of the test image to determine the number of interpolation points needed for estimating the PSF for a sub-region. The corrected sharpness value of a filtered sub-region that is required for PSF computation is compared with the sharpness value of a sub-region adjacent to the filtered sub-region to determine how many interpolation points are needed between the two adjacent sub-regions. In accordance with the described embodiment, a standard interpolation is used in the sense that the number of interpolation points needed for estimating the PSF for a sub-region is determined based on the differences in sharpness values as follows:
i) (sharpness value 1 - sharpness value 2) / sharpness value 2 x 100% ≤ 30%;
ii) (sharpness value 1 - sharpness value 2) / sharpness value 2 x 100% > 30% and < 80%; and
iii) (sharpness value 1 - sharpness value 2) / sharpness value 2 x 100% ≥ 80%.
If the sharpness difference between two adjacent sub-regions is less than or equal to 30%, one interpolation point is used. If the sharpness difference between two adjacent sub-regions is greater than 30% but less than 80%, two interpolation points are used. If the sharpness difference between two adjacent sub-regions is greater than or equal to 80%, four interpolation points are used. As such, the greater the difference in sharpness values between two adjacent sub-regions, preferably the closest neighbour sub-regions, the more interpolation points are required. This provides better accuracy and at the same time achieves computational efficiency. Although one, two and four interpolation points are used in accordance with the described embodiment of this invention, the number of interpolation points to be used can be any number depending on the resolution of the original images and is left as a design choice to those skilled in the art. Further, although the sharpness difference is divided into three groups (i.e. ≤ 30%; > 30% and < 80%; ≥ 80%) in accordance with the described embodiment of this invention, the sharpness difference can be divided into any other ranges without departing from this invention and is left as a design choice to those skilled in the art. The interpolation may be in any direction, such as nearest-neighbour interpolation.
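The selection rule described above might be expressed as in the following sketch; taking the absolute value of the change and the handling of a zero reference sharpness are assumptions made here, since the published formula does not state a sign convention.

    def interpolation_points(sharpness_1, sharpness_2):
        # Percentage of change between two adjacent sub-regions:
        # (sharpness value 1 - sharpness value 2) / sharpness value 2 x 100%
        if sharpness_2 == 0:
            return 4                       # undefined ratio treated as the worst case (assumption)
        change = abs(sharpness_1 - sharpness_2) / sharpness_2 * 100.0
        if change <= 30.0:
            return 1
        if change < 80.0:
            return 2
        return 4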
In accordance with some embodiments of this invention, an enhancement in system performance may be provided by determining a focus plane and changing the focus position of the image capturing device and/or the projection device in accordance with the determined focus plane. The best focus position is determined either based on the highest number of sub-regions having sharpness values greater than a predetermined threshold value, or the highest average sharpness value for all sub-regions of the test image. The choice depends on the type of application and is left as a design choice to those skilled in the art.
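A focus-plane search along these lines is sketched below; it assumes a list of candidate focus positions, a capture_at callable that returns a captured test image for a given focus setting, and the sharpness and partition helpers from the earlier sketches, all of which are hypothetical names introduced here for illustration.

    def best_focus_position(positions, capture_at, threshold=23.0,
                            use_count=True, rows=8, cols=10):
        # Score each candidate focus position either by the number of sub-regions
        # whose sharpness exceeds the threshold, or by the average sharpness.
        best_position, best_score = None, float("-inf")
        for position in positions:
            image = capture_at(position)
            values = [sharpness(sub) for _, sub in partition(image, rows, cols)]
            score = (sum(v > threshold for v in values) if use_count
                     else sum(values) / len(values))
            if score > best_score:
                best_position, best_score = position, score
        return best_position

The chosen position would then be applied to the projection device and/or the image capturing device before the calibration loop is run.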
The above describes a system that provides a high speed computation technique for minimizing image blur in a projected image. It is expected that those skilled in the art can and will design alternative embodiments that infringe on this invention as set forth in the following claims.

Claims

1. A system for minimizing image blur comprising: instructions for directing a processing unit to:
1. receive a test image captured by an image capturing device,
2. partition said test image into a plurality of sub-regions,
3. calculate a sharpness value for one of said plurality of sub-regions of said test image,
4. determine whether said sharpness value calculated for said one of said plurality of sub-regions of said test image is greater than a first threshold value,
5. filter said one of said plurality of sub-regions of said test image to generate a filtered sub-region in response to a determination that said sharpness value is not greater than said first threshold value,
6. calculate a corrected sharpness value for said filtered sub-region,
7. determine whether said corrected sharpness value is greater than a second threshold value,
8. calculate a point spread function (PSF) for said filtered sub-region in response to a determination that said corrected sharpness value is not greater than said second threshold value, and
9. store said point spread function (PSF) and said corrected sharpness value of said filtered sub-region in calibration data stored in a memory; and
a media readable by said processing unit for storing said instructions.
2. The system of claim 1 further comprises:
a projection device for projecting a test image onto a surface.
3. The system of claim 1 further comprises:
an image capturing device for capturing said test image and providing said test image to said processing unit.
4. The system of claim 1 wherein said instructions further comprise:
instructions for directing said processing unit to store said sharpness value of said one of said plurality of sub-regions in said calibration data stored in said memory responsive to a determination that said sharpness value is greater than said first threshold value.
5. The system of claim 1 wherein said instructions further comprise:
instructions for directing said processing unit to store said corrected sharpness value of said filtered sub-region generated for said one of said plurality of sub-regions in said calibration data stored in said memory responsive to a determination that said corrected sharpness value is greater than said second threshold value.
6. The system of claim 1 wherein said instructions further comprise:
instructions for directing said processing unit to apply said instructions 3 to 9 to each of said plurality of sub-regions of said test image.
7. The system of claim 6 wherein said instructions further comprise:
instructions for directing said processing unit to calibrate an image based on said calibration data determined from said plurality of sub-regions of said test image.
8. The system of claim 1 wherein said instructions to partition said test image
comprise:
instructions for directing said processing unit to partition said test image into equally sized sub-regions.
9. The system of claim 1 wherein said instructions to calculate said sharpness value comprise:
instructions for directing said processing unit to calculate said sharpness value using a Sobel operator.
10. The system of claim 1 wherein said instructions to calculate said corrected
sharpness value comprise:
instructions for directing said processing unit to calculate said corrected sharpness value using a Sobel operator.
11. The system of claim 1 wherein said instructions to calculate said point spread function (PSF) comprise: instructions for directing said processing unit to calculate a number of interpolation points needed for said point spread function (PSF) based on difference in sharpness values between two adjacent sub-regions of said test image.
12. The system of claim 11 wherein said instructions to calculate said number of interpolation points for said point spread function (PSF) comprise:
instructions for directing said processing unit to:
calculate a percentage of change between a sharpness value of a first one of said plurality of sub-regions (sharpness value 1) and a second one of said plurality of sub-regions (sharpness value 2) wherein said percentage of change is determined by this equation: (sharpness value 1 - sharpness value 2) / sharpness value 2 x 100%;
determine that one interpolation point is needed responsive to said percentage of change being less than or equal to 30%;
determine that two interpolation points are needed responsive to said percentage of change being greater than 30% and less than 80%; and
determine that four interpolation points are needed responsive to said percentage of change being greater than or equal to 80%.
13. The system of claim 1 wherein said instructions further comprise:
instructions for directing said processing unit to determine a total number of sub-regions of said test image having sharpness values greater than a predetermined threshold value.
14. The system of claim 13 wherein said instructions further comprise:
instructions for directing said processing unit to change focus position of a projection device based upon said total number of sub-regions of said test image having sharpness values greater than said pre-determined threshold value.
15. The system of claim 13 wherein said instructions further comprise:
instructions for directing said processing unit to change focus position of an image capturing device automatically based upon said total number of sub-regions of said test image having sharpness values greater than said predetermined threshold value.
16. The system of claim 1 wherein said instructions further comprise: instructions for directing said processing unit to calculate an average sharpness value for said plurality of sub-regions of said test image.
17. The system of claim 16 wherein said instructions further comprise:
instructions for directing said processing unit to change focus position of a projection device based upon said average sharpness value for said plurality of sub-regions of said test image.
18. The system of claim 16 wherein said instructions further comprise:
instructions for directing said processing unit to change focus position of an image capturing device automatically based upon said average sharpness value for said plurality of sub-regions of said test image.
19. The system of claim 1 wherein said first threshold value is in a range of sharpness value 21 to sharpness value 25.
20. The system of claim 1 wherein said second threshold value is in a range of
sharpness value 1 to sharpness value 20.
21. The system of claim 1 wherein said instructions to filter said one of said plurality of sub-regions of said test image is performed by a filtering operation.
22. The system of claim 21 wherein said filtering operation is median filtering.
23. The system of claim 1 wherein said point spread function (PSF) resolution is
inversely proportional to one of said sharpness value and said corrected sharpness value.
24. A method for minimizing image blur with a digital processing system comprising:
1. receiving a test image captured by an image capturing device;
2. partitioning said test image into a plurality of sub-regions;
3. calculating a sharpness value for one of said plurality of sub-regions of said test image;
4. determining whether said sharpness value calculated for said one of said plurality of sub-regions of said test image is greater than a first threshold value;
5. filtering said one of said plurality of sub-regions of said test image to generate a filtered sub-region in response to a determination that said sharpness value is not greater than said first threshold value;
6. calculating a corrected sharpness value for said filtered sub-region;
7. determining whether said corrected sharpness value is greater than a second threshold value;
8. calculating a point spread function (PSF) for said filtered sub-region in response to a determination that said corrected sharpness value is not greater than said second threshold value; and
9. storing said point spread function (PSF) and said corrected sharpness value of said filtered sub-region in calibration data stored in a memory.
25. The method of claim 24 further comprising:
storing said sharpness value of said one of said plurality of sub-regions in said calibration data stored in said memory responsive to a determination that said sharpness value is greater than said first threshold value.
26. The method of claim 24 further comprising:
storing said corrected sharpness value of said filtered sub-region generated for said one of said plurality of sub-regions in said calibration data stored in said memory responsive to a determination that said corrected sharpness value is greater than said second threshold value.
27. The method of claim 24 further comprising:
applying steps 3 to 9 to each of said plurality of sub-regions of said test image.
28. The method of claim 27 further comprising:
calibrating an image based on said calibration data determined from said plurality of sub-regions of said test image.
29. The method of claim 24 further comprising:
partitioning said test image into equally sized sub-regions.
30. The method of claim 24 further comprising:
calculating said sharpness value using a Sobel operator.
31. The method of claim 24 further comprising:
calculating said corrected sharpness value using a Sobel operator.
32. The method of claim 24 further comprising:
calculating a number of interpolation points needed for said point spread function (PSF) based on difference in sharpness values between two adjacent sub-regions of said test image.
33. The method of claim 32 wherein said calculating said number of interpolation points for said point spread function (PSF) further comprises:
calculating a percentage of change between a sharpness value of a first one of said plurality of sub-regions (sharpness value 1) and a second one of said plurality of sub-regions (sharpness value 2) wherein said percentage of change is determined by this equation: (sharpness value 1 - sharpness value 2) / sharpness value 2 x 100%;
determining that one interpolation point is needed responsive to said percentage of change being less than or equal to 30%;
determining that two interpolation points are needed responsive to said percentage of change being greater than 30% and less than 80%; and
determining that four interpolation points are needed responsive to said percentage of change being greater than or equal to 80%.
34. The method of claim 24 further comprising:
determining a total number of sub-regions of said test image having sharpness values greater than a pre-determined threshold value.
35. The method of claim 34 further comprising:
changing focus position of a projection device based upon said total number of sub-regions of said test image having sharpness values greater than said pre-determined threshold value.
36. The method of claim 34 further comprising:
changing focus position of an image capturing device based upon said total number of sub-regions of said test image having sharpness values greater than said pre-determined threshold value.
37. The method of claim 24 further comprising:
calculating an average sharpness value for said plurality of sub-regions of said test image.
38. The method of claim 37 further comprising:
changing focus position of a projection device automatically based upon said average sharpness value for said plurality of sub-regions of said test image.
39. The method of claim 37 further comprising:
changing focus position of an image capturing device automatically based upon said average sharpness value for said plurality of sub-regions of said test image.
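
Claims 34 to 39 suggest a focus-plane search driven either by the count of sub-regions whose sharpness exceeds a pre-determined threshold or by the average sub-region sharpness. A hedged sketch of such a sweep is given below; set_focus, capture_test_image and sharpness_values are hypothetical hooks standing in for the projection device, the image capturing device and the per-sub-region sharpness computation, none of which are fixed at this level of detail.

```python
# Hedged sketch of the focus-plane search implied by claims 34 to 39.
# `set_focus`, `capture_test_image` and `sharpness_values` are hypothetical
# callables; the claims do not fix the search strategy, only the two scores.
def find_focus_plane(candidate_positions, set_focus, capture_test_image,
                     sharpness_values, threshold=23.0, use_average=False):
    best_position, best_score = None, float("-inf")
    for position in candidate_positions:
        set_focus(position)                        # projector and/or camera focus
        values = sharpness_values(capture_test_image())
        if use_average:
            score = sum(values) / len(values)            # claims 37 to 39
        else:
            score = sum(v > threshold for v in values)   # claims 34 to 36
        if score > best_score:
            best_position, best_score = position, score
    return best_position
```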
40. The method of claim 24 wherein said first threshold value is in a range of sharpness value 21 to sharpness value 25.
41. The method of claim 24 wherein said second threshold value is in a range of sharpness value 1 to sharpness value 20.
42. The method of claim 24 wherein said filtering said one of said plurality of sub-regions of said test image is performed by a filtering operation.
43. The method of claim 42 wherein said filtering operation is median filtering.
44. The method of claim 24 wherein said point spread function (PSF) resolution is inversely proportional to one of said sharpness value and said corrected sharpness value.
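
Claim 44 fixes only an inverse relationship between the PSF resolution and the (corrected) sharpness value. One possible reading, with an assumed proportionality constant and kernel-size clamping, is sketched below.

```python
# Illustrative mapping for claim 44: sharper sub-regions get a smaller PSF
# support. The constant k and the size limits are assumptions, not claimed values.
def psf_resolution(sharpness_value, k=64.0, min_size=3, max_size=15):
    size = int(round(k / max(sharpness_value, 1e-6)))
    size |= 1                                   # keep the kernel size odd
    return max(min_size, min(size, max_size))
```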
45. A system for minimizing image blur with a processing unit comprising:
circuitry configured to receive a test image captured by an image capturing device;
circuitry configured to partition said test image into a plurality of sub-regions;
circuitry configured to calculate a sharpness value for one of said plurality of sub-regions of said test image;
circuitry configured to determine whether said sharpness value calculated for said one of said plurality of sub-regions of said test image is greater than a first threshold value;
circuitry configured to filter said one of said plurality of sub-regions of said test image to generate a filtered sub-region in response to a determination that said sharpness value is not greater than said first threshold value;
circuitry configured to calculate a corrected sharpness value for said filtered sub-region;
circuitry configured to determine whether said corrected sharpness value is greater than a second threshold value;
circuitry configured to calculate a point spread function (PSF) for said filtered sub-region in response to a determination that said corrected sharpness value is not greater than said second threshold value; and
circuitry configured to store said point spread function (PSF) and said corrected sharpness value of said filtered sub-region in calibration data stored in a memory.
46. The system of claim 45 further comprising:
circuitry configured to store said sharpness value of said one of said plurality of sub-regions in said calibration data stored in said memory responsive to a determination that said sharpness value is greater than said first threshold value.
47. The system of claim 45 further comprising:
circuitry configured to store said corrected sharpness value of said filtered sub-region generated for said one of said plurality of sub-regions in said calibration data stored in said memory responsive to a determination that said corrected sharpness value is greater than said second threshold value.
48. The system of claim 45 further comprising:
circuitry configured to apply said circuitry of said system to each of said plurality of sub-regions of said test image.
49. The system of claim 48 further comprising:
circuitry configured to calibrate an image based on said calibration data determined from said plurality of sub-regions of said test image.
50. The system of claim 45 further comprising:
circuitry configured to partition said test image into equally sized sub-regions.
51. The system of claim 45 further comprising:
circuitry configured to calculate said sharpness value using a Sobel operator.
52. The system of claim 45 further comprising:
circuitry configured to calculate said corrected sharpness value using a Sobel operator.
53. The system of claim 45 further comprising:
circuitry configured to calculate a number of interpolation points needed for said point spread function (PSF) based on a difference in sharpness values between two adjacent sub-regions of said test image.
54. The system of claim 53 wherein said circuitry configured to calculate said number of interpolation points for said point spread function (PSF) comprises:
circuitry configured to calculate a percentage of change between a sharpness value of a first one of said plurality of sub-regions (sharpness value 1) and a second one of said plurality of sub-regions (sharpness value 2), wherein said percentage of change is determined by this equation: (sharpness value 1 - sharpness value 2) / sharpness value 2 x 100%;
circuitry configured to determine that one interpolation point is needed responsive to said percentage of change being less than or equal to 30%;
circuitry configured to determine that two interpolation points are needed responsive to said percentage of change being greater than 30% and less than 80%; and
circuitry configured to determine that four interpolation points are needed responsive to said percentage of change being greater than or equal to 80%.
55. The system of claim 45 further comprising:
circuitry configured to determine a total number of sub-regions of said test image having sharpness values greater than a pre-determined threshold value.
56. The system of claim 55 further comprising:
circuitry configured to change focus position of a projection device based upon said total number of sub-regions of said test image having sharpness values greater than said pre-determined threshold value.
57. The system of claim 55 further comprising:
circuitry configured to change focus position of an image capturing device based upon said total number of sub-regions of said test image having sharpness values greater than said pre-determined threshold value.
58. The system of claim 45 further comprising:
circuitry configured to calculate an average sharpness value for said plurality of sub-regions of said test image.
59. The system of claim 58 further comprising:
circuitry configured to change focus position of a projection device based upon said average sharpness value for said plurality of sub-regions of said test image.
60. The system of claim 58 further comprising:
circuitry configured to change focus position of an image capturing device automatically based upon said average sharpness value for said plurality of sub-regions of said test image.
61. The system of claim 45 wherein said first threshold value is in a range of sharpness value 21 to sharpness value 25.
62. The system of claim 45 wherein said second threshold value is in a range of sharpness value 1 to sharpness value 20.
63. The system of claim 45 wherein said circuitry configured to filter said one of said plurality of sub-regions of said test image performs a filtering operation.
64. The system of claim 63 wherein said filtering operation is median filtering.
65. The system of claim 45 wherein said point spread function (PSF) resolution is inversely proportional to one of said sharpness value and said corrected sharpness value.
PCT/SG2011/000002 2011-01-03 2011-01-03 Intelligent and efficient computation of point spread function for high speed image processing applications WO2012093962A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SG2011075645A SG182239A1 (en) 2011-01-03 2011-01-03 Intelligent and efficient computation of point spread function for high speed image processing applications
PCT/SG2011/000002 WO2012093962A1 (en) 2011-01-03 2011-01-03 Intelligent and efficient computation of point spread function for high speed image processing applications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SG2011/000002 WO2012093962A1 (en) 2011-01-03 2011-01-03 Intelligent and efficient computation of point spread function for high speed image processing applications

Publications (1)

Publication Number Publication Date
WO2012093962A1 (en) 2012-07-12

Family

ID=46457623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2011/000002 WO2012093962A1 (en) 2011-01-03 2011-01-03 Intelligent and efficient computation of point spread function for high speed image processing applications

Country Status (2)

Country Link
SG (1) SG182239A1 (en)
WO (1) WO2012093962A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7636486B2 (en) * 2004-11-10 2009-12-22 Fotonation Ireland Ltd. Method of determining PSF using multiple instances of a nominally similar scene
US20070286514A1 (en) * 2006-06-08 2007-12-13 Michael Scott Brown Minimizing image blur in an image projected onto a display surface by a projector
KR20070117807A (en) * 2006-06-09 2007-12-13 엘지전자 주식회사 Method and apparatus for improving a sharpness data of (an) image display device
US20080193034A1 (en) * 2007-02-08 2008-08-14 Yu Wang Deconvolution method using neighboring-pixel-optical-transfer-function in fourier domain

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113742204A (en) * 2020-05-27 2021-12-03 南京大学 Deep learning operator testing tool based on fuzzy test
CN113742204B (en) * 2020-05-27 2023-12-12 南京大学 Deep learning operator testing method based on fuzzy test

Also Published As

Publication number Publication date
SG182239A1 (en) 2012-08-30

Similar Documents

Publication Publication Date Title
JP6142611B2 (en) Method for stereo matching and system for stereo matching
CN107633536B (en) Camera calibration method and system based on two-dimensional plane template
Jeon et al. Accurate depth map estimation from a lenslet light field camera
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
US9998666B2 (en) Systems and methods for burst image deblurring
CN110023810B (en) Digital correction of optical system aberrations
US8090214B2 (en) Method for automatic detection and correction of halo artifacts in images
US20130106848A1 (en) Image generation apparatus and image generation method
CN107316326B (en) Edge-based disparity map calculation method and device applied to binocular stereo vision
CN110493488B (en) Video image stabilization method, video image stabilization device and computer readable storage medium
JP2019500762A5 (en)
US10628924B2 (en) Method and device for deblurring out-of-focus blurred images
US20130243346A1 (en) Method and apparatus for deblurring non-uniform motion blur using multi-frame including blurred image and noise image
JP2011028588A (en) Information processing apparatus, line noise reduction processing method, and program
US20130016239A1 (en) Method and apparatus for removing non-uniform motion blur using multi-frame
EP2564234A1 (en) Range measurement using a coded aperture
EP2887310B1 (en) Method and apparatus for processing light-field image
KR102582261B1 (en) Method for determining a point spread function of an imaging system
CN110345875B (en) Calibration and ranging method, device, electronic equipment and computer readable storage medium
KR20230137937A (en) Device and method for correspondence analysis in images
CN104754316A (en) 3D imaging method and device and imaging system
WO2012093962A1 (en) Intelligent and efficient computation of point spread function for high speed image processing applications
Kriener et al. Accelerating defocus blur magnification
JP3959547B2 (en) Image processing apparatus, image processing method, and information terminal apparatus
CN113938578A (en) Image blurring method, storage medium and terminal device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11854602

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11854602

Country of ref document: EP

Kind code of ref document: A1