US20150103200A1 - Heterogeneous mix of sensors and calibration thereof
- Publication number
- US20150103200A1 (application US14/065,810)
- Authority
- US
- United States
- Prior art keywords
- image
- sensor
- attribute
- difference
- calibration
- Prior art date: 2013-10-16
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N17/002 — Diagnosis, testing or measuring for television systems or their details, for television cameras
- H04N5/2258
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/70
- G06T7/13 — Image analysis; Segmentation; Edge detection
- G06T7/85 — Stereo camera calibration
- H04N13/133 — Equalising the characteristics of different image components, e.g. their average brightness or colour balance
- H04N13/246 — Calibration of cameras (stereoscopic image generators)
- H04N13/25 — Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
- H04N23/13 — Cameras or camera modules comprising electronic image sensors for generating image signals from different wavelengths, with multiple sensors
- H04N23/45 — Cameras generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
- H04N23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- H04N23/80 — Camera processing pipelines; Components thereof
- H04N5/217; H04N5/23232; H04N5/2355; H04N9/735
- G06T2207/10004 — Still image; Photographic image
- G06T2207/10012 — Stereo images
- G06T2207/10016 — Video; Image sequence
Description
- Certain cameras, such as light-field or plenoptic cameras, rely upon a lens array over an image sensor and/or an array of image sensors to capture a directional projection of light.
- These approaches use relatively large and specialized image sensors that are generally unsuitable for other applications (e.g., video capture, video conferencing, etc.), use only a fraction of the information captured, and rely upon high levels of processing to deliver even a viewfinder image.
- Further, some of these light-field or plenoptic camera devices require a relatively large height for specialized lens and/or sensor arrays and thus do not present practical solutions for use in cellular telephones.
- FIG. 1A illustrates a system including a heterogeneous mix of image sensors according to an example embodiment.
- FIG. 1B illustrates a device for image capture and calibration using the system of FIG. 1A according to an example embodiment.
- FIG. 2A illustrates a process flow for calibration of the heterogeneous mix of image sensors in the system of FIG. 1A according to an example embodiment.
- FIG. 2B illustrates a process flow for depth map generation using the system of FIG. 1A , after calibration of the heterogeneous mix of image sensors, according to an example embodiment.
- FIG. 3 illustrates an example edge map generated by the edge map generator of FIG. 1A according to an example embodiment.
- FIG. 4 illustrates an example depth map generated by the depth map generator of FIG. 1A according to an example embodiment.
- FIG. 5 illustrates an example process of smoothing performed by the smoother of FIG. 1A according to an example embodiment.
- FIG. 6 illustrates a flow diagram for a process of calibration of a mix of image sensors in the system of FIG. 1A according to an example embodiment.
- FIG. 7 illustrates an example schematic block diagram of a computing environment which may embody one or more of the system elements of FIG. 1A according to various embodiments.
- The embodiments described herein include a heterogeneous mix of sensors which may be relied upon to achieve, among other processing results, image processing results that are similar, at least in some aspects, to those achieved by light-field or plenoptic imaging devices.
- For example, the mix of sensors may be used for focusing and re-focusing images after the images are captured.
- The mix of sensors may also be used for object extraction, scene understanding, gesture recognition, etc.
- Additionally, a mix of image sensors may be used for high dynamic range (HDR) image processing.
- To these ends, the mix of image sensors may be calibrated for focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
- In one embodiment, the heterogeneous mix of sensors includes a main color image sensor having a pixel density ranging from 3 to 20 Megapixels, for example, with color pixels arranged in a Bayer pattern, and a secondary luminance image sensor having a relatively lower pixel density.
- In other embodiments, the main and secondary sensors may be embodied as sensors of any suitable type, pixel resolution, process, structure, or arrangement (e.g., infra-red, charge-coupled device (CCD), 3CCD, Foveon X3, complementary metal-oxide-semiconductor (CMOS), red-green-blue-clear (RGBC), etc.).
- FIG. 1A illustrates a system 10 including a heterogeneous mix of image sensors according to an example embodiment.
- The system 10 includes a processing environment 100, a memory 110, and first and second sensors 150 and 152, respectively, which may be embodied as a heterogeneous mix of image sensors.
- The memory 110 includes memory areas for image data 112 and calibration characteristic data 114.
- The processing environment 100 may be embodied as one or more processors, processing circuits, and/or combinations thereof.
- The processing environment 100 includes embedded (i.e., application-specific) and/or general-purpose processing circuitry and/or software structures that process data, such as image data captured by the first and second sensors 150 and 152. Further structural aspects of the processing environment 100 are described below with reference to FIG. 7.
- The processing environment 100 generally includes elements for focusing and re-focusing of images captured by the first and second sensors 150 and 152, as further described below.
- Among other elements, the processing environment 100 includes a scaler 120, a calibrator 122, a depth map generator 124, an edge map generator 126, a smoother 128, a focuser 130, and an image processor 132.
- The elements of the processing environment 100 may vary among embodiments, particularly depending upon the application for use of the heterogeneous mix of image sensors 150 and 152.
- In certain embodiments, the processing environment 100 may include additional or alternative processing elements or modules.
- The embodiments described herein are generally directed to calibrating operational aspects of the first and second sensors 150 and 152 and/or the image data captured by them. In this way, the first and second sensors 150 and 152 and the images they capture can be used together.
- The first and second sensors 150 and 152 may be embodied as any suitable types of sensors, depending upon the application for use of the system 10.
- For example, the first and second sensors 150 and 152 may be embodied as image sensors having the same or different pixel densities, ranging from a fraction of 1 to 20 Megapixels.
- The first image sensor 150 may be embodied as a color image sensor having a first pixel density, and the second image sensor 152 may be embodied as a luminance image sensor having a relatively lower pixel density.
- The system 10 is generally agnostic to the resolution and format of the first and second sensors 150 and 152, which may be embodied as sensors of any suitable type, pixel resolution, process, structure, or arrangement (e.g., infra-red, charge-coupled device (CCD), 3CCD, Foveon X3, complementary metal-oxide-semiconductor (CMOS), red-green-blue-clear (RGBC), etc.).
- The memory 110 may be embodied as any suitable memory that stores data provided by the first and second sensors 150 and 152, among other data.
- For example, the memory 110 may store image and image-related data for manipulation and processing by the processing environment 100.
- As noted above, the memory 110 includes memory areas for the image data 112 and the calibration characteristic data 114.
- Various aspects of processing and/or manipulation of the image data 112 by the processing environment 100, based, for example, upon the calibration characteristic data 114, are described in further detail below.
- FIG. 1B illustrates a device 160 for image capture and calibration using the system 10 of FIG. 1A according to an example embodiment.
- The device 160 includes the processing environment 100, the memory 110, and the first and second sensors 150 and 152 of FIG. 1A, among other elements.
- The device 160 may be embodied as a cellular telephone, tablet computing device, laptop computer, desktop computer, television, set-top box, personal media player, appliance, etc., without limitation.
- In other embodiments, the device 160 may be embodied as a pair of glasses, a watch, a wristband, or another device that may be worn or attached to clothing. If embodied as a pair of glasses, the sensors 150 and 152 of the device 160 may be positioned at opposite corners of the rims or end-pieces of the glasses.
- As illustrated in FIG. 1B, the first and second sensors 150 and 152 are separated by a first distance X in a first dimension and by a second distance Y in a second dimension.
- The distances X and Y may vary among embodiments, for example, based on aesthetic and/or performance factors, depending upon the application or field of use for the device 160.
- Similarly, the relative positions (e.g., right versus left, top versus bottom, etc.) of the first and second sensors 150 and 152 may vary among embodiments, as may any relative difference in rotational or angular displacement (i.e., R1-R2) between them.
- The device 160 may include one or more additional elements for image capture, such as lenses, flash devices, focusing mechanisms, etc., although these elements may not be relied upon in certain embodiments and may be omitted.
- As noted above, the first and second sensors 150 and 152 may be embodied as sensors of varied operating and structural characteristics (i.e., a heterogeneous mix of sensors).
- The differences in operating characteristics may be identified during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the differences may be identified during post-assembly calibration processes by the calibrator 122. These differences may be quantified as calibration data representative of the operating characteristics of the first and second sensors 150 and 152 and stored in the memory 110 as the calibration characteristic data 114; one possible organization of this data is sketched below.
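The following is a minimal sketch of how the calibration characteristic data 114 might be organized. All class names, fields, and units are assumptions introduced for illustration, not a layout required by the embodiments.

```python
# A minimal sketch of one possible organization for the calibration
# characteristic data 114. Field names and units are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorCharacteristics:
    noise_sigma: float          # measured noise standard deviation (DN)
    dark_current: float         # mean dark-signal offset (DN)
    vignetting_falloff: float   # corner-to-center response ratio
    sensitivity: float          # relative exposure response (unitless gain)
    defective_pixels: List[Tuple[int, int]] = field(default_factory=list)

@dataclass
class CalibrationCharacteristics:
    first: SensorCharacteristics    # e.g., for the first sensor 150
    second: SensorCharacteristics   # e.g., for the second sensor 152

    def sensitivity_ratio(self) -> float:
        """Gain that normalizes the second sensor's exposure to the first's."""
        return self.first.sensitivity / self.second.sensitivity
```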
- In operation, the device 160 is configured to capture images using the first and second sensors 150 and 152.
- Images captured by the first and second sensors 150 and 152 may be focused and re-focused after being captured.
- Additionally or alternatively, the images may be processed according to one or more HDR image processing techniques, for example, or for object extraction, scene understanding, gesture recognition, etc.
- FIG. 2A illustrates a process flow for calibration of the heterogeneous mix of image sensors 150 and 152 in the system 10 of FIG. 1A according to an example embodiment.
- As illustrated in FIG. 2A, the first sensor 150 generates a first image 202, and the second sensor 152 generates a second image 204.
- The first and second images 202 and 204 may be captured at substantially the same time or, in other embodiments, at different times.
- Data associated with the first and second images 202 and 204 may be stored in the memory 110 (FIG. 1A).
- The calibrator 122 may adapt at least one operating parameter of the first sensor 150 or the second sensor 152 to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, without limitation. More particularly, the calibrator 122 may reference the calibration characteristic data 114 in the memory 110 to identify any adjustments to the operating parameters of the first and second sensors 150 and 152, and to accommodate for or balance differences in noise, defective pixels, dark current, vignetting, demosaicing, or white balancing between or among images generated by the sensors.
- For example, the calibrator 122 may adjust one or more of the operating parameters of the first and second sensors 150 and 152 (e.g., operating voltages, timings, temperatures, exposure timings, etc.) to address the difference or differences.
- Generally, the calibrator 122 seeks to align or normalize aspects of the operating characteristics of the first and second sensors 150 and 152. In this way, downstream operations performed by other elements in the system 10 may be aligned, as necessary, for suitable performance and results in image processing.
- As one example, the first sensor 150 may produce images including relatively more noise than the images produced by the second sensor 152.
- This difference in the generation of noise may be embodied in values of the calibration characteristic data 114, for example, in one or more variables, coefficients, or other data metrics.
- The calibrator 122 may refer to the calibration characteristic data 114 and, based on that data, adjust operating parameters of the first and second sensors 150 and 152 in an effort to address the difference.
- As another example, the first sensor 150 may produce images including a first dark current characteristic, and the second sensor 152 may produce images including a second dark current characteristic.
- The difference between these dark current characteristics may be embodied in values of the calibration characteristic data 114, and the calibrator 122 may adjust operating parameters of the first and second sensors 150 and 152 to address it. Although certain examples are provided herein, it should be appreciated that the calibrator 122 may seek to normalize or address other differences in operating characteristics between the first and second sensors 150 and 152, so that a suitable comparison may be made between images produced by them.
- The differences in operating characteristics between the first and second sensors 150 and 152 may be due to various factors, such as different pixel densities, different manufacturing processes used to form the sensors, different pixel array patterns or filters (e.g., Bayer, EXR, X-Trans, etc.), or different sensitivities to light, temperature, operating frequency, operating voltage, or other factors, without limitation.
- These differences may be identified and characterized during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes, and additionally or alternatively during post-assembly calibration processes. The differences may be quantified as calibration data representative of the operating characteristics of the first and second sensors 150 and 152 and stored in the memory 110 as the calibration characteristic data 114.
- In addition to (or instead of) adapting sensor operating parameters, the calibrator 122 may adjust one or more attributes of the first or second images 202 or 204 to substantially address a difference between attributes of the images. For example, based on a difference in sensitivity between the first sensor 150 and the second sensor 152, the calibrator 122 may adjust the exposure of one or more of the first image 202 and the second image 204 to address the difference in exposure. Similarly, based on a difference in noise, the calibrator 122 may filter one or more of the first image 202 and the second image 204 to address a difference in the amount of noise among the images.
- More generally, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation.
- A measure of the differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) may be stored in the calibration characteristic data 114, which the calibrator 122 may reference when adjusting attributes of the first and/or second images 202 and/or 204, as shown in the sketch below.
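A hedged sketch of these image-attribute adjustments follows: exposure normalization from a stored sensitivity ratio, and a simple mean filter standing in for the noise-balancing step. The helper names and the box filter are illustrative assumptions, not the calibrator 122's mandated operations.

```python
import numpy as np

def normalize_exposure(image: np.ndarray, gain: float) -> np.ndarray:
    """Scale pixel values by a calibration gain, clipping to the 8-bit range."""
    return np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def box_denoise(image: np.ndarray, k: int = 3) -> np.ndarray:
    """k x k mean filter over a single-channel image; a stand-in noise filter."""
    pad = k // 2
    padded = np.pad(image.astype(np.float32), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return (out / (k * k)).astype(image.dtype)
```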
- After any initial calibration, the first and second images 202 and 204 may be provided to the scaler 120.
- The scaler 120 downscales and/or upscales images in pixel density. It is noted that, in certain embodiments, the scaler 120 may be omitted from the process flow of FIG. 2A for one or more of the first and second images 202 and 204.
- The scaler 120 is generally relied upon, for example, to reduce the pixel processing loads of other elements in the system 10, to align pixel densities among the first and second images 202 and 204 (e.g., if the first and second sensors 150 and 152 vary in pixel density), and/or to reduce or compact image features.
- The downscaling and/or upscaling operations of the scaler 120 may be embodied according to nearest-neighbor interpolation, bi-linear interpolation, bi-cubic interpolation, supersampling, and/or other suitable interpolation techniques, or combinations thereof, without limitation, as in the sketch below.
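For instance, a block-averaging (supersampling-style) downscale for an integer factor might look like the following; the function and its single-channel, integer-factor assumptions are illustrative only.

```python
import numpy as np

def downscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Average factor x factor blocks of a single-channel image."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)).astype(image.dtype)
```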
- After scaling, the calibrator 122 may again adjust one or more attributes of the first and/or second downscaled images 212 and/or 214 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation.
- In other words, the calibrator 122 may make adjustments to the first and/or second downscaled images 212 and/or 214 at various stages. For example, the adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
- Overall, the calibrator 122 adapts operating parameters of the first and second sensors 150 and 152 and adjusts attributes of the first and second images 202 and 204 to substantially remove, normalize, or balance differences between images, for other downstream image processing activities of the system 10 and/or the device 160.
- For example, the images captured by the system 10 and/or the device 160 may be relied upon in focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
- To that end, the calibrator 122 is configured to adapt and/or adjust certain operating characteristics and attributes into substantial alignment for the benefit of the downstream image processing activities.
- FIG. 2B illustrates a process flow for depth map generation using the system of FIG. 1A , after calibration of the heterogeneous mix of image sensors, according to an example embodiment.
- In the process flow of FIG. 2B, the first image 202 may be compared with the second image 204 according to one or more techniques for image processing.
- Here, it is noted that the first and second images 202 and 204 may be representative of and capture substantially the same field of view.
- Similar or corresponding image information (e.g., pixel data) among the first and second images 202 and 204 is typically shifted in pixel space between the two images, due to the relative difference in position (e.g., illustrated as X, Y, R1, and R2 in FIG. 1B) between the first and second sensors 150 and 152 on the device 160.
- The amount of this shift, per pixel, is representative of depth, because it is dependent upon (i.e., changes with) the relative depths of items within the field of view of the images 202 and 204. Additionally, the image information among the first and second images 202 and 204 is typically shifted in other aspects, such as luminance, color, color coding, pixel density, noise, etc., and these differences should be accounted for by the calibrator 122 of the system 10 before or while processing the images 202 and 204.
- The first and second images 202 and 204 may have the same or different pixel densities, depending upon the respective types and characteristics of the first and second image sensors 150 and 152, and may be of the same or different image formats.
- For example, the first image 202 may include several color components of a color image encoded or defined according to a certain color space (e.g., red, green, blue (RGB); cyan, magenta, yellow, key (CMYK); phase alternating line (PAL); YUV or Y′UV; YCbCr; YPbPr, etc.), and the second image 204 may include a single component of another color space.
- As illustrated in FIG. 2B, the first downscaled image 212 is provided to the edge map generator 126.
- The edge map generator 126 generates an edge map by identifying edges in one or more of the first or second downscaled images 212 and 214. In the embodiment illustrated in FIG. 2B, the edge map generator 126 generates the edge map 222 by identifying edges in the first downscaled image 212, although the edge map 222 may instead be generated by identifying edges in the second downscaled image 214. It should be appreciated that the performance of the edge map generator 126 may be improved by identifying edges in downscaled, rather than higher pixel density, images.
- That is, edges in higher density images may span several (e.g., 5, 10, 15, or more) pixels, while the same edges span relatively fewer pixels in downscaled images and are therefore easier to identify.
- In this context, the scaler 120 may be configured to downscale one or more of the first or second images 202 or 204 so as to provide a suitable pixel density for accurate edge detection by the edge map generator 126.
- FIG. 3 illustrates an example edge map 222 generated by the edge map generator 126 of FIG. 1A according to an example embodiment.
- The edge map 222 is embodied by data representative of edges in the first image 202.
- The edge map generator 126 generates the edge map 222 by identifying pixels or pixel areas in the first image 202 where pixel or pixel area brightness changes quickly or encounters a discontinuity (i.e., at "step changes"). Points at which pixel brightness changes quickly are organized into edge segments in the edge map 222 by the edge map generator 126.
- The changes may be due to changes in surface or material orientation, changes in surface or material properties, or variations in illumination, for example.
- Data associated with the edge map 222 may be stored by the edge map generator 126 in the memory 110 (FIG. 1A). A simple edge-detection sketch follows.
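The sketch below marks pixels where brightness changes quickly, using central differences and a fixed threshold. The operator and the threshold value are illustrative assumptions; the embodiments do not prescribe a particular edge detector.

```python
import numpy as np

def edge_map(luma: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Return a boolean map that is True where a brightness step change occurs."""
    luma = luma.astype(np.float32)
    gx = np.zeros_like(luma)
    gy = np.zeros_like(luma)
    gx[:, 1:-1] = luma[:, 2:] - luma[:, :-2]   # horizontal brightness change
    gy[1:-1, :] = luma[2:, :] - luma[:-2, :]   # vertical brightness change
    return np.hypot(gx, gy) > threshold
```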
- The first and second downscaled images 212 and 214 are also provided to the depth map generator 124.
- The depth map generator 124 generates a depth map including a mapping among relative depth values in a field of view, based on a difference between pixels of a first image and pixels of a second image.
- In FIG. 2B, the depth map generator 124 generates a depth map 224 including a mapping of relative depth values based on differences between pixels of the first downscaled image 212 and pixels of the second downscaled image 214.
- In certain embodiments, the depth map generator 124 (and/or the edge map generator 126) may operate using only the luminance component of images.
- For example, the first sensor 150 may be embodied as a main color image sensor, and the second sensor 152 may be embodied as a secondary luminance-only image sensor.
- The secondary luminance image sensor may not need to be at the full resolution of the main color sensor, because no demosaicing interpolation is required for the luminance image sensor (i.e., the luminance image sensor has a higher effective resolution).
- In such embodiments, downscaling by the scaler 120 may be omitted for the second image 204, for example. A luminance-extraction sketch follows.
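Where the main sensor provides a color image, its luminance component may be extracted before edge or depth processing. The BT.601 weights below are a common convention assumed here for illustration.

```python
import numpy as np

def to_luma(rgb: np.ndarray) -> np.ndarray:
    """Reduce an H x W x 3 RGB image to a single luminance component."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)  # BT.601
    return rgb.astype(np.float32) @ weights
```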
- FIG. 4 illustrates an example depth map 224 generated by the depth map generator 124 of FIG. 1A according to an example embodiment.
- The depth map 224 is embodied by data representative of relative depths in a field of view, based on differences between pixels of the first downscaled image 212 and pixels of the second downscaled image 214.
- In FIG. 4, relatively darker areas are closer in depth and relatively lighter areas are further in depth, from the point of view of the first and second image sensors 150 and 152 and/or the device 160 (FIG. 1B). It should be appreciated that the relatively darker and lighter areas in FIG. 4 are representative of depth values.
- The depth map 224 is referred to as a "raw" depth map, because it is representative of unsmoothed or unfiltered depth values.
- Data associated with the depth map 224 may be stored by the depth map generator 124 in the memory 110 (FIG. 1A).
- The depth map generator 124 may generate the depth map 224, for example, by calculating a sum of absolute differences (SAD) between pixel values in a neighborhood of pixels in the downscaled image 212 and a corresponding neighborhood of pixels in the downscaled image 214, for each pixel in the downscaled images 212 and 214.
- Each SAD value may be representative of a relative depth value in a field of view of the downscaled images 212 and 214 and, by extension, the first and second images 202 and 204.
- Among embodiments, other stereo algorithms, processes, or variations thereof may be relied upon by the depth map generator 124.
- For example, the depth map generator 124 may rely upon squared intensity differences, absolute intensity differences, mean absolute difference measures, or other measures of difference between pixel values, without limitation. Additionally, the depth map generator 124 may rely upon any suitable size, shape, or variation of pixel neighborhoods for comparisons between pixels among images. Among embodiments, any suitable stereo correspondence algorithm may be relied upon by the depth map generator 124 to generate a depth map including a mapping among relative depth values between images. A block-matching sketch follows.
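A hedged sketch of the SAD comparison: for each pixel, a small neighborhood is compared against neighborhoods shifted across a horizontal search range, and the best-matching shift is kept as the relative depth value. The window size and search range are assumptions, and the unoptimized loops are for clarity only.

```python
import numpy as np

def sad_depth_map(left: np.ndarray, right: np.ndarray,
                  window: int = 5, max_disp: int = 16) -> np.ndarray:
    """Per-pixel disparity (relative depth) for two single-channel images."""
    h, w = left.shape
    half = window // 2
    left, right = left.astype(np.float32), right.astype(np.float32)
    depth = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            sads = [np.abs(patch - right[y - half:y + half + 1,
                                         x - d - half:x - d + half + 1]).sum()
                    for d in range(max_disp)]
            depth[y, x] = int(np.argmin(sads))  # best-matching shift ~ depth
    return depth
```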
- After the depth map 224 is generated, the smoother 128 smooths the relative depth values of the depth map 224 using the edge map 222.
- Particularly, the smoother 128 filters columns (i.e., in a first direction) of depth values of the depth map 224 between a first pair of edges in the edge map 222, and further filters rows (i.e., in a second direction) of depth values between a second pair of edges in the edge map 222.
- The process may proceed iteratively, alternating between filtering columns and rows, until a suitable level of smoothing has been achieved.
- FIG. 5 illustrates an example process of smoothing performed by the smoother 128 of FIG. 1A according to an example embodiment.
- In FIG. 5, the depth map 500 is smoothed or filtered along columns (i.e., in a first direction Y) of depth values and between pairs of edges, and the depth map 502 is smoothed or filtered along rows (i.e., in a second direction X) of depth values and between pairs of edges.
- The depth map 500 is representative, for example, of depth values after a first pass of smoothing depths along columns, using the raw depth map 224 as a basis for depth values and the edge map 222 as a basis for edges.
- The depth map 502 is representative of smoothed depth values after a second pass of smoothing depths along rows, using the depth map 500 as a starting basis for depth values.
- In the first pass, the smoother 128 scans along columns of the depth map 500, from right to left of the map, for example.
- The columns may be scanned according to a column-wise pixel-by-pixel shift of depth values in the map.
- For each column, such as the column 510, edges which intersect the column are identified, and the depth values within or between adjacent pairs of intersecting edges are filtered.
- For example, along the column 510, a pair of adjacent edges 512 and 514 is identified by the smoother 128, as is the pair of adjacent edges 516 and 518.
- Between each pair of edges, the smoother 128 filters the depth values to provide a smoothed range of depth values. As illustrated in FIG. 5, smoothing or filtering depth values between pairs of edges is performed by the smoother 128 along the column 510, on a per edge-pair basis. In this way, raw depth values in the raw depth map 224 (FIG. 4) are smoothed or filtered with reference to the edges in the edge map 222 (FIG. 3), and depth values are generally extended and smoothed with a certain level of consistency among edges.
- In the second pass, the smoother 128 scans along rows of the depth map 502, from top to bottom of the map, for example.
- The rows may be scanned according to a row-wise pixel-by-pixel shift of depth values in the map.
- For each row, such as the row 520, edges which intersect the row are identified, and the depth values within or between adjacent pairs of intersecting edges are filtered.
- For example, along the row 520, a pair of adjacent edges 522 and 524 is identified by the smoother 128, as is the pair of adjacent edges 526 and 528.
- Between each pair of edges, the smoother 128 filters the depth values to provide a smoothed range of depth values. As illustrated in FIG. 5, smoothing or filtering depth values between pairs of edges is performed by the smoother 128 along the row 520, on a per edge-pair basis. In this way, depth values are generally extended and smoothed with a certain level of consistency among edges. It should be appreciated that several pairs of intersecting edges may be identified along each column 510 and row 520 in a depth map, and depth values may be smoothed between each of the pairs of edges, as in the sketch below.
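One column-wise smoothing pass might be sketched as follows, replacing raw depth values between each adjacent pair of edge crossings with their mean. The mean is an illustrative filter choice, and the row-wise pass is the same loop transposed.

```python
import numpy as np

def smooth_columns(depth: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Filter depth values along each column, segment by segment between edges."""
    out = depth.astype(np.float32).copy()
    h, w = depth.shape
    for x in range(w):
        crossings = np.flatnonzero(edges[:, x])         # rows where edges intersect
        bounds = np.concatenate(([0], crossings, [h]))  # segment boundaries
        for a, b in zip(bounds[:-1], bounds[1:]):
            if b - a > 1:
                out[a:b, x] = out[a:b, x].mean()        # one smooth run per edge pair
    return out

# The row-wise pass is the transposed call, and the two passes may be iterated:
#   depth = smooth_columns(depth, edges)
#   depth = smooth_columns(depth.T, edges.T).T
```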
- In this manner, the smoother 128 smooths the depth values in the depth map 224 to provide a smoothed depth map 226.
- The smoother 128 provides the smoothed depth map 226 to the scaler 120, which upscales it and provides an upscaled depth map 228 to the focuser 130.
- The upscaled depth map 228 includes a density of depth values which corresponds to the pixel density of the first and/or second images 202 and 204.
- Thus, the focuser 130 may focus and/or re-focus one or more pixels in the first image 202, for example, with reference to corresponding values of depth in the depth map 228.
- As illustrated in FIG. 2B, the focuser 130 receives the upscaled depth map 228, the first image 202, and a point for focus 140.
- The focuser 130 selectively focuses the first image 202 according to the point for focus 140, by blending portions of a blurred replica of the first image 202 with the first image 202.
- The blending is performed by the focuser 130 with reference to the relative depth values of the upscaled depth map 228 as a measure for blending.
- In turn, the focuser 130 provides an output image based on a blend of the first image 202 and the blurred replica of the first image 202.
- The point for focus 140 may be received by the device 160 (FIG. 1B) using any suitable input means, such as a capacitive touch screen, mouse, keyboard, electronic pen, etc. That is, after capture of the first and second images 202 and 204 by the device 160, a user of the device 160 may select a point on the first image 202 (or the second image 204) to be selectively focused.
- In one embodiment, the first image 202 may be captured by the first sensor 150 according to a relatively large depth of field. In other words, the first image 202 may be substantially focused throughout its field of view, for example, based on a sufficiently small optical aperture, etc.
- In turn, the focuser 130 may selectively focus areas of the first image 202 based on depth, by simulating a focal point and an associated in-focus depth of field of the first image 202, along with other depths of field which are out of focus (i.e., blurred).
- Based on the point for focus 140, the focuser 130 identifies a corresponding depth value (i.e., a selected depth value for focus) in the upscaled depth map 228, and evaluates a relative difference in depth between the selected depth value and each other depth value in the upscaled depth map 228.
- In other words, the focuser 130 evaluates the depth values in the depth map 228 according to relative differences from the point for focus 140, and blends the first image 202 and the blurred replica of the first image 202 based on those relative differences in depth.
- In one embodiment, the blurred replica of the first image 202 may be generated by the image processor 132 using a Gaussian blur or similar filter, and the focuser 130 blends the first image 202 and the blurred replica according to an alpha blend.
- For a point having little or no difference in depth from the point for focus 140, the focuser 130 may form a composite of the first image 202 and the blurred replica in which the first image 202 comprises all or substantially all of the information in the composite and the blurred replica comprises little or none.
- Conversely, for a point having a large difference in depth from the point for focus 140, the focuser 130 may form another composite in which the first image 202 comprises little or no information in the composite and the blurred replica comprises all or substantially all of it.
- The focuser 130 may evaluate several points among the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite image for each point based on the relative differences in depth, as described above. The composites for the various points may then be formed or joined together by the focuser 130 into an output image.
- In some embodiments, the focuser 130 may evaluate individual pixels in the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite image for each pixel (or surrounding each pixel) based on the relative differences in depth embodied in the depth values of the depth map 228.
- Thus, the output image of the focuser 130 includes a region of focus identified by the point for focus 140, and a blend of regions of progressively less focus (i.e., more blur) based on increasing difference in depth as compared to the point for focus 140.
- In this manner, the focuser 130 simulates a focal point and an associated in-focus depth of field in the output image 260A, along with other depths of field which are out of focus (i.e., blurred).
- The output image 260A also includes several graduated ranges of blur or blurriness.
- In effect, the focuser 130 simulates the effect of capturing the image 202 using a relatively larger optical aperture, and the point of focus may be altered after the image 202 is captured.
- Several points for focus 140 may be received by the focuser 130 over time, and the focuser 130 may generate a respective output image 260A for each point for focus 140.
- In other embodiments, the focuser 130 selectively focuses regions of the first image 202 without using the blurred replica.
- For example, the focuser 130 may determine a point spread per pixel for pixels of the first image 202 to generate an output image. For pixels with little or no difference in depth relative to the point for focus 140, the focuser 130 may form the output image 260 using the pixel values in the first image 202 without (or with little) change.
- For pixels with a larger difference in depth, the focuser 130 may determine a blend of the value of the pixel and its surrounding pixel values based on a measure of the difference. In this case, rather than relying upon a predetermined blurred replica, the focuser 130 may determine a blend for each pixel, individually, according to the values of neighboring pixels. A blending sketch for the blurred-replica approach follows.
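In the blurred-replica approach, the alpha value may grow as a pixel's depth departs from the depth at the point for focus 140, so regions farther in depth receive progressively more of the blurred replica. The linear alpha ramp below is an assumption; the embodiments describe graduated blur without fixing a particular curve.

```python
import numpy as np

def refocus(image: np.ndarray, blurred: np.ndarray, depth: np.ndarray,
            focus_xy: tuple, ramp: float = 8.0) -> np.ndarray:
    """Alpha-blend sharp and blurred images per pixel by depth difference."""
    fx, fy = focus_xy
    focus_depth = depth[fy, fx]                # selected depth value for focus
    alpha = np.clip(np.abs(depth - focus_depth) / ramp, 0.0, 1.0)
    if image.ndim == 3:                        # broadcast over color channels
        alpha = alpha[..., None]
    out = (1.0 - alpha) * image.astype(np.float32) \
        + alpha * blurred.astype(np.float32)
    return out.astype(image.dtype)
```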
- While the processes for focusing and re-focusing images may benefit from the calibration processes performed by the calibrator 122, other image processing techniques may benefit from the calibration processes as well.
- For example, depth maps may be relied upon for object extraction, scene understanding, or gesture recognition, and the calibrator 122 may accordingly improve object extraction, scene understanding, or gesture recognition image processes.
- Turning to HDR image processing, HDR images are created by capturing both a short exposure image and a normal or long exposure image of a certain field of view.
- The short exposure image provides additional detail for regions that would otherwise be saturated in the normal or long exposure image.
- The short and normal exposure images may be captured in various ways. For example, multiple images may be captured for the same field of view, successively, over a short period of time and at different levels of exposure. This approach is commonly used in video capture, especially if a steady and relatively high-rate flow of frames is being captured and any object motion is acceptably low. For still images, however, object motion artifacts are generally unacceptable for a multiple, successive capture approach.
- An alternative HDR image processing approach alternates the exposure lengths of certain pixels of an image sensor. This minimizes problems associated with object motion, but injects interpolation artifacts due to the interpolation needed to reconstruct a full resolution image for both exposures.
- Still another approach adds white or clear pixels to the Bayer pattern of an image sensor, and is commonly known as RGBC or RGBW.
- The white or clear pixels may be embodied as low light pixels, but the approach may have problems with interpolation artifacts due to the variation on the Bayer pattern required for the white or clear pixels.
- In the context of the system 10, the luminance-only data provided from the second sensor 152 may provide additional information for HDR detail enhancement.
- The exposure settings and characteristics of the secondary luminance image sensor may be set and determined separately from those of the main color image sensor by the calibrator 122. This is achieved without adversely affecting the main sensor by the addition of white or clear pixels, for example. A sketch of the enhancement follows.
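A hedged sketch of such detail enhancement: where the main color image is near saturation, highlight luminance is recovered from a shorter-exposure frame captured by the second sensor 152. The saturation threshold, the crude luminance estimate, and the assumption that both frames are registered at the same resolution are all illustrative.

```python
import numpy as np

def hdr_enhance(color: np.ndarray, short_luma: np.ndarray,
                exposure_ratio: float, sat_level: float = 250.0) -> np.ndarray:
    """Recover highlight detail in saturated regions from a short exposure."""
    luma = color.astype(np.float32).mean(axis=2)       # crude luminance estimate
    mask = luma >= sat_level                           # saturated pixels
    recovered = short_luma.astype(np.float32) * exposure_ratio
    gain = np.ones_like(luma)
    gain[mask] = recovered[mask] / np.maximum(luma[mask], 1.0)
    return color.astype(np.float32) * gain[..., None]  # extended-range result
```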
- Before turning to FIG. 6, it is noted that the embodiments described herein may be practiced using an alternative order of the steps illustrated there. That is, the process flow illustrated in FIG. 6 is provided as an example only, and the embodiments may be practiced using a process flow that differs from the one illustrated. Not all steps are required in every embodiment: one or more of the steps may be omitted or replaced, performed in different orders or in parallel with one another, and certain additional steps may be performed, without departing from the scope and spirit of the embodiments. Finally, although the process 600 of FIG. 6 is generally described in connection with the system 10 of FIG. 1A and/or the device 160 of FIG. 1B, the process 600 may be performed by other systems and/or devices.
- FIG. 6 illustrates a flow diagram for a process 600 of calibration of a mix of image sensors in the system 10 of FIG. 1A according to an example embodiment.
- At reference numeral 602, the process 600 includes identifying a characteristic for calibration associated with at least one of a first sensor or a second sensor.
- As one example, a pixel density of the second sensor may be a fraction of the pixel density of the first sensor.
- The identifying at reference numeral 602 may be performed during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the characteristics for calibration may be identified during post-assembly calibration processes by the calibrator 122.
- The characteristic for calibration may be related to operating characteristics of one or more of the first and second sensors 150 and 152.
- Differences in operating characteristics between the first and second sensors 150 and 152 may be quantified as calibration data and stored in the memory 110 as the calibration characteristic data 114.
- As noted above, the differences may be due to different pixel densities of the first and second sensors 150 and 152, different manufacturing processes used to form the sensors, different pixel array patterns or filters (e.g., Bayer, EXR, X-Trans, etc.), or different sensitivities to light, temperature, operating frequency, operating voltage, or other factors, without limitation.
- At reference numeral 604, the process 600 includes adapting an operating characteristic of at least one of the first sensor or the second sensor to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, using the characteristic for calibration identified at reference numeral 602.
- Here, the calibrator 122 may adapt operating characteristics or parameters of one or more of the first sensor 150 and/or the second sensor 152, as described herein.
- At reference numeral 606, the process 600 includes capturing a first image with the first sensor and capturing a second image with the second sensor.
- With reference to FIG. 1A, the first image may be captured by the first sensor 150, and the second image may be captured by the second image sensor 152.
- The first and second images may be captured at substantially the same time or at different times, among embodiments.
- The first sensor may be embodied as a multi-spectral component (e.g., color) sensor, and the second sensor may be embodied as a limited-spectral (e.g., luminance only) component sensor. Alternatively, the first and second sensors may be embodied as sensors having similar or different pixel densities or other characteristics.
- At reference numeral 608, the process 600 includes adjusting an attribute of one or more of the first or second images to substantially address at least one difference between them.
- For example, reference numeral 608 may include adjusting an attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the first image, using the characteristic for calibration identified at reference numeral 602.
- Reference numeral 608 may further include aligning the second image with the first image to substantially address a difference in alignment between the first sensor and the second sensor.
- Additionally, reference numeral 608 may include normalizing values among the first image and the second image to substantially address a difference in sensitivity between the first sensor and the second sensor.
- Here, the calibrator 122 may adjust one or more attributes of one or more of the first or second images 202 or 204 (FIG. 1B) to substantially address a difference between attributes of the images.
- For example, based on a difference in sensitivity, the calibrator 122 may adjust the exposure of one or more of the first image 202 and the second image 204 to address the difference in exposure. Similarly, based on a difference in noise, the calibrator 122 may filter one or more of the first image 202 and the second image 204 to address a difference in the amount of noise among the images.
- Generally, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation.
- A measure of the differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) may be stored as the calibration characteristic data 114, which the calibrator 122 may reference when adjusting attributes of the first and/or second images 202 and/or 204 at reference numeral 608.
- At reference numeral 610, the process 600 may include scaling one or more of the first image or the second image into scaled image copies.
- For example, the process 600 may include upscaling the first image to an upscaled first image and/or upscaling the second image to an upscaled second image. Alternatively, the process 600 may include downscaling the first image to a downscaled first image and/or downscaling the second image to a downscaled second image.
- In some embodiments, the scaling at reference numeral 610 may be omitted, for example, depending upon the application for use of the first and/or second images and the pixel densities of the sensors used to capture them.
- At reference numeral 612, the process 600 includes adjusting an attribute of one or more of the scaled (i.e., upscaled or downscaled) first or second images to substantially address at least one difference between them.
- This process may be similar to that performed at reference numeral 608, although performed on scaled images.
- As noted above, the process 600 may make adjustments to downscaled or upscaled images at various stages. For example, adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
- The processes performed at reference numerals 602, 604, 606, 608, 610, and 612 may be relied upon to adapt and/or adjust one or more images or pairs of images, so that other image processes, such as the processes at reference numerals 614, 616, and 618, may be performed with better results.
- The processes at reference numerals 614, 616, and 618 are described by way of example (and may be omitted or replaced), as other downstream image processing techniques may follow the image calibration according to the embodiments described herein.
- At reference numeral 614, the process 600 may include generating one or more edge or depth maps. For example, the generation of edge or depth maps may be performed by the edge map generator 126 and/or the depth map generator 124, as described above with reference to FIG. 2B.
- At reference numeral 616, the process 600 may include receiving a point for focus and focusing or re-focusing one or more images. Again, the focusing or re-focusing of images may be performed by the focuser 130, as described above with reference to FIG. 2B.
- At reference numeral 618, the process 600 may include extracting one or more objects, recognizing one or more gestures, or performing other image processing techniques. These techniques may be performed with reference to the edge or depth maps generated at reference numeral 614, for example.
- Because of the earlier calibration, the accuracy of the edge or depth maps may be improved, and the image processing techniques at reference numeral 618 (and reference numeral 616) may also be improved.
- In certain aspects, the process 600 may include generating an HDR image. The generation of an HDR image may occur before any image scaling occurs at reference numeral 610, and may be performed according to the embodiments described herein.
- For example, the generation of an HDR image may include combining luminance values of a second image with full color values of a first image.
- Overall, the process 600 may be relied upon for calibration of images captured from a plurality of image sensors, which may include a heterogeneous mix of image sensors.
- The calibration may assist with various image processing techniques, such as focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc. An end-to-end sketch of the process follows.
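Tying the steps together, the following drives one permitted ordering of the process 600 using the illustrative helpers sketched earlier in this description (normalize_exposure, downscale, edge_map, sad_depth_map, smooth_columns, box_denoise, refocus). It assumes two registered, even-dimensioned, single-channel images and sketches data flow only, not a definitive implementation.

```python
import numpy as np

def process_600(img1: np.ndarray, img2: np.ndarray, calib, focus_xy: tuple):
    """One possible pass through reference numerals 604-616 of FIG. 6."""
    img2 = normalize_exposure(img2, calib.sensitivity_ratio())  # 604/608
    small1 = downscale(img1, 2)                                 # 610
    small2 = downscale(img2, 2)
    edges = edge_map(small1)                                    # 614: edge map
    depth = sad_depth_map(small1, small2)                       # 614: raw depth map
    depth = smooth_columns(depth, edges)                        # column-wise pass
    depth = smooth_columns(depth.T, edges.T).T                  # row-wise pass
    depth_up = np.kron(depth, np.ones((2, 2)))                  # upscale to img1 size
    blurred = box_denoise(img1, k=7)                            # stand-in blur replica
    return refocus(img1, blurred, depth_up, focus_xy)           # 616: re-focus
```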
- FIG. 7 illustrates an example schematic block diagram of a computing architecture 700 that may be employed as the processing environment 100 of the system 10 of FIG. 1A , according to various embodiments described herein.
- The computing architecture 700 may be embodied, in part, using one or more elements of a mixed general and/or specific purpose computer.
- The computing architecture 700 includes a processor 710, a Random Access Memory (RAM) 720, a Read Only Memory (ROM) 730, a memory device 740, and an Input/Output (I/O) interface 750.
- The elements of the computing architecture 700 are communicatively coupled via one or more local interfaces 702.
- The elements of the computing architecture 700 are not intended to be limiting in nature, as the architecture may omit elements or include additional or alternative elements.
- The processor 710 may include or be embodied as a general purpose arithmetic processor, a state machine, or an ASIC, for example.
- The processing environment 100 of FIGS. 1A and 1B may be implemented, at least in part, using a computing architecture 700 including the processor 710.
- The processor 710 may include one or more circuits, one or more microprocessors, ASICs, dedicated hardware, or any combination thereof.
- In certain aspects, the processor 710 is configured to execute one or more software modules which may be stored, for example, on the memory device 740.
- The software modules may configure the processor 710 to perform the tasks undertaken by the elements of the processing environment 100 of the system 10 of FIG. 1A, for example.
- In certain embodiments, the process 600 described in connection with FIG. 6 may be implemented or executed by the processor 710 according to instructions stored on the memory device 740.
- the RAM and ROM 720 and 730 may include or be embodied as any random access and read only memory devices that store computer-readable instructions to be executed by the processor 710 .
- the memory device 740 stores computer-readable instructions thereon that, when executed by the processor 710 , direct the processor 710 to execute various aspects of the embodiments described herein.
- the memory device 740 includes one or more non-transitory memory devices, such as an optical disc, a magnetic disc, a semiconductor memory (i.e., a semiconductor, floating gate, or similar flash based memory), a magnetic tape memory, a removable memory, combinations thereof, or any other known non-transitory memory device or means for storing computer-readable instructions.
- the I/O interface 750 includes device input and output interfaces, such as keyboard, pointing device, display, communication, and/or other interfaces.
- the one or more local interfaces 702 electrically and communicatively couples the processor 710 , the RAM 720 , the ROM 730 , the memory device 740 , and the I/O interface 750 , so that data and instructions may be communicated among them.
- the processor 710 is configured to retrieve computer-readable instructions and data stored on the memory device 740 , the RAM 720 , the ROM 730 , and/or other storage means, and copy the computer-readable instructions to the RAM 720 or the ROM 730 for execution, for example.
- the processor 710 is further configured to execute the computer-readable instructions to implement various aspects and features of the embodiments described herein.
- the processor 710 may be adapted or configured to execute the process 600 described above in connection with FIG. 6 .
- the processor 710 may include internal memory and registers for maintenance of data being processed.
- each block may represent one or a combination of steps or executions in a process.
- each block may represent a module, segment, or portion of code that includes program instructions to implement the specified logical function(s).
- the program instructions may be embodied in the form of source code that includes human-readable statements written in a programming language or machine code that includes numerical instructions recognizable by a suitable execution system such as the processor 710 .
- the machine code may be converted from the source code, etc.
- each block may represent, or be connected with, a circuit or a number of interconnected circuits to implement a certain logical function or process step.
Description
- This application claims the benefit of U.S. Provisional Application No. 61/891,648, filed Oct. 16, 2013, and claims the benefit of U.S. Provisional Application No. 61/891,631, filed Oct. 16, 2013, the entire contents of each of which are hereby incorporated herein by reference.
- This application also makes reference to U.S. patent application Ser. No. ______ (Attorney Docket #50229-5030), titled “Depth Map Generation and Post-Capture Focusing,” filed on even date herewith, the entire contents of which are hereby incorporated herein by reference.
- For a more complete understanding of the embodiments and the advantages thereof, reference is now made to the following description, in conjunction with the accompanying figures briefly described as follows:
- FIG. 1A illustrates a system including a heterogeneous mix of image sensors according to an example embodiment.
- FIG. 1B illustrates a device for image capture and calibration using the system of FIG. 1A according to an example embodiment.
- FIG. 2A illustrates a process flow for calibration of the heterogeneous mix of image sensors in the system of FIG. 1A according to an example embodiment.
- FIG. 2B illustrates a process flow for depth map generation using the system of FIG. 1A, after calibration of the heterogeneous mix of image sensors, according to an example embodiment.
- FIG. 3 illustrates an example edge map generated by the edge map generator of FIG. 1A according to an example embodiment.
- FIG. 4 illustrates an example depth map generated by the depth map generator of FIG. 1A according to an example embodiment.
- FIG. 5 illustrates an example process of smoothing performed by the smoother of FIG. 1A according to an example embodiment.
- FIG. 6 illustrates a flow diagram for a process of calibration of a mix of image sensors in the system of FIG. 1A according to an example embodiment.
- FIG. 7 illustrates an example schematic block diagram of a computing environment which may embody one or more of the system elements of FIG. 1A according to various embodiments.
- The drawings are provided by way of example and should not be considered limiting of the scope of the embodiments described herein, as other equally effective embodiments are within the scope and spirit of this disclosure. The elements and features shown in the drawings are not necessarily drawn to scale, emphasis instead being placed upon clearly illustrating the principles of the embodiments. Additionally, certain dimensions or positions of elements and features may be exaggerated to help visually convey certain principles. In the drawings, similar reference numerals among the figures generally designate like or corresponding, but not necessarily the same, elements.
- In the following paragraphs, the embodiments are described in further detail by way of example with reference to the attached drawings. In the description, well known components, methods, and/or processing techniques are omitted or briefly described so as not to obscure the embodiments.
- Certain cameras, such as light-field or plenoptic cameras, rely upon a lens array over an image sensor and/or an array of image sensors to capture directional projection of light. Among other drawbacks, these approaches use relatively large and specialized image sensors which are generally unsuitable for other applications (e.g., video capture, video conferencing, etc.), use only a fraction of the information captured, and rely upon high levels of processing to deliver even a viewfinder image, for example. Further, some of these light-field or plenoptic camera devices require a relatively large height for specialized lens and/or sensor arrays and, thus, do not present practical solutions for use in cellular telephones.
- In this context, the embodiments described herein include a heterogeneous mix of sensors which may be relied upon to achieve, among other processing results, image processing results that are similar, at least in some aspects, to those achieved by light-field or plenoptic imaging devices. In various embodiments, the mix of sensors may be used for focusing and re-focusing images after the images are captured. In other embodiments, the mix of sensors may be used for object extraction, scene understanding, gesture recognition, etc. In other aspects, a mix of image sensors may be used for high dynamic range (HDR) image processing. Further, according to the embodiments described herein, the mix of image sensors may be calibrated for focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
- In one embodiment, the heterogeneous mix of sensors includes a main color image sensor having a pixel density ranging from 3 to 20 Megapixels, for example, with color pixels arranged in a Bayer pattern, and a secondary luminance image sensor having a relatively lower pixel density. It should be appreciated, however, that the system is generally agnostic to the resolution and format of the main and secondary sensors, which may be embodied as sensors of any suitable type, pixel resolution, process, structure, or arrangement (e.g., infra-red, charge-coupled device (CCD), 3CCD, Foveon X3, complementary metal-oxide-semiconductor (CMOS), red-green-blue-clear (RGBC), etc.).
- Turning now to the drawings, a description of exemplary embodiments of a system and its components are provided, followed by a discussion of the operation of the same.
- FIG. 1A illustrates a system 10 including a heterogeneous mix of image sensors according to an example embodiment. The system 10 includes a processing environment 100, a memory 110, and first and second sensors 150 and 152. The memory 110 includes memory areas for image data 112 and calibration characteristic data 114. The processing environment 100 may be embodied as one or more processors, processing circuits, and/or combinations thereof. Generally, the processing environment 100 includes embedded (i.e., application-specific) and/or general purpose processing circuitry and/or software structures that process data, such as image data captured by the first and second sensors 150 and 152. Further aspects of the processing environment 100 are described below with reference to FIG. 7.
- In the example illustrated in FIG. 1A, the processing environment 100 generally includes elements for focusing and re-focusing of images captured by the first and second sensors 150 and 152. Among other elements, the processing environment 100 includes a scaler 120, a calibrator 122, a depth map generator 124, an edge map generator 126, a smoother 128, a focuser 130, and an image processor 132. Each of these elements of the processing environment 100, and the respective operation of each, is described in further detail below.
- Here, it should be appreciated that the elements of the processing environment 100 may vary among embodiments, particularly depending upon the application for use of the heterogeneous mix of image sensors 150 and 152. That is, depending upon the images captured by the first and second sensors 150 and 152 and the manner in which those images are processed, the processing environment 100 may include additional or alternative processing elements or modules. Regardless of the application, calibration may be relied upon for the first and second sensors 150 and 152 and/or for the images captured by the first and second sensors 150 and 152, as described herein.
- The first and second sensors 150 and 152 may be embodied as any suitable sensors for use in connection with the system 10. For example, in image processing applications, the first and second sensors 150 and 152 may be embodied as image sensors. In one embodiment, the first image sensor 150 may be embodied as a color image sensor having a first pixel density, and the second image sensor 152 may be embodied as a luminance image sensor having a relatively lower pixel density. It should be appreciated, however, that the system 10 is generally agnostic to the resolution and format of the first and second sensors 150 and 152.
- The memory 110 may be embodied as any suitable memory that stores data provided by the first and second sensors 150 and 152. Among other data, the memory 110 may store image and image-related data for manipulation and processing by the processing environment 100. As noted above, the memory 110 includes memory areas for image data 112 and calibration characteristic data 114. Various aspects of processing and/or manipulation of the image data 112 by the processing environment 100, based, for example, upon the calibration characteristic data 114, are described in further detail below.
- FIG. 1B illustrates a device 160 for image capture and calibration using the system 10 of FIG. 1A according to an example embodiment. The device 160 includes the processing environment 100, the memory 110, and the first and second sensors 150 and 152 of FIG. 1A, among other elements. The device 160 may be embodied as a cellular telephone, tablet computing device, laptop computer, desktop computer, television, set-top box, personal media player, appliance, etc., without limitation. In other embodiments, the device 160 may be embodied as a pair of glasses, a watch, wristband, or other device which may be worn or attached to clothing. If embodied as a pair of glasses, then the sensors 150 and 152 of the device 160 may be positioned at opposite corners of rims or end-pieces of the pair of glasses.
- As illustrated in FIG. 1B, the first and second sensors 150 and 152 are positioned apart from each other on the device 160. Further, the relative positions (e.g., right versus left, top versus bottom, etc.) of the first and second sensors 150 and 152 may vary among embodiments. In addition to the first and second sensors 150 and 152, the device 160 may include one or more additional elements for image capture, such as lenses, flash devices, focusing mechanisms, etc., although these elements may not be relied upon in certain embodiments and may be omitted.
- As described herein, in one embodiment, differences in operating characteristics between the first and second sensors 150 and 152 may be identified during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the differences in operating characteristics may be identified during post-assembly calibration processes by the calibrator 122. These differences may be quantified as calibration data which is representative of the operating characteristics of the first and second sensors 150 and 152, and stored in the memory 110 as the calibration characteristic data 114.
- Among other operational aspects, the device 160 is configured to capture images using the first and second sensors 150 and 152, and to calibrate the first and second sensors 150 and 152 and/or the images captured by them, as described below.
- FIG. 2A illustrates a process flow for calibration of the heterogeneous mix of image sensors 150 and 152 in the system 10 of FIG. 1A according to an example embodiment. As illustrated in FIG. 2A, the first sensor 150 generates a first image 202, and the second sensor 152 generates a second image 204. The first and second images 202 and 204 may be captured at substantially the same time or at different times among embodiments. After capture by the first and second sensors 150 and 152, the first and second images 202 and 204 may be stored in the memory 110 (FIG. 1A).
- Here, it is noted that, before the first and second sensors 150 and 152 capture the first and second images 202 and 204, the calibrator 122 may adapt at least one operating parameter of the first sensor 150 or the second sensor 152 to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, without limitation. More particularly, the calibrator 122 may reference the calibration characteristic data 114 in the memory 110 to identify any adjustments to the operating parameters of the first and second sensors 150 and 152, and apply those adjustments before the images are captured.
- In this context, it should be appreciated that, to the extent that the characteristics of the first and second sensors 150 and 152 differ, attributes of the first and second images 202 and 204 may differ as well. To the extent that a difference in characteristics is known, the calibrator 122 may adjust one or more of the operating parameters of the first and second sensors 150 and 152 (e.g., operating voltages, timings, temperatures, exposure timings, etc.) to address the difference or differences. In other words, the calibrator 122 may seek to align or normalize aspects of the operating characteristics of the first and second sensors 150 and 152, so that the images captured by the system 10 may be aligned, as necessary, for suitable performance and results in image processing.
- As a further example, based on the respective characteristics of the first sensor 150 and the second sensor 152, the first sensor 150 may produce images including relatively more noise than the images produced by the second sensor 152. This difference in the generation of noise may be embodied in values of the calibration characteristic data 114, for example, in one or more variables, coefficients, or other data metrics. The calibrator 122 may refer to the calibration characteristic data 114 and, based on the calibration characteristic data 114, adjust operating parameters of the first and second sensors 150 and 152 to address the difference in noise.
- Similarly, the first sensor 150 may produce images including a first dark current characteristic, and the second sensor 152 may produce images including a second dark current characteristic. The difference between these dark current characteristics may be embodied in values of the calibration characteristic data 114, and the calibrator 122 may seek to adjust operating parameters of the first and second sensors 150 and 152 to address the difference. Likewise, the calibrator 122 may seek to normalize or address other differences in operating characteristics between the first and second sensors 150 and 152.
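- As a concrete illustration of this kind of parameter normalization, the sketch below derives per-sensor capture settings from stored calibration values. It is a minimal sketch only: the field names, values, and the linear gain model are assumptions for illustration, not data or logic taken from this disclosure.

```python
# Hypothetical calibration characteristic data for two sensors; the field
# names and values are illustrative placeholders, not values from the patent.
CALIBRATION_DATA = {
    "sensor_150": {"sensitivity_gain": 1.00, "black_level": 64},
    "sensor_152": {"sensitivity_gain": 1.18, "black_level": 60},
}

def normalized_exposure_us(nominal_us: float, sensor_id: str) -> float:
    """Scale a nominal exposure time so both sensors produce comparable
    output levels, assuming response is roughly linear in exposure."""
    gain = CALIBRATION_DATA[sensor_id]["sensitivity_gain"]
    return nominal_us / gain

# Program both sensors for a nominal 10 ms exposure.
for sensor_id in CALIBRATION_DATA:
    print(sensor_id, round(normalized_exposure_us(10_000.0, sensor_id), 1))
```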
- The differences in operating characteristics between the first and second sensors 150 and 152 may stem from differences in the types, structures, or designs of the first and second sensors 150 and 152, or from variations in the manufacture of the first and second sensors 150 and 152, for example.
- As noted above, differences in operating characteristics between the first and second sensors 150 and 152 may be identified during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the differences in operating characteristics may be identified during post-assembly calibration processes. These differences may be quantified as calibration data representative of the operating characteristics of the first and second sensors 150 and 152, and stored in the memory 110 as the calibration characteristic data 114.
- In addition to adapting one or more of the operating parameters of the first and second sensors 150 and 152, the calibrator 122 may adjust one or more attributes of one or more of the first or second images 202 or 204, to substantially address a difference between attributes of the first and second images 202 and 204. For example, based on a difference in exposure between the first sensor 150 and the second sensor 152, the calibrator 122 may adjust the exposure of one or more of the first image 202 and the second image 204, to address the difference in exposure. Similarly, based on a difference in noise, the calibrator 122 may filter one or more of the first image 202 and the second image 204, to address a difference in an amount of noise among the images.
- In various embodiments, to the extent possible, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. Again, a measure of differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) of the first and second images 202 and 204 may be embodied in the calibration characteristic data 114, which may be referenced by the calibrator 122 when adjusting attributes of the first and/or second images 202 and/or 204.
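- A minimal sketch of this kind of post-capture normalization follows. The dark-level subtraction and mean-ratio exposure match are assumed stand-ins for whatever corrections the stored calibration data would actually drive.

```python
import numpy as np

def normalize_image_pair(img_a: np.ndarray, img_b: np.ndarray,
                         dark_a: float, dark_b: float) -> tuple[np.ndarray, np.ndarray]:
    """Subtract each sensor's dark-current offset, then equalize exposure.

    dark_a and dark_b stand in for stored dark-current responses; matching
    mean brightness is one simple way to address an exposure difference.
    """
    a = img_a.astype(np.float32) - dark_a
    b = img_b.astype(np.float32) - dark_b
    exposure_gain = a.mean() / max(b.mean(), 1e-6)
    return a, b * exposure_gain
```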
- In one embodiment, as further illustrated in FIG. 2A, the first and second images 202 and 204 are provided to the scaler 120. Generally, the scaler 120 downscales and/or upscales images in pixel density. It is noted that, in certain embodiments, the scaler 120 may be omitted from the process flow of FIG. 2A for one or more of the first and second images 202 and 204. The scaler 120 is generally relied upon, for example, to reduce the pixel processing loads of other elements in the system 10, or to align pixel densities among the first and second images 202 and 204 (e.g., if the first and second sensors 150 and 152 capture images at different pixel densities). The scaler 120 may be embodied according to nearest-neighbor interpolation, bi-linear interpolation, bi-cubic interpolation, supersampling, and/or other suitable interpolation techniques, or combinations thereof, without limitation.
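- For instance, a box-average downscaler, one of the simpler options alongside the interpolation techniques named above, can be sketched as follows (assuming an H x W x C array; the fixed integer factor is an illustrative simplification):

```python
import numpy as np

def box_downscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Downscale by averaging factor x factor blocks of pixels."""
    h, w, c = image.shape
    h2, w2 = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(image.dtype)
```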
- In some embodiments, after the scaler 120 downscales the first and second images 202 and 204 into first and second downscaled images 212 and 214, the calibrator 122 may adjust one or more attributes of the first and/or second downscaled images 212 and/or 214 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. In other words, it should be appreciated that the calibrator 122 may make adjustments to the first and/or second downscaled images 212 and/or 214 at various stages. For example, the adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
- Generally, the calibrator 122 adapts operating parameters of the first and second sensors 150 and 152 and adjusts attributes of the first and second images 202 and 204 in support of the image processing applications of the system 10 and/or the device 160. For example, as described in the examples below, the images captured by the system 10 and/or the device 160 may be relied upon in focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc. To the extent that these image processing activities rely upon a stereo pair of images, and to the extent that the system 10 and/or the device 160 may benefit from a heterogeneous mix of image sensors (e.g., for cost reduction, processing reduction, parts availability, wider composite sensor range and sensitivity, etc.), the calibrator 122 is configured to adapt and/or adjust certain operating characteristics and attributes into substantial alignment for the benefit of the downstream image processing activities.
- As one example of a downstream image processing activity that may benefit from the operations of the calibrator 122, aspects of depth map generation and focusing and re-focusing are described below with reference to FIG. 2B. FIG. 2B illustrates a process flow for depth map generation using the system of FIG. 1A, after calibration of the heterogeneous mix of image sensors, according to an example embodiment.
- It is noted that, in certain downstream processes, the first image 202 may be compared with the second image 204 according to one or more techniques for image processing. In this context, the first and second images 202 and 204 may embody a stereo pair of images, in which items in the field of view appear shifted between the first and second images 202 and 204 due to the physical separation (FIG. 1B) between the first and second sensors 150 and 152 on the device 160. The amount of this shift, per pixel, is representative of depth, because it is dependent (i.e., changes) upon the relative depths of items within a field of view of the images 202 and 204. For such comparisons, remaining differences between the first and second images 202 and 204 may be addressed by the calibrator 122 of the system 10 before or while processing the images 202 and 204.
- According to various embodiments described herein, the first and second images 202 and 204 may be captured by the first and second image sensors 150 and 152 having different characteristics and, thus, may differ in format and attributes. For example, the first image 202 may include several color components of a color image encoded or defined according to a certain color space (e.g., red, green, blue (RGB); cyan, magenta, yellow, key (CMYK); phase alternating line (PAL); YUV or Y′UV; YCbCr; YPbPr, etc.), and the second image 204 may include a single component of another color space.
- Referring again to FIG. 2B, the first downscaled image 212 is provided to the edge map generator 126. The edge map generator 126 generally generates an edge map by identifying edges in at least one image. In other words, the edge map generator 126 generates an edge map by identifying edges in one or more of the first or second downscaled images 212 and 214. In the example of FIG. 2B, the edge map generator 126 generates the edge map 222 by identifying edges in the first downscaled image 212, although the edge map 222 may alternatively be generated by identifying edges in the second downscaled image 214. It should be appreciated that the performance of the edge map generator 126 may be improved by identifying edges in downscaled, rather than higher pixel density, images. For example, edges in higher density images may span several (e.g., 5, 10, 15, or more) pixels. In contrast, such edges may span relatively fewer pixels in downscaled images. Thus, in certain embodiments, the scaler 120 may be configured to downscale one or more of the first or second images 202 and 204 for processing by the edge map generator 126.
- FIG. 3 illustrates an example edge map 222 generated by the edge map generator 126 of FIG. 1A according to an example embodiment. As illustrated in FIG. 3, the edge map 222 is embodied by data representative of edges. In the context of FIGS. 2B and 3, the edge map 222 is embodied by data representative of edges in the first image 202. In one embodiment, the edge map generator 126 generates the edge map 222 by identifying pixels or pixel areas in the first image 202 where pixel or pixel area brightness changes quickly or encounters a discontinuity (i.e., at "step changes"). Points at which pixel brightness changes quickly are organized into edge segments in the edge map 222 by the edge map generator 126. The changes may be due to changes in surface or material orientation, changes in surface or material properties, or variations in illumination, for example. Data associated with the edge map 222 may be stored by the edge map generator 126 in the memory 110 (FIG. 1A).
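- The disclosure does not pin down a particular edge detector, so the sketch below uses a plain finite-difference gradient test as an assumed stand-in for "brightness changes quickly":

```python
import numpy as np

def make_edge_map(gray: np.ndarray, threshold: float = 20.0) -> np.ndarray:
    """Mark pixels whose brightness changes quickly relative to neighbors."""
    g = gray.astype(np.float32)
    gy, gx = np.gradient(g)              # per-pixel brightness step, each axis
    magnitude = np.hypot(gx, gy)         # gradient magnitude
    return magnitude > threshold         # boolean edge map
```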
- Referring again to FIG. 2B, the first and second downscaled images 212 and 214 are provided to the depth map generator 124. The depth map generator 124 generally generates a depth map including a mapping among relative depth values in a field of view based on a difference between pixels of a first image and pixels of a second image. In the context of FIG. 2B, the depth map generator 124 generates a depth map 224 including a mapping of relative depth values based on differences between pixels of the first downscaled image 212 and pixels of the second downscaled image 214. In this context, it is noted that, in certain embodiments, the depth map generator 124 (and/or the edge map generator 126) may operate using only the luminance component of images. Thus, in certain embodiments, the first sensor 150 may be embodied as a main color image sensor, and the second sensor 152 may be embodied as a secondary luminance-only image sensor. In this case, the secondary luminance image sensor may not need to be at the full resolution of the main color sensor, because no demosaicing interpolation is required for the luminance image sensor (i.e., the luminance image sensor has a higher effective resolution). Thus, as suggested above, downscaling by the scaler 120 may be omitted for the second image 204, for example.
- FIG. 4 illustrates an example depth map 224 generated by the depth map generator 124 of FIG. 1A according to an example embodiment. As illustrated in FIG. 4, the depth map 224 is embodied by data representative of relative depths in a field of view based on differences between pixels of the first downscaled image 212 and pixels of the second downscaled image 214. In FIG. 4, relatively darker areas are closer in depth and relatively lighter areas are further in depth, from the point of view of the first and second image sensors 150 and 152 (FIG. 1B). It should be appreciated that the relatively darker and lighter areas in FIG. 4 are representative of depth values. That is, relatively darker areas are representative of data values (e.g., per pixel data values) associated with less depth, and relatively lighter areas are representative of data values associated with more depth. In the context of FIG. 5, as further described below, the depth map 224 is referred to as a "raw" depth map, because it is representative of unsmoothed or unfiltered depth values. Data associated with the depth map 224 may be stored by the depth map generator 124 in the memory 110 (FIG. 1A).
- The depth map generator 124 may generate the depth map 224, for example, by calculating a sum of absolute differences (SAD) between pixel values in a neighborhood of pixels in the downscaled image 212 and a corresponding neighborhood of pixels in the downscaled image 214, for each pixel in the downscaled images 212 and 214. Here, it is noted that, while the depth map generator 124 may generate the depth map 224 by calculating a sum of absolute differences, other stereo algorithms, processes, or variations thereof may be relied upon by the depth map generator 124. For example, the depth map generator 124 may rely upon squared intensity differences, absolute intensity differences, mean absolute difference measures, or other measures of difference between pixel values, without limitation. Additionally, the depth map generator 124 may rely upon any suitable size, shape, or variation of pixel neighborhoods for comparisons between pixels among images. Among embodiments, any suitable stereo correspondence algorithm may be relied upon by the depth map generator 124 to generate a depth map including a mapping among relative depth values between images.
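- A brute-force version of SAD block matching can be sketched as below. The window size, disparity range, and uniform box filter are assumed choices; a production implementation would be far more optimized.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_depth_map(left: np.ndarray, right: np.ndarray,
                  max_shift: int = 32, window: int = 5) -> np.ndarray:
    """For each pixel, keep the horizontal shift whose neighborhood sum of
    absolute differences (SAD) is smallest; larger shift ~ closer subject."""
    left_f = left.astype(np.float32)
    right_f = right.astype(np.float32)
    best_cost = np.full(left.shape, np.inf, dtype=np.float32)
    shift_map = np.zeros(left.shape, dtype=np.int32)
    for d in range(max_shift):
        shifted = np.roll(right_f, d, axis=1)
        shifted[:, :d] = 0.0                       # discard wrapped-around columns
        # box-filtering |difference| gives a per-pixel (scaled) SAD score
        cost = uniform_filter(np.abs(left_f - shifted), size=window)
        better = cost < best_cost
        best_cost[better] = cost[better]
        shift_map[better] = d
    return shift_map
```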
- Referring again to FIG. 2B, after the edge map generator 126 generates the edge map 222 and the depth map generator 124 generates the depth map 224, the smoother 128 smooths the relative depth values of the depth map 224 using the edge map 222. For example, according to one embodiment, the smoother 128 filters columns (i.e., in a first direction) of depth values of the depth map 224 between a first pair of edges in the edge map 222. The smoother 128 further filters rows (i.e., in a second direction) of depth values of the depth map 224 between a second pair of edges in the edge map 222. The process may alternate iteratively between filtering columns and filtering rows, until a suitable level of smoothing has been achieved.
- FIG. 5 illustrates an example process of smoothing performed by the smoother 128 of FIG. 1A according to an example embodiment. In FIG. 5, the depth map 500 is smoothed or filtered along columns (i.e., in a first direction Y) of depth values and between pairs of edges, and the depth map 502 is smoothed or filtered along rows (i.e., in a second direction X) of depth values and between pairs of edges. With reference to FIGS. 3 and 4, the depth map 500 is representative, for example, of depth values after a first pass of smoothing depths along columns, using the raw depth map 224 as a basis for depth values and the edge map 222 as a basis for edges. The depth map 502 is representative of smoothed depth values after a second pass of smoothing depths along rows, using the depth map 500 as a starting basis for depth values.
- More particularly, in the generation of the depth map 500, the smoother 128 scans along columns of the depth map 500, from right to left of the map, for example. The columns may be scanned according to a column-wise pixel-by-pixel shift of depth values in the map. Along each column, edges which intersect the column are identified, and the depth values within or between adjacent pairs of intersecting edges are filtered. For example, as illustrated in FIG. 5, along the column 510 of depth values, a pair of adjacent intersecting edges is identified, and the depth values between the pair of edges are filtered. In FIG. 5, smoothing or filtering depth values between pairs of edges is performed by the smoother 128 along the column 510, on a per edge-pair basis. In this way, raw depth values in the raw depth map 224 (FIG. 4) are smoothed or filtered with reference to the edges in the edge map 222 (FIG. 3). Thus, depth values are generally extended and smoothed with a certain level of consistency among edges.
- As further illustrated in FIG. 5, starting with the depth map 500 as input, the smoother 128 scans along rows of the depth map 502, from top to bottom of the map, for example. The rows may be scanned according to a row-wise pixel-by-pixel shift of depth values in the map. Along each row, edges which intersect the row are identified, and the depth values within or between adjacent pairs of intersecting edges are filtered. For example, along the row 520 of depth values, a pair of adjacent intersecting edges is identified, and the depth values between the pair of edges are filtered. In FIG. 5, smoothing or filtering depth values between pairs of edges is performed by the smoother 128 along the row 520, on a per edge-pair basis. In this way, depth values are generally extended and smoothed with a certain level of consistency among edges. It should be appreciated here that several pairs of intersecting edges may be identified along each column 510 and row 520 in a depth map, and depth values may be smoothed between each of the pairs of edges.
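- One plausible reading of a single edge-bounded pass is sketched below: within each column, runs of depth values between consecutive edge pixels are replaced by their mean, so smoothing never crosses an edge. The mean filter and the transpose trick for the row pass are assumptions for illustration, not the patent's stated filter.

```python
import numpy as np

def smooth_pass(depth: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Smooth depth values along columns, between adjacent pairs of edges."""
    out = depth.astype(np.float32).copy()
    height, width = depth.shape
    for x in range(width):
        stops = [0] + list(np.flatnonzero(edges[:, x])) + [height]
        for top, bottom in zip(stops[:-1], stops[1:]):
            if bottom - top > 1:
                out[top:bottom, x] = out[top:bottom, x].mean()
    return out

# Column pass, then a row pass via transposition, as in the two maps of FIG. 5:
# smoothed = smooth_pass(smooth_pass(raw_depth, edge_map).T, edge_map.T).T
```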
- Referring back to FIG. 2B, after the smoother 128 smooths the depth values in the depth map 224 to provide a smoothed depth map 226, the smoother 128 provides the smoothed depth map 226 to the scaler 120. The scaler 120 upscales the smoothed depth map 226 and provides an upscaled depth map 228 to the focuser 130. Generally, the upscaled depth map 228 includes a density of depth values which corresponds to the pixel density of the first and/or second images 202 and 204. Using the upscaled depth map 228, the focuser 130 may focus and/or re-focus one or more pixels in the first image 202, for example, with reference to corresponding values of depth in the depth map 228.
- As illustrated in FIG. 2B, the focuser 130 receives the upscaled depth map 228, the first image 202, and a point for focus 140. Generally, the focuser 130 selectively focuses the first image 202 according to the point for focus 140, by blending portions of a blurred replica of the first image 202 with the first image 202. The blending is performed by the focuser 130 with reference to the relative depth values of the upscaled depth map 228 as a measure for blending. The focuser 130 provides an output image based on a blend of the first image 202 and the blurred replica of the first image 202.
- The point for focus 140 may be received by the device 160 (FIG. 1B) using any suitable input means, such as a capacitive touch screen, mouse, keyboard, electronic pen, etc. That is, a user of the device 160 may, after capture of the first and second images 202 and 204 by the device 160, select a point on the first image 202 (or the second image 204) to be selectively focused. Here, it is noted that the first image 202 may be captured by the first sensor 150 according to a relatively large depth of field. In other words, the first image 202 may be substantially focused throughout its field of view, for example, based on a sufficiently small optical aperture. Thus, after capture of the first image 202, the focuser 130 may selectively focus areas of the first image 202 based on depth, by simulating a focal point and associated in-focus depth of field of the first image 202 along with other depths of field which are out of focus (i.e., blurred).
- According to one embodiment, for a certain point for focus 140 selected by a user, the focuser 130 identifies a corresponding depth value (i.e., a selected depth value for focus) in the upscaled depth map 228, and evaluates a relative difference in depth between the selected depth value and each other depth value in the upscaled depth map 228. Thus, the focuser 130 evaluates the depth values in the depth map 228 according to relative differences from the point for focus 140. In turn, the focuser 130 blends the first image 202 and the blurred replica of the first image 202 based on relative differences in depth, as compared to the point for focus 140.
- In one embodiment, the blurred replica of the first image 202 may be generated by the image processor 132 using a Gaussian blur or similar filter, and the focuser 130 blends the first image 202 and the blurred replica according to an alpha blend. For example, at the point for focus 140, the focuser 130 may form a composite of the first image 202 and the blurred replica, where the first image 202 contributes all or substantially all information in the composite and the blurred replica contributes no or nearly no information. On the other hand, for a point in the first image 202 having a relatively significant difference in depth as compared to the point for focus 140, the focuser 130 may form another composite, where the first image 202 contributes no or nearly no information and the blurred replica contributes all or substantially all information.
- The focuser 130 may evaluate several points among the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite image for each point based on the relative differences in depth. The composites for the various points may then be formed or joined together by the focuser 130 into an output image. In one embodiment, the focuser 130 may evaluate individual pixels in the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite for each pixel (or surrounding each pixel) based on relative differences in depth embodied in the depth values of the depth map 228.
- According to the operation of the focuser 130, the output image of the focuser 130 includes a region of focus identified by the point for focus 140, and a blend of regions of progressively less focus (i.e., more blur) based on increasing difference in depth as compared to the point for focus 140. In this manner, the focuser 130 simulates a focal point and associated in-focus depth of field in the output image 260A, along with other depths of field which are out of focus (i.e., blurred). It should be appreciated that, because the depth map 228 includes several graduated (or nearly continuous) values of depth, the output image 260A also includes several graduated ranges of blur or blurriness. In this way, the focuser 130 simulates the effect of capturing the image 202 using a relatively larger optical aperture, and the point of focus may be altered after the image 202 is captured. Particularly, several points for focus 140 may be received by the focuser 130 over time, and the focuser 130 may generate a respective output image 260A for each point for focus 140.
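- Putting those pieces together, a minimal depth-weighted alpha blend might look like the sketch below. The Gaussian sigma and the linear falloff from the selected depth are assumed parameters, not values from this disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image: np.ndarray, depth: np.ndarray,
            focus_xy: tuple[int, int], falloff: float = 0.1) -> np.ndarray:
    """Blend a sharp H x W x 3 image with its blurred replica, weighted by
    each pixel's depth difference from the depth at the tapped point."""
    blurred = gaussian_filter(image.astype(np.float32), sigma=(4, 4, 0))
    x, y = focus_xy
    target_depth = float(depth[y, x])                 # depth at the tapped point
    alpha = np.clip(np.abs(depth - target_depth) * falloff, 0.0, 1.0)
    alpha = alpha[..., np.newaxis]                    # broadcast over color channels
    return (1.0 - alpha) * image.astype(np.float32) + alpha * blurred
```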
- In another embodiment, rather than relying upon a blurred replica of the first image 202, the focuser 130 selectively focuses regions of the first image 202 without using the blurred replica. In this context, the focuser 130 may determine a point spread per pixel for pixels of the first image 202, to generate an output image. For example, for pixels with little or no difference in depth relative to the point for focus 140, the focuser 130 may form the output image 260 using the pixel values in the first image 202 without (or with little) change to the pixel values. On the other hand, for pixels with larger differences in depth relative to the point for focus 140, the focuser 130 may determine a blend of the value of the pixel and its surrounding pixel values based on a measure of the difference. In this case, rather than relying upon a predetermined blurred replica, the focuser 130 may determine a blend for each pixel, individually, according to values of neighboring pixels.
- While it is noted that the processes for focusing and re-focusing images, as described above, may benefit from the calibration processes performed by the calibrator 122, other image processing techniques may benefit from the calibration processes as well. For example, depth maps may be relied upon for object extraction, scene understanding, or gesture recognition. In this context, to the extent that the calibration processes performed by the calibrator 122 improve the accuracy of depth maps generated by the system 10, the calibrator 122 may improve object extraction, scene understanding, or gesture recognition image processes.
- As another example of an image processing technique which may benefit from the calibration processes performed by the calibrator 122, it is noted that additional details may be imparted to regions of an image which would otherwise be saturated (i.e., featureless or beyond the measurable range) using HDR image processing techniques. Generally, HDR images are created by capturing both a short exposure image and a normal or long exposure image of a certain field of view. The short exposure image provides the additional details for regions that would otherwise be saturated in the normal or long exposure. The short and normal exposure images may be captured in various ways. For example, multiple images may be captured for the same field of view, successively, over a short period of time and at different levels of exposure. This approach is commonly used in video capture, especially if a steady and relatively high-rate flow of frames is being captured and any object motion is acceptably low. For still images, however, object motion artifacts are generally unacceptable for a multiple, successive capture approach.
- An alternative HDR image processing approach alternates the exposure lengths of certain pixels of an image sensor. This minimizes problems associated with object motion, but injects interpolation artifacts due to the interpolation needed to reconstruct a full resolution image for both exposures. Still another approach adds white or clear pixels to the Bayer pattern of an image sensor, and is commonly known as RGBC or RGBW. The white or clear pixels may be embodied as low light pixels, but the approach may have problems with interpolation artifacts due to the variation of the Bayer pattern required for the white or clear pixels.
- In the context of the system 10 and/or the device 160, if the first sensor 150 is embodied as a main color image sensor and the second sensor 152 is embodied as a secondary luminance-only image sensor, for example, the luminance-only data provided from the second sensor 152 may provide the additional information for HDR detail enhancement. In certain aspects of the embodiments described herein, the exposure settings and characteristics of the secondary luminance image sensor may be set and determined separately from those of the main color image sensor by the calibrator 122. This is achieved while the main sensor is not adversely affected by the addition of white or clear pixels, for example.
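- A toy sketch of that kind of luminance-assisted detail recovery follows. The saturation threshold and the per-pixel gain transfer are assumptions for illustration; a real pipeline would also account for the exposure ratio between the two captures and apply tone mapping.

```python
import numpy as np

def merge_hdr_detail(color: np.ndarray, short_luma: np.ndarray,
                     sat_threshold: float = 240.0) -> np.ndarray:
    """Pull highlight detail from a short-exposure luminance image into
    regions where the main color image is near saturation."""
    c = color.astype(np.float32)
    # Rec. 601 luma of the main color image
    luma = 0.299 * c[..., 0] + 0.587 * c[..., 1] + 0.114 * c[..., 2]
    gain = np.ones_like(luma)
    hot = luma >= sat_threshold                  # near-saturated pixels
    gain[hot] = short_luma[hot] / np.maximum(luma[hot], 1e-6)
    return c * gain[..., np.newaxis]
```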
- While various examples are provided above, it should be appreciated that the examples are not to be considered limiting, as other advantages in image processing techniques may be achieved based on the calibration processes performed by the calibrator 122.
- Before turning to the flow diagram of FIG. 6, it is noted that the embodiments described herein may be practiced using an alternative order of the steps illustrated in FIG. 6, which is provided by way of example only. That is, not all steps are required in every embodiment: one or more of the steps may be omitted or replaced, steps may be performed in orders that differ from those illustrated or in parallel with one another, and certain additional steps may be performed, without departing from the scope and spirit of the embodiments. Finally, although the process 600 of FIG. 6 is generally described in connection with the system 10 of FIG. 1A and/or the device 160 of FIG. 1B, the process 600 may be performed by other systems and/or devices.
- FIG. 6 illustrates a flow diagram for a process 600 of calibration of a mix of image sensors in the system 10 of FIG. 1A according to an example embodiment. At reference numeral 602, the process 600 includes identifying a characteristic for calibration associated with at least one of a first sensor or a second sensor. In one embodiment, a pixel density of the second sensor may be a fraction of the pixel density of the first sensor. With reference to the system 10 of FIG. 1A and/or the device 160 of FIG. 1B, the identifying at reference numeral 602 may be performed during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the characteristics for calibration may be identified during post-assembly calibration processes by the calibrator 122. The characteristic for calibration may be related to operating characteristics of one or more of the first and second sensors 150 and 152.
- Differences in operating characteristics between the first and second sensors 150 and 152 may be quantified as calibration data and stored in the memory 110 as the calibration characteristic data 114. The differences may be due to different pixel densities of the first and second sensors 150 and 152, or to other differences in the design, manufacture, or operation of the first and second sensors 150 and 152, for example.
- At reference numeral 604, the process 600 includes adapting an operating characteristic of at least one of the first sensor or the second sensor to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, using the characteristic for calibration identified at reference numeral 602. For example, the calibrator 122 may adapt operating characteristics or parameters of one or more of the first sensor 150 and/or the second sensor 152, as described herein.
- At reference numeral 606, the process 600 includes capturing a first image with the first sensor, and capturing a second image with the second sensor. In the context of the system 10 and/or the device 160 (FIG. 1A and FIG. 1B), the first image may be captured by the first sensor 150, and the second image may be captured by the second image sensor 152. The first and second images may be captured at a substantially same time or at different times among embodiments. As noted above, the first sensor may be embodied as a multi-spectral component (e.g., color) sensor and the second sensor may be embodied as a limited-spectral (e.g., luminance only) component sensor. Further, the first and second sensors may be embodied as sensors having similar or different pixel densities or other characteristics.
- At reference numeral 608, the process 600 includes adjusting an attribute of one or more of the first or second images to substantially address at least one difference between them. For example, reference numeral 608 may include adjusting an attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the first image, using the characteristic for calibration identified at reference numeral 602. Reference numeral 608 may further include aligning the second image with the first image to substantially address a difference in alignment between the first sensor and the second sensor. Additionally or alternatively, reference numeral 608 may include normalizing values among the first image and the second image to substantially address a difference in sensitivity between the first sensor and the second sensor.
- In this context, the calibrator 122 may adjust one or more attributes of one or more of the first or second images 202 or 204 (FIG. 1B) to substantially address a difference between attributes of the first or second images 202 and 204. For example, based on a difference in exposure between the first sensor 150 and the second sensor 152, the calibrator 122 may adjust the exposure of one or more of the first image 202 and the second image 204, to address the difference in exposure. Similarly, based on a difference in noise, the calibrator 122 may filter one or more of the first image 202 and the second image 204, to address a difference in an amount of noise among the images.
- In various embodiments, to the extent possible, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. Again, a measure of differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) of the first and second images 202 and 204 may be embodied in the calibration characteristic data 114, which may be referenced by the calibrator 122 when adjusting attributes of the first and/or second images 202 and/or 204 at reference numeral 608.
- At reference numeral 610, the process 600 may include scaling one or more of the first image or the second image to scaled image copies. For example, at reference numeral 610, the process 600 may include upscaling the first image to an upscaled first image and/or upscaling the second image to an upscaled second image. Alternatively, at reference numeral 610, the process 600 may include downscaling the first image to a downscaled first image and/or downscaling the second image to a downscaled second image. In certain embodiments, the scaling at reference numeral 610 may be omitted, for example, depending upon the application for use of the first and/or second images and the pixel densities of the sensors used to capture the images.
- At reference numeral 612, the process 600 includes adjusting an attribute of one or more of the scaled (i.e., upscaled or downscaled) first or second images to substantially address at least one difference between them. This process may be similar to that performed at reference numeral 608, although performed on scaled images. Here, it should be appreciated that the process 600 may make adjustments to downscaled or upscaled images at various stages. For example, adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
- Here, it is noted that the processes performed at reference numerals 602, 604, 606, 608, 610, and 612 may be relied upon to adapt and/or adjust one or more images or pairs of images, so that other image processes, such as the processes at reference numerals 614, 616, and 618, may be performed with better results. In other words, the processes at reference numerals 614, 616, and 618 are described by way of example, as other image processing techniques may follow the image calibration according to the embodiments described herein.
- At reference numeral 614, the process 600 may include generating one or more edge or depth maps. For example, the generation of edge or depth maps may be performed by the edge map generator 126 and/or the depth map generator 124 as described above with reference to FIG. 2B. In turn, at reference numeral 616, the process 600 may include receiving a point for focus and focusing or re-focusing one or more images. Again, the focusing or re-focusing of images may be performed by the focuser 130 as described above with reference to FIG. 2B.
- Alternatively or additionally, at reference numeral 618, the process 600 may include extracting one or more objects, recognizing one or more gestures, or performing other image processing techniques. These techniques may be performed with reference to the edge or depth maps generated at reference numeral 614, for example. In this context, due to the calibration processes performed at reference numerals 602, 604, 606, 608, 610, and 612, the accuracy of the edge or depth maps may be improved, and the image processing techniques at reference numeral 618 (and reference numeral 616) may also be improved.
- As another alternative, at reference numeral 620, the process 600 may include generating an HDR image. Here, it is noted that the generation of an HDR image may occur before any image scaling occurs at reference numeral 610. The generation of an HDR image may be performed according to the embodiments described herein. For example, the generation of an HDR image may include generating the HDR image by combining luminance values of a second image with full color values of a first image.
- According to various aspects of the process 600, the process 600 may be relied upon for calibration of images captured from a plurality of image sensors, which may include a heterogeneous mix of image sensors. The calibration may assist with various image processing techniques, such as focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
- FIG. 7 illustrates an example schematic block diagram of a computing architecture 700 that may be employed as the processing environment 100 of the system 10 of FIG. 1A, according to various embodiments described herein. The computing architecture 700 may be embodied, in part, using one or more elements of a mixed general and/or specific purpose computer. The computing architecture 700 includes a processor 710, a Random Access Memory (RAM) 720, a Read Only Memory (ROM) 730, a memory device 740, and an Input Output (I/O) interface 750. The elements of the computing architecture 700 are communicatively coupled via one or more local interfaces 702. The elements of the computing architecture 700 are not intended to be limiting in nature, as the architecture may omit elements or include additional or alternative elements.
- In various embodiments, the processor 710 may include or be embodied as a general purpose arithmetic processor, a state machine, or an ASIC, for example. In various embodiments, the processing environment 100 of FIGS. 1A and 1B may be implemented, at least in part, using a computing architecture 700 including the processor 710. The processor 710 may include one or more circuits, one or more microprocessors, ASICs, dedicated hardware, or any combination thereof. In certain aspects and embodiments, the processor 710 is configured to execute one or more software modules which may be stored, for example, on the memory device 740. The software modules may configure the processor 710 to perform the tasks undertaken by the elements of the computing environment 100 of the system 10 of FIG. 1A, for example. In certain embodiments, the process 600 described in connection with FIG. 6 may be implemented or executed by the processor 710 according to instructions stored on the memory device 740.
- The RAM and ROM 720 and 730 may include or be embodied as any random access and read only memory devices that store computer-readable instructions to be executed by the processor 710. The memory device 740 stores computer-readable instructions thereon that, when executed by the processor 710, direct the processor 710 to execute various aspects of the embodiments described herein.
- As a non-limiting example group, the memory device 740 includes one or more non-transitory memory devices, such as an optical disc, a magnetic disc, a semiconductor memory (i.e., a semiconductor, floating gate, or similar flash based memory), a magnetic tape memory, a removable memory, combinations thereof, or any other known non-transitory memory device or means for storing computer-readable instructions. The I/O interface 750 includes device input and output interfaces, such as keyboard, pointing device, display, communication, and/or other interfaces. The one or more local interfaces 702 electrically and communicatively couple the processor 710, the RAM 720, the ROM 730, the memory device 740, and the I/O interface 750, so that data and instructions may be communicated among them.
- In certain aspects, the processor 710 is configured to retrieve computer-readable instructions and data stored on the memory device 740, the RAM 720, the ROM 730, and/or other storage means, and copy the computer-readable instructions to the RAM 720 or the ROM 730 for execution, for example. The processor 710 is further configured to execute the computer-readable instructions to implement various aspects and features of the embodiments described herein. For example, the processor 710 may be adapted or configured to execute the process 600 described above in connection with FIG. 6. In embodiments where the processor 710 includes a state machine or ASIC, the processor 710 may include internal memory and registers for maintenance of data being processed.
- The flowchart or process diagram of FIG. 6 is representative of certain processes, functionality, and operations of embodiments described herein. Each block may represent one or a combination of steps or executions in a process. Alternatively or additionally, each block may represent a module, segment, or portion of code that includes program instructions to implement the specified logical function(s). The program instructions may be embodied in the form of source code that includes human-readable statements written in a programming language, or machine code that includes numerical instructions recognizable by a suitable execution system, such as the processor 710, where the machine code may be converted from the source code. Further, each block may represent, or be connected with, a circuit or a number of interconnected circuits to implement a certain logical function or process step.
- Although embodiments have been described herein in detail, the descriptions are by way of example. The features of the embodiments described herein are representative and, in alternative embodiments, certain features and elements may be added or omitted. Additionally, modifications to aspects of the embodiments described herein may be made by those skilled in the art without departing from the spirit and scope of the present invention defined in the following claims, the scope of which are to be accorded the broadest interpretation so as to encompass modifications and equivalent structures.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/065,810 US20150103200A1 (en) | 2013-10-16 | 2013-10-29 | Heterogeneous mix of sensors and calibration thereof |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361891631P | 2013-10-16 | 2013-10-16 | |
US201361891648P | 2013-10-16 | 2013-10-16 | |
US14/065,810 US20150103200A1 (en) | 2013-10-16 | 2013-10-29 | Heterogeneous mix of sensors and calibration thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150103200A1 (en) | 2015-04-16 |
Family
ID=52809342
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/065,810 Abandoned US20150103200A1 (en) | 2013-10-16 | 2013-10-29 | Heterogeneous mix of sensors and calibration thereof |
US14/065,786 Active 2033-11-02 US9294662B2 (en) | 2013-10-16 | 2013-10-29 | Depth map generation and post-capture focusing |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/065,786 Active 2033-11-02 US9294662B2 (en) | 2013-10-16 | 2013-10-29 | Depth map generation and post-capture focusing |
Country Status (1)
Country | Link |
---|---|
US (2) | US20150103200A1 (en) |
Families Citing this family (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US9858649B2 (en) * | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
US10107747B2 (en) * | 2013-05-31 | 2018-10-23 | Ecole Polytechnique Federale De Lausanne (Epfl) | Method, system and computer program for determining a reflectance distribution function of an object |
WO2015137635A1 (en) * | 2014-03-13 | 2015-09-17 | Samsung Electronics Co., Ltd. | Image pickup apparatus and method for generating image having depth information |
US9524556B2 (en) * | 2014-05-20 | 2016-12-20 | Nokia Technologies Oy | Method, apparatus and computer program product for depth estimation |
US9646365B1 (en) | 2014-08-12 | 2017-05-09 | Amazon Technologies, Inc. | Variable temporal aperture |
US9749532B1 (en) | 2014-08-12 | 2017-08-29 | Amazon Technologies, Inc. | Pixel readout of a charge coupled device having a variable aperture |
US9787899B1 (en) | 2014-08-12 | 2017-10-10 | Amazon Technologies, Inc. | Multiple captures with a variable aperture |
KR20160112810A (en) * | 2015-03-20 | 2016-09-28 | 삼성전자주식회사 | Method for processing image and an electronic device thereof |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US20170059305A1 (en) * | 2015-08-25 | 2017-03-02 | Lytro, Inc. | Active illumination for enhanced depth map generation |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
CN107534764B (en) * | 2015-04-30 | 2020-03-17 | 深圳市大疆创新科技有限公司 | System and method for enhancing image resolution |
JP6529360B2 (en) * | 2015-06-26 | 2019-06-12 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, image processing method and program |
KR102336447B1 (en) * | 2015-07-07 | 2021-12-07 | 삼성전자주식회사 | Image capturing apparatus and method for the same |
US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
US11024047B2 (en) * | 2015-09-18 | 2021-06-01 | The Regents Of The University Of California | Cameras and depth estimation of images acquired in a distorting medium |
US10679326B2 (en) * | 2015-11-24 | 2020-06-09 | Canon Kabushiki Kaisha | Image data processing apparatus and image data processing method that determine confidence data indicating a level of confidence in a pixel value in high resolution image data |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US10321112B2 (en) * | 2016-07-18 | 2019-06-11 | Samsung Electronics Co., Ltd. | Stereo matching system and method of operating thereof |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10445861B2 (en) * | 2017-02-14 | 2019-10-15 | Qualcomm Incorporated | Refinement of structured light depth maps using RGB color data |
US10389936B2 (en) | 2017-03-03 | 2019-08-20 | Danylo Kozub | Focus stacking of captured images |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
CN109087235B (en) * | 2017-05-25 | 2023-09-15 | 钰立微电子股份有限公司 | Image processor and related image system |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
CN107909574B (en) * | 2017-08-23 | 2021-04-13 | 陈皊皊 | Image recognition system |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
CN108024056B (en) * | 2017-11-30 | 2019-10-29 | Oppo广东移动通信有限公司 | Imaging method and device based on dual camera |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
US10595007B2 (en) * | 2018-03-21 | 2020-03-17 | Himax Imaging Limited | Structured-light method and system of dynamically generating a depth map |
US11107191B2 (en) * | 2019-02-18 | 2021-08-31 | Samsung Electronics Co., Ltd. | Apparatus and method for detail enhancement in super-resolution imaging using mobile electronic device |
US10984513B1 (en) * | 2019-09-30 | 2021-04-20 | Google Llc | Automatic generation of all-in-focus images with a mobile camera |
US11172139B2 (en) * | 2020-03-12 | 2021-11-09 | Gopro, Inc. | Auto exposure metering for spherical panoramic content |
US11368991B2 (en) | 2020-06-16 | 2022-06-21 | At&T Intellectual Property I, L.P. | Facilitation of prioritization of accessibility of media |
US11233979B2 (en) | 2020-06-18 | 2022-01-25 | At&T Intellectual Property I, L.P. | Facilitation of collaborative monitoring of an event |
US11184517B1 (en) | 2020-06-26 | 2021-11-23 | At&T Intellectual Property I, L.P. | Facilitation of collaborative camera field of view mapping |
US11411757B2 (en) | 2020-06-26 | 2022-08-09 | At&T Intellectual Property I, L.P. | Facilitation of predictive assisted access to content |
US11356349B2 (en) | 2020-07-17 | 2022-06-07 | At&T Intellectual Property I, L.P. | Adaptive resource allocation to facilitate device mobility and management of uncertainty in communications |
US11768082B2 (en) | 2020-07-20 | 2023-09-26 | At&T Intellectual Property I, L.P. | Facilitation of predictive simulation of planned environment |
RU2745010C1 (en) * | 2020-08-25 | 2021-03-18 | Самсунг Электроникс Ко., Лтд. | Methods for reconstruction of depth map and electronic computer device for their implementation |
KR20220028698A (en) * | 2020-08-31 | 2022-03-08 | 삼성전자주식회사 | Image processing device and image processing method for high resolution display, and application processor including the same |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6393142B1 (en) * | 1998-04-22 | 2002-05-21 | At&T Corp. | Method and apparatus for adaptive stripe based patch matching for depth estimation |
US8384763B2 (en) * | 2005-07-26 | 2013-02-26 | Her Majesty the Queen in right of Canada as represented by the Minister of Industry, Through the Communications Research Centre Canada | Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging |
US8532425B2 (en) * | 2011-01-28 | 2013-09-10 | Sony Corporation | Method and apparatus for generating a dense depth map using an adaptive joint bilateral filter |
US9007441B2 (en) * | 2011-08-04 | 2015-04-14 | Semiconductor Components Industries, Llc | Method of depth-based imaging using an automatic trilateral filter for 3D stereo imagers |
US20130070049A1 (en) * | 2011-09-15 | 2013-03-21 | Broadcom Corporation | System and method for converting two dimensional to three dimensional video |
- 2013
- 2013-10-29 US US14/065,810 patent/US20150103200A1/en not_active Abandoned
- 2013-10-29 US US14/065,786 patent/US9294662B2/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080030611A1 (en) * | 2006-08-01 | 2008-02-07 | Jenkins Michael V | Dual Sensor Video Camera |
US20080165257A1 (en) * | 2007-01-05 | 2008-07-10 | Micron Technology, Inc. | Configurable pixel array system and method |
US8368803B2 (en) * | 2009-09-10 | 2013-02-05 | Seiko Epson Corporation | Setting exposure attributes for capturing calibration images |
US20110169921A1 (en) * | 2010-01-12 | 2011-07-14 | Samsung Electronics Co., Ltd. | Method for performing out-focus using depth information and camera using the same |
US20110205389A1 (en) * | 2010-02-22 | 2011-08-25 | Buyue Zhang | Methods and Systems for Automatic White Balance |
US20120044328A1 (en) * | 2010-08-17 | 2012-02-23 | Apple Inc. | Image capture using luminance and chrominance sensors |
US20120075432A1 (en) * | 2010-09-27 | 2012-03-29 | Apple Inc. | Image capture using three-dimensional reconstruction |
US20120236124A1 (en) * | 2011-03-18 | 2012-09-20 | Ricoh Company, Ltd. | Stereo camera apparatus and method of obtaining image |
US20130235226A1 (en) * | 2012-03-12 | 2013-09-12 | Keith Stoll Karn | Digital camera having low power capture mode |
US20140347350A1 (en) * | 2013-05-23 | 2014-11-27 | Htc Corporation | Image Processing Method and Image Processing System for Generating 3D Images |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9998716B2 (en) | 2015-08-24 | 2018-06-12 | Samsung Electronics Co., Ltd. | Image sensing device and image processing system using heterogeneous image sensor |
US10341641B2 (en) * | 2015-09-22 | 2019-07-02 | Samsung Electronics Co., Ltd. | Method for performing image process and electronic device thereof |
US20170084044A1 (en) * | 2015-09-22 | 2017-03-23 | Samsung Electronics Co., Ltd | Method for performing image process and electronic device thereof |
US20170150067A1 (en) * | 2015-11-24 | 2017-05-25 | Samsung Electronics Co., Ltd. | Digital photographing apparatus and method of operating the same |
US11496696B2 (en) | 2015-11-24 | 2022-11-08 | Samsung Electronics Co., Ltd. | Digital photographing apparatus including a plurality of optical systems for acquiring images under different conditions and method of operating the same |
EP3185537B1 (en) * | 2015-12-24 | 2021-12-08 | Samsung Electronics Co., Ltd. | Imaging device, electronic device, and method for obtaining image by the same |
WO2017113048A1 (en) * | 2015-12-28 | 2017-07-06 | 华为技术有限公司 | Image fusion method and apparatus, and terminal device |
US10511776B2 (en) | 2015-12-28 | 2019-12-17 | Huawei Technologies Co., Ltd. | Image fusion method and apparatus, and terminal device |
CN108541374A (en) * | 2015-12-28 | 2018-09-14 | 华为技术有限公司 | Image fusion method, apparatus, and terminal device |
US10742847B2 (en) * | 2016-02-12 | 2020-08-11 | Contrast, Inc. | Devices and methods for high dynamic range video |
CN106780618A (en) * | 2016-11-24 | 2017-05-31 | 周超艳 | Three-dimensional information acquisition method and device based on a heterogeneous depth camera |
US10178326B2 (en) * | 2016-11-29 | 2019-01-08 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for shooting image and terminal device |
EP3328067A1 (en) * | 2016-11-29 | 2018-05-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for shooting image and terminal device |
CN106454149A (en) * | 2016-11-29 | 2017-02-22 | 广东欧珀移动通信有限公司 | Image photographing method and device and terminal device |
CN106991372A (en) * | 2017-03-02 | 2017-07-28 | 北京工业大学 | Dynamic gesture recognition method based on an interactive deep learning model |
US10742914B2 (en) * | 2017-03-17 | 2020-08-11 | Canon Kabushiki Kaisha | Head-wearable imaging apparatus with two imaging elements corresponding to a user left eye and right eye, method, and computer readable storage medium for correcting a defective pixel among plural pixels forming each image captured by the two imaging elements based on defective-pixel related position information |
US20180270431A1 (en) * | 2017-03-17 | 2018-09-20 | Canon Kabushiki Kaisha | Imaging apparatus, correction method for defective pixel, and computer readable storage medium |
CN112040203A (en) * | 2020-09-02 | 2020-12-04 | Oppo(重庆)智能科技有限公司 | Computer storage medium, terminal device, image processing method and device |
US20220207775A1 (en) * | 2020-12-24 | 2022-06-30 | Safran Electronics & Defense | Method for calibrating a photodetector array, a calibration device, and an associated imaging system |
US11557063B2 (en) * | 2020-12-24 | 2023-01-17 | Safran Electronics & Defense | Method for calibrating a photodetector array, a calibration device, and an associated imaging system |
CN113642481A (en) * | 2021-08-17 | 2021-11-12 | 百度在线网络技术(北京)有限公司 | Recognition method, training method, device, electronic equipment and storage medium |
WO2023160190A1 (en) * | 2022-02-28 | 2023-08-31 | 荣耀终端有限公司 | Automatic exposure method and electronic device |
Also Published As
Publication number | Publication date |
---|---|
US9294662B2 (en) | 2016-03-22 |
US20150104074A1 (en) | 2015-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150103200A1 (en) | Heterogeneous mix of sensors and calibration thereof | |
US10298864B2 (en) | Mismatched foreign light detection and mitigation in the image fusion of a two-camera system | |
KR101059403B1 (en) | Adaptive spatial image filter for filtering image information | |
US10015374B2 (en) | Image capturing apparatus and photo composition method thereof | |
GB2501810B (en) | Method for determining the extent of a foreground object in an image | |
JP6173156B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
US20190075233A1 (en) | Extended or full-density phase-detection autofocus control | |
US8351776B2 (en) | Auto-focus technique in an image capture device | |
CN107395991B (en) | Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment | |
US20220270345A1 (en) | Subject detection method and apparatus, electronic device, and computer-readable storage medium | |
US10853926B2 (en) | Image processing device, imaging device, and image processing method | |
CN110365894A (en) | Method and related apparatus for image fusion in a camera system | |
US9860456B1 (en) | Bayer-clear image fusion for dual camera | |
US11245878B2 (en) | Quad color filter array image sensor with aperture simulation and phase detection | |
CN105993164A (en) | Solid-state image sensor, electronic device, and auto focusing method | |
TWI693576B (en) | Method and system for image blurring processing | |
JP5927265B2 (en) | Image processing apparatus and program | |
US9401012B2 (en) | Method for correcting purple distortion in digital images and a computing device employing same | |
JP6099973B2 (en) | Subject area tracking device, control method thereof, and program | |
US9710897B2 (en) | Image processing apparatus, image processing method, and recording medium | |
US20140368701A1 (en) | Cloning image data patch in hole of pixel array (patch and clone) | |
CN111050097B (en) | Infrared crosstalk compensation method and device | |
CN112866554A (en) | Focusing method and device, electronic equipment and computer readable storage medium | |
JP6461184B2 (en) | Image processing unit, imaging apparatus, image processing program, and image processing method | |
JP2017182668A (en) | Data processor, imaging device, and data processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MACFARLANE, CHARLES DUNLOP;REEL/FRAME:031811/0071 Effective date: 20131028 |
|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE THE ASSIGNMENT DOCUMENT PREVIOUSLY RECORDED ON REEL 031811 FRAME 0071. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:VONDRAN, GARY LEE;MACFARLANE, CHARLES DUNLOP;SIGNING DATES FROM 20131025 TO 20131028;REEL/FRAME:032190/0438 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001 Effective date: 20160201 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001 Effective date: 20170120 |
|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001 Effective date: 20170119 |