WO2004098167A2 - A system and process for generating high dynamic range video - Google Patents
A system and process for generating high dynamic range video
- Publication number
- WO2004098167A2 (PCT/US2004/010167)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- exposure
- under consideration
- pixel
- warped
- Prior art date
Classifications
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/72—Combination of two or more compensation controls
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- H04N5/77—Interface circuits between an apparatus for recording and a television camera
- H04N5/781—Television signal recording using magnetic recording on disks or drums
- H04N5/85—Television signal recording using optical recording on discs or drums
Definitions
- the invention is related to producing High Dynamic Range (HDR) video, and more particularly to a system and process for generating HDR video from a video image sequence captured while varying the exposure of each frame.
- the real world has a lot more brightness variation than can be captured by the sensors available in most cameras today.
- the radiance of a single scene may contain four orders of magnitude in brightness - from shadows to fully lit regions.
- Typical CCD or CMOS sensors only capture about 256-1024 brightness levels.
- the present invention is directed toward a system and process for generating HDR video from an image sequence of a dynamic scene captured while rapidly varying the exposure using a conventional video camera which has undergone a simple reprogramming of the auto gain mechanism.
- HDR high dynamic range
- a high dynamic range (HDR) video is generated by taking frames of the precursor video captured at alternating long and short exposures over time and producing HDR video from these frames. In general, this is accomplished by first using a procedure that computes the exposure to be used to capture each frame of the precursor video that simultaneously attempts to satisfy two competing constraints.
- the frames of the precursor video, which will typically have some inter-frame motion, are then composited to produce the aforementioned HDR video frames.
- this entails doing motion estimates over a series of the alternating exposed frames of the precursor video to identify pixel correspondences over time, and then deciding how much to weight each pixel of each set of corresponding pixels based on both its exposure and the estimated quality of the motion estimates.
- the corresponding pixel sets are then combined based on their assigned weights to form a composited frame of the HDR video.
- a modified tone mapping technique can be employed for viewing purposes.
- the HDR video generation system and process first involves capturing a video image sequence while varying the exposure of each frame so as to alternate between frames exhibiting a shorter exposure and a longer exposure.
- the exposure for each frame of the video image sequence is set prior to it being captured as a function of the pixel brightness distribution in preceding frames of the video.
- the corresponding pixels between the frame under consideration and both preceding and subsequent neighboring frames are identified.
- For each of the corresponding pixel sets, at least one pixel in the set is identified as representing a trustworthy pixel.
- the pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for that set of pixels. This is repeated for each set of corresponding pixels to form a radiance map representing a frame of the HDR video.
- a tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame that is suitable for rendering and display.
- the aforementioned setting of the exposure for each frame of the video image sequence is designed to set the exposures so that the exposure ratio between each sequential pair of long and short exposure frames is minimized while simultaneously producing a substantially full range of radiances for the scene depicted in the frames. This can be accomplished by first capturing the prescribed number of initiating frames (which is at least two) and computing respective intensity histograms for the last two frames of the video image sequence. It is then determined whether the exposure settings associated with the last two frames captured are equal. If so, it is further determined for each frame whether its over-exposed pixel count and under-exposed pixel count are less than a prescribed over-exposed pixel target count and under-exposed pixel target count, respectively.
- the exposure of a first of the next two frames to be captured is set to a value that would result in the intensity histogram associated with the first of the pair of previously captured frames under consideration being centered within the range of its pixel values.
- the exposure of a second of the next two frames to be captured is set to a value that would result in the intensity histogram associated with the second of the pair of previously captured frames under consideration being centered within the range of its pixel values.
- Centering the intensity histogram involves multiplying the exposure value of the first or second previously captured frame under consideration (as the case may be) by a ratio of the inverse of the camera response function at the median intensity value of a range of anticipated intensity values of the scene to the inverse of the camera response function at the intensity value representing the average of the intensity histogram of the first or second frame, respectively.
- Whenever the exposure settings associated with the last two frames captured are not equal, or one or both frames of the pair of previously captured frames under consideration have an over-exposed pixel count or under-exposed pixel count that meets or exceeds the prescribed over-exposed pixel target count or under-exposed pixel target count, respectively, a different procedure is used. It is first determined whether the over-exposed pixel count is less than a prescribed percentage of the over-exposed pixel target count in the frame of the frame pair having the shorter exposure setting. If so, the exposure for the next shorter exposure frame to be captured is set to an exposure value representing an increase over the exposure value employed for the shorter exposure frame under consideration.
- the over-exposed pixel count represents the number of saturated pixels and the over-exposed pixel target count is within a range of about 1 to 5 percent of the total number of pixels in the frame under consideration.
- this entails multiplying the exposure value of the shorter exposure frame under consideration by a ratio of the inverse of the camera response function at the intensity value representing a saturated pixel to the inverse of the camera response function at the highest intensity value obtained among the first 99 percent of the pixels of the shorter exposure frame under consideration, when ordered by intensity value starting with the smallest value.
- It is next determined whether the under-exposed pixel count is less than a prescribed percentage of the under-exposed pixel target count in the frame of the pair of previously captured frames under consideration having the longer exposure setting. If so, the exposure for the next longer exposure frame to be captured is set to an exposure value representing a decrease from the exposure value employed for the longer exposure frame under consideration.
- the under-exposed pixel count represents the number of black pixels and the under-exposed pixel target count is within a range of about 1 to 5 percent of the total number of pixels in the frame under consideration.
- this entails multiplying the exposure value of the longer exposure frame under consideration by a ratio of the inverse of the camera response function at the intensity value representing a black pixel to the inverse of the camera response function at the intensity value representing the highest intensity value obtained among the first one percent of the pixels of the longer exposure frame under consideration when ordered by intensity value starting with the smallest value.
- the exposure values are re-set. This is accomplished by balancing the over-exposed pixel count associated with the shorter exposure frame under consideration with the under-exposure pixel count associated with the longer exposure frame under consideration to establish revised exposure values for the next longer and shorter exposure frames which do not cause the ratio of these exposure values to exceed the prescribed maximum allowed exposure ratio.
- a revised over-exposed pixel count for the shorter exposure frame under consideration is computed by adding one-half the over-exposed pixel count to one-half of the under-exposed pixel count.
- the intensity histogram generated for the shorter exposure frame is then used to find the intensity value associated with the revised over-exposure pixel count.
- the exposure value of the shorter exposure frame under consideration is next multiplied by a ratio of the inverse of the camera response function at the intensity value representing a saturated pixel to the inverse of the camera response function at the intensity value associated with the revised over-exposure pixel count to produce the exposure value for the next shorter exposure frame to be captured.
- the exposure for the next longer exposure frame to be captured is established by multiplying the exposure computed for the next shorter exposure frame to be captured by the prescribed maximum allowed exposure ratio.
- Whenever the ratio of the previously computed tentative exposure values is not greater than or equal to the prescribed maximum allowed exposure ratio, it is instead first determined if the over-exposed pixel count of the shorter exposure frame under consideration is greater than the over-exposed pixel target count. If so, the exposure for the next shorter exposure frame to be captured is set to an exposure value representing a decrease from the exposure value employed for the shorter exposure frame under consideration. This is accomplished by employing a bump procedure, which specifies the percentage by which the exposure value associated with the shorter exposure frame under consideration is to be decreased to produce the exposure value to be used in capturing the next shorter exposure frame.
- Similarly, if the under-exposed pixel count of the longer exposure frame under consideration is greater than the under-exposed pixel target count, the exposure for the next longer exposure frame to be captured is set to an exposure value representing an increase over the exposure value employed for the longer exposure frame under consideration. This is accomplished by employing the bump procedure, which this time specifies the percentage by which the exposure value associated with the longer exposure frame under consideration is to be increased to produce the exposure value to be used in capturing the next longer exposure frame.
- the exposure value for the next longer exposure frame to be captured and the exposure value for the next shorter exposure frame to be captured are reset by balancing the over-exposed pixel count associated with the shorter exposure frame under consideration with the under-exposure pixel count associated with the longer exposure frame under consideration to establish revised exposure values for the frames to be captured that do not cause the ratio of the exposure values to exceed the prescribed maximum allowed exposure ratio. This is accomplished as described above.
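Taken together, the preceding bullets form a small per-frame auto-exposure decision tree. The sketch below is one possible Python reading of it, not the patent's implementation: finv (the inverse camera response), the 0.25 "prescribed percentage", the 10 percent bump, and the black-pixel threshold of 16 are illustrative assumptions; the maximum ratio of 16 and the 1 percent target counts are the values used in the tested embodiments described later.

```python
import numpy as np

MAX_RATIO = 16.0     # prescribed maximum allowed exposure ratio (tested value)
OVER_TARGET = 0.01   # over-exposed (saturated) pixel target, fraction of frame
UNDER_TARGET = 0.01  # under-exposed (black) pixel target, fraction of frame
BUMP = 0.10          # hypothetical bump percentage

def intensity_at_fraction(hist, frac):
    # Intensity level below which `frac` of the pixels fall (256-bin histogram).
    cdf = np.cumsum(hist) / hist.sum()
    return int(np.searchsorted(cdf, frac))

def next_exposures(exp_s, exp_l, hist_s, hist_l, finv, n):
    """One AGC step for the (short, long) exposure pair.

    exp_s/exp_l: current short/long exposures; hist_s/hist_l: 256-bin
    intensity histograms of the two frames; finv: inverse camera response
    (a callable mapping intensity to relative radiance); n: pixels per frame.
    """
    over = hist_s[255]          # saturated pixels in the shorter exposure frame
    under = hist_l[:17].sum()   # black pixels (<= ~16) in the longer exposure frame

    new_s, new_l = exp_s, exp_l
    if over < 0.25 * OVER_TARGET * n:   # hypothetical "prescribed percentage"
        # raise the short exposure so the 99th-percentile intensity saturates
        x99 = intensity_at_fraction(hist_s, 0.99)
        new_s = exp_s * finv(255) / finv(x99)
    if under < 0.25 * UNDER_TARGET * n:
        # lower the long exposure so the 1st-percentile intensity maps to black
        x01 = intensity_at_fraction(hist_l, 0.01)
        new_l = exp_l * finv(16) / finv(x01)

    if new_l / new_s >= MAX_RATIO:
        # balance saturated and black counts, then clamp the ratio
        revised_over = 0.5 * over + 0.5 * under
        x = intensity_at_fraction(hist_s, 1.0 - revised_over / n)
        new_s = exp_s * finv(255) / finv(x)
        new_l = new_s * MAX_RATIO
    else:
        if over > OVER_TARGET * n:
            new_s = exp_s * (1.0 - BUMP)   # bump the short exposure down
        if under > UNDER_TARGET * n:
            new_l = exp_l * (1.0 + BUMP)   # bump the long exposure up
    return new_s, new_l
```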
- the portion of the HDR video generation system and process involved with identifying corresponding pixels between the frame under consideration and both preceding and subsequent neighboring frames is accomplished as follows.
- the immediately preceding neighboring frame is uni-directionally warped to produce an approximate registration of the preceding frame with the frame under consideration.
- the immediately subsequent neighboring frame is uni-directionally warped to produce an approximate registration of the subsequent frame with the frame under consideration.
- both the preceding and subsequent frames are bi-directionally warped to produce interpolated frames representing approximate registrations of these neighboring frames with the frame under consideration. Warping both uni-directionally and bi-directionally creates redundancy in the registration that is later exploited to increase tolerance to registration errors. It is noted that the method used to accomplish the foregoing warping varies depending on whether the neighboring frames are both shorter exposure frames or longer exposure frames, as well as in some cases whether three consecutive frames have all different exposures or not.
- in the case where the frame under consideration is a longer exposure frame, the uni-directional warping of either neighboring frame involves first boosting the intensity of the neighboring frame to substantially match the intensity range of the longer exposure frame under consideration.
- the neighboring frame being warped is then registered with the frame under consideration using a standard forward warping technique for the preceding frame and a standard backward warping technique for the subsequent frame.
- the uni-directional warping of the preceding frame involves first boosting the intensity of the shorter exposure frame under consideration to substantially match the preceding frame's intensity range, and then registering the preceding frame with the frame under consideration using a standard forward warping technique.
- the uni-directional warping of the subsequent frame involves first boosting the intensity of the shorter exposure frame under consideration to substantially match the subsequent frame's intensity range, and then registering the subsequent frame with the frame under consideration using a standard backward warping technique.
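The intensity boosting used throughout these registration steps can be pictured as a round trip through radiance space: invert the camera response, scale by the exposure ratio, and re-apply the response. A minimal sketch under that reading, assuming f and finv are vectorized callables for the known camera response and its inverse:

```python
import numpy as np

def boost_intensity(img, exp_src, exp_dst, f, finv):
    """Make a frame captured at exp_src look like one captured at exp_dst.

    f: camera response mapping relative radiance to 0-255 intensity;
    finv: its inverse. Both are assumed vectorized callables.
    Pixels that would saturate at exp_dst clamp to 255.
    """
    radiance = finv(img)                      # per-pixel radiance estimate
    scaled = radiance * (exp_dst / exp_src)   # radiance the sensor would see
    return np.clip(f(scaled), 0, 255).astype(np.uint8)
```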
- the following procedure is employed.
- a bi-directional flow field is computed for the preceding and subsequent frames, respectively.
- the preceding frame is then warped using the bi-directional flow field computed for that frame to produce a forward warped frame and the subsequent frame is warped using the bidirectional flow field computed for that frame to produce a backward warped frame.
- the forward warped frame and the backward warped frame are combined to produce an interpolated frame.
- the intensity of the interpolated frame is boosted to substantially match the intensity range of the longer exposure frame currently under consideration.
- a refining flow field that best maps the intensity-boosted interpolated frame to the longer exposure frame under consideration is then established.
- a refined forward flow field is computed by concatenating the forward flow field with the refining flow field, and a refined backward flow field is computed by concatenating the backward flow field with the refining flow field.
- the refined forward flow field and refined backward flow field are then applied to the original preceding and subsequent frames, respectively, to produce a refined forward warped frame and refined backward warped frame, respectively.
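The concatenation of flow fields described above can be read as function composition: the refined displacement at a pixel is its original displacement plus the refining displacement sampled at the displaced location. A sketch under that reading, with a bilinear sampling helper assumed:

```python
import numpy as np

def concatenate_flows(flow, refine, sample_bilinear):
    """Compose two flow fields: refined(x) = flow(x) + refine(x + flow(x)).

    flow, refine: (H, W, 2) arrays of per-pixel (x, y) displacements.
    sample_bilinear(field, coords) is an assumed helper that samples a
    flow field at fractional coordinates of shape (H, W, 2).
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs + flow[..., 0], ys + flow[..., 1]], axis=-1)
    return flow + sample_bilinear(refine, coords)
```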
- the procedure starts by determining whether the exposure associated with the preceding frame is shorter or longer than the exposure associated with the subsequent frame. Whenever it is determined that the exposure associated with the preceding frame is shorter, the intensity of the preceding frame is boosted to substantially match the intensity range of the subsequent frame. Whenever it is determined that the exposure associated with the preceding frame is longer than the exposure associated with the subsequent frame, the intensity of the subsequent frame is boosted to substantially match the intensity range of the preceding frame.
- a bi-directional flow field is then computed for the preceding and subsequent frames, respectively.
- the preceding frame is warped using the bi-directional flow field computed for that frame to produce a forward warped frame and the subsequent frame is warped using the bi-directional flow field computed for that frame to produce a backward warped frame - noting that it is the intensity boosted version of the preceding or subsequent frame that is warped, as the case may be.
- the forward warped frame and the backward warped frame are combined to produce an interpolated frame.
- the intensity of the interpolated frame is boosted to substantially match the intensity range of the longer exposure frame currently under consideration.
- a refining flow field that best maps the intensity-boosted interpolated frame to the longer exposure frame under consideration is then established.
- the refined forward flow field is computed by concatenating the forward flow field with the refining flow field
- the refined backward flow field is computed by concatenating the backward flow field with the refining flow field.
- the refined forward flow field and refined backward flow field are then applied to the original preceding and subsequent frames, respectively, to produce a refined forward warped frame and refined backward warped frame, respectively.
- the bi-directional warping of the neighboring frames in a case where the frame under consideration is a shorter exposure frame and the preceding and subsequent frames are longer exposure frames having substantially identical exposures is accomplished as follows. First, a bi-directional flow field is computed for the preceding and subsequent frames. The preceding frame is then warped using the bi-directional flow field computed for that frame to produce a forward warped frame and the subsequent frame is warped using the bi-directional flow field computed for that frame to produce a backward warped frame. The forward warped frame and the backward warped frame are combined to produce an interpolated frame. Next, the intensity of the frame under consideration is boosted to substantially match the average intensity range of the preceding and subsequent frames.
- a refining flow field that best maps the interpolated frame to the intensity boosted frame under consideration is then established.
- the refined forward flow field is computed by concatenating the forward flow field with the refining flow field
- the refined backward flow field is computed by concatenating the backward flow field with the refining flow field.
- the refined forward flow field and refined backward flow field are then applied to the original preceding and subsequent frames, respectively, to produce a refined forward warped frame and refined backward warped frame, respectively.
- the procedure starts by determining whether the exposure associated with the preceding frame is shorter or longer than the exposure associated with the subsequent frame. Whenever it is determined that the exposure associated with the preceding frame is shorter, the intensity of the preceding frame is boosted to substantially match the intensity range of the subsequent frame. Whenever it is determined that the exposure associated with the preceding frame is longer than the exposure associated with the subsequent frame, the intensity of the subsequent frame is boosted to substantially match the intensity range of the preceding frame.
- a bi-directional flow field is then computed for the preceding and subsequent frames, respectively.
- the preceding frame is warped using the bi-directional flow field computed for that frame to produce a forward warped frame and the subsequent frame is warped using the bi-directional flow field computed for that frame to produce a backward warped frame.
- the forward warped frame and the backward warped frame are combined to produce an interpolated frame.
- the intensity of the frame under consideration is boosted to substantially match the average intensity range of the preceding and subsequent frames.
- a refining flow field that best maps the interpolated frame to the intensity boosted frame under consideration is then established.
- the refined forward flow field is computed by concatenating the forward flow field with the refining flow field, and the refined backward flow field is computed by concatenating the backward flow field with the refining flow field.
- the refined forward flow field and refined backward flow field are then applied to the original preceding and subsequent frames, respectively, to produce a refined forward warped frame and refined backward warped frame, respectively.
- the aforementioned bi-directional flow field is computed for each neighboring frame in the case where the frame under consideration is a longer exposure frame and the neighboring preceding and subsequent frames are shorter exposure frames, as follows.
- the preceding and subsequent frames are globally registered by estimating an affine transform that maps one onto the other.
- a dense motion field is then computed.
- This motion field represents a local correction to the global transform and is computed using a gradient based optical flow. More particularly, a variant of the Lucas and Kanade technique [4] is used in a Laplacian pyramid framework where both the preceding and subsequent frames are warped towards time k corresponding to the time index of the frame under consideration and the residual flow vectors are estimated between each pixel of the two warped images at each level of the pyramid.
- the residual flow vectors computed for each pixel at each level of the pyramid are accumulated to establish the local component of the dense motion field.
- a composite vector is established for each pixel location in the bidirectional flow field.
- This composite vector is the sum of an affine component derived from the affine transform rescaled to warp either from the preceding frame to the forward warped frame in the case of the flow field for the preceding frame and from the subsequent frame to the backward warped frame in the case of the flow field for the subsequent frame, and a local component taken from the dense motion field that forms the local correction for the affine component.
- a bicubic warping technique is then used to transfer each pixel along the appropriate composite vector to form the aforementioned forward or backward warped frame, as the case may be.
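A sketch of assembling these composite vectors follows. Here rescaling the affine transform is approximated by linearly scaling its per-pixel displacement (a scale of 0.5 warps halfway, toward time k); that approximation and the helper names are assumptions, not the patent's exact construction:

```python
import numpy as np

def composite_flow(affine, local, scale=0.5):
    """Composite per-pixel vectors: rescaled affine component + local correction.

    affine: 3x3 global transform between the two neighboring frames;
    local: (H, W, 2) dense residual motion field (the local correction);
    scale: fraction of the full inter-frame motion (0.5 warps to time k).
    """
    h, w = local.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    pts = np.stack([xs, ys, np.ones_like(xs)])      # homogeneous pixel grid
    mapped = np.tensordot(affine, pts, axes=1)      # apply the full affine
    mapped = mapped[:2] / mapped[2]                 # dehomogenize
    affine_disp = np.stack([mapped[0] - xs, mapped[1] - ys], axis=-1)
    return scale * affine_disp + local              # composite vector field
```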
- the previously described action of combining the forward warped frame and the backward warped frame to produce the combined interpolated frame can be accomplished by averaging the pixel values from both the forward and backward warped frames for a pixel location of the combined interpolated frame, when both are available. Whenever only one pixel value is available from the forward and backward warped frames for a pixel location of the interpolated frame, the available pixel value is used to establish a pixel value for the combined interpolated frame at that pixel location.
- Whenever neither pixel value is available from the forward and backward warped frames for a pixel location of the interpolated frame, the pixel value for the combined interpolated frame is established at that pixel location by averaging the two pixel values obtained using a zero motion vector.
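The combination rule just described reduces to a per-pixel selection. A sketch, with validity masks assumed to come out of the warping step:

```python
import numpy as np

def combine_warped(fwd, bwd, fwd_valid, bwd_valid, prev, nxt):
    """Combine forward- and backward-warped frames into the interpolated frame.

    fwd/bwd: warped frames; fwd_valid/bwd_valid: boolean (H, W) masks of
    pixels the warp actually produced; prev/nxt: the original neighboring
    frames, used for the zero-motion-vector fallback.
    """
    out = 0.5 * (prev.astype(np.float64) + nxt)   # zero-motion fallback
    both = fwd_valid & bwd_valid
    out[both] = 0.5 * (fwd[both].astype(np.float64) + bwd[both])
    only_f = fwd_valid & ~bwd_valid               # only the forward pixel exists
    out[only_f] = fwd[only_f]
    only_b = bwd_valid & ~fwd_valid               # only the backward pixel exists
    out[only_b] = bwd[only_b]
    return out
```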
- a global homography is computed between the frame under consideration and the intensity-boosted interpolated frame. Then, the frame under consideration is segmented into overlapping quadrants. The overlap is preferably between about 5 to about 20 percent. For each of these quadrants, it is determined whether the intensity variation among the pixels within the quadrant under consideration exceeds a prescribed variation threshold. If so, a counterpart region to the quadrant under consideration is identified in the interpolated frame using the global homography. A local homography between the quadrant under consideration and the identified counterpart region in the interpolated frame is then computed.
- It is then determined whether the per-pixel registration error using the local homography is less than the per-pixel registration error using the global homography for the quadrant under consideration. Whenever it is determined the per-pixel registration error using the local homography is less than the per-pixel registration error using the global homography, the local homography is assigned to the quadrant under consideration. Otherwise, the global homography is assigned to the quadrant under consideration. The portion of the refining flow field associated with the quadrant under consideration is established using this local homography whenever a local homography has been assigned to that quadrant. In cases where a local homography was not computed because the intensity variation test was not passed, or where the global homography is assigned to the quadrant, the portion of the refining flow field associated with that quadrant is established using the global homography.
- this procedure can be hierarchical, i.e., it can be recursively applied to each local homography, which is then treated as the global homography at the next higher level.
- a feathering (i.e., weighted averaging) technique can be applied to the flow components residing within the overlapping regions of the quadrants to minimize flow discontinuities across the resulting refining flow field.
- the feathering technique applied to the flow components residing within the overlapping regions of the quadrants can be any desired. However, in tested versions of the HDR video generating system and process, the feathering involved a linear weighting technique.
- a one dimensional linear weighting was applied to each pixel location such that the portion of the flow component for that location derived from the homography associated with each of the overlapping quadrants is in proportion to its distance from the boundaries of the overlapping region with the respective overlapping quadrants.
- Alternately, a two dimensional linear weighting is applied to each pixel location with the same effect. Namely, the portion of the flow component for a location derived from the homography associated with each of the overlapping quadrants is in proportion to its distance from the boundaries of the overlapping region with the respective overlapping quadrants.
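The linear feathering amounts to a distance-proportional ramp across each overlap. A sketch of the one-dimensional case for two horizontally overlapping quadrants (the two-dimensional variant applies the same ramp along both axes); names and the row-major layout are assumptions:

```python
import numpy as np

def feather_overlap(flow_left, flow_right, overlap):
    """Linearly blend two quadrants' flow fields over their horizontal overlap.

    flow_left/flow_right: (H, overlap, 2) flow components over the overlap
    strip, derived from the left and right quadrant homographies. The weight
    given to each falls off linearly with distance from its quadrant boundary.
    """
    w = np.linspace(1.0, 0.0, overlap)   # 1 at the left edge, 0 at the right
    w = w[None, :, None]                 # broadcast over rows and x/y components
    return w * flow_left + (1.0 - w) * flow_right
```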
- the aforementioned bi-directional flow field is computed in the same way for each neighboring frame in the case where the frame under consideration is a shorter exposure frame and the neighboring preceding and subsequent frames are longer exposure frames.
- the refining flow field is also computed in a similar manner, except that the global homography is computed between the intensity-boosted version of the frame under consideration and the combined interpolated frame in this latter case. In addition, it is the intensity-boosted version of the frame under consideration that is segmented into overlapping quadrants.
- one part of the HDR video generation system and process involves identifying at least one pixel in each set of corresponding pixels that represents a trustworthy pixel and employing the pixel color information associated with the one or more identified trustworthy pixels to compute a radiance value for that set of pixels to form a radiance map representing a frame of the HDR video.
- This can be accomplished as follows. First, the frame under consideration, the uni-directionally warped preceding frame, the uni-directionally warped subsequent frame, the bi-directionally warped preceding frame and the bi-directionally warped subsequent frame are each converted to separate radiance images. Note that the original frames, not the intensity-boosted frames, are the ones that are warped and used for the radiance computation. The intensity-boosted frames are used to compute the flow fields only.
- the final radiance map is computed using all the radiance images.
- Each radiance value at a given pixel location in the final radiance map is either taken from the radiance image associated with the frame under consideration or is a weighted combination of two or more radiance values taken from the same pixel location in the aforementioned converted radiance images, depending on which values are deemed to be trustworthy based on the intensity of the pixel at that pixel location in the frame under consideration. More particularly, in the case where the frame under consideration is a longer exposure frame, and the preceding and subsequent frames are shorter exposure frames, the radiance map is produced as follows.
- each pixel in the frame under consideration that has an intensity value exceeding a prescribed maximum intensity threshold is identified, and the average of the radiance values associated with the same location in the bi-directionally warped preceding frame and bi-directionally warped subsequent frame is assigned as the radiance value for the corresponding pixel location in the radiance map.
- each pixel in the frame under consideration that has an intensity value less than the prescribed maximum threshold is identified, and for each of these pixel locations, it is determined if the radiance values assigned to the corresponding location in the uni-directionally warped preceding frame and the uni-directionally warped subsequent frame are outside a maximum allowable noise variance of the radiance value assigned to the same location in the frame under consideration. If not, a weighted average of all three radiance values is computed and assigned as the radiance value for that pixel location in the radiance map.
- Whenever only one of the warped radiance values falls outside the variance, a weighted average of the radiance values assigned to the pixel location in the frame under consideration and the uni-directionally warped frame whose radiance value did not fall outside the variance is computed. This weighted average is then assigned as the radiance value for that pixel location in the radiance map.
- the radiance value assigned to the pixel location in the frame under consideration is assigned as the radiance value for that pixel location in the radiance map.
- In the case where the frame under consideration is a shorter exposure frame and the preceding and subsequent frames are longer exposure frames, the radiance map is produced as follows. Once the aforementioned frames have been converted to radiance images, each pixel in the frame under consideration that has an intensity value exceeding a prescribed minimum intensity threshold is identified. For each of these pixel locations, it is determined if the radiance values assigned to the corresponding location in the uni-directionally warped preceding frame and the uni-directionally warped subsequent frame are outside a maximum allowable noise variance of the radiance value assigned to the same location in the frame under consideration. If not, a weighted average of all three radiance values is computed and assigned as the radiance value for that pixel location in the radiance map.
- Whenever only one of the warped radiance values falls outside the variance, a weighted average of the radiance values assigned to the pixel location in the frame under consideration and the uni-directionally warped frame whose radiance value did not fall outside the variance is computed. This weighted average is then assigned as the radiance value for the corresponding pixel location in the radiance map.
- the radiance value assigned to the pixel location in the frame under consideration is assigned as the radiance value for that pixel location in the radiance map.
- each pixel in the frame under consideration that has an intensity value below the prescribed minimum intensity threshold is identified, and for the corresponding pixel location in the radiance map, the average of the radiance values associated with the same location in the bi- directionally warped preceding frame and bi-directionally warped subsequent frame is assigned as the radiance value.
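The per-pixel logic can be sketched as below for the long-exposure-center case (the short-exposure case mirrors it, keyed off a minimum intensity threshold and averaging the bi-directionally warped neighbors for pixels below it). The equal weights and the simple relative-variance test stand in for the patent's actual weighting functions, which combine a global intensity-based weight with a modulation based on radiance consistency (see Figs. 18(a)-(b) below); the thresholds are illustrative.

```python
import numpy as np

def fuse_radiance_long_center(Rc, Ruf, Rub, Rbf, Rbb, intensity,
                              max_intensity=240, max_var=0.1):
    """Radiance map for a long-exposure center frame (per-pixel logic sketch).

    Rc: radiance image of the frame under consideration; Ruf/Rub: radiances
    of the uni-directionally warped preceding/subsequent frames; Rbf/Rbb:
    the bi-directionally warped ones; intensity: the center frame's pixel
    intensities. max_intensity and max_var are illustrative thresholds.
    """
    out = np.empty_like(Rc)
    sat = intensity > max_intensity
    # saturated pixels: trust only the bi-directionally warped neighbors
    out[sat] = 0.5 * (Rbf[sat] + Rbb[sat])

    ok = ~sat
    # consistency test of each warped radiance against the center frame
    f_ok = ok & (np.abs(Ruf - Rc) <= max_var * Rc)
    b_ok = ok & (np.abs(Rub - Rc) <= max_var * Rc)
    n = 1.0 + f_ok + b_ok   # number of trusted radiance values per pixel
    out[ok] = (Rc[ok] + np.where(f_ok, Ruf, 0.0)[ok]
                      + np.where(b_ok, Rub, 0.0)[ok]) / n[ok]
    return out
```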
- the HDR video generation system and process can also involve tonemapping of the radiance map to convert it into an 8-bit representation of the HDR frame that is suitable for rendering and display.
- this entails first converting the radiance map to CIE space and recovering the chromaticity coordinates to produce a luminance image. Next, the dynamic range of the luminance image is compressed and the chrominance re-inserted. The CIE space image is then converted to produce the final 8-bit range RGB image.
- the present tonemapping procedure uses statistics from neighboring frames in order to produce tonemapped images that vary smoothly in time. More particularly, this tonemapping procedure departs from the norm in that the dynamic range compression involves computing the average and maximum luminances using information from both the frame under consideration and at least one previous frame.
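A sketch of dynamic range compression that pools luminance statistics over the current and previous frames, which is the temporal-smoothness idea the patent emphasizes. The specific global operator used here (a Reinhard-style photographic curve) is an illustrative stand-in, not the patent's exact compression:

```python
import numpy as np

def tonemap_luminance(Y, Y_prev, key=0.18, eps=1e-6):
    """Compress luminance using statistics from the current AND previous frame.

    Y, Y_prev: luminance images (from the CIE-space conversion described
    above). Pooling statistics across frames keeps the mapping smooth in time.
    """
    pool = np.concatenate([Y.ravel(), Y_prev.ravel()])
    log_avg = np.exp(np.mean(np.log(pool + eps)))    # log-average luminance
    y_max = pool.max()                               # maximum luminance
    Ys = key / log_avg * Y                           # scale to the "key" value
    Ys_max = key / log_avg * y_max
    return Ys * (1.0 + Ys / Ys_max**2) / (1.0 + Ys)  # global compression curve
```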
- the previously-described hierarchical global registration process has application outside just the generation of HDR video frames.
- this procedure can be employed to establish a flow field that maps any one image of a scene to another image of the scene.
- the procedure is essentially the same as described above when limited to just two hierarchical levels. However, if more than two levels are prescribed, an expanded procedure is employed.
- the hierarchical global registration process for establishing a flow field that maps one image of a scene to another image of a scene in the case where two or more levels are prescribed is accomplished as follows. First, a global homography is computed between the images. One of the images is then segmented into overlapping quadrants.
- For each of these quadrants, it is determined whether the intensity variation among the pixels within the quadrant under consideration exceeds a prescribed variation threshold. If so, a counterpart region to the quadrant under consideration is identified in the non-segmented image using the global homography.
- a local homography is computed between the quadrant under consideration and the identified counterpart region in the non-segmented image. It is then determined if the per-pixel registration error using the local homography is less than the per-pixel registration error using the global homography for the quadrant under consideration. Whenever it is determined the per-pixel registration error using the local homography is less than the per-pixel registration error using the global homography, the local homography is assigned to the quadrant under consideration. Otherwise, the global homography is assigned to the quadrant under consideration.
- each of the quadrants associated with the previous level that passed the intensity variation test is segmented into overlapping quadrants representing the quadrants of the current hierarchical level.
- For each quadrant in the current level it is determined whether the intensity variation among the pixels within the quadrant under consideration exceeds the prescribed variation threshold. If so, a counterpart region to the quadrant under consideration is identified in the non-segmented image using the homography assigned to the quadrant in the previous level from which the quadrant under consideration in the current level was segmented. A local homography is then computed between the quadrant under consideration and the identified counterpart region in the non-segmented image.
- a portion of the flow field associated with the quadrant under consideration is computed using the homography computed for and assigned to that quadrant. Otherwise, the portion of the flow field associated with the quadrant under consideration is computed using the homography assigned to the quadrant in the previous level, from which the quadrant under consideration in the current level was segmented.
- a feathering technique can be applied to the flow components residing within the overlapping regions of the quadrants to minimize flow discontinuities across the resulting flow field.
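The recursion can be sketched as follows; fit_homography and reg_error are assumed helpers (estimate a homography over a region, and measure its per-pixel registration error), and the 10 percent overlap is an illustrative choice within the 5-20 percent range given above:

```python
def hierarchical_flow(src, dst, H_parent, region, level, max_levels,
                      var_thresh, fit_homography, reg_error):
    """Recursively refine a homography over overlapping quadrants.

    src/dst: the two images; H_parent: homography assigned at the previous
    level; region: (y0, y1, x0, x1) bounds of the current quadrant.
    Returns a list of (region, homography) pairs; overlaps are feathered
    into a flow field afterwards.
    """
    y0, y1, x0, x1 = region
    # quadrants without enough texture keep the parent homography
    if src[y0:y1, x0:x1].std() <= var_thresh:
        return [(region, H_parent)]
    H_local = fit_homography(src, dst, region, H_parent)
    err_local = reg_error(src, dst, region, H_local)
    err_parent = reg_error(src, dst, region, H_parent)
    H = H_local if err_local < err_parent else H_parent   # keep the better fit
    if level >= max_levels:
        return [(region, H)]
    # segment into four overlapping quadrants (about 10 percent overlap)
    my, mx = (y0 + y1) // 2, (x0 + x1) // 2
    oy, ox = int(0.1 * (y1 - y0)), int(0.1 * (x1 - x0))
    quads = [(y0, my + oy, x0, mx + ox), (y0, my + oy, mx - ox, x1),
             (my - oy, y1, x0, mx + ox), (my - oy, y1, mx - ox, x1)]
    out = []
    for q in quads:
        out += hierarchical_flow(src, dst, H, q, level + 1, max_levels,
                                 var_thresh, fit_homography, reg_error)
    return out
```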
- FIG. 1 is a diagram depicting a general purpose computing device constituting an exemplary system for implementing the present invention.
- FIG. 2 is a flow chart diagramming an overall process for generating HDR video.
- FIG. 3 is a pair of successive frames of a driving video captured in accordance with the video image sequence capture portion of the process of Fig. 2, where the first frame is a longer exposure frame and the second frame is a shorter exposure frame.
- FIG. 4 is a combined intensity histogram of the two images shown in Fig. 3 in radiance space.
- the left hand side of the plot corresponds to the long exposure frame, while the right hand side of the plot corresponds to the short exposure frame.
- FIGS. 5A-B are a flow chart diagramming one embodiment of the video image sequence capture portion of the process of Fig. 2.
- FIG. 6 is a flow chart diagramming one embodiment of the HDR stitching portion of the process of Fig. 2, where a long exposure frame is considered, and adjacent preceding and subsequent short exposure frames having substantially identical exposures are registered with it in various ways.
- FIG. 7 is a block diagram illustrating the bidirectional warping portion of the HDR stitching process of Fig. 6.
- FIG. 8 is a block diagram illustrating the bidirectional warping portion of the HDR stitching process of Fig. 2, where a long exposure frame is considered and the adjacent preceding and subsequent short exposure frames have substantially different exposures.
- FIGS. 9A-B are a flow chart diagramming one embodiment of the HDR stitching portion of the process of Fig. 2, where a long exposure frame is considered, and adjacent preceding and subsequent short exposure frames having substantially different exposures are registered with it in various ways.
- FIG. 10 is a flow chart diagramming one embodiment of the initial phase of the HDR stitching portion of the process of Fig. 2 involving uni-directional warping, where a shorter exposure frame is considered with adjacent preceding and subsequent longer exposure frames.
- FIG. 11 is a block diagram illustrating the bidirectional warping portion of the HDR stitching process of Fig. 2, where a shorter exposure frame is considered and the adjacent preceding and subsequent longer exposure frames have substantially identical exposures.
- FIG. 12 is a flow chart diagramming one embodiment of the bidirectional warping portion of the HDR stitching of the process of Fig. 2, where a shorter exposure frame is considered, and adjacent preceding and subsequent longer exposure frames having substantially identical exposures are registered with it.
- FIG. 13 is a block diagram illustrating the bidirectional warping portion of the HDR stitching process of Fig. 2, where a shorter exposure frame is considered and the adjacent preceding and subsequent longer exposure frames have substantially different exposures.
- FIGS. 14A-B are a flow chart diagramming one embodiment of the bidirectional warping portion of the HDR stitching of the process of Fig. 2, where a shorter exposure frame is considered, and adjacent preceding and subsequent longer exposure frames having substantially different exposures are registered with it.
- FIG. 15 is a diagram illustrating the hierarchical homography process used in the bidirectional warping procedure of Figs. 8, 9, 12 and 14A-B in simplified form where just two levels and one quadrant are considered.
- FIGS. 16(a)-(c) are diagrams illustrating the geometry of the feathering procedure of the hierarchical homography process.
- FIGS. 17A-B are a flow chart diagramming one embodiment of the hierarchical homography process using the example of 2 hierarchical levels.
- FIGS. 18(a)-(b) are graphs associated with the radiance map computation procedure of Fig. 2, where the graph of Fig. 18(a) plots global weight vs. intensity, and the graph of Fig. 18(b) plots the modulation function based on radiance consistency of matched pixels.
- FIGS. 19A-C are a flow chart diagramming one embodiment of the radiance map computation procedure of Fig. 2 for the case where the frame under consideration is a longer exposure frame and the adjacent frames are shorter exposure frames.
- FIGS. 20A-C are a flow chart diagramming one embodiment of the radiance map computation procedure of Fig. 2 for the case where the frame under consideration is a shorter exposure frame and the adjacent frames are longer exposure frames.
- FIGS. 21(a)-(d) are a series of images showing an example of the radiance map computation procedure logic protocol, where a short exposure input frame, the resulting combined bidirectionally warped image, and the resulting uni-directionally warped left and right frames are depicted, respectively, with just those pixels that were chosen to contribute to the final radiance map visible.
- FIG. 22 is a series of images showing representative stills from a fish market scene, where in each scene the top left quadrant is a short exposure frame, the top right quadrant is a long exposure frame, the bottom left quadrant shows what the frame would look like for an exposure equal to the geometric mean of the short and long exposures, and the image in the bottom right quadrant is generated using the process of Fig. 2 according to the present invention.
- FIG. 23 is a series of images showing representative stills from a harbor scene, where in each scene the top left quadrant is a short exposure frame, the top right quadrant is a long exposure frame, the bottom left quadrant shows what the frame would look like for an exposure equal to the geometric mean of the short and long exposures, and the image in the bottom right quadrant is generated using the process of Fig. 2 according to the present invention.
- FIG. 24 is a series of images of a driving scene, which in the top row represent a portion of an input video with alternating short and long exposures, and in the bottom row show the portion of the HDR video generated from the input images in accordance with the present invention.
- Figure 1 illustrates an example of a suitable computing system environment 100.
- the computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
- the invention is operational with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110.
- Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120.
- the system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
- Computer 110 typically includes a variety of computer readable media.
- Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and nonremovable media.
- Computer readable media may comprise computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110.
- Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
- the system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132.
- RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120.
- Figure 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.
- the computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
- Figure 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media.
- removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
- the hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
- hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
- a user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161, commonly referred to as a mouse, trackball or touch pad.
- Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
- These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus 121, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
- a monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190.
- computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
- a camera 163 (such as a digital/electronic still or video camera, or film/photographic scanner) capable of capturing a sequence of images 164 can also be included as an input device to the personal computer 110. Further, while just one camera is depicted, multiple cameras could be included as input devices to the personal computer 110. The images 164 from the one or more cameras are input into the computer 110 via an appropriate camera interface 165.
- This interface 165 is connected to the system bus 121, thereby allowing the images to be routed to and stored in the RAM 132, or one of the other data storage devices associated with the computer 110.
- image data can be input into the computer 110 from any of the aforementioned computer-readable media as well, without requiring the use of the camera 163.
- the computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180.
- the remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in Figure 1.
- the logical connections depicted in Figure 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
- Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
- When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet.
- the modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism.
- program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device.
- Figure 1 illustrates remote application programs 185 as residing on memory device 181. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- the system and process according to the present invention involves generating a High Dynamic Range (HDR) video from a video image sequence captured while varying the exposure of each frame. In general, this is accomplished via the following process actions, as shown in the high-level flow diagram of Fig. 2:
- a) capturing a video image sequence while varying the exposure of each frame so as to alternate between frames exhibiting a shorter exposure and a longer exposure as a function of the pixel brightness distribution in preceding frames of the video (process action 200); b) for each frame of the video after a prescribed number of initiating frames, identifying the corresponding pixels between the frame under consideration and both preceding and subsequent neighboring frames (process action 202); c) for each of the corresponding pixel sets, identifying at least one pixel in the set that represents a trustworthy pixel and then employing the pixel color information associated with the trustworthy pixels in each set to compute a radiance value for that set of pixels, thus forming a radiance map representing a frame of the HDR video (process action 204); and, d) performing a tone mapping procedure to convert the radiance map into an 8-bit representation of the HDR frame that is suitable for rendering and display (process action 206).
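These four actions map onto a simple top-level loop. The following is a schematic sketch only; the camera object and the four stage callables are placeholders for the components detailed in the remainder of this description.

```python
def generate_hdr_video(camera, stages, n_init=2):
    """Top-level HDR video loop (schematic; stage functions are placeholders)."""
    compute_exposures, register, radiance_map, tonemap = stages
    frames = [camera.capture() for _ in range(n_init)]   # initiating frames
    hdr = []
    while camera.running():
        # a) choose the next short/long exposures from prior-frame statistics
        exp_s, exp_l = compute_exposures(frames[-2], frames[-1])
        frames.append(camera.capture(exp_s))
        frames.append(camera.capture(exp_l))
        # b) register each interior frame with its two neighbors,
        # c) fuse trusted radiance values, d) tone map for display
        k = len(frames) - 2
        warped = register(frames[k - 1], frames[k], frames[k + 1])
        hdr.append(tonemap(radiance_map(frames[k], warped)))
    return hdr
```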
- the auto gain control (AGC) of a typical video camera measures the brightness of the scene and computes an appropriate exposure.
- Most scenes have a greater dynamic range than can be captured by the camera's 8 bit per pixel sensor. Because of this, regardless of the exposure settings, some pixels will be saturated and some will be under exposed.
- the present HDR video system varies exposure settings on a per frame basis. The general idea is to sequence the settings between different values that appropriately expose dark and bright regions of the scene in turn. A post processing step, which will be described later, then combines these differently exposed frames.
- a conventional digital video camera having a programmable control unit was employed.
- the firmware of this camera was updated with a bank of four shutter (CCD integration time) and gain (ADC gain) registers.
- the camera does a round robin through the bank using a different register set at every frame time.
- the camera tags every frame with the current settings so that they can be used during the radiance map computation.
- all the current settings of the camera were tagged as metadata to each frame being captured.
- a real time AGC algorithm determines the next group of four settings.
- the exposure settings alternate between two different values. More particularly, the appropriate exposures are automatically determined from scene statistics, which are computed on a sub-sampled frame.
- the present HDR video system is designed so that the exposure ratio between long and short exposures is minimized while simultaneously allowing a larger range of scene radiances to be accommodated. This increases the number of pixels that are useful for matching in both frames.
- the first step in calculating exposure settings to be used in capturing subsequent frames is to compute an intensity histogram for each of a pair of immediately preceding, already captured frames (process action 500).
- the system uses these histograms along with several programmable constraints to compute the subsequent exposure setting for the same number of frames. These constraints are the maximum exposure ratio, the over exposed (saturated) target pixel count, and the under exposed (black) target pixel count. In tested embodiments of the present HDR video generation process, the maximum exposure ratio was set to 16, the over exposed target pixel count was set to 1 percent of the total number of pixels, and the under exposed target pixel count was also set to 1 percent of the total number of pixels.
- the maximum exposure ratio could be set within a range from about 1 to about 32.
- the over exposed target pixel count could be set within a range of about 1 to 5 percent, where the target count would be higher when the scene depicted in the frames under consideration is relatively dark.
- the under exposed target pixel count can also be set within a range of about 1 to 5 percent. In this latter case the target count would be set higher when the scene depicted in the frames under consideration is relatively bright.
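- To make the preceding concrete, the following Python sketch (illustrative only; the function and variable names are not from the patent) computes the per-frame statistics the AGC relies on: an intensity histogram plus the over exposed and under exposed pixel counts compared against their targets.

```python
import numpy as np

def exposure_statistics(gray_frame, black_level=16, sat_level=255):
    """Intensity histogram plus over/under exposed pixel counts for one frame.
    Pixels at sat_level count as saturated; pixels at or below black_level
    count as black."""
    hist = np.bincount(gray_frame.ravel(), minlength=256)
    over_exposed = int(hist[sat_level:].sum())
    under_exposed = int(hist[:black_level + 1].sum())
    return hist, over_exposed, under_exposed

def counts_within_targets(gray_frame, target_fraction=0.01):
    """True when both counts are under their targets (1 percent of the frame
    in the tested embodiment)."""
    _, over, under = exposure_statistics(gray_frame)
    n = gray_frame.size
    return over < target_fraction * n and under < target_fraction * n
```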
- Because a pair of previously captured frames is analyzed to compute the exposure settings for a like number of subsequent frames yet to be captured, a few frames (i.e., at least two) must be captured to initialize the process. These initial frames can be captured using the normal exposure setting feature of the camera (which will choose a "middle" exposure setting), or a prescribed set of exposure values could bootstrap the system.
- the prescribed number of frames analyzed will be assumed to be two. This is a good choice for typical applications. However, if it is known that the scene being captured will exhibit very large brightness ranges, such as a dark indoor scene looking out on a bright day, using just two exposures may not be adequate. Increasing the exposure gap between successive frames will capture the dynamic range better, but will make the image registration procedure that will be discussed later more brittle and will lead to increased image noise in the mid tones. Using more than two exposures is another option, but similar exposures (where registration has the best chance of success) are then temporally farther apart, again leading to potential registration and interpolation problems.
- the process continues by determining if the exposure settings associated with the last two frames captured are equal (e.g., the ratio of the long exposure setting value to the short exposure setting value is 1) (process action 502).
- If so, it is determined, for the shorter exposure frame, whether its over exposed pixel count (e.g., the number of pixels with an intensity value equal to 255 in a typical 8-bit representation of intensity, which are often referred to as saturated pixels) is less than the aforementioned over exposed pixel count target, and whether the under exposed pixel count of the longer exposure frame (e.g., the number of pixels with an intensity value equal to about 16 or less in a typical 8-bit representation of intensity, which are often referred to as black pixels) is less than the aforementioned under exposed target pixel count (process action 504). If so, a new exposure setting is chosen for the next pair of shorter and longer exposure frames to be captured, such that the histogram associated with the appropriate frame under consideration is centered within the range of pixel values (process action 506). This is accomplished using the following equation:
- exp_new = ( F_response^-1(128) / F_response^-1(x) ) · exp
- where F_response represents the response function of the camera used to capture the frames, and F_response^-1 its inverse
- F_response^-1(128) represents the inverse response of the camera at the center of the range of pixel values (assuming an 8-bit representation of 0-255 brightness levels)
- F_response^-1(x) represents the inverse response of the camera at the brightness level "x" (which in this case corresponds to the level where the aforementioned brightness histogram is currently centered)
- exp is the exposure level associated with the frame under consideration
- exp_new is the new exposure level.
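- A minimal sketch of this re-centering rule follows; it assumes access to an inverse camera response function, and the gamma-curve stand-in at the end is purely hypothetical.

```python
def recenter_exposure(exp_old, hist_center, inv_response):
    """Scale the exposure so the brightness level at the histogram's current
    center maps to mid-range (128), per the equation above. inv_response
    stands in for F_response^-1, mapping an 8-bit level to relative radiance."""
    return exp_old * inv_response(128) / inv_response(hist_center)

# Hypothetical inverse response for illustration: a gamma-2.2 camera.
inv_gamma = lambda level: (max(level, 1) / 255.0) ** 2.2
new_exp = recenter_exposure(exp_old=10.0, hist_center=90, inv_response=inv_gamma)
```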
- the revised exposure settings are then applied to capture the next two frames of the video (process action 508).
- If the settings are not equal, or if either target count is exceeded, it is next determined whether the over exposed count of the shorter exposure frame is much less than its target. If so, the short exposure setting value is increased by computing the new shorter exposure setting using the following equation:
- exp_new_short = ( F_response^-1(255) / F_response^-1(x_99%) ) · exp_short
- where F_response^-1(255) represents the inverse response of the camera at the brightness saturation level (assuming an 8-bit representation of 0-255 brightness levels)
- F_response^-1(x_99%) represents the inverse response of the camera at the brightness level "x_99%" (which is the highest level obtained among the first 99 percent of the pixels when ordered by brightness level starting with the smallest value)
- exp_short is the exposure level associated with the frame having the shorter exposure value
- exp_new_short is the new shorter exposure level.
- Whether or not the short exposure setting is changed, it is next determined in process action 514 whether the under exposed count of the longer exposure frame is much less than its target (e.g., only about 0.25 percent of the total number of pixels in the frame if the under exposed target count is set at about 1 percent). If not, then the long exposure setting value (i.e., the exposure time assigned for the long exposure frame of the pair of frames under consideration) is not changed. If, however, it is determined that the under exposed count is much less than its target, then the long exposure setting value is decreased (process action 516). This is accomplished by computing the new long exposure setting using the following equation:
- exp_new_long = ( F_response^-1(BlkValue) / F_response^-1(x_1%) ) · exp_long
- where F_response^-1(BlkValue) represents the inverse response of the camera at the brightness level associated with a black pixel (which is typically about 16 or less in an 8-bit representation of brightness levels)
- F_response^-1(x_1%) represents the inverse response of the camera at the brightness level "x_1%" (which is the highest level obtained among the first 1 percent of the pixels when ordered by brightness level starting with the smallest value)
- exp_long is the exposure level associated with the frame having the longer exposure value
- exp_new_long is the new longer exposure level.
- One way to find the x_1% value is to compute a cumulative histogram for the longer exposure frame.
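- A short sketch of this lookup, assuming the 8-bit histogram computed earlier; the same helper yields x_99% for the shorter exposure frame.

```python
import numpy as np

def brightness_percentile(hist, fraction):
    """Brightness level below which `fraction` of the pixels fall, read off a
    cumulative histogram (fraction=0.01 gives x_1%, fraction=0.99 gives x_99%)."""
    cumulative = np.cumsum(hist)
    return int(np.searchsorted(cumulative, fraction * cumulative[-1]))
```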
- Whether or not the long exposure setting is changed, it is next determined in process action 518 if the ratio of the exposure settings, changed or otherwise, for the long and short exposure frames (e.g., long exposure setting value/short exposure setting value) is greater than or equal to the aforementioned prescribed maximum allowed ratio (R). If so, the short and long exposure settings respectively assigned to the next two frames to be captured are chosen such that the over exposed and under exposed pixel counts of the shorter and longer exposure frames under consideration are balanced, while not going above the maximum ratio.
- This balancing is accomplished as follows. Essentially, a new over exposed pixel count (Sat_new) is computed first, from the histogram associated with the frame captured at the shorter exposure setting of the pair of frames under consideration, using the equation:
- Sat_new = ( Sat_old + Blk_old ) / 2
- where Sat_old is the over exposed pixel count of the shorter exposure frame and Blk_old is the under exposed pixel count of the longer exposure frame.
- the new over exposed count (Sat_new) is then used to determine a brightness value (x_Satnew) by finding Sat_new on the aforementioned cumulative histogram associated with the shorter exposure frame and reading the corresponding brightness level.
- the new exposure level for the short exposure frame that is about to be captured is then computed using the following equation:
- exp_new_short = ( F_response^-1(255) / F_response^-1(x_Satnew) ) · exp_short
- with the long exposure setting then assigned relative to this value so that the ratio of the two new settings does not exceed the prescribed maximum (R).
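- The following sketch pulls the balancing steps together. Pinning the long exposure at exactly max_ratio times the new short exposure is an assumption for illustration, not taken verbatim from the patent.

```python
import numpy as np

def balance_exposures(hist_short, sat_old, blk_old, exp_short, inv_response,
                      max_ratio=16):
    """Split the over/under exposed counts evenly, find the brightness level
    x_Satnew on the short frame's cumulative histogram, and re-expose so that
    level just saturates."""
    n = int(hist_short.sum())
    sat_new = (sat_old + blk_old) / 2.0
    cumulative = np.cumsum(hist_short)
    # level at which roughly sat_new pixels remain at or above
    x_satnew = int(np.searchsorted(cumulative, n - sat_new))
    exp_short_new = exp_short * inv_response(255) / inv_response(max(x_satnew, 1))
    return exp_short_new, max_ratio * exp_short_new  # long pinned at the cap (assumption)
```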
- If the ratio of the exposure settings is instead less than the prescribed maximum, a different procedure is employed. Namely, it is first determined whether the over exposed count of the shorter exposure frame is greater than the over exposed target count (process action 522). If so, the short exposure setting value is decreased (process action 524). In this case, a prescribed decrease "bump" schedule is employed. This involves decreasing the short exposure setting by a first prescribed amount (e.g., 10% in tested embodiments of the procedure) if this value was not decreased in connection with the processing of the shorter exposure frame immediately preceding the pair currently under consideration. If, however, a decrease was applied to the aforementioned preceding short exposure frame, then a different decrease amount is applied.
- the updated decrease amount is computed as follows. If the over exposed count did not change or increased relative to the previous frame, the decrease amount is doubled relative to the previous amount (e.g., 10%, 20%, 40%, ... to a maximum of 100% in the tested embodiment). If the over exposed count did decrease but has not yet reached the target count, the short exposure setting is again decreased by the most recently used decrease amount. Finally, if the target count has been reached, the "bump" schedule is terminated. It is noted that in order to implement the foregoing decrease procedure, each consecutive decrease event must be noted and stored for use in determining the decrease amount in a subsequent iteration of the procedure.
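- The same schedule governs both the decrease and the increase cases, so a single helper can express it; the function below is an illustrative sketch with hypothetical names.

```python
def next_bump(prev_step, prev_count, count, target):
    """One iteration of the 'bump' schedule. Returns the fractional change to
    apply to the exposure setting next time (0 terminates the schedule).
    prev_step is None when no change was applied on the previous
    same-exposure frame."""
    if count <= target:
        return 0.0                       # target reached: terminate
    if prev_step is None:
        return 0.10                      # first bump: 10%
    if count >= prev_count:
        return min(2 * prev_step, 1.0)   # no progress: double, capped at 100%
    return prev_step                     # progress, not at target: same step
```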
- Once the short exposure setting has been decreased, or if it was determined that the over exposed count did not exceed the target count, it is next determined if the under exposed count of the longer exposure frame exceeds its target count (process action 526). If so, then the long exposure setting value is increased (process action 528).
- a similar procedure is employed in that a prescribed increase "bump" schedule is employed. This involves increasing the long exposure setting by a first prescribed amount (e.g., 10% in tested embodiments of the procedure) if this value was not increased in connection with the processing of the longer exposure frame immediately preceding the pair currently under consideration. If, however, an increase was applied to the aforementioned preceding long exposure frame, then a different increase amount is applied.
- the updated increase amount is computed as follows. If the under exposed count did not change or increased relative to the previous frame, the increase amount is doubled relative to the previous amount (e.g., 10%, 20%, 40%, ... to a maximum of 100% in the tested embodiment). If the under exposed count did decrease but has not yet reached the target count, the long exposure setting is again increased by the most recently used increase amount. Finally, if the target count has been reached, the "bump" schedule is terminated. Accordingly, here too, each consecutive increase event must be noted and stored for use in determining the increase amount in a subsequent iteration of the procedure.
- Once the long exposure setting has been increased, or if it was determined that the under exposed count did not exceed the target count, it is next determined in process action 530 if the aforementioned ratio of exposure settings equals or exceeds the prescribed maximum ratio (R). If so, the short and long exposure settings respectively assigned to the next two frames to be captured are chosen to balance these two counts while not going above the maximum ratio, using the balancing procedure described above (process action 532). The revised exposure settings are then applied to capture the next two frames of the video (process action 508). If, however, it is determined that the ratio of the exposure settings for the long and short exposure frames does not equal or exceed the prescribed maximum (R), the exposure settings are not changed any further and these unchanged settings are used to capture the next pair of frames of the video (process action 508).
- the foregoing process is repeated for each pair of frames captured in the video.
- the frames of the video will alternate between a short exposure frame and a long exposure frame throughout.
- the final HDR video is generated from this precursor video using the temporally varying exposure frames, as will be described in the next section.
- HDR stitching Since the frames of the above-described precursor video are captured with temporally varying exposures, generating an HDR frame at any given time requires the transfer of pixel color information from neighboring frames. This, in turn, requires that the pixel correspondences across different frames be highly accurate. The process of computing the pixel correspondences, transferring color information from neighboring frames and extracting the HDR image is referred to as HDR stitching.
- the precursor video contains alternating long and short exposure frames.
- the first step in HDR stitching is to generate a set of long and short exposure frames at every instant so that a radiance map can be computed from this set. This is preferably accomplished by synthesizing the missing exposures using a warping process.
- the HDR stitching process generates four warped frames.
- the four warped frames are: a bi-directionally warped (interpolated) short exposure frame from the left neighbor (S_k^F*), a bi-directionally warped (interpolated) short exposure frame from the right neighbor (S_k^B*), a uni-directionally warped short exposure left frame (S_k^F0), and a uni-directionally warped short exposure right frame (S_k^B0).
- the four warped frames are: a bi-directionally warped (interpolated) long exposure frame from the left neighbor (L_k^F*), a bi-directionally warped (interpolated) long exposure frame from the right neighbor (L_k^B*), a uni-directionally warped long exposure left frame (L_k^F0), and a uni-directionally warped long exposure right frame (L_k^B0).
- the redundancy represented by these warped frames is later exploited during the radiance map computation to increase the tolerance of the system to registration errors.
- in the notation used here, S refers to a short exposure frame
- L refers to a long exposure frame
- a superscript F refers to a forward warped frame and a superscript B refers to a backward warped frame
- a superscript 0 denotes a uni-directionally warped frame, while a superscript * denotes a bi-directionally warped (interpolated and refined) frame
- the subscript k refers to the time index of the frame.
- the preceding and subsequent short exposure frames are registered with the long exposure frame under consideration using conventional forward and backward matching and warping methods (process action 602). However, this is done only after boosting the intensity of the preceding and subsequent frames to substantially match the intensity range of the long exposure frame (process action 600).
- the warped preceding and subsequent frames are the aforementioned S_k^F0 and S_k^B0 frames, respectively. It is noted that the short exposure frames are boosted in intensity to match the long exposure frame; that is, each pixel intensity or color is multiplied by a factor greater than 1. It is preferable to boost the short exposure frames rather than downscale the long exposure frame to prevent mismatch in pixel intensities in the saturated regions of the long exposure frame. The boosted images are only used to compute the flow field, as will be explained. They are not used to compute the radiance map due to the noise introduced in the boosting process.
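- The boosting step itself can be sketched as follows. Using a linear gain equal to the exposure ratio is a simplifying assumption made here for illustration; mapping through the camera response curve would be more faithful.

```python
import numpy as np

def boost_short_frame(short_frame, exp_short, exp_long):
    """Boost a short exposure frame toward the intensity range of the long
    exposure frame, for flow computation only."""
    gain = exp_long / exp_short
    boosted = short_frame.astype(np.float32) * gain
    return np.clip(boosted, 0, 255).astype(np.uint8)
```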
- the aforementioned bidirectionally-warped frames S_k^F* and S_k^B* are computed using all three input frames from the precursor video, as illustrated diagrammatically in Fig. 7 and process-wise in Fig. 6.
- the HDR stitching process begins as follows.
- the bidirectional flow fields (forward warp f_{k,F} (704) for S_{k-1} (700) and backward warp f_{k,B} (706) for S_{k+1} (702)) are computed using a gradient based technique which will be described shortly (process action 604). These flow fields are then used to warp the respective images to produce two images, namely S_k^F (708) and S_k^B (710), in process action 606. These images are combined in process action 608 to produce an intermediate image (i.e., the aforementioned bidirectionally-warped interpolated frame S_k* (712)). This intermediate image should be close in appearance to L_k (714).
- the intensity of this intermediate image is next boosted to substantially match L_k (714), producing the boosted frame (716). A hierarchical global registration technique (which will also be described shortly) is then employed in process action 612 to compute a refining flow f_k* (718) that best maps the boosted intermediate image (716) to L_k (714).
- composite flow fields, formed by concatenating the bidirectional flow fields with the refining flow, are then used to warp S_{k-1} and S_{k+1} to produce S_k^F* and S_k^B*, respectively. The images L_k, S_k^F0, S_k^B0, S_k^F*, and S_k^B* are then used together to compute an HDR image at time k, as will be explained shortly.
- For the case where the preceding and subsequent exposures are different from one another, a modified procedure is employed to compute S_k^F* and S_k^B*.
- in this modified procedure, the intensity of the preceding or subsequent frame that has the lower exposure is boosted to match the other side's image before S_k* is computed.
- Thus, S_k* will be computed using S_{k-1} and S_{k+1Boost} if exp(S_{k-1}) > exp(S_{k+1}), or using S_{k+1} and S_{k-1Boost} if exp(S_{k-1}) < exp(S_{k+1}).
- As before, S_{k-1} is warped to produce S_k^F* and S_{k+1} is warped to produce S_k^B*, using the composite of the appropriate bidirectional flow field and the refining flow. It is noted that in this case S_k^F* and S_k^B* will have different exposures associated with them.
- the HDR stitching process begins as follows.
- the preceding and subsequent frames are boosted in intensity to match the current frame that has a longer exposure (process action 900). These frames are then registered with the current frame using conventional forward and backward warping methods (process action 902).
- the warped preceding and subsequent frames are the aforementioned S_k^F0 and S_k^B0 frames, respectively.
- the bidirectional flow fields (forward warp f_{k,F} (808) for S_{k-1} (800) and backward warp f_{k,B} for S_{k+1}) are computed using the aforementioned gradient based technique. These flow fields are then used to warp the respective images, which are combined to produce the intermediate bidirectionally-warped interpolated frame S_k* (816).
- the individual pixel intensities of S_k* (816) are next boosted to match the corresponding pixels of L_k (818) in process action 916 to produce the boosted image (820).
- the images L_k, S_k^F0, S_k^B0, S_k^F*, and S_k^B* are used together to compute an HDR image at time k.
- S_k is first boosted to match the intensity range of L_{k-1} (process action 1000) prior to forward warping to produce L_k^F0 (process action 1002), and S_k is boosted to match the intensity range of L_{k+1} (process action 1004) prior to backward warping to produce L_k^B0 (process action 1006).
- the aforementioned bidirectionally-warped frame L_k* is computed using all three input frames from the precursor video, as illustrated diagrammatically in Fig. 11 and process-wise in Fig. 12, for the case where the adjacent exposures (associated with L_{k-1} (1100) and L_{k+1} (1102) in this case) are identical. More particularly, the HDR stitching process begins as follows.
- the bidirectional flow fields (forward warp f_{k,F} (1104) for L_{k-1} (1100) and backward warp f_{k,B} (1106) for L_{k+1} (1102)) are computed using the aforementioned gradient based technique (process action 1200). These flow fields are then used to warp the respective images to produce two images, namely L_k^B (1108) and L_k^F (1110), in process action 1202. The images are combined in process action 1204 to produce an intermediate image (i.e., the aforementioned bidirectionally-warped frame L_k* (1112)).
- This intermediate image should be close in appearance to the aforementioned intensity-boosted version of S k (1114).
- the aforementioned hierarchical global registration technique is employed in process action 1208 to compute the refining flow f_k* (1118) that best maps L_k* (1112) to S_kBoost (1116).
- the images S_k, L_k^F0, L_k^B0, L_k^F*, and L_k^B* are used together to compute an HDR image at time k, as will be explained shortly.
- For the case where the adjacent long exposures are different from one another, L_k^F* and L_k^B* will have different exposures associated with them.
- the second phase of the HDR stitching process is as follows.
- the bidirectional flow fields (forward warp f_{k,F} (1308) for L_{k-1} (1300) and backward warp f_{k,B} (1310) for L_{k+1} (1302)) are computed using the aforementioned gradient based technique (process action 1406). These flow fields are then used to warp the respective images to produce two images, namely L_k^B(Boost) (1312) and L_k^F(Boost) (1314) (where the (Boost) subscript designator indicates the image may be based on an intensity boosted frame or not), in process action 1408, one of which will be based on a boosted version of the original frame from the precursor video. These images are combined in process action 1410 to produce an intermediate image (i.e., the aforementioned bidirectionally-warped frame L_k* (1316)). This intermediate image should be close in appearance to an intensity boosted version of S_k (1318) as described below.
- the composite flow fields are then used to warp L_{k-1} (1300) to produce L_k^F* (1330), and to warp L_{k+1} (1302) to produce L_k^B* (1328), in process action 1418.
- finally, S_k, L_k^F0, L_k^B0, L_k^F*, and L_k^B* are used to compute an HDR image at time k.
- the bidirectional flow fields f_{k,F} and f_{k,B} are computed using a gradient based technique.
- Frame interpolation involves synthesizing the missing exposures at intermediate times using information from a pair of adjacent frames. To do this, a dense motion match is computed between equal exposures (e.g., S_{k-1} and S_{k+1}) and this is used to warp pixel information forwards and backwards along the motion trajectories to produce an intermediate image (e.g., S_k*). This procedure is also used to generate the missing long exposure frames.
- the present motion estimation procedure consists of two stages: First, the two frames are globally registered by estimating an affine transform that maps one onto the other. Then, a gradient based optical flow is used to compute a dense motion field that forms a local correction to the global transform.
- the bidirectional field is computed at the intermediate time k. This avoids the hole filling problems of forward warping when generating each interpolated frame.
- composite vectors are obtained that point into the subsequent frame k+1 and the preceding frame k-1. These vectors are each the sum of affine and local components.
- the affine component is derived from the global warping parameters, re-scaled to warp either from k-1 to k or from k+1 to k, and the local component is generated by the symmetrical optical flow procedure.
- a variant of the Lucas and Kanade [4] technique is used in a Laplacian pyramid framework.
- Techniques to handle degenerate flow cases can also be added by computing the eigenvalues of the matrix of summed partial derivatives and determining if it is ill-conditioned. Rather than simply warping one source image progressively towards the other at each iteration, both source images are warped towards the output time k and the residual flow vectors are estimated between these two warped images. As the residuals are accumulated down the pyramid, they give rise to a symmetric flow field centered at time k.
- This technique is augmented by including the global affine flow during the warping so the accumulated residuals are always represented in terms of a symmetrical local correction to this asymmetric global flow.
- bicubic warping is used to transfer pixels along the appropriate vectors from times k-1 and k+1 to each location in the output frame.
- the forward and backward warped pixels are averaged if they are available. If only one is available, that pixel value is used as the value for the corresponding location in the interpolated frame. If both source pixels are outside the frame, the two pixels obtained using a zero motion vector are averaged together.
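- The per-pixel combination rule just described can be sketched as follows. Marking an out-of-frame warp with NaN is an illustrative convention, not a detail from the patent.

```python
import numpy as np

def combine_warped(forward, backward, source_prev, source_next):
    """Average forward- and backward-warped pixels when both landed in-frame,
    take whichever one did otherwise, and fall back to averaging the
    zero-motion pixels from the two source frames when neither did."""
    out = np.where(np.isnan(forward), backward, forward)
    both = ~np.isnan(forward) & ~np.isnan(backward)
    out[both] = 0.5 * (forward[both] + backward[both])
    neither = np.isnan(forward) & np.isnan(backward)
    out[neither] = 0.5 * (source_prev[neither] + source_next[neither])
    return out
```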
- a hierarchical global registration technique is employed to compute the refining flow f_k*.
- This novel technique will now be described.
- the technique is used to refine the registration between the interpolated frame (i.e., L_k*, in all cases) and the actual frame (i.e., either a long exposure frame L_k or the intensity boosted version of a short exposure frame, S_kBoost).
- Hierarchical homography To accomplish the foregoing task what will be referred to as a hierarchical homography will be employed.
- the idea of hierarchical homography is shown in Fig. 15, which is simplified to illustrate two levels and one quadrant only.
- at level 0 (1500), full frame registration is performed to find the best 2D perspective transform (i.e., homography) between two input images using conventional methods, thus producing homography H_0 (1502).
- the reference image (Image 1 (1504)) is then broken up into overlapping quadrants 1506 shown in dotted lines. A 5 to 20 percent overlap is reasonable. In tested versions of the invention, a 10 percent overlap was used with success.
- the reference image 1504 is either L_k or S_kBoost in the HDR video generating process.
- If there is insufficient intensity variation within the quadrant 1506 (which was set at a threshold of about 10 gray levels in tested versions of the present process), it is left alone. Otherwise, its global motion is refined by performing a full image registration between that quadrant 1506 of the reference image 1504 and the appropriately sampled counterpart region 1508 from the second image 1510 (which is L_k* in the HDR video generating process) to find the best 2D perspective transform (i.e., homography) between the two regions.
- the boundary of the sub-image 1508 from the second image 1510 is computed based on H_0 (1502). In the example shown in Fig. 15, this refined transform between the sub-image pair 1506 and 1508 is referred to as H_{1,1} (1512), where the first subscript refers to the level and the second subscript refers to the quadrant number (i.e., 1-4).
- the refinement procedure is repeated for each quadrant meeting the aforementioned intensity variation test.
- the foregoing refinement procedure could be extended further into additional levels to further refine the registration of the images.
- a full image registration is then performed between the level 1 quadrant of the reference image under consideration and the appropriately sampled counterpart region from the second image to find the best 2D perspective transform (i.e., homography) between the two regions, which would be designated as H_{2,i}.
- the refinement procedure is repeated for all the levels and all the quadrants of each level.
- the resulting full image refining flow f_k* is then computed using the local homographies computed between each region of the images.
- for pixels that fall within the overlap regions between quadrants, their flows are feathered (i.e., weighted averaged) to minimize flow discontinuities.
- Figs. 16(a)-(c) assume two refinement levels and that all the quadrants passed the intensity variation test.
- for pixels in the exclusive (non-overlapping) portion of each quadrant, the corresponding local homography (H_{1,1}, H_{1,2}, H_{1,3}, or H_{1,4}) is used to compute the flow for the affected pixels in the conventional manner.
- a 1D linear weighting technique is employed to "feather" the flow map for those pixels contained in an overlap region between two quadrants. Essentially, the closer the pixel is to a non-overlapping region, the more it is weighted by the homography of that portion.
- This weighting technique will be described in reference to Fig. 16(b), which shows an enlarged view of the overlap region 1602.
- the overlap distance m is used, along with the distance l defining the shortest distance from one of the borders of the overlap region to the pixel under consideration p_i, to establish a blending weight α = l/m, which is in turn used to compute the flow for that pixel in the full image refining flow field f_k*. More particularly, using the example shown in Fig. 16(b), where the distance l is measured as the shortest distance between the border of the overlap region adjacent the exclusive H_{1,2} region and p_i, the two mapped points x_1 = H_{1,1} p_i and x_2 = H_{1,2} p_i are blended as
- x = α (H_{1,1} p_i) + (1 − α)(H_{1,2} p_i)
- so that the flow varies smoothly from the H_{1,1} mapping at one border of the overlap to the H_{1,2} mapping at the other.
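- A minimal sketch of this 1D feathering, with illustrative names; l is assumed to be measured from the border adjacent to H_b's exclusive region.

```python
import numpy as np

def feathered_point(H_a, H_b, p, l, m):
    """Map pixel p through both local homographies and blend with alpha = l/m."""
    def apply(H, pt):
        q = H @ np.array([pt[0], pt[1], 1.0])
        return q[:2] / q[2]              # perspective divide
    alpha = l / m
    return alpha * apply(H_a, p) + (1.0 - alpha) * apply(H_b, p)
```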
- a 2D linear weighting technique is employed to feather the flow map for those pixels contained in that region.
- the overlap distances m and m′ are used, along with the distance l defining the distance from a first of the borders of the central overlap region to the pixel under consideration p_i, and the distance l′ defining the distance from one of the borders adjacent to the first border to p_i, to establish the weights α = l/m and α′ = l′/m′. The flows computed from the four local homographies are then blended bilinearly using these weights, in direct analogy to the 1D case.
- the procedure begins by computing homography H_0 between the reference image L_k or S_kBoost (as the case may be) and L_k* (process action 1700).
- the reference image is then segmented into overlapping quadrants (process action 1702), and a previously unselected one of the quadrants is selected (process action 1704). It is next determined whether the intensity variation among the pixels within the selected quadrant exceeds a prescribed variation threshold (process action 1706).
- H_0 is the parent homography.
- a local homography H_{1,i} (where i refers to the selected quadrant index) is computed between the selected quadrant of the reference image and the counterpart region identified in L_k* (process action 1710).
- H_{1,i} is a child homography. It is next determined if the per-pixel registration error using the child homography is less than the per-pixel registration error using the parent homography for the same quadrant (process action 1712). If so, the computed child homography is assigned to the selected quadrant (process action 1714). Otherwise, if it is determined the per-pixel registration error using the child homography is not less than the per-pixel registration error using the parent homography, or if in process action 1706 it was determined that the intensity variation among the pixels within the selected quadrant did not exceed the prescribed variation threshold, the parent homography is assigned to the selected quadrant (process action 1716), i.e., the child inherits from its parent.
- It is next determined whether there are any remaining previously unselected quadrants (process action 1718). If there are more quadrants to process, then process actions 1704 through 1718 are repeated. However, if it is found there are no more quadrants to process, then in process action 1720, the full image refining flow field f_k* is computed using the local homographies H_{1,i} computed between each corresponding region of the images, or the global homography H_0, depending on which is assigned to each quadrant. Finally, in process action 1722, the flows in f_k* associated with pixels falling in one of the overlap regions of the quadrants are feathered to minimize flow discontinuities.
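- One level of this quadrant refinement might be sketched as follows. The `register` and `registration_error` helpers are assumed black boxes, and the standard deviation stands in for the patent's intensity variation test.

```python
import numpy as np

def refine_quadrants(reference, target, H0, register, registration_error,
                     min_intensity_var=10):
    """Each overlapping quadrant either inherits the parent homography H0 or
    receives a child homography, kept only when it lowers the per-pixel
    registration error."""
    h, w = reference.shape
    qh, qw = int(h * 0.55), int(w * 0.55)      # ~10% overlap between quadrants
    corners = [(0, 0), (0, w - qw), (h - qh, 0), (h - qh, w - qw)]
    assigned = []
    for r, c in corners:
        quad = reference[r:r + qh, c:c + qw]
        if quad.std() < min_intensity_var:     # flat region: keep the parent
            assigned.append(H0)
            continue
        H_child = register(quad, target, H0)   # full registration on the quadrant
        if registration_error(H_child, quad, target) < \
           registration_error(H0, quad, target):
            assigned.append(H_child)
        else:
            assigned.append(H0)                # child inherits from its parent
    return assigned
```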
- the radiance map recovery employed in the HDR stitching is accomplished as follows for the case where the input image is a long exposure frame and the adjacent frames are short exposure frames.
- pixels in the input image L_k that fall in an intensity range below the prescribed maximum are identified (process action 1906) and a previously unselected one of them is selected (process action 1908). These pixels represent regions of the scene that could be reasonably exposed in either a long or short exposure frame.
- the radiance values from S_k^F0 and S_k^B0 are compared with the corresponding pixel in L_k (process action 1910). If it is found that the radiance values from both S_k^F0 and S_k^B0 are below a prescribed maximum allowed noise variance from the radiance value of L_k (process action 1912), then a weighted average of all three is computed and used as the radiance value for that pixel location in the final radiance map (process action 1914).
- the weighted average radiance is computed as follows:
- R = [ f_M(|R_F − R_c|) · R_F + R_c + f_M(|R_B − R_c|) · R_B ] / [ f_M(|R_F − R_c|) + 1 + f_M(|R_B − R_c|) ]    (9)
- where R denotes the radiance of a pixel and f_M() is a modulation function, defined below, that down-weights warped radiance values that deviate from the current frame's value.
- the subscripts c, F, and B refer to pixels in the current, left warped, and right warped radiance images respectively.
- the current image is L_k
- the left warped image is S_k^F0
- the right warped image is S_k^B0.
- f_M() is defined by
- f_M(δ) = 2(δ/δ_max)^3 − 3(δ/δ_max)^2 + 1 for δ < δ_max, and f_M(δ) = 0 otherwise
- δ_max is a user specified parameter that represents the aforementioned maximum allowed noise variance. In tested versions of the present radiance map recovery procedure, the maximum allowed noise variance was set to 16 intensity levels. An example of the modulation function is plotted in the graph shown in Fig. 18(b).
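- A sketch of the modulation function follows; the cubic (Hermite) falloff written here is an assumption consistent with the shape described for Fig. 18(b), falling smoothly from 1 at zero deviation to 0 at δ_max.

```python
import numpy as np

def modulation(delta, delta_max=16.0):
    """Weight for a warped radiance sample given its deviation `delta` from
    the current frame's radiance value."""
    t = np.clip(np.abs(delta) / delta_max, 0.0, 1.0)
    return 2.0 * t**3 - 3.0 * t**2 + 1.0
```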
- It is next determined in process action 1924 if there are any pixels identified as having an intensity below the maximum intensity threshold that have not yet been selected and processed. If there are, process actions 1908 through 1924 are repeated. Otherwise the process ends.
- the radiance map recovery employed in the HDR stitching is accomplished as follows.
- These radiance images are denoted by S_k, L_k^F0, L_k^B0, L_k^F*, and L_k^B*, respectively (process action 2000).
- pixels in the input image S_k that fall in an intensity range above a prescribed minimum (e.g., 16 in tested versions of the present radiance map recovery procedure) are identified (process action 2002) and a previously unselected one of them is selected (process action 2004). These pixels represent regions of the scene that could be reasonably exposed in either a long or short exposure frame.
- the radiance values for the pixel from L_k^F0 and L_k^B0 are compared with the corresponding pixel in S_k (process action 2006).
- If it is found that the radiance values from both L_k^F0 and L_k^B0 are within the maximum allowed noise variance of the radiance value of S_k (process action 2008), a weighted average of all three is computed and used as the radiance value for that pixel location in the final radiance map (process action 2010).
- the weighted average radiance is computed as described above using Eq (9).
- If, however, it is found that both the radiance values from L_k^F0 and L_k^B0 are outside the maximum allowed noise variance (process action 2016), both values are thrown out in accordance with Eq. (10), and the radiance value used in the final radiance map for that pixel location is taken from S_k directly, in accordance with Eq. (9) (process action 2018).
- It is next determined in process action 2020 if there are any pixels identified as having an intensity above the minimum intensity threshold that have not yet been selected and processed. If there are, process actions 2004 through 2020 are repeated. If not, the process continues as follows.
- pixels in the input image S_k that are below the aforementioned minimum intensity value are identified (process action 2022). These pixels are assumed to produce poor registration with adjacent frames. Instead of using these values in the final radiance map, values from the bidirectionally warped frames L_k^F* and L_k^B* are employed.
- Figs. 21(a)-(d) show, for a short exposure input frame and for the bidirectionally warped (created by averaging the values from L_k^F* and L_k^B*), left, and right warped frames derived from the neighboring long exposure frames, respectively, those pixels that were chosen to contribute to the final radiance map.
- Tone mapping is used to convert floating point radiance maps into an 8-bit representation suitable for rendering in typical systems. This process must reduce the dynamic range of each frame while also maintaining a good contrast level for both brightly and darkly illuminated regions. In addition, there must be consistency of the transform among captured views so that there are no temporal artifacts such as flickering.
- the present HDR video generation system makes use of a modified version of the tonemapper presented by [8], which is based on the photographic technique of dodging and burning.
- the tonemapping process begins by converting the radiance image to CIE space via conventional methods and recovering the chromaticity coordinates. The luminance image is then processed to compress the dynamic range. Finally, the chrominance is re-inserted and the CIE space image is converted to produce the final byte-range RGB image.
- certain global parameters have to be set to control the overall brightness balance. Essentially, a statistical analysis of the input image being tonemapped is performed to decide how to set these global parameters. While this process works well for tonemapping images, its direct application to the present HDR video generation system would be problematic as flickering could result from the fact that each frame would be analyzed in isolation.
- the present temporally-adapted tonemapper includes computing the average and maximum luminances (i.e., the aforementioned global parameters), which control the transfer function that provides a good initial luminance mapping, using information from both the frame under consideration and the previous frame.
- the log average luminance is given by
- L̄_w = exp( (1/N) Σ_{i∈F} Σ_{x∈i} log(ε + L_i(x)) )
- where N is the total number of pixels in both frames, ε is a small value that prevents taking the log of zero, and L_i(x) is the luminance at pixel x of frame i
- F is the causal temporal neighborhood consisting of the frames at times k-1 and k.
- the maximum luminance is determined by considering pixels in both frames.
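- A minimal sketch of these temporally-adapted statistics, assuming per-frame luminance arrays; the ε guard is standard for this log-average form.

```python
import numpy as np

def temporally_adapted_stats(lum_prev, lum_curr, eps=1e-6):
    """Log average and maximum luminance pooled over the causal neighborhood
    (frames k-1 and k), so the tone mapping transfer function varies smoothly
    from frame to frame."""
    both = np.concatenate([lum_prev.ravel(), lum_curr.ravel()])
    log_avg = float(np.exp(np.mean(np.log(eps + both))))
    return log_avg, float(both.max())
```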
- Figs. 22 and 23 show representative stills from the fish market and harbor scenes.
- the top left quadrant is a short exposure frame
- the top right quadrant is a long exposure frame.
- the bottom left quadrant shows what the frame would look like for an exposure equal to the geometric mean of the short and long exposures. This is reasonable, given that radiance is normally handled in logarithmic space.
- the image in the bottom right quadrant is generated using our method.
- Driving scene The results for the driving scene can be seen in Fig. 24.
- the top row shows a portion of the input video sequence with its alternating shorter and longer exposures, while the bottom row shows the HDR video frames generated from these frames.
- the driver drives through a busy street at about 25 mph. This was a particularly difficult scene because occasionally there is large frame to frame displacement due to the fast motion of the driver's hand. Our optical flow algorithm sometimes fails for such large motions, but this problem could be alleviated using a higher frame rate camera.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006509622A JP4397048B2 (en) | 2003-04-29 | 2004-04-01 | System and method for generating high dynamic range video |
EP04749672A EP1618737A4 (en) | 2003-04-29 | 2004-04-01 | A system and process for generating high dynamic range video |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/425,338 | 2003-04-29 | ||
US10/425,338 US6879731B2 (en) | 2003-04-29 | 2003-04-29 | System and process for generating high dynamic range video |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2004098167A2 true WO2004098167A2 (en) | 2004-11-11 |
WO2004098167A3 WO2004098167A3 (en) | 2005-04-28 |
Family
ID=33309678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2004/010167 WO2004098167A2 (en) | 2003-04-29 | 2004-04-01 | A system and process for generating high dynamic range video |
Country Status (7)
Country | Link |
---|---|
US (4) | US6879731B2 (en) |
EP (1) | EP1618737A4 (en) |
JP (1) | JP4397048B2 (en) |
KR (1) | KR101026577B1 (en) |
CN (1) | CN100524345C (en) |
TW (1) | TWI396433B (en) |
WO (1) | WO2004098167A2 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8537893B2 (en) | 2006-01-23 | 2013-09-17 | Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. | High dynamic range codecs |
WO2014093048A1 (en) * | 2012-12-13 | 2014-06-19 | Google Inc. | Determining an image capture payload burst structure |
WO2014093042A1 (en) * | 2012-12-13 | 2014-06-19 | Google Inc. | Determining an image capture payload burst structure based on metering image capture sweep |
WO2014099326A1 (en) * | 2012-12-20 | 2014-06-26 | Google Inc. | Determining image alignment failure |
US8866928B2 (en) | 2012-12-18 | 2014-10-21 | Google Inc. | Determining exposure times using split paxels |
US8995784B2 (en) | 2013-01-17 | 2015-03-31 | Google Inc. | Structure descriptors for image processing |
US9066017B2 (en) | 2013-03-25 | 2015-06-23 | Google Inc. | Viewfinder display based on metering images |
US9077913B2 (en) | 2013-05-24 | 2015-07-07 | Google Inc. | Simulating high dynamic range imaging with virtual long-exposure images |
US9100589B1 (en) | 2012-09-11 | 2015-08-04 | Google Inc. | Interleaved capture for high dynamic range image acquisition and synthesis |
US9615012B2 (en) | 2013-09-30 | 2017-04-04 | Google Inc. | Using a second camera to adjust settings of first camera |
US9686537B2 (en) | 2013-02-05 | 2017-06-20 | Google Inc. | Noise models for image processing |
US10742895B2 (en) | 2012-07-26 | 2020-08-11 | DePuy Synthes Products, Inc. | Wide dynamic range using monochromatic sensor |
Families Citing this family (300)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7738015B2 (en) | 1997-10-09 | 2010-06-15 | Fotonation Vision Limited | Red-eye filter method and apparatus |
JP3719590B2 (en) * | 2001-05-24 | 2005-11-24 | 松下電器産業株式会社 | Display method, display device, and image processing method |
US7092584B2 (en) | 2002-01-04 | 2006-08-15 | Time Warner Entertainment Company Lp | Registration of separations |
US7133563B2 (en) * | 2002-10-31 | 2006-11-07 | Microsoft Corporation | Passive embedded interaction code |
US6879731B2 (en) * | 2003-04-29 | 2005-04-12 | Microsoft Corporation | System and process for generating high dynamic range video |
JP4392492B2 (en) * | 2003-06-02 | 2010-01-06 | 国立大学法人静岡大学 | Wide dynamic range image sensor |
US8698924B2 (en) * | 2007-03-05 | 2014-04-15 | DigitalOptics Corporation Europe Limited | Tone mapping for low-light video frame enhancement |
US8264576B2 (en) | 2007-03-05 | 2012-09-11 | DigitalOptics Corporation Europe Limited | RGBW sensor array |
US7639889B2 (en) | 2004-11-10 | 2009-12-29 | Fotonation Ireland Ltd. | Method of notifying users regarding motion artifacts based on image analysis |
US8989516B2 (en) | 2007-09-18 | 2015-03-24 | Fotonation Limited | Image processing method and apparatus |
US8199222B2 (en) * | 2007-03-05 | 2012-06-12 | DigitalOptics Corporation Europe Limited | Low-light video frame enhancement |
US8417055B2 (en) | 2007-03-05 | 2013-04-09 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US7596284B2 (en) * | 2003-07-16 | 2009-09-29 | Hewlett-Packard Development Company, L.P. | High resolution image reconstruction |
KR100810310B1 (en) * | 2003-08-29 | 2008-03-07 | 삼성전자주식회사 | Device and method for reconstructing picture having illumination difference |
US7492375B2 (en) * | 2003-11-14 | 2009-02-17 | Microsoft Corporation | High dynamic range image viewing on low dynamic range displays |
US7778466B1 (en) * | 2003-12-02 | 2010-08-17 | Hrl Laboratories, Llc | System and method for processing imagery using optical flow histograms |
US7583842B2 (en) * | 2004-01-06 | 2009-09-01 | Microsoft Corporation | Enhanced approach of m-array decoding and error correction |
US7263224B2 (en) * | 2004-01-16 | 2007-08-28 | Microsoft Corporation | Strokes localization by m-array decoding and fast image matching |
US7649539B2 (en) * | 2004-03-10 | 2010-01-19 | Microsoft Corporation | Image formats for video capture, processing and display |
US7317843B2 (en) * | 2004-04-01 | 2008-01-08 | Microsoft Corporation | Luminance correction |
US7463296B2 (en) * | 2004-04-01 | 2008-12-09 | Microsoft Corporation | Digital cameras with luminance correction |
US7330586B2 (en) * | 2004-10-12 | 2008-02-12 | Seiko Epson Corporation | Low-light exposure modes for digital photo sensors with free-running shutters |
US20060088225A1 (en) | 2004-10-26 | 2006-04-27 | The Secretary Of State For The Home Department | Comparison |
US8355030B2 (en) | 2005-01-07 | 2013-01-15 | Corel Corporation | Display methods for high dynamic range images and user interfaces for the same |
US20060170956A1 (en) * | 2005-01-31 | 2006-08-03 | Jung Edward K | Shared image devices |
US7538794B2 (en) * | 2005-01-31 | 2009-05-26 | Hewlett-Packard Development Company, L.P. | Method and apparatus for motion estimation in a digital imaging device |
US9082456B2 (en) | 2005-01-31 | 2015-07-14 | The Invention Science Fund I Llc | Shared image device designation |
US7826074B1 (en) | 2005-02-25 | 2010-11-02 | Microsoft Corporation | Fast embedded interaction code printing with custom postscript commands |
US7477784B2 (en) * | 2005-03-01 | 2009-01-13 | Microsoft Corporation | Spatial transforms from displayed codes |
US20060215913A1 (en) * | 2005-03-24 | 2006-09-28 | Microsoft Corporation | Maze pattern analysis with image matching |
US7403658B2 (en) * | 2005-04-15 | 2008-07-22 | Microsoft Corporation | Direct homography computation by local linearization |
US7421439B2 (en) | 2005-04-22 | 2008-09-02 | Microsoft Corporation | Global metadata embedding and decoding |
US20060242562A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Embedded method for embedded interaction code array |
US20070109411A1 (en) * | 2005-06-02 | 2007-05-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Composite image selectivity |
US9967424B2 (en) * | 2005-06-02 | 2018-05-08 | Invention Science Fund I, Llc | Data storage usage protocol |
US20070008326A1 (en) * | 2005-06-02 | 2007-01-11 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Dual mode image capture technique |
US9191611B2 (en) * | 2005-06-02 | 2015-11-17 | Invention Science Fund I, Llc | Conditional alteration of a saved image |
US20070098348A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Degradation/preservation management of captured data |
US8964054B2 (en) * | 2006-08-18 | 2015-02-24 | The Invention Science Fund I, Llc | Capturing selected image objects |
US9167195B2 (en) * | 2005-10-31 | 2015-10-20 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
US10003762B2 (en) | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US9621749B2 (en) * | 2005-06-02 | 2017-04-11 | Invention Science Fund I, Llc | Capturing selected image objects |
US9076208B2 (en) * | 2006-02-28 | 2015-07-07 | The Invention Science Fund I, Llc | Imagery processing |
US20070222865A1 (en) * | 2006-03-15 | 2007-09-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced video/still image correlation |
US9942511B2 (en) | 2005-10-31 | 2018-04-10 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
US9451200B2 (en) * | 2005-06-02 | 2016-09-20 | Invention Science Fund I, Llc | Storage access technique for captured data |
US20070139529A1 (en) * | 2005-06-02 | 2007-06-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Dual mode image capture technique |
US7400777B2 (en) * | 2005-05-25 | 2008-07-15 | Microsoft Corporation | Preprocessing for information pattern analysis |
US7729539B2 (en) * | 2005-05-31 | 2010-06-01 | Microsoft Corporation | Fast error-correcting of embedded interaction codes |
US7580576B2 (en) * | 2005-06-02 | 2009-08-25 | Microsoft Corporation | Stroke localization and binding to electronic document |
US20070127909A1 (en) | 2005-08-25 | 2007-06-07 | Craig Mowry | System and apparatus for increasing quality and efficiency of film capture and methods of use thereof |
JP4839035B2 (en) * | 2005-07-22 | 2011-12-14 | オリンパス株式会社 | Endoscopic treatment tool and endoscope system |
US7454136B2 (en) * | 2005-07-28 | 2008-11-18 | Mitsubishi Electric Research Laboratories, Inc. | Method and apparatus for acquiring HDR flash images |
US7817816B2 (en) * | 2005-08-17 | 2010-10-19 | Microsoft Corporation | Embedded interaction code enabled surface type identification |
US7596280B2 (en) * | 2005-09-29 | 2009-09-29 | Apple Inc. | Video acquisition with integrated GPU processing |
US8098256B2 (en) * | 2005-09-29 | 2012-01-17 | Apple Inc. | Video acquisition with integrated GPU processing |
US7711200B2 (en) * | 2005-09-29 | 2010-05-04 | Apple Inc. | Video acquisition with integrated GPU processing |
US20070120980A1 (en) | 2005-10-31 | 2007-05-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Preservation/degradation of video/audio aspects of a data stream |
US20070160134A1 (en) * | 2006-01-10 | 2007-07-12 | Segall Christopher A | Methods and Systems for Filter Characterization |
US7664336B2 (en) * | 2006-01-26 | 2010-02-16 | Microsoft Corporation | Video noise reduction |
JP4561649B2 (en) * | 2006-02-13 | 2010-10-13 | セイコーエプソン株式会社 | Image compression apparatus, image compression program and image compression method, HDR image generation apparatus, HDR image generation program and HDR image generation method, image processing system, image processing program and image processing method |
US8014445B2 (en) * | 2006-02-24 | 2011-09-06 | Sharp Laboratories Of America, Inc. | Methods and systems for high dynamic range video coding |
TWI322622B (en) * | 2006-03-22 | 2010-03-21 | Quanta Comp Inc | Image processing apparatus and method of the same |
US8194997B2 (en) * | 2006-03-24 | 2012-06-05 | Sharp Laboratories Of America, Inc. | Methods and systems for tone mapping messaging |
US7623683B2 (en) * | 2006-04-13 | 2009-11-24 | Hewlett-Packard Development Company, L.P. | Combining multiple exposure images to increase dynamic range |
US8880571B2 (en) * | 2006-05-05 | 2014-11-04 | Microsoft Corporation | High dynamic range data format conversions for digital media |
IES20070229A2 (en) * | 2006-06-05 | 2007-10-03 | Fotonation Vision Ltd | Image acquisition method and apparatus |
KR101313637B1 (en) * | 2006-06-09 | 2013-10-02 | 서강대학교산학협력단 | Image processing apparatus and method for contrast enhancement |
US7840078B2 (en) * | 2006-07-10 | 2010-11-23 | Sharp Laboratories Of America, Inc. | Methods and systems for image processing control based on adjacent block characteristics |
US8130822B2 (en) * | 2006-07-10 | 2012-03-06 | Sharp Laboratories Of America, Inc. | Methods and systems for conditional transform-domain residual accumulation |
US8422548B2 (en) * | 2006-07-10 | 2013-04-16 | Sharp Laboratories Of America, Inc. | Methods and systems for transform selection and management |
US8532176B2 (en) * | 2006-07-10 | 2013-09-10 | Sharp Laboratories Of America, Inc. | Methods and systems for combining layers in a multi-layer bitstream |
US8059714B2 (en) * | 2006-07-10 | 2011-11-15 | Sharp Laboratories Of America, Inc. | Methods and systems for residual layer scaling |
US7885471B2 (en) * | 2006-07-10 | 2011-02-08 | Sharp Laboratories Of America, Inc. | Methods and systems for maintenance and use of coded block pattern information |
US8253752B2 (en) * | 2006-07-20 | 2012-08-28 | Qualcomm Incorporated | Method and apparatus for encoder assisted pre-processing |
US8155454B2 (en) * | 2006-07-20 | 2012-04-10 | Qualcomm Incorporated | Method and apparatus for encoder assisted post-processing |
US7822289B2 (en) * | 2006-07-25 | 2010-10-26 | Microsoft Corporation | Locally adapted hierarchical basis preconditioning |
GB2443663A (en) * | 2006-07-31 | 2008-05-14 | Hewlett Packard Development Co | Electronic image capture with reduced noise |
US8055034B2 (en) * | 2006-09-13 | 2011-11-08 | Fluidigm Corporation | Methods and systems for image processing of microfluidic devices |
DE102006046720A1 (en) * | 2006-10-02 | 2008-04-03 | Arnold & Richter Cine Technik Gmbh & Co. Betriebs Kg | Digital motion-picture camera for taking motion pictures, has opto-electronic sensor device, which has multiple sensor elements for producing of received signals as function of exposure time |
JP4806329B2 (en) * | 2006-10-23 | 2011-11-02 | 三洋電機株式会社 | Imaging apparatus and imaging method |
KR100843090B1 (en) * | 2006-10-25 | 2008-07-02 | 삼성전자주식회사 | Apparatus and method for improving a flicker for images |
US7714892B2 (en) * | 2006-11-08 | 2010-05-11 | Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. | Systems, devices and methods for digital camera image stabilization |
JP4885690B2 (en) * | 2006-11-28 | 2012-02-29 | 株式会社エヌ・ティ・ティ・ドコモ | Image adjustment amount determination device, image adjustment amount determination method, image adjustment amount determination program, and image processing device |
US7719568B2 (en) * | 2006-12-16 | 2010-05-18 | National Chiao Tung University | Image processing system for integrating multi-resolution images |
KR20080059937A (en) * | 2006-12-26 | 2008-07-01 | 삼성전자주식회사 | Display apparatus and processing method for 3d image and processing system for 3d image |
WO2008085815A1 (en) * | 2007-01-05 | 2008-07-17 | Objectvideo, Inc. | Video-based sensing for lighting controls |
TW200830861A (en) * | 2007-01-11 | 2008-07-16 | Ind Tech Res Inst | Method for calibrating a response curve of a camera |
US8665942B2 (en) | 2007-01-23 | 2014-03-04 | Sharp Laboratories Of America, Inc. | Methods and systems for inter-layer image prediction signaling |
US7826673B2 (en) * | 2007-01-23 | 2010-11-02 | Sharp Laboratories Of America, Inc. | Methods and systems for inter-layer image prediction with color-conversion |
US8503524B2 (en) * | 2007-01-23 | 2013-08-06 | Sharp Laboratories Of America, Inc. | Methods and systems for inter-layer image prediction |
US8233536B2 (en) | 2007-01-23 | 2012-07-31 | Sharp Laboratories Of America, Inc. | Methods and systems for multiplication-free inter-layer image prediction |
TW200834470A (en) * | 2007-02-05 | 2008-08-16 | Huper Lab Co Ltd | Method of noise reduction based on diamond working windows |
US7760949B2 (en) | 2007-02-08 | 2010-07-20 | Sharp Laboratories Of America, Inc. | Methods and systems for coding multiple dynamic range images |
KR20080076004A (en) * | 2007-02-14 | 2008-08-20 | 삼성전자주식회사 | Image pickup device and method of extending dynamic range thereof |
US20080198235A1 (en) * | 2007-02-16 | 2008-08-21 | Shou-Lung Chen | High dynamic range image recorder |
US8054886B2 (en) * | 2007-02-21 | 2011-11-08 | Microsoft Corporation | Signaling and use of chroma sample positioning information |
US9307212B2 (en) | 2007-03-05 | 2016-04-05 | Fotonation Limited | Tone mapping for low-light video frame enhancement |
US8767834B2 (en) | 2007-03-09 | 2014-07-01 | Sharp Laboratories Of America, Inc. | Methods and systems for scalable-to-non-scalable bit-stream rewriting |
JP4306750B2 (en) * | 2007-03-14 | 2009-08-05 | ソニー株式会社 | Imaging apparatus, imaging method, exposure control method, program |
US7773118B2 (en) | 2007-03-25 | 2010-08-10 | Fotonation Vision Limited | Handheld article with movement discrimination |
JP4849339B2 (en) * | 2007-03-30 | 2012-01-11 | ソニー株式会社 | Information processing apparatus and method |
US8731322B2 (en) | 2007-05-03 | 2014-05-20 | Mtekvision Co., Ltd. | Image brightness controlling apparatus and method thereof |
CN101312494B (en) * | 2007-05-21 | 2012-04-04 | 华为技术有限公司 | Method for computing camera response curve and synthesizing image with large dynamic range and apparatus therefor |
US20090015713A1 (en) * | 2007-07-05 | 2009-01-15 | Horton David C | Arrangement and method for processing image data |
US7961983B2 (en) * | 2007-07-18 | 2011-06-14 | Microsoft Corporation | Generating gigapixel images |
US20090033755A1 (en) * | 2007-08-03 | 2009-02-05 | Tandent Vision Science, Inc. | Image acquisition and processing engine for computer vision |
US7983502B2 (en) * | 2007-08-06 | 2011-07-19 | Microsoft Corporation | Viewing wide angle images using dynamic tone mapping |
US8019215B2 (en) | 2007-08-06 | 2011-09-13 | Adobe Systems Incorporated | Method and apparatus for radiance capture by multiplexing in the frequency domain |
CN101394485B (en) * | 2007-09-20 | 2011-05-04 | 华为技术有限公司 | Image generating method, apparatus and image composition equipment |
JP4438847B2 (en) * | 2007-09-28 | 2010-03-24 | ソニー株式会社 | Imaging apparatus, imaging control method, and imaging control program |
WO2009051062A1 (en) * | 2007-10-15 | 2009-04-23 | Nippon Telegraph And Telephone Corporation | Image generation method, device, its program and recording medium stored with program |
US20090123068A1 (en) * | 2007-11-13 | 2009-05-14 | Himax Technologies Limited | Method for adaptively adjusting image and image processing apparatus using the same |
KR101411912B1 (en) * | 2007-11-21 | 2014-06-25 | 삼성전자주식회사 | Apparatus for processing digital image and method for controlling thereof |
US8723961B2 (en) * | 2008-02-26 | 2014-05-13 | Aptina Imaging Corporation | Apparatus and method for forming and displaying high dynamic range (HDR) images |
US8149300B2 (en) | 2008-04-28 | 2012-04-03 | Microsoft Corporation | Radiometric calibration from noise distributions |
KR101475464B1 (en) | 2008-05-09 | 2014-12-22 | 삼성전자 주식회사 | Multi-layer image sensor |
US8244058B1 (en) | 2008-05-30 | 2012-08-14 | Adobe Systems Incorporated | Method and apparatus for managing artifacts in frequency domain processing of light-field images |
JP5083046B2 (en) * | 2008-06-03 | 2012-11-28 | ソニー株式会社 | Imaging apparatus and imaging method |
US8165393B2 (en) * | 2008-06-05 | 2012-04-24 | Microsoft Corp. | High dynamic range texture compression |
US20090322777A1 (en) * | 2008-06-26 | 2009-12-31 | Microsoft Corporation | Unified texture compression framework |
US8134624B2 (en) * | 2008-07-03 | 2012-03-13 | Aptina Imaging Corporation | Method and apparatus providing multiple exposure high dynamic range sensor |
JP5271631B2 (en) * | 2008-08-07 | 2013-08-21 | Hoya株式会社 | Image processing unit, imaging device, composite image creation program |
GB0819069D0 (en) | 2008-10-17 | 2008-11-26 | Forensic Science Service Ltd | Improvements in and relating to methods and apparatus for comparison |
EP2180692A1 (en) | 2008-10-22 | 2010-04-28 | STWireless SA | Method of and a device for producing a digital picture |
CN101394487B (en) * | 2008-10-27 | 2011-09-14 | 华为技术有限公司 | Image synthesizing method and system |
DE102008044322A1 (en) * | 2008-12-03 | 2010-06-10 | Robert Bosch Gmbh | Control device for a camera arrangement, camera arrangement for a vehicle and method for controlling a camera of a vehicle |
US20100157079A1 (en) * | 2008-12-19 | 2010-06-24 | Qualcomm Incorporated | System and method to selectively combine images |
US8339475B2 (en) * | 2008-12-19 | 2012-12-25 | Qualcomm Incorporated | High dynamic range image combining |
US8406569B2 (en) * | 2009-01-19 | 2013-03-26 | Sharp Laboratories Of America, Inc. | Methods and systems for enhanced dynamic range images and video from multiple exposures |
US8774559B2 (en) * | 2009-01-19 | 2014-07-08 | Sharp Laboratories Of America, Inc. | Stereoscopic dynamic range image sequence |
US8189089B1 (en) | 2009-01-20 | 2012-05-29 | Adobe Systems Incorporated | Methods and apparatus for reducing plenoptic camera artifacts |
US9380260B2 (en) * | 2009-01-21 | 2016-06-28 | Texas Instruments Incorporated | Multichannel video port interface using no external memory |
WO2010088465A1 (en) | 2009-02-02 | 2010-08-05 | Gentex Corporation | Improved digital image processing and systems incorporating the same |
TWI393992B (en) * | 2009-02-05 | 2013-04-21 | Nat Univ Chung Cheng | High dynamic range image synthesis method |
JP4748230B2 (en) * | 2009-02-17 | 2011-08-17 | カシオ計算機株式会社 | Imaging apparatus, imaging method, and imaging program |
US8290295B2 (en) * | 2009-03-03 | 2012-10-16 | Microsoft Corporation | Multi-modal tone-mapping of images |
WO2010102135A1 (en) * | 2009-03-04 | 2010-09-10 | Wagner Paul A | Temporally aligned exposure bracketing for high dynamic range imaging |
DE102009001518B4 (en) * | 2009-03-12 | 2011-09-01 | Trident Microsystems (Far East) Ltd. | Method for generating an HDR video image sequence |
CN101841631B (en) * | 2009-03-20 | 2011-09-21 | 微星科技股份有限公司 | Shadow exposure compensation method and image processing device using same |
JP4678061B2 (en) * | 2009-04-02 | 2011-04-27 | 株式会社ニコン | Image processing apparatus, digital camera equipped with the same, and image processing program |
WO2010118177A1 (en) * | 2009-04-08 | 2010-10-14 | Zoran Corporation | Exposure control for high dynamic range image capture |
US8248481B2 (en) * | 2009-04-08 | 2012-08-21 | Aptina Imaging Corporation | Method and apparatus for motion artifact removal in multiple-exposure high-dynamic range imaging |
CN101867727B (en) * | 2009-04-16 | 2011-12-07 | 华为技术有限公司 | Method and device for processing video |
US8111300B2 (en) * | 2009-04-22 | 2012-02-07 | Qualcomm Incorporated | System and method to selectively combine video frame image data |
WO2010123923A1 (en) * | 2009-04-23 | 2010-10-28 | Zoran Corporation | Multiple exposure high dynamic range image capture |
US8570396B2 (en) * | 2009-04-23 | 2013-10-29 | Csr Technology Inc. | Multiple exposure high dynamic range image capture |
US8525900B2 (en) | 2009-04-23 | 2013-09-03 | Csr Technology Inc. | Multiple exposure high dynamic range image capture |
TWI381718B (en) * | 2009-05-20 | 2013-01-01 | Acer Inc | A system for reducing video noise and a method thereof, and an image capturing apparatus |
US8363062B2 (en) * | 2009-06-26 | 2013-01-29 | Sony Corporation | Method and unit for generating a radiance map |
EP2449526A1 (en) | 2009-06-29 | 2012-05-09 | Thomson Licensing | Zone-based tone mapping |
US8346009B2 (en) * | 2009-06-29 | 2013-01-01 | Thomson Licensing | Automatic exposure estimation for HDR images based on image statistics |
US8345975B2 (en) * | 2009-06-29 | 2013-01-01 | Thomson Licensing | Automatic exposure estimation for HDR images based on image statistics |
WO2011008239A1 (en) | 2009-06-29 | 2011-01-20 | Thomson Licensing | Contrast enhancement |
KR101614914B1 (en) | 2009-07-23 | 2016-04-25 | 삼성전자주식회사 | Motion adaptive high dynamic range image pickup apparatus and method |
JP5458865B2 (en) * | 2009-09-18 | 2014-04-02 | ソニー株式会社 | Image processing apparatus, imaging apparatus, image processing method, and program |
KR101633460B1 (en) * | 2009-10-21 | 2016-06-24 | 삼성전자주식회사 | Method and Apparatus for controlling multi-exposure |
KR20110043833A (en) * | 2009-10-22 | 2011-04-28 | 삼성전자주식회사 | Method for deciding a digital camera's dynamic-range-extended mode using fuzzy rules, and apparatus for performing the method |
KR101367821B1 (en) | 2009-12-21 | 2014-02-26 | 한국전자통신연구원 | Video identification method and apparatus using symmetric information of hierarchical image blocks |
CN102111534B (en) * | 2009-12-25 | 2013-01-23 | 财团法人工业技术研究院 | System and method for constructing high dynamic range images (HDRI) |
TWI424371B (en) * | 2009-12-30 | 2014-01-21 | Altek Corp | Video processing device and processing method thereof |
WO2011093994A1 (en) * | 2010-01-27 | 2011-08-04 | Thomson Licensing | High dynamic range (hdr) image synthesis with user input |
US8606009B2 (en) | 2010-02-04 | 2013-12-10 | Microsoft Corporation | High dynamic range image generation and rendering |
US8896715B2 (en) | 2010-02-11 | 2014-11-25 | Microsoft Corporation | Generic platform video image stabilization |
US8860833B2 (en) | 2010-03-03 | 2014-10-14 | Adobe Systems Incorporated | Blended rendering of focused plenoptic camera data |
WO2011115635A1 (en) * | 2010-03-19 | 2011-09-22 | University Of Central Florida Research Foundation, Inc. | Object tracking with opposing image capture devices |
DE102010013451A1 (en) | 2010-03-30 | 2011-10-06 | Tobias Hornig | Method for optimizing the light-dark dynamics of an optical device, e.g. a video/film/photo camera, by controlling the brightness of a single image or of several image sequences through different attenuation of the optical device |
JP5432799B2 (en) * | 2010-03-30 | 2014-03-05 | オリンパスイメージング株式会社 | Imaging apparatus, imaging system, and imaging method |
KR101664123B1 (en) | 2010-06-14 | 2016-10-11 | 삼성전자주식회사 | Apparatus and method of creating a ghost-free high dynamic range image by using filtering |
US10108314B2 (en) | 2010-06-25 | 2018-10-23 | Interdigital Ce Patent Holdings | Method and system for displaying and processing high dynamic range video and images |
KR101661215B1 (en) * | 2010-08-16 | 2016-09-30 | 삼성전자주식회사 | Image processing method and image processing apparatus |
WO2012027290A1 (en) | 2010-08-23 | 2012-03-01 | Red. Com, Inc. | High dynamic range video |
US8803918B2 (en) | 2010-08-27 | 2014-08-12 | Adobe Systems Incorporated | Methods and apparatus for calibrating focused plenoptic camera data |
US8724000B2 (en) | 2010-08-27 | 2014-05-13 | Adobe Systems Incorporated | Methods and apparatus for super-resolution in integral photography |
US8749694B2 (en) | 2010-08-27 | 2014-06-10 | Adobe Systems Incorporated | Methods and apparatus for rendering focused plenoptic camera data using super-resolved demosaicing |
US8665341B2 (en) | 2010-08-27 | 2014-03-04 | Adobe Systems Incorporated | Methods and apparatus for rendering output images with simulated artistic effects from focused plenoptic camera data |
KR101642964B1 (en) * | 2010-11-03 | 2016-07-27 | 삼성전자주식회사 | Apparatus and method for dynamically controlling the integration time of a depth camera for accuracy enhancement |
GB2500835B (en) * | 2010-12-10 | 2014-02-12 | Ibm | High-dynamic range video tone mapping |
EP2466872B1 (en) | 2010-12-14 | 2018-06-06 | Axis AB | Method and digital video camera for improving the image quality of images in a video image stream |
TWI463862B (en) * | 2010-12-31 | 2014-12-01 | Wt Microelectronics Co Ltd | Apparatus and method for processing wide dynamic range image |
US9024951B2 (en) * | 2011-02-16 | 2015-05-05 | Apple Inc. | Devices and methods for obtaining high-local-contrast image data |
US8711248B2 (en) | 2011-02-25 | 2014-04-29 | Microsoft Corporation | Global alignment for high-dynamic range image generation |
EP2681710B1 (en) | 2011-03-02 | 2018-09-12 | Dolby Laboratories Licensing Corporation | Local multiscale tone-mapping operator |
WO2012130208A1 (en) | 2011-03-17 | 2012-10-04 | Spheron Vr Ag | Video arrangement for providing hdr video streams |
US9030550B2 (en) | 2011-03-25 | 2015-05-12 | Adobe Systems Incorporated | Thin plenoptic cameras using solid immersion lenses |
JP5791336B2 (en) | 2011-04-01 | 2015-10-07 | キヤノン株式会社 | Image processing apparatus and control method thereof |
US9077910B2 (en) | 2011-04-06 | 2015-07-07 | Dolby Laboratories Licensing Corporation | Multi-field CCD capture for HDR imaging |
EP2702766B1 (en) * | 2011-04-28 | 2017-06-14 | Koninklijke Philips N.V. | Apparatuses and methods for hdr image encoding and decoding |
JP2012247874A (en) * | 2011-05-25 | 2012-12-13 | Sony Corp | Image processing apparatus and method |
US9769430B1 (en) | 2011-06-23 | 2017-09-19 | Gentex Corporation | Imager system with median filter and method thereof |
CN102413315A (en) * | 2011-07-29 | 2012-04-11 | 中兴通讯股份有限公司 | Video monitoring method and video monitoring system |
US9824426B2 (en) | 2011-08-01 | 2017-11-21 | Microsoft Technology Licensing, Llc | Reduced latency video stabilization |
US20130044237A1 (en) * | 2011-08-15 | 2013-02-21 | Broadcom Corporation | High Dynamic Range Video |
CN102510502B (en) * | 2011-09-30 | 2014-01-22 | 苏州佳世达电通有限公司 | Method and system for generating high-dynamic-range image |
US8913153B2 (en) | 2011-10-06 | 2014-12-16 | Aptina Imaging Corporation | Imaging systems and methods for generating motion-compensated high-dynamic-range images |
US9172889B2 (en) * | 2012-02-09 | 2015-10-27 | Semiconductor Components Industries, Llc | Imaging systems and methods for generating auto-exposed high-dynamic-range images |
US9041838B2 (en) | 2012-02-14 | 2015-05-26 | Gentex Corporation | High dynamic range imager system |
GB2499668B (en) * | 2012-02-27 | 2019-03-06 | Apical Ltd | Exposure controller |
JP2013183434A (en) * | 2012-03-05 | 2013-09-12 | Toshiba Corp | Solid-state imaging apparatus |
US9733707B2 (en) | 2012-03-22 | 2017-08-15 | Honeywell International Inc. | Touch screen display user interface and method for improving touch interface utility on the same employing a rules-based masking system |
US9699429B2 (en) | 2012-03-27 | 2017-07-04 | Sony Corporation | Image processing apparatus, imaging device, image processing method, and program for reducing noise or false colors in an image |
US9137456B2 (en) | 2012-06-06 | 2015-09-15 | Apple Inc. | Intelligent auto-exposure bracketing |
US9148582B2 (en) * | 2012-06-29 | 2015-09-29 | Intel Corporation | Method and system for perfect shot imaging from multiple images |
US9489706B2 (en) | 2012-07-02 | 2016-11-08 | Qualcomm Technologies, Inc. | Device and algorithm for capturing high dynamic range (HDR) video |
JP6120500B2 (en) * | 2012-07-20 | 2017-04-26 | キヤノン株式会社 | Imaging apparatus and control method thereof |
US9423871B2 (en) * | 2012-08-07 | 2016-08-23 | Honeywell International Inc. | System and method for reducing the effects of inadvertent touch on a touch screen controller |
US9390681B2 (en) * | 2012-09-11 | 2016-07-12 | Apple Inc. | Temporal filtering for dynamic pixel and backlight control |
US9338372B2 (en) | 2012-09-19 | 2016-05-10 | Semiconductor Components Industries, Llc | Column-based high dynamic range imaging systems |
FR2996034B1 (en) * | 2012-09-24 | 2015-11-20 | Jacques Joffre | Method for creating extended-dynamic-range images in still imaging and video, and imaging device implementing the method |
KR101408365B1 (en) | 2012-11-02 | 2014-06-18 | 삼성테크윈 주식회사 | Apparatus and method for analyzing image |
US9128580B2 (en) | 2012-12-07 | 2015-09-08 | Honeywell International Inc. | System and method for interacting with a touch screen interface utilizing an intelligent stencil mask |
WO2014099320A1 (en) | 2012-12-17 | 2014-06-26 | The Trustees Of Columbia University In The City Of New York | Methods, systems, and media for high dynamic range imaging |
CN103973988B (en) | 2013-01-24 | 2018-02-02 | 华为终端(东莞)有限公司 | Scene recognition method and device |
US9241128B2 (en) | 2013-02-14 | 2016-01-19 | Warner Bros. Entertainment Inc. | Video conversion technology |
JP6236658B2 (en) * | 2013-02-19 | 2017-11-29 | 日本電気株式会社 | Imaging system and imaging method |
US11013398B2 (en) | 2013-03-13 | 2021-05-25 | Stryker Corporation | System for obtaining clear endoscope images |
US9117134B1 (en) | 2013-03-19 | 2015-08-25 | Google Inc. | Image merging with blending |
CN103237168A (en) * | 2013-04-02 | 2013-08-07 | 清华大学 | Method for processing high-dynamic-range image videos on basis of comprehensive gains |
US9955084B1 (en) | 2013-05-23 | 2018-04-24 | Oliver Markus Haynold | HDR video camera |
US9131201B1 (en) | 2013-05-24 | 2015-09-08 | Google Inc. | Color correcting virtual long exposures with true long exposures |
TWI502990B (en) * | 2013-06-27 | 2015-10-01 | Altek Semiconductor Corp | Method for generating high dynamic range image and image sensor thereof |
TWI513310B (en) * | 2013-07-12 | 2015-12-11 | Univ Nat Yunlin Sci & Tech | Device and method for expanding dynamic range of camera |
TWI630820B (en) * | 2013-07-19 | 2018-07-21 | 新力股份有限公司 | File generation device, file generation method, file reproduction device, and file reproduction method |
US9053558B2 (en) | 2013-07-26 | 2015-06-09 | Rui Shen | Method and system for fusing multiple images |
TWI464526B (en) * | 2013-08-08 | 2014-12-11 | Quanta Comp Inc | Method of controlling exposure time of high dynamic range image |
KR20150025602A (en) * | 2013-08-29 | 2015-03-11 | 삼성전자주식회사 | Method for recording video and an electronic device thereof |
US20150097978A1 (en) * | 2013-10-07 | 2015-04-09 | Qualcomm Incorporated | System and method for high fidelity, high dynamic range scene reconstruction with frame stacking |
CN105684412B (en) * | 2013-10-22 | 2017-04-26 | 杜比实验室特许公司 | Calendar mechanism for a clock movement |
US9380229B2 (en) | 2014-02-28 | 2016-06-28 | Samsung Electronics Co., Ltd. | Digital imaging systems including image sensors having logarithmic response ranges and methods of determining motion |
CN104917973B (en) * | 2014-03-11 | 2019-03-05 | 宏碁股份有限公司 | Dynamic exposure adjustment method and electronic device thereof |
US9204048B2 (en) * | 2014-03-27 | 2015-12-01 | Facebook, Inc. | Stabilization of low-light video |
US9407832B2 (en) * | 2014-04-25 | 2016-08-02 | Himax Imaging Limited | Multi-exposure imaging system and method for eliminating rolling shutter flicker |
US9307162B2 (en) * | 2014-05-21 | 2016-04-05 | Himax Imaging Limited | Local enhancement apparatus and method to generate high dynamic range images by blending brightness-preserved and brightness-adjusted blocks |
KR101623947B1 (en) * | 2014-06-20 | 2016-05-24 | 경남정보대학교 산학협력단 | Apparatus and method for producing HDR dynamic images |
US10104388B2 (en) | 2014-06-30 | 2018-10-16 | Sony Corporation | Video processing system with high dynamic range sensor mechanism and method of operation thereof |
US9852531B2 (en) | 2014-07-11 | 2017-12-26 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling the same |
US10277771B1 (en) | 2014-08-21 | 2019-04-30 | Oliver Markus Haynold | Floating-point camera |
US10225485B1 (en) | 2014-10-12 | 2019-03-05 | Oliver Markus Haynold | Method and apparatus for accelerated tonemapping |
EP3046319A1 (en) | 2015-01-19 | 2016-07-20 | Thomson Licensing | Method for generating an HDR image of a scene based on a tradeoff between brightness distribution and motion |
US20170347113A1 (en) * | 2015-01-29 | 2017-11-30 | Koninklijke Philips N.V. | Local dynamic range adjustment color processing |
US10268886B2 (en) | 2015-03-11 | 2019-04-23 | Microsoft Technology Licensing, Llc | Context-awareness through biased on-device image classifiers |
US20160267349A1 (en) * | 2015-03-11 | 2016-09-15 | Microsoft Technology Licensing, Llc | Methods and systems for generating enhanced images using multi-frame processing |
US10055672B2 (en) | 2015-03-11 | 2018-08-21 | Microsoft Technology Licensing, Llc | Methods and systems for low-energy image classification |
EP3273854B1 (en) | 2015-03-26 | 2021-09-22 | Universidade de Coimbra | Systems for computer-aided surgery using intra-operative video acquired by a free moving camera |
US10127644B2 (en) * | 2015-04-10 | 2018-11-13 | Apple Inc. | Generating synthetic video frames using optical flow |
TWI546769B (en) | 2015-04-10 | 2016-08-21 | 瑞昱半導體股份有限公司 | Image processing device and method thereof |
EP3284252B1 (en) * | 2015-04-13 | 2021-07-21 | Universidade De Coimbra | Methods and systems for camera characterization in terms of response function, color, and vignetting under non-uniform illumination |
JP6584131B2 (en) * | 2015-05-08 | 2019-10-02 | キヤノン株式会社 | Imaging apparatus, imaging system, and signal processing method |
TWI558207B (en) | 2015-06-29 | 2016-11-11 | 瑞昱半導體股份有限公司 | Wide dynamic range imaging method |
CN105187730B (en) * | 2015-07-31 | 2018-08-07 | 上海兆芯集成电路有限公司 | High dynamic range image production method and device using the same |
KR101598959B1 (en) * | 2015-09-11 | 2016-03-02 | 주식회사 유니시큐 | Method of transmitting a plurality of video streams for improving the background and object identification capability of an IP camera |
US10015409B2 (en) * | 2015-09-17 | 2018-07-03 | Mediatek Inc. | Electronic device, processor and method for setting a sensor |
CN105472265B (en) * | 2015-12-04 | 2018-12-14 | 中国神华能源股份有限公司 | Device and method for obtaining high dynamic range images |
JP2017118296A (en) * | 2015-12-24 | 2017-06-29 | キヤノン株式会社 | Imaging apparatus, image processing apparatus, image processing method, image processing program, and storage medium |
JP6233424B2 (en) | 2016-01-05 | 2017-11-22 | ソニー株式会社 | Imaging system and imaging method |
KR102153607B1 (en) * | 2016-01-22 | 2020-09-08 | 삼성전자주식회사 | Apparatus and method for detecting foreground in image |
CN105721773B (en) * | 2016-01-29 | 2019-04-30 | 深圳市美好幸福生活安全系统有限公司 | Video acquisition apparatus and method |
US9852523B2 (en) * | 2016-02-24 | 2017-12-26 | Ondrej Jamri{hacek over (s)}ka | Appearance transfer techniques maintaining temporal coherence |
US9870638B2 (en) | 2016-02-24 | 2018-01-16 | Ondrej Jamri{hacek over (s)}ka | Appearance transfer techniques |
EP3430472B1 (en) * | 2016-03-13 | 2023-06-07 | B.G. Negev Technologies and Applications Ltd. at Ben-Gurion University | Method of producing video images that are independent of the background lighting |
CN106447618B (en) * | 2016-05-20 | 2019-04-12 | 北京九艺同兴科技有限公司 | Dictionary-learning-based noise reduction method for human action sequences |
US10129485B2 (en) * | 2016-06-10 | 2018-11-13 | Microsoft Technology Licensing, Llc | Methods and systems for generating high dynamic range images |
EP3507981B1 (en) * | 2016-08-30 | 2023-11-29 | Dolby Laboratories Licensing Corporation | Real-time reshaping of single-layer backwards-compatible codec |
US9955085B2 (en) * | 2016-09-22 | 2018-04-24 | Apple Inc. | Adaptive bracketing techniques |
KR102615738B1 (en) * | 2016-10-06 | 2023-12-19 | 한화비전 주식회사 | Image processing apparatus and method thereof |
CN106570849A (en) * | 2016-10-12 | 2017-04-19 | 成都西纬科技有限公司 | Image optimization method |
US10187584B2 (en) | 2016-12-20 | 2019-01-22 | Microsoft Technology Licensing, Llc | Dynamic range extension to produce high dynamic range images |
US10706512B2 (en) * | 2017-03-07 | 2020-07-07 | Adobe Inc. | Preserving color in image brightness adjustment for exposure fusion |
WO2018170181A1 (en) | 2017-03-14 | 2018-09-20 | Universidade De Coimbra | Systems and methods for 3d registration of curves and surfaces using local differential information |
US10009551B1 (en) * | 2017-03-29 | 2018-06-26 | Amazon Technologies, Inc. | Image processing for merging images of a scene captured with differing camera parameters |
US10645302B2 (en) * | 2017-03-30 | 2020-05-05 | Egis Technology Inc. | Image sensing device having adjustable exposure periods and sensing method using the same |
US10489897B2 (en) * | 2017-05-01 | 2019-11-26 | Gopro, Inc. | Apparatus and methods for artifact detection and removal using frame interpolation techniques |
US10334141B2 (en) | 2017-05-25 | 2019-06-25 | Denso International America, Inc. | Vehicle camera system |
US11288781B2 (en) * | 2017-06-16 | 2022-03-29 | Dolby Laboratories Licensing Corporation | Efficient end-to-end single layer inverse display management coding |
US10755392B2 (en) * | 2017-07-13 | 2020-08-25 | Mediatek Inc. | High-dynamic-range video tone mapping |
US10467733B2 (en) | 2017-07-27 | 2019-11-05 | Raytheon Company | Multiplexed high dynamic range images |
EP3493150A1 (en) * | 2017-11-30 | 2019-06-05 | InterDigital VC Holdings, Inc. | Tone mapping adaptation for saturation control |
KR102412591B1 (en) | 2017-12-21 | 2022-06-24 | 삼성전자주식회사 | Method of generating composite image using a plurality of images with different exposure values and electronic device supporting the same |
KR102048369B1 (en) * | 2017-12-22 | 2019-11-25 | 에이스웨이브텍(주) | Fusion dual IR camera for LWIR and SWIR with an image fusion algorithm |
KR102524671B1 (en) | 2018-01-24 | 2023-04-24 | 삼성전자주식회사 | Electronic apparatus and controlling method of thereof |
JP6833751B2 (en) | 2018-03-20 | 2021-02-24 | 株式会社東芝 | Imaging control device, imaging device, and imaging control method |
US10445865B1 (en) * | 2018-03-27 | 2019-10-15 | Tfi Digital Media Limited | Method and apparatus for converting low dynamic range video to high dynamic range video |
WO2019199701A1 (en) | 2018-04-09 | 2019-10-17 | Dolby Laboratories Licensing Corporation | Hdr image representations using neural network mappings |
US10609299B1 (en) * | 2018-09-17 | 2020-03-31 | Black Sesame International Holding Limited | Method of measuring light using dual cameras |
RU2699812C1 (en) * | 2018-11-08 | 2019-09-11 | Вячеслав Михайлович Смелков | Method for controlling the sensitivity of a CCD-matrix television camera under complex illumination and/or complex object-brightness conditions, with computer recording of the video signal and its reproduction |
CN112823375A (en) | 2018-11-09 | 2021-05-18 | 三星电子株式会社 | Image resynthesis using forward warping, gap discriminator and coordinate-based inpainting |
RU2726160C1 (en) * | 2019-04-29 | 2020-07-09 | Самсунг Электроникс Ко., Лтд. | Image re-synthesis using forward warping, a gap discriminator and coordinate-based inpainting |
US10853927B2 (en) | 2019-03-19 | 2020-12-01 | Apple Inc. | Image fusion architecture |
US10853928B2 (en) | 2019-03-29 | 2020-12-01 | Apple Inc. | Image fusion processing module |
TWI703863B (en) * | 2019-06-13 | 2020-09-01 | 瑞昱半導體股份有限公司 | Method for detecting video quality and image processing circuit thereof |
IL268612A (en) * | 2019-08-08 | 2021-03-01 | HYATT Yonatan | Use of an HDR image in a visual inspection process |
US11252440B2 (en) * | 2019-11-07 | 2022-02-15 | Comcast Cable Communications, Llc | Pixel filtering for content |
CN110708473B (en) * | 2019-11-14 | 2022-04-15 | 深圳市道通智能航空技术股份有限公司 | High dynamic range image exposure control method, aerial camera and unmanned aerial vehicle |
US11503221B2 (en) * | 2020-04-01 | 2022-11-15 | Samsung Electronics Co., Ltd. | System and method for motion warping using multi-exposure frames |
CN111462021B (en) * | 2020-04-27 | 2023-08-29 | Oppo广东移动通信有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
KR20220010264A (en) * | 2020-07-17 | 2022-01-25 | 에스케이하이닉스 주식회사 | Noise removing circuit, image sensing device and operation method thereof |
US11490078B2 (en) | 2020-12-29 | 2022-11-01 | Tencent America LLC | Method and apparatus for deep neural network based inter-frame prediction in video coding |
CN113572972B (en) * | 2021-07-05 | 2022-04-12 | 深圳市阿达视高新技术有限公司 | High dynamic range image synthesis method, system, image processing apparatus and medium |
US11595590B1 (en) * | 2022-01-24 | 2023-02-28 | Dell Products L.P. | Method for intelligent frame capture for high-dynamic range images |
CN114845137B (en) * | 2022-03-21 | 2023-03-10 | 南京大学 | Video light path reconstruction method and device based on image registration |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3074967B2 (en) * | 1992-10-27 | 2000-08-07 | 松下電器産業株式会社 | High dynamic range imaging / synthesis method and high dynamic range imaging apparatus |
US5517242A (en) * | 1993-06-29 | 1996-05-14 | Kabushiki Kaisha Toyota Chuo Kenkyusho | Image sensing device having expanded dynamic range |
US5801773A (en) * | 1993-10-29 | 1998-09-01 | Canon Kabushiki Kaisha | Image data processing apparatus for processing combined image signals in order to extend dynamic range |
US5929908A (en) * | 1995-02-03 | 1999-07-27 | Canon Kabushiki Kaisha | Image sensing apparatus which performs dynamic range expansion and image sensing method for dynamic range expansion |
US5828793A (en) * | 1996-05-06 | 1998-10-27 | Massachusetts Institute Of Technology | Method and apparatus for producing digital images having extended dynamic ranges |
KR20000064963A (en) * | 1997-02-21 | 2000-11-06 | 엠. 제이. 엠. 반 캄 | Method and apparatus for recording and playing video images |
NZ332626A (en) * | 1997-11-21 | 2000-04-28 | Matsushita Electric Ind Co Ltd | Expansion of dynamic range for video camera |
JPH11155098A (en) * | 1997-11-21 | 1999-06-08 | Matsushita Electric Ind Co Ltd | Device and method for processing signal |
US7057150B2 (en) * | 1998-03-16 | 2006-06-06 | Panavision Imaging Llc | Solid state imager with reduced number of transistors per pixel |
US6560285B1 (en) * | 1998-03-30 | 2003-05-06 | Sarnoff Corporation | Region-based information compaction as for digital images |
JP4284570B2 (en) * | 1999-05-31 | 2009-06-24 | ソニー株式会社 | Imaging apparatus and method thereof |
US6864916B1 (en) * | 1999-06-04 | 2005-03-08 | The Trustees Of Columbia University In The City Of New York | Apparatus and method for high dynamic range imaging using spatially varying exposures |
US6760484B1 (en) * | 2000-01-26 | 2004-07-06 | Hewlett-Packard Development Company, L.P. | Method for improved contrast mapping of digital images |
US6850642B1 (en) * | 2000-01-31 | 2005-02-01 | Micron Technology, Inc. | Dynamic histogram equalization for high dynamic range images |
US7336309B2 (en) * | 2000-07-05 | 2008-02-26 | Vision-Sciences Inc. | Dynamic range compression method |
JP4554094B2 (en) * | 2001-01-26 | 2010-09-29 | オリンパス株式会社 | Imaging device |
WO2003024091A1 (en) * | 2000-11-16 | 2003-03-20 | California Institute Of Technology | Photodiode cmos imager with column-feedback soft-reset |
US7050205B2 (en) * | 2000-12-26 | 2006-05-23 | Fuji Photo Film Co., Ltd. | Method of correcting image data picked up from photographic film |
JP2002314885A (en) * | 2001-04-09 | 2002-10-25 | Toshiba Corp | Imaging device |
JP3740394B2 (en) * | 2001-07-27 | 2006-02-01 | 日本電信電話株式会社 | High dynamic range video generation method and apparatus, execution program for the method, and recording medium for the execution program |
JP3948229B2 (en) * | 2001-08-01 | 2007-07-25 | ソニー株式会社 | Image capturing apparatus and method |
EP1313066B1 (en) * | 2001-11-19 | 2008-08-27 | STMicroelectronics S.r.l. | A method for merging digital images to obtain a high dynamic range digital image |
US20040008267A1 (en) * | 2002-07-11 | 2004-01-15 | Eastman Kodak Company | Method and apparatus for generating images used in extended range image composition |
US20040100565A1 (en) * | 2002-11-22 | 2004-05-27 | Eastman Kodak Company | Method and system for generating images used in extended range panorama composition |
US6879731B2 (en) * | 2003-04-29 | 2005-04-12 | Microsoft Corporation | System and process for generating high dynamic range video |
WO2005027491A2 (en) * | 2003-09-05 | 2005-03-24 | The Regents Of The University Of California | Global motion estimation image coding and processing |
JP2005250235A (en) * | 2004-03-05 | 2005-09-15 | Seiko Epson Corp | Optical modulating device, optical display device, optical modulation control program, optical display device control program, optical modulation control method, and optical display device control method |
2003
- 2003-04-29 US US10/425,338 patent/US6879731B2/en not_active Expired - Fee Related

2004
- 2004-04-01 CN CNB2004800153340A patent/CN100524345C/en not_active Expired - Fee Related
- 2004-04-01 EP EP04749672A patent/EP1618737A4/en not_active Withdrawn
- 2004-04-01 JP JP2006509622A patent/JP4397048B2/en not_active Expired - Fee Related
- 2004-04-01 WO PCT/US2004/010167 patent/WO2004098167A2/en active Search and Examination
- 2004-04-01 KR KR1020057020537A patent/KR101026577B1/en not_active IP Right Cessation
- 2004-04-05 TW TW093109392A patent/TWI396433B/en not_active IP Right Cessation
- 2004-10-15 US US10/965,935 patent/US7010174B2/en not_active Expired - Fee Related

2005
- 2005-01-14 US US11/036,944 patent/US7382931B2/en not_active Expired - Fee Related

2006
- 2006-01-23 US US11/338,910 patent/US7239757B2/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002271681A (en) | 2001-03-09 | 2002-09-20 | Hitachi Kokusai Electric Inc | Television camera |
US20020145674A1 (en) | 2001-04-09 | 2002-10-10 | Satoru Nakamura | Imaging apparatus and signal processing method for the same |
Non-Patent Citations (1)
Title |
---|
See also references of EP1618737A4 |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8989267B2 (en) | 2006-01-23 | 2015-03-24 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US8611421B1 (en) | 2006-01-23 | 2013-12-17 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US10165297B2 (en) | 2006-01-23 | 2018-12-25 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US9544610B2 (en) | 2006-01-23 | 2017-01-10 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US9894374B2 (en) | 2006-01-23 | 2018-02-13 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US9210439B2 (en) | 2006-01-23 | 2015-12-08 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US8537893B2 (en) | 2006-01-23 | 2013-09-17 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US10931961B2 (en) | 2006-01-23 | 2021-02-23 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | High dynamic range codecs |
US11751757B2 (en) | 2012-07-26 | 2023-09-12 | DePuy Synthes Products, Inc. | Wide dynamic range using monochromatic sensor |
US11082627B2 (en) | 2012-07-26 | 2021-08-03 | DePuy Synthes Products, Inc. | Wide dynamic range using monochromatic sensor |
US10742895B2 (en) | 2012-07-26 | 2020-08-11 | DePuy Synthes Products, Inc. | Wide dynamic range using monochromatic sensor |
US9100589B1 (en) | 2012-09-11 | 2015-08-04 | Google Inc. | Interleaved capture for high dynamic range image acquisition and synthesis |
US8964060B2 (en) | 2012-12-13 | 2015-02-24 | Google Inc. | Determining an image capture payload burst structure based on a metering image capture sweep |
US9087391B2 (en) | 2012-12-13 | 2015-07-21 | Google Inc. | Determining an image capture payload burst structure |
US8866927B2 (en) | 2012-12-13 | 2014-10-21 | Google Inc. | Determining an image capture payload burst structure based on a metering image capture sweep |
WO2014093042A1 (en) * | 2012-12-13 | 2014-06-19 | Google Inc. | Determining an image capture payload burst structure based on metering image capture sweep |
WO2014093048A1 (en) * | 2012-12-13 | 2014-06-19 | Google Inc. | Determining an image capture payload burst structure |
US8866928B2 (en) | 2012-12-18 | 2014-10-21 | Google Inc. | Determining exposure times using split paxels |
WO2014099326A1 (en) * | 2012-12-20 | 2014-06-26 | Google Inc. | Determining image alignment failure |
US8995784B2 (en) | 2013-01-17 | 2015-03-31 | Google Inc. | Structure descriptors for image processing |
US9749551B2 (en) | 2013-02-05 | 2017-08-29 | Google Inc. | Noise models for image processing |
US9686537B2 (en) | 2013-02-05 | 2017-06-20 | Google Inc. | Noise models for image processing |
US9066017B2 (en) | 2013-03-25 | 2015-06-23 | Google Inc. | Viewfinder display based on metering images |
US9077913B2 (en) | 2013-05-24 | 2015-07-07 | Google Inc. | Simulating high dynamic range imaging with virtual long-exposure images |
US9615012B2 (en) | 2013-09-30 | 2017-04-04 | Google Inc. | Using a second camera to adjust settings of first camera |
Also Published As
Publication number | Publication date |
---|---|
WO2004098167A3 (en) | 2005-04-28 |
CN1799057A (en) | 2006-07-05 |
EP1618737A4 (en) | 2011-05-11 |
US20040218830A1 (en) | 2004-11-04 |
JP2006525747A (en) | 2006-11-09 |
US6879731B2 (en) | 2005-04-12 |
JP4397048B2 (en) | 2010-01-13 |
TW200509687A (en) | 2005-03-01 |
US7010174B2 (en) | 2006-03-07 |
KR20060012278A (en) | 2006-02-07 |
US7239757B2 (en) | 2007-07-03 |
US20050243177A1 (en) | 2005-11-03 |
TWI396433B (en) | 2013-05-11 |
CN100524345C (en) | 2009-08-05 |
KR101026577B1 (en) | 2011-03-31 |
US20060133688A1 (en) | 2006-06-22 |
EP1618737A2 (en) | 2006-01-25 |
US20050047676A1 (en) | 2005-03-03 |
US7382931B2 (en) | 2008-06-03 |
Similar Documents
Publication | Title |
---|---|
US6879731B2 (en) | System and process for generating high dynamic range video |
CA2471671C (en) | A system and process for generating high dynamic range images from multiple exposures of a moving scene | |
Kang et al. | High dynamic range video | |
Eden et al. | Seamless image stitching of scenes with large motions and exposure differences | |
US7239805B2 (en) | Method and system for combining multiple exposure images having scene and camera motion | |
US11025830B1 (en) | Deghosting camera | |
EP3631754B1 (en) | Image processing apparatus and method | |
Bennett et al. | Video enhancement using per-pixel virtual exposures | |
US8774559B2 (en) | Stereoscopic dynamic range image sequence | |
US8933985B1 (en) | Method, apparatus, and manufacture for on-camera HDR panorama | |
Mangiat et al. | High dynamic range video with ghost removal | |
Schubert et al. | A hands-on approach to high-dynamic-range and superresolution fusion | |
Shen et al. | Recovering high dynamic range by Multi-Exposure Retinex | |
CN114885094B (en) | Image processing method, image processor, image processing module and device | |
US20240040251A1 (en) | Systems, apparatus, and methods for stabilization and blending of exposures | |
Neiterman et al. | Adaptive Enhancement of Extreme Low-Light Images | |
CN115797224A (en) | High-dynamic image generation method and device for removing ghosts and storage medium | |
Li et al. | High dynamic range video with synthesized gain control | |
CN117412185A (en) | Image generation method, device, electronic equipment and computer readable storage medium | |
Hebbalaguppe et al. | An efficient multiple exposure image fusion in JPEG domain |
Legal Events
Code | Title | Description |
---|---|---|
AK | Designated states | Kind code of ref document: A2. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents | Kind code of ref document: A2. Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 4875/DELNP/2005. Country of ref document: IN |
REEP | Request for entry into the european phase | Ref document number: 2004749672. Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2004749672. Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 1020057020537. Country of ref document: KR |
WWE | Wipo information: entry into national phase | Ref document number: 2006509622. Country of ref document: JP |
WWE | Wipo information: entry into national phase | Ref document number: 20048153340. Country of ref document: CN |
WWP | Wipo information: published in national office | Ref document number: 2004749672. Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 1020057020537. Country of ref document: KR |
DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101) | |