CN104463817A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN104463817A
Authority
CN
China
Prior art keywords
images
image
template
frequency content
intelligent terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310416756.XA
Other languages
Chinese (zh)
Inventor
朱聪超
罗巍
邓斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd filed Critical Huawei Device Co Ltd
Priority to CN201310416756.XA priority Critical patent/CN104463817A/en
Publication of CN104463817A publication Critical patent/CN104463817A/en
Pending legal-status Critical Current

Abstract

The invention discloses an image processing method and device, relating to the field of image processing, which enable an intelligent terminal that is in motion to acquire an image of a scene in which the different photographed subjects are all clear. The method comprises the steps of: acquiring, with the intelligent terminal in motion, at least two images of the same scene with different focuses; arbitrarily selecting one of the at least two images as a template; extracting feature points from the at least two images, matching the feature points of each non-template image against those of the template, and, from the successfully matched feature-point pairs, obtaining transformation parameters between each non-template image and the template; transforming the non-template images according to the transformation parameters and taking the template together with the transformed images as the registration result of the at least two images; and performing focus fusion on the registration result to obtain the fused image. The method is applicable to image processing.

Description

Image processing method and device
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method and device.
Background
In current image acquisition on mobile intelligent terminals, the terminal focuses on the scene being captured, determines the focal plane containing the focus, and obtains a digital image through an image signal processor; subjects whose depth lies near the focal plane do not appear blurred in the photograph.
Multi-focus stacking uses digital image processing to combine the detailed information of multiple differently focused images taken of the same scene into a single photograph in which every subject is clear. However, movement of the mobile terminal during shooting introduces differences between the images that prevent multi-focus stacking from working properly, and the resulting image shows ghosting.
Therefore, how an intelligent terminal in motion can obtain an image of a scene in which the different photographed subjects are all clear is a problem to be solved.
Summary of the invention
Embodiments of the invention provide an image processing method and device that can obtain an image in which the different photographed subjects of a scene are all clear even while the intelligent terminal is moving.
Embodiments of the invention adopt the following technical solutions:
In a first aspect, an image processing method applied to an intelligent terminal is provided, comprising:
acquiring, while the intelligent terminal is in motion, at least two images of the same scene, the at least two images using different focuses;
arbitrarily selecting one of the at least two images as a template, extracting feature points from the at least two images, matching the feature points of each image other than the template against the feature points of the template to obtain successfully matched feature-point pairs, and, according to the successfully matched feature-point pairs, obtaining transformation parameters between each image other than the template and the template;
transforming each image other than the template according to the transformation parameters, and taking the template together with the transformed images as the registration result of the at least two images;
performing focus fusion on the registration result of the at least two images to obtain the fused image.
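The steps above (acquire multi-focus images, register each to a template, then fuse by sharpness) can be sketched in miniature. The sketch below is an illustrative toy using NumPy, with integer-shift registration and per-pixel gradient selection standing in for the full transformation-parameter estimation and frequency-domain fusion that the method describes; all function names are hypothetical.

```python
import numpy as np

def register_by_shift(template, image, max_shift=5):
    """Brute-force search for the integer (dy, dx) shift that best aligns
    `image` to `template`, minimizing the sum of squared differences."""
    best_err, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - template) ** 2)
            if best_err is None or err < best_err:
                best_err, best_shift = err, (dy, dx)
    dy, dx = best_shift
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

def fuse_by_gradient(a, b):
    """Per pixel, keep the sample from whichever registered image is
    locally sharper (larger gradient magnitude)."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gy, gx)
    return np.where(grad_mag(a) >= grad_mag(b), a, b)

# Toy use: the second frame is a shifted copy of the template;
# registration undoes the shift before fusion.
template = np.zeros((8, 8)); template[:, :4] = 1.0
moved = np.roll(template, 2, axis=1)
fused = fuse_by_gradient(template, register_by_shift(template, moved))
```

A real implementation would replace the exhaustive shift search with the feature-point matching and transformation parameters described in the claims.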
In a first possible implementation, in conjunction with the first aspect, extracting feature points from the at least two images and matching the feature points of each non-template image against those of the template to obtain the successfully matched feature-point pairs specifically comprises:
extracting, according to a feature point extraction algorithm, the feature points and their feature descriptors from each of the at least two images;
comparing the values of the feature descriptors of the feature points in each non-template image with the values of the feature descriptors of the corresponding feature points in the template, and selecting as successfully matched feature-point pairs those feature points whose descriptor values differ from the template's by no more than a preset range.
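A minimal sketch of this descriptor comparison, assuming Euclidean distance between descriptor vectors and a fixed threshold standing in for the "preset range" (the function name and threshold are illustrative, not from the source):

```python
import numpy as np

def match_descriptors(desc_img, desc_tmpl, max_dist=0.5):
    """For each descriptor of a non-template image, find the closest
    template descriptor; keep the pair only if the descriptor values
    differ by no more than the preset range `max_dist`."""
    pairs = []
    for i, d in enumerate(np.asarray(desc_img, float)):
        dists = np.linalg.norm(np.asarray(desc_tmpl, float) - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs.append((i, j))  # (image point index, template point index)
    return pairs
```

Calling it with two small descriptor sets, e.g. `match_descriptors([[0.1, 0.0], [5.0, 5.0]], [[0.0, 0.0], [1.0, 1.0]])`, keeps only the first point, whose descriptor lies within the range of template point 0.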
In a second possible implementation, in conjunction with the first possible implementation, the feature descriptor comprises edge, contour, and gradient information of the neighborhood around the feature point.
In a third possible implementation, in conjunction with the first aspect, obtaining the transformation parameters between each image other than the template and the template further comprises:
obtaining the displacement of the intelligent terminal's lens while the at least two images are acquired, obtaining the image size difference of the at least two images from that lens displacement, and obtaining the transformation parameters according to the successfully matched feature-point pairs and the image size difference;
or,
obtaining the displacement of the intelligent terminal while the at least two images are acquired, obtaining the image translation parameter of the at least two images from that displacement, and obtaining the transformation parameters according to the successfully matched feature-point pairs and the image translation parameter;
or,
obtaining both the lens displacement and the displacement of the intelligent terminal while the at least two images are acquired, obtaining the image size difference of the at least two images from the lens displacement, and obtaining the image translation parameter of the at least two images from the terminal's displacement;
and obtaining the transformation parameters between the at least two images according to the successfully matched feature-point pairs, the image size difference, and the image translation parameter.
In a fourth possible implementation, in conjunction with the first aspect, performing focus fusion on the at least two transformed images according to the registration result to obtain the fused image specifically comprises:
transforming the registration result of the at least two images into the frequency domain and decomposing it by frequency, obtaining at least one frequency component for the registration result of each image, each frequency component corresponding to a fixed frequency interval of the image;
comparing each corresponding frequency component across the registration results of the at least two images, and choosing, from each set of corresponding frequency components, the one with the larger gradient as that frequency component of the fused image;
merging the chosen frequency components, one per frequency component of the fused image, to generate the fused image.
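One hedged way this frequency-domain fusion could look in code: split each registered image's 2-D FFT into radial frequency bands and, per band, keep the component from the image with the larger band energy, a simple stand-in for the "larger gradient" criterion. All names and the band scheme are illustrative assumptions.

```python
import numpy as np

def fuse_frequency_bands(images, n_bands=2):
    """Decompose each registered image into radial FFT frequency bands,
    keep each band from the image whose band carries the most energy,
    and inverse-transform the merged spectrum."""
    h, w = images[0].shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fy, fx)  # radial frequency of each spectrum bin
    edges = np.linspace(0.0, radius.max() + 1e-9, n_bands + 1)
    specs = [np.fft.fft2(img) for img in images]
    fused = np.zeros((h, w), dtype=complex)
    for k in range(n_bands):
        band = (radius >= edges[k]) & (radius < edges[k + 1])
        energies = [np.sum(np.abs(s[band]) ** 2) for s in specs]
        fused[band] = specs[int(np.argmax(energies))][band]
    return np.fft.ifft2(fused).real
```

Fusing a flat image (energy only in the low band) with a zero-mean checkerboard (energy only in the high band) yields their sum, each band coming from the image that dominates it.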
In a second aspect, an image processing apparatus is provided, comprising:
an image acquisition unit, configured to acquire at least two images of the same scene while the image processing apparatus is in motion, the at least two images using different focuses;
a motion estimation unit, configured to obtain the at least two images acquired by the image acquisition unit, arbitrarily select one of them as a template, extract feature points from the at least two images, match the feature points of each non-template image against the feature points of the template to obtain successfully matched feature-point pairs, and, according to those pairs, obtain the transformation parameters between each non-template image and the template;
an image registration unit, configured to obtain the at least two images and the transformation parameters obtained by the motion estimation unit, transform each non-template image according to the transformation parameters, and take the template together with the transformed images as the registration result of the at least two images;
an image fusion unit, configured to obtain the registration result formed by the image registration unit, perform focus fusion on the registration result of the at least two images, and obtain the fused image.
In a first possible implementation, in conjunction with the second aspect, the motion estimation unit comprises a feature extraction module, a feature matching module, and a parameter acquisition module;
the feature extraction module is configured to obtain the at least two images acquired by the image acquisition unit and to extract, according to a feature point extraction algorithm, the feature points and their feature descriptors from each of the at least two images;
the feature matching module is configured to obtain the feature points and feature descriptors extracted by the feature extraction module, compare the values of the feature descriptors of the feature points in each non-template image with the values of the feature descriptors of the corresponding feature points in the template, and select as successfully matched feature-point pairs those feature points whose descriptor values differ from the template's by no more than a preset range;
the parameter acquisition module is configured to obtain the transformation parameters between the at least two images according to the successfully matched feature-point pairs.
In a second possible implementation, in conjunction with the first possible implementation, the feature descriptor comprises edge, contour, and gradient information of the neighborhood around the feature point.
In a third possible implementation, in conjunction with the second aspect:
the image acquisition unit is further configured to obtain the displacement of the intelligent terminal's lens while the at least two images are acquired;
the parameter acquisition module is further configured to obtain the image size difference of the at least two images from that lens displacement and to obtain the transformation parameters according to the successfully matched feature-point pairs and the image size difference;
or,
the image acquisition unit is further configured to obtain the displacement of the intelligent terminal while the at least two images are acquired;
the parameter acquisition module is further configured to obtain the image translation parameter of the at least two images from that displacement and to obtain the transformation parameters between the at least two images according to the successfully matched feature-point pairs and the image translation parameter;
or,
the image acquisition unit is further configured to obtain both the lens displacement and the displacement of the intelligent terminal while the at least two images are acquired;
the parameter acquisition module is further configured to obtain the image size difference of the at least two images from the lens displacement and the image translation parameter of the at least two images from the terminal's displacement,
and to obtain the transformation parameters between the at least two images according to the successfully matched feature-point pairs, the image size difference, and the image translation parameter.
In a fourth possible implementation, in conjunction with the second aspect, the image fusion unit comprises a band decomposition module, a band fusion module, and an image merging module:
the band decomposition module is configured to transform the registration result of the at least two images into the frequency domain and decompose it by frequency, obtaining at least one frequency component for the registration result of each image, each frequency component corresponding to a fixed frequency interval of the image;
the band fusion module is configured to compare each corresponding frequency component across the registration results obtained by the band decomposition module and to choose, from each set of corresponding frequency components, the one with the larger gradient as that frequency component of the fused image;
the image merging module is configured to merge the frequency components chosen by the band fusion module, one per frequency component of the fused image, to generate the fused image.
The image processing method and device provided by embodiments of the invention register and focus-fuse the multiple acquired images, thereby obtaining an image in which the different photographed subjects of a scene are all clear even while the intelligent terminal is moving.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for the description of the embodiments or the prior art are briefly introduced below; evidently, the drawings described below show only some embodiments of the present invention.
Fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a device applying an image processing apparatus according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of an image processing method according to another embodiment of the present invention;
Fig. 6 is a first schematic diagram of the image processing procedure in an image processing method according to an embodiment of the present invention;
Fig. 7 is a second schematic diagram of the image processing procedure in an image processing method according to an embodiment of the present invention;
Fig. 8 is a third schematic diagram of the image processing procedure in an image processing method according to an embodiment of the present invention;
Fig. 9 is a fourth schematic diagram of the image processing procedure in an image processing method according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments; obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention.
An image processing apparatus of an embodiment of the invention may specifically be the camera module of a camera, a video camera, a mobile phone, a palmtop computer, or a tablet computer. Specifically, referring to Fig. 1, the apparatus comprises:
an image acquisition unit 11, configured to acquire at least two images of the same scene while the image processing apparatus 1 is in motion, the at least two images using different focuses.
Here the image acquisition unit may learn of the motion of the image processing apparatus from a sensor mounted on the apparatus, or from the change in the relative position of the same object between two acquired images.
A motion estimation unit 12, configured to obtain the at least two images acquired by the image acquisition unit 11, arbitrarily select one of them as a template, extract feature points from the at least two images, match the feature points of each non-template image against the feature points of the template to obtain successfully matched feature-point pairs, and, according to those pairs, obtain the transformation parameters between each non-template image and the template.
An image registration unit 13, configured to obtain the at least two images acquired by the image acquisition unit 11 and the transformation parameters obtained by the motion estimation unit 12, transform each non-template image according to the transformation parameters, and take the template together with the transformed images as the registration result of the at least two images.
An image fusion unit 14, configured to obtain the registration result formed by the image registration unit 13, perform focus fusion on the registration result of the at least two images, and obtain the fused image.
Such an image processing apparatus can acquire differently focused images of the same scene, eliminate through motion estimation the inter-image differences caused by the intelligent terminal's movement during shooting, extract each subject of the scene from the differently focused images, and fuse the clearest rendering of each subject, thereby obtaining an image in which the different photographed subjects are all clear even while the intelligent terminal is moving.
Because at least two images with different focuses are fused, the fused image has a greater depth of field, or more depth-of-field levels, than any single image before fusion. For example, if image one is shot with subject one as the focus and image two with subject two as the focus, and subject one in image one and subject two in image two lie at different depths of field, the fused image contains the content of both depths of field, which enriches the image's depth-of-field levels. When the shooting focus for subject one in image one and the shooting focus for subject two in image two do not lie in the same focal plane (the plane containing the focus is called the focal plane; it is perpendicular to the axis of the stationary lens), the depth of field after fusion is greater than the depth of field of either subject before fusion and at most equal to the sum of the two subjects' depths of field before fusion, so the overall depth of field of the picture can be increased.
The image processing apparatus provided by embodiments of the invention registers and focus-fuses the multiple acquired images, thereby obtaining a photograph in which the different photographed subjects of a scene are all clear even while the intelligent terminal is moving.
Referring to Fig. 2, a mobile phone applying the image processing apparatus provided by embodiments of the invention, that is, a mobile phone with a camera function, is taken as an example. The image processing apparatus comprises:
an image acquisition unit 11, configured to acquire at least two images of the same scene, the at least two images using different focuses. Optionally, as shown in Fig. 2, the image acquisition unit 11 may further comprise a displacement sensor 111 and a speed sensor 112, where the displacement sensor 111 obtains the displacement of the lens during image acquisition and the speed sensor 112 obtains the displacement of the intelligent terminal; specifically, the speed sensor 112 may be a gyroscope or an acceleration sensor. It will be understood that the lens displacement and the terminal displacement during the acquisition of an image can be captured as metadata characterizing that image, in which case they are equivalent to features of the image. The lens displacement is the motion of the lens judged with the intelligent terminal as the reference, and it will be understood that a lens displacement normally changes the size of the picture; the terminal displacement is the motion of the intelligent terminal judged with the photographed subject as the reference, and it will be understood that a terminal displacement changes the scene content captured in the picture.
A motion estimation unit 12, configured to obtain the images acquired by the image acquisition unit 11, arbitrarily select one of the at least two images as a template, extract feature points from the at least two images, match the feature points of each non-template image against the feature points of the template to obtain successfully matched feature-point pairs, and, according to those pairs, obtain the transformation parameters between each non-template image and the template. Specifically, as shown in Fig. 3, the motion estimation unit 12 comprises a feature extraction module 121, a feature matching module 122, and a parameter acquisition module 123. The feature extraction module 121 obtains the images acquired by the image acquisition unit 11 and extracts, according to a feature point extraction algorithm, the feature points and their feature descriptors from each of the at least two images, where a feature point may be any point in an image and a feature descriptor may comprise information such as the edges, contours, and gradients of the neighborhood around the feature point. Specifically, the feature extraction module 121 selects the coordinates of a feature point and quantizes the local features of the region around those coordinates, that is, of the feature point's neighborhood: information such as the edges, contours, and gradients in the neighborhood is represented as vectors, and the set of these vectors is called the feature descriptor of the feature point. The feature point extraction algorithm may be the Scale-Invariant Feature Transform (SIFT).
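As a hedged illustration of quantizing a neighborhood's gradient information into a descriptor vector, the following is a drastically simplified, SIFT-like orientation histogram, not the actual SIFT descriptor; the function name and binning are assumptions for illustration.

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Quantize the gradient information of a feature point's neighborhood
    into a vector: a histogram of gradient orientations over the patch,
    weighted by gradient magnitude and normalized to unit length."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gy, gx)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

For a horizontal ramp patch every gradient points along +x, so all the weight falls into the first orientation bin.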
The feature matching module 122 is configured to obtain the feature points and feature descriptors extracted by the feature extraction module 121, compare the values of the feature descriptors of the feature points in each non-template image with the values of the feature descriptors of the corresponding feature points in the template, and select as successfully matched feature-point pairs those feature points whose descriptor values differ from the template's by no more than a preset range; that is, each vector in a feature descriptor is compared with the corresponding vector in the descriptor of the corresponding feature point in the template to judge whether their difference is smaller than the preset range.
The parameter acquisition module 123 is configured to obtain the successfully matched feature-point pairs from the feature matching module 122 and, according to them, obtain the transformation parameters between the at least two images; specifically, the transformation parameters for converting between two images may be obtained from the coordinates of the two feature points in each successfully matched pair, where the transformation parameters can characterize at least the positional relationship of the two feature points in their respective images and the image scaling ratio.
Optionally, the image acquisition unit 11 is further configured to obtain, through the displacement sensor 111, the displacement of the mobile phone's lens while the at least two images are acquired; the parameter acquisition module 123 obtains the image size difference of the at least two images from that lens displacement and obtains the transformation parameters according to the successfully matched feature-point pairs and the image size difference. Specifically, by the principles of imaging in physics, moving the lens changes the image distance. From the formula m = f / s (where m is the magnification of an object's image after imaging, f is the image distance, and s is the object distance), the image distance f yields the object's magnification in the picture; the lens displacement then gives the change in image distance, hence the difference in the object's magnification between the pictures, from which the image size difference of the two images can be calculated.
Then, taking the image size difference of the two images as the initial value of the transformation parameters, the transformation parameters are computed from the successfully matched feature-point pairs starting from that initial value. Here the image size difference of the two images can characterize at least the scaling ratio between the two images of an object, and the transformation parameters can characterize at least the positional relationship of two feature points in their respective images and the image scaling ratio. As above, the lens displacement is the motion of the lens judged with the intelligent terminal as the reference and normally changes the size of the picture, while the terminal displacement is the motion of the intelligent terminal judged with the photographed subject as the reference and changes the scene content captured in the picture.
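With the notation m = f / s used above (f: image distance, s: object distance), the size ratio induced by a lens displacement can be computed as in this small sketch; the assumption that the image distance changes by exactly the lens shift, and all numbers used, are illustrative simplifications rather than the patent's calculation.

```python
def size_ratio_from_lens_shift(image_distance, object_distance, lens_shift):
    """Magnification before and after the lens moves, using m = f / s;
    their ratio approximates the image size difference between the two
    shots. Assumes the image distance changes by exactly the lens shift."""
    m_before = image_distance / object_distance
    m_after = (image_distance + lens_shift) / object_distance
    return m_after / m_before
```

For example, a 1 mm lens shift at a 50 mm image distance scales the picture by a factor of 51/50 = 1.02, independent of the object distance.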
Alternatively, the image acquisition unit 11 is further configured to obtain, through the speed sensor 112, the displacement of the mobile phone while the at least two images are acquired; the parameter acquisition module 123 obtains the image translation parameter of the at least two images from that displacement and obtains the transformation parameters according to the successfully matched feature-point pairs and the image translation parameter.
Specifically, taking the image translation parameter of the two images as the initial value of the transformation parameters, the transformation parameters are computed from the successfully matched feature-point pairs starting from that initial value. Here the image translation parameter of the two images can characterize at least the positional relationship of the two successfully matched feature points in their respective images, and the transformation parameters can characterize at least that positional relationship and the scaling ratio between the two images at the locations of the matched feature points.
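One plausible concrete form for the transformation parameters described here, a scale factor plus a translation, can be fitted to the matched feature-point pairs by linear least squares. This is a simplified stand-in (a direct linear solve needs no initial value), not the patent's actual procedure; the model and function name are assumptions.

```python
import numpy as np

def fit_scale_translation(pts_img, pts_tmpl):
    """Least-squares fit of p_tmpl = s * p_img + (tx, ty) from matched
    feature-point pairs: scale ratio plus positional offset."""
    pts_img = np.asarray(pts_img, float)
    pts_tmpl = np.asarray(pts_tmpl, float)
    n = len(pts_img)
    # Unknowns [s, tx, ty]; each matched pair contributes two equations.
    A = np.zeros((2 * n, 3))
    b = pts_tmpl.reshape(-1)
    A[0::2, 0] = pts_img[:, 0]; A[0::2, 1] = 1.0  # s*x + tx = x'
    A[1::2, 0] = pts_img[:, 1]; A[1::2, 2] = 1.0  # s*y + ty = y'
    s, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return s, tx, ty
```

With three matched pairs related by a doubling and an offset of (3, 4), the fit recovers s = 2, tx = 3, ty = 4.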
Alternatively, the image acquisition unit 11 is further configured to obtain, through the displacement sensor 111, the displacement of the mobile phone's lens and, through the speed sensor 112, the displacement of the mobile phone while the at least two images are acquired; the parameter acquisition module 123 obtains the image size difference of the at least two images from the lens displacement and the image translation parameter of the at least two images from the phone's displacement, and obtains the transformation parameters according to the successfully matched feature-point pairs, the image size difference, and the image translation parameter.
Specifically, taking the mean of the image translation parameter and the image size difference of the two images as the initial value of the transformation parameters, the transformation parameters are computed from the successfully matched feature-point pairs starting from that initial value. Here the image size difference of the two images can characterize at least the scaling ratio of the two matched feature points in their respective images, the image translation parameter can characterize at least the positional relationship of the two matched feature points in their respective images, and the transformation parameters can characterize at least both that positional relationship and the scaling ratio between the two images at the locations of the matched feature points.
The image registration unit 13 is configured to arbitrarily select one of the at least two images as a template, transform the images among the at least two images other than the template according to the conversion parameter, and take the template together with the transformed images as the registration result of the at least two images.
The conversion parameter at least characterizes the positional relationship of two feature points in their respective images, as well as the image scaling ratio. It can be understood that, because the conversion parameter at least characterizes the positional relationship, in their respective images, of two successfully matched feature points and the image scaling ratio at the locations of those points, when one of the images serves as the template, transforming the other image according to the conversion parameter amounts to applying a scaling adjustment to the other image while translating it according to the positions of the successfully matched feature points, thereby registering the two images.
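As an illustration only (this code is not part of the patent text, and every name in it is hypothetical), the conversion parameter described above can be modeled as a single scale factor plus a translation and fitted to the successfully matched feature point pairs by least squares:

```python
import numpy as np

def estimate_conversion_parameter(pts_tpl, pts_img, s0=1.0, t0=(0.0, 0.0)):
    """Least-squares fit of a scale-plus-translation model mapping points
    of the non-template image onto the template: p_tpl ~ s * p_img + t.
    s0 and t0 are optional initial values (e.g. derived from sensor data);
    they are returned unchanged only when too few pairs are available."""
    pts_tpl = np.asarray(pts_tpl, float)
    pts_img = np.asarray(pts_img, float)
    if len(pts_tpl) < 2:
        return s0, np.asarray(t0, float)
    # Build the linear system A x = b with unknowns x = [s, tx, ty]:
    # even rows encode  s * x_img + tx = x_tpl,
    # odd rows encode   s * y_img + ty = y_tpl.
    n = len(pts_img)
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = pts_img[:, 0]
    A[0::2, 1] = 1.0
    A[1::2, 0] = pts_img[:, 1]
    A[1::2, 2] = 1.0
    b = pts_tpl.reshape(-1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1:]
```

The fitted scale plays the role of the scaling ratio between the two images and the fitted translation plays the role of the positional relationship; a full implementation would also handle rotation and outlier pairs.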
Specifically, the image registration unit 13 is configured to obtain the at least two images captured by the image acquisition unit 11 and the conversion parameter obtained by the motion estimation unit 12, transform the images among the at least two images other than the template according to the conversion parameter, and take the template together with the transformed images as the registration result of the at least two images.
The image fusion unit 14 is configured to obtain the registration result of the at least two images formed by the image registration unit 13, and to perform focus fusion on that registration result to obtain the fused image. Specifically, the image fusion unit 14 comprises a band decomposition module 141, a band fusion module 142, and an image fusion module 143. The band decomposition module 141 maps the registration result of the at least two images to the frequency domain and decomposes it by frequency, obtaining at least one frequency component of the registration result of each of the at least two images; each frequency component corresponds to a fixed frequency interval of the image, that is, in the frequency domain, frequency intervals characterize spatial intervals in the image. The band fusion module 142 is configured to compare each pair of corresponding frequency components among the at least one frequency component of the registration results obtained by the band decomposition module 141, and to choose in each pair the frequency component with the larger gradient, that is, the component whose frequency changes faster, as a frequency component of the fused image. The image fusion module 143 is configured to merge, for each of the at least one frequency component, the components chosen by the band fusion module 142, generating the fused image.
Of course, when the apparatus is a mobile phone, it may further comprise a display 15. The image fusion unit 14 may additionally enhance the input image, including denoising, contrast enhancement, sharpening, and color processing. The user may view the image on the display 15, or the image may be transferred to a network, a personal computer (PC), or the like.
The image processing apparatus provided by the embodiments of the present invention performs registration and focus fusion on the multiple captured images, effectively extends the depth of field, and obtains, while the intelligent terminal is in motion, a photograph in which the different photographed subjects of one scene are all clear.
Based on existing intelligent terminal capabilities, the functions of the modules in the above apparatus embodiment may be realized under the overall control of a central processing unit (Central Processing Unit, CPU); that is, the functions performed by the units of the image processing apparatus provided by the above embodiment of the present invention may be performed by a processor. Referring to Fig. 3, the image processing apparatus 3 comprises: at least one processor 31, a bus 32, a memory 33, a communication interface 34, and a lens 35. The at least one processor 31, the memory 33, the communication interface 34, and the lens 35 are connected by the bus 32 and communicate with one another, wherein:
The bus 32 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 32 may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only a single thick line is drawn in Fig. 3, but this does not mean that there is only one bus or one type of bus.
The memory 33 is configured to store executable program code and corresponding data, the program code comprising computer operation instructions. The memory 33 may comprise high-speed RAM and may also comprise non-volatile memory; in the present invention, the memory at least stores the feature point extraction algorithm, the multiband image decomposition algorithm, and the multiband image fusion algorithm.
The communication interface 34 is configured to exchange data between the image processing apparatus 3 and the outside world.
The lens 35 is configured to capture at least two images of the same scene while the intelligent terminal is in motion, the at least two images being captured with different focuses.
The processor 31 is configured to arbitrarily select one of the at least two images as a template, extract feature points from the at least two images, match the feature points of the images among the at least two images other than the template with the feature points of the template to obtain successfully matched feature point pairs, and obtain, according to the successfully matched feature point pairs, the conversion parameter between the template and each image other than the template;
The processor 31 is further configured to transform the images other than the template according to the conversion parameter, and to take the template together with the transformed images as the registration result of the at least two images;
The processor 31 is further configured to perform focus fusion on the registration result of the at least two images to obtain the fused image.
Optionally, the processor 31 is specifically configured to extract, from each of the at least two images, feature points and their feature descriptors according to a feature point extraction algorithm; to compare the values of the feature descriptors of the feature points in the different images with the values of the feature descriptors of the corresponding feature points in the template; and to select, as successfully matched feature point pairs, the feature points whose descriptor values differ from those of the corresponding feature points in the template within a preset range.
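The descriptor comparison just described can be sketched as follows; this is purely illustrative (not part of the patent), treating each feature descriptor as a numeric vector and the "preset range" as a distance threshold, with all names hypothetical:

```python
import numpy as np

def match_within_preset_range(desc_tpl, desc_img, preset_range=0.5):
    """For each descriptor of the non-template image, find the template
    descriptor with the smallest difference; keep the pair only when that
    difference lies within the preset range. Returns index pairs
    (template_index, image_index) of the successfully matched points."""
    desc_tpl = np.asarray(desc_tpl, float)
    pairs = []
    for j, d in enumerate(np.asarray(desc_img, float)):
        diffs = np.linalg.norm(desc_tpl - d, axis=1)  # descriptor differences
        i = int(np.argmin(diffs))
        if diffs[i] <= preset_range:                  # within preset range?
            pairs.append((i, j))
    return pairs
```

A real descriptor (edge, contour, and gradient information of the feature point's neighborhood) would be higher-dimensional, but the selection rule is the same: keep only pairs whose descriptor difference falls inside the preset range.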
Optionally, the apparatus further comprises a displacement sensor 36 and/or a speed sensor 37 connected to the bus 32.
The displacement sensor 36 is specifically configured to obtain the displacement of the lens of the intelligent terminal while the at least two images are captured; the processor 31 obtains the image size difference of the at least two images according to that lens displacement, and obtains the conversion parameter according to the successfully matched feature point pairs and the image size difference;
Or,
The speed sensor 37 is configured to obtain the displacement of the intelligent terminal while the at least two images are captured; the processor 31 obtains the image translation parameter of the at least two images according to that displacement, and obtains the conversion parameter according to the successfully matched feature point pairs and the image translation parameter;
Or,
The displacement sensor 36 is configured to obtain the displacement of the lens of the intelligent terminal while the at least two images are captured, and the speed sensor 37 is configured to obtain the displacement of the intelligent terminal during the same capture; the processor 31 obtains the image size difference of the at least two images according to the lens displacement, and obtains the image translation parameter of the at least two images according to the displacement of the intelligent terminal;
The conversion parameter between the at least two images is then obtained according to the successfully matched feature point pairs, the image size difference, and the image translation parameter.
Specifically, the processor 31 is further configured to map the registration result of the at least two images to the frequency domain and decompose it by frequency, obtaining at least one frequency component of the registration result of each of the at least two images, each frequency component corresponding to a fixed frequency interval of the image; to compare each pair of corresponding frequency components among the at least one frequency component of the registration results, choosing in each pair the frequency component with the larger gradient as a frequency component of the fused image; and to merge, for each of the at least one frequency component, the chosen frequency components to generate the fused image.
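For illustration only (not part of the patent text), the decompose-compare-merge procedure can be sketched with a two-band split; a box filter stands in for the multiband (wavelet) decomposition, and all names below are hypothetical:

```python
import numpy as np

def box_blur(img, k=5):
    """Box filter used as a stand-in low-pass; a wavelet decomposition
    would be used in practice."""
    pad = k // 2
    p = np.pad(np.asarray(img, float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    """Split an image into two frequency components: low and high."""
    low = box_blur(img)
    return [low, np.asarray(img, float) - low]

def gradient_energy(band):
    """Mean gradient magnitude of one frequency component."""
    gy, gx = np.gradient(band)
    return float(np.mean(np.hypot(gx, gy)))

def fuse(img_a, img_b):
    """Per band, keep the component with the larger gradient, then merge
    the chosen components to reconstruct the fused image."""
    bands_a, bands_b = decompose(img_a), decompose(img_b)
    chosen = [a if gradient_energy(a) >= gradient_energy(b) else b
              for a, b in zip(bands_a, bands_b)]
    return sum(chosen)
```

Because the split is exact (low + high reconstructs the input), merging the chosen components yields an image that takes each frequency interval from whichever input is sharper there.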
The user may interact with the image processing apparatus provided by the embodiments of the present invention through a touch screen or buttons, thereby controlling the apparatus.
The image processing method provided by the embodiments of the present invention is applied to the apparatus provided by the apparatus embodiments of the present invention, and may also be applied directly to intelligent terminals such as cameras, mobile phones, and palmtop computers. Specifically, referring to Fig. 4, the image processing method comprises:
401. While the intelligent terminal is in motion, capture at least two images of the same scene, the at least two images being captured with different focuses.
402. Extract feature points from each of the at least two images, arbitrarily select one of the at least two images as a template, match the feature points of the images among the at least two images other than the template with the feature points of the template to obtain successfully matched feature point pairs, and obtain, according to the successfully matched feature point pairs, the conversion parameter between the template and each image other than the template.
403. Transform the images among the at least two images other than the template according to the conversion parameter, and take the template together with the transformed images as the registration result of the at least two images.
404. Perform focus fusion on the registration result of the at least two images to obtain the fused image.
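The four steps above can be sketched in miniature, purely for illustration (not part of the patent text): here an exhaustive integer-shift search stands in for feature matching and conversion-parameter estimation, and a per-pixel sharpness rule stands in for multiband focus fusion; all names are hypothetical:

```python
import numpy as np

def sharpness(img):
    """Gradient magnitude map; sharper (in-focus) regions score higher."""
    gy, gx = np.gradient(np.asarray(img, float))
    return np.hypot(gx, gy)

def register_by_shift(template, other, max_shift=3):
    """Steps 402-403 in miniature: search for the integer translation that
    best aligns `other` to `template`, and return the transformed image."""
    best, best_err = other, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(other, dy, 0), dx, 1)
            err = float(np.mean((shifted - template) ** 2))
            if err < best_err:
                best, best_err = shifted, err
    return best

def focus_fuse(template, registered):
    """Step 404 in miniature: per pixel, keep the sharper image's value."""
    mask = sharpness(template) >= sharpness(registered)
    return np.where(mask, template, registered)
```

The real method operates on frequency components rather than individual pixels and estimates a full conversion parameter (translation plus scaling) from matched feature points, but the registration-then-fusion structure is the same.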
In this way, the image processing method provided by the embodiment of the present invention obtains differently focused images of the same scene, eliminates, by motion estimation, the effects produced by the motion of the intelligent terminal during shooting, extracts each subject of the scene from the image in which it is sharpest, and fuses the sharpest renditions of the different subjects, thereby obtaining, while the intelligent terminal is in motion, an image in which the different subjects of the scene are all clear.
The image processing method provided by the embodiments of the present invention performs feature matching, image transformation, and focus fusion on the multiple captured images, effectively improving image sharpness.
The image processing method provided by the embodiments of the present invention is applied to the apparatus provided by the apparatus embodiments of the present invention, and may also be applied directly to intelligent terminals such as cameras, mobile phones, and palmtop computers. Specifically, referring to Fig. 5, the image processing method comprises:
501a. While the intelligent terminal is in motion, capture at least two images of the same scene, the at least two images being captured with different focuses.
501b. Obtain the displacement of the lens of the intelligent terminal and/or the displacement of the intelligent terminal while the at least two images are captured. The lens displacement is the motion of the lens judged with the intelligent terminal as the reference object; it can be understood that when the lens is displaced, the size of the picture changes. The displacement of the intelligent terminal is the motion of the terminal judged with the photographed subject as the reference object; it can be understood that when the terminal is displaced, the scene content captured in the picture changes.
502a. Arbitrarily select one of the at least two images as a template, and extract from each of the at least two images feature points and their feature descriptors according to a feature point extraction algorithm. A feature descriptor comprises information such as the edges, contour, and gradient of the neighborhood of the feature point.
502b. Obtain the image size difference of the at least two images according to the displacement of the lens of the intelligent terminal during capture, and/or obtain the image translation parameter of the at least two images according to the displacement of the intelligent terminal during capture.
503. Compare the values of the feature descriptors of the feature points belonging to the different images with the values of the feature descriptors of the corresponding feature points in the template, and select as successfully matched feature point pairs the feature points whose descriptor values differ from those of the corresponding feature points in the template within a preset range.
504. Obtain the conversion parameter between the at least two images according to the successfully matched feature point pairs and the image size difference;
Or,
Obtain the conversion parameter between the at least two images according to the successfully matched feature point pairs and the image translation parameter;
Or,
Obtain the conversion parameter between the at least two images according to the successfully matched feature point pairs, the image size difference, and the image translation parameter.
505. Transform the images among the at least two images other than the template according to the conversion parameter, and take the template together with the transformed images as the registration result of the at least two images.
506. Map the registration result of the at least two images to the frequency domain and decompose it by frequency, obtaining at least one frequency component of the registration result of each of the at least two images. Each frequency component corresponds to a fixed frequency interval of the image; that is, in the frequency domain, frequency intervals characterize spatial intervals in the image.
507. Compare each pair of corresponding frequency components among the at least one frequency component of the registration results of the at least two images, and choose in each pair the frequency component with the larger gradient as a frequency component of the fused image.
508. Merge, for each of the at least one frequency component, the chosen frequency components of the fused image to generate the fused image.
The image processing method provided by the embodiments of the present invention performs feature matching, image transformation, and focus fusion on the multiple captured images, effectively improving image sharpness.
Taking mobile phone photography as a concrete example, described with reference to Fig. 2: the user opens the image capture software on the mobile phone, such as the camera function. The lens in the image acquisition unit 11 projects light onto the image sensor, which converts the optical signal into an electrical signal to form data for at least two images (this embodiment is described with two images). The motion estimation unit 12 arbitrarily selects one of the two images as a template; the feature extraction module 121 processes the image data and extracts the feature points of the two images; the feature matching module 122 matches the feature points; and the parameter acquisition module 123 obtains the conversion parameter between the two images according to the successfully matched feature point pairs. The image registration unit 13 transforms the image other than the template according to the conversion parameter, and takes the template together with the transformed image as the registration result of the two images.
The band decomposition module 141 maps the registration result of the two images to the frequency domain using a multiband image decomposition method and decomposes it by frequency with wavelets, obtaining at least one frequency component of the registration result of each of the two images, each frequency component corresponding to a fixed frequency interval of the image. The band fusion module 142 compares each pair of corresponding frequency components among the at least one frequency component of the registration results, choosing in each pair the frequency component with the larger gradient as a frequency component of the fused image. The image fusion module 143 then uses a multiband image fusion method to merge, for each of the at least one frequency component, the part of the respective image corresponding to the chosen component, and generates the fused image. Here, the multiband image fusion method uses wavelet analysis ("wavelet" elsewhere in the text is used in the same sense as "wavelet analysis" here); of course, wavelet analysis is only one possibility, and no limitation is intended.
Optionally, before the parameter acquisition module 123 obtains the conversion parameter between the two images according to the successfully matched feature point pairs, the method further comprises: the displacement sensor 111 obtains the displacement of the lens of the mobile phone while the two images are captured; the parameter acquisition module 123 obtains the image size difference of the two images according to that lens displacement, obtains the initial value of the conversion parameter according to the image size difference, and then refines the initial value according to the successfully matched feature point pairs to obtain the conversion parameter between the two images. The lens displacement is the motion of the lens judged with the intelligent terminal as the reference object; it can be understood that when the lens is displaced, the size of the picture changes. The displacement of the intelligent terminal is the motion of the terminal judged with the photographed subject as the reference object; it can be understood that when the terminal is displaced, the scene content captured in the picture changes.
Alternatively, the speed sensor 112 obtains the displacement of the mobile phone; the parameter acquisition module 123 obtains the image translation parameter of the two images according to that displacement, obtains the initial value of the conversion parameter from the image translation parameter, and then refines the initial value according to the successfully matched feature point pairs to obtain the conversion parameter between the two images.
Alternatively, the displacement sensor 111 obtains the displacement of the lens of the mobile phone while the two images are captured, and the speed sensor 112 obtains the displacement of the mobile phone; the parameter acquisition module 123 obtains the image size difference of the two images according to the lens displacement and obtains the image translation parameter of the two images according to the displacement of the mobile phone. The parameter acquisition module 123 then obtains the initial value of the conversion parameter from the image size difference and the image translation parameter, and refines that initial value according to the successfully matched feature point pairs to obtain the conversion parameter between the two images. This embodiment is described with the conversion parameter between the two images generated with reference to both the obtained image size difference and the image translation parameter.
Referring to the apparatus shown in Fig. 2, taking a mobile phone as an example, the captured image 1 61 and image 2 62 of the same scene are fused, as shown in Fig. 6. The focus of image 1 61 is far from the lens and the focus of image 2 62 is near the lens; that is, the regions 612 and 613 far from the lens in image 1 61 are clear regions while the region 611 near the lens is a blurred region, and the region 621 near the lens in image 2 62 is a clear region while the regions 622 and 623 far from the lens are blurred regions. Specifically, the processing procedure provided by the embodiments of the present invention is as follows; the whole image processing flow is divided into three parts: image acquisition (steps 501a and 501b in Fig. 5), image registration (steps 502-505 in Fig. 5), and image fusion (steps 506-508 in Fig. 5).
In the image acquisition part, the mobile phone sets different focuses and captures the same scene, obtaining image 1 61 and image 2 62; the focus may be set manually or automatically. Because the focuses differ, different regions of image 1 61 and image 2 62 differ in sharpness: the region near each focus is comparatively clear, while the other regions, being away from the focus, appear blurred. In addition, while image 1 61 and image 2 62 are captured, unsteady hand-holding inevitably causes relative jitter of the mobile phone, and the difference in focus causes the lens to extend or retract, so the fields of view of image 1 61 and image 2 62 differ, and the same object appears at slightly different sizes in the two images; as shown in Fig. 6, region 611 and region 621 differ in picture position, and region 612 and region 622 differ in object size.
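The relation between lens extension and picture size can be illustrated with a simplified pinhole-camera model; this sketch is an assumption for illustration only (the patent does not specify this formula), and the function name and parameters are hypothetical:

```python
def picture_size_ratio(focal_length_mm, lens_displacement_mm):
    """Illustrative pinhole-model estimate of the image size difference
    caused by extending the lens by `lens_displacement_mm`: for a distant
    subject, projected size scales with the effective focal length, so the
    second image's size relative to the first is (f + d) / f.
    Hypothetical helper, not part of the patent text."""
    return (focal_length_mm + lens_displacement_mm) / focal_length_mm
```

For example, under this simplification, a 0.2 mm extension of a 4 mm lens would enlarge the picture by about 5 percent, which is the kind of image size difference the parameter acquisition module 123 compensates for.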
During the above process, the displacement sensor 111 obtains the displacement of the lens while the two images are captured, and the speed sensor 112 obtains the displacement of the mobile phone during the same capture; specifically, the speed sensor 112 may be a gyroscope or an acceleration sensor.
In the image registration part, as shown in Fig. 7, image 1 61 and image 2 62 are registered, eliminating the effect on focus fusion of the differing fields of view and the differing sizes of the same object. Specifically, the motion estimation unit 12 arbitrarily selects one of the two images as the template (this embodiment is described with image 1 61 as the template; taking another picture as the template works identically). The feature extraction module 121 extracts, from image 1 61 and image 2 62 respectively, feature points (the dots shown in Fig. 7) and their feature descriptors according to the feature point extraction algorithm, as shown in Fig. 8. The feature matching module 122 compares the feature descriptors of the feature points belonging to image 1 61 with the feature descriptors of the corresponding feature points of image 2 62, and selects among these feature points a pair whose descriptor values in image 1 61 and image 2 62 differ within the preset range as a successfully matched feature point pair. Specifically, a feature descriptor may contain information such as the edges, corners, and contour of the neighborhood of the feature point; each vector in the feature descriptor of a feature point in image 2 62 is compared with the corresponding vector in the feature descriptor of that feature point in image 1 61, and it is judged whether their difference is smaller than the preset range. The parameter acquisition module 123 obtains the image size difference of image 1 61 and image 2 62 according to the displacement of the lens during capture, obtains the image translation parameter of image 1 61 and image 2 62 according to the displacement of the mobile phone during capture, computes the initial value of the conversion parameter from the size difference and the image translation parameter, and then refines that initial value according to the successfully matched feature point pairs to obtain the conversion parameter of image 1 61 and image 2 62.
The image registration unit 13 transforms image 2 62 according to the conversion parameter, eliminating the differences between the two images caused by the motion of the mobile phone during shooting, and takes the transformed image 2 62 together with image 1 61, which serves as the template, as the registration result of the two images.
In the image fusion part, the band decomposition module 141 maps the registration result of image 1 61 and image 2 62 to the frequency domain using the multiband image decomposition method and decomposes it with wavelets, obtaining multiple frequency components of the registration result of each image; each frequency component corresponds to a fixed frequency interval of the image, that is, in the frequency domain, frequency intervals characterize spatial intervals in the image. The band fusion module 142 compares each pair of corresponding frequency components among the at least one frequency component of the registration results of image 1 61 and image 2 62 and, with reference to the corresponding frequency intervals in the two images, chooses in each pair the frequency component with the larger gradient as a frequency component of the fused image 9; the image gradient near each image's focus is larger and the image there is clearer, namely in the regions 612 and 613 far from the lens in image 1 61 and the region 621 near the lens in image 2 62. The image fusion module 143 then uses the multiband image fusion method to merge, with wavelets, the chosen frequency components into the fused image 9, as shown in Fig. 9: the region 621 near the lens in image 2 62 becomes the region 91 near the lens of the fused image 9, the far region 612 of image 1 61 becomes the far region 92 of image 9, and the region 613 of image 1 61 becomes the region 93 of image 9, yielding an image 9 in which the subjects near the focuses of both image 1 61 and image 2 62 are all clear.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, applied to an intelligent terminal, characterized in that it comprises:
capturing at least two images of the same scene while said intelligent terminal is in motion, said at least two images being captured with different focuses;
arbitrarily selecting one of said at least two images as a template, extracting feature points from said at least two images, matching the feature points of the images in said at least two images other than said template with the feature points of said template to obtain successfully matched feature point pairs, and obtaining, according to said successfully matched feature point pairs, the conversion parameter between said template and the images in said at least two images other than said template;
transforming the images in said at least two images other than said template according to said conversion parameter, and taking said template together with the transformed images as the registration result of said at least two images;
performing focus fusion on the registration result of said at least two images to obtain a fused image.
2. The image processing method according to claim 1, characterized in that said extracting feature points from said at least two images and matching the feature points of the images in said at least two images other than said template with the feature points of said template to obtain successfully matched feature point pairs specifically comprises:
extracting, from each of said at least two images, feature points and the feature descriptors of said feature points according to a feature point extraction algorithm;
comparing the values of the feature descriptors of said feature points in the different images with the values of the feature descriptors of the corresponding feature points in said template, and selecting as successfully matched feature point pairs the feature points whose descriptor values differ from those of the corresponding feature points in said template within a preset range.
3. The image processing method according to claim 2, characterized in that said feature descriptor comprises: edge, contour, and gradient information of the neighborhood of the feature point.
4. The image processing method according to claim 1, wherein obtaining the conversion parameters between the template and the images other than the template among the at least two images further comprises:
obtaining the displacement of the lens of the intelligent terminal while the at least two images are captured, obtaining the image size difference of the at least two images according to that lens displacement, and obtaining the conversion parameters according to the successfully matched feature point pairs and the image size difference;
or,
obtaining the displacement of the intelligent terminal while the at least two images are captured, obtaining the image translation parameter of the at least two images according to that displacement, and obtaining the conversion parameters according to the successfully matched feature point pairs and the image translation parameter;
or,
obtaining both the lens displacement of the intelligent terminal and the displacement of the intelligent terminal while the at least two images are captured, obtaining the image size difference of the at least two images according to the lens displacement, and obtaining the image translation parameter of the at least two images according to the displacement of the intelligent terminal;
obtaining the conversion parameters between the at least two images according to the successfully matched feature point pairs, the image size difference, and the image translation parameter.
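Claim 4 combines a sensor-derived scale (from lens displacement), a sensor-derived translation (from terminal displacement), and the matched point pairs into one conversion parameter. A minimal sketch of such a combination, assuming a uniform-scale-plus-translation model `x' = s*x + t` and falling back to the sensor hints when too few pairs are available (the function name, the model, and the fallback rule are all assumptions, not the patent's method):

```python
def fit_conversion(pairs, scale_hint=1.0, shift_hint=(0.0, 0.0)):
    """Estimate (scale, (tx, ty)) mapping image points onto template points.

    pairs: list of ((x, y), (x2, y2)) matched (image, template) points.
    Falls back to the sensor-derived hints when fewer than two pairs exist.
    """
    if len(pairs) < 2:
        return scale_hint, shift_hint
    n = len(pairs)
    # Centroids of the image-side and template-side points.
    cx = sum(p[0][0] for p in pairs) / n
    cy = sum(p[0][1] for p in pairs) / n
    cX = sum(p[1][0] for p in pairs) / n
    cY = sum(p[1][1] for p in pairs) / n
    # Least-squares uniform scale from centred coordinates.
    num = sum((p[0][0] - cx) * (p[1][0] - cX) + (p[0][1] - cy) * (p[1][1] - cY)
              for p in pairs)
    den = sum((p[0][0] - cx) ** 2 + (p[0][1] - cy) ** 2 for p in pairs)
    s = num / den if den else scale_hint
    # Translation that maps the image centroid onto the template centroid.
    return s, (cX - s * cx, cY - s * cy)
```

In practice the sensor hints would also seed or sanity-check the fit rather than only serve as a fallback; the sketch keeps that logic out for brevity.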
5. The image processing method according to claim 1, wherein performing focus fusion on the registration result of the at least two images to obtain the fused image specifically comprises:
transforming the registration result of the at least two images into the frequency domain and decomposing it by frequency, thereby obtaining at least one frequency component of the registration result of each of the at least two images, each frequency component corresponding to a fixed frequency interval of the image;
comparing each corresponding frequency component of the at least one frequency component across the registration results of the at least two images, and choosing, from each set of corresponding frequency components, the component with the larger gradient as a frequency component of the fused image;
merging the chosen frequency components to generate the fused image.
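The band-wise fusion of claim 5 can be illustrated with a two-band spatial decomposition: a low band from a box blur and a high band as the residual, keeping per pixel the high-band response with the larger magnitude as a gradient-like sharpness proxy. This is a simplified stand-in for the claim's frequency-domain decomposition; the box-blur split, the per-pixel selection, and averaging of the low bands are assumptions made for brevity.

```python
import numpy as np

def fuse_focus(images):
    """Fuse registered same-scene images with different focuses.

    Splits each image into a low band (3x3 box blur) and a high band
    (residual), keeps per pixel the high band with the larger absolute
    response, and averages the low bands.
    """
    lows, highs = [], []
    for img in images:
        img = np.asarray(img, dtype=float)
        # Crude low-pass: 3x3 box blur built from nine shifted windows
        # over an edge-replicated padding.
        padded = np.pad(img, 1, mode="edge")
        low = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
        lows.append(low)
        highs.append(img - low)
    lows = np.stack(lows)
    highs = np.stack(highs)
    # Per pixel, take the high-frequency component with the larger magnitude,
    # i.e. the in-focus detail among the input images.
    pick = np.abs(highs).argmax(axis=0)
    fused_high = np.take_along_axis(highs, pick[None], axis=0)[0]
    return lows.mean(axis=0) + fused_high
```

Fusing an image with itself returns the image unchanged, since the selected high band plus the averaged low band reconstructs the original exactly; with genuinely differently focused inputs, each pixel inherits the detail of whichever frame is sharper there.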
6. An image processing apparatus, comprising:
an image acquisition unit, configured to capture at least two images of the same scene while the image processing apparatus is in motion, the at least two images using different focuses;
a motion estimation unit, configured to obtain the at least two images captured by the image acquisition unit, arbitrarily select one of the at least two images as a template, extract feature points from the at least two images, match the feature points of the images other than the template against the feature points of the template to obtain successfully matched feature point pairs, and obtain, according to the successfully matched feature point pairs, the conversion parameters between the template and the images other than the template;
an image registration unit, configured to obtain the at least two images and the conversion parameters obtained by the motion estimation unit, transform the images other than the template according to the conversion parameters, and take the template together with the transformed images as the registration result of the at least two images;
an image fusion unit, configured to obtain the registration result formed by the image registration unit, perform focus fusion on the registration result of the at least two images, and obtain the fused image.
7. The image processing apparatus according to claim 6, wherein the motion estimation unit comprises a feature extraction module, a feature matching module, and a parameter acquisition module;
the feature extraction module is configured to obtain the at least two images captured by the image acquisition unit and, according to a feature point extraction algorithm, extract the feature points and the feature descriptors of those feature points from each of the at least two images;
the feature matching module is configured to obtain the feature points and feature descriptors extracted by the feature extraction module, compare the values of the feature descriptors of the feature points belonging to the different images with the values of the feature descriptors of the corresponding feature points in the template, and select the feature points whose descriptor-value difference from the corresponding template feature point is within a preset range as the successfully matched feature point pairs;
the parameter acquisition module is configured to obtain the conversion parameters between the at least two images according to the successfully matched feature point pairs.
8. The image processing apparatus according to claim 7, wherein the feature descriptor comprises edge, contour, and gradient information of the neighborhood in which the feature point is located.
9. The image processing apparatus according to claim 6, wherein:
the image acquisition unit is further configured to obtain the displacement of the lens of the intelligent terminal while the at least two images are captured;
the parameter acquisition module is further configured to obtain the image size difference of the at least two images according to that lens displacement, and to obtain the conversion parameters according to the successfully matched feature point pairs and the image size difference;
or,
the image acquisition unit is further configured to obtain the displacement of the intelligent terminal while the at least two images are captured;
the parameter acquisition module is further configured to obtain the image translation parameter of the at least two images according to that displacement, and to obtain the conversion parameters between the at least two images according to the successfully matched feature point pairs and the image translation parameter;
or,
the image acquisition unit is further configured to obtain both the lens displacement of the intelligent terminal and the displacement of the intelligent terminal while the at least two images are captured;
the parameter acquisition module is further configured to obtain the image size difference of the at least two images according to the lens displacement, and to obtain the image translation parameter of the at least two images according to the displacement of the intelligent terminal;
and to obtain the conversion parameters between the at least two images according to the successfully matched feature point pairs, the image size difference, and the image translation parameter.
10. The image processing apparatus according to claim 6, wherein the image fusion unit comprises a band decomposition module, a band fusion module, and an image fusion module;
the band decomposition module is configured to transform the registration result of the at least two images into the frequency domain, decompose it by frequency, and obtain at least one frequency component of the registration result of each of the at least two images, each frequency component corresponding to a fixed frequency interval of the image;
the band fusion module is configured to compare each corresponding frequency component of the at least one frequency component across the registration results obtained by the band decomposition module, and to choose, from each set of corresponding frequency components, the component with the larger gradient as a frequency component of the fused image;
the image fusion module is configured to merge the frequency components chosen by the band fusion module to generate the fused image.
CN201310416756.XA 2013-09-12 2013-09-12 Image processing method and device Pending CN104463817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310416756.XA CN104463817A (en) 2013-09-12 2013-09-12 Image processing method and device

Publications (1)

Publication Number Publication Date
CN104463817A true CN104463817A (en) 2015-03-25

Family

ID=52909805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310416756.XA Pending CN104463817A (en) 2013-09-12 2013-09-12 Image processing method and device

Country Status (1)

Country Link
CN (1) CN104463817A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141966A1 (en) * 2007-11-30 2009-06-04 Microsoft Corporation Interactive geo-positioning of imagery
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
US20130063485A1 (en) * 2011-09-13 2013-03-14 Casio Computer Co., Ltd. Image processing device that synthesizes image
CN102521814A (en) * 2011-10-20 2012-06-27 华南理工大学 Wireless sensor network image fusion method based on multi-focus fusion and image splicing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO WENCHAO et al.: "Application of SIFT and Improved RANSAC Algorithm in Image Registration", Computer Engineering and Applications *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046676A (en) * 2015-08-27 2015-11-11 上海斐讯数据通信技术有限公司 Image fusion method and equipment based on intelligent terminal
CN105430266A (en) * 2015-11-30 2016-03-23 努比亚技术有限公司 Image processing method based on multi-scale transform and terminal
WO2017091927A1 (en) * 2015-11-30 2017-06-08 华为技术有限公司 Image processing method and dual-camera system
US10445890B2 (en) 2015-11-30 2019-10-15 Huawei Technologies Co., Ltd. Dual camera system and image processing method for eliminating an alignment vector of an occlusion area in an image alignment vector field
CN105631804A (en) * 2015-12-24 2016-06-01 小米科技有限责任公司 Image processing method and device
CN105631804B (en) * 2015-12-24 2019-04-16 小米科技有限责任公司 Image processing method and device
CN105812656B (en) * 2016-02-29 2019-05-31 Oppo广东移动通信有限公司 Control method, control device and electronic device
CN105812656A (en) * 2016-02-29 2016-07-27 广东欧珀移动通信有限公司 Control method, control apparatus and electronic apparatus
CN105611181A (en) * 2016-03-30 2016-05-25 努比亚技术有限公司 Multi-frame photographed image synthesizer and method
CN106875348A (en) * 2016-12-30 2017-06-20 成都西纬科技有限公司 A kind of heavy focus image processing method
CN106875348B (en) * 2016-12-30 2019-10-18 成都西纬科技有限公司 A kind of heavy focus image processing method
CN108573467A (en) * 2017-03-09 2018-09-25 南昌黑鲨科技有限公司 Track synthetic method, device and terminal based on image
CN108986181A (en) * 2018-06-15 2018-12-11 广东数相智能科技有限公司 Image processing method, device and computer readable storage medium based on dot
CN110660088B (en) * 2018-06-30 2023-08-22 华为技术有限公司 Image processing method and device
EP3800616A4 (en) * 2018-06-30 2021-08-11 Huawei Technologies Co., Ltd. Image processing method and device
WO2020001034A1 (en) * 2018-06-30 2020-01-02 华为技术有限公司 Image processing method and device
CN110660088A (en) * 2018-06-30 2020-01-07 华为技术有限公司 Image processing method and device
US11798147B2 (en) 2018-06-30 2023-10-24 Huawei Technologies Co., Ltd. Image processing method and device
CN108694705A (en) * 2018-07-05 2018-10-23 浙江大学 A kind of method multiple image registration and merge denoising
CN108694705B (en) * 2018-07-05 2020-12-11 浙江大学 Multi-frame image registration and fusion denoising method
CN110300267A (en) * 2019-07-19 2019-10-01 维沃移动通信有限公司 Photographic method and terminal device
CN110300267B (en) * 2019-07-19 2022-01-25 维沃移动通信有限公司 Photographing method and terminal equipment
WO2021102716A1 (en) * 2019-11-27 2021-06-03 深圳市晟视科技有限公司 Depth-of-field synthesis system, camera, and microscope
CN111242880A (en) * 2019-12-30 2020-06-05 广州市明美光电技术有限公司 Multi-depth-of-field image superposition method, equipment and medium for microscope
CN111311491A (en) * 2020-01-20 2020-06-19 当家移动绿色互联网技术集团有限公司 Image processing method and device, storage medium and electronic equipment
CN111932476A (en) * 2020-08-04 2020-11-13 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116883461A (en) * 2023-05-18 2023-10-13 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof
CN116883461B (en) * 2023-05-18 2024-03-01 珠海移科智能科技有限公司 Method for acquiring clear document image and terminal device thereof

Similar Documents

Publication Publication Date Title
CN104463817A (en) Image processing method and device
KR101893047B1 (en) Image processing method and image processing device
JP6298540B2 (en) Image display method and apparatus
CN102055834B (en) Double-camera photographing method of mobile terminal
EP3236391B1 (en) Object detection and recognition under out of focus conditions
CN107483834B (en) Image processing method, continuous shooting method and device and related medium product
WO2017080237A1 (en) Camera imaging method and camera device
CN103685940A (en) Method for recognizing shot photos by facial expressions
CN108073857A (en) The method and device of dynamic visual sensor DVS event handlings
EP3005286B1 (en) Image refocusing
CN110837750B (en) Face quality evaluation method and device
CN104270573A (en) Multi-touch focus imaging system and method, as well as applicable mobile terminal
CN105516590B (en) A kind of image processing method and device
CN110766706A (en) Image fusion method and device, terminal equipment and storage medium
US20160093028A1 (en) Image processing method, image processing apparatus and electronic device
CN114445315A (en) Image quality enhancement method and electronic device
CN106412423A (en) Focusing method and device
CN109167893A (en) Shoot processing method, device, storage medium and the mobile terminal of image
CN106598211A (en) Gesture interaction system and recognition method for multi-camera based wearable helmet
CN112036311A (en) Image processing method and device based on eye state detection and storage medium
US9995905B2 (en) Method for creating a camera capture effect from user space in a camera capture system
CN114331902B (en) Noise reduction method and device, electronic equipment and medium
CN107357424B (en) Gesture operation recognition method and device and computer readable storage medium
CN109981967B (en) Shooting method and device for intelligent robot, terminal equipment and medium
JP2011022927A (en) Hand image recognition device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20171027

Address after: Production workshop B2-5, South Factory (Phase 1) project, No. 2 Songshan Lake Road, Songshan Lake High-tech Industrial Development Zone, Dongguan, Guangdong Province, 523808

Applicant after: Huawei Device (Dongguan) Co., Ltd.

Address before: Building 2, Zone B, Huawei Base, Bantian, Longgang District, Guangdong, 518129

Applicant before: Huawei Device Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20150325