CN105096266A - Information processing method and device, and terminal - Google Patents


Info

Publication number
CN105096266A
CN105096266A (application CN201510334771.9A; granted as CN105096266B)
Authority
CN
China
Prior art keywords
image
current frame
terminal
target object
current location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510334771.9A
Other languages
Chinese (zh)
Other versions
CN105096266B (en)
Inventor
张圣杰
申世安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201510334771.9A
Publication of CN105096266A
Application granted
Publication of CN105096266B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses an information processing method. The method comprises the steps of: acquiring a target object in a first image, wherein the target object is part of the dynamic objects in the first image; determining a target location, wherein the target location is the location of the target object in the first image; acquiring a second image as a current frame by an image acquisition component of a terminal, wherein the image acquisition component acquires the second image after acquiring the first image; determining a current location, wherein the current location is the location of the target object in the current frame; when the target location is inconsistent with the current location, performing displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame; and displaying the compensated current frame on a display screen of the terminal. The invention also discloses an information processing device and a terminal.

Description

Information processing method and device, and terminal
Technical field
The present invention relates to electronic technology, and in particular to an information processing method, an information processing device, and a terminal.
Background technology
The camera is a very important function of present-day terminals such as smart phones, tablet computers, and smart watches. The technology for capturing locally dynamic scenes with an electronic device is relatively mature. However, when taking pictures with a terminal, a common situation arises in which many of the objects framed in the viewfinder are dynamic, for example a rotating electric fan in front of a street bustling with vehicles. In this situation, the user may want some of the dynamic objects in the viewfinder to appear at rest while the remaining dynamic objects keep their dynamic appearance; for example, the desired effect of the captured image is that the fan is static while the bustling street remains dynamic. In the prior art, to achieve this photographic effect and obtain a locally static image, the usual approach is to post-process the picture after shooting, for example by compositing several pictures in PS (Photoshop) to synthesize a locally static image. The shortcomings of the prior art are that PS processing demands too much skill of the user, since not every user can use PS; and PS processing cannot present the final effect in real time, that is, during shooting, some local moving objects in the viewfinder cannot be made to appear static, which fails to meet the growing demands of photography enthusiasts.
Summary of the invention
In view of this, embodiments of the present invention provide an information processing method, device, and terminal to solve at least one problem existing in the prior art, so that some local moving objects in the viewfinder can be presented statically in the picture for the user in real time.
The technical solutions of the embodiments of the present invention are achieved as follows:
In a first aspect, an embodiment of the present invention provides an information processing method, the method comprising:
acquiring a target object in a first image, the target object being a part of the dynamic objects among multiple dynamic objects in the first image;
determining a target location, the target location being the location of the target object in the first image;
acquiring a second image as a current frame by using an image acquisition component of a terminal, the image acquisition component acquiring the second image after acquiring the first image;
determining a current location, the current location being the location of the target object in the current frame;
when the target location is inconsistent with the current location, performing displacement compensation on the current feature according to the target feature to obtain a compensated current frame; and
displaying the compensated current frame on a display screen of the terminal.
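The claimed per-frame pipeline (locate the target object in the current frame, then shift it back to its target location before display) can be sketched as follows. This is a minimal illustration in Python with NumPy on grayscale frames; the patent does not specify a tracking method, so a brute-force sum-of-absolute-differences template search is used here as a hypothetical stand-in for the "determine current location" step, and the compensation simply copies the object's pixel block back to the target location (a real implementation would also fill the vacated region).

```python
import numpy as np

def find_object(frame, template):
    # Brute-force sum-of-absolute-differences search; a hypothetical
    # stand-in for "determine current location" (the patent does not
    # specify the tracking method).
    fh, fw = frame.shape
    th, tw = template.shape
    best_sad, best_pos = None, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            sad = np.abs(frame[y:y + th, x:x + tw] - template).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos

def compensate_frame(frame, template, target_pos):
    # If the object has drifted away from target_pos, copy its pixels
    # back so it appears static in the displayed frame.  A sketch only:
    # the vacated region is left untouched here.
    cy, cx = find_object(frame, template)
    ty, tx = target_pos
    out = frame.copy()
    th, tw = template.shape
    if (ty, tx) != (cy, cx):
        out[ty:ty + th, tx:tx + tw] = frame[cy:cy + th, cx:cx + tw]
    return out

# Toy demo: a 2x2 "object" captured at (2, 2) in the first frame has
# moved to (4, 4) in the current frame; compensation pins it at (2, 2).
first = np.zeros((8, 8)); first[2:4, 2:4] = 1.0
template = first[2:4, 2:4].copy()
current = np.zeros((8, 8)); current[4:6, 4:6] = 1.0
shown = compensate_frame(current, template, (2, 2))
```

In a production setting the search loop would be replaced by a proper tracker or a hardware-accelerated template match, but the control flow per frame is the same as the steps above.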
In a second aspect, an embodiment of the present invention provides an information processing apparatus, the apparatus comprising a first acquiring unit, a first determining unit, a first collecting unit, a second determining unit, a compensating unit, and a display unit, wherein:
the first acquiring unit is configured to acquire a target object in a first image, the target object being a part of the dynamic objects among multiple dynamic objects in the first image;
the first determining unit is configured to determine a target location, the target location being the location of the target object in the first image;
the first collecting unit is configured to acquire a second image as a current frame by using an image acquisition component of a terminal, the image acquisition component acquiring the second image after acquiring the first image;
the second determining unit is configured to determine a current location, the current location being the location of the target object in the current frame;
the compensating unit is configured to, when the target location is inconsistent with the current location, perform displacement compensation on the current feature according to the target feature to obtain a compensated current frame; and
the display unit is configured to display the compensated current frame on a display screen of the terminal.
In a third aspect, an embodiment of the present invention provides a terminal, the terminal comprising a display screen, an image acquisition component, and a processor, wherein:
the display screen is configured to display the compensated current frame;
the image acquisition component is configured to acquire the second image;
the processor is configured to: acquire a target object in a first image, the target object being a part of the dynamic objects among multiple dynamic objects in the first image; determine a target location, the target location being the location of the target object in the first image; acquire a second image as a current frame by using the image acquisition component, the image acquisition component acquiring the second image after acquiring the first image; determine a current location, the current location being the location of the target object in the current frame; when the target location is inconsistent with the current location, perform displacement compensation on the current feature according to the target feature to obtain a compensated current frame; and display the compensated current frame on the display screen of the terminal.
With the information processing method, device, and terminal provided by the embodiments of the present invention, a target object in a first image is acquired, the target object being a part of the dynamic objects among multiple dynamic objects in the first image; a target location is determined, the target location being the location of the target object in the first image; a second image is acquired as a current frame by using an image acquisition component of a terminal, the image acquisition component acquiring the second image after acquiring the first image; a current location is determined, the current location being the location of the target object in the current frame; when the target location is inconsistent with the current location, displacement compensation is performed on the current feature according to the target feature to obtain a compensated current frame; and the compensated current frame is displayed on the display screen of the terminal. In this way, some local moving objects in the viewfinder can be presented statically in the picture for the user in real time.
Accompanying drawing explanation
Fig. 1-1 is a schematic diagram of the hardware configuration of a mobile terminal for implementing embodiments of the present invention;
Fig. 1-2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1-1;
Fig. 1-3 is a schematic flowchart of the information processing method according to Embodiment 1 of the present invention;
Fig. 2 is a schematic flowchart of the information processing method according to Embodiment 2 of the present invention;
Fig. 3 is a schematic flowchart of the information processing method according to Embodiment 3 of the present invention;
Fig. 4 is a schematic flowchart of the information processing method according to Embodiment 4 of the present invention;
Fig. 5 is a schematic flowchart of the information processing method according to Embodiment 5 of the present invention;
Fig. 6-1 is a schematic flowchart of the information processing method according to Embodiment 6 of the present invention;
Fig. 6-2 is a schematic diagram of the target object in a scene of an embodiment of the present invention;
Fig. 6-3 is an effect diagram finally obtained by the technical solution provided by an embodiment of the present invention;
Fig. 7-1 is a first schematic diagram of the composition structure of the information processing apparatus according to Embodiment 7 of the present invention;
Fig. 7-2 is a second schematic diagram of the composition structure of the information processing apparatus according to Embodiment 7 of the present invention;
Fig. 7-3 is a third schematic diagram of the composition structure of the information processing apparatus according to Embodiment 7 of the present invention;
Fig. 7-4 is a fourth schematic diagram of the composition structure of the information processing apparatus according to Embodiment 7 of the present invention;
Fig. 7-5 is a fifth schematic diagram of the composition structure of the information processing apparatus according to Embodiment 7 of the present invention;
Fig. 8 is a schematic diagram of the composition structure of the terminal according to Embodiment 8 of the present invention.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are only intended to explain the technical solutions of the present invention, and are not intended to limit the protection scope of the present invention.
A mobile terminal implementing embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "part", or "unit" are used for denoting elements only to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module" and "part" can be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will appreciate that, except for elements specially used for mobile purposes, configurations according to embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1-1 is a schematic diagram of the hardware configuration of a mobile terminal for implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1-1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that permit wireless communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), and the like. The broadcast receiving module 111 can receive signal broadcasts via various types of broadcast systems. In particular, it can receive digital broadcasts via digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the forward link media (MediaFLO) data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to be suitable for the various broadcast systems providing broadcast signals as well as the above digital broadcast systems. The broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. This module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include WLAN (wireless local area network, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio-frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and the like.
The location information module 115 is a module for checking or acquiring the location information of the mobile terminal. A typical example of a location information module is GPS (global positioning system). According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information, and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information in terms of longitude, latitude, and altitude. Currently, the method of calculating location and time information uses three satellites and corrects the error of the calculated location and time information using one further satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location information in real time.
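To illustrate the triangulation idea described above, the following is a minimal two-dimensional sketch (not the actual GPS algorithm, which solves in three dimensions and additionally estimates the receiver clock bias using the fourth satellite): given three or more known anchor positions and measured distances, linearizing the range equations against the first anchor yields a small linear system for the position.

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    # Subtracting the first range equation |p - a0|^2 = d0^2 from each
    # of the others cancels the quadratic term in p, leaving the linear
    # system 2*(ai - a0) . p = d0^2 - di^2 + |ai|^2 - |a0|^2, which is
    # solved here in a least-squares sense.
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    a0, d0 = a[0], d[0]
    A = 2.0 * (a[1:] - a0)
    b = d0**2 - d[1:]**2 + (a[1:]**2).sum(axis=1) - (a0**2).sum()
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Toy check: a receiver at (1, 2) with three anchors at known positions.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
truth = np.array([1.0, 2.0])
distances = [np.linalg.norm(truth - np.array(p)) for p in anchors]
estimate = trilaterate_2d(anchors, distances)
```

With exact distances and three anchors the system is exactly determined and the estimate recovers the true position; with noisy measurements and more anchors, the least-squares solution averages out the error.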
The A/V input unit 120 is for receiving audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on a display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the structure of the mobile terminal. The microphone 122 can receive sound (audio data) via a microphone in an operating mode such as a phone call mode, a recording mode, or a voice recognition mode, and process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise-cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, dome switches, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a jog wheel, a jog switch, and the like. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled to an external device. The sensing unit 140 may include a proximity sensor, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external devices may include wired or wireless headset ports, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identification module (UIM), a subscriber identification module (SIM), a universal subscriber identification module (USIM), and the like. In addition, the device having the identification module (hereinafter referred to as the "identification device") may take the form of a smart card; therefore, the identification device can be connected to the mobile terminal 100 via a port or other connecting means. The interface unit 170 can be used to receive input (e.g., data information, power, etc.) from an external device, transfer the received input to one or more elements within the mobile terminal 100, or transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 can serve as a path allowing power to be supplied from the cradle to the mobile terminal 100, or as a path allowing various command signals input from the cradle to be transferred to the mobile terminal. Various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing video or images and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
When the mobile terminal is in a mode such as a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, or a broadcast receiving mode, the audio output module 152 can convert audio data received by the wireless communication unit 110 or stored in the memory 160 into audio signals and output them as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 can provide output to notify of the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 can provide output in different manners to notify of the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying of the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for processing and control operations performed by the controller 180, or temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, videos, etc.). Moreover, the memory 160 can store data on the vibrations and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc, and the like. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and the like. In addition, the controller 180 may include a multimedia module for reproducing or playing back multimedia data; the multimedia module may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 can perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power and, under the control of the controller 180, provides the appropriate power required to operate the various elements and components.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals such as folder-type, bar-type, swing-type, and slide-type mobile terminals. Therefore, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 shown in Fig. 1-1 may be constructed to operate with wired and wireless communication systems that transmit data via frames or packets, as well as with satellite-based communication systems.
A communication system in which the mobile terminal according to the present invention can operate will now be described with reference to Fig. 1-2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include, for example, frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long-term evolution (LTE)), global system for mobile communications (GSM), and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to Fig. 1-2, a CDMA wireless communication system may include multiple mobile terminals 100, multiple base stations (BSs) 270, base station controllers (BSCs) 275, and a mobile switching center (MSC) 280. The MSC 280 is constructed to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also constructed to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL, or xDSL. It will be appreciated that the system shown in Fig. 1-2 may include multiple BSCs 275.
Each BS 270 can serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support multiple frequency assignments, each frequency assignment having a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In such a case, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, individual sectors of a particular BS 270 may be referred to as multiple cell sites.
As shown in Fig. 1-2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 1-1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. In Fig. 1-2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 assist in locating at least one of the multiple mobile terminals 100.
In Fig. 1-2, multiple satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 shown in Fig. 1-1 is typically constructed to cooperate with the satellites 300 to obtain the desired positioning information. Instead of or in addition to GPS tracking technology, other technologies capable of tracking the location of the mobile terminal may be used. In addition, at least one GPS satellite 300 can selectively or additionally handle satellite DMB transmission.
As a typical operation of a wireless communication system, the BS 270 receives reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communication. Each reverse-link signal received by a particular base station 270 is processed by that particular BS 270. The resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including coordination of the soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above mobile terminal hardware configuration and communication system, embodiments of the method of the present invention are proposed below.
The technical solution of the present invention is further elaborated below with reference to the drawings and specific embodiments.
Embodiment one
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method applied to a terminal. The functions realized by the information processing method may be realized by a processor in the terminal calling program code, and the program code may of course be stored in a computer storage medium; it can thus be seen that the terminal comprises at least a processor and a storage medium.
Fig. 1-3 is a schematic flowchart of the information processing method of embodiment one of the present invention. As shown in Fig. 1-3, the information processing method comprises:
Step S101: acquiring a target object in a first image, the target object being a part of the plurality of dynamic objects in the first image;
Here, the first image comprises a plurality of dynamic objects, where an object refers to a thing, person, scenery, etc. shown in the first image, and the target object is a part of the plurality of dynamic objects in the first image. For example, the first image comprises an electric fan in a rotating state and, behind the electric fan, a street busy with traffic; the target object may be the electric fan, or may be the cars running in the street. Here, an object in fact refers to the image region representing the "object" in the image; for example, if the face of a person is the object, the object in the image is in fact the face region of the person; likewise, if a flower is the object, the flower in the picture is in fact the region of the flower.
Step S102: determining a target location, the target location being the location of the target object in the first image;
Here, the target location may be described by the coordinate range of the pixels of the target object in the first image. Taking an image of 1280 × 1960 pixels as an example, the location may be the coordinates of all the pixels of the target object in the first image, where the coordinates range from (0, 0) to (1280, 1960).
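As an illustration only (the embodiment does not prescribe a data structure), the pixel-coordinate range described above can be summarized as a bounding box computed from a binary mask marking the target object's pixels; the function name and the NumPy representation below are assumptions made for the sketch:

```python
import numpy as np

def target_position(mask):
    """Return the coordinate range of an object's pixels as an inclusive
    bounding box (row_min, row_max, col_min, col_max).

    mask is a 2-D boolean array with True at the target object's pixels.
    """
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()

# A small stand-in for a 1280x1960 frame: the object occupies one patch.
frame_mask = np.zeros((12, 19), dtype=bool)
frame_mask[3:6, 7:11] = True
print(target_position(frame_mask))  # (3, 5, 7, 10)
```

The same representation serves for both the target location (in the first image) and the current location (in the current frame), since both are pixel-coordinate ranges.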
Step S103: collecting, by an image acquisition component of the terminal, a second image as a current frame, the image acquisition component collecting the second image after collecting the first image;
Here, in a specific implementation, the image acquisition component may be a camera; since cameras are standard hardware on existing terminal devices, they are not described further.
Step S104: determining a current location, the current location being the location of the target object in the current frame;
Here, the current location may be described by the coordinate range of the pixels of the target object in the current frame. It should be noted that the first image and the current frame may have the same number of pixels.
Here, those skilled in the art may implement step S102 and step S104 by various existing techniques, which are not repeated here. For example, image edge detection based on wavelet transform may be performed to extract the image region of the target object; then, the coordinate range of the pixels of that image region in the first image is determined, i.e., the target location is determined, or the coordinate range of that image region in the current frame is determined, i.e., the current location is determined.
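For instance, a minimal stand-in for the wavelet-based edge detection mentioned above can be sketched with a plain finite-difference gradient; the helper names are hypothetical and the approach is only illustrative of how an object's coordinate range might be recovered from its edges:

```python
import numpy as np

def edge_magnitude(gray):
    """Gradient magnitude of one image channel, a simple edge measure
    (a stand-in for the wavelet-transform edge detection in the text)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def object_region(gray, threshold):
    """Inclusive bounding box (row_min, row_max, col_min, col_max) of the
    pixels whose edge magnitude exceeds the threshold."""
    rows, cols = np.nonzero(edge_magnitude(gray) > threshold)
    return rows.min(), rows.max(), cols.min(), cols.max()

# A flat background with one bright rectangle: edges ring its border.
img = np.zeros((20, 20))
img[5:10, 8:14] = 100.0
print(object_region(img, threshold=10.0))  # (4, 10, 7, 14)
```

A production implementation would use a proper wavelet or multiscale detector; this sketch only shows the shape of the computation, from edge map to coordinate range.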
Step S105: when the target location is inconsistent with the current location, performing displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame;
Here, displacement compensation may be implemented by various existing techniques and is therefore not described further.
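As one hypothetical illustration of such a technique (assuming pure translation of the whole frame, which is a simplification of what a real implementation would do), the frame can be shifted by the difference between the target location and the current location:

```python
import numpy as np

def compensate_shift(frame, target_pos, current_pos):
    """Translate the current frame so the target object moves back from
    current_pos to target_pos; positions are (row, col) of the same object
    point. np.roll wraps at the borders, which a real implementation would
    instead fill or crop."""
    drow = target_pos[0] - current_pos[0]
    dcol = target_pos[1] - current_pos[1]
    return np.roll(frame, shift=(drow, dcol), axis=(0, 1))

frame = np.zeros((8, 8), dtype=int)
frame[5, 6] = 1                                   # object drifted to (5, 6)
fixed = compensate_shift(frame, (2, 3), (5, 6))
print(np.argwhere(fixed == 1))                    # [[2 3]]
```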
Step S106: displaying the compensated current frame on a display screen of the terminal.
With the technical scheme provided by this embodiment of the present invention, the terminal can realize, while the user is shooting, the image effect the user wants, namely that a part of the plurality of dynamic objects in the image is static. There is no need, as in the prior art, to post-process the images: to obtain the same effect with the prior art, the user has to shoot multiple images continuously and then achieve the above effect by superimposing the images and retouching them in Photoshop. Compared with the prior art, the user obtains the effect in real time and needs no advanced image-processing skills.
Embodiment two
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method applied to a terminal. The functions realized by the information processing method may be realized by a processor in the terminal calling program code, and the program code may of course be stored in a computer storage medium; it can thus be seen that the terminal comprises at least a processor and a storage medium.
Fig. 2 is a schematic flowchart of the information processing method of embodiment two of the present invention. As shown in Fig. 2, the information processing method comprises:
Step S201: collecting a first image by an image acquisition component of the terminal;
Step S202: acquiring a first operation by an input device of the terminal, the first operation being for setting a target object from the first image;
Step S203: acquiring, based on the first operation, the target object in the first image, the target object being a part of the plurality of dynamic objects in the first image;
Step S204: determining a target location, the target location being the location of the target object in the first image;
Step S205: collecting, by the image acquisition component, a second image as a current frame, the image acquisition component collecting the second image after collecting the first image;
Here, in a specific implementation, the image acquisition component may be a camera; since cameras are standard hardware on existing terminal devices, they are not described further.
Step S206: determining a current location, the current location being the location of the target object in the current frame;
Here, those skilled in the art may implement step S204 and step S206 by various existing techniques, which are not repeated here.
Step S207: judging whether the target location is consistent with the current location; if yes, going to step S205; otherwise, going to step S208;
Step S208: performing displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame;
Step S209: displaying the compensated current frame on a display screen of the terminal.
Here, the input mode provided by the input device of the terminal may be a key-press input mode or a touch input mode; accordingly, the input device may be a key or a touch screen. In a specific implementation, supposing the input device is a touch screen, the user may touch an object in the first image, and the terminal thereby acquires the target object according to the touch operation (i.e., the first operation).
In a specific implementation, the method provided by this embodiment of the present invention may be embodied in the user's terminal in the form of application software (an APP). As a preferred embodiment, an image acquisition APP (e.g., a camera APP) in an existing terminal may be improved so that the existing camera APP implements the method provided by this embodiment. In a specific implementation, the method may be a function of the camera APP; when the user turns this function on, the terminal starts to perform the method provided by this embodiment. When the user wants to render static a local object among the dynamic objects in the viewfinder, the user starts the camera APP, the camera then shoots an image as the first image, the user designates one or more objects in the first image as the target object, and the terminal then starts to perform step S203.
Embodiment three
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method applied to a terminal. The functions realized by the information processing method may be realized by a processor in the terminal calling program code, and the program code may of course be stored in a computer storage medium; it can thus be seen that the terminal comprises at least a processor and a storage medium.
Fig. 3 is a schematic flowchart of the information processing method of embodiment three of the present invention. As shown in Fig. 3, the information processing method comprises:
Step 301: acquiring a target object in a first image, the target object being a part of the plurality of dynamic objects in the first image;
Here, the first image comprises a plurality of dynamic objects, where an object refers to a thing, person, scenery, etc. shown in the first image, and the target object is a part of the plurality of dynamic objects in the first image. For example, the first image comprises an electric fan in a rotating state and, behind the electric fan, a street busy with traffic; the target object may be the electric fan, or may be the cars running in the street.
Step 302: extracting a target feature and the target location by using an image feature extraction algorithm;
Here, the target feature is the image feature of the target object, and the target location is the location of the target feature in the first image.
Step 303: collecting, by the image acquisition component, a second image as a current frame, the image acquisition component collecting the second image after collecting the first image;
Here, in a specific implementation, the image acquisition component may be a camera; since cameras are standard hardware on existing terminal devices, they are not described further.
Step 304: extracting image features from the current frame by using the image feature extraction algorithm;
Step 305: determining a current feature from the image features in the current frame according to the target feature, the current feature being the image feature of the target object in the current frame;
Step 306: determining the current location according to the current feature, the current location being the location of the target object in the current frame;
Here, steps 304 to 306 in fact provide a method of determining the current location.
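As an illustrative sketch of steps 304 to 306 (using brute-force template matching in place of whatever feature extraction algorithm an implementation would actually choose; all names below are assumptions), the target feature can be treated as a small patch and located in the current frame by a sum-of-squared-differences search:

```python
import numpy as np

def match_template(frame, template):
    """Return the (row, col) of the window in `frame` that best matches
    `template` under the sum of squared differences -- i.e. the current
    location of the target feature in the current frame."""
    th, tw = template.shape
    best_ssd, best_pos = None, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            ssd = np.sum((frame[r:r + th, c:c + tw] - template) ** 2)
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos

template = np.array([[1., 2.], [3., 4.]])  # the target feature patch
frame = np.zeros((6, 6))
frame[3:5, 1:3] = template                 # the object now sits at (3, 1)
print(match_template(frame, template))     # (3, 1)
```

Comparing the returned position with the target location then decides whether step 307's displacement compensation is needed.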
Step 307: when the target location is inconsistent with the current location, performing displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame;
Step 308: displaying the compensated current frame on a display screen of the terminal.
Embodiment four
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method applied to a terminal. The functions realized by the information processing method may be realized by a processor in the terminal calling program code, and the program code may of course be stored in a computer storage medium; it can thus be seen that the terminal comprises at least a processor and a storage medium.
Fig. 4 is a schematic flowchart of the information processing method of embodiment four of the present invention. As shown in Fig. 4, the information processing method comprises:
Step 401: acquiring a target object in a first image, the target object being a part of the plurality of dynamic objects in the first image;
Here, the first image comprises a plurality of dynamic objects, where an object refers to a thing, person, scenery, etc. shown in the first image, and the target object is a part of the plurality of dynamic objects in the first image. For example, the first image comprises an electric fan in a rotating state and, behind the electric fan, a street busy with traffic; the target object may be the electric fan, or may be the cars running in the street.
Step 402: determining a target location, the target location being the location of the target object in the first image;
Step 403: collecting, by the image acquisition component, a second image as a current frame, the image acquisition component collecting the second image after collecting the first image;
Here, in a specific implementation, the image acquisition component may be a camera; since cameras are standard hardware on existing terminal devices, they are not described further.
Step 404: determining a current location, the current location being the location of the target object in the current frame;
Here, those skilled in the art may implement step 402 and step 404 by various existing techniques, which are not repeated here.
Step 405: judging whether the target location is consistent with the current location; if yes, going to step 403; otherwise, going to step 406;
Step 406: performing displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame;
Step 407: judging whether the compensated current frame satisfies a preset condition; if yes, going to step 408; otherwise, going to step 403;
Here, whether the compensated current frame satisfies the preset condition is judged, so as to obtain a first judgement result; when the first judgement result shows that the compensated current frame satisfies the preset condition, step 408 is entered; when the first judgement result shows that the compensated current frame does not satisfy the preset condition, step 403 is entered.
Step 408: displaying the compensated current frame on a display screen of the terminal.
Embodiment five
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method applied to a terminal. The functions realized by the information processing method may be realized by a processor in the terminal calling program code, and the program code may of course be stored in a computer storage medium; it can thus be seen that the terminal comprises at least a processor and a storage medium.
Fig. 5 is a schematic flowchart of the information processing method of embodiment five of the present invention. As shown in Fig. 5, the information processing method comprises:
Step 501: acquiring a target object in a first image, the target object being a part of the plurality of dynamic objects in the first image;
Here, the first image comprises a plurality of dynamic objects, where an object refers to a thing, person, scenery, etc. shown in the first image, and the target object is a part of the plurality of dynamic objects in the first image. For example, the first image comprises an electric fan in a rotating state and, behind the electric fan, a street busy with traffic; the target object may be the electric fan, or may be the cars running in the street.
Step 502: determining a target location, the target location being the location of the target object in the first image;
Step 503: collecting, by the image acquisition component, a second image as a current frame, the image acquisition component collecting the second image after collecting the first image;
Here, in a specific implementation, the image acquisition component may be a camera; since cameras are standard hardware on existing terminal devices, they are not described further.
Step 504: determining a current location, the current location being the location of the target object in the current frame;
Here, those skilled in the art may implement step 502 and step 504 by various existing techniques, which are not repeated here.
Step 505: when the target location is inconsistent with the current location, performing displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame;
Step 506: displaying the compensated current frame on a display screen of the terminal.
Step 507: acquiring a second operation by the input device of the terminal, the second operation being the user's determination of whether the compensated current frame meets the user's expectation;
Here, the second operation is similar to the above first operation and is therefore not described further.
Step 508: acquiring a first information based on the second operation, the first information being for showing whether the compensated current frame meets the user's expectation;
Step 509: when it is determined, based on the first information, that the compensated current frame meets the user's expectation, going back to step 503.
Embodiment six
In view of the shortcomings of the current technology, an embodiment of the present invention proposes an information processing method in which image feature extraction and matching are performed on the picture, and predictable image superposition of the picture is then realized, thereby providing the user with more choices and a more user-friendly experience.
In this embodiment of the present invention, the terminal should be equipped with at least the following devices: an image acquisition component for collecting pictures, a display unit for displaying images, and an input device for interacting with the user. In a specific implementation, the image acquisition component may be a camera, the display unit may be a display screen, and the input device may be a keyboard or a touch display screen; when the terminal has a touch display screen, the touch display screen serves both as an ordinary display screen and as the input device.
Fig. 6-1 is a schematic flowchart of the information processing method of embodiment six of the present invention. As shown in Fig. 6-1, the method comprises the following steps:
Step 601: the user turns on a switch on the shooting menu, the switch being for turning on and off the function of shooting a locally static image, and the user selects, by the input device of the terminal, the object in an initial frame that the user wishes to be locally static;
Here, the initial frame corresponds to the first image in the above embodiments, and the locally static object corresponds to the target object in the above embodiments.
Step 602: performing image feature extraction on the local object that the user wishes to be static;
Here, common image features include color features, texture features, shape features and spatial relationship features, where: extraction methods for color features include but are not limited to the color histogram, color sets, color moments, the color coherence vector and the color correlogram; extraction methods for texture features include but are not limited to the gray-level co-occurrence matrix, Tamura texture features, autoregressive texture models, the wavelet transform, etc.; extraction methods for shape features include the boundary feature method, Fourier shape descriptors, the geometric parameter method, shape invariant moments, the finite element method, the turning function method, wavelet descriptors, etc.; and extraction methods for spatial relationship features include but are not limited to model-based pose estimation methods, learning-based pose estimation methods, etc.
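Two of the simplest color features named above (the color histogram and color moments) can be sketched as follows; this is only an illustration of the listed feature types, not the extraction algorithm the embodiment requires:

```python
import numpy as np

def color_histogram(channel, bins=8):
    """Normalized intensity histogram of one image channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    return hist / hist.sum()

def color_moments(channel):
    """First two color moments of the channel: mean and standard deviation."""
    c = channel.astype(float)
    return c.mean(), c.std()

patch = np.full((4, 4), 128, dtype=np.uint8)  # a uniform gray patch
print(color_histogram(patch))                 # all mass in one bin
print(color_moments(patch))                   # (128.0, 0.0)
```

Either feature vector could serve as the "feature value" matched against the initial frame in step 603.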
Step 603: when the local object starts to move, performing feature extraction on the current frame image obtained by the camera, matching against the feature values of the initial frame, and thereby matching the position of the object in the current frame.
Step 604: when the positions differ, comparing the current frame with the initial frame to obtain the image data of the local object.
Step 605: superimposing the image data of the local object with the other background data, and then repeating steps 603 to 605 until an effect the user is satisfied with is obtained.
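The superposition in step 605 can be sketched as pasting the local object's pixel data from the initial frame back over the current frame at its target position, leaving the background pixels of the current frame untouched; the function name and box convention below are assumptions made for the illustration:

```python
import numpy as np

def superimpose(current_frame, object_patch, target_box):
    """Overlay the object's image data (taken from the initial frame) onto
    the current frame at its target position; target_box is an inclusive
    (row_min, row_max, col_min, col_max) bounding box."""
    out = current_frame.copy()
    r0, r1, c0, c1 = target_box
    out[r0:r1 + 1, c0:c1 + 1] = object_patch
    return out

first = np.zeros((6, 6), dtype=int)
first[1:3, 1:3] = 7                      # the object in the initial frame
obj = first[1:3, 1:3]
current = np.arange(36).reshape(6, 6)    # a later frame; background moved on
result = superimpose(current, obj, (1, 2, 1, 2))
print(result[1:3, 1:3])                  # the object stays pinned in place
```

Repeating this per frame is what keeps the chosen object motionless while the rest of the scene (the background data) continues to change.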
An example is given below. The first image comprises meteors and a rotating ferris wheel, both of which are dynamic; the user sets a target object as shown in Fig. 6-2. After processing by the technical scheme provided by this embodiment of the present invention, the final effect picture shown in Fig. 6-3 is obtained. It can be seen that the moving object chosen by the user (the ferris wheel) is motionless in Fig. 6-3, while the other moving objects in Fig. 6-3 (such as the starry sky) are unaffected and leave longer movement tracks in Fig. 6-3.
Embodiment seven
Based on the foregoing method embodiments, an embodiment of the present invention provides an information processing apparatus. The units of the apparatus, such as the first acquiring unit, the first determining unit, the first collecting unit, the second determining unit, the compensating unit and the display unit, as well as the modules included in each unit, may be realized by a processor in the terminal, and may of course also be realized by specific logic circuits. In a specific embodiment, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
Fig. 7-1 is a schematic diagram of the composition of the information processing apparatus of an embodiment of the present invention. As shown in Fig. 7-1, the apparatus 700 comprises a first acquiring unit 701, a first determining unit 702, a first collecting unit 703, a second determining unit 704, a compensating unit 705 and a display unit 706, wherein:
the first acquiring unit 701 is configured to acquire a target object in a first image, the target object being a part of the plurality of dynamic objects in the first image;
the first determining unit 702 is configured to determine a target location, the target location being the location of the target object in the first image;
the first collecting unit 703 is configured to collect, by the image acquisition component, a second image as a current frame, the image acquisition component collecting the second image after collecting the first image;
the second determining unit 704 is configured to determine a current location, the current location being the location of the target object in the current frame;
the compensating unit 705 is configured to, when the target location is inconsistent with the current location, perform displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame; and
the display unit 706 is configured to display the compensated current frame on a display screen of the terminal.
In this embodiment of the present invention, as shown in Fig. 7-2, the apparatus 700 further comprises a second collecting unit 707 and a second acquiring unit 708, wherein:
the second collecting unit 707 is configured to collect the first image by an image acquisition component of the terminal;
the second acquiring unit 708 is configured to acquire a first operation by an input device of the terminal, the first operation being for setting the target object from the first image; and
the first acquiring unit 701 is configured to acquire, based on the first operation, the target object in the first image, the target object being a part of the plurality of dynamic objects in the first image.
In this embodiment of the present invention, as shown in Fig. 7-3, the first determining unit 702 comprises a first extraction module 7021 and a first determination module 7022, wherein:
the first extraction module 7021 is configured to extract a target feature by using an image feature extraction algorithm, the target feature being the image feature of the target object; and
the first determination module 7022 is configured to determine a target location, the target location being the location of the target feature in the first image.
Accordingly, the second determining unit 704 comprises a second extraction module 7041, a second determination module 7042 and a third determination module 7043, wherein:
the second extraction module 7041 is configured to extract image features from the current frame by using the image feature extraction algorithm;
the second determination module 7042 is configured to determine a current feature from the image features in the current frame according to the target feature, the current feature being the image feature of the target object in the current frame; and
the third determination module 7043 is configured to determine the current location according to the current feature, the current location being the location of the target object in the current frame.
In this embodiment of the present invention, as shown in Fig. 7-4, the apparatus further comprises a third acquiring unit 709 configured to, when the target location is consistent with the current location, acquire another image as the current frame and then determine the current location.
In this embodiment of the present invention, as shown in Fig. 7-5, the apparatus 700 further comprises a judging unit 710 configured to judge whether the compensated current frame satisfies a preset condition, so as to obtain a first judgement result;
accordingly, the display unit 706 is configured to, when the first judgement result shows that the compensated current frame satisfies the preset condition, display the compensated current frame on the display screen of the terminal.
In this embodiment of the present invention, the apparatus further comprises a third acquiring unit configured to, when the first judgement result shows that the compensated current frame does not satisfy the preset condition, acquire another image as the current frame and then determine the current location.
In this embodiment of the present invention, the apparatus 700 further comprises a fourth acquiring unit 711, a fifth acquiring unit 712 and a third acquiring unit 710, wherein:
the fourth acquiring unit 711 is configured to acquire a second operation by the input device of the terminal, the second operation being the user's determination of whether the compensated current frame meets the user's expectation;
the fifth acquiring unit 712 is configured to acquire a first information based on the second operation, the first information being for showing whether the compensated current frame meets the user's expectation; and
the third acquiring unit 710 is configured to, when it is determined based on the first information whether the compensated current frame meets the user's expectation, acquire another image as the current frame and then determine the current location.
It should be noted here that the description of the above apparatus embodiments is similar to the description of the above method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments, which are therefore not repeated. For technical details not disclosed in the apparatus embodiments of the present invention, please refer to the description of the method embodiments of the present invention; to save space, they are not repeated here.
Embodiment eight
Based on the foregoing embodiments, an embodiment of the present invention provides a terminal. Fig. 8 is a schematic structural diagram of the composition of the terminal of an embodiment of the present invention. As shown in Fig. 8, the terminal 800 comprises a display screen 801, an image acquisition component 802 and a processor 803, wherein:
the display screen 801 is configured to display the compensated current frame;
the image acquisition component 802 is configured to collect the second image; and
the processor 803 is configured to acquire a target object in a first image, the target object being a part of the plurality of dynamic objects in the first image; determine a target location, the target location being the location of the target object in the first image; collect, by the image acquisition component, a second image as a current frame, the image acquisition component collecting the second image after collecting the first image; determine a current location, the current location being the location of the target object in the current frame; when the target location is inconsistent with the current location, perform displacement compensation on the current feature according to the target feature, so as to obtain a compensated current frame; and display the compensated current frame on the display screen of the terminal.
It should be noted here that the description of the above terminal embodiment is similar to the above method description, and the terminal embodiment has the same beneficial effects as the method embodiments, which are therefore not repeated. For technical details not disclosed in the terminal embodiment of the present invention, those skilled in the art should refer to the description of the method embodiments of the present invention; to save space, they are not repeated here.
It should also be noted that:
It should be understood that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The sequence numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, as used herein, the terms "comprises", "comprising", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that comprises a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined, or integrated into another system, or some features may be ignored or not performed. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be through interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be performed by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes any medium capable of storing program code, such as a removable storage device, read-only memory (ROM), magnetic disk, or optical disc.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, or the part thereof contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The storage medium includes any medium capable of storing program code, such as a removable storage device, ROM, magnetic disk, or optical disc.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or substitution that can readily be conceived by those skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (10)

1. An information processing method, characterized in that the method comprises:
acquiring a target object in a first image, wherein the target object is part of multiple dynamic objects in the first image;
determining a target location, wherein the target location is the location of the target object in the first image;
capturing a second image as a current frame using an image acquisition component in a terminal, wherein the image acquisition component captures the second image after the first image;
determining a current location, wherein the current location is the location of the target object in the current frame;
when the target location and the current location are inconsistent, performing displacement compensation on a current feature according to a target feature to obtain a compensated current frame; and
displaying the compensated current frame on a display screen of the terminal.
2. The method according to claim 1, characterized in that, before acquiring the target object in the first image, the method further comprises: capturing the first image using the image acquisition component in the terminal;
acquiring a first operation using an input device in the terminal, wherein the first operation is for setting the target object from the first image; and
acquiring the target object in the first image based on the first operation.
3. The method according to claim 1, characterized in that determining the current location comprises:
extracting a target feature using an image feature extraction algorithm, wherein the target feature is an image feature of the target object;
extracting image features in the current frame using the image feature extraction algorithm;
determining a current feature from the image features in the current frame according to the target feature, wherein the current feature is an image feature of the target object in the current frame; and
determining the current location according to the current feature, wherein the current location is the location of the target object in the current frame.
4. The method according to claim 1, characterized in that the method further comprises:
when the target location and the current location are consistent, acquiring another image as the current frame and then determining the current location again.
5. The method according to claim 1, characterized in that, before displaying the compensated current frame on the display screen of the terminal, the method further comprises:
judging whether the compensated current frame meets a preset condition to obtain a first judgment result; and
when the first judgment result indicates that the compensated current frame meets the preset condition, displaying the compensated current frame on the display screen of the terminal.
6. The method according to claim 5, characterized in that the method further comprises:
when the first judgment result indicates that the compensated current frame does not meet the preset condition, acquiring another image as the current frame and then determining the current location again.
7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
acquiring a second operation using the input device of the terminal, wherein the second operation is the user's determination of whether the compensated current frame meets the user's expectation;
obtaining first information based on the second operation, wherein the first information indicates whether the compensated current frame meets the user's expectation; and
when it is determined, based on the first information, that the compensated current frame does not meet the user's expectation, acquiring another image as the current frame and then determining the current location again.
8. An information processing device, characterized in that the device comprises a first acquiring unit, a first determining unit, a first collecting unit, a second determining unit, a compensating unit, and a display unit, wherein:
the first acquiring unit is configured to acquire a target object in a first image, wherein the target object is part of multiple dynamic objects in the first image;
the first determining unit is configured to determine a target location, wherein the target location is the location of the target object in the first image;
the first collecting unit is configured to capture a second image as a current frame using an image acquisition component in a terminal, wherein the image acquisition component captures the second image after the first image;
the second determining unit is configured to determine a current location, wherein the current location is the location of the target object in the current frame;
the compensating unit is configured to, when the target location and the current location are inconsistent, perform displacement compensation on a current feature according to a target feature to obtain a compensated current frame; and
the display unit is configured to display the compensated current frame on a display screen of the terminal.
9. The device according to claim 8, characterized in that the first determining unit comprises a first extraction module and a first determination module, wherein:
the first extraction module is configured to extract a target feature using an image feature extraction algorithm, wherein the target feature is an image feature of the target object; and
the first determination module is configured to determine the target location, wherein the target location is the location of the target feature in the first image;
correspondingly, the second determining unit comprises a second extraction module, a second determination module, and a third determination module, wherein: the second extraction module is configured to extract image features in the current frame using the image feature extraction algorithm;
the second determination module is configured to determine a current feature from the image features in the current frame according to the target feature, wherein the current feature is an image feature of the target object in the current frame; and
the third determination module is configured to determine the current location according to the current feature, wherein the current location is the location of the target object in the current frame.
10. A terminal, characterized in that the terminal comprises a display screen, an image acquisition component, and a processor, wherein:
the display screen is configured to display a compensated current frame;
the image acquisition component is configured to capture a second image; and
the processor is configured to: acquire a target object in a first image, wherein the target object is part of multiple dynamic objects in the first image; determine a target location, wherein the target location is the location of the target object in the first image; capture the second image as a current frame using the image acquisition component, wherein the second image is captured after the first image; determine a current location, wherein the current location is the location of the target object in the current frame; when the target location and the current location are inconsistent, perform displacement compensation on a current feature according to a target feature to obtain the compensated current frame; and display the compensated current frame on the display screen of the terminal.
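The claims do not fix a particular image feature extraction algorithm. As a minimal sketch of the matching step in claims 3 and 9, assuming descriptors and their pixel positions have already been extracted from both images (for instance by a detector such as ORB or SIFT), the current feature can be chosen as the nearest neighbour of the target feature's descriptor; all names below are hypothetical illustrations, not terms from the patent.

```python
import numpy as np

def determine_current_location(target_desc, frame_descs, frame_positions):
    """Pick the current-frame feature whose descriptor is closest (in
    Euclidean distance) to the target feature's descriptor, and return
    its position, i.e. the current location of the target object."""
    dists = np.linalg.norm(frame_descs - target_desc, axis=1)
    best = int(np.argmin(dists))
    return frame_positions[best]
```

The returned position can then be compared with the target location; if the two differ, the displacement compensation step of claim 1 is applied.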
CN201510334771.9A 2015-06-16 2015-06-16 Information processing method and device, and terminal Active CN105096266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510334771.9A CN105096266B (en) Information processing method and device, and terminal

Publications (2)

Publication Number Publication Date
CN105096266A true CN105096266A (en) 2015-11-25
CN105096266B CN105096266B (en) 2018-04-10

Family

ID=54576617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510334771.9A Active CN105096266B (en) 2015-06-16 2015-06-16 A kind of information processing method and device, terminal

Country Status (1)

Country Link
CN (1) CN105096266B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018019124A1 (en) * 2016-07-29 2018-02-01 努比亚技术有限公司 Image processing method and electronic device and storage medium
CN109192088A (en) * 2018-10-31 2019-01-11 北京明瑞之光科技有限公司 A kind of vertical runner device display methods
CN110378172A (en) * 2018-04-13 2019-10-25 北京京东尚科信息技术有限公司 Information generating method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014554A1 (en) * 2004-12-24 2007-01-18 Casio Computer Co., Ltd. Image processor and image processing program
CN101866092A (en) * 2009-04-17 2010-10-20 索尼公司 Generate the long exposure image that simulated in response to a plurality of short exposures
US20130051700A1 (en) * 2011-08-31 2013-02-28 Sony Corporation Image processing apparatus, image processing method, and program
CN103888677A (en) * 2012-12-21 2014-06-25 联想(北京)有限公司 Information processing method and electronic equipment



Similar Documents

Publication Publication Date Title
CN104915141A (en) Method and device for previewing object information
CN105227837A (en) A kind of image combining method and device
CN105303543A (en) Image enhancement method and mobile terminal
CN104954689A (en) Method and shooting device for acquiring photo through double cameras
CN104850259A (en) Combination operation method, combination operation apparatus, touch screen operating method and electronic device
CN104735255A (en) Split screen display method and system
CN104699404A (en) Soft keyboard display method and device
CN105468158A (en) Color adjustment method and mobile terminal
CN104731472A (en) Rapid icon clearing-up method and device
CN105681582A (en) Control color adjusting method and terminal
CN104657482A (en) Method for displaying application interface and terminal
CN104915140A (en) Processing method based on virtual key touch operation data and processing device based on virtual key touch operation data
CN105100413A (en) Information processing method, device and terminal
CN104968033A (en) Terminal network processing method and apparatus
CN104951236A (en) Wallpaper configuration method for terminal device, and terminal device
CN105160628A (en) Method and device for acquiring RGB data
CN104917965A (en) Shooting method and device
CN104731456A (en) Desktop widget display method and device
CN105138255A (en) Terminal and image information acquisition method
CN105278995A (en) Management method of application, system, server and mobile terminal
CN105245938A (en) Device and method for playing multimedia files
CN105227829A (en) Preview picture device and its implementation
CN105487803A (en) Touch response method and mobile terminal
CN104866352A (en) Method for starting application and mobile terminal
CN105120054A (en) Information processing method and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant