WO2015044514A1 - Method and apparatus for plenoptic imaging - Google Patents

Method and apparatus for plenoptic imaging

Info

Publication number
WO2015044514A1
Authority
WO
WIPO (PCT)
Prior art keywords
lens array
movement
stabilization
objective
plenoptic lens
Prior art date
Application number
PCT/FI2014/050690
Other languages
French (fr)
Inventor
Gururaj Gopal Putraya
Mithun Uliyar
Basavaraja S V
Original Assignee
Nokia Technologies Oy
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy
Publication of WO2015044514A1

Classifications

    • G – PHYSICS
    • G02 – OPTICS
    • G02B – OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B13/00 – Optical objectives specially designed for the purposes specified below
    • G02B13/001 – Miniaturised objectives for electronic devices, e.g. portable telephones, webcams, PDAs, small digital cameras
    • G02B13/0085 – Miniaturised objectives for electronic devices, e.g. portable telephones, webcams, PDAs, small digital cameras employing wafer level optics
    • G – PHYSICS
    • G02 – OPTICS
    • G02B – OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 – Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/64 – Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
    • G02B27/646 – Imaging systems using optical elements for stabilisation of the lateral and angular position of the image compensating for small deviations, e.g. due to vibration or shake
    • H – ELECTRICITY
    • H04 – ELECTRIC COMMUNICATION TECHNIQUE
    • H04N – PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 – Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 – Control of cameras or camera modules
    • H04N23/68 – Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 – Motion detection
    • H04N23/6812 – Motion detection based on additional sensors, e.g. acceleration sensors
    • H – ELECTRICITY
    • H04 – ELECTRIC COMMUNICATION TECHNIQUE
    • H04N – PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 – Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 – Control of cameras or camera modules
    • H04N23/68 – Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 – Vibration or motion blur correction
    • H04N23/685 – Vibration or motion blur correction performed by mechanical compensation
    • H04N23/687 – Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position
    • G – PHYSICS
    • G02 – OPTICS
    • G02B – OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00 – Simple or compound lenses
    • G02B3/0006 – Arrays

Definitions

  • the present application generally relates to a method and apparatus for plenoptic imaging.
  • the present application relates in particular, though not exclusively, to plenoptic imaging in the presence of camera shaking.
  • Plenoptic imaging is a technique in which a light field is captured so as to produce three-dimensional images.
  • a network or grid of lenses is arranged around the focal plane of the main or collective objective lens. By this arrangement a scene point in the world space is projected under different lenses within the grid. This enables extensive post-capture processing, allowing features such as post-capture refocusing and depth computation, although at a loss of resolution in the reconstructed image.
  • the producing of three-dimensional image data makes possible sharper digital image stabilization that accounts for camera movement on all axes, unlike current optical and digital systems in which image stabilization generally works on two to five different axes (X-Y movements and rotation about the three Cartesian axes).
  • the structure of plenoptic cameras has also prevented the development of optical image stabilization, as any movement between the collecting lens, the networked lenses and the sensor has a multitude of complex side effects: no actual optical image stabilization technique is known for plenoptic cameras. Systems such as Steadicam™ that stabilize the entire camera are naturally usable, but such systems require external equipment and fall outside the normal definition of optical image stabilization.
  • an apparatus comprising:
  • a plenoptic lens array optically in series with the collective objective and configured to form, for capture by an image sensor, a plurality of images from a scene seen through the collective objective;
  • a first optical image stabilization actuator configured to controllably move the collective objective; and
  • a second optical image stabilization actuator configured to controllably move the plenoptic lens array.
  • the apparatus may comprise a processor configured to control the first and second image stabilization actuators.
  • the controlling may comprise causing the first optical image stabilization actuator to move the collective objective by a first stabilization movement d in compensation of a hand shake movement c affecting the apparatus.
  • the controlling may comprise causing the second optical image stabilization actuator to move the plenoptic lens array by a second stabilization movement s as a function of the hand shake movement and the first stabilization movement.
  • the collective objective may be formed of one lens.
  • the collective objective may be formed of more than one optical element such as lenses and / or prisms.
  • the first stabilization movement d may be a linear movement configured to offset the collective objective.
  • the offset may be directed perpendicular to the optical axis of the collective objective.
  • the second stabilization movement s may be a linear movement configured to offset the plenoptic lens array.
  • the offset may be directed perpendicular to the optical axis of the plenoptic lens array.
  • the offset may be directed parallel to the image sensor.
  • the first stabilization movement d may be directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement.
  • the first stabilization movement may be inversely proportional to the sum 1 + a / (q + b), in which a is the distance from the collecting objective to a scene point, q is the distance from the collecting objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when the image sensor is provided in series with the plenoptic lens array.
  • the distance from the collecting objective to a scene point may refer to the distance to the scene point from the object-side principal plane in the case of a thick lens, or from the optical center in the case of a thin lens, of the collecting objective.
  • the distance from the collecting objective to the plenoptic lens array may refer to the optical distance thereof, in which the optical distance is calculated from the principal planes of the collecting objective and of the plenoptic lens array.
  • the second stabilization movement may be directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement.
  • the second stabilization movement s may be inversely proportional to the sum 1 + (a + q) / b, in which a is the distance from the collecting objective to a scene point, q is the distance from the collecting objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when the image sensor is provided in series with the plenoptic lens array.
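The relations in the bullets above can be sketched as follows. This is a minimal illustration, not from the patent: the function names are invented, and unit proportionality constants are assumed where the text only states proportionality.

```python
def first_stabilization_movement(c, a, q, b):
    """Collective-objective movement d for a shake component c.

    d is proportional to c and inversely proportional to 1 + a / (q + b);
    a unit proportionality constant is assumed here for illustration.
    """
    return c / (1.0 + a / (q + b))


def second_stabilization_movement(c, a, q, b):
    """Plenoptic-lens-array movement s for a shake component c.

    s is proportional to c and inversely proportional to 1 + (a + q) / b.
    """
    return c / (1.0 + (a + q) / b)
```

Note that with a distant scene point (large a relative to q and b) both corrections shrink toward zero, consistent with the small movements described for the hand shake case.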
  • One of the first and second optical image stabilization actuators may be configured to move both the collective objective and the plenoptic lens array while the remaining one of the first and second optical image stabilization actuators may be configured to move only one of the collective objective and the plenoptic lens array.
  • the distance to the scene point may be automatically selected by shape recognition.
  • the distance to the scene point may be preset or set by a user.
  • the method may comprise controlling the first and second image stabilization actuators.
  • the controlling may comprise causing the first optical image stabilization actuator to move the collective objective by a first stabilization movement d in compensation of a hand shake movement c affecting the apparatus.
  • a computer program comprising:
  • the computer program may be stored in a memory medium.
  • the memory medium may be a non-transitory memory medium.
  • the memory medium may comprise a digital data storage such as a data disc or diskette, optical storage, magnetic storage, holographic storage, opto-magnetic storage, phase-change memory, resistive random access memory, magnetic random access memory, solid-electrolyte memory, ferroelectric random access memory, organic memory or polymer memory.
  • the memory medium may be formed into a device without other substantial functions than storing data, or it may be formed as part of a device with other functions, including but not limited to a memory of a computer, a chip set, and a sub assembly of an electronic device.
  • an apparatus comprising a processor configured to:
  • an apparatus comprising:
  • At least one memory including computer program code
  • the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: cause capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
  • a computer program comprising:
  • a device comprising the apparatus of the first example aspect.
  • the device may be selected from a group consisting of: a mobile telephone; a tablet computer; a laptop computer; a game console; a portable electronic device; a portable camera; a mobile camera; a vehicular camera; a portable video camera; a mobile video camera; and a movable surveillance camera.
  • Fig. 1 shows a schematic system for use as a reference with which some example embodiments of the invention can be explained;
  • Fig. 2 shows a block diagram of an apparatus of an example embodiment;
  • Fig. 3 shows a block diagram of a camera unit of an example embodiment;
  • Fig. 4 shows an exaggerated illustration of a camera unit according to an example embodiment;
  • Fig. 5 shows another illustration of the camera unit of Fig. 4 after an offset c of the camera unit has occurred in relation to the scene point;
  • Fig. 6 shows a light ray diagram before the second image stabilization movement s;
  • Fig. 7 shows a light ray diagram after the second image stabilization movement s;
  • Fig. 8 shows a light ray diagram for illustrating an example embodiment; and
  • Fig. 9 shows a flow chart illustrating a process according to an example embodiment.
  • Fig. 1 shows a schematic system 100 for use as a reference with which some example embodiments can be explained.
  • the system 100 comprises an electronic device 110 such as a camera phone, camera, smartphone, gaming device, personal digital assistant or a tablet computer having a camera unit 120 that is capable of capturing images with a field of view 130 using light field recording.
  • the device 110 further comprises a display 140.
  • Fig. 1 also shows image objects 150, 160 and 170 at different distances from the camera unit that are being imaged by the camera unit 120.
  • Fig. 2 shows a block diagram of an apparatus 200 of an example embodiment.
  • the apparatus 200 is suited for operating as the device 110.
  • the apparatus 200 comprises a communication interface 220, a host processor 210 coupled to the communication interface module 220, and a memory 240 coupled to the host processor 210.
  • the memory 240 comprises a work memory and a non-volatile memory such as a read-only memory, flash memory, optical or magnetic memory.
  • the software 250 may comprise one or more software modules and can be in the form of a computer program product that is software stored in a memory medium.
  • the apparatus 200 further comprises a camera unit 260, a viewfinder 270 and a sensor unit 280 each coupled to the host processor 210.
  • the sensor unit comprises, for example, one or more elements selected from a group of: an acceleration sensor; a compass; and a gyroscopic orientation sensor.
  • the camera unit 260 and the processor 210 are connected via a camera interface 290.
  • The term host processor refers to a processor in the apparatus 200, in distinction from the one or more processors in the camera unit 260, referred to as camera processor(s) 340 in Fig. 3.
  • different example embodiments share processing of image and/or light field information and control of the camera unit 260 differently between the camera unit and one or more processors outside the camera unit.
  • the processing is performed on the fly in an example embodiment and with buffering in another example embodiment. It is also possible that a given amount of images or image information is processed on the fly and that a buffered operation mode is used thereafter, as in one example embodiment.
  • any coupling in this document refers to functional or operational coupling; there may be intervening components or circuitries in between coupled elements unless expressly otherwise described.
  • the communication interface module 220 is configured to provide local communications over one or more local links.
  • the links may be wired and/or wireless links.
  • the communication interface 220 may further or alternatively implement telecommunication links suited for establishing links with other users or for data transfer, e.g. using the Internet.
  • Such telecommunication links may be links using any of: wireless local area network links, Bluetooth, ultra-wideband, cellular or satellite communication links.
  • the communication interface 220 may be integrated into the apparatus 200 or into an adapter, card or the like that may be inserted into a suitable slot or port of the apparatus 200. While Fig. 2 shows one communication interface 220, the apparatus may comprise a plurality of communication interfaces 220.
  • the host processor 210 is, for instance, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a graphics processing unit, an application specific integrated circuit (ASIC), a field programmable gate array, a microcontroller or a combination of such elements.
  • Figure 2 shows one host processor 210, but the apparatus 200 may comprise a plurality of host processors.
  • the memory 240 may comprise volatile and non-volatile memory, such as a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a random-access memory (RAM), a flash memory, a data disk, an optical storage, a magnetic storage, a smart card, or the like.
  • the apparatus comprises a plurality of memories.
  • various elements are integrated.
  • the memory 240 can be constructed as a part of the apparatus 200 or inserted into a slot, port, or the like.
  • the memory 240 may serve the sole purpose of storing data, or it may be constructed as a part of an apparatus serving other purposes, such as processing data. Similar options are likewise possible for various other elements.
  • the apparatus 200 may comprise other elements, such as microphones, displays, as well as additional circuitry such as further input/output (I/O) circuitries, memory chips, application- specific integrated circuits (ASIC), processing circuitry for specific purposes such as source coding/decoding circuitry, channel coding/decoding circuitry, ciphering/deciphering circuitry, and the like. Additionally, the apparatus 200 may comprise a disposable or rechargeable battery (not shown) for powering the apparatus if external power supply is not available.
  • apparatus refers to the processor 210, an input of the processor 210 configured to receive information from the camera unit and an output of the processor 210 configured to provide information to the viewfinder.
  • the apparatus refers to a device that receives image information from the image sensor via a first input and produces sub-images to a second input of an image processor, which image processor is any circuitry that makes use of the produced sub-images.
  • the image processor may comprise the processor 210 and the device in question may comprise the camera processor 340 and the camera interface 290 shown in Fig. 3.
  • Fig. 3 shows a block diagram of a camera unit 260 of an example embodiment.
  • the camera unit 260 comprises optics, e.g. the following components in optical series: a collecting objective 310 and a plenoptic lens array 320.
  • the plenoptic lens array 320 is configured to form a plurality of images on an image sensor 330.
  • the apparatus 300 further comprises a first optical image stabilization unit or actuator 315 configured to move the collective objective 310 and a second optical image stabilization unit or actuator 325 configured to move the plenoptic lens array 320.
  • one of the optical image stabilization units 315, 325 is configured to operate both the collective objective 310 and the plenoptic lens array 320, while the remaining one of the optical image stabilization units 315, 325 is configured to adjust only one of the collective objective 310 and the plenoptic lens array 320.
  • a typical implementation makes use of such stabilization in two or three directions such as those of a Cartesian co-ordinate system (x-y or x-y-z). If the movements are not perpendicular to the optical axis of the moved optical element(s), the resultant of the movement perpendicular to that optical axis is used in determining the desired image stabilization movement.
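The perpendicular resultant mentioned above is an ordinary vector rejection: the detected shake is projected onto the optical axis and that parallel part is subtracted. A minimal sketch in plain Python (the function name is an assumption, not from the source):

```python
def perpendicular_component(shake, axis):
    """Return the component of `shake` perpendicular to the optical `axis`.

    Both arguments are 3-vectors given as (x, y, z) tuples; `axis` need not
    be normalized. The component parallel to the axis is projected out, and
    only the remainder would be fed to the stabilization movements.
    """
    dot = sum(s * a for s, a in zip(shake, axis))
    norm2 = sum(a * a for a in axis)
    scale = dot / norm2
    return tuple(s - scale * a for s, a in zip(shake, axis))
```

For example, for a shake with components along x and z and an optical axis along z, only the x component remains to be compensated.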
  • the apparatus 300 further comprises a camera processor 340, a memory 350 comprising data 354 and software 352 with which the camera processor 340 can manage operations of the camera unit 260.
  • the camera processor 340 operates as an image and light field information processing circuitry of an example embodiment.
  • An input/output or camera interface 290 is also provided to enable exchange of information between the camera unit 260 and the host processor 210.
  • the camera unit has a light sensitive film medium instead of an image sensor 330.
  • the image sensor 330 is, for instance, a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) unit.
  • the image sensor 330 can also contain a built-in analog-to-digital (A/D) converter implemented on a common silicon chip with the image sensor 330.
  • the camera processor 340 in example embodiments takes care of one or more of the following functions: pixel color interpolation; white balance correction; edge enhancement; anti-aliasing of images; vignetting correction; combining of subsequent images for high dynamic range imaging; Bayer reconstruction filtering; chromatic aberration correction; dust effect compensation; image stabilization.
  • the apparatus 200 further comprises a user interface (U/I) 230.
  • the user interface comprises one or more elements with which the user operates the apparatus 200 and the camera unit 260.
  • Said elements comprise for example a shutter button, menu buttons and a touch screen.
  • the shutter button and the menu buttons may be hardware buttons or for example buttons displayed on a touch screen.
  • Fig. 4 shows an exaggerated illustration of a camera unit according to an example embodiment.
  • An arbitrary scene point A resides at an upper left hand corner. This point A is drawn by the collective objective and one of the lenses in the plenoptic lens array 320 onto the image sensor 330 at pixel A.
  • the distances between the scene point A, the collecting objective 310, plenoptic lens array 320, and the image sensor 330 are denoted as a, q and b, respectively.
  • a direct line is drawn to interconnect the scene point A and the pixel A through the centers of the collective objective 310 and one of the lenses of the plenoptic lens array 320.
  • the distance from the collecting objective 310 to the scene point A may refer to the distance to the scene point A from the object-side principal plane in the case of a thick lens, or from the optical center in the case of a thin lens, of the collecting objective 310.
  • the distance from the collecting objective 310 to the plenoptic lens array 320 may refer to the optical distance thereof, in which the optical distance is calculated from the principal planes of the collecting objective 310 and of the plenoptic lens array 320.
  • Fig. 5 shows another illustration of the camera unit of Fig. 4 after an offset c (e.g. due to hand shake) has displaced the camera unit in relation to the scene point. This movement can be detected e.g. using the sensor unit 280. Now, a direct line drawn from the scene point A through the center of the collective objective 310 would fall on the image sensor at point A'. Notice that Fig. 5 shows a direct line drawn through the center of the collective objective without accounting for the bending that refraction would cause to a real light ray, as Fig. 5 is intended to illustrate how the plenoptic lenslet array 320 should be adjusted, whereas Figs. 6 and 8 provide more realistic light ray representations.
  • optical image stabilization is used to move the collective objective 310 against the direction of the offset of the camera unit (upwards in Fig. 5) by amount d.
  • the new position of the collective objective 310 and the new diagonal line passing from the scene point A through the center of the collective objective 310 are drawn by dashed line.
  • as the hand shake has moved all of the collective objective 310, the plenoptic lens array 320 and the image sensor 330, the movement needed for correcting the image of the scene point A back to pixel A is relatively small.
  • the plenoptic lens array 320 is also moved by a second image stabilization movement s so that the change in the geometry between the scene point A, collective objective 310 and the image sensor 330 is compensated. See Figs. 6 and 7 for a light ray diagram before and after the second image stabilization movement s.
  • the collective objective is moved by the first image stabilization unit 315 for optical image stabilization by the amount d = c / (1 + a / (q + b)) (1)
  • the plenoptic lens array 320 is moved by the second image stabilization unit 325 for optical image stabilization by the amount s = c / (1 + (a + q) / b) (2)
  • D is the pitch of the lenslets or micro-lenses or the distance between two adjacent micro-lenses.
  • the distance s could be achieved in multiples of D: if the distance s is greater than D, the lenslet array can be moved by s - D.
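Because the lenslet array is periodic with pitch D, the s - D rule above generalizes to reducing the required shift modulo the pitch, so the actuator never needs to travel more than one lenslet spacing. A sketch under that assumption (the function name is invented):

```python
def effective_array_shift(s, pitch):
    """Reduce a required lens-array shift modulo the lenslet pitch D.

    Shifting the periodic array by any whole number of pitches brings an
    equivalent lenslet into place, so only the remainder must be traveled.
    """
    return s % pitch
```

For s = 1.3 pitches this yields a movement of 0.3 pitches, matching the s - D rule quoted above for s slightly greater than D.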
  • the scale is greatly exaggerated: the height of the plenoptic lens array 320 is much less than the horizontal distance a from the collective objective 310 to the scene point A. In real life applications, the distance a is far greater and thus the angle of the light arriving from the scene point A onto different lenses in the plenoptic lens array 320 has little deviation.
  • the adjustment of the position of the plenoptic lens array 320 may be performed based on a central lenslet so that the maximum errors at far ends would remain insignificant.
  • in a plenoptic lens array 320, different lenses have different focusing distances through the collective objective 310.
  • it is determined e.g. computationally or by image analysis which of the different lenslets produce sufficiently sharp images on the image sensor 330, e.g. by comparing a suitable descriptive parameter such as the ideal focusing distance of a lenslet, a contrast describing parameter, or the spot radius of a scene point under a micro-lens.
  • the shift in the copies of the scene point under various lenslets compared to its original position can be considered in the movement of the lenslet. In the above movement of s, only the micro-lens which casts the scene point at the center of its micro-image is considered to be sharp with other copies being slightly blurred.
  • the correcting of movements of the collective objective and lenslet is so performed that the ray passing through the center of the collective objective lens and the centrally located micro-lens should have no blur caused by hand shake. Then, a blur free copy for the sensor location A (or A') is obtained, while some blur may occur for A1 (or A1') and A2 (or A2'). Similarly, the micro-lens array could be moved such that there is a blur free copy for A1 (or A1') but not for A (or A') and A2 (or A2').
  • the micro-lens array is moved such that the three lenslets as a whole produce copies of the scene point with the least blur, while none of the copies needs to be completely blur free.
  • the implementation of this extended optimization is performed e.g. using a cost function such as square sum of blur characteristics of each lenslet.
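One way to read this extended optimization is a search over candidate array shifts that minimizes the square sum of per-lenslet blur, as the cost-function bullet suggests. The sketch below is an illustration only; the blur model, the grid of candidates and all names are assumptions:

```python
def best_array_shift(candidates, blur_per_lenslet):
    """Pick the candidate shift minimizing the square-sum blur cost.

    `blur_per_lenslet(shift)` returns one blur measure per lenslet for a
    given array shift; the cost is the sum of their squares.
    """
    def cost(shift):
        return sum(b * b for b in blur_per_lenslet(shift))
    return min(candidates, key=cost)


# Toy blur model: each lenslet is perfectly sharp at its own ideal shift,
# and blur grows linearly with the distance from that ideal.
ideal_shifts = [0.0, 1.0, 2.0]

def toy_blur(shift):
    return [abs(shift - ideal) for ideal in ideal_shifts]
```

With this toy model, a grid search picks the middle shift, which balances the three lenslets rather than making any single copy blur free, in the spirit of the bullet above.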
  • once the lenslets with sufficiently sharp images are determined, the corresponding images taken with the image sensor 330 can be recorded. Images corresponding to other lenslets may be discarded.
  • one or more new exposure periods may be used to capture new images with a respectively changed parameter a. The number of such new exposure periods depends in an example embodiment on the exposure time (which in turn depends e.g. on the ISO value, f-value, ambient light and availability of a flashlight).
  • the number of new exposures may inversely depend on the exposure time so as to avoid taking the different exposures over an excessively long period of time that could appear inconvenient for the user.
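The inverse dependence described above can be illustrated as a time-budget calculation. The half-second budget and the cap of eight exposures below are invented values for illustration, not from the source:

```python
def extra_exposure_count(exposure_time_s, budget_s=0.5, max_exposures=8):
    """Number of additional exposures that fit a total-time budget.

    Longer exposures leave room for fewer extra shots, keeping the total
    capture time from becoming inconvenient for the user. The 0.5 s budget
    and the cap of 8 are illustrative assumptions.
    """
    if exposure_time_s <= 0:
        return 0
    return min(max_exposures, int(budget_s // exposure_time_s))
```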
  • each lenslet is used to take one image while others are discarded, using parameter a selected suitably for the lenslet in turn to take the image and thus adapting the position of the plenoptic lenslet array 320.
  • a plenoptic image may be produced even when optical image stabilization is being performed.
  • the electronic device 110 is configured to select the images produced by one lenslet for display on the viewfinder display 140.
  • the focusing distance of that lenslet is used in one example embodiment to select the parameter a for the calculation of the suitable plenoptic lens array adaptation s.
  • the parameter a can be first set to equal with the focusing distance of the lenslet that produces the viewfinder image.
  • Fig. 9 shows a flow chart illustrating a process according to an example embodiment.
  • a viewfinder image is formed. In a plenoptic camera device, this can be done by allowing the user to select one of the lenslets or by using a default lenslet to form the viewfinder image.
  • in step 915 the offset c is detected, e.g. based on the signal received from the sensor unit 280.
  • any computation needed for determining the offset c is performed by the sensor unit 280 itself, the processor 210, or by any suitable one or more computation components.
  • a first stabilization movement d is determined for the first optical image stabilization actuator 315.
  • This movement can be e.g. calculated from equation (1) or selected from a look-up table that is predetermined e.g. based on equation (1).
  • a second stabilization movement s is determined for the second optical image stabilization actuator 325.
  • This movement can be e.g. calculated from equation (2) or selected from a look-up table that is predetermined e.g. based on the equation (2).
  • in step 930 the first and second stabilization movements are caused.
  • a processor running the process can issue suitable commands to the actuators themselves or to an intermediate controller. It is also understood that the order of various steps can be changed and various acts can be combined or divided depending on the implementation.
  • the second stabilization movement s can be determined before the first stabilization movement d, and one movement can be carried out before another in any order or simultaneously.
  • in step 935 images are exposed with the plenoptic lens array 320.
  • in step 940 it is checked whether more than one exposure is needed. If yes, the process advances to step 945; otherwise the process resumes to step 910.
  • a computational assessment of the likely quality of different lenslet images can be performed or the lenslet images can be evaluated using a suitable algorithm e.g. similar to those used with auto-focus circuitries that determine the best focus from contrast in the produced images.
  • in step 945 it is checked whether any criterion against taking a new exposure is met. If yes, the process resumes to step 910; otherwise the process continues to step 950.
  • the total time can be kept below a given maximum threshold; a user preference for the maximum number of exposure periods may be compared against a counter that keeps track of the number of subsequent exposures taken with different lenslet focusing distance adaptation; the available illumination (possibly with the available flashlight) may not suffice for taking good quality images of objects beyond a given distance; it can be determined whether there is any lenslet the image of which has not yet been exposed with a suitable adaptation of the plenoptic lens array 320; and/or a scene change or movement is detected that is indicative of a need to take a new image with a preferred lenslet.
  • the next lenslet(s) will be selected.
  • the target distance a is set accordingly.
  • in step 955 the optical image stabilization of steps 915 to 930 is repeated.
  • in step 960 a new exposure is made and images are recorded for each of the selected one or more lenslets. Then, the process resumes to step 940.
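Steps 915 to 935 of the flow described above can be condensed into a single control-loop sketch. The sensor and actuator interfaces below are assumptions for illustration, and the movement amounts follow the proportionality relations for d and s stated earlier in the text, with unit constants assumed:

```python
def stabilize_and_expose(read_shake, move_objective, move_array, expose,
                         a, q, b):
    """One pass of the optical image stabilization flow (steps 915-935).

    read_shake() returns the detected offset c; move_objective(d) and
    move_array(s) command the two actuators; expose() captures the images.
    """
    c = read_shake()                     # step 915: detect offset c
    d = c / (1.0 + a / (q + b))          # step 920: first movement d
    s = c / (1.0 + (a + q) / b)          # step 925: second movement s
    move_objective(d)                    # step 930: cause the movements
    move_array(s)
    return expose()                      # step 935: expose the images
```

As the text notes, the two movements may be determined and carried out in either order or simultaneously; the sequential calls here are only for readability.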
  • a technical effect of one or more of the example embodiments disclosed herein is that a plenoptic camera can be optically image stabilized for one or more lenslets. Another technical effect of one or more of the example embodiments disclosed herein is that sharp images can be taken by a plenoptic camera in the presence of hand shake. Another technical effect of one or more of the example embodiments disclosed herein is that a plenoptic camera can be optically image stabilized so that images are produced over a focusing range greater than that produced by a single lenslet.
  • Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
  • the software, application logic and/or hardware may reside entirely or in part on a host apparatus, camera unit or a dedicated circuitry.
  • the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
  • a "computer- readable medium" may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in Fig. 2.
  • a computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
  • the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the before- described functions may be optional or may be combined.

Abstract

An apparatus, method and computer program for collecting light by a collective objective; forming by a plenoptic lens array that is optically in series with the collective objective, for capture by an image sensor, a plurality of images from a scene seen through the collective objective; controllably moving the collective objective by a first optical image stabilization actuator; and controllably moving the plenoptic lens array by a second optical image stabilization actuator.

Description

METHOD AND APPARATUS FOR PLENOPTIC IMAGING
TECHNICAL FIELD
The present application generally relates to a method and apparatus for plenoptic imaging. The present application relates in particular, though not exclusively, to plenoptic imaging in the presence of camera shaking.
BACKGROUND
Plenoptic imaging is a technique in which a light field is recorded so as to capture three-dimensional images. A network or grid of lenses is arranged around the focal plane of the main or collective objective lens. By this arrangement, a scene point in the world space is projected under different lenses within the grid. This enables extensive post-capture processing, supporting features such as post-capture refocusing and depth computation, although with a loss of resolution of the reconstructed image.
The production of three-dimensional image data enables sharper digital image stabilization that accounts for camera movement on all axes, unlike current optical and digital systems in which the image stabilization generally works on two to five different axes (X-Y movements and rotation about three different Cartesian axes). The structure of plenoptic cameras has also prevented the development of optical image stabilization, as any movement between the collecting lens, the networked lenses and the sensor has a multitude of complex side effects: no actual optical image stabilization technique is known for plenoptic cameras. Systems such as Steadicam™ that stabilize the entire camera are naturally usable, but such systems require external equipment and fall outside the normal definition of optical image stabilization.
It is an object of the present invention to avoid or mitigate problems related to image stabilization in plenoptic imaging, or at least to provide a new technical alternative to existing plenoptic imaging techniques.
SUMMARY
Various aspects of examples of the invention are set out in the claims. According to a first example aspect of the present invention, there is provided an apparatus comprising:
a collective objective;
a plenoptic lens array optically in series with the collective objective and configured to form, for capture by an image sensor, a plurality of images from a scene seen through the collective objective;
a first optical image stabilization actuator configured to controllably move the collective objective; and
a second optical image stabilization actuator configured to controllably move the plenoptic lens array.
The apparatus may comprise a processor configured to control the first and second image stabilization actuators. The controlling may comprise causing the first optical image stabilization actuator to move the collective objective by a first stabilization movement d in compensation of a hand shake movement c affecting the apparatus.
The controlling may comprise causing the second optical image stabilization actuator to move the plenoptic lens array by a second stabilization movement s as a function of the hand shake movement and the first stabilization movement.
The collective objective may be formed of one lens. Alternatively, the collective objective may be formed of more than one optical element, such as lenses and/or prisms.
The first stabilization movement d may be a linear movement configured to offset the collective objective. The offset may be directed perpendicular to the optical axis of the collective objective. The second stabilization movement s may be a linear movement configured to offset the plenoptic lens array. The offset may be directed perpendicular to the optical axis of the plenoptic lens array. The offset may be directed parallel to the image sensor.
The first stabilization movement d may be directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement.
The first stabilization movement may be inversely proportional to the sum 1 + a / (q + b), in which a is the distance from the collecting objective to a scene point, q is the distance from the collecting objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when provided in series with the plenoptic lens array.
The distance from the collecting objective to a scene point may refer to the distance from the object-side principal plane in the case of a thick lens, or from the optical center in the case of a thin lens, of the collecting objective to a scene point.
The distance from the collecting objective to the plenoptic lens array may refer to the optical distance thereof, in which the optical distance is calculated from the principal planes of the collecting objective and of the plenoptic lens array.
The second stabilization movement may be directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement. The second stabilization movement s may be inversely proportional to the sum 1 + (a + q) / b, in which a is the distance from the collecting objective to a scene point, q is the distance from the collecting objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when provided in series with the plenoptic lens array.
One of the first and second optical image stabilization actuators may be configured to move both the collective objective and the plenoptic lens array while the remaining one of the first and second optical image stabilization actuators may be configured to move only one of the collective objective and the plenoptic lens array.
The distance to the scene point may be automatically selected by shape recognition. Alternatively, the distance to the scene point may be preset or set by a user.
According to a second example aspect of the present invention, there is provided a method comprising:
collecting light by a collective objective;
forming by a plenoptic lens array that is optically in series with the collective objective, for capture by an image sensor, a plurality of images from a scene seen through the collective objective;
controllably moving the collective objective by a first optical image stabilization actuator; and
controllably moving the plenoptic lens array by a second optical image stabilization actuator. The method may comprise controlling the first and second image stabilization actuators.
The controlling may comprise causing the first optical image stabilization actuator to move the collective objective by a first stabilization movement d in compensation of a hand shake movement c affecting the apparatus.
According to a third example aspect of the present invention, there is provided a computer program comprising:
computer executable program code configured to cause an apparatus to perform, when executing the program code, the method of the second example aspect.
The computer program may be stored in a memory medium. The memory medium may be a non-transitory memory medium. The memory medium may comprise a digital data storage such as a data disc or diskette, optical storage, magnetic storage, holographic storage, opto-magnetic storage, phase-change memory, resistive random access memory, magnetic random access memory, solid-electrolyte memory, ferroelectric random access memory, organic memory or polymer memory. The memory medium may be formed into a device without other substantial functions than storing memory or it may be formed as part of a device with other functions, including but not limited to a memory of a computer, a chip set, and a sub assembly of an electronic device.
According to a fourth example aspect of the present invention, there is provided an apparatus, comprising a processor configured to:
cause capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
controllably move the collective objective by a first optical image stabilization actuator; and
controllably move the plenoptic lens array by a second optical image stabilization actuator.
According to a fifth example aspect of the present invention, there is provided an apparatus comprising:
at least one processor; and
at least one memory including computer program code
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: cause capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
controllably move the collective objective by a first optical image stabilization actuator; and
controllably move the plenoptic lens array by a second optical image stabilization actuator.
According to a sixth example aspect of the present invention, there is provided a computer program comprising:
code for causing capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
code for controllably moving the collective objective by a first optical image stabilization actuator; and
code for controllably moving the plenoptic lens array by a second optical image stabilization actuator;
when the computer program is run on a processor.
According to a seventh example aspect of the present invention, there is provided a device comprising the apparatus of the first example aspect. The device may be selected from a group consisting of: a mobile telephone; a tablet computer; a laptop computer; a game console; a portable electronic device; a portable camera; a mobile camera; a vehicular camera; a portable video camera; a mobile video camera; and a movable surveillance camera.
Different non-binding example aspects and embodiments of the present invention have been illustrated in the foregoing. The embodiments in the foregoing are used merely to explain selected aspects or steps that may be utilized in implementations of the present invention. Some embodiments may be presented only with reference to certain example aspects of the invention. It should be appreciated that corresponding embodiments may apply to other example aspects as well.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of example embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings, in which:
Fig. 1 shows a schematic system for use as a reference with which some example embodiments of the invention can be explained;
Fig. 2 shows a block diagram of an apparatus of an example embodiment;
Fig. 3 shows a block diagram of a camera unit of an example embodiment;
Fig. 4 shows an exaggerated illustration of a camera unit according to an example embodiment;
Fig. 5 shows another illustration of the camera unit of Fig. 4 after an offset of c of a camera unit has occurred with relation to the scene point;
Fig. 6 shows a light ray diagram before the second image stabilization movement s;
Fig. 7 shows a light ray diagram after the second image stabilization movement s;
Fig. 8 shows a light ray diagram for illustrating an example embodiment; and
Fig. 9 shows a flow chart illustrating a process according to an example embodiment.
DETAILED DESCRIPTION OF THE DRAWINGS
An example embodiment of the present invention and its potential advantages are understood by referring to Figs. 1 through 9 of the drawings. In this document, like reference signs denote like parts or steps.
Fig. 1 shows a schematic system 100 for use as a reference with which some example embodiments can be explained. The system 100 comprises an electronic device 110 such as a camera phone, camera, smartphone, gaming device, personal digital assistant or a tablet computer having a camera unit 120 that is capable of capturing images with a field of view 130 using light field recording. The device 110 further comprises a display 140. Fig. 1 also shows image objects 150, 160 and 170 at different distances from the camera unit that are being imaged by the camera unit 120.
Fig. 2 shows a block diagram of an apparatus 200 of an example embodiment. The apparatus 200 is suited for operating as the device 110. In an example embodiment, the apparatus 200 comprises a communication interface 220, a host processor 210 coupled to the communication interface module 220, and a memory 240 coupled to the host processor 210.
The memory 240 comprises a work memory and a non-volatile memory such as a read- only memory, flash memory, optical or magnetic memory. In the memory 240, typically at least initially in the non-volatile memory 245, there is stored software 250 operable to be loaded into and executed by the host processor 210. The software 250 may comprise one or more software modules and can be in the form of a computer program product that is software stored in a memory medium. The apparatus 200 further comprises a camera unit 260, a viewfinder 270 and a sensor unit 280 each coupled to the host processor 210. The sensor unit comprises, for example, one or more elements selected from a group of: an acceleration sensor; a compass; and a gyroscopic orientation sensor. The camera unit 260 and the processor 210 are connected via a camera interface 290.
The term host processor refers to a processor in the apparatus 200, in distinction from the one or more processors in the camera unit 260, referred to as camera processor(s) 340 in Fig. 3. Depending on the implementation, different example embodiments share the processing of image and/or light field information and the control of the camera unit 260 differently between the camera unit and one or more processors outside the camera unit. Also, the processing is performed on the fly in an example embodiment, and with buffering in another example embodiment. It is also possible that a given amount of images or image information is processed on the fly and after that a buffered operation mode is used, as in one example embodiment.
It shall be understood that any coupling in this document refers to functional or operational coupling; there may be intervening components or circuitries in between coupled elements unless expressly otherwise described.
The communication interface module 220 is configured to provide local communications over one or more local links. The links may be wired and/or wireless links. The communication interface 220 may further or alternatively implement telecommunication links suited for establishing links with other users or for data transfer, e.g. using the Internet. Such telecommunication links may be links using any of: wireless local area network links, Bluetooth, ultra-wideband, cellular or satellite communication links. The communication interface 220 may be integrated into the apparatus 200 or into an adapter, card or the like that may be inserted into a suitable slot or port of the apparatus 200. While Fig. 2 shows one communication interface 220, the apparatus may comprise a plurality of communication interfaces 220.
The host processor 210 is, for instance, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a graphics processing unit, an application specific integrated circuit (ASIC), a field programmable gate array, a microcontroller or a combination of such elements. Fig. 2 shows one host processor 210, but the apparatus 200 may comprise a plurality of host processors. As mentioned in the foregoing, the memory 240 may comprise non-transitory non-volatile memory and volatile memory, such as a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a random-access memory (RAM), a flash memory, a data disk, an optical storage, a magnetic storage, a smart card, or the like. In some example embodiments, only volatile or non-volatile memory is present in the apparatus 200. Moreover, in some example embodiments, the apparatus comprises a plurality of memories. In some example embodiments, various elements are integrated. For instance, the memory 240 can be constructed as a part of the apparatus 200 or inserted into a slot, port, or the like. Further still, the memory 240 may serve the sole purpose of storing data, or it may be constructed as a part of an apparatus serving other purposes, such as processing data. Similar options are conceivable for various other elements as well.
A skilled person appreciates that in addition to the elements shown in Fig. 2, the apparatus 200 may comprise other elements, such as microphones and displays, as well as additional circuitry such as further input/output (I/O) circuitries, memory chips, application-specific integrated circuits (ASIC), and processing circuitry for specific purposes such as source coding/decoding circuitry, channel coding/decoding circuitry and ciphering/deciphering circuitry. Additionally, the apparatus 200 may comprise a disposable or rechargeable battery (not shown) for powering the apparatus if an external power supply is not available.
It is also useful to realize that the term apparatus is used in this document with varying scope. In some of the broader claims and examples, the apparatus may refer to only a subset of the features presented in Fig. 2, or even be implemented without any one of the features of Fig. 2. In an example embodiment, the term apparatus refers to the processor 210, an input of the processor 210 configured to receive information from the camera unit, and an output of the processor 210 configured to provide information to the viewfinder. In one example embodiment, the apparatus refers to a device that receives image information from the image sensor via a first input and produces sub-images to a second input of an image processor, which image processor is any circuitry that makes use of the produced sub-images. For instance, the image processor may comprise the processor 210, and the device in question may comprise the camera processor 340 and the camera interface 290 shown in Fig. 3.
Fig. 3 shows a block diagram of a camera unit 260 of an example embodiment. The camera unit 260 comprises optics, e.g. the following components in optical series: a collecting objective 310 and a plenoptic lens array 320. The plenoptic lens array 320 is configured to form a plurality of images on an image sensor 330. The apparatus 300 further comprises a first optical image stabilization unit or actuator 315 configured to move the collective objective 310, and a second optical image stabilization unit or actuator 325 configured to move the plenoptic lens array 320. In an example embodiment, one of the optical image stabilization units 315, 325 is configured to operate both the collective objective 310 and the plenoptic lens array 320, while the remaining one of the optical image stabilization units 315, 325 is configured to adjust only one of the collective objective 310 and the plenoptic lens array 320. It should be noticed that while this document generally explains optical image stabilization using linear movements in one direction, a typical implementation makes use of such stabilization in two or three directions, such as those of a Cartesian coordinate system (x-y or x-y-z). If the movements are not perpendicular to the optical axis of the moved optical element(s), the resultant of the movement perpendicular to that optical axis is used in determining the desired image stabilization movement.
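The perpendicular resultant mentioned above is a standard vector projection. As a minimal illustration (the helper below is not part of the patent and its names are the author's), the component of a movement perpendicular to the optical axis can be computed by subtracting the along-axis component:

```python
# Illustrative sketch: project a 3-D movement vector onto the plane
# perpendicular to the optical axis. Assumes a non-zero axis vector.

def perpendicular_component(movement, axis):
    """Return the part of `movement` perpendicular to `axis`."""
    ax, ay, az = axis
    norm2 = ax * ax + ay * ay + az * az   # squared length of the axis
    mx, my, mz = movement
    dot = mx * ax + my * ay + mz * az     # along-axis magnitude times |axis|
    # Subtract the parallel (along-axis) component from the movement.
    return (mx - dot * ax / norm2,
            my - dot * ay / norm2,
            mz - dot * az / norm2)
```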
The apparatus 300 further comprises a camera processor 340, a memory 350 comprising data 354 and software 352 with which the camera processor 340 can manage operations of the camera unit 260. The camera processor 340 operates as an image and light field information processing circuitry of an example embodiment. An input/output or camera interface 290 is also provided to enable exchange of information between the camera unit 260 and the host processor 210. Furthermore, in an example embodiment, the camera unit has a light sensitive film medium instead of an image sensor 330.
The image sensor 330 is, for instance, a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) unit. In the case of a CMOS unit, the image sensor 330 can also contain built-in analog-to-digital conversion implemented on a common silicon chip with the image sensor 330. In an alternative example embodiment, a separate analog-to-digital (A/D) conversion is provided between the image sensor 330 and the camera processor 340. In addition to the conventional image processing and the calculations or operations needed in light field recording, the camera processor 340 takes care, in example embodiments, of one or more of the following functions: pixel color interpolation; white balance correction; edge enhancement; anti-aliasing of images; vignetting correction; combining of subsequent images for high dynamic range imaging; Bayer reconstruction filtering; chromatic aberration correction; dust effect compensation; and image stabilization. In an example embodiment, the apparatus 200 further comprises a user interface (U/I) 230. The user interface comprises one or more elements with which the user operates the apparatus 200 and the camera unit 260. Said elements comprise, for example, a shutter button, menu buttons and a touch screen. The shutter button and the menu buttons may be hardware buttons or, for example, buttons displayed on a touch screen.
Fig. 4 shows an exaggerated illustration of a camera unit according to an example embodiment. An arbitrary scene point A resides at an upper left hand corner. This point A is drawn by the collective objective and one of the lenses in the plenoptic lens array 320 onto the image sensor 330 at pixel A.
The distances between the scene point A, the collecting objective 310, plenoptic lens array 320, and the image sensor 330 are denoted as a, q and b, respectively. A direct line is drawn to interconnect the scene point A and the pixel A through the centers of the collective objective 310 and one of the lenses of the plenoptic lens array 320.
Notice that the distance from the collecting objective 310 to the scene point A may refer to the distance from the object-side principal plane in the case of a thick lens, or from the optical center in the case of a thin lens, of the collecting objective 310 to the scene point A. The distance from the collecting objective 310 to the plenoptic lens array 320 may refer to the optical distance thereof, in which the optical distance is calculated from the principal planes of the collecting objective 310 and of the plenoptic lens array 320.
Fig. 5 shows another illustration of the camera unit of Fig. 4 after an offset of c (e.g. due to hand shake) has displaced the camera unit in relation to the scene point. This movement can be detected e.g. using the sensor unit 280. Now, a direct line drawn from the scene point A through the center of the collective objective 310 would fall on the image sensor at point A'. Notice that Fig. 5 shows a direct line drawn through the center of the collective objective without taking into account the bending that refraction would cause to a real light ray, as Fig. 5 is intended to illustrate how the plenoptic lenslet array 320 should be adjusted, whereas Figs. 6 and 8 provide more realistic light ray representations.
In order to avoid drawing the scene point A at different pixels as the camera unit moves e.g. due to hand shake, optical image stabilization is used to move the collective objective 310 against the direction of the offset of the camera unit (upwards in Fig. 5) by an amount d. The new position of the collective objective 310 and the new diagonal line passing from the scene point A through the center of the collective objective 310 are drawn with a dashed line. As the hand shake has moved all of the collective objective 310, the plenoptic lens array 320 and the image sensor 330, the movement needed for correcting the image of the scene point A back to pixel A is relatively small. However, the dashed line passing from the scene point A to pixel A through the center of the collective objective 310 does not coincide with the center of the lens of the plenoptic lens array 320 through which pixel A was drawn in Fig. 4. Hence, the plenoptic lens array 320 would cause the scene point A to be drawn on some other pixel. As a result, the optical image stabilization might not produce a sharp image. Hence, in Fig. 5, the plenoptic lens array 320 is also moved by a second image stabilization movement s so that the change in the geometry between the scene point A, the collective objective 310 and the image sensor 330 is compensated. See Figs. 6 and 7 for light ray diagrams before and after the second image stabilization movement s. In an example embodiment, the collective objective is moved by the first image stabilization unit 315 for optical image stabilization by the amount:
d = c / (1 + a / (q + b)) (1)
On the other hand, the plenoptic lens array 320 is moved by the second image stabilization unit 325 for optical image stabilization by amount:
s = b · c / (a + q + b) (2)
Notice that the distance s can be increased or decreased by D (see Fig. 6), where D is the pitch of the lenslets or micro-lenses, i.e. the distance between two adjacent micro-lenses.
As all the micro-lenses are similar, the distance s could be achieved in multiples of D; for example, if the distance s is greater than D, the lenslet array can be moved by s - D.
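As an aid to the reader (not code from the patent), equations (1) and (2) together with the modulo-D note above can be sketched as follows; the function name, the optional `pitch` parameter and the use of `math.fmod` are the author's assumptions:

```python
import math

# Minimal sketch of equations (1) and (2): given a hand shake offset c and
# the distances a (collecting objective to scene point), q (collecting
# objective to plenoptic lens array) and b (lens array to image sensor),
# compute the two stabilization movements. The optional modulo-D step uses
# the fact that the micro-lenses are similar, so moving the array by s - D
# (or any multiple of the lenslet pitch D less) is equivalent.

def stabilization_movements(c, a, q, b, pitch=None):
    d = c / (1.0 + a / (q + b))    # equation (1): collective objective move
    s = b * c / (a + q + b)        # equation (2): plenoptic lens array move
    if pitch is not None:
        s = math.fmod(s, pitch)    # equivalent movement within one pitch
    return d, s
```

Note that equation (2) matches the form in the summary: b·c/(a + q + b) equals c/(1 + (a + q)/b).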
It shall be understood that in Figs. 4 and 5, the scale is greatly exaggerated: the height of the plenoptic lens array 320 is much less than the horizontal distance a from the collective objective 310 to the scene point A. In real life applications, the distance a is far greater, and thus the angle of the light arriving from the scene point A onto different lenses in the plenoptic lens array 320 has little deviation. In order to optimize the operation over a number of different lenses or lenslets of the plenoptic lens array 320, the adjustment of the position of the plenoptic lens array 320 may be performed based on a central lenslet so that the maximum errors at the far ends remain insignificant.
It is also understood that in a plenoptic lens array 320, different lenses have different focusing distances through the collective objective 310. In an example embodiment, it is determined, e.g. computationally or by image analysis, which of the different lenslets produce sufficiently sharp images on the image sensor 330, e.g. by comparing a suitable descriptive parameter such as the ideal focusing distance of a lenslet, a contrast-describing parameter, or the spot radius of a scene point under a micro-lens. The shift in the copies of the scene point under various lenslets compared to its original position can be considered in the movement of the lenslet array. With the above movement s, only the micro-lens which casts the scene point at the center of its micro-image is considered to be sharp, with the other copies being slightly blurred. However, the operation is not limited to this case only: one can also set up an optimization function such that the shift in the copies of the scene point under different lenslets is minimized compared to its original position, so that no copy is completely sharp; with this, the movement s would be slightly different. This optimization can be extended to different points in a single depth plane under focus. For example, considering the scene point A, copies of that scene point A can be drawn by many lenslets surrounding the central one through which a ray is drawn in Fig. 8, e.g. by the neighboring lenslets. When no hand shake is present, three different lenslets would produce e.g. copies at A, A1 and A2 (not shown); after hand shake, they would have made their copies at A', A1' and A2'. In an example embodiment, the correcting movements of the collective objective and the lenslet array are performed such that the ray passing through the center of the collective objective lens and the centrally located micro-lens has no blur caused by the hand shake.
Then, a blur-free copy for the sensor location A (or A') is obtained, while some blur may occur for A1 (or A1') and A2 (or A2'). Similarly, the micro-lens array could be moved such that there is a blur-free copy for A1 (or A1') but not for A (or A') and A2 (or A2'). In another example embodiment with extended optimization, the micro-lens array is moved such that the three lenslets as a whole produce copies of the scene point with the least blur, while none of the copies needs to be completely blur-free. The implementation of this extended optimization is performed e.g. using a cost function such as the square sum of the blur characteristics of each lenslet. When the lenslets with sufficiently sharp images are determined, the corresponding images taken with the image sensor 330 can be recorded. Images corresponding to other lenslets may be discarded. In an example embodiment, one or more new exposure periods may be used to capture new images with a respectively changed parameter a. The number of such new exposure periods depends in an example embodiment on the exposure time (i.e. indirectly e.g. on the ISO value, f-value, ambient light and availability of a flashlight). For example, the number of new exposures may depend inversely on the exposure time so as to avoid taking the different exposures over an excessively long period of time, which could appear inconvenient to the user. In an example embodiment, each lenslet is used in turn to take one image while the others are discarded, with the parameter a selected suitably for the lenslet in turn and the position of the plenoptic lenslet array 320 adapted accordingly. By taking a number of exposures with different parameter a values in the computation of the plenoptic lens array 320 adjustment, a plenoptic image may be produced even when optical image stabilization is being performed.
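The square-sum cost function mentioned for the extended optimization could, purely as an illustration, look like the following; the blur model (residual distance between each lenslet's ideal shift and the applied array shift) is a hypothetical stand-in for the lenslet blur characteristics, and all names are the author's:

```python
# Hypothetical sketch: choose the lens-array shift that minimizes the square
# sum of residual blur over several lenslets, rather than making any single
# copy perfectly sharp. `ideal_shifts` holds, per lenslet, the array shift
# that would make that lenslet's copy blur-free; `candidates` are the shifts
# the actuator could apply.

def best_array_shift(ideal_shifts, candidates):
    """Pick the candidate shift with the least square-sum residual blur."""
    def cost(shift):
        return sum((ideal - shift) ** 2 for ideal in ideal_shifts)
    return min(candidates, key=cost)
```

With three lenslets whose ideal shifts differ, this picks a compromise shift: no copy is perfectly sharp, but the total squared residual is smallest.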
In the example embodiment in which a single exposure is taken, fewer images may be produced, but the user can still be provided with a sharp image from at least one lenslet.
In an example embodiment, the electronic device 110 is configured to select the images produced by one lenslet for display on the viewfinder display 140. The focusing distance of that lenslet is used in one example embodiment to select the parameter a for the calculation of the suitable plenoptic lens array adaptation s. For example, the parameter a can first be set equal to the focusing distance of the lenslet that produces the viewfinder image.
Fig. 9 shows a flow chart illustrating a process according to an example embodiment. In step 910, a viewfinder image is formed. In a plenoptic camera device, this can be done by allowing the user to select one of the lenslets or by using a default lenslet to form the viewfinder image.
In step 915, the offset c is detected, e.g. based on the signal received from the sensor unit 280. Depending on the implementation, any computation needed for determining the offset c is performed by the sensor unit 280 itself, by the processor 210, or by one or more other suitable computation components.
In step 920, a first stabilization movement d is determined for the first optical image stabilization actuator 315. This movement can be e.g. calculated from equation (1) or selected from a look-up table that is predetermined e.g. based on equation (1).
In step 925, a second stabilization movement s is determined for the second optical image stabilization actuator 325. This movement can be e.g. calculated from equation (2) or selected from a look-up table that is predetermined e.g. based on equation (2).
In step 930, the first and second stabilization movements are caused. For example, a processor running the process can issue suitable commands to the actuators themselves or to an intermediate controller. It is also understood that the order of various steps can be changed and various acts can be combined or divided depending on the implementation. For example, the second stabilization movement s can be determined before the first stabilization movement d, and one movement can be carried out before another in any order or simultaneously.
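Steps 920 and 925 can be sketched in code as follows. Equations (1) and (2) are not reproduced in this excerpt, so the formulas below are an assumption reconstructed from the proportionalities recited in claims 7 and 12 (d inversely proportional to 1 + a / (q + b), s inversely proportional to 1 + (a + q) / b, both directly proportional to the hand shake component c), with all proportionality constants taken as 1 for illustration:

```python
def first_stabilization_movement(c, a, q, b):
    """Assumed form of equation (1): movement d for the collective objective.

    c: hand shake offset component in the direction of the movement
    a: distance from the collective objective to the scene point
    q: distance from the collective objective to the plenoptic lens array
    b: distance from the plenoptic lens array to the image sensor
    """
    return c / (1.0 + a / (q + b))

def second_stabilization_movement(c, a, q, b):
    """Assumed form of equation (2): movement s for the plenoptic lens array."""
    return c / (1.0 + (a + q) / b)
```

For instance, with c = 2.0 and a = q = b = 1.0, this sketch gives d = 2 / (1 + 1/2) ≈ 1.33 and s = 2 / (1 + 2) ≈ 0.67; either value could equally be read from a predetermined look-up table as described above.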
In step 935, images are exposed with the plenoptic lens array 320.
In step 940, it is checked if more than one exposure is needed. If yes, the process advances to step 945; otherwise the process resumes to step 910.
For instance, a computational assessment of the likely quality of the different lenslet images can be performed, or the lenslet images can be evaluated using a suitable algorithm, e.g. similar to those used by auto-focus circuitries that determine the best focus from contrast in the produced images.
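A contrast-based evaluation of the lenslet images, similar in spirit to the auto-focus approach just mentioned, could look like the following sketch. The metric (sum of squared neighbour differences over a grayscale image given as a list of rows) and all names are illustrative assumptions:

```python
# Hypothetical contrast-based sharpness check for lenslet images.

def contrast_score(image):
    """Sum of squared horizontal and vertical neighbour differences."""
    height, width = len(image), len(image[0])
    score = 0.0
    for y in range(height):
        for x in range(width):
            if x + 1 < width:
                score += (image[y][x + 1] - image[y][x]) ** 2
            if y + 1 < height:
                score += (image[y + 1][x] - image[y][x]) ** 2
    return score

def sharpest_lenslet(lenslet_images):
    """Return the index of the lenslet image with the highest contrast."""
    return max(range(len(lenslet_images)),
               key=lambda i: contrast_score(lenslet_images[i]))
```

A blurred copy of a scene point spreads intensity over neighbouring pixels and therefore scores lower on this metric than a sharp copy, which is why contrast serves as a proxy for focus quality.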
In step 945, it is checked if any criterion is met against taking a new exposure. If yes, the process resumes to step 910; otherwise the process continues to step 950. For example, the total time can be kept below a given maximum threshold; a user preference for a maximum number of exposure periods may be compared against a counter that keeps track of the number of subsequent exposures taken with different lenslet focusing distance adaptations; the available illumination (possibly with the available flashlight) may not suffice for taking good quality images of objects beyond a given distance; it can be determined whether there is any lenslet the image of which has not yet been exposed with a suitable adaptation of the plenoptic lens array 320; and/or a scene change or a movement is detected that is indicative of a need to take a new image with a preferred lenslet. In step 950, the next lenslet(s) will be selected. The target distance a is set accordingly.
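The criteria of step 945 could be aggregated into a single check, e.g. along the following lines. This is a hypothetical sketch: the parameter names and the particular subset of criteria (time budget, exposure counter, illumination, remaining lenslets) are assumptions based on the examples given above.

```python
def stop_taking_exposures(elapsed, max_total_time, exposures_taken,
                          max_exposures, unexposed_lenslets,
                          illumination_sufficient):
    """Return True if any criterion against taking a new exposure is met."""
    if elapsed >= max_total_time:            # total time above the maximum threshold
        return True
    if exposures_taken >= max_exposures:     # user preference on exposure periods
        return True
    if not illumination_sufficient:          # available light (and flash) too low
        return True
    if not unexposed_lenslets:               # every lenslet already exposed
        return True
    return False
```

When the function returns True the process would resume to step 910; otherwise it would continue to lenslet selection in step 950.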
In step 955, the optical image stabilization of steps 915 to 930 is repeated.
In step 960, a new exposure is made and images are recorded for each of the selected one or more lenslets. Then, the process resumes to step 940.
Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is that a plenoptic camera can be optically image stabilized for one or more lenslets. Another technical effect of one or more of the example embodiments disclosed herein is that sharp images can be taken by a plenoptic camera in the presence of hand shake. Another technical effect of one or more of the example embodiments disclosed herein is that a plenoptic camera can be optically image stabilized so that images are produced over a focusing range greater than that produced by a single lenslet.
Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside entirely or in part on a host apparatus, a camera unit or dedicated circuitry. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in Fig. 2. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the before-described functions may be optional or may be combined.
Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the foregoing describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims.

Claims
1. An apparatus comprising:
a collective objective;
a plenoptic lens array optically in series with the collective objective and configured to form, for capture by an image sensor, a plurality of images from a scene seen through the collective objective;
a first optical image stabilization actuator configured to controllably move the collective objective; and
a second optical image stabilization actuator configured to controllably move the plenoptic lens array.
2. The apparatus of claim 1 comprising:
a processor configured to control the first and second image stabilization actuators.
3. The apparatus of claim 2, the controlling comprising:
causing the first optical image stabilization actuator to move the collective objective by a first stabilization movement d in compensation of a hand shake movement c affecting the apparatus.
4. The apparatus of claim 3, wherein the first stabilization movement is linear movement d configured to offset the collective objective.
5. The apparatus of claim 3 or 4, wherein the first stabilization movement is configured to offset the collective objective perpendicularly to the optical axis of the collective objective.
6. The apparatus of any of claims 3 to 5, wherein the first stabilization movement d is directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement d.
7. The apparatus of any of claims 3 to 6, wherein the first stabilization movement is inversely proportional to a sum of 1 + a / (q + b), in which a is the distance from the collective objective to a scene point, q is the distance from the collective objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when provided in series with the plenoptic lens array.
8. The apparatus of any of claims 2 to 7, the controlling comprising: causing the second optical image stabilization actuator to move the plenoptic lens array by a second stabilization movement s as a function of the hand shake movement and the first stabilization movement.
9. The apparatus of claim 8, wherein the second stabilization movement s is a linear movement configured to offset the plenoptic lens array.
10. The apparatus of claim 8 or 9, wherein the second stabilization movement is configured to offset the plenoptic lens array perpendicularly to the optical axis of the plenoptic lens array.
11. The apparatus of any of claims 8 to 10, wherein the second stabilization movement is directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement.
12. The apparatus of any of claims 8 to 11, wherein the second stabilization movement is inversely proportional to a sum of 1 + (a + q) / b, in which a is the distance from the collective objective to a scene point, q is the distance from the collective objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when provided in series with the plenoptic lens array.
13. The apparatus of any of the preceding claims, wherein one of the first and second optical image stabilization actuators is configured to move both the collective objective and the plenoptic lens array while the remaining one of the first and second optical image stabilization actuators is configured to move only one of the collective objective and the plenoptic lens array.
14. The apparatus of any of the preceding claims, configured to automatically select the distance to the scene point by shape recognition.
15. The apparatus of any of the preceding claims, wherein the distance to the scene point is preset or set by a user.
16. A method comprising:
collecting light by a collective objective;
forming by a plenoptic lens array that is optically in series with the collective objective, for capture by an image sensor, a plurality of images from a scene seen through the collective objective;
controllably moving the collective objective by a first optical image stabilization actuator; and
controllably moving the plenoptic lens array by a second optical image stabilization actuator.
17. The method of claim 16, comprising controlling the first and second image stabilization actuators.
18. The method of claim 17, wherein controlling comprises causing the first optical image stabilization actuator to move the collective objective by a first stabilization movement d in compensation of a hand shake movement c affecting the apparatus.
19. The method of claim 18, wherein the first stabilization movement is linear movement d configured to offset the collective objective.
20. The method of claim 18 or 19, wherein the first stabilization movement is configured to offset the collective objective perpendicularly to the optical axis of the collective objective.
21. The method of any of claims 18 to 20, wherein the first stabilization movement d is directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement d.
22. The method of any of claims 18 to 21, wherein the first stabilization movement is inversely proportional to a sum of 1 + a / (q + b), in which a is the distance from the collective objective to a scene point, q is the distance from the collective objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when provided in series with the plenoptic lens array.
23. The method of any of claims 17 to 22, the controlling comprising:
causing the second optical image stabilization actuator to move the plenoptic lens array by a second stabilization movement s as a function of the hand shake movement and the first stabilization movement.
24. The method of claim 23, wherein the second stabilization movement s is a linear movement configured to offset the plenoptic lens array.
25. The method of claim 23 or 24, wherein the second stabilization movement is configured to offset the plenoptic lens array perpendicularly to the optical axis of the plenoptic lens array.
26. The method of any of claims 23 to 25, wherein the second stabilization movement is directly proportional to a resultant component of the hand shake movement c in the direction of the first stabilization movement.
27. The method of any of claims 23 to 26, wherein the second stabilization movement is inversely proportional to a sum of 1 + (a + q) / b, in which a is the distance from the collective objective to a scene point, q is the distance from the collective objective to the plenoptic lens array, and b is the distance from the plenoptic lens array to the image sensor, when provided in series with the plenoptic lens array.
28. The method of any of claims 16 to 27, wherein one of the first and second optical image stabilization actuators is controlled to move both the collective objective and the plenoptic lens array while the remaining one of the first and second optical image stabilization actuators is controlled to move only one of the collective objective and the plenoptic lens array.
29. The method of any of claims 16 to 28, comprising automatically selecting the distance to the scene point by shape recognition.
30. The method of any of claims 16 to 29, wherein the distance to the scene point is preset or set by a user.
31. An apparatus, comprising a processor configured to:
cause capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
controllably move the collective objective by a first optical image stabilization actuator; and
controllably move the plenoptic lens array by a second optical image stabilization actuator.
32. The apparatus of claim 31, wherein the processor comprises at least one memory that contains executable instructions that, if executed by the processor, cause the apparatus to:
cause capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
controllably move the collective objective by a first optical image stabilization actuator; and
controllably move the plenoptic lens array by a second optical image stabilization actuator.
33. An apparatus comprising:
at least one processor; and
at least one memory including computer program code
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
cause capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
controllably move the collective objective by a first optical image stabilization actuator; and
controllably move the plenoptic lens array by a second optical image stabilization actuator.
34. A computer program comprising:
code for causing capturing by an image sensor a plurality of images from a scene seen through a collective objective by a plenoptic lens array that is optically in series with the collective objective;
code for controllably moving the collective objective by a first optical image stabilization actuator; and
code for controllably moving the plenoptic lens array by a second optical image stabilization actuator;
when the computer program is run on a processor.
35. A device comprising the apparatus of any of claims 1 to 15, 31, 32 or 33, wherein the device is selected from a group consisting of: a mobile telephone; a tablet computer; a laptop computer; a game console; a portable electronic device; a portable camera; a mobile camera; a vehicular camera; a portable video camera; a mobile video camera; and a movable surveillance camera.
PCT/FI2014/050690 2013-09-30 2014-09-11 Method and apparatus for plenoptic imaging WO2015044514A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN4412/CHE/2013 2013-09-30
IN4412CH2013 IN2013CH04412A (en) 2013-09-30 2014-09-11

Publications (1)

Publication Number Publication Date
WO2015044514A1 true WO2015044514A1 (en) 2015-04-02

Family

ID=52742137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2014/050690 WO2015044514A1 (en) 2013-09-30 2014-09-11 Method and apparatus for plenoptic imaging

Country Status (2)

Country Link
IN (1) IN2013CH04412A (en)
WO (1) WO2015044514A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5155520A (en) * 1990-01-16 1992-10-13 Olympus Optical Co., Ltd. Camera apparatus having image correcting function against instability of shooting thereof
US20100026852A1 (en) * 2006-02-07 2010-02-04 Yi-Ren Ng Variable imaging arrangements and methods therefor
US20100141802A1 (en) * 2008-12-08 2010-06-10 Timothy Knight Light Field Data Acquisition Devices, and Methods of Using and Manufacturing Same
US20110234977A1 (en) * 2010-03-23 2011-09-29 Steven Roger Verdooner Apparatus and method for imaging an eye
US20130208082A1 (en) * 2012-02-13 2013-08-15 Raytheon Company Multi-plenoptic system with image stacking and method for wide field-of-regard high-resolution imaging


Also Published As

Publication number Publication date
IN2013CH04412A (en) 2015-04-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14847061

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14847061

Country of ref document: EP

Kind code of ref document: A1