US20110101241A1 - Solid-State Photodetector Pixel and Photodetecting Method - Google Patents

Solid-State Photodetector Pixel and Photodetecting Method

Info

Publication number
US20110101241A1
Authority
US
United States
Prior art keywords
waveguide
electromagnetic radiation
camera
pixel
grating structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/987,669
Inventor
Kaspar Cottier
Rolf Kaufmann
Rino E. Kunz
Thierry Oggier
Guy Voirin
Simon Neukom
Michael Lehmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ams Sensors Singapore Pte Ltd
Original Assignee
Mesa Imaging AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mesa Imaging AG filed Critical Mesa Imaging AG
Priority to US12/987,669 priority Critical patent/US20110101241A1/en
Publication of US20110101241A1 publication Critical patent/US20110101241A1/en
Priority to US14/161,053 priority patent/US9209327B2/en
Assigned to HEPTAGON MICRO OPTICS PTE. LTD. reassignment HEPTAGON MICRO OPTICS PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MESA IMAGING AG
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 31/00 Semiconductor devices sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof; Details thereof
    • H01L 31/02 Details
    • H01L 31/0232 Optical elements or arrangements associated with the device
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14601 Structural or functional details thereof
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/14601 Structural or functional details thereof
    • H01L 27/1462 Coatings
    • H01L 27/14623 Optical shielding
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H01L 27/148 Charge coupled imagers
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 31/00 Semiconductor devices sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof; Details thereof
    • H01L 31/02 Details
    • H01L 31/02002 Arrangements for conducting electric current to or from the device in operations
    • H01L 31/02005 Arrangements for conducting electric current to or from the device in operations for device characterised by at least one potential jump barrier or surface barrier
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N 21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N 21/64 Fluorescence; Phosphorescence
    • G01N 21/645 Specially adapted constructive features of fluorimeters
    • G01N 21/648 Specially adapted constructive features of fluorimeters using evanescent coupling or surface plasmon coupling for the excitation of fluorescence
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar

Definitions

  • This invention relates to solid-state photodetecting, especially charge-coupled-device (CCD) photodetecting.
  • CCD charge-coupled-device
  • Possible applications of the invention lie, for instance, in range detection or in (bio-)chemical sensing.
  • Solid-state one-dimensional and two-dimensional photodetectors with picture elements (pixels) based on the charge-coupled-device (CCD) principle are well-known; cf. A. J. P. Theuwissen: “Solid State Imaging with Charge Coupled Devices”, Kluwer Academic Publishers, Dordrecht, 1995. They find use not only in consumer goods such as ordinary cameras, but also in combination with a dedicated light source in special applications such as time-of-flight (TOF) range cameras (cf. EP-1′152′261 A) or in (bio-)chemical sensing (cf. EP-1′085′315 A). For such special applications, the properties of standard CCD pixels may not be sufficient.
  • TOF time-of-flight
  • the pixels perform the tasks of the photo-generation of charges, the separation of said charges and the charge accumulation of said separated charges either in at least one storage site and at least one dump site or in at least two storage sites.
  • means of repeated charge accumulation such as described in U.S. Pat. No. 5,856,667.
  • the shutter efficiency is defined as the number of photogenerated charge carriers transported to the desired accumulation site divided by the total number of photogenerated charge carriers. Using state-of-the-art technology, this quantity is in the range between 80% and 95%, which is not sufficient for special applications such as mentioned above, as will be described in the following.
  • the pixel according to the invention is formed in a semiconductor substrate with a plane surface for use in a photodetector. It comprises an active region for converting incident electromagnetic radiation into charge carriers of a first and a second charge type, charge-separation means for generating a lateral electric potential across the active region, and charge-storage means for storing charge carriers of at least one type generated in the active region, said charge-storage means being placed outside the active region.
  • the pixel further comprises separation-enhancing means for additionally enhancing charge separation in the active region and charge transport from the active region to the charge-storage means.
  • the solid-state image sensor according to the invention comprises a plurality of pixels according to the invention, arranged in a one- or two-dimensional array.
  • the solid-state image sensor according to the invention is preferably used in a range camera or in a chemical and/or biochemical sensor.
  • the camera according to the invention comprises a solid-state image sensor according to the invention.
  • the method according to the invention for detecting incident electromagnetic radiation comprises the steps of converting the incident electromagnetic radiation into charge carriers of a first and a second charge type in an active region of a semiconductor material with a plane surface, generating a lateral electric potential across the active region, and storing outside the active region charge carriers of at least one type generated in the active region. Charge separation in the active region and charge transport from the active region to the charge-storage means is additionally enhanced.
  • the method according to the invention is preferably used for range detection or for chemical and/or biochemical sensing.
  • the method according to the invention for determining a distance between a measurement system and a remote object comprises the steps of illuminating the object by modulated electromagnetic radiation emitted from the measurement system, reflecting and/or scattering at least part of the electromagnetic radiation from the object, detecting electromagnetic radiation reflected and/or scattered from the object in the measurement system, the detection using the detection method according to the invention, and determining the distance from the detected electromagnetic radiation.
  • the method for sensing a chemical and/or biochemical substance comprises the steps of illuminating the substance with electromagnetic radiation, causing the electromagnetic radiation to interact with the substance directly or indirectly through a transducer, and detecting electromagnetic radiation resulting from the interaction with the substance, the detection using the detection method according to the invention.
  • FIGS. 1 and 2 show cross-sectional views of a CCD part of a first and a second embodiment of a photodetector pixel according to the invention, respectively.
  • FIG. 3 shows electric potential distributions versus the vertical coordinate (a) for a pixel with a homogeneous substrate, and (b) for a pixel with a buried channel.
  • FIGS. 4 and 5 show cross-sectional views of a CCD part of a fourth and a fifth embodiment of a photodetector pixel according to the invention, respectively.
  • FIG. 6 shows the principle of adaptive background suppression (or minimal charge transfer).
  • FIG. 7 shows the lateral potential distribution for the sense diffusion, the isolation gate and the integration gate of a photodetector pixel.
  • FIG. 8 shows a flow diagram of a first embodiment of the photodetecting method according to the invention.
  • FIG. 9 shows the method of FIG. 8 in an analogous representation as FIG. 7 , for different times.
  • FIGS. 10 and 11 show flow diagrams of a second and a third embodiment of the photodetecting method according to the invention.
  • FIG. 12 shows a diagram for performing the first embodiment of the photodetecting method according to the invention.
  • FIGS. 13-16 show setups for four application examples for the camera according to the invention.
  • FIG. 17 shows a light-wavelength modulation as applied in the example of FIG. 16 , versus time.
  • FIG. 18 shows a diagram of the intensity vs. the light wavelength as measured in the setup of FIG. 16 , for two different samples.
  • FIG. 19 shows a setup for a fifth application example for the camera according to the invention.
  • FIG. 20 shows a diagram of the intensity vs. the position as measured in the setup of FIG. 19 , for two different samples.
  • FIG. 21 shows a setup for a sixth application example for the camera according to the invention.
  • FIG. 1 shows a cross-sectional view of a CCD part of a first embodiment of a photodetector pixel 1 according to the invention.
  • the pixel 1 comprises the well-known structures such as a left photogate PGL, a middle photogate PGM and a right photogate PGR.
  • the region beneath the photogates PGL, PGM and PGR can be called the “active region” of the pixel, since this is the region in which incident radiation In is converted into charge carriers.
  • the pixel 1 further comprises either at least one integration gate (IG) for storing charge carriers generated in the active region and at least one dump site (DDiff), or at least two integration gates, the integration gates (IG) and the dump-site (DDiff) being placed outside the active region.
  • IG integration gate
  • DDiff dump-site
  • a shield layer SL typically made of metal protects all elements except the photogates PGL, PGM, PGR from incident light In.
  • the shield layer SL has an opening OP in the active region of the photogates PGL, PGM, PGR, so that the photogates PGL, PGM, PGR can be exposed to the incident light In passing through the opening OP.
  • a lateral potential distribution Φ(x) on top of the substrate S is also drawn in FIG. 1 as a function of a horizontal coordinate x.
  • the potential gradient in the substrate S can be influenced.
  • the potential under the integration gate IG is high and decreases towards the right photogate PGR, so that charge carriers generated by the incident light In are transported to the integration gate IG, where they are integrated over a certain exposure time.
  • the potential distribution Φ(x) is often graphically represented as a sequence of ideal, discrete steps and/or wells; this is not the case in practice.
  • the real potential distribution Φ(x) is rather continuous, as sketched in FIG. 1 .
  • the invention avoids the above-described charge-separation imperfection by a design of the shield layer SL that takes into account the predicted real potential distribution Φ(x).
  • the opening in the shield layer made possible an illumination of the entire photogate area by the incident light, i.e., in the representation of FIG. 1 , the edges of the shield layer coincided with the outer edges of the left and right photogates, respectively.
  • the left and right photogates PGL, PGR are partially covered by the shield layer SL, and the edges of the shield layer SL are located at the position where the minimum min of the potential distribution Φ(x) is expected to be. This position depends on the geometry and the materials of the pixel 1 and on the voltages applied. It may be determined by calculation. Simulation tools, typically computer software, for calculating such potential distributions Φ(x) are available.
  • This embodiment uses, besides a topmost shielding layer SL 1 , other metal layers SL 2 , SL 3 , which lie closer to the substrate S, as further shielding layers.
  • Such metal layers SL 2 , SL 3 , . . . are provided, e.g., in a standard complementary-metal-oxide-semiconductor (CMOS) process for interconnecting lines.
  • CMOS complementary-metal-oxide-semiconductor
  • the lowermost metal layer SL 3 , which lies typically about 1 μm above the surface of the substrate S, defines most precisely the position on which light In will impinge, with a low degree of dependence on the angle of incidence α.
  • Part of the light In could be coupled between the metal layers SL 1 , SL 2 , SL 3 , and guided to very distant points like in a waveguide.
  • vertical opaque barriers B are provided. These barriers B can be designed and realized as the vias known from the standard CMOS process, i.e., as tungsten connections through the insulating interlayers between the horizontal metal layers SL 1 , SL 2 , SL 3 .
  • FIG. 3 shows transverse potential distributions Φ(z) in the pixel substrate S versus the vertical coordinate z for two different substrates S.
  • the potential distribution Φ(z) has a maximum value at the surface of the substrate S and monotonically decreases with the depth z in the substrate S, as shown in FIG. 3(a).
  • a transverse inhomogeneity BC is provided in the substrate S.
  • Such an inhomogeneity can be realized, e.g., as a buried channel BC of an n doped material in the p bulk substrate S, with a thickness of 0.3 μm to 3 μm and preferably about 1 μm.
  • the potential distribution Φ(z) now has its maximum value inside the buried channel BC, and this maximum is even higher than that of FIG. 3(a).
  • the inhomogeneity BC helps to create a high potential inside the substrate S and thus enhances and speeds up the charge separation. It also reduces noise, since flicker noise is much higher for charges moving along the substrate surface due to numerous trapping centres at the silicon/silicon-dioxide interface. This is avoided in the embodiment of FIG. 3(b).
  • the bulk substrate S is made of a material that impedes charge-carrier transport and/or supports charge-carrier recombination, e.g., a bad-quality and/or a highly doped silicon substrate of the p doping type. Charge carriers photogenerated in the bulk of such a substrate S are trapped by material defects and/or recombine before reaching the drift field region.
  • a detection layer DL of good quality and low doping is arranged on the surface of the substrate S.
  • the detection layer DL can be, e.g., an epitaxial silicon layer with a thickness of 2 μm to 20 μm, and preferably of 5 μm to 10 μm.
  • Another embodiment of the pixel 1 according to the invention, which deals with the problem set forth with reference to FIG. 4 , is shown in FIG. 5 .
  • light In is prevented from penetrating into deeper regions of the substrate S, or even reflected back from deeper regions of the substrate S into a region DL beneath the substrate surface, where it can be used for the photodetection process.
  • an opaque and/or reflecting layer R is provided in the substrate S between said detection layer DL (epitaxial layer) and the highly doped bulk S.
  • the opaque and/or reflecting layer R can be a metal layer, e.g., an aluminum layer and/or a silicon dioxide layer.
  • the metal layer has to be electrically isolated from the detection layer DL.
  • the preferred configuration in this case is a silicon absorption layer on top, a silicon dioxide layer for the isolation and the metal layer for the reflection.
  • the substrate S is preferably a silicon-on-insulator (SOI) substrate, i.e., a silicon substrate with an oxide layer buried in it.
  • SOI silicon-on-insulator
  • One preferred application of the pixel 1 according to the invention is in a solid-state time-of-flight (TOF) range camera.
  • TOF time-of-flight
  • a first embodiment of a photodetecting method according to the invention relates to a TOF range camera, the pixels of which have at least one active region and at least two (e.g., four) charge storages for on-pixel driving voltage control or at least one charge storage for off-pixel driving voltage control, which are used to store the different charge types.
  • the demodulation of a modulated light signal can be obtained, e.g., by storing the different charge types deriving from different phases (e.g., 0°, 90°, 180° and 270°).
  • the photogenerated charge is converted to an output voltage by the storage capacitance.
  • the storage capacitance of the integration gate IG is several times higher than the capacitance of the sense diffusion SDiff. Therefore, the limiting storage capacitance is given by the sense diffusion SDiff and not by the integration gate IG.
  • the readout capacitance cannot be designed arbitrarily high because of noise restrictions and the resolution of the analog-to-digital conversion.
  • the main restriction of dynamic range is given by background illumination.
  • the background usually generates for all light-sensitive areas an equal amount of charge carriers that are not needed for the distance calculation.
  • the inventive concept consists in taking into account only those charge carriers which contribute to the difference. In other words: a constant offset is subtracted from all signals. This is schematically illustrated in FIG. 6 . With thus reduced signals, a smaller readout capacitance can be used and, even so, acceptable output voltage signals are obtained.
  • FIG. 7 schematically shows the lateral potential distribution ⁇ (x) for the sense diffusion SDiff, the barrier-last CCD gate (the isolation gate OUTG) and the second last CCD gate (the integration gate IG, charge storage) of a photodetector pixel 1 as shown in FIG. 1 .
  • the above-mentioned offset subtraction is performed by transferring not all the charge carriers from the integration gate IG to the sense diffusion SDiff, but only those charge carriers which are necessary to perform correct distance calculations. After having collected all charge carriers on the integration gate IG, its control voltage is decreased to such a level for which every sense node Sdiff receives charge carriers.
  • the isolation gate OUTG then pinches off the remaining charge carriers which are of the same amount for all photosensitive areas. Thus, the differential functionality is ensured.
  • the appropriate driving voltage for the integration gates IG can be adjusted on-pixel by a closed loop from the sense nodes to the driving voltage, as shown in the flow chart of FIG. 8 .
  • the loop can be performed pixelwise.
  • the potential of the integration gate IG can be stepwise decreased and the pixel can be read out off-pixel until all samplings obtain a signal. This requires a non-destructive readout or a summing function off-pixel.
  • FIG. 9 illustrates the adaptive background subtraction for a pixel 1 with four photosensitive areas corresponding to the phases 0°, 90°, 180°, 270° (cf. FIG. 6 ), for six different, subsequent times t 0 -t 5 . It illustrates the on-pixel as well as the off-pixel background pinch-off.
  • the dynamic range is increased, and high accuracies are possible due to longer possible integration times.
  • a second embodiment of a photodetecting method according to the invention also relates to a TOF range camera with at least one photosensitive area and a plurality of charge storage areas.
  • This embodiment takes into account the possibility of providing a feedback loop from the sense node to the integration gate, and is based on the recognition that for a range measurement, an absolute intensity value (and thus an absolute integration time) is not important, since only phase information is extracted from the plurality of storage areas of a pixel. Therefore, different pixels can have different integration times, as long as the integration time is the same for all charge storage areas within each pixel.
  • the integration time for an individual pixel is preferably chosen such that it yields a high signal just below the saturation level. Thus, the dynamic range is increased.
  • a flow diagram of the method is shown in FIG. 10 .
  • a barrier prevents the charge carriers from drifting from the integration gate IG to the sense node SDiff.
  • the shift process is performed during integration in short shift intervals until a given threshold close to saturation is reached.
  • A simplified flow diagram of this embodiment is shown in FIG. 11 .
  • This loop is repeated as long as the output of each photosensitive area is smaller than a certain threshold. When the output reaches the threshold, the shift process will not be performed any longer, so that saturation at the sense node SDiff is avoided.
  • FIG. 12 shows a simple circuit diagram for performing the photodetection method according to the invention.
  • the output signals from the sense nodes or sense diffusions Sdiff are fed into a comparator COMP, where they are compared with a decreasing control voltage int_ref for the integration gate IG.
  • the output of the comparator COMP is used as the decreasing potential applied to all integration gates IG.
  • a saturation flag is set when the potential of the integration gates IG falls below a certain value predetermined by a reference signal sat_ref.
  • the potential of the integration gate IG can be read out as a further quantity to be measured.
  • This quantity corresponds to the DC offset value common to the charges accumulated in all integration gates IG of a pixel. From this quantity, further information can be extracted, e.g., the level of the background illumination and/or whether an overflow of at least one integration gate IG occurred during the charge-storage process.
  • a third embodiment of a photodetecting method according to the invention also relates to a TOF range camera comprising a plurality of photosensitive areas.
  • This method makes it possible to check whether a pixel 1 is intact and works correctly.
  • the idea is to create redundancy by performing at least two measurements of the same scene, but with different phases of the emitted CW-modulated light.
  • a phase shift of δ (in degrees) introduces an artificial distance shift of ΔL = (δ/360°) · c/(2f), where c ≈ 3·10⁸ m/s is the light velocity in air and f is the modulation frequency.
  • Another preferred field of application of the pixel according to the invention is (bio-)chemical sensing. Some examples for such applications are given in the following.
  • a first application concerns (bio-)chemical sensors using luminescent labels.
  • Two general measurement methods aiming at reaching high sensitivity apply, namely time-domain and frequency-domain measurements, such as described in “Elimination of Fluorescence and Scattering Backgrounds in Luminescence Lifetime Measurements Using Gated-Phase Fluorometry” by H. M. Rowe et al., Analytical Chemistry, 2002, 74(18), 4821-4827, and in “Principles of Fluorescence Spectroscopy”, by Lakowicz J., Kluwer Academic and Plenum Publishers, New York, pages 1-24 (1999), respectively.
  • frequency-domain measurements the luminescent sample is excited with periodically modulated light, and the detection of luminescent emission occurs at the same frequency.
  • This measurement method is often referred to as “lock-in technique”.
  • the second measurement method uses repeated excitation using short pulses, with subsequent detection of the luminescent emission inside a different time window. This method is often referred to as “time-gated detection”. Both measurement methods may be used for the determination of luminescent intensity and/or lifetime. All measurement methods cited above and combinations thereof will profit from the present invention through the properties of parallel detection combined with on-chip implementation of signal treatment, the latter leading to a lower detection limit in comparison to conventional imaging systems.
  • a first example for practical implementation is luminescence imaging, e.g., fluorescence microscopy, using the general measurement methods described above.
  • a fluorescence-microscopy setup with a sample 9 , a modulated light source 2 , a microscope 3 , a wavelength filter 4 and a camera 10 according to the invention is schematically shown in FIG. 13 .
  • the camera 10 comprises an image sensor comprising an array of pixels 1 according to the invention.
  • the light source 2 is triggered by the camera 10 .
  • the advantages over other detection methods are that imaging of the sample 9 and parallel detection of the luminescence signals are possible with a low detection limit. Compared to other imaging methods, the inventive method described above yields a higher sensitivity and a lower detection limit.
  • the microarray 90 may be arranged on a planar waveguide 5 , as shown in FIG. 14 .
  • a light source 2 e.g., a laser, emits light onto an input grating 51 on the waveguide 5 .
  • the light is coupled into the waveguide 5 and guided to the microarray 90 , where its evanescent portion excites luminescence.
  • the microarray 90 is imaged onto the camera 10 by an optical imaging system 6 . Thus, a part of the luminescent light is received by the camera 10 .
  • the general luminescent measurement methods described above may be used by modulating the light source 2 and triggering it by the camera 10 .
  • Another part of the luminescent light is coupled into the waveguide 5 and propagates towards an output grating 52 where it is coupled out. This part can also be detected, as is known from the prior art.
  • the measurements by the camera 10 according to the invention and by the conventional outcoupling approach may be combined in order to efficiently suppress the excitation light.
  • Shielding means 7 are preferably provided for shielding the camera 10 from undesired scattered light.
  • samples to be sensed are deposited on sensing pads or measurement units arranged on a planar waveguide 5 .
  • Each sensing pad comprises a diffraction grating 53 - 56 .
  • Light from a light source 2 is coupled into the waveguide 5 in the region of a sensing pad by the corresponding diffraction grating 53 , excites luminescence in the sample, and the luminescent and the excitation light are coupled out by the same diffraction grating 53 .
  • a wavelength filter 4 is preferably arranged in the path of the outcoupled light in order to suppress the excitation light.
  • each measurement unit may comprise a pair of input and output coupling gratings.
  • The setup of a fifth application example of the camera 10 according to the invention in the field of biochemical sensing is shown in FIG. 16 .
  • a planar waveguide is provided with pairs of diffraction gratings 57 , 58 .
  • a first grating 57 of each pair is used as the input grating and is comprised in a sensing pad for a sample to be sensed, whereas a second grating 58 of each pair is used as the output grating.
  • Modulated light emitted by a light source 2 is coupled into the waveguide 5 at a certain angle α in .
  • the wavelength λ of the emitted light is modulated, e.g., periodically swept over a certain range.
  • VCSEL vertical-cavity surface-emitting laser
  • the modulation is triggered by the camera 10 .
  • a resonance, i.e., an intensity peak, is detected at a certain wavelength λ 1 that fulfils the well-known incoupling condition for input gratings on planar waveguides, as shown in FIG. 18 .
  • the intensity peak is shifted from the first wavelength λ 1 to a second, different wavelength λ 2 .
  • This method allows imaging of the output grating 58 and is cost-effective compared to other wavelength-interrogated optical-sensing (WIOS) methods. It can be combined with a luminescence-sensing method.
  • WIOS wavelength-interrogated optical-sensing
  • the detection occurs at the same frequency as the wavelength sweep of the light source 2 .
  • the wavelength peak position therefore translates into an equivalent phase, which is calculated by the camera 10 based on the individual samples acquired inside the acquisition period. It can be calculated for instance using the known formalism for sinusoidal signal modulation, as it is also used for distance measurements (see, for instance, the article “Solid-State Time-Of-Flight Range Camera” by R. Lange et al., published in IEEE Journal of Quantum Electronics, Vol. 37, 2001, pp. 390 ff.).
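  • As a minimal illustration of this phase-to-wavelength mapping, the sketch below assumes a linear (sawtooth) wavelength sweep over one acquisition period, so that the measured equivalent phase maps directly onto the resonance wavelength within the sweep range; the sweep range and phase value are hypothetical, not taken from the patent.

```python
# Illustrative sketch only: map the lock-in phase of the detected resonance
# peak to a wavelength, assuming a linear (sawtooth) sweep over one period.
import math

def resonance_wavelength(phase_rad, lam_start_nm, lam_end_nm):
    frac = (phase_rad % (2 * math.pi)) / (2 * math.pi)   # position within one sweep period
    return lam_start_nm + frac * (lam_end_nm - lam_start_nm)

print(f"{resonance_wavelength(1.2, 850.0, 853.0):.2f} nm")   # ~850.57 nm
```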
  • A still further application example of the camera 10 according to the invention in the field of biochemical sensing is discussed with reference to FIG. 19 .
  • the setup is similar to that of FIG. 16 , with the differences mentioned in the following.
  • the light emitted by the light source 2 is distributed over a certain range of angles of incidence α in , e.g., by a collimating lens 21 , such that light is always coupled into the waveguide 5 .
  • the sensing pad is on the output grating 58 , not on the input grating 57 . Since the outcoupling angle α out depends on the wavelength λ, the outcoupled light beam periodically sweeps over the camera 10.
  • the maximum intensity I occurs at a given time, characterized by an equivalent phase when using the lock-in properties of the camera 10 as described above; cf. FIG. 20 .
  • This results in a characteristic phase distribution on the camera 10 .
  • a change in the optical properties of the sample results in a shift of the intensity maximum from position a 1 to another position a 2 , and consequently in a shift of the phase distribution.
  • this method offers an increased measurement range.
  • FIG. 21 shows an embodiment of the apparatus according to the invention in which all sensing pads or gratings of the waveguide 5 are illuminated by the light source 2 , e.g., by means of an optical system 23 , 24 for widening the light beam and offering a limited range of incident angles to the gratings.
  • All sensing pads or gratings of the waveguide 5 are imaged, e.g., by means of imaging optics 25 , onto a photodetector chip 10 with an integrated array of lock-in pixels forming a lock-in camera.
  • the camera is connected to the light source 2 as described above.
  • the relevant phase information is extracted at the pixel level of the camera.

Abstract

A pixel is formed in a semiconductor substrate (S) with a plane surface for use in a photodetector. It comprises an active region for converting incident light (In) into charge carriers, photogates (PGL, PGM, PGR) for generating a lateral electric potential (Φ(x)) across the active region, and an integration gate (IG) for storing charge carriers generated in the active region and a dump site (Ddiff). The pixel further comprises separation-enhancing means (SL) for additionally enhancing charge separation in the active region and charge transport from the active region to the integration gate (IG). The separation-enhancing means (SL) are for instance a shield layer designed such that for a given lateral electric potential (Φ(x)), the incident light (In) does not impinge on the section from which the charge carriers would not be transported to the integration gate (IG).

Description

    FIELD OF THE INVENTION
  • This invention relates to solid-state photodetecting, especially charge-coupled-device (CCD) photodetecting. In particular, it relates to a photodetector pixel and a photodetecting method according to the preambles of the independent claims. Possible applications of the invention lie, for instance, in range detection or in (bio-)chemical sensing.
  • BACKGROUND OF THE INVENTION
  • Solid-state one-dimensional and two-dimensional photodetectors with picture elements (pixels) based on the charge-coupled-device (CCD) principle are well-known; cf. A. J. P. Theuwissen: “Solid State Imaging with Charge Coupled Devices”, Kluwer Academic Publishers, Dordrecht, 1995. They find use not only in consumer goods such as ordinary cameras, but also in combination with a dedicated light source in special applications such as time-of-flight (TOF) range cameras (cf. EP-1′152′261 A) or in (bio-)chemical sensing (cf. EP-1′085′315 A). For such special applications, the properties of standard CCD pixels may not be sufficient. The pixels perform the tasks of the photo-generation of charges, the separation of said charges and the charge accumulation of said separated charges either in at least one storage site and at least one dump site or in at least two storage sites. For the above-mentioned special applications it is also advantageous to incorporate means of repeated charge accumulation, such as described in U.S. Pat. No. 5,856,667.
  • An important property of such a CCD pixel is the so-called shutter efficiency. The shutter efficiency is defined as the number of photogenerated charge carriers transported to the desired accumulation site divided by the total number of photogenerated charge carriers. Using state-of-the-art technology, this quantity is in the range between 80% and 95%, which is not sufficient for special applications such as mentioned above, as will be described in the following.
  • SUMMARY OF THE INVENTION
  • It is an object of the invention to provide a photodetector pixel and a photodetecting method which provide a higher shutter efficiency and a higher sensitivity.
  • These and other objects are achieved by the photodetector pixel and the photodetecting method as defined in the independent claims. Advantageous embodiments are defined in the dependent claims.
  • The pixel according to the invention is formed in a semiconductor substrate with a plane surface for use in a photodetector. It comprises an active region for converting incident electromagnetic radiation into charge carriers of a first and a second charge type, charge-separation means for generating a lateral electric potential across the active region, and charge-storage means for storing charge carriers of at least one type generated in the active region, said charge-storage means being placed outside the active region. The pixel further comprises separation-enhancing means for additionally enhancing charge separation in the active region and charge transport from the active region to the charge-storage means.
  • The solid-state image sensor according to the invention comprises a plurality of pixels according to the invention, arranged in a one- or two-dimensional array.
  • The solid-state image sensor according to the invention is preferably used in a range camera or in a chemical and/or biochemical sensor.
  • The camera according to the invention comprises a solid-state image sensor according to the invention.
  • The method according to the invention for detecting incident electromagnetic radiation comprises the steps of converting the incident electromagnetic radiation into charge carriers of a first and a second charge type in an active region of a semiconductor material with a plane surface, generating a lateral electric potential across the active region, and storing outside the active region charge carriers of at least one type generated in the active region. Charge separation in the active region and charge transport from the active region to the charge-storage means is additionally enhanced.
  • The method according to the invention is preferably used for range detection or for chemical and/or biochemical sensing.
  • The method according to the invention for determining a distance between a measurement system and a remote object comprises the steps of illuminating the object by modulated electromagnetic radiation emitted from the measurement system, reflecting and/or scattering at least part of the electromagnetic radiation from the object, detecting electromagnetic radiation reflected and/or scattered from the object in the measurement system, the detection using the detection method according to the invention, and determining the distance from the detected electromagnetic radiation.
  • The method for sensing a chemical and/or biochemical substance comprises the steps of illuminating the substance with electromagnetic radiation, causing the electromagnetic radiation to interact with the substance directly or indirectly through a transducer, and detecting electromagnetic radiation resulting from the interaction with the substance, the detection using the detection method according to the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention are described in greater detail hereinafter relative to the attached schematic drawings.
  • FIGS. 1 and 2 show cross-sectional views of a CCD part of a first and a second embodiment of a photodetector pixel according to the invention, respectively.
  • FIG. 3 shows electric potential distributions versus the vertical coordinate (a) for a pixel with a homogeneous substrate, and (b) for a pixel with a buried channel.
  • FIGS. 4 and 5 show cross-sectional views of a CCD part of a fourth and a fifth embodiment of a photodetector pixel according to the invention, respectively.
  • FIG. 6 shows the principle of adaptive background suppression (or minimal charge transfer).
  • FIG. 7 shows the lateral potential distribution for the sense diffusion, the isolation gate and the integration gate of a photodetector pixel.
  • FIG. 8 shows a flow diagram of a first embodiment of the photodetecting method according to the invention.
  • FIG. 9 shows the method of FIG. 8 in an analogous representation as FIG. 7, for different times.
  • FIGS. 10 and 11 show flow diagrams of a second and a third embodiment of the photodetecting method according to the invention.
  • FIG. 12 shows a diagram for performing the first embodiment of the photodetecting method according to the invention.
  • FIGS. 13-16 show setups for four application examples for the camera according to the invention.
  • FIG. 17 shows a light-wavelength modulation as applied in the example of FIG. 16, versus time.
  • FIG. 18 shows a diagram of the intensity vs. the light wavelength as measured in the setup of FIG. 16, for two different samples.
  • FIG. 19 shows a setup for a fifth application example for the camera according to the invention.
  • FIG. 20 shows a diagram of the intensity vs. the position as measured in the setup of FIG. 19, for two different samples.
  • FIG. 21 shows a setup for a sixth application example for the camera according to the invention.
  • DESCRIPTION OF PREFERRED EMBODIMENTS
  • FIG. 1 shows a cross-sectional view of a CCD part of a first embodiment of a photodetector pixel 1 according to the invention. The pixel 1 comprises the well-known structures such as a left photogate PGL, a middle photogate PGM and a right photogate PGR. The region beneath the photogates PGL, PGM and PGR can be called the “active region” of the pixel, since this is the region in which incident radiation In is converted into charge carriers. The pixel 1 further comprises either at least one integration gate (IG) for storing charge carriers generated in the active region and at least one dump site (DDiff), or at least two integration gates, the integration gates (IG) and the dump-site (DDiff) being placed outside the active region. The embodiment of FIG. 1 comprises an integration gate IG, an isolation gate OUTG, a sense diffusion SDiff and a dump diffusion DDiff. All these elements PGL, PGM, PGR, IG, OUTG, SDiff, DDiff are arranged above or beneath a gate-oxide layer 0 on a substrate S made of, e.g., bulk silicon of the p doping type. A shield layer SL typically made of metal protects all elements except the photogates PGL, PGM, PGR from incident light In. The shield layer SL has an opening OP in the active region of the photogates PGL, PGM, PGR, so that the photogates PGL, PGM, PGR can be exposed to the incident light In passing through the opening OP.
  • A lateral potential distribution Φ(x) on top of the substrate S is also drawn in FIG. 1 as a function of a horizontal coordinate x. By applying proper voltages to the photogates PGL, PGM, PGR, the potential gradient in the substrate S can be influenced. In the state shown in FIG. 1, the potential under the integration gate IG is high and decreases towards the right photogate PGR, so that charge carriers generated by the incident light In are transported to the integration gate IG, where they are integrated over a certain exposure time. Although the potential distribution Φ(x) is often graphically represented as a sequence of ideal, discrete steps and/or wells, this is not the case in practice. The real potential distribution Φ(x) is rather continuous, as sketched in FIG. 1. This continuous character of the potential distribution Φ(x) results in a potential minimum min beneath the right photogate PGR. Thus, charge carriers optically generated in the region on the right-hand side of the minimum min are not transported to the integration gate IG but rather to the dump diffusion DDiff. Such an undesired electron flow substantially decreases the shutter efficiency of the pixel 1. An analogous undesired effect occurs if the potential Φ(x) increases from the left photogate PGL towards the right photogate PGR.
  • The invention avoids the above-described charge-separation imperfection by a design of the shield layer SL that takes into account the predicted real potential distribution Φ(x). In pixels known up to now, the opening in the shield layer made possible an illumination of the entire photogate area by the incident light, i.e., in the representation of FIG. 1, the edges of the shield layer coincided with the outer edges of the left and right photogates, respectively. According to the invention, however, the left and right photogates PGL, PGR are partially covered by the shield layer SL, and the edges of the shield layer SL are located at the position where the minimum min of the potential distribution Φ(x) is expected to be. This position depends on the geometry and the materials of the pixel 1 and on the voltages applied. It may be determined by calculation. Simulation tools, typically computer software, for calculating such potential distributions Φ(x) are available.
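  • As a minimal illustration of this simulation-based placement, the following sketch locates the minimum of a hypothetical simulated profile Φ(x) and reports its position as the candidate shield-layer edge; the profile, units and gate extent are invented for the example and are not taken from the patent.

```python
# Illustrative sketch only: given a simulated lateral potential profile Phi(x),
# find the expected potential minimum under the right photogate and use its
# position as the right-hand edge of the shield-layer opening.
import numpy as np

x = np.linspace(0.0, 12.0, 241)                      # lateral coordinate in um (assumed pixel width)
phi = 2.0 - 1.5 * np.exp(-((x - 8.5) / 1.0) ** 2)    # toy potential in volts, dip near x = 8.5 um

gate_mask = (x >= 7.0) & (x <= 10.0)                 # assumed extent of the right photogate
i_min = np.argmin(np.where(gate_mask, phi, np.inf))  # minimum of Phi(x) within that region
print(f"expected potential minimum at x = {x[i_min]:.2f} um -> place shield edge there")
```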
  • It has been found that a shielding layer SL as shown in FIG. 1 shields well in case of normal incidence (α=0) of light on the pixel, but not in other cases, and especially not for large incidence angles α of, e.g., 15° and more. This is due to the three-dimensional geometric arrangement of the gates and the shielding layer SL. The latter is placed about 4 μm (depending on the process used) above the surface of the substrate S, so that light In impinging at an angle α≠0 reaches, e.g., part of the integration gate IG. This is avoided in a second embodiment of a pixel according to the invention, shown in FIG. 2. This embodiment uses, besides a topmost shielding layer SL1, other metal layers SL2, SL3, which lie closer to the substrate S, as further shielding layers. Such metal layers SL2, SL3, . . . are provided, e.g., in a standard complementary-metal-oxide-semiconductor (CMOS) process for interconnecting lines. The lowermost metal layer SL3, which lies typically about 1 μm above the surface of the substrate S, defines most precisely the position on which light In will impinge, with a low degree of dependence on the angle of incidence α.
  • Part of the light In could be coupled between the metal layers SL1, SL2, SL3, and guided to very distant points as in a waveguide. To avoid this, vertical opaque barriers B are provided. These barriers B can be designed and realized as the vias known from the standard CMOS process, i.e., as tungsten connections through the insulating interlayers between the horizontal metal layers SL1, SL2, SL3.
  • FIG. 3 shows transverse potential distributions Φ(z) in the pixel substrate S versus the vertical coordinate z for two different substrates S. In case of a homogeneous semiconductor substrate S and a positive voltage applied at the gate, the potential distribution Φ(z) has a maximum value at the surface of the substrate S and monotonically decreases with the depth z in the substrate S, as shown in FIG. 3(a). According to another embodiment of the invention, a transverse inhomogeneity BC is provided in the substrate S. Such an inhomogeneity can be realized, e.g., as a buried channel BC of an n doped material in the p bulk substrate S, with a thickness of 0.3 μm to 3 μm and preferably about 1 μm. As schematically shown in FIG. 3(b), the potential distribution Φ(z) now has its maximum value inside the buried channel BC, and this maximum is even higher than that of FIG. 3(a). The inhomogeneity BC thus helps to create a high potential inside the substrate S and thereby enhances and speeds up the charge separation. It also reduces noise, since flicker noise is much higher for charges moving along the substrate surface due to numerous trapping centres at the silicon/silicon-dioxide interface. This is avoided in the embodiment of FIG. 3(b).
  • In both cases shown in FIGS. 3(a) and (b), respectively, it is noted that the electric potential Φ reaches about z = 10 μm into the silicon substrate S. Charge carriers generated in this area 0≦z≦10 μm are transported towards the potential maximum due to a vertical drift field (i.e., they are collected). However, the absorption length of red and infra-red light in silicon is up to 30 μm. Consequently, there are also charge carriers photogenerated in the deep substrate region z>10 μm without such a vertical drift field. These charge carriers are undesired, because they first have to diffuse to the potential well in order to be collected. Given that the diffusion of these charges happens in the neutral bulk, no predefined direction for this diffusion exists. Therefore, the resulting net movement cannot be predicted. It is well possible that an electron is created below one pixel, diffuses in the bulk over a net distance of several pixel pitches and is then collected by a different pixel than the one below which it was created. An embodiment of the pixel 1 according to the invention in which this problem is eliminated is shown in FIG. 4. The bulk substrate S is made of a material that impedes charge-carrier transport and/or supports charge-carrier recombination, e.g., a bad-quality and/or a highly doped silicon substrate of the p doping type. Charge carriers photogenerated in the bulk of such a substrate S are trapped by material defects and/or recombine before reaching the drift field region. In order to make photodetection possible at all, a detection layer DL of good quality and low doping is arranged on the surface of the substrate S. The detection layer DL can be, e.g., an epitaxial silicon layer with a thickness of 2 μm to 20 μm, and preferably of 5 μm to 10 μm.
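  • The impact of the limited drift-field depth can be estimated with a simple Beer-Lambert calculation; the sketch below uses the 10 μm drift depth and the 30 μm absorption length quoted above, ignores reflection and assumes that all remaining light is eventually absorbed in the substrate, so the numbers are only indicative.

```python
# Illustrative sketch only: Beer-Lambert estimate of how much light is absorbed
# below the ~10 um drift-field region, given a ~30 um absorption length.
import math

absorption_length_um = 30.0   # red / near-infrared light in silicon (upper bound from the text)
drift_depth_um = 10.0         # depth reached by the vertical drift field

frac_below = math.exp(-drift_depth_um / absorption_length_um)
print(f"absorbed within the drift region: {1.0 - frac_below:.0%}")    # ~28%
print(f"absorbed deeper, i.e. diffusing carriers: {frac_below:.0%}")  # ~72%
```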
  • Another embodiment of the pixel 1 according to the invention which deals with the problem set forth with reference to FIG. 4 is shown in FIG. 5. In contrast to the embodiment of FIG. 4, light In is prevented from penetrating into deeper regions of the substrate S, or even reflected back from deeper regions of the substrate S into a region DL beneath the substrate surface, where it can be used for the photodetection process. For this purpose, an opaque and/or reflecting layer R is provided in the substrate S between said detection layer DL (epitaxial layer) and the highly doped bulk S. The opaque and/or reflecting layer R can be a metal layer, e.g., an aluminum layer and/or a silicon dioxide layer. In the first case (with a metal layer), the metal layer has to be electrically isolated from the detection layer DL. The preferred configuration in this case is a silicon absorption layer on top, a silicon dioxide layer for the isolation and the metal layer for the reflection. In the second case (without a metal layer), the substrate S is preferably a silicon-on-insulator (SOI) substrate, i.e., a silicon substrate with an oxide layer buried in it.
  • One preferred application of the pixel 1 according to the invention is in a solid-state time-of-flight (TOF) range camera. The inventive methods described in the following are suitable for use in such a camera.
  • A first embodiment of a photodetecting method according to the invention relates to a TOF range camera, the pixels of which have at least one active region and at least two (e.g., four) charge storages for on-pixel driving voltage control or at least one charge storage for off-pixel driving voltage control, which are used to store the different charge types.
  • The demodulation of a modulated light signal can be obtained, e.g., by storing the different charge types deriving from different phases (e.g., 0°, 90°, 180° and 270°).
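  • The four stored samples are typically combined using the well-known sinusoidal demodulation formalism; the sketch below shows one common convention for recovering phase, amplitude, offset and distance from four taps, with made-up tap values (the patent itself does not spell out these formulas).

```python
# Illustrative sketch of a standard four-tap demodulation of a sinusoidally
# modulated TOF signal (one common convention; tap values are invented).
import math

def demodulate(a0, a1, a2, a3, f_mod):
    """a0..a3: charges sampled at 0, 90, 180 and 270 degrees; f_mod in Hz."""
    c = 3.0e8                                          # light velocity in air, m/s
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    amplitude = 0.5 * math.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2)
    offset = 0.25 * (a0 + a1 + a2 + a3)                # background level
    distance = phase / (2 * math.pi) * c / (2 * f_mod)
    return phase, amplitude, offset, distance

print(demodulate(1000, 1200, 600, 400, 20e6))
```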
  • In a CCD pixel, the photogenerated charge is converted to an output voltage by the storage capacitance. It has been found that the storage capacitance of the integration gate IG is several times higher than the capacitance of the sense diffusion SDiff. Therefore, the limiting storage capacitance is given by the sense diffusion SDiff and not by the integration gate IG. However, the readout capacitance cannot be designed arbitrarily high because of noise restrictions and the resolution of the analog-to-digital conversion. The main restriction of the dynamic range is given by background illumination. The background usually generates for all light-sensitive areas an equal amount of charge carriers that are not needed for the distance calculation. The inventive concept consists in taking into account only those charge carriers which contribute to the difference. In other words: a constant offset is subtracted from all signals. This is schematically illustrated in FIG. 6. With the signals thus reduced, a smaller readout capacitance can be used and, even so, acceptable output voltage signals are obtained.
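  • A minimal numerical sketch of this offset subtraction, with invented tap charges: subtracting the common background level leaves the tap differences, and hence the distance information, unchanged.

```python
# Illustrative sketch of the offset-subtraction idea: only the charge above the
# common background level has to be transferred to the sense node.
taps = {"0deg": 5200, "90deg": 5600, "180deg": 4800, "270deg": 5000}   # made-up values

offset = min(taps.values())                              # charge common to all taps (background)
reduced = {tap: charge - offset for tap, charge in taps.items()}
print(reduced)   # {'0deg': 400, '90deg': 800, '180deg': 0, '270deg': 200}
```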
  • FIG. 7 schematically shows the lateral potential distribution Φ(x) for the sense diffusion SDiff, the barrier-last CCD gate (the isolation gate OUTG) and the second last CCD gate (the integration gate IG, charge storage) of a photodetector pixel 1 as shown in FIG. 1. The above-mentioned offset subtraction is performed by transferring not all the charge carriers from the integration gate IG to the sense diffusion SDiff, but only those charge carriers which are necessary to perform correct distance calculations. After having collected all charge carriers on the integration gate IG, its control voltage is decreased to such a level for which every sense node Sdiff receives charge carriers. The isolation gate OUTG then pinches off the remaining charge carriers which are of the same amount for all photosensitive areas. Thus, the differential functionality is ensured.
  • The appropriate driving voltage for the integration gates IG can be adjusted on-pixel by a closed loop from the sense nodes to the driving voltage, as shown in the flow chart of FIG. 8. By implementing appropriate integrated circuits, the loop can be performed pixelwise. Alternatively, the potential of the integration gate IG can be stepwise decreased and the pixel can be read out off-pixel until all samplings obtain a signal. This requires a non-destructive readout or a summing function off-pixel.
  • FIG. 9 illustrates the adaptive background subtraction for a pixel 1 with four photosensitive areas corresponding to the phases 0°, 90°, 180°, 270° (cf. FIG. 6), for six different, subsequent times t0-t5. It illustrates the on-pixel as well as the off-pixel background pinch-off. Thus, the dynamic range is increased, and high accuracies are possible due to longer possible integration times.
  • A second embodiment of a photodetecting method according to the invention also relates to a TOF range camera with at least one photosensitive area and a plurality of charge storage areas. This embodiment takes into account the possibility of providing a feedback loop from the sense node to the integration gate, and is based on the recognition that for a range measurement, an absolute intensity value (and thus an absolute integration time) is not important, since only phase information is extracted from the plurality of storage areas of a pixel. Therefore, different pixels can have different integration times, as long as the integration time is the same for all charge storage areas within each pixel. The integration time for an individual pixel is preferably chosen such that it yields a high signal just below the saturation level. Thus, the dynamic range is increased. A flow diagram of the method is shown in FIG. 10.
• In the known TOF pixels, a barrier (the isolation gate OUTG) prevents the charge carriers from drifting from the integration gate IG to the sense node SDiff. During integration, all charge carriers are kept back in the integration well, and only after exposure are they transferred to the output node SDiff in the so-called shift process. Due to this behaviour, continuous monitoring of the sense nodes SDiff is not possible. By contrast, in the method according to the invention, the shift process is performed during integration in short shift intervals until a given threshold close to saturation is reached. A simplified flow diagram of this embodiment is shown in FIG. 11. In this example, a longer integration interval of, e.g., 5 to 200 μs is followed by a shorter shift interval of, e.g., 1 μs. This loop is repeated as long as the output of each photosensitive area is smaller than a certain threshold. Once the output reaches the threshold, the shift process is no longer performed, so that saturation at the sense node SDiff is avoided.
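• A minimal sketch of this integrate-and-shift loop, with interval durations taken from the example above and with integrate(), shift() and read_outputs() as hypothetical placeholders:

```python
INTEGRATION_INTERVAL_US = 100   # e.g. within the 5-200 µs range mentioned above
SHIFT_INTERVAL_US = 1           # short shift interval, per the example above

def integrate_with_saturation_guard(threshold, max_cycles=1000):
    """Repeat integration and shifting until any sense-node output approaches
    the threshold close to saturation; then stop shifting so that the sense
    node SDiff does not saturate. All called functions are placeholders."""
    for _ in range(max_cycles):
        integrate(INTEGRATION_INTERVAL_US)   # collect charge under the photogates
        outputs = read_outputs()             # monitor the sense-node outputs
        if any(o >= threshold for o in outputs):
            break                            # threshold reached: no further shifts
        shift(SHIFT_INTERVAL_US)             # transfer charge to the sense nodes
    return read_outputs()
```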
• FIG. 12 shows a simple circuit diagram for performing the photodetection method according to the invention. The output signals from the sense nodes or sense diffusions SDiff are fed into a comparator COMP, where they are compared with a decreasing control voltage int_ref for the integration gate IG. The output of the comparator COMP is used as the decreasing potential applied to all integration gates IG. A saturation flag is set when the potential of the integration gates IG falls below a certain value predetermined by a reference signal sat_ref.
  • In addition to the information obtained from the sampling signals, i.e., the amounts of charges from the sense diffusions SDiff (cf. FIG. 7), the potential of the integration gate IG can be read out as a further quantity to be measured. This quantity corresponds to the DC offset value common to the charges accumulated in all integration gates IG of a pixel. From this quantity, further information can be extracted, e.g., the level of the background illumination and/or whether an overflow of at least one integration gate IG occurred during the charge-storage process.
• A third embodiment of a photodetecting method according to the invention also relates to a TOF range camera comprising a plurality of photosensitive areas. This method makes it possible to check whether a pixel 1 is intact and works correctly. The idea is to create redundancy by performing at least two measurements of the same scene, but with different phases of the emitted CW-modulated light. A phase shift of δ (in degrees) introduces an artificial distance shift of
• ΔL = (δ / 360°) · c / (2f),
• wherein c ≈ 3·10^8 m/s is the speed of light in air and f is the modulation frequency. In an example where f = 20 MHz and δ = 180°, an artificial distance shift of ΔL = 3.75 m should be obtained. If this is not the case for a certain pixel, said pixel is faulty.
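• The example value can be checked with a one-line computation of the formula above (only c is rounded to 3·10^8 m/s):

```python
c = 3e8          # speed of light in air, m/s
f = 20e6         # modulation frequency, Hz
delta = 180.0    # introduced phase shift, degrees

delta_L = (delta / 360.0) * c / (2 * f)
print(delta_L)   # 3.75 (metres), the expected artificial distance shift
```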
  • Another preferred field of application of the pixel according to the invention is (bio-)chemical sensing. Some examples for such applications are given in the following.
• A first application concerns (bio-)chemical sensors using luminescent labels. Two general measurement methods aiming at high sensitivity apply, namely time-domain and frequency-domain measurements, as described in “Elimination of Fluorescence and Scattering Backgrounds in Luminescence Lifetime Measurements Using Gated-Phase Fluorometry” by H. M. Rowe et al., Analytical Chemistry, 2002, 74(18), 4821-4827, and in “Principles of Fluorescence Spectroscopy” by Lakowicz J., Kluwer Academic and Plenum Publishers, New York, pages 1-24 (1999), respectively. In frequency-domain measurements, the luminescent sample is excited with periodically modulated light, and the detection of the luminescent emission occurs at the same frequency. This measurement method is often referred to as the “lock-in technique”. The second measurement method uses repeated excitation with short pulses, with subsequent detection of the luminescent emission inside a different time window. This method is often referred to as “time-gated detection”. Both measurement methods may be used for the determination of luminescent intensity and/or lifetime. All measurement methods cited above, and combinations thereof, will profit from the present invention through the properties of parallel detection combined with on-chip signal treatment, the latter leading to a lower detection limit in comparison to conventional imaging systems.
• A first example of a practical implementation is luminescence imaging, e.g., fluorescence microscopy, using the general measurement methods described above. A fluorescence-microscopy setup with a sample 9, a modulated light source 2, a microscope 3, a wavelength filter 4 and a camera 10 according to the invention is schematically shown in FIG. 13. The camera 10 comprises an image sensor comprising an array of pixels 1 according to the invention. The light source 2 is triggered by the camera 10. The advantages over other detection methods are that imaging of the sample 9 and parallel detection of the luminescence signals are possible with a low detection limit. Compared to other imaging methods, the inventive method described above yields a higher sensitivity and a lower detection limit.
  • Another application example of the camera 10 according to the invention in the field of biochemical sensing is the readout of a sensor microarray 90. The microarray 90 may be arranged on a planar waveguide 5, as shown in FIG. 14. A light source 2, e.g., a laser, emits light onto an input grating 51 on the waveguide 5. The light is coupled into the waveguide 5 and guided to the microarray 90, where its evanescent portion excites luminescence. The microarray 90 is imaged onto the camera 10 by an optical imaging system 6. Thus, a part of the luminescent light is received by the camera 10. The general luminescent measurement methods described above may be used by modulating the light source 2 and triggering it by the camera 10. Another part of the luminescent light is coupled into the waveguide 5 and propagates towards an output grating 52 where it is coupled out. This part can also be detected, as is known from the prior art. The measurements by the camera 10 according to the invention and by the conventional outcoupling approach may be combined in order to efficiently suppress the excitation light. Shielding means 7 are preferably provided for shielding the camera 10 from undesired scattered light. The advantage of this setup over existing, highly sensitive schemes is that parallel detection of part of or the whole microarray 90 is possible due to the camera 10 according to the invention.
• In still another example, shown in FIG. 15, samples to be sensed are deposited on sensing pads or measurement units arranged on a planar waveguide 5. Each sensing pad comprises a diffraction grating 53-56. Light from a light source 2 is coupled into the waveguide 5 in the region of a sensing pad by the corresponding diffraction grating 53, excites luminescence in the sample, and the luminescent light and the excitation light are coupled out by the same diffraction grating 53. A wavelength filter 4 is preferably arranged in the path of the outcoupled light in order to suppress the excitation light. However, since such a suppression will never be perfect, the use of the camera 10 according to the invention is of great advantage. Instead of using a single grating, each measurement unit may comprise a pair of input and output coupling gratings.
  • While the previous four examples concerned luminescence sensing methods, the following examples concern label-free sensing methods.
• The setup of a fifth application example of the camera 10 according to the invention in the field of biochemical sensing is shown in FIG. 16. A planar waveguide 5 is provided with pairs of diffraction gratings 57, 58. A first grating 57 of each pair is used as the input grating and is comprised in a sensing pad for a sample to be sensed, whereas a second grating 58 of each pair is used as the output grating. Modulated light emitted by a light source 2 is coupled into the waveguide 5 under a certain angle φin. As indicated in FIG. 17, the wavelength λ of the emitted light is modulated, e.g., periodically swept over a certain range. This is possible, e.g., with a vertical-cavity surface-emitting laser (VCSEL) as the light source 2. The modulation is triggered by the camera 10. A resonance, i.e., an intensity peak, is detected at a certain wavelength λ1 that fulfils the well-known incoupling condition for input gratings on planar waveguides, as shown in FIG. 18. When a change in the optical properties of the sample occurs, e.g., when additional molecules are deposited, the intensity peak is shifted from the first wavelength λ1 to a second, different wavelength λ2. Thus, information about the optical properties of the sample is obtained. This method allows imaging of the output grating 58 and is cost-effective compared to other wavelength-interrogated optical-sensing (WIOS) methods. It can be combined with a luminescence-sensing method.
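• A minimal sketch of how the resonance wavelength and its shift could be extracted from the swept intensity trace (the use of NumPy and the array names are assumptions for illustration only):

```python
import numpy as np

def resonance_wavelength(wavelengths, intensities):
    """Wavelength at which the detected intensity peaks, i.e. the wavelength
    fulfilling the incoupling condition of the grating (cf. FIG. 18)."""
    return wavelengths[np.argmax(intensities)]

# Two sweeps over the same wavelength range, before and after a change of the sample:
# lambda_1 = resonance_wavelength(wl, intensity_before)
# lambda_2 = resonance_wavelength(wl, intensity_after)
# The shift lambda_2 - lambda_1 reflects the change in the optical properties
# of the sample, e.g. the deposition of additional molecules.
```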
• Using the lock-in properties of the camera 10 according to the invention, the detection occurs at the same frequency as the wavelength sweep of the light source 2. The wavelength peak position therefore translates into an equivalent phase, which is calculated by the camera 10 from the individual samples acquired within the acquisition period. It can be calculated, for instance, using the known formalism for sinusoidal signal modulation, as it is also used for distance measurements (see, for instance, the article “Solid-State Time-Of-Flight Range Camera” by R. Lange et al., published in IEEE Journal of Quantum Electronics, vol. 37, 2001, pp. 390 ff.).
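• For sinusoidal modulation sampled at four points per modulation period, the equivalent phase could be computed as in the following sketch (a standard four-sample demodulation; the sign convention is one common choice and is not prescribed by the patent):

```python
import math

def equivalent_phase(a0, a1, a2, a3):
    """Equivalent phase from four samples taken 90° apart within one
    modulation period (four-sample lock-in demodulation)."""
    return math.atan2(a1 - a3, a0 - a2)

# Self-check with synthetic samples of a sinusoid of known phase:
phase = 0.7
samples = [math.cos(k * math.pi / 2 - phase) for k in range(4)]
print(equivalent_phase(*samples))  # ≈ 0.7
```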
• A still further application example of the camera 10 according to the invention in the field of biochemical sensing is discussed with reference to FIG. 19. The setup is similar to that of FIG. 16, with the differences mentioned in the following. The light emitted by the light source 2 is distributed over a certain range of angles of incidence φin, e.g., by a collimating lens 21, such that light is always coupled into the waveguide 5. The sensing pad is on the output grating 58, not on the input grating 57. Since the outcoupling angle φout depends on the wavelength λ, the outcoupled light beam periodically sweeps over the camera 10. Hence, at a given position a1 on the camera 10, i.e., for a given pixel, the maximum intensity I occurs at a given time, characterized by an equivalent phase when using the lock-in properties of the camera 10 as described above; cf. FIG. 20. This results in a characteristic phase distribution on the camera 10. A change in the optical properties of the sample results in a shift of the intensity maximum from position a1 to another position a2, and hence in a shift of the phase distribution. Thus, information about the optical properties of the sample is obtained. Compared to the WIOS method discussed above, this method offers an increased measurement range.
• In the above-mentioned drawings and descriptions, only one sensing pad is considered for the sake of clarity. However, in preferred embodiments, multiple sensing pads (even many or all pads of one or several chips) can be illuminated at the same time by the same light source 2, and different pixels or regions on the camera 10 are used to derive the plurality of output signals from the plurality of sensing pads quasi-simultaneously. FIG. 21 shows an embodiment of the apparatus according to the invention in which all sensing pads or gratings of the waveguide 5 are illuminated by the light source 2, e.g., by means of an optical system 23, 24 for widening the light beam and offering a limited range of incident angles to the gratings. All sensing pads or gratings of the waveguide 5 are imaged, e.g., by means of imaging optics 25, onto a photodetector chip 10 with an integrated array of lock-in pixels forming a lock-in camera. The camera is connected to the light source 2 as described above. The relevant phase information is extracted at the pixel level of the camera; a sketch of such a region-wise evaluation is given after the next paragraph.
  • Thus, parallel processing and readout of the whole sensor chip 5 is possible.
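• A minimal sketch of such a region-wise evaluation, assuming NumPy and an already computed per-pixel phase image; the variable and function names are illustrative, not taken from the patent:

```python
import numpy as np

def phases_per_pad(phase_image, pad_rois):
    """Average the per-pixel equivalent phase over each sensing-pad region of
    interest, yielding one output value per pad and thus a quasi-simultaneous
    readout of all pads imaged onto the lock-in camera."""
    return {name: float(np.mean(phase_image[rows, cols]))
            for name, (rows, cols) in pad_rois.items()}

# Example: a synthetic 8x8 phase image with two rectangular pad regions.
phase_image = np.zeros((8, 8))
phase_image[0:4, 0:4] = 0.3   # region imaged from pad A
phase_image[4:8, 4:8] = 1.1   # region imaged from pad B
pad_rois = {"pad_A": (slice(0, 4), slice(0, 4)),
            "pad_B": (slice(4, 8), slice(4, 8))}
print(phases_per_pad(phase_image, pad_rois))  # {'pad_A': 0.3, 'pad_B': 1.1}
```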
  • This invention is not limited to the preferred embodiments described above, to which variations and improvements may be made, without departing from the scope of protection of the present patent. In particular, various embodiments of the invention may be combined with each other. Such a combination may yield an even better improvement than a single embodiment of the invention as described above.
  • LIST OF REFERENCE SIGNS
  • 1 Pixel
  • 10 Camera
  • 2 Light source
• 21 Collimating lens
  • 23, 24 Optical system
  • 25 Imaging optics
  • 3 Microscope
  • 4 Wavelength filter
  • 5 Planar waveguide
  • 51 Input grating
  • 52 Output grating
  • 53-56 Diffraction gratings
  • 57 Input grating
  • 58 Output grating
  • 6 Imaging system
  • 7 Shielding means
  • 9 Sample
• 90 Sensor microarray
  • a Position on camera
  • B Vertical opaque barrier
  • BC Transverse inhomogeneity, buried channel
  • COMP Comparator
  • DDiff Dump diffusion
  • DL Detection layer
  • I Intensity of light
  • IG Integration gate
  • In Incident electromagnetic radiation
  • int_ref Control voltage for integration gate
  • min Potential minimum
  • O Oxide layer
  • OP Opening in shielding layer
  • OUTG Isolation gate
  • PGL Left photogate
  • PGM Middle photogate
  • PGR Right photogate
  • R Opaque and/or reflecting layer
  • S Substrate
  • sat_ref Reference signal for saturation
  • SDiff Sense diffusion
  • SL Shielding layer
  • t Time
  • x Horizontal (lateral) coordinate
  • z Vertical (transverse) coordinate
  • α Angle of incidence of radiation
  • φ Incoupling or outcoupling angle
  • λ Wavelength of light
• Φ Potential distribution

Claims (11)

1-46. (canceled)
47. A method for sensing a chemical and/or biochemical substance, comprising:
illuminating the substance with modulated electromagnetic radiation, causing the electromagnetic radiation to interact with the substance directly or indirectly through a transducer, and
detecting electromagnetic radiation from the substance with a photodetector that comprises an active region for converting incident electromagnetic radiation into charge carriers and an integration region for storing the charge carriers synchronously with the modulation of the modulated electromagnetic radiation.
48. The method according to claim 47, wherein the substance is deposited on a sensing pad arranged on a surface of a planar waveguide.
49. The method according to claim 47, wherein the interaction comprises luminescence, and luminescent radiation emitted by the substance is detected.
50. The method according to claim 48, wherein the sensing pad is illuminated by a resonant electromagnetic field excited in the waveguide, and luminescent radiation that does not excite a resonant electromagnetic field in the waveguide is detected.
51. The method according to claim 48, wherein the sensing pad is illuminated by a first resonant electromagnetic field excited in the waveguide, and electromagnetic radiation that excites a second resonant electromagnetic field in the waveguide is detected.
52. The method according to claim 51, wherein the sensing pad comprises a diffraction grating structure.
53. The method according to claim 52, wherein electromagnetic radiation is coupled into the waveguide and coupled out of the waveguide by the same grating structure comprised in the sensing pad.
54. The method according to claim 52, wherein electromagnetic radiation is coupled into the waveguide by a first grating structure and coupled out of the waveguide by a second grating structure that is not identical with the first grating structure, and the sensing pad comprises the first or the second grating structure.
55. The method according to claim 54, wherein the sensing pad comprises the first grating structure, a beam of electromagnetic radiation is coupled into the waveguide under a constant, well-defined incoupling angle (φin), the wavelength (λ) of the beam is periodically modulated, the modulation being triggered by a camera performing the detection, and an intensity of outcoupled electromagnetic radiation is detected by the camera.
56. The method according to claim 54, wherein the sensing pad comprises the second grating structure, electromagnetic radiation is coupled into the waveguide under a limited range of incoupling angles, the wavelength (λ) of the beam is periodically modulated, the modulation being triggered by a camera performing the detection, and a position-dependent phase of an outcoupled beam of electromagnetic radiation is detected by the camera.
US12/987,669 2004-07-26 2011-01-10 Solid-State Photodetector Pixel and Photodetecting Method Abandoned US20110101241A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/987,669 US20110101241A1 (en) 2004-07-26 2011-01-10 Solid-State Photodetector Pixel and Photodetecting Method
US14/161,053 US9209327B2 (en) 2004-07-26 2014-01-22 Solid-state photodetector pixel and photodetecting method

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
EP04405475.7 2004-07-26
EP04405475A EP1622200A1 (en) 2004-07-26 2004-07-26 Solid-state photodetector pixel and photodetecting method
PCT/CH2005/000436 WO2006010284A1 (en) 2004-07-26 2005-07-25 Solid-state photodetector pixel and photodetecting method
US65851608A 2008-02-25 2008-02-25
US12/987,669 US20110101241A1 (en) 2004-07-26 2011-01-10 Solid-State Photodetector Pixel and Photodetecting Method

Related Parent Applications (3)

Application Number Title Priority Date Filing Date
PCT/CH2005/000436 Division WO2006010284A1 (en) 2004-07-26 2005-07-25 Solid-state photodetector pixel and photodetecting method
US11/658,516 Division US7897928B2 (en) 2004-07-26 2005-07-25 Solid-state photodetector pixel and photodetecting method
US65851608A Division 2004-07-26 2008-02-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/161,053 Continuation US9209327B2 (en) 2004-07-26 2014-01-22 Solid-state photodetector pixel and photodetecting method

Publications (1)

Publication Number Publication Date
US20110101241A1 true US20110101241A1 (en) 2011-05-05

Family

ID=34932218

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/658,516 Active 2026-12-22 US7897928B2 (en) 2004-07-26 2005-07-25 Solid-state photodetector pixel and photodetecting method
US12/987,669 Abandoned US20110101241A1 (en) 2004-07-26 2011-01-10 Solid-State Photodetector Pixel and Photodetecting Method
US14/161,053 Expired - Fee Related US9209327B2 (en) 2004-07-26 2014-01-22 Solid-state photodetector pixel and photodetecting method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/658,516 Active 2026-12-22 US7897928B2 (en) 2004-07-26 2005-07-25 Solid-state photodetector pixel and photodetecting method

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/161,053 Expired - Fee Related US9209327B2 (en) 2004-07-26 2014-01-22 Solid-state photodetector pixel and photodetecting method

Country Status (4)

Country Link
US (3) US7897928B2 (en)
EP (2) EP1622200A1 (en)
JP (1) JP2008513974A (en)
WO (1) WO2006010284A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8743356B1 (en) 2012-11-22 2014-06-03 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of National Defence Man-portable device for detecting hazardous material
US20140200842A1 (en) * 2011-08-12 2014-07-17 National University Corporation Toyohashi University Of Technology Device and Method for Detecting Chemical and Physical Phenomena
US10016137B1 (en) 2017-11-22 2018-07-10 Hi Llc System and method for simultaneously detecting phase modulated optical signals
US20180302582A1 (en) * 2017-04-18 2018-10-18 Stmicroelectronics (Crolles 2) Sas Time-of-flight detection pixel
US10219700B1 (en) 2017-12-15 2019-03-05 Hi Llc Systems and methods for quasi-ballistic photon optical coherence tomography in diffusive scattering media using a lock-in camera detector
US10299682B1 (en) 2017-11-22 2019-05-28 Hi Llc Pulsed ultrasound modulated optical tomography with increased optical/ultrasound pulse ratio
US10368752B1 (en) 2018-03-08 2019-08-06 Hi Llc Devices and methods to convert conventional imagers into lock-in cameras
US10948597B2 (en) 2014-08-29 2021-03-16 Denso Corporation Time-of-flight distance measurement device
US11206985B2 (en) 2018-04-13 2021-12-28 Hi Llc Non-invasive optical detection systems and methods in highly scattering medium
WO2022125973A1 (en) * 2020-12-11 2022-06-16 Quantum-Si Incorporated Integrated circuit with improved charge transfer efficiency and associated techniques
US11857316B2 (en) 2018-05-07 2024-01-02 Hi Llc Non-invasive optical detection system and method

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1622200A1 (en) * 2004-07-26 2006-02-01 CSEM Centre Suisse d'Electronique et de Microtechnique SA Solid-state photodetector pixel and photodetecting method
EP1748304A1 (en) * 2005-07-27 2007-01-31 IEE International Electronics & Engineering S.A.R.L. Method for operating a time-of-flight imager pixel
US7483151B2 (en) 2006-03-17 2009-01-27 Alpineon D.O.O. Active 3D triangulation-based imaging method and device
DE202007018027U1 (en) 2007-01-31 2008-04-17 Richard Wolf Gmbh endoscope system
DE102007006556B4 (en) 2007-02-09 2012-09-06 B/E Aerospace Systems Gmbh Method for emergency oxygen supply in an aircraft
US7889257B2 (en) 2007-07-18 2011-02-15 Mesa Imaging Ag On-chip time-based digital conversion of pixel outputs
JP5171158B2 (en) * 2007-08-22 2013-03-27 浜松ホトニクス株式会社 Solid-state imaging device and range image measuring device
JP5208692B2 (en) * 2008-11-17 2013-06-12 本田技研工業株式会社 Position measuring system and position measuring method
US8681321B2 (en) * 2009-01-04 2014-03-25 Microsoft International Holdings B.V. Gated 3D camera
JP5375141B2 (en) * 2009-02-05 2013-12-25 ソニー株式会社 Solid-state imaging device, method for manufacturing solid-state imaging device, driving method for solid-state imaging device, and electronic apparatus
DE102009034163A1 (en) * 2009-07-22 2011-02-03 Ulrich Lohmann Parallel light bus system for spatial distribution of optical information signals on optical printed circuit board, has planar light-conducting space for transmitting optical information signals between electronic board layers
DE102009037596B4 (en) * 2009-08-14 2014-07-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Pixel structure, system and method for optical distance measurement and control circuit for the pixel structure
JP5446738B2 (en) * 2009-11-02 2014-03-19 ソニー株式会社 Solid-state imaging device and camera system
US9229581B2 (en) 2011-05-05 2016-01-05 Maxim Integrated Products, Inc. Method for detecting gestures using a multi-segment photodiode and one or fewer illumination sources
US8716649B2 (en) 2011-05-05 2014-05-06 Maxim Integrated Products, Inc. Optical gesture sensor using a single illumination source
EP2551698B1 (en) 2011-07-29 2014-09-17 Richard Wolf GmbH Endoscopic instrument
US10324033B2 (en) * 2012-07-20 2019-06-18 Samsung Electronics Co., Ltd. Image processing apparatus and method for correcting an error in depth
EP3699577B1 (en) 2012-08-20 2023-11-08 Illumina, Inc. System for fluorescence lifetime based sequencing
EP2955539B1 (en) * 2014-06-12 2018-08-08 Delphi International Operations Luxembourg S.à r.l. Distance measuring device
KR102290502B1 (en) 2014-07-31 2021-08-19 삼성전자주식회사 Image sensor and method of fabricating the same
AU2015300766B2 (en) 2014-08-08 2021-02-04 Quantum-Si Incorporated Integrated device for temporal binning of received photons
TWI558982B (en) * 2014-09-24 2016-11-21 原相科技股份有限公司 Optical sensor and optical sensor system
CN111089612B (en) * 2014-09-24 2022-06-21 原相科技股份有限公司 Optical sensor and optical sensing system
GB201417887D0 (en) 2014-10-09 2014-11-26 Univ Aston Optical sensor
JP6520053B2 (en) 2014-11-06 2019-05-29 株式会社デンソー Optical flight type distance measuring device
EP3045896B1 (en) * 2015-01-16 2023-06-07 Personal Genomics, Inc. Optical sensor with light-guiding feature
US10134926B2 (en) 2015-02-03 2018-11-20 Microsoft Technology Licensing, Llc Quantum-efficiency-enhanced time-of-flight detector
US10976257B2 (en) * 2015-06-08 2021-04-13 The Regents Of The University Of Michigan Pixel circuit and method for optical sensing
KR20180111999A (en) * 2016-02-17 2018-10-11 테서렉트 헬스, 인코포레이티드 Sensors and devices for life-time imaging and detection applications
JP6673084B2 (en) 2016-08-01 2020-03-25 株式会社デンソー Light flight type distance measuring device
WO2018119347A1 (en) 2016-12-22 2018-06-28 Quantum-Si Incorporated Integrated photodetector with direct binning pixel
JP7089719B2 (en) * 2017-02-07 2022-06-23 ナノフォトン株式会社 Spectroscopic microscope and spectroscopic observation method
MX2020013788A (en) 2018-06-22 2021-03-02 Quantum Si Inc Integrated photodetector with charge storage bin of varied detection time.
KR102618016B1 (en) 2018-12-04 2023-12-26 삼성전자주식회사 An image sensor and a distance measuring sensor using the same
US11835628B2 (en) * 2020-11-02 2023-12-05 Microsoft Technology Licensing, Llc Time-of-flight image sensor

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07153932A (en) * 1993-09-30 1995-06-16 Sony Corp Solid-state image pickup device
US5986297A (en) * 1996-05-22 1999-11-16 Eastman Kodak Company Color active pixel sensor with electronic shuttering, anti-blooming and low cross-talk
DE69720458T2 (en) 1997-10-22 2004-02-26 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Programmable spatially light-modulated microscope and microscopy method
JP3827909B2 (en) 2000-03-21 2006-09-27 シャープ株式会社 Solid-state imaging device and manufacturing method thereof
GB0011822D0 (en) 2000-05-17 2000-07-05 Photonic Research Systems Limi Apparatus and methods for phase-sensitive imaging
DE10038527A1 (en) 2000-08-08 2002-02-21 Zeiss Carl Jena Gmbh Arrangement to increase depth discrimination in optical imaging systems
WO2003014400A1 (en) * 2001-08-08 2003-02-20 Applied Precision, Llc Time-delay integration imaging of biological specimens
JP4262446B2 (en) * 2002-06-21 2009-05-13 富士フイルム株式会社 Solid-state imaging device
US8039882B2 (en) * 2003-08-22 2011-10-18 Micron Technology, Inc. High gain, low noise photodiode for image sensors and method of formation
EP1777811B1 (en) * 2005-10-19 2018-10-03 Heptagon Micro Optics Pte. Ltd. Method and Device for the demodulation of modulated optical signals

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5459323A (en) * 1990-01-12 1995-10-17 University Of Salford Measurement of luminescence
US5442169A (en) * 1991-04-26 1995-08-15 Paul Scherrer Institut Method and apparatus for determining a measuring variable by means of an integrated optical sensor module
US5891656A (en) * 1992-09-14 1999-04-06 Sri International Up-converting reporters for biological and other assays using laser excitation techniques
US6537829B1 (en) * 1992-09-14 2003-03-25 Sri International Up-converting reporters for biological and other assays using laser excitation techniques
US5856667A (en) * 1994-11-14 1999-01-05 Leica Ag Apparatus and method for detection and demodulation of an intensity-modulated radiation field
US20040028567A1 (en) * 1996-06-28 2004-02-12 Caliper Technologies Corp. High throughput screening assay systems in microscale fluidic devices
US6395558B1 (en) * 1996-08-29 2002-05-28 Zeptosens Ag Optical chemical/biochemical sensor
US20010035568A1 (en) * 1999-08-25 2001-11-01 Rong-Fuh Shyu Lead frame for a semiconductor chip package, semiconductor chip package incorporating multiple integrated circuit chips, and method of fabricating a semiconductor chip package with multiple integrated circuit chips
US6483096B1 (en) * 1999-09-15 2002-11-19 Csem Centre Suisse D'electronique Et De Microtechnique Sa Integrated-optical chemical and biochemical sensor
US20040008394A1 (en) * 2000-04-28 2004-01-15 Robert Lange Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves
US20050023439A1 (en) * 2001-07-06 2005-02-03 Cartlidge Andrew G. Imaging system, methodology, and applications employing reciprocal space optical design
US20060108611A1 (en) * 2002-06-20 2006-05-25 Peter Seitz Image sensing device and method of
US7034317B2 (en) * 2002-12-17 2006-04-25 Dmetrix, Inc. Method and apparatus for limiting scanning imaging array data to characteristics of interest
US20090014658A1 (en) * 2004-07-26 2009-01-15 Mesa Imaging Ag Solid-state photodetector pixel and photodetecting method
US7279338B2 (en) * 2004-12-22 2007-10-09 Cargill, Incorporated Methods for determining cellular response to stimuli
US7508505B2 (en) * 2005-07-21 2009-03-24 Mesa Imaging Ag Apparatus and method for all-solid-state fluorescence lifetime imaging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Abramowitz et al. (July 16 2004). "Anatomy of a Charge-Coupled Device". Molecular Expressions. Retrieved Oct. 17, 2011. *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140200842A1 (en) * 2011-08-12 2014-07-17 National University Corporation Toyohashi University Of Technology Device and Method for Detecting Chemical and Physical Phenomena
US9482641B2 (en) * 2011-08-12 2016-11-01 National University Corporation Toyohashi University Of Technology Device and method for detecting chemical and physical phenomena
US8743356B1 (en) 2012-11-22 2014-06-03 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of National Defence Man-portable device for detecting hazardous material
US10948597B2 (en) 2014-08-29 2021-03-16 Denso Corporation Time-of-flight distance measurement device
US10951844B2 (en) * 2017-04-18 2021-03-16 Stmicroelectronics (Crolles 2) Sas Time-of-flight detection pixel
US20180302582A1 (en) * 2017-04-18 2018-10-18 Stmicroelectronics (Crolles 2) Sas Time-of-flight detection pixel
US11058301B2 (en) 2017-11-22 2021-07-13 Hi Llc System and method for simultaneously detecting phase modulated optical signals
US10299682B1 (en) 2017-11-22 2019-05-28 Hi Llc Pulsed ultrasound modulated optical tomography with increased optical/ultrasound pulse ratio
US10335036B2 (en) 2017-11-22 2019-07-02 Hi Llc Pulsed ultrasound modulated optical tomography using lock-in camera
US10420469B2 (en) 2017-11-22 2019-09-24 Hi Llc Optical detection system for determining neural activity in brain based on water concentration
US10016137B1 (en) 2017-11-22 2018-07-10 Hi Llc System and method for simultaneously detecting phase modulated optical signals
US10219700B1 (en) 2017-12-15 2019-03-05 Hi Llc Systems and methods for quasi-ballistic photon optical coherence tomography in diffusive scattering media using a lock-in camera detector
US10881300B2 (en) 2017-12-15 2021-01-05 Hi Llc Systems and methods for quasi-ballistic photon optical coherence tomography in diffusive scattering media using a lock-in camera detector
US10368752B1 (en) 2018-03-08 2019-08-06 Hi Llc Devices and methods to convert conventional imagers into lock-in cameras
US11291370B2 (en) 2018-03-08 2022-04-05 Hi Llc Devices and methods to convert conventional imagers into lock-in cameras
US11206985B2 (en) 2018-04-13 2021-12-28 Hi Llc Non-invasive optical detection systems and methods in highly scattering medium
US11857316B2 (en) 2018-05-07 2024-01-02 Hi Llc Non-invasive optical detection system and method
WO2022125973A1 (en) * 2020-12-11 2022-06-16 Quantum-Si Incorporated Integrated circuit with improved charge transfer efficiency and associated techniques

Also Published As

Publication number Publication date
JP2008513974A (en) 2008-05-01
EP1771882B1 (en) 2013-09-11
US20140203389A1 (en) 2014-07-24
US20090014658A1 (en) 2009-01-15
WO2006010284A1 (en) 2006-02-02
US7897928B2 (en) 2011-03-01
EP1771882A1 (en) 2007-04-11
US9209327B2 (en) 2015-12-08
EP1622200A1 (en) 2006-02-01

Similar Documents

Publication Publication Date Title
US9209327B2 (en) Solid-state photodetector pixel and photodetecting method
EP3519860B1 (en) System and method for determining a distance to an object
EP3625589B1 (en) System and method for determining a distance to an object
US7947939B2 (en) Detection of optical radiation using a photodiode structure
JP2760818B2 (en) Detector for spectrometer
US7420677B2 (en) Sensing photon energies of optical signals
US8665422B2 (en) Back-illuminated distance measuring sensor and distance measuring device
US9117712B1 (en) Demodulation pixel with backside illumination and charge barrier
KR102451010B1 (en) A system for determining the distance to an object
US8722346B2 (en) Chemiluminescence compact imaging scanner
US8653619B2 (en) Range sensor and range image sensor
US7705336B2 (en) Optical interrogation system and method for increasing a read-out speed of a spectrometer
US20040008394A1 (en) Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves
US9258536B2 (en) Imaging systems with plasmonic color filters
US9000349B1 (en) Sense node capacitive structure for time of flight sensor
EP3550329A1 (en) System and method for determining a distance to an object
JP4757779B2 (en) Distance image sensor
KR100783335B1 (en) Measuring method of incident light and sensor having spectroscopic mechanism employing it
CN110729315B (en) Pixel architecture and image sensor
Groom Recent progress on CCDs for astronomical imaging

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: HEPTAGON MICRO OPTICS PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MESA IMAGING AG;REEL/FRAME:037211/0220

Effective date: 20150930