WO2001093763A2 - Event localization and fall-off correction by distance-dependent weighting - Google Patents

Event localization and fall-off correction by distance-dependent weighting

Info

Publication number
WO2001093763A2
Authority
WO
WIPO (PCT)
Prior art keywords
event
energy
set forth
light
determining
Prior art date
Application number
PCT/US2001/017869
Other languages
French (fr)
Other versions
WO2001093763A3 (en)
Inventor
Frank P. Difilippo
John F. Vesel
Steven E. Cooke
Original Assignee
Philips Medical Systems (Cleveland), Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philips Medical Systems (Cleveland), Inc. filed Critical Philips Medical Systems (Cleveland), Inc.
Priority to EP01944229A priority Critical patent/EP1328825A2/en
Publication of WO2001093763A2 publication Critical patent/WO2001093763A2/en
Publication of WO2001093763A3 publication Critical patent/WO2001093763A3/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01T - MEASUREMENT OF NUCLEAR OR X-RADIATION
    • G01T1/00 - Measuring X-radiation, gamma radiation, corpuscular radiation, or cosmic radiation
    • G01T1/16 - Measuring radiation intensity
    • G01T1/161 - Applications in the field of nuclear medicine, e.g. in vivo counting
    • G01T1/164 - Scintigraphy
    • G01T1/1641 - Static instruments for imaging the distribution of radioactivity in one or two dimensions using one or several scintillating elements; Radio-isotope cameras
    • G01T1/1647 - Processing of scintigraphic data
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02 - Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 - Computerised tomographs
    • A61B6/037 - Emission tomography

Definitions

  • the present invention relates to the art of nuclear medicine and diagnostic imaging. It finds particular application in localizing a scintillation event in a gamma camera having a number of photomultipliers arranged over a camera surface. It is to be appreciated that the present invention may be used in conjunction with positron emission tomography (“PET”), single photon emission computed tomography (“SPECT”), whole body nuclear scans, transmission imaging, other diagnostic modes and/or other like applications. Those skilled in the art will also appreciate applicability of the present invention to other applications where a plurality of pulses tend to overlap, or "pile-up" and obscure one another.
  • PET positron emission tomography
  • SPECT single photon emission computed tomography
  • Diagnostic nuclear imaging is used to study a radionuclide distribution in a subject.
  • one or more radiopharmaceuticals or radioisotopes are injected into a subject.
  • the radiopharmaceuticals are commonly injected into the subject's bloodstream for imaging the circulatory system or for imaging specific organs that absorb the injected radiopharmaceutical.
  • a gamma or scintillation camera detector head is placed adjacent to a surface of the subject to monitor and record emitted radiation.
  • Each detector typically includes an array of photomultiplier tubes facing a large scintillation crystal.
  • Each received radiation event generates a corresponding flash of light (scintillation) that is seen by the closest photomultiplier tubes.
  • Each photomultiplier tube that sees an event generates a corresponding analog pulse. Respective amplitudes of the pulses generally fall off with the distance of each tube from the flash.
  • event estimation is the determination of the energy and location of an interacting gamma ray or other radiation, based on the detected electronic signals.
  • a conventional method for event positioning is known as the Anger method, which sums and weights signals seen by tubes after the occurrence of an event. The Anger method for event positioning is based on a simple first moment calculation. More specifically, the energy is typically measured as the sum of all the photomultiplier tube signals, and the position is typically measured as the "center of mass" of the photomultiplier tube signals.
  • the scintillation light pulse is mostly contained within a small subset of the tubes on a detector. For example, over 90% of a total signal is typically detected in seven (7) out of a total number of tubes, typically on the order of 50 or 60. However, imaging based only on the seven (7) closest tubes, known as clustering, has poor resolution and causes uniformity artifacts.
  • the fall-off curve varies with the depth at which a gamma photon interacts in the crystal.
  • Different energy photons have varying interaction-depth probabilities that are more pronounced in thicker crystals, which are typically used in combination with PET/SPECT cameras.
  • a disadvantage of generating a fall-off curve using a point source is the large amount of time required to move the source position. This method is also prone to errors in positioning the source accurately on the detector. It is also usually only done in one or two directions. Therefore, the assumption is made that the fall-off curve is exactly symmetric. Regenerating the fall-off curve for a different energy requires that the process be repeated. Likewise, generating the fall-off curve for a different tube requires that the process be repeated. Therefore, the assumption is usually made that the fall-off curve is invariant across different detectors or photomultiplier tubes.
  • Generating the linearity correction tables typically involves using a lead mask that contains many small holes to restrict the incident location of radiation on the crystal surface.
  • the holes represent the true location of the incident photons that interact in the detector crystal.
  • This information is used to generate a table that consists of x and y deltas that, when added to the x and y estimates, respectively, yield a corrected position estimate that more accurately reflects the true position.
  • a disadvantage is that new tables must be generated for each energy that is to be imaged, thereby increasing the calibration time.
  • the calibration mask has a limited number of holes, since each must be resolved individually, thereby limiting the accuracy of the correction. It is also increasingly expensive and difficult to calibrate for higher energy photons, since the thickness of the lead mask must increase in order to have sufficient absorption in non-hole areas.
  • Another prior art method uses separate flood uniformity correction tables for each energy.
  • a disadvantage is that new tables must be generated for each energy that is to be imaged, which increases calibration time.
  • Flood correction has the disadvantage of creating noise in the image, since the method is based on either adding or removing counts unevenly throughout the pixel matrix. This method is also sensitive to drift in either the photomultiplier tubes or electronics.
  • Another prior art method reduces the output from the closest tube. For example, an opaque dot is sometimes painted over the center of each photomultiplier tube. The sensitivity can also be reduced electronically. Unfortunately, the closest photomultiplier tube typically has the best noise statistics. Reducing its sensitivity to the event causes a resolution loss.
  • excluding the outlying tubes reduces the noise in the determined values of energy and position.
  • the most common way of excluding signals from outlying tubes includes imposing a threshold, such that tube signals below a set value are either ignored in the calculation or are adjusted by a threshold value. This method works reasonably well in excluding excess noise. However, the method fails if stray signals exist above the threshold value. Stray signals may exist at high-counting rates, when events occur nearly simultaneously in the crystal. When two events occur substantially simultaneously, their "center-of-mass" is midway between the two—where no event actually occurred. Nearly simultaneously occurring events may result in pulse-pile-up in the energy spectrum and mispositioning of events. This behavior is especially detrimental in coincidence imaging, where high-count rates are necessary.
  • the present invention provides a new and improved apparatus and method which overcomes the above-referenced problems and others.
  • a method for generating an image representation from detected radiation events is provided. Radiation from a subject in an examination region is converted into light energy events. The light energy events are received with an array of sensors. Respective sensor output values are generated in response to each received light event. For each of the light events, at least one of an initial position and energy and distances from the initial position to each sensor which received the light event is determined. For each energy event, at least one of the initial position and the energy is corrected in accordance with the determined distances. An image representation is generated from the corrected positions.
  • a nuclear camera system includes a detector for receiving radiation from a subject in an exam region.
  • the detector head includes a scintillation crystal, which converts radiation events into light events, and an array of sensors, which are arranged to receive the light events from the scintillation crystal. Each of the sensors generates a respective sensor output value in response to each received light event.
  • a processor determines when each of the radiation events is detected. At least one of an initial digital position and an energy of each of the detected radiation events is determined in accordance with respective distances from a position of the detected event to the sensors.
  • One advantage of the present invention resides in its high linearity. Therefore, linearity and uniformity corrections are reduced. Another advantage resides in improved accuracy in event positioning, even in high count and pile-up situations. Another advantage is that local centroiding is continuous and seamless.
  • the invention may take form in various components and arrangements of components, and in various steps and arrangements of steps.
  • the drawings are only for purposes of illustrating a preferred embodiment and are not to be construed as limiting the invention.
  • FIGURE 1 is a diagrammatic illustration of a nuclear camera system according to the present invention.
  • FIGURE 2 illustrates an overview flowchart according to the present invention
  • FIGURE 3 illustrates a flow chart detailing the flowchart shown in FIGURE 2;
  • FIGURE 4 illustrates a partial array of sensors
  • FIGURE 5 illustrates a graphical depiction of an event in amplitude versus time
  • FIGURE 6 illustrates an optimal weighting graph according to the present invention in multiplier correction value versus distance
  • FIGURE 7 illustrates an actual fall-off curve used for obtaining the optimal weighting graph of FIGURE 6
  • FIGURE 8 illustrates a desired fall-off curve used for obtaining the optimal weighting graph of FIGURE 6;
  • FIGURE 9 illustrates a flowchart for generating a scaling curve according to the present invention.
  • FIGURE 10 illustrates various energy ratio curves according to the present invention
  • FIGURE 11 illustrates an energy scaling curve according to the present invention
  • FIGURE 12 illustrates a flow chart detailing the flowchart shown in FIGURE 3; and FIGURE 13 illustrates an embodiment of the present invention including a PET scanner.
  • a nuclear camera system 10 includes a plurality of detector heads (“detectors”) 12 mounted for movement around a subject 14 in an examination region 16.
  • Each of the detectors 12 includes a scintillation crystal 20 that converts a radiation event into a flash of light energy or scintillation.
  • An array of sensors 22, e.g. 59 sensors, is arranged to receive the light flashes from the scintillation crystal.
  • the sensors include photomultiplier tubes. However, other sensors are also contemplated.
  • Each of the sensors 22 generates a respective analog sensor output pulse (e.g., tube output pulse) in response to the received light flash. Furthermore, each of the sensors 22 is electrically connected to analog-to-digital converters 24. The analog-to-digital converters 24 convert the analog sensor output pulses to a series of digital sensor output values, as illustrated in FIGURE 5. As is discussed in more detail below, a processor 26 determines coordinates in two dimensions of the location and the energy of the scintillation event that occurred in the crystal.
  • a processor 26 determines coordinates in two dimensions of the location and the energy of the scintillation event that occurred in the crystal.
  • radiation is detected and converted into sensor output values (e.g., tube output values), which are transmitted to the processor 26 in a step A.
  • the processor 26 detects that an event occurs and identifies which sensor values (e.g., tube values) will be used for determining an approximate position and energy of the event.
  • the processor 26 calculates the approximate position and energy of the event and then determines a corrected position by applying a weighting algorithm.
  • finally, in a step D, an image (e.g., a volumetric image) is reconstructed.
  • each of the steps A-C includes a plurality of respective sub-steps, which are discussed below.
  • each of the sub-steps is identified with a reference numeral specifying both the step (see FIGURE 2) and the sub-step (see FIGURE 3).
  • each radiation event is detected within the array of sensors 22 in a sub-step A1.
  • the radiation produces gamma quanta that arise in the disintegration of radioisotopes.
  • the disintegration quanta strike the scintillation crystal, which preferably includes doped sodium iodide (NaI), causing a scintillation.
  • Light from the scintillation is distributed over a large number of the sensors 22.
  • the scintillation, which is created by a radiation event, is illustrated centered at an arbitrary position 28. It is to be understood that only a partial array of the sensors 22 is shown in FIGURE 4.
  • the energy of the absorbed gamma quantum is converted, or transformed, into the flash of light at the position 28 by the scintillation crystal in a sub-step A2.
  • the sensors 22 detect (receive) the scintillation light in a sub-step A3.
  • the sensors 22 produce the respective analog sensor output signals in a sub-step A4.
  • the relative strengths of the analog sensor output signals are proportional to the respective amounts of the scintillation light received by the sensors 22 in the sub-step A3.
  • a scintillation event 28 typically includes a rapidly changing portion 40, which reaches a peak 42.
  • the processor 26 detects that an event occurs (starts) in a sub-step B1 by analyzing the output values for each of the sensors. In the preferred embodiment, the processor 26 triggers (detects) that an event occurs when a sensor output value surpasses a trigger amplitude 44.
  • the processor determines the energy of the event 28.
  • the signal is sampled at a rate sufficient to capture an appropriate number of amplitude values.
  • a rate between 40 and 70 MHz provides a useful number of samples.
  • a post-pulse pile-up occurs when a subsequent event is detected during an integration period of the first event.
  • a pre-pulse pile-up occurs when the processor 26 indicates the presence of a previous event that occurred before the current event that is being integrated.
  • the processor 26 checks for a pre-pulse pile-up in a sub-step B2. In particular, the processor 26 checks whether the sensor outputs exceed a predetermined nominal or baseline value, which would exist in the absence of light. To avoid the undesirable effects of pulse pile-up, the integrated values of these sensors are zeroed (nulled).
  • the sensor output values are integrated, during an integration period, for each sensor in a sub-step B3.
  • Subsequent triggers are detected after a delay period (post-pulse pile-up) (e.g., 75 nanoseconds), which begins substantially simultaneously with the integration period, in a sub-step B4.
  • the integration values associated with the subsequent, post-pulse pile-up triggers are zeroed in a sub-step B5. It is assumed that all of the sensors 22 in the immediate vicinity of the first event 28 have already caused the trigger processor 26 to trigger within this delay period for the first event 28. If the baseline processor indicates the presence of a previous event (pre-pulse pile-up), the integrated value of the corresponding sensor is also zeroed (nulled).
  • the subsequent scintillation events will introduce some error. More specifically, the sensors which see the subsequent scintillation events sufficiently strongly to reach the triggering threshold are zeroed (nulled) . However, the peripheral sensors that only saw a small fraction of the light from the subsequent scintillation events still have their outputs incorporated into the summation, which determines the position or the energy of the first scintillation event 28. It is assumed, however, that the outputs from these peripheral sensors are small enough, when compared to the total summation, that the error they contribute is negligible.
  • a subset of nineteen (19) sensors, including the sensor 22 having the maximum integrated value along with a group (e.g., 18) of nearest sensors, is selected.
  • the processor determines the approximate position 28' and energy of the event 28 using the subset of nineteen (19) sensors within the array of sensors 22, preferably using weighted sums to determine a centroid (e.g., the Anger algorithm).
  • the intensity of light received by each sensor falls off with the corresponding distance d₁, d₂, d₃, ... between the sensor and the event.
  • the processor 26 determines weighting (correcting) values as a function of the respective distances from the point 28' to the centers of the sensors 22; in the nineteen (19) sensor example, a weighting value for each of the distances d₁, d₂, ..., d₁₉.
  • the weighting values are assigned from an optimal weighting graph 50 as shown in FIGURE 6.
  • the graph 50 is designed by empirical measurement with sensors having a diameter of about 75 mm. However, it is to be understood that analogous graphs can be generated for sensors having other diameters. It is expected that graphs used for sensors having other diameters will have similar shapes to the graph 50.
  • the actual fall-off, i.e., the amplitude of sensor output as a function of distance from the center of the sensor, is measured.
  • This actual fall-off is compared with the desired fall-off for a linear system.
  • the deviation in the fall-off curves results in the weighting function of FIGURE 6. That is, operating on the actual fall-off curve with the curve of FIGURE 6 results in the desired ideal fall-off curve.
  • the curve of FIGURE 6 is digitized and stored in a look-up table 52.
  • Each of the distances d₁, ..., d₁₉ is addressed to the abscissa of the graph 50 so that a corresponding weighting factor is retrieved from the ordinate. Therefore, in the nineteen (19) sensor example, nineteen (19) weighting factors are retrieved from the ordinate. In this manner, the response of sensors beyond the closest seven (7) is also used in the calculation and a subset including nineteen (19) sensors is selected.
  • the graph 50 is generated as a function of an actual fall-off curve 54 (input response curve) and a desired fall-off curve 56 (desired response curve) . More specifically, as will be discussed in more detail below, the graph 50 is obtained by dividing the desired response curve 56 by the input response curve 54. In other words, the weighting values are generated for each distance by dividing the desired response curve 56 by the input response curve 54 at each distance.
  • the desired response curve 56 has the characteristic of smoothly reaching a zero (0) value at a distance chosen to include the appropriate number of sensors in the centroid.
  • the desired curve 56 also has the characteristic of being substantially non-discontinuous and substantially linear.
  • the input response curve is measured or modeled for a given camera geometry, which includes crystal thickness, glass thickness, sensor diameter, and any other operating conditions.
  • each of the distances d₁ through d₁₉ is used for addressing the look-up table to determine corresponding weighting factors.
  • corrected sensor values are generated as a function of the weighting factors.
  • the look-up table may also be indexed as a function of time, temperature, count-rate, depth of interaction, and/or event energy.
  • the processor 26 sums the weighted values in a sub-step C4 to determine the corrected position 28 and energy. A decision is made in a sub-step C5 whether to iterate (repeat) the correction process. If it is decided to repeat the process of correcting the event position, control is passed back to the sub-step C2 for determining subsequent weighting values from the look-up table based on the corrected position 28. Otherwise, control is passed to the step D for reconstructing the image.
  • the camera illustrated in FIGURE 1 has a SPECT mode and a PET mode.
  • In the SPECT mode, the heads have collimators which limit receipt of radiation to preselected directions, i.e., along known rays.
  • the determined location on the crystal 20 at which radiation is detected and the angular position of the head define the ray along which each radiation event occurred.
  • These ray trajectories and head angular position from an angular position resolver 64 are conveyed to a reconstruction processor 60 which back projects or otherwise reconstructs the rays into a volumetric image representation in an image memory 62.
  • In a PET mode, the collimators are removed. Thus, the location of a single scintillation event does not define a ray.
  • the radioisotopes used in PET scanning undergo an annihilation event in which two photons of radiation are emitted simultaneously in diametrically opposed directions, i.e., 180° apart.
  • a coincidence detector 66 detects when scintillations on two heads occur simultaneously. The locations of the two simultaneous scintillations define the end points of a ray through the annihilation event.
  • a ray or trajectory calculator 68 calculates the corresponding ray through the subject from each pair of simultaneously received scintillation events. The ray trajectories from the ray calculator 68 are conveyed to the reconstruction processor for reconstruction into a volumetric image representation.
  • a video processor 70 processes the image representation data for display on a monitor 72.
  • the processor 26 also determines an energy of the event 28 by integrating, or summing, the corrected sensor output values during an integration period.
  • the integration period preferably lasts about 250 nanoseconds, although the integration period may vary in different scintillation crystals, radiation energies, or software applications. That is, once all of the integrated sensor outputs of FIGURE 5 corresponding to the event are scaled by the correction curve 50, they are summed to determine the energy of the event. Stated in mathematical terms, the energy E of the event 28 and the position x of the event 28 are calculated as E = Σᵢ wᵢᴱ Sᵢ and x = (Σᵢ wᵢˣ Sᵢ xᵢ) / (Σᵢ wᵢˣ Sᵢ)
  • xᵢ represents the respective sensor locations
  • Sᵢ represents the respective sensor output values
  • wᵢᴱ represents the energy weighting values
  • wᵢˣ represents the distance weighting values
  • wᵢᴱ and wᵢˣ are functions of the respective distance |xᵢ - x₀| between the sensor location xᵢ and the initially determined position x₀ of the event 28
  • the initial position x₀ is determined as a centroid of the event 28. Since a detector normally consists of photomultiplier sensors arranged in a two-dimensional array, calculation of the distance usually involves computing the difference between the sensor location xᵢ and x₀ for each of a plurality of coordinates. The differences are squared, summed, and the square root is taken to find |xᵢ - x₀|. In order to avoid the complexities of taking the square root, a table look-up may be used.
  • a two-dimensional fall-off correction curve table and/or two-dimensional pre-correction table can be indexed by the absolute values of the differences between the sensor location xᵢ and x₀ in order to save the step of calculating the distance directly.
  • the weighting values w are optionally pre-corrected as a function of the energy being imaged.
  • a representative fall-off curve for one energy level E1 is generated in a step F1.
  • the energy level E1 is a low energy within a range including 75 keV and 511 keV (e.g., about 75 keV).
  • the curve 54 represents the actual fall-off curve for the energy E1.
  • a fall-off curve (not shown) for another energy E2, E3, E4 is acquired in a step F2.
  • the fall-off curve (including, for example, the fall-off curve 54 for the energy level E1) is normalized to be within a range including, for example, zero (0) and 100 in a step F3.
  • the fall-off curve for one of the energies E2, E3, E4 is divided by the fall-off curve 54 for the first energy E1 in a step F4, thereby generating one of a plurality of energy ratio curves (pre-correction curves) 80, 82, 84 (see FIGURE 10).
  • the energy ratio curves 80, 82, 84 represent weighting that must be applied as a function of distance to a sensor's output when a respective one of the energies E2, E3, E4 is being imaged.
  • the energy ratio curve 80 represents E1/E2
  • the energy ratio curve 82 represents E1/E3
  • the energy ratio curve 84 represents E1/E4. Although only four (4) energy levels are discussed, it is to be understood that any number of energy levels may be generated. It is noted that each of the energy ratio curves 80, 82, 84 may be made smoother by collecting more data and/or applying commonly known regression or curve fits.
  • An energy scaling curve 86 is generated by determining scaling values between the energy ratio curve 80, which represents E1/E2 (e.g., the highest energy) and each of the energy ratio curves 82, 84, which represent E1/E3 and E1/E4, respectively.
  • the energy scaling curve 86, which yields an energy scaling factor as a function of energy, is produced in the step F6. It is to be understood that standard methods are used for fitting a curve to the scaling values between the various energy ratio curves. As will be discussed in more detail below, a scaling value svᵢ may be obtained from the energy scaling curve 86 as a function of energy. In the current example, it is assumed that the optimal weighting graph 50 (see FIGURE 6) is calibrated for the energy E1.
  • the optimal weighting graph 50 may optionally be "pre-corrected" as a function of an energy ratio curve corresponding to the energy being imaged and a distance of the sensor. More specifically, with reference to FIGURES 3 and 10-12, a distance between a sensor center and the event 28 is determined in a sub-step C2A. Then, in a sub-step C2B, an energy pre-correction factor pvᵢ is optionally obtained from the graph 80 as a function of the distance determined in the sub-step C2A. Importantly, an appropriate one of the energy ratio curves 80, 82, 84 is selected as a function of the energy being imaged.
  • a scaling value svᵢ is optionally obtained from the energy scaling curve 86 in a sub-step C2C.
  • a fall-off correction value fcvᵢ is obtained from the optimal weighting graph 50 as a function of distance in a sub-step C2D.
  • the weighting factor w and corrected sensor output value S are used in the above equations for energy E and position x.
  • the fall-off curves for the energies E1, E2, E3, E4 are generated by flooding an open detector with a radiation source of a known energy. For each event that interacts in the crystal of the detector, an estimate of the event position is determined. Then, the distance from the event to each of the sensor centers is calculated. In order to have a statistically significant number of counts for each distance, multiple events are produced. A histogram of each sensor's output is created as a function of distance. It is to be understood that the resolution of the distances may be set according to a required application (e.g., 1/4 of an intrinsic resolution of a gamma camera); a sketch of this flood-based procedure appears after this list.
  • the histograms from different sensor outputs may be combined to generate a composite histogram for the entire detector or certain areas that can naturally be grouped together.
  • the mean value of each histogram is then computed to generate the fall-off curve as a function of distance.
  • the curve can be normalized by dividing each value by the maximum fall-off value (e.g., the value at the distance zero (0)).
  • FIGURE 13 illustrates a second embodiment of the present invention including a single photon emission computed tomography ("SPECT") scanner.
  • SPECT single photon emission computed tomography
  • like components are designated by like numerals with a primed (') suffix and new components are designated by new numerals.
  • a SPECT scanner 100 includes three (3) detectors 12' mounted for movement around a subject 14' in an examination region 16'. The subject is injected with a radioisotope. Each of the detectors 12' includes a scintillation crystal 20' for converting radiation events from the injected isotope into a flash of light energy or scintillation.
  • a radiation source 102 produces a fan of transmission radiation of a different energy than the injected radiation.
  • Collimators 104 on the detectors limit and define the paths or rays along which each detector can receive emission and transmission radiation. The location of the scintillation and the position of the receiving detector uniquely determine the ray.
  • An array of sensors 22', e.g. 59 sensors, is arranged to receive the light flashes from the scintillation crystal 20'.
  • Each of the sensors 22' generates a respective analog sensor output pulse (FIGURE 5) in response to the received light flash.
  • each of the sensors 22' is electrically connected to at least one of a plurality of analog-to-digital converters 24'.
  • the analog-to-digital converters 24' convert the analog sensor output pulses to respective series of three digital sensor output values.
  • a processor 26' determines the energy and the location in two dimensions of each scintillation on the face of the detector, hence the ray along which the radiation originated. Additionally, the curves of FIGURES 6, 10, and optionally 11 are digitized and stored in respective look-up tables 52'.
  • a processor 60' reconstructs an image representation from the emission data.
  • the transmission data is used to correct the emission data for an improved image.
  • the image representation is stored in an image memory 62'.
  • a video processor 70' processes the image representation data for display on a monitor 72'.
  • the three heads can be used without collimators in a PET mode.
  • the heads are positioned to provide uniform coverage of the region of interest during annihilation events.
  • a coincidence detector 66' determines concurrent events and a ray calculator 68' calculates the trajectory between each pair of coincident events.
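
As a rough illustration of the flood-based fall-off measurement referred to above (the sketch mentioned in the bullet on flooding an open detector): events from a flood acquisition are binned by sensor-to-event distance, the mean sensor output per bin is taken, and the result is normalized to its zero-distance value. The array shapes, bin width, and function name below are assumptions for illustration, not taken from the patent.

    import numpy as np

    def falloff_from_flood(event_xy, sensor_xy, sensor_outputs, bin_mm=1.0, max_mm=150.0):
        """event_xy: (n_events, 2) estimated event positions from a flood acquisition.
        sensor_xy: (n_sensors, 2) sensor centers.
        sensor_outputs: (n_events, n_sensors) integrated outputs for each event.
        Returns (bin centers, mean output per distance bin, normalized to zero distance)."""
        edges = np.arange(0.0, max_mm + bin_mm, bin_mm)
        sums = np.zeros(len(edges) - 1)
        counts = np.zeros(len(edges) - 1)
        sensor_xy = np.asarray(sensor_xy, dtype=float)
        for xy, outputs in zip(np.asarray(event_xy, dtype=float),
                               np.asarray(sensor_outputs, dtype=float)):
            d = np.linalg.norm(sensor_xy - xy, axis=1)     # distance of each sensor to this event
            idx = np.digitize(d, edges) - 1                # distance bin for each sensor
            ok = (idx >= 0) & (idx < len(sums))
            np.add.at(sums, idx[ok], outputs[ok])          # histogram of outputs per distance bin
            np.add.at(counts, idx[ok], 1)
        mean = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
        if mean[0] > 0:
            mean = mean / mean[0]                          # normalize to the zero-distance value
        return edges[:-1] + bin_mm / 2, mean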

Abstract

A nuclear camera system includes a detector (12) for receiving radiation from a subject (14) in an exam region (16). The detector (12) includes a scintillation crystal (20) that converts radiation events into flashes of light. An array of sensors (22) is arranged to receive the light flashes from the scintillation crystal (20). Each of the photomultiplier sensors (22) generates a respective sensor output value in response to each received light flash. A processor (26) determines when each of the radiation events is detected. At least one of an initial position and an energy of each of the detected radiation events is determined in accordance with respective distances (d₁, ..., d₁₉) from a position of the detected event to the sensors (22). An image representation is generated from the initial positions and energies.

Description

EVENT LOCALIZATION AND FALL-OFF CORRECTION BY DISTANCE-DEPENDENT WEIGHTING
Background of the Invention
The present invention relates to the art of nuclear medicine and diagnostic imaging. It finds particular application in localizing a scintillation event in a gamma camera having a number of photomultipliers arranged over a camera surface. It is to be appreciated that the present invention may be used in conjunction with positron emission tomography ("PET"), single photon emission computed tomography ("SPECT"), whole body nuclear scans, transmission imaging, other diagnostic modes and/or other like applications. Those skilled in the art will also appreciate applicability of the present invention to other applications where a plurality of pulses tend to overlap, or "pile-up" and obscure one another.
Diagnostic nuclear imaging is used to study a radionuclide distribution in a subject. Typically, one or more radiopharmaceuticals or radioisotopes are injected into a subject. The radiopharmaceuticals are commonly injected into the subject's bloodstream for imaging the circulatory system or for imaging specific organs that absorb the injected radiopharmaceutical. A gamma or scintillation camera detector head is placed adjacent to a surface of the subject to monitor and record emitted radiation. Each detector typically includes an array of photomultiplier tubes facing a large scintillation crystal. Each received radiation event generates a corresponding flash of light (scintillation) that is seen by the closest photomultiplier tubes. Each photomultiplier tube that sees an event generates a corresponding analog pulse, whose amplitude generally falls off with the distance of the tube from the flash.
A fundamental function of a scintillation camera is event estimation, which is the determination of the energy and location of an interacting gamma ray or other radiation based on the detected electronic signals. A conventional method for event positioning is known as the Anger method, which sums and weights the signals seen by the tubes after the occurrence of an event. The Anger method for event positioning is based on a simple first moment calculation. More specifically, the energy is typically measured as the sum of all the photomultiplier tube signals, and the position is typically measured as the "center of mass" of the photomultiplier tube signals.
Several methods have been used for implementing the center of mass calculation. With fully analog cameras, all such calculations (e.g., summing, weighting, dividing) are done using analog circuits. With hybrid analog/digital cameras, the summing and weighting are done using analog circuits, but the summed values are digitized and the final calculation of position is done digitally. With "fully digital" cameras, the tube signals are digitized individually. In any event, because the fall-off curve of the photomultipliers is not linear as assumed by the Anger method, the image created has non-linearity errors.
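As a rough illustration of the Anger first-moment estimate described above, the following sketch (illustrative only; the function and variable names are not from the patent) sums the tube signals for the energy and takes their signal-weighted center of mass for the position.

    import numpy as np

    def anger_estimate(tube_xy, tube_signals):
        """Classic Anger estimate: energy = sum of all tube signals,
        position = signal-weighted center of mass of the tube centers."""
        s = np.asarray(tube_signals, dtype=float)           # one integrated value per tube
        xy = np.asarray(tube_xy, dtype=float)               # (n_tubes, 2) tube center coordinates
        energy = s.sum()
        position = (s[:, None] * xy).sum(axis=0) / energy   # first moment ("center of mass")
        return position, energy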
One important consideration is the localization of the event estimation. The scintillation light pulse is mostly contained within a small subset of the tubes on a detector. For example, over 90% of the total signal is typically detected in seven (7) out of a total number of tubes typically on the order of 50 or 60. However, imaging based only on the seven (7) closest tubes, known as clustering, has poor resolution and causes uniformity artifacts. Furthermore, because the photomultiplier tubes have non-linear outputs, the scintillation events are artificially shifted toward the center of the nearest photomultiplier tube.
For a given detector geometry, the fall-off curve varies with the depth at which a gamma photon interacts in the crystal. Different energy photons have varying interaction-depth probabilities that are more pronounced in thicker crystals, which are typically used in combination with PET/SPECT cameras.
Therefore, separate linearity or flood correction tables are created and used for each energy in order to correct for the uniformity artifact. Fall-off curves are acquired using a labor intensive method of moving a point source a small amount (e.g., 2 mm) roughly 30-40 times for each tube. The individual tube's output is acquired at each location, the mean value of the tube's output is found, and a curve of tube output versus distance from the location of the point source is generated.
A disadvantage of generating a fall-off curve using a point source is the large amount of time required to move the source position. This method is also prone to errors in positioning the source accurately on the detector. It is also usually only done in one or two directions. Therefore, the assumption is made that the fall-off curve is exactly symmetric. Regenerating the fall-off curve for a different energy requires that the process be repeated. Likewise, generating the fall-off curve for a different tube requires that the process be repeated. Therefore, the assumption is usually made that the fall-off curve is invariant across different detectors or photomultiplier tubes.
Generating the linearity correction tables typically involves using a lead mask that contains many small holes to restrict the incident location of radiation on the crystal surface. The holes represent the true location of the incident photons that interact in the detector crystal. This information is used to generate a table that consists of x and y deltas that, when added to the x and y estimates, respectively, yield a corrected position estimate that more accurately reflects the true position. A disadvantage is that new tables must be generated for each energy that is to be imaged, thereby increasing the calibration time. Another disadvantage is that the calibration mask has a limited number of holes, since each must be resolved individually, thereby limiting the accuracy of the correction. It is also increasingly expensive and difficult to calibrate for higher energy photons, since the thickness of the lead mask must increase in order to have sufficient absorption in non-hole areas.
Another prior art method uses separate flood uniformity correction tables for each energy. A disadvantage is that new tables must be generated for each energy that is to be imaged, which increases calibration time. Flood correction has the disadvantage of creating noise in the image, since the method is based on either adding or removing counts unevenly throughout the pixel matrix. This method is also sensitive to drift in either the photomultiplier tubes or electronics. Another prior art method reduces the output from the closest tube. For example, an opaque dot is sometimes painted over the center of each photomultiplier tube. The sensitivity can also be reduced electronically. Unfortunately, the closest photomultiplier tube typically has the best noise statistics. Reducing its sensitivity to the event causes a resolution loss.
Similarly, excluding the outlying tubes reduces the noise in the determined values of energy and position. The most common way of excluding signals from outlying tubes includes imposing a threshold, such that tube signals below a set value are either ignored in the calculation or are adjusted by a threshold value. This method works reasonably well in excluding excess noise. However, the method fails if stray signals exist above the threshold value. Stray signals may exist at high-counting rates, when events occur nearly simultaneously in the crystal. When two events occur substantially simultaneously, their "center-of-mass" is midway between the two—where no event actually occurred. Nearly simultaneously occurring events may result in pulse-pile-up in the energy spectrum and mispositioning of events. This behavior is especially detrimental in coincidence imaging, where high-count rates are necessary.
Thus, it is desirable to improve localization in event estimation. With a fully digital detector, both the intensity and the location of each tube signal are known. It is, therefore, possible to calculate the energy and position based primarily on the tube signals close to an individual event. One current method for event localization is seven (7) tube clustering in which a cluster of seven (7) tubes is selected for each event. These tubes include the tube with maximum amplitude, along with that tube's six (6) closest neighbors. This method is an effective method for limiting the spatial extent of the calculation. However, the main drawback of this method is the resulting discontinuity. Discontinuity arises when the detected positions for events from a uniform flood source form an array of zones around each possible cluster. Elaborate correction schemes (see e.g., Geagan, Chase, and Muehllehner, Nucl. Instr. Meth. Phys. Res. A 353, 379-383 (1994)) are needed to "stitch" together these overlapping zones to form a single, continuous image. However, this correction is sensitive to electronic shifts, which often arise in high-count situations, causing seam artifacts in the camera response.
The present invention provides a new and improved apparatus and method which overcomes the above-referenced problems and others.
Summary of the Invention
In accordance with one aspect of the present invention, a method for generating an image representation from detected radiation events is provided. Radiation from a subject in an examination region is converted into light energy events. The light energy events are received with an array of sensors. Respective sensor output values are generated in response to each received light event. For each of the light events, at least one of an initial position and energy and distances from the initial position to each sensor which received the light event is determined. For each energy event, at least one of the initial position and the energy is corrected in accordance with the determined distances. An image representation is generated from the corrected positions. A nuclear camera system includes a detector for receiving radiation from a subject in an exam region. The detector head includes a scintillation crystal, which converts radiation events into light events, and an array of sensors, which are arranged to receive the light events from the scintillation crystal. Each of the sensors generates a respective sensor output value in response to each received light event. A processor determines when each of the radiation events is detected. At least one of an initial digital position and an energy of each of the detected radiation events is determined in accordance with respective distances from a position of the detected event to the sensors.
One advantage of the present invention resides in its high linearity. Therefore, linearity and uniformity corrections are reduced. Another advantage resides in improved accuracy in event positioning, even in high count and pile-up situations. Another advantage is that local centroiding is continuous and seamless.
Another advantage resides in more accurate estimation of events. Still further advantages of the present invention will become apparent to those of ordinary skill in the art upon reading and understanding the following detailed description of the preferred embodiments.
Brief Description of the Drawings
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating a preferred embodiment and are not to be construed as limiting the invention.
FIGURE 1 is a diagrammatic illustration of a nuclear camera system according to the present invention;
FIGURE 2 illustrates an overview flowchart according to the present invention; FIGURE 3 illustrates a flow chart detailing the flowchart shown in FIGURE 2;
FIGURE 4 illustrates a partial array of sensors;
FIGURE 5 illustrates a graphical depiction of an event in amplitude versus time; FIGURE 6 illustrates an optimal weighting graph according to the present invention in multiplier correction value versus distance;
FIGURE 7 illustrates an actual fall-off curve used for obtaining the optimal weighting graph of FIGURE 6; FIGURE 8 illustrates a desired fall-off curve used for obtaining the optimal weighting graph of FIGURE 6;
FIGURE 9 illustrates a flowchart for generating a scaling curve according to the present invention;
FIGURE 10 illustrates various energy ratio curves according to the present invention;
FIGURE 11 illustrates an energy scaling curve according to the present invention;
FIGURE 12 illustrates a flow chart detailing the flowchart shown in FIGURE 3; and FIGURE 13 illustrates an embodiment of the present invention including a PET scanner.
Detailed Description of the Preferred Embodiments
With reference to FIGURE 1, a nuclear camera system 10 includes a plurality of detector heads ("detectors") 12 mounted for movement around a subject 14 in an examination region 16. Each of the detectors 12 includes a scintillation crystal 20 that converts a radiation event into a flash of light energy or scintillation. An array of sensors 22, e.g. 59 sensors, is arranged to receive the light flashes from the scintillation crystal. In the preferred embodiment, the sensors include photomultiplier tubes. However, other sensors are also contemplated.
Each of the sensors 22 generates a respective analog sensor output pulse (e.g., tube output pulse) in response to the received light flash. Furthermore, each of the sensors 22 is electrically connected to analog-to-digital converters 24. The analog-to-digital converters 24 convert the analog sensor output pulses to a series of digital sensor output values, as illustrated in FIGURE 5. As is discussed in more detail below, a processor 26 determines coordinates in two dimensions of the location and the energy of the scintillation event that occurred in the crystal.
With reference to FIGURES 1 and 2, radiation is detected and converted into sensor output values (e.g., tube output values), which are transmitted to the processor 26 in a step A. Then, in a step B, the processor 26 detects that an event occurs and identifies which sensor values (e.g., tube values) will be used for determining an approximate position and energy of the event. In a step C, the processor 26 calculates the approximate position and energy of the event and then determines a corrected position by applying a weighting algorithm. Finally, in a step D, an image (e.g., volumetric image) is reconstructed.
With reference to FIGURES 2 and 3, each of the steps A-C includes a plurality of respective sub-steps, which are discussed below. For ease of explanation, each of the sub-steps is identified with a reference numeral specifying both the step (see FIGURE 2) and the sub-step (see FIGURE 3).
With reference to FIGURES 1-3, each radiation event is detected within the array of sensors 22 in a sub-step A1. The radiation produces gamma quanta that arise in the disintegration of radioisotopes. The disintegration quanta strike the scintillation crystal, which preferably includes doped sodium iodide (NaI), causing a scintillation. Light from the scintillation is distributed over a large number of the sensors 22.
As illustrated in FIGURE 4, the scintillation, which is created by a radiation event, is illustrated centered at an arbitrary position 28. It is to be understood that only a partial array of the sensors 22 is shown in FIGURE 4. With reference to FIGURES 1, 3, and 4, the energy of the absorbed gamma quantum is converted, or transformed, into the flash of light at the position 28 by the scintillation crystal in a sub-step A2. The sensors 22 detect (receive) the scintillation light in a sub-step A3. Then, the sensors 22 produce the respective analog sensor output signals in a sub-step A4. The relative strengths of the analog sensor output signals are proportional to the respective amounts of the scintillation light received by the sensors 22 in the sub-step A3. The analog-to-digital converters 24 convert the analog sensor output signals to respective series of digital sensor output values in a sub-step A5. The digital sensor output values are then transmitted to the processor 26 in a sub-step A6.
Referring now to FIGURES 1 and 3-5, a scintillation event 28 typically includes a rapidly changing portion 40, which reaches a peak 42. The processor 26 detects that an event occurs (starts) in a sub-step B1 by analyzing the output values for each of the sensors. In the preferred embodiment, the processor 26 triggers (detects) that an event occurs when a sensor output value surpasses a trigger amplitude 44.
For the processor to determine the energy of the event 28, the area underneath the curve is determined. The signal is sampled at a rate sufficient to capture an appropriate number of amplitude values; a rate between 40 and 70 MHz provides a useful number of samples. Artisans will appreciate, with further reference to FIGURE 5, that the integration or combination of sample data points is relatively straightforward for a single scintillation event. The integration becomes problematic when several pulses overlap, a condition known as pile-up.
As discussed above, a post-pulse pile-up occurs when a subsequent event is detected during an integration period of the first event. A pre-pulse pile-up occurs when the processor 26 indicates the presence of a previous event that occurred before the current event that is being integrated. The processor 26 checks for a pre-pulse pile-up in a sub-step B2. In particular, the processor 26 checks whether the sensor outputs exceed a predetermined nominal or baseline value, which would exist in the absence of light. To avoid the undesirable effects of pulse pile-up, the integrated values of these sensors are zeroed (nulled).
The sensor output values are integrated, during an integration period, for each sensor in a sub-step B3. Subsequent triggers are detected after a delay period (post-pulse pile-up) (e.g., 75 nanoseconds), which begins substantially simultaneously with the integration period, in a sub-step B4. The integration values associated with the subsequent, post-pulse pile-up triggers are zeroed in a sub-step B5. It is assumed that all of the sensors 22 in the immediate vicinity of the first event 28 have already caused the trigger processor 26 to trigger within this delay period for the first event 28. If the baseline processor indicates the presence of a previous event (pre-pulse pile-up), the integrated value of the corresponding sensor is also zeroed (nulled).
It is noted that the subsequent scintillation events will introduce some error. More specifically, the sensors which see the subsequent scintillation events sufficiently strongly to reach the triggering threshold are zeroed (nulled). However, the peripheral sensors that only saw a small fraction of the light from the subsequent scintillation events still have their outputs incorporated into the summation, which determines the position or the energy of the first scintillation event 28. It is assumed, however, that the outputs from these peripheral sensors are small enough, when compared to the total summation, that the error they contribute is negligible.
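The pile-up handling of sub-steps B2 through B5 can be pictured with a short sketch. The sampling rate, array layout, and function name below are assumptions for illustration; only the 250 ns integration period and 75 ns delay period come from the description.

    import numpy as np

    SAMPLE_PERIOD_NS = 20      # assumed digitization period (~50 MHz)
    INTEGRATION_NS = 250       # integration period from the description
    DELAY_NS = 75              # post-pulse pile-up delay period from the description

    def integrate_with_pileup_rejection(samples, trigger_amplitude, baseline):
        """samples: (n_sensors, n_samples) digitized outputs starting at the event trigger.
        Integrates each sensor over the integration window and nulls sensors affected
        by pre-pulse pile-up (elevated baseline) or post-pulse pile-up (a later trigger)."""
        samples = np.asarray(samples, dtype=float)
        n_int = INTEGRATION_NS // SAMPLE_PERIOD_NS
        n_delay = DELAY_NS // SAMPLE_PERIOD_NS
        integrals = samples[:, :n_int].sum(axis=1)

        # Pre-pulse pile-up: output above baseline at the start of the window indicates
        # a previous, still-decaying pulse; zero (null) that sensor's integral.
        integrals[samples[:, 0] > baseline] = 0.0

        # Post-pulse pile-up: a fresh threshold crossing after the delay period but within
        # the integration window indicates a second event; zero those sensors as well.
        rising = (samples[:, 1:] > trigger_amplitude) & (samples[:, :-1] <= trigger_amplitude)
        integrals[rising[:, n_delay:n_int].any(axis=1)] = 0.0
        return integrals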
In a sub-step B6, a subset of nineteen (19) sensors, including the sensor 22 having the maximum integrated value along with a group (e.g., 18) of nearest sensors, is selected. Then, in a sub-step C1, the processor determines the approximate position 28' and energy of the event 28 using the subset of nineteen (19) sensors within the array of sensors 22, preferably using weighted sums to determine a centroid (e.g., the Anger algorithm). Looking to the nineteen (19) sensors 22₁, 22₂, 22₃, ..., 22₁₉ closest to the event, it is assumed that the intensity of light received by each sensor falls off linearly with the corresponding distance d₁, d₂, d₃, ..., d₁₉ between the sensor and the event. This assumed linear relationship places the event at the point 28' in FIGURE 4. If the sensor array were linear, point 28' would be an accurate estimate of the actual location 28 at which the event occurred. Due to inherent non-linearities, the point 28' is typically shifted from the actual event 28.
Then, in a sub-step C2, the processor 26 determines weighting (correcting) values as a function of the respective distances from the point 28' to the centers of the sensors 22; in the nineteen (19) sensor example, a weighting value for each of the distances d₁, d₂, ..., d₁₉. In the preferred embodiment, the weighting values are assigned from an optimal weighting graph 50 as shown in FIGURE 6. With reference to FIGURES 4-6, the graph 50 is designed by empirical measurement with sensors having a diameter of about 75 mm. However, it is to be understood that analogous graphs can be generated for sensors having other diameters, and such graphs are expected to have shapes similar to the graph 50.
More specifically, the actual fall-off, i.e., the amplitude of sensor output as a function of distance from the center of the sensor, is measured. This actual fall-off is compared with the desired fall-off for a linear system. The deviation between the fall-off curves results in the weighting function of FIGURE 6. That is, operating on the actual fall-off curve with the curve of FIGURE 6 results in the desired ideal fall-off curve. Preferably, the curve of FIGURE 6 is digitized and stored in a look-up table 52. Each of the distances d₁, ..., d₁₉ is addressed to the abscissa of the graph 50 so that a corresponding weighting factor is retrieved from the ordinate. Therefore, in the nineteen (19) sensor example, nineteen (19) weighting factors are retrieved from the ordinate. In this manner, the response of sensors beyond the closest seven (7) is also used in the calculation, and a subset including nineteen (19) sensors is selected.
With reference to FIGURES 6-8, the graph 50 is generated as a function of an actual fall-off curve 54 (input response curve) and a desired fall-off curve 56 (desired response curve). More specifically, as will be discussed in more detail below, the graph 50 is obtained by dividing the desired response curve 56 by the input response curve 54. In other words, the weighting values are generated for each distance by dividing the desired response curve 56 by the input response curve 54 at each distance. The desired response curve 56 has the characteristic of smoothly reaching a zero (0) value at a distance chosen to include the appropriate number of sensors in the centroid. The desired curve 56 also has the characteristic of being substantially non-discontinuous and substantially linear. The input response curve is measured or modeled for a given camera geometry, which includes crystal thickness, glass thickness, sensor diameter, and any other operating conditions.
With reference again to FIGURES 1 and 3-5, in the sub-step C2 each of the distances d₁ through d₁₉, as well as the distances of sensors further out, are used for addressing the look-up table to determine corresponding weighting factors. In a sub-step C3, corrected sensor values are generated as a function of the weighting factors. It is to be understood that in other embodiments, the look-up table may also be indexed as a function of time, temperature, count-rate, depth of interaction, and/or event energy.
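A minimal sketch of how such a look-up table might be built and addressed, assuming the actual and desired fall-off curves are available as arrays sampled on a common distance grid (names and the interpolation scheme are illustrative, not taken from the patent):

    import numpy as np

    def build_weight_lut(actual_falloff, desired_falloff):
        """Weighting value at each tabulated distance = desired fall-off (FIGURE 8)
        divided by the measured fall-off (FIGURE 7), giving the curve of FIGURE 6."""
        actual = np.asarray(actual_falloff, dtype=float)
        desired = np.asarray(desired_falloff, dtype=float)
        return desired / np.maximum(actual, 1e-12)   # guard against division by zero

    def weight_for_distance(d, lut_distances, lut_weights):
        """Address the look-up table with a sensor-to-event distance; linear
        interpolation stands in for whatever indexing the hardware table uses."""
        return np.interp(d, lut_distances, lut_weights)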
The processor 26 sums the weighted values in a sub-step C4 to determine the corrected position 28 and energy. A decision is made in a sub-step C5 whether to iterate (repeat) the correction process. If it is decided to repeat the process of correcting the event position, control is passed back to the sub-step C2 for determining subsequent weighting values from the look-up table based on the corrected position 28. Otherwise, control is passed to the step D for reconstructing the image.
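Sub-steps C2 through C5 amount to an iterative re-centroiding loop. Below is a self-contained sketch under the assumption of a fixed number of iterations (the stopping rule is left open in the text); the sensor coordinates, look-up-table arrays, and function name are illustrative assumptions.

    import numpy as np

    def localize_event(sensor_xy, sensor_values, lut_distances, lut_weights, n_iter=2):
        """Start from the plain Anger centroid, then repeatedly re-weight the sensor
        outputs by the distance-dependent correction and recompute the centroid."""
        xy = np.asarray(sensor_xy, dtype=float)
        s = np.asarray(sensor_values, dtype=float)
        position = (s[:, None] * xy).sum(axis=0) / s.sum()       # initial estimate 28'
        for _ in range(n_iter):
            d = np.linalg.norm(xy - position, axis=1)            # distance to each sensor center
            w = np.interp(d, lut_distances, lut_weights)         # weighting factor from the LUT
            ws = w * s                                           # corrected sensor values
            energy = ws.sum()
            position = (ws[:, None] * xy).sum(axis=0) / energy   # re-centroid with corrected values
        return position, energy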
The camera illustrated in FIGURE 1 has a SPECT mode and a PET mode. In the SPECT mode, the heads have collimators which limit receipt of radiation to preselected directions, i.e., along known rays. Thus, the determined location on the crystal 20 at which radiation is detected and the angular position of the head define the ray along which each radiation event occurred. These ray trajectories and head angular position from an angular position resolver 64 are conveyed to a reconstruction processor 60 which back projects or otherwise reconstructs the rays into a volumetric image representation in an image memory 62.
In a PET mode, the collimators are removed. Thus, the location of a single scintillation event does not define a ray. However, the radioisotopes used in PET scanning undergo an annihilation event in which two photons of radiation are emitted simultaneously in diametrically opposed directions, i.e., 180° apart. A coincidence detector 66 detects when scintillations on two heads occur simultaneously. The locations of the two simultaneous scintillations define the end points of a ray through the annihilation event. A ray or trajectory calculator 68 calculates the corresponding ray through the subject from each pair of simultaneously received scintillation events. The ray trajectories from the ray calculator 68 are conveyed to the reconstruction processor for reconstruction into a volumetric image representation.
A video processor 70 processes the image representation data for display on a monitor 72.
The processor 26 also determines an energy of the event 28 by integrating, or summing, the corrected sensor output values during an integration period. The integration period preferably lasts about 250 nanoseconds, although it may vary with different scintillation crystals, radiation energies, or software applications. That is, once all of the integrated sensor outputs of FIGURE 5 corresponding to the event are scaled by the correction curve 50, they are summed to determine the energy of the event. Stated in mathematical terms, the energy E of the event 28 and the position x of the event 28 are calculated as:
E = \sum_i w_i^E S_i , and

x = \frac{\sum_i w_i^x S_i x_i}{\sum_i w_i^x S_i} ,

where x_i represents the respective sensor locations, S_i represents the respective sensor output values, w_i^E represents energy weighting values, and w_i^x represents distance weighting values.
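The two equations above translate directly into a few lines of code. The following sketch assumes the sensor locations are given as an (N, 2) array and that the energy and distance weights have already been looked up; all names are hypothetical.

```python
import numpy as np

def event_energy_and_position(S, x, w_E, w_x):
    """E = sum_i w_i^E S_i ; x = sum_i w_i^x S_i x_i / sum_i w_i^x S_i."""
    S, w_E, w_x = (np.asarray(a, dtype=float) for a in (S, w_E, w_x))
    x = np.asarray(x, dtype=float)                       # sensor centers, shape (N, 2)
    E = float(np.sum(w_E * S))                           # energy-weighted sum of outputs
    pos = np.sum((w_x * S)[:, None] * x, axis=0) / np.sum(w_x * S)
    return E, pos                                        # pos is the distance-weighted centroid
```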
In one embodiment, w_i^E and w_i^x are a function of the respective distance |x_i - x_0| between the sensor location x_i and the initially determined position x_0 28' of the event 28 (see FIGURE 6). As discussed above, the initial position x_0 is determined as a centroid of the event 28. Since a detector normally consists of photomultiplier sensors arranged in a two-dimensional array, calculation of the distance usually involves computing the difference between the sensor location x_i and x_0 for each of a plurality of coordinates. The differences are squared, summed, and the square root is taken to find |x_i - x_0|. In order to avoid the complexities of taking the square root, a table look-up may be used. Alternatively, a two-dimensional fall-off correction curve table and/or a two-dimensional pre-correction table can be indexed by the absolute values of the differences between the sensor location x_i and x_0 in order to save the step of calculating the distance directly. As will be discussed in more detail below, the weighting values are optionally pre-corrected as a function of the energy being imaged.
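To illustrate the square-root-free alternative described above, the following sketch pre-computes a two-dimensional table indexed by |Δx| and |Δy| so that the per-event distance calculation reduces to two absolute values and a table lookup; the table pitch, extent, and names are assumptions.

```python
import numpy as np

def build_2d_weight_table(distance_mm, weights, pitch_mm=1.0, max_mm=200.0):
    """Pre-compute weights on a grid of |dx|, |dy| offsets so no square root
    is needed at event time."""
    axis = np.arange(0.0, max_mm, pitch_mm)
    dx, dy = np.meshgrid(axis, axis, indexing="ij")
    r = np.hypot(dx, dy)                        # distances computed once, offline
    return np.interp(r, distance_mm, weights)   # table[|dx| bin, |dy| bin] -> weight

def lookup_2d(table, dx, dy, pitch_mm=1.0):
    """Index the pre-computed table by the absolute coordinate differences."""
    i = min(int(abs(dx) / pitch_mm), table.shape[0] - 1)
    j = min(int(abs(dy) / pitch_mm), table.shape[1] - 1)
    return table[i, j]
```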
With reference to FIGURES 9-11, a representative fall-off curve for one energy level E1 is generated in a step F1. Preferably, the energy level E1 is a low energy within a range including 75 keV and 511 keV (e.g., about 75 keV). For purposes of explanation, it is to be understood that the curve 54 represents the actual fall-off curve for the energy E1. A fall-off curve (not shown) for another energy E2, E3, E4 is acquired in a step F2. The fall-off curve (including, for example, the fall-off curve 54 for the energy level E1) is normalized to be within a range including, for example, zero (0) and 100 in a step F3. The fall-off curve for one of the energies E2, E3, E4 is divided by the fall-off curve 54 for the first energy E1 in a step F4, thereby generating one of a plurality of energy ratio curves (pre-correction curves) 80, 82, 84 (see FIGURE 10). The energy ratio curves 80, 82, 84 represent the weighting that must be applied as a function of distance to a sensor's output when a respective one of the energies E2, E3, E4 is being imaged.
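As an illustration of steps F3 and F4 only, a short sketch of normalizing two measured fall-off curves and dividing one by the other to obtain an energy pre-correction curve; the 0-100 normalization target follows the text, while the function and argument names are assumptions.

```python
import numpy as np

def energy_ratio_curve(falloff_num, falloff_den, eps=1e-6):
    """Normalize both fall-off curves to the 0-100 range (step F3) and divide
    one by the other (step F4) to obtain a pre-correction curve."""
    num = np.asarray(falloff_num, dtype=float)
    den = np.asarray(falloff_den, dtype=float)
    num = 100.0 * num / max(num.max(), eps)     # step F3: normalize to 0-100
    den = 100.0 * den / max(den.max(), eps)
    return np.where(den > eps, num / den, 0.0)  # step F4: point-wise ratio
```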
A decision is made in a step F5 whether to repeat the process of generating another one of the energy ratio curves 80, 82, 84. If it is desired to repeat the process, control returns to the step F2 for acquiring the fall-off curve for another energy. Otherwise, control passes to a step F6. With reference to FIGURE 10, the energy ratio curve 80 represents E1/E2, the energy ratio curve 82 represents E1/E3, and the energy ratio curve 84 represents E1/E4. Although only four (4) energy levels are discussed, it is to be understood that any number of energy levels may be used. It is noted that each of the energy ratio curves 80, 82, 84 may be made smoother by collecting more data and/or applying commonly known regression or curve fits.
It is evident that all of the energy ratio curves 80, 82, 84 have generally the same shape but are scaled differently. Since table space (i.e., computer memory) is usually limited in practical implementations, and/or time constraints prohibit acquiring curves for all continuous energies, an additional energy scaling curve may optionally be used.
An energy scaling curve 86 is generated by determining scaling values between the energy ratio curve 80, which represents E1/E2 (e.g., the highest energy), and each of the energy ratio curves 82, 84, which represent E1/E3 and E1/E4, respectively. In this manner, the energy scaling curve 86, which yields an energy scaling factor as a function of energy, is produced in the step F6. It is to be understood that standard methods are used for fitting a curve to the scaling values between the various energy ratio curves. As will be discussed in more detail below, a scaling value sv_i may be obtained from the energy scaling curve 86 as a function of energy. In the current example, it is assumed that the optimal weighting graph 50 (see FIGURE 6) is calibrated for the energy E1. Therefore, once the energy ratio curves 80, 82, 84 are created, the optimal weighting graph 50 (see FIGURE 6) may optionally be "pre-corrected" as a function of an energy ratio curve corresponding to the energy being imaged and a distance of the sensor. More specifically, with reference to FIGURES 3 and 10-12, a distance between a sensor center and the event 28 is determined in a sub-step C2A. Then, in a sub-step C2B, an energy pre-correction factor pv_i is optionally obtained from the graph 80 as a function of the distance determined in the sub-step C2A. Importantly, an appropriate one of the energy ratio curves 80, 82, 84 is selected as a function of the energy being imaged. A scaling value sv_i is optionally obtained from the energy scaling curve 86 in a sub-step C2C. A fall-off correction value fcv_i is obtained from the optimal weighting graph 50 as a function of distance in a sub-step C2D. The weighting factor w_i is calculated in a sub-step C2E as w_i = sv_i * pv_i * fcv_i. Then, a corrected sensor output value is calculated as S_i' = w_i * S_i in the sub-step C3. The weighting factor w_i and the corrected sensor output value S_i' are used in the above equations for energy E and position x.
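A compact sketch of sub-steps C2B-C2E, assuming the fall-off correction curve, the selected energy ratio curve, and the energy scaling curve are stored as sampled arrays; every name and the use of linear interpolation are assumptions.

```python
import numpy as np

def combined_weight(distance, energy_keV, dist_axis, falloff_vals,
                    precorr_vals, energy_axis, scaling_vals):
    """w_i = sv_i * pv_i * fcv_i (sub-step C2E)."""
    pv = np.interp(distance, dist_axis, precorr_vals)       # pre-correction factor (sub-step C2B)
    sv = np.interp(energy_keV, energy_axis, scaling_vals)   # energy scaling value (sub-step C2C)
    fcv = np.interp(distance, dist_axis, falloff_vals)      # fall-off correction value (sub-step C2D)
    return sv * pv * fcv

# The corrected output for sub-step C3 is then simply S_i' = w_i * S_i.
```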
In the preferred embodiment, the fall-off curves for the energies E1, E2, E3, E4 (see, e.g., the fall-off curve 54 of FIGURE 6) are generated by flooding an open detector with a radiation source of a known energy. For each event that interacts in the crystal of the detector, an estimate of the event position is determined. Then, the distance from the event to each of the sensor centers is calculated. In order to have a statistically significant number of counts for each distance, multiple events are produced. A histogram of each sensor's output is created as a function of distance. It is to be understood that the resolution of the distances may be set according to the requirements of the application (e.g., 1/4 of an intrinsic resolution of a gamma camera). The histograms from different sensor outputs may be combined to generate a composite histogram for the entire detector or for certain areas that can naturally be grouped together. The mean value of each histogram is then computed to generate the fall-off curve as a function of distance. The curve can be normalized by dividing each value by the maximum fall-off value (e.g., the value at the distance zero (0)).
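To make the flood-calibration procedure concrete, the following sketch histograms each sensor's output against its distance to the estimated event position and takes the per-bin mean; the binning, the normalization by the maximum, and all names are assumptions consistent with, but not taken verbatim from, the text.

```python
import numpy as np

def measure_falloff(event_positions, sensor_outputs, sensor_xy,
                    bin_mm=1.0, max_mm=200.0):
    """Build a composite distance histogram over many flood events and return
    the normalized mean sensor output per distance bin (a fall-off curve)."""
    edges = np.arange(0.0, max_mm + bin_mm, bin_mm)
    sums = np.zeros(len(edges) - 1)
    counts = np.zeros(len(edges) - 1)
    for pos, outputs in zip(event_positions, sensor_outputs):
        d = np.linalg.norm(sensor_xy - np.asarray(pos, dtype=float), axis=1)
        idx = np.digitize(d, edges) - 1
        ok = (idx >= 0) & (idx < len(sums))
        np.add.at(sums, idx[ok], np.asarray(outputs, dtype=float)[ok])
        np.add.at(counts, idx[ok], 1)
    mean = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    return mean / mean.max() if mean.max() > 0 else mean
```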
FIGURE 13 illustrates a second embodiment of the present invention including a single photon emission computed tomography ("SPECT") scanner. For ease of understanding this embodiment of the present invention, like components are designated by like numerals with a primed (') suffix and new components are designated by new numerals. With reference to FIGURE 13, a SPECT scanner 100 includes three (3) detectors 12' mounted for movement around a subject 14' in an examination region 16'. The subject is injected with a radioisotope. Each of the detectors 12' includes a scintillation crystal 20' for converting radiation events from the injected isotope into a flash of light energy or scintillation. Optionally, a radiation source 102 produces a fan of transmission radiation of a different energy than the injected radiation. Collimators 104 on the detectors limit and define the paths or rays along which each detector can receive emission and transmission radiation. The location of the scintillation and the position of the receiving detector uniquely determine the ray.
An array of sensors 22', e.g. 59 sensors, is arranged to receive the light flashes from the scintillation crystal 20'. Each of the sensors 22' generates a respective analog sensor output pulse (FIGURE 5) in response to the received light flash. Furthermore, each of the sensors 22' is electrically connected to at least one of a plurality of analog-to-digital converters 24'. As discussed above, the analog-to-digital converters 24' convert the analog sensor output pulses to respective series of three digital sensor output values. Also, a processor 26' determines the energy and the location in two dimensions of each scintillation on the face of the detector, hence the ray along which the radiation originated. Additionally, the curves of FIGURES 6, 10, and optionally 11 are digitized and stored in respective look-up tables 52'.
Once the corrected position and energy are determined on the detector 12' at which a scintillation occurred, and from the respective positions of the detectors, a processor 60' reconstructs an image representation from the emission data. When the radiation source 102 is used, the transmission data is used to correct the emission data for an improved image. The image representation is stored in an image memory 62'. A video processor 70' processes the image representation data for display on a monitor 72'.
Again, the three heads can be used without collimators in a PET mode. The heads are positioned to provide uniform coverage of the region of interest during annihilation events. A coincidence detector 66' determines concurrent events and a ray calculator 68' calculates the trajectory between each pair of coincident events.

Claims

Having thus described the preferred embodiment, the invention is now claimed to be:
1. A method of generating an image representation from detected radiation events, the method comprising: converting radiation from a subject (14) in an examination region (16) into light energy events; receiving the light events with an array of sensors (22); generating (A) respective sensor output values in response to each received light event; determining (B) for each light energy event (i) at least one of an initial position and an energy and (ii) distances from the determined initial position to each sensor which received the light energy event; for each energy event, correcting (C) at least one of the initial position and the event energy in accordance with the determined distances; and generating (D) an image representation from the corrected positions.
2. The method as set forth in claim 1 wherein the correcting step (C) includes: weighting (C2) each sensor output value as a function of a distance between the respective sensor and the position of the event; and determining (C3) at least one of the position and the energy of the event from the weighted sensor output values .
3. The method as set forth in claim 2, wherein the step (B) of determining at least one of the position and the energy includes: calculating the energy E of the event as:

E = \sum_i w_i^E S_i ,

where S_i represents the respective sensor output value and w_i^E represents weighting values.
4. The method as set forth in either one of preceding claims 2 and 3, further including: determining the initial position x_0 of the event as a function of the respective distances (d1, d2, ..., d19) of the sensors from the position of the event.
5. The method as set forth in claim 4, further including: determining a corrected position x of the event as:

x = \frac{\sum_i w_i^x S_i x_i}{\sum_i w_i^x S_i} ,

where x_i represents respective sensor locations and where w_i^E and w_i^x represent weighting values that are a function of the respective distance |x_i - x_0| between the sensor location x_i and the initial position x_0 of the event.
6. The method as set forth in claim 5, further including: determining the weighting values w_i^E and w_i^x from an empirically generated optimum weighting graph.
7. The method of determining at least one of a position and an energy of an event as set forth in any one of preceding claims 3-6, further including: determining the weighting values w_i^E and w_i^x from pre-correction curves as a function of the respective distance |x_i - x_0| and as a function of the energy of the event.
8. The method as set forth in claim 7, further including: determining the weighting values w_i^E and w_i^x as a function of a scaling curve representing a relationship between various ones of the pre-correction curves.
9. The method as set forth in any one of preceding claims 2-8, further including: determining the initial position x0 of the event as a centroid of the event.
10. The method as set forth in any one of preceding claims 2-9, wherein the step of determining the position of the event includes: ignoring (B5) any of the sensor output values of a sensor having an output value that reaches a trigger amplitude after a delay period following the radiation event.
11. The method as set forth in any one of preceding claims 2-10, wherein a step of determining the energy of the event includes: ignoring any of the sensor output values of a sensor having an output value that reaches the baseline amplitude before the radiation event.
12. The method as set forth in any one of preceding claims 1-11, further including determining weighting values for correcting the at least one of the initially determined position and the initially determined energy of each event, including: generating (F1, F2) a plurality of fall-off curves (54), each of the fall-off curves corresponding to a respective one of a plurality of energies; creating (F4) a plurality of energy ratio curves (80, 82, 84) as a function of the fall-off curves, each of the energy ratio curves representing a relationship between a selected pair of the energies; determining (C2) a weighting value from one of the energy ratio curves for scaling the fall-off curve associated with one of the energies; and correcting (C3) the at least one of the initially determined position and the initially determined energy as a function of the weighting value and the fall-off curve (54) associated with the initially determined energy.
13. The method as set forth in claim 12, further including: generating (F6) an energy scaling curve (86) representing a relationship between the energy ratio curves, the determining step also determining the weighting value as a function of the energy scaling curve.
14. The method as set forth in claim 12, wherein the step of generating each of the fall-off curves includes: dividing (F1, F2) a selected fall-off curve by an actual fall-off curve, each of the fall-off curves representing an energy amplitude as a function of a distance.
15. The method as set forth in claim 14, further including: before the creating step, normalizing (F3) the fall- off curves.
16. The method as set forth in any one of preceding claims 1-15, further including: weighting (C3) each of the sensor output values in accordance with the corresponding determined distance; and determining (C4) a corrected position and a corrected energy in conjunction with the weighted sensor output values.
17. The method as set forth in claim 16, further including: iteratively repeating (C5) the steps of weighting and determining the corrected position and the corrected energy.
18. The method as set forth in either one of preceding claims 16 and 17, further including: generating (C2) weighting values for each of the distances as a function of a selected response curve and an input response curve.
19. The method as set forth in claim 18, further including: generating the weighting values as a function of the energy of the radiation.
20. The method as set forth in claim 19, further including: generating (F4) energy ratio curves (80, 82, 84) representing respective relationships between a plurality of radiation energies; generating (F6) an energy scaling curve (86) representing a relationship between the plurality of energies and a plurality of respective scaling factors; and generating the weighting values as a function of the scaling factors.
21. The method as set forth in claim 16, further including: accessing weighting values from a look-up table (52) .
22. The method as set forth in claim 21, further including: indexing the look-up table as a function of at least one of time, temperature, count-rate, depth of interaction, and radiation energy.
23. The method as set forth in claim 22, further including: analyzing (B1) the sensor output values to detect a start of each light event.
24. The method as set forth in claim 23, further including: in response to detecting a subsequent light event after an integration period of one of the light events begins, ignoring (B5) the sensor values associated with the sensors receiving the subsequent light event when calculating the initial position and the energy of the light event.
25. The method as set forth in any one of preceding claims 1-24, further including: detecting temporally adjacent light events that are at least partially overlapping; determining a position for each non-overlapping light event ; in each pair of overlapping light events, compensating for one of the light events while determining a position of the other; and, generating the image representation from the determined positions.
26. The method as set forth in claim 25, further including: in the step of determining the position of overlapping light events, ignoring any of the sensor output values associated with a first light event while determining the position of a second light event.
27. The method as set forth in any one of preceding claims 1-24, wherein each of the output values is associated with one of a current radiation event, a previous radiation event, and a subsequent radiation event, and further including: identifying (B1) when the current light event occurs as a function of the sensor output values; integrating (B3) the sensor output values associated with the current light event; and generating (D) the image as a function of the sensor output values integrated from the current light event.
28. The method as set forth in claim 27, further including: determining if any of the output values are associated with the previous light event.
29. The method as set forth in claim 28, wherein the ignoring step includes: reassigning the output values associated with the previous light event to be about zero.
30. The method as set forth in any one of preceding claims 27-29, wherein the identifying step includes: determining when one of the output values surpasses a trigger amplitude (44) .
31. A method of determining at least one of a position and an energy of an event detected by a medical imaging device, the method comprising: transforming (A2) each received radiation event into a light energy event; with an array of sensors (22) , converting (A4) each light energy event into a plurality of output pulses; determining (Bl) when a radiation event occurs from the sensor output pulses; weighting (C3) each sensor output pulse as a function of a distance between the respective sensor and the position of the event; and determining (C4) at least one of the position and the energy of the event from the weighted sensor output pulses.
32. A method of determining weighting values for correcting at least one of an initially determined position and an initially determined energy of a radiation event, the method comprising: generating (F1, F2) a plurality of fall-off curves (54), each of the fall-off curves corresponding to a respective one of a plurality of energies; creating (F4) a plurality of energy ratio curves (80, 82, 84) as a function of the fall-off curves, each of the energy ratio curves representing a relationship between a selected pair of the energies; determining (C2) a weighting value from one of the energy ratio curves for scaling the fall-off curve associated with one of the energies; and correcting (C4) the at least one of the initially determined position and the initially determined energy as a function of the weighting value and the fall-off curve associated with the initially determined energy.
33. A method of generating an image representation from detected radiation events, the method comprising: converting (A1, A2) radiation from a subject in an examination region into flashes of light; receiving (A3) the flashes of light with an array of sensors; generating (A4) respective sensor output values in response to each received light flash; detecting (B2) temporally adjacent light events that are at least partially overlapping; determining (C1) a position for each non-overlapping light event; in each pair of overlapping light events, compensating (C2-C5) for one of the light events while determining a position of the other; and, generating (D) an image representation from the determined positions.
34. A method for generating an image from a radiation event detected by a nuclear camera, the method comprising: detecting (A) a plurality of output signals, each of the output signals being associated with one of a current radiation event, a previous radiation event, and a subsequent radiation event; identifying (B) when the current radiation event occurs as a function of the output signals; integrating (B3) the output signals associated with the current radiation event; and generating (D) the image as a function of the output signals integrated from the current radiation event.
35. A nuclear camera system comprising: a detector (12) for receiving radiation from a subject in an exam region, the detector including: a scintillation crystal (20) that converts radiation events into light events; an array of sensors (22) arranged to receive the light events from the scintillation crystal, a plurality of the sensors generating a respective sensor output value in response to each received light event; and a processor (26) for determining when each of the radiation events is detected, at least one of an initial position and an energy of each of the detected radiation events being determined in accordance with respective distances from a position of the detected event to the sensors.
36. The nuclear camera system as set forth in claim 35, further including: a plurality of analog-to-digital converters (24), each of the sensors being electrically connected to at least one of the analog-to-digital converters for converting the sensor output values from analog values to respective series of digital sensor output values.
37. The nuclear camera system as set forth in either one of preceding claims 35 and 36, wherein the processor (26) weights the sensor output values with weighting values, which are determined in accordance with the respective distances (dx- d19) from the position of each event to each of the sensors that detects the event, for determining corrected positions and energies of the events.
38. The nuclear camera system as set forth in claim 37, wherein the processor (26) determines a subsequent set of weighting values as a function of the corrected positions and energies of the events.
39. The nuclear camera system as set forth in claim 37, wherein the processor (26) generates the weighting values for each of the distances as a function of a desired response curve (56) and an input response curve (52) .
40. The nuclear camera system as set forth in claim 39, wherein the processor (26) generates the weighting values as a function of an energy being imaged.
41. The nuclear camera system as set forth in claim 40, wherein: the processor (26) generates energy ratio curves (80, 82, 84) representing respective relationships between a plurality of the energies being imaged; the processor generates an energy scaling curve (86) representing a relationship between the plurality of energies being imaged and respective scaling factors; and the processor generates the weighting values as a function of one of the scaling factors.
42. The nuclear camera system as set forth in any one of preceding claims 37-40, further including: a multi-dimensional look-up table (52) for storing the weighting values and being indexed by the processor as a function of at least one of time, temperature, count-rate, depth of interaction, and energy.
43. The nuclear camera system as set forth in any one of preceding claims 35-42, wherein the processor (26) analyzes the sensor output values for detecting a start of the event .
44. The nuclear camera system as set forth in any one of preceding claims 35-43, wherein the processor (26) analyzes the sensor output values for detecting an on-going previous event, any sensor output values associated with the previous event being excluded from calculations of an initial position and an energy of a next detected event.
45. The nuclear camera system as set forth in any one of preceding claims 41-50, wherein in response to the processor detecting a next event after an integration period of the event begins, the sensor values associated with the sensors of the next event being nulled from calculations of the initial position and the energy of the event.
46. The nuclear camera system as set forth in any one of preceding claims 35-45, further including: a second detector (20) disposed across an imaging region from the first detector (20); a coincidence detector (66) connected with the first and second detectors for detecting concurrent events on both detectors; and a reconstruction processor (60) for determining rays through the imaging region between concurrent events and reconstructing the rays into an output image representation.
47. A nuclear camera system comprising: a detector (20) for receiving radiation from a subject in an exam region, the detector including: a scintillation crystal (20) that converts radiation events into flashes of light; an array of sensors (22) arranged to receive the light flashes from the scintillation crystal, a plurality of the sensors generating a respective sensor output value in response to each received light flash; and a processor (26) which (i) detects overlapping events that are sufficiently temporally close that their light flashes are at least partially concurrent, (ii) determines at least one of position and energy of at least one of the overlapping events while compensating for the partially concurrent light flash of the other, and (iii) generates an image representation from the initial positions and the energies.
PCT/US2001/017869 2000-06-02 2001-06-01 Event localization and fall-off correction by distance-dependent weighting WO2001093763A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP01944229A EP1328825A2 (en) 2000-06-02 2001-06-01 Event localization and fall-off correction by distance-dependent weighting

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US20903200P 2000-06-02 2000-06-02
US60/209,032 2000-06-02
US09/846,013 2001-04-30
US09/846,013 US6603125B1 (en) 2000-06-02 2001-04-30 Event localization and fall-off correction by distance-dependent weighting

Publications (2)

Publication Number Publication Date
WO2001093763A2 true WO2001093763A2 (en) 2001-12-13
WO2001093763A3 WO2001093763A3 (en) 2002-07-04

Family

ID=26903756

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/017869 WO2001093763A2 (en) 2000-06-02 2001-06-01 Event localization and fall-off correction by distance-dependent weighting

Country Status (3)

Country Link
US (2) US6603125B1 (en)
EP (1) EP1328825A2 (en)
WO (1) WO2001093763A2 (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6946658B2 (en) * 2002-07-05 2005-09-20 The Washington University Method and apparatus for increasing spatial resolution of a pet scanner
CA2404613C (en) * 2002-09-20 2008-07-08 Is2 Research Inc. Method of localizing a scintillation event in a scintillation camera
US7262416B2 (en) * 2002-11-27 2007-08-28 Koninklijke Philips Electronics N.V. Gamma camera with dynamic threshold
US7649175B2 (en) * 2004-01-13 2010-01-19 Koninklijke Philips Electronics N.V. Analog to digital conversion shift error correction
US20110101230A1 (en) * 2005-02-04 2011-05-05 Dan Inbar Advanced SNM Detector
US7820977B2 (en) * 2005-02-04 2010-10-26 Steve Beer Methods and apparatus for improved gamma spectra generation
US7847260B2 (en) * 2005-02-04 2010-12-07 Dan Inbar Nuclear threat detection
US8173970B2 (en) 2005-02-04 2012-05-08 Dan Inbar Detection of nuclear materials
US7791029B2 (en) * 2005-08-18 2010-09-07 Societe De Commercialisation Des Produits De La Recherche Appliquee- Socpra Sciences Sante Et Humaines, S.E.C. Digital identification and vector quantization methods and systems for detector crystal recognition in radiation detection machines
US7446308B2 (en) * 2005-12-22 2008-11-04 Baker Hughes Incorporated Method of calibrating multi-channel nuclear energy spectra
US20100254311A1 (en) * 2007-05-25 2010-10-07 Osvaldo Simeone Pulse-Coupled Discrete-Time Phase Locked Loops For Wireless Networks
US8340377B2 (en) * 2007-09-17 2012-12-25 Siemens Medical Solutions Usa, Inc. Method for energy calculation and pileup determination for continuously sampled nuclear pulse processing
WO2009054070A1 (en) * 2007-10-26 2009-04-30 Shimadzu Corporation Radiation detector
US8384037B2 (en) * 2008-09-11 2013-02-26 Siemens Medical Solutions Usa, Inc. Use of crystal location in nuclear imaging apparatus to minimize timing degradation in a photodetector array
US8115172B2 (en) * 2008-09-26 2012-02-14 Siemens Medical Solutions Usa, Inc. Position-weighted location of scintillation events
DE102009042054A1 (en) * 2009-09-15 2011-03-24 Mirion Technologies (Rados) Gmbh Method for detecting a contamination on a moving object and measuring device for this purpose
US8450693B2 (en) * 2009-12-11 2013-05-28 General Electric Company Method and system for fault-tolerant reconstruction of images
US20110142367A1 (en) * 2009-12-15 2011-06-16 Charles William Stearns Methods and systems for correcting image scatter
EP2762832B1 (en) * 2013-01-30 2018-06-13 Hexagon Technology Center GmbH Optical single-point measurement
CN104337531B (en) * 2013-07-25 2016-12-28 苏州瑞派宁科技有限公司 Method and system are met at heat input for digital PET system
CN105637386B (en) 2013-10-14 2019-01-15 皇家飞利浦有限公司 Histogram in positron emission tomography (PET) energy histogram is smooth
US9606245B1 (en) 2015-03-24 2017-03-28 The Research Foundation For The State University Of New York Autonomous gamma, X-ray, and particle detector
WO2016201131A1 (en) * 2015-06-09 2016-12-15 University Of Washington Gamma camera scintillation event positioning
CN105115994A (en) * 2015-07-22 2015-12-02 武汉数字派特科技有限公司 Digital PET energy parameterization calibration method and system
CN109073769B (en) * 2016-02-19 2022-11-25 卡里姆·S·卡里姆 Method and apparatus for improved quantum detection efficiency in X-ray detectors
US11269088B2 (en) * 2017-07-31 2022-03-08 Shimadzu Corporation Radiation detector and nuclear medicine diagnosis device
WO2020073186A1 (en) * 2018-10-09 2020-04-16 Shenzhen Xpectvision Technology Co., Ltd. Methods and systems for forming images with radiation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2757956A1 (en) 1996-12-31 1998-07-03 Commissariat Energie Atomique DEVICE AND METHOD FOR NUCLEAR LOCALIZATION BY CALCULATION OF ITERATIVE BARYCENTER, AND APPLICATION TO GAMMA-CAMERAS

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4899054A (en) * 1988-01-19 1990-02-06 General Electric Company Gamma camera with image uniformity by energy correction offsets
DE59102128D1 (en) 1991-09-11 1994-08-11 Siemens Ag Procedure for fast location using the maximum likelihood estimator for a gamma camera.
US5576546A (en) * 1992-10-28 1996-11-19 Park Medical Systems Inc. Depth-of-interaction normalization of signals for improved positioning, and energy resolution in scintillation camera
US5345082A (en) 1993-03-22 1994-09-06 Sopha Medical Systems, Inc. Scintillation camera utilizing energy dependent linearity correction
GB9314398D0 (en) * 1993-07-12 1993-08-25 Gen Electric Signal processing in scintillation cameras for nuclear medicine
US5410153A (en) * 1993-07-27 1995-04-25 Park Medical Systems, Inc. Position calculation in a scintillation camera
US5576547A (en) 1993-07-27 1996-11-19 Park Medical Systems Inc. Position calculation and energy correction in the digital scintillation camera
US5508524A (en) * 1994-10-03 1996-04-16 Adac Laboratories, Inc. Spatially variant PMT cluster constitution and spatially variant PMT weights
US5491342A (en) 1994-11-10 1996-02-13 Trionix Research Laboratory, Inc. Apparatus and method for nuclear camera calibration
US5545898A (en) * 1994-12-13 1996-08-13 Park Medical Systems, Inc. Scintillation camera position calculation with uniform resolution using variance injection
FR2755815B1 (en) * 1996-11-08 1998-12-18 Commissariat Energie Atomique DEVICE AND METHOD FOR DETERMINING THE PRESUMED POSITION OF AN EVENT IN RELATION TO A SET OF PHOTODETECTORS, AND APPLICATION TO GAMMA-CAMERAS
FR2757955B1 (en) * 1996-12-31 1999-01-29 Commissariat Energie Atomique DEVICE AND METHOD FOR NUCLEAR LOCALIZATION BY CALCULATION OF PARALLEL BARYCENTER, AND APPLICATION TO GAMMA-CAMERAS
US6169287B1 (en) * 1997-03-10 2001-01-02 William K. Warburton X-ray detector method and apparatus for obtaining spatial, energy, and/or timing information using signals from neighboring electrodes in an electrode array
US6310349B1 (en) * 1997-05-07 2001-10-30 Board Of Regents, The University Of Texas System Method and apparatus to prevent signal pile-up
CA2212196A1 (en) * 1997-08-01 1999-02-01 Is2 Research Inc. Medical diagnostic apparatus and method
US6252232B1 (en) * 1998-06-29 2001-06-26 General Electric Company Digital integrator
CA2248424A1 (en) * 1998-09-25 2000-03-25 Is2 Research Inc. Image generation method
US6291825B1 (en) * 1998-10-23 2001-09-18 Adac Laboratories Method and apparatus for performing pulse pile-up corrections in a gamma camera system
US6198104B1 (en) * 1998-10-23 2001-03-06 Adac Laboratories Randoms correction using artificial trigger pulses in a gamma camera system
US6169285B1 (en) * 1998-10-23 2001-01-02 Adac Laboratories Radiation-based imaging system employing virtual light-responsive elements
US6525323B1 (en) * 2000-04-18 2003-02-25 Koninklijke Philips Electronics, N.V. Method and apparatus for improved estimation of characteristics of pulses detected by a nuclear camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2757956A1 (en) 1996-12-31 1998-07-03 Commissariat Energie Atomique DEVICE AND METHOD FOR NUCLEAR LOCALIZATION BY CALCULATION OF ITERATIVE BARYCENTER, AND APPLICATION TO GAMMA-CAMERAS

Also Published As

Publication number Publication date
US6723993B2 (en) 2004-04-20
EP1328825A2 (en) 2003-07-23
US20030116713A1 (en) 2003-06-26
US6603125B1 (en) 2003-08-05
WO2001093763A3 (en) 2002-07-04

Similar Documents

Publication Publication Date Title
US6603125B1 (en) Event localization and fall-off correction by distance-dependent weighting
JP3343122B2 (en) An automated coincidence timing calibration method for PET scanners
US5818050A (en) Collimator-free photon tomography
US9029786B2 (en) Nuclear medicine imaging apparatus, and nuclear medicine imaging method
US8471210B2 (en) Radiation imaging method with individual signal resolution
US6680750B1 (en) Device and method for collecting and encoding signals coming from photodetectors
US6169285B1 (en) Radiation-based imaging system employing virtual light-responsive elements
US5760401A (en) Resolution enhancement apparatus and method for dual head gamma camera system capable of coincidence imaging
US7626172B2 (en) Nuclear medical diagnosis apparatus
EP1840597A2 (en) Energy calibration method and radiation detecting and radiological imaging apparatus
Vaquero et al. Performance characteristics of a compact position-sensitive LSO detector module
JP7317586B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, METHOD AND PROGRAM
US6281504B1 (en) Diagnostic apparatus for nuclear medicine
JP4594855B2 (en) Nuclear medicine diagnostic apparatus, radiation camera, and radiation detection method in nuclear medicine diagnostic apparatus
US7132663B2 (en) Methods and apparatus for real-time error correction
US7262416B2 (en) Gamma camera with dynamic threshold
US6348692B1 (en) Device and method for nuclear locating by iterative computing of barycenter, and application to gamma-cameras
US7518102B2 (en) Calibration method and apparatus for pixilated solid state detector
Logan et al. Single photon scatter compensation by photopeak energy distribution analysis
US6403961B1 (en) Image generation method
JP4142767B2 (en) Nuclear medicine diagnostic equipment
US9186115B2 (en) Method and apparatus for compensating for magnetic field during medical imaging
US7323691B1 (en) Methods and apparatus for protecting against X-ray infiltration in a SPECT scanner
Kanno Pet Instrumentation for Quantitative Tracing of Radiopharmaceuticals
JPH09318750A (en) Method for correcting absorption of spect

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

REEP Request for entry into the european phase

Ref document number: 2001944229

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2001944229

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2001944229

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP