Apparatus and method for digital X-ray imaging
Technical Field
The present invention relates to a digital X-ray imaging apparatus and a method of compensating images acquired by the same.
Background Art
Diagnostic criteria require that X-ray images such as mammograms exhibit excellent spatial resolution and contrast sensitivity. An imaging system which offers wider dynamic range, higher contrast sensitivity, higher spatial resolution, and the ability to manipulate and archive the image is desirable.
Two approaches are currently under investigation for digital diagnostic X-ray imaging. One is the secondary digitization technique, in which conventional film mammograms are digitized. The other approach is the acquisition of primary digital images, the "electronic imaging technique".
In U.S. Pat. No. 5,105,087, issued to Jagielinski, multiple detector arrays are used to image over the large area needed in clinical mammography applications. This invention relies on multiple layers of detector elements, one above the other, to provide a complete image with no gaps. One disadvantage with this system is that enough photo detectors must be used to cover the active area which increases the cost of the device. Another disadvantage is the effect of the edges of the detector arrays in one layer on the x-ray image seen by the detectors below these edges.
In U.S. Pat. No. 5,043,582 issued to Cox et al., the photo sensitive properties of transistors found in dynamic random access memory (DRAM) integrated circuits are used to detect photons emitted from x-ray sensitive phosphors. The use of DRAM cells as photo sensitive pixels results in less optical sensitivity, because the entire active area of each pixel is not photo sensitive, due to the
requirements for addressing the DRAM cells. Furthermore, the detection scheme described by Cox et al. is binary in nature. Therefore, substantial effort would be required to obtain gray scales.
Optically coupled CCD techniques are described in two U.S. patents. In U.S. Pat. Nos. 5,142,557 issued to Toker et al., and 5,216,250 issued to Pellegrino et al., an optical lens is used to image the visible photons emitted by an x-ray sensitive phosphor screen onto a single CCD detector. Because the CCD detector is smaller than the phosphor screen, the image from the screen must be reduced, or demagnified, in order for the detector to record the entire image. This means that each pixel on the CCD detector corresponds to a larger equivalent area on the phosphor screen. Therefore, the spatial resolution of this system is less than the spatial resolution of the CCD detector.
In U.S. Pat. No. 5,844,242 issued to Jalink et al., an apparatus and method for large field digital mammography is presented. It uses a mosaic of electronic digital imaging arrays to scan an image. The imaging arrays are mounted on a carrier platform to form a pattern. The arrays are then exposed to a portion of a radiated image, and convert this radiation into digital data. The platform is subsequently repositioned and the arrays are exposed to another portion of the image. While the arrays are being repositioned, the digital data in the arrays is transferred to a computer memory. This process is repeated until the entire image has been exposed to the arrays. The stored multiple image data is combined by a data processor to form data which corresponds to the original radiated image. Because only a portion of the image is captured at a time, acquiring the entire image takes a long time.
Disclosure of Invention
An object of the present invention is to provide a digital X-ray imaging apparatus with large field coverage and method of compensating images acquired
by the same.
Another object of the present invention is to provide a digital X-ray imaging apparatus with high spatial resolution and method of compensating images acquired by the same.
Another object of the present invention is to provide a digital X-ray imaging apparatus which alleviates the need to use a beam splitter and method of compensating images acquired by the same.
These and other objects of the present invention can be achieved by providing multiple imaging devices and by compensating the distortions caused by each imaging device and lens. Since multiple imaging devices are used to capture an image, both high spatial resolution and large field coverage can be achieved. Also, by capturing the visible light without the use of beam splitters and by compensating the radiation noise using digital signal processing techniques, the splitter-free digital X-ray imaging apparatus of the present invention can be provided.
Brief Description of Drawings
The nature and mode of operation of a preferred embodiment of the present invention will now be more fully described in the following detailed description, taken with the accompanying drawings, wherein:
Figure 1 is a block diagram showing the overall structure of the X-ray imaging apparatus of the present invention.
Figure 2 is a schematic diagram showing the detailed structure of a scintillator, a lens part and an imaging part.
Figure 3 is an example of the imaging areas of the imaging devices.
Figure 4 is an embodiment of the lens part and the imaging part, where four lenses and four imaging devices are mounted on a printed circuit board.
Figure 5 is a block diagram of the photo-sensing part used to calibrate the exposure timing of the imaging part.
Figure 6 is a schematic view of the arrangement of pixels.
Figure 7 shows a case where four imaging devices are used and the images acquired by these imaging devices are compensated and collected to form an entire image.
Figure 8 is an embodiment of the calibration grid.
Figure 9 shows correct crosses and a wrong cross.
Figure 10 is a schematic diagram of the central portion of the grid which is used to explain the procedure for finding the center cross of the grid.
Figure 11 shows the effect of noise when searching for a horizontal line.
Figure 12 shows the procedure of following the horizontal line.
Figure 13 shows the case where SLINE encounters a noise spot while searching for a vertical line.
Figure 14 shows the effect of a noise spot when searching for a vertical line.
Figure 15 shows schematically the procedure of the grid mapping.
Figure 16 shows the numbering of the grid positions.
Figure 17 shows schematic diagrams of a wrong and a correct block.
Figure 18 shows schematically the procedure of the image mapping.
Figure 19 is a detailed schematic diagram showing the procedure of the image mapping.
Figure 20 is a schematic diagram for explaining the procedure of the bilinear interpolation.
Best Mode for Carrying out the Invention
< Overall Structure >
Figure 1 shows the overall structure of the X-ray imaging apparatus of the present invention.
A conventional X-ray source 10 emits X-rays 11 towards and through a subject 20. The X-rays travel through the subject 20 and a conventional fluorescent intensifying screen 30, which is also called a scintillator. As a result of passage through the scintillator 30, X-rays and a visible component of the light spectrum 12 are emitted.
A lens system 40, which is composed of lenses 41, one for each imaging device 51a, 51b, ..., 51n such as a CCD (Charge Coupled Device) or CID (Charge Injection Device), collects the light and forms images at an imaging part 50.
The imaging devices 51 of the imaging part 50 are arranged so as to take partial images of the scintillator 30 as shown in Figure 2. Numeral 31 designates an area covered by the imaging device 51a. The area 31 covered by the imaging device 51a includes overlapped areas 32. By overlapping the areas covered by each imaging device, it is possible to cope with differences in the inherent characteristics of the imaging devices or lenses, errors in the manufacturing process, or environmental factors.
Figure 3 shows an example of images obtained using nine imaging devices. As shown in Figure 3, an image for the entire screen 30 can be obtained by collecting images acquired by each imaging device 51 and by discarding the overlapped areas 32 properly.
Figure 4 shows an embodiment of a lens and imaging device assembly. In this embodiment, four lenses and four imaging devices comprise the assembly. A lens 41 and an imaging device are housed in a tub. Each tub is mounted on a printed circuit board 44. Signal lines to and from the imaging devices are arranged on the printed circuit board 44. They are connected to a digital signal processing part 60 (Figure 1) using connectors (not shown).
For a larger screen or better resolution, the number of lenses and imaging devices in the apparatus of the present invention may be increased, either by increasing the number of lenses and imaging devices on a printed circuit board or by combining several lens and imaging device assemblies.
Photosensors 42 are utilized to calibrate the exposure timing. The operation
of the photosensors 42 will be described later.
Returning to Figure 1, image signals acquired by each imaging device 51 are fed to the digital signal processing part 60, where the signals are processed for image enhancement. The detailed description of image enhancement procedure will be given later.
The processed image data are then fed to a main controller 90, where the data are stored in a data storage 80 or displayed on a display 70.
< Photosensor >
When an X-ray beam 11 strikes the scintillator 30, visible light 12 is emitted. The visible light 12 gradually brightens and then dims. Thus it is important that the exposure of the imaging devices is well timed. Photosensors are used to determine the appropriate timing of the exposure.
Figure 5 shows a block diagram of the photo-sensing part of the X-ray imaging apparatus of the present invention. As the photosensor 42 reacts to the visible light 12 only weakly, that is, the output of the photosensor 42 does not change enough to be used directly by an analog-to-digital converter 46, it is amplified by an amplifier 45. The output signal of the amplifier 45 is then digitized by means of the analog-to-digital converter 46. The sampling at the analog-to-digital converter 46 is performed at a rate high enough to determine the exact relationship between the emitted visible light and time.
The digitized data is transmitted to a personal computer 48 through a PCI interface 47. The personal computer 48 converts this digitized signal into a graph showing the relationship between light intensity and time. From this information, the control timing of the camera shutter is determined.
The above mentioned procedure does not need to be performed every time an image is to be captured. It may be performed once when the X-ray imaging apparatus of the present invention is manufactured. Alternatively, it may be
performed when calibration of shutter control timing is required.
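The timing calibration above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure: the function name `exposure_window`, the sample representation, and the 10%-of-peak criterion are assumptions; the text itself only derives a light-versus-time graph from which the shutter timing is read off.

```python
def exposure_window(samples, dt):
    """Estimate when the scintillator light is bright enough for exposure.

    samples: photosensor intensities digitized every `dt` seconds.
    The window is taken as the span where the intensity exceeds 10% of
    the peak (an assumed criterion for illustration).
    """
    threshold = max(samples) * 0.1
    bright = [k for k, s in enumerate(samples) if s >= threshold]
    if not bright:
        return None
    return bright[0] * dt, bright[-1] * dt

# The light gradually brightens and then dims, as described in the text.
samples = [0, 1, 4, 9, 10, 8, 5, 2, 1, 0]
print(exposure_window(samples, dt=1.0))  # -> (1.0, 8.0)
```

In practice the endpoints of this window would be stored and reused for every subsequent exposure, matching the observation that the calibration need not run for each captured image.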
< Compensation of radiation noise >
The scintillator 30 emits visible radiation when stimulated by X-rays 11. But the visible radiation includes not only the exact image but also spots (hereinafter "noise") resembling bombs or falling snow. This phenomenon can be observed in almost every radiography apparatus. A special procedure is included in the imaging system of the present invention to compensate for this radiation noise.
To compensate for this noise, the following three assumptions are made:
1. There is no object which is smaller than the size of a pixel of the image sensor.
2. The intensity of the noise is significantly different from that of neighboring pixels.
3. The number of noise pixels neighboring a noise pixel does not exceed two.
Based on the above assumptions, compensation is made as below:
Detection
As can be seen from Figure 6, a pixel is surrounded by 8 other pixels. Every pixel is compared with its 8 neighboring pixels to determine whether the pixel is noise.
If the difference in intensity between the pixel and a neighboring pixel exceeds a predetermined amount for more than 6 of the neighboring pixels (assumption 3), then the pixel is recognized as noise.
Compensation
When a pixel is recognized as noise, the intensity of the pixel is replaced with the average intensity of the neighboring pixels.
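The detection and compensation steps above can be sketched together as follows. This is an illustrative Python sketch under assumed conventions (a 2-D list of intensities, border pixels left untouched); the function name `compensate_noise` is not from the disclosure.

```python
def compensate_noise(img, threshold):
    """Detect and compensate radiation noise per the three assumptions.

    img: 2-D list of pixel intensities (representation assumed for
    illustration). A pixel is recognized as noise when its intensity
    differs by more than `threshold` from more than 6 of its 8
    neighbours; it is then replaced by the neighbours' average.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):          # border pixels are left as-is
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            differing = sum(abs(img[y][x] - v) > threshold for v in neigh)
            if differing > 6:          # assumption 3: at most 2 noisy neighbours
                out[y][x] = sum(neigh) / len(neigh)
    return out

# A bright noise spot on a uniform background is replaced by the mean.
img = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(compensate_noise(img, 50)[1][1])  # -> 10.0
```

Because at most two neighbouring pixels may themselves be noise, requiring more than 6 differing neighbours keeps genuine small objects (assumption 1 excludes sub-pixel objects) from being smoothed away.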
< Calibration Grid >
In the present invention, multiple imaging devices are used to acquire an image. But the partial images acquired by each imaging device do not exhibit uniform quality because of differences in the inherent characteristics of the imaging devices or lenses, errors in the manufacturing process, or environmental factors.
A calibration grid is used in the present invention to compensate for this error. The grid is a combination of horizontal lines and vertical lines with predetermined dimensions (thickness of the lines, distance between the lines, length of the lines, number of the lines). In the calibration process, the system acquires an image of the calibration grid. As the dimensions of the grid are already known, a mapping table for the calibration of this error can be derived from this calibration image.
Figure 7 shows a case where four imaging devices are used. As the dimensions of the grid are already known, errors in each imaging device can be detected and corrected properly in the calibration process. The resulting mapping table is then used to compensate for these errors in the images acquired afterwards.
The detailed description of the compensation procedure will be given later.
Uniform intensity should be maintained over the entire image to obtain a correct calibration image. Thus the calibration grid is printed on a transparent film 53 and then attached to a plate 54 with uniform intensity (see Figure 8).
< Image Distortion Correction Procedure >
As already explained, the images obtained by each imaging sensor have inherent distortions. To correct these distortions, the calibration grid is used. Here, the distortion correction method of the present invention will be described.
Grid Position Detection
The distortion correction method of the present invention first searches for
grid positions of the calibration image obtained from the calibration grid. The "grid position" is a point where a vertical line and a horizontal line cross.
As shown in Figure 9, there should be a cross at the grid position. Thus, (a) and (b) of Figure 9 are the grid positions. But (c) does not constitute a grid position. Therefore, the calibration image obtained from the calibration grid should not include points like (c).
To detect the grid positions, a center of the calibration image is first determined. Then, as shown in Figure 10(a), a window 100 of a predetermined size whose center is the center of the image is determined. The size of the window 100 should be determined so as not to exceed the size of a unit grid of the calibration grid. Then, an image inside the window will be like the one shown in Figure 10(b). As the center part of the image does not suffer from severe distortion, only a center cross point 101 will be included in the window.
Next, the method determines the position of the center cross point 101. Starting from one of the four edges of the window 100, it scans one of the four sides of the window 100 until it finds a black point. This procedure is repeated for all four edges. As a result, four points 102a, 102b, 102c and 102d will be determined. The position of the center cross point 101 is calculated from the positions of these four points as follows: x(101) = [x(102a) + x(102c)] / 2, y(101) = [y(102b) + y(102d)] / 2, where x(101) and y(101) are the abscissa and the ordinate of the center cross point 101 respectively, and so on. In other words, the abscissa of the center cross point 101 is determined by the average of the abscissas of the two points on the horizontal sides of the window 100, and the ordinate of the center cross point 101 is determined by the average of the ordinates of the two points on the vertical sides of the window 100.
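The four-sided scan above can be sketched as follows. This is an illustrative Python sketch: the pixel model (a 2-D list where `black` marks grid-line pixels) and the function name `find_center_cross` are assumptions for illustration.

```python
def find_center_cross(window, black=0):
    """Find the centre cross point 101 inside window 100.

    Each of the four sides of the window is scanned for its first
    black point (the points 102a-102d); the cross centre is then the
    average of the hits on opposite sides, as described in the text.
    """
    h, w = len(window), len(window[0])
    top_x = next(x for x in range(w) if window[0][x] == black)        # 102a
    bot_x = next(x for x in range(w) if window[h - 1][x] == black)    # 102c
    left_y = next(y for y in range(h) if window[y][0] == black)       # 102b
    right_y = next(y for y in range(h) if window[y][w - 1] == black)  # 102d
    # x(101) from the horizontal sides, y(101) from the vertical sides
    return (top_x + bot_x) / 2, (left_y + right_y) / 2

# A single cross: vertical line in column 2, horizontal line in row 2.
window = [[1, 1, 0, 1, 1],
          [1, 1, 0, 1, 1],
          [0, 0, 0, 0, 0],
          [1, 1, 0, 1, 1],
          [1, 1, 0, 1, 1]]
print(find_center_cross(window))  # -> (2.0, 2.0)
```

The sketch relies on the window being small enough to contain only the single centre cross, which the text guarantees by keeping the window smaller than a unit grid cell.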
Next, it begins to search downwards for a horizontal line, starting from a point a predetermined number of pixels (30 pixels, for example) below and a predetermined number of pixels (30 pixels, for example) to the right of the center cross point. The starting point is determined so as not to skip either the first horizontal line below the center cross point or the first vertical line to the right of the center cross point. If it encounters a black point, it stores the y-distance (ordinate) of the point. Then it keeps going down until it encounters a white point, where it stores the y-distance of that point. The center point of the horizontal line is then determined as the average of the y-distance of the black point and the y-distance of the white point.
There may be several noise spots in the calibration image. As shown in Figure 11(a), if it encounters a noise spot while going down in search of a horizontal line, there is a possibility of mistaking the spot for a horizontal line. To eliminate this possibility, it is preferred that the search is performed along several paths. Figure 11(b) shows the case where it searches along two paths. It recognizes a horizontal line only when it encounters black points in both paths.
Next, it searches leftwards and rightwards as follows. First, it prepares a vertical line (SLINE) with length of, for example, 12 pixels. The length of SLINE is determined considering the thickness of grid lines. The center of SLINE is put on the center point of the horizontal line obtained previously. Then SLINE will look like Figure 12(a).
Now, move SLINE to the left. If the horizontal line is inclined upwards, then black points in SLINE will be biased upwards as shown in Figure 12(b). Then, move SLINE upwards so that the center of SLINE is located on the center point of the horizontal line. If the horizontal line is inclined downwards, then black points in SLINE will be biased downwards as shown in Figure 12(c). Then, move SLINE downwards so that the center of SLINE is located on the center point of the horizontal line. This way, we can follow the horizontal line.
There may be several noise spots in the calibration image. But these spots should not adversely affect the search. As shown in Figure 13, while SLINE is being shifted in search of a vertical line, it may encounter a noise spot. In the searching method of the present invention, only the black points contiguously located around the center point are counted, as shown in Figure 14. Thus, a noise spot not in contact with the horizontal line cannot affect the result of the searching algorithm.
If SLINE is entirely filled with black points, it means that a vertical line has been encountered. Thus, the center point is the position of a cross point. By repeating this procedure, the grid positions of the entire calibration image can be obtained.
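One step of the SLINE tracking described above might be sketched as follows. This is an illustrative Python sketch under an assumed pixel model; the function name `recenter_sline` is not from the disclosure. Only black points contiguous with the probe centre are counted, so a detached noise spot cannot shift the probe, as Figure 14 illustrates.

```python
def recenter_sline(img, x, y_center, half=6, black=0):
    """Re-centre the vertical probe SLINE on the horizontal line.

    At column x, only black pixels contiguous with the probe centre
    are counted (detached noise spots are ignored); the midpoint of
    that contiguous run becomes the new probe centre.
    """
    if img[y_center][x] != black:
        return None                      # probe has lost the line
    lo = hi = y_center
    while lo > y_center - half and img[lo - 1][x] == black:
        lo -= 1
    while hi < y_center + half and img[hi + 1][x] == black:
        hi += 1
    return (lo + hi) // 2

# Column 1 holds a line at rows 4-6 and a detached noise spot at row 2.
img = [[1, 1, 1] for _ in range(10)]
for y in (2, 4, 5, 6):
    img[y][1] = 0
print(recenter_sline(img, 1, 5, half=3))  # -> 5 (noise at row 2 ignored)
```

Repeating this step while moving the probe column by column follows an inclined horizontal line, as Figures 12(b) and 12(c) show; a step on which the probe is entirely black would signal a vertical line and hence a cross point.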
Grid Mapping
Using the grid positions obtained in the previous step, we can get mapping tables for image correction.
Now that we have grid positions of both the calibration grid and the calibration image, mapping tables which map each grid position of the calibration image to that of the calibration grid can be derived as shown in Figure 15.
Let A[n][m] be the grid position of the calibration image (that is, the distorted image) crossed by the nth horizontal line and the mth vertical line. Also, let B[n][m] be the grid position of the calibration grid (that is, the undistorted image) crossed by the nth horizontal line and the mth vertical line as shown in Figure 16. Then, these tables A[n][m] and B[n][m] form the mapping tables for the distortion correction.
Image Mapping
Now, a block can be formed by four neighboring grid positions such as A[0][0], A[0][1], A[1][0] and A[1][1]. In this algorithm, it is assumed that each block composed of the grid positions is rectangular. That is, each side of the block is linear as shown in Figure 17(b). The block shown in Figure 17(a), therefore, cannot be accepted. As the distance between neighboring grid positions is short, this assumption generally does not cause problems.
When an image 103 such as a mammogram is acquired using the imaging devices, it should be mapped to a corrected image 104 using the mapping tables as shown in Figure 18. Since several imaging devices are used to take the entire image, the mapping should be performed for each partial image taken by each imaging device.
The procedure for the image mapping will now be described referring to Figures 18 and 19. Let C[i][j] denote the position in the distorted image corresponding to a position (i,j) in the corrected image as shown in Figure 18. And let A[0][0], A[1][0], A[0][1] and A[1][1] be (a0,b0), (a1,b1), (a2,b2) and (a3,b3) respectively as shown in Figure 19. Also, let B[0][0], B[1][0], B[0][1] and B[1][1] be (0,0), (d,0), (0,d) and (d,d) respectively. Then C[0][0], C[0][d], C[d][0] and C[d][d] give (a0,b0), (a1,b1), (a2,b2) and (a3,b3) respectively.
The equation for calculating C[i][j] for an arbitrary point (i,j) inside the block formed by the points (0,0), (d,0), (0,d) and (d,d) can be derived as follows:
C[0][j] = C[0][0] + (C[0][d] - C[0][0]) × j / d,
C[d][j] = C[d][0] + (C[d][d] - C[d][0]) × j / d,
C[i][j] = C[0][j] + (C[d][j] - C[0][j]) × i / d
= C[0][0] + (C[d][0] - C[0][0]) × i / d + (C[0][d] - C[0][0]) × j / d + (C[0][0] + C[d][d] - C[d][0] - C[0][d]) × i × j / d²
From this equation, we can get a position in the distorted image corresponding to a position (i,j) in the corrected image.
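The position mapping can be sketched as follows. This is an illustrative Python sketch: the corner argument names and the function name `map_position` are assumptions; the interpolation along j on the two i-edges, then along i, follows the derivation in the text.

```python
def map_position(c00, cd0, c0d, cdd, i, j, d):
    """Map a point (i, j) of the corrected image into the distorted one.

    c00, cd0, c0d, cdd: distorted corner positions C[0][0], C[d][0],
    C[0][d] and C[d][d] as 2-tuples (naming assumed for illustration).
    Interpolates along j on both i-edges, then along i, which expands
    to the bilinear equation derived in the text.
    """
    def axis(k):
        top = c00[k] + (c0d[k] - c00[k]) * j / d   # C[0][j]
        bot = cd0[k] + (cdd[k] - cd0[k]) * j / d   # C[d][j]
        return top + (bot - top) * i / d           # C[i][j]
    return axis(0), axis(1)

# With undistorted corners the mapping reduces to the identity.
print(map_position((0, 0), (10, 0), (0, 10), (10, 10), 2, 7, 10))  # -> (2.0, 7.0)
```

With measured (distorted) corner positions substituted for the identity corners, the same call returns the distorted-image position to sample for each corrected pixel (i,j).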
Bi-linear Interpolation
Now that we know the position in the distorted image corresponding to the position (i,j) in the corrected image, the remaining procedure is to take the brightness at C[i][j] in the distorted image and put it at (i,j) in the corrected image.
However, as the position C[i][j] does not always fall on integer coordinates, as can be seen from the above equation, a bi-linear interpolation is used in the present invention to estimate the brightness at C[i][j].
A detailed description of the bi-linear interpolation will be given referring to Figure 20. Assume that C[i][j] falls in a unit block surrounded by the pixel points (0,0), (1,0), (0,1) and (1,1) as shown in Figure 20. And assume that C[i][j] gives (x,y), where x and y are smaller than one. As the distance between pixels is one, x and y denote respectively the ratios of the abscissa and ordinate of C[i][j] to the side length of the block. Let f(x,y) be the brightness at (x,y). If we assume that the brightness changes linearly inside the block, then
f(x,0) = f(0,0) + [f(1,0) - f(0,0)] × x,
f(x,1) = f(0,1) + [f(1,1) - f(0,1)] × x,
f(x,y) = f(x,0) + [f(x,1) - f(x,0)] × y.
From the above equations,
f(x,y) = [f(1,0) - f(0,0)] × x + [f(0,1) - f(0,0)] × y + [f(1,1) + f(0,0) - f(0,1) - f(1,0)] × xy + f(0,0)
The above equation yields the estimated brightness at (x,y). Thus, the estimated brightness f(x,y) will be used as the brightness at (i,j) in the corrected image.
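The interpolation equations above can be expressed compactly as follows. This is an illustrative Python sketch; the function name `bilinear` and the argument order are assumptions, while the three interpolation steps mirror the derivation in the text.

```python
def bilinear(f00, f10, f01, f11, x, y):
    """Estimate the brightness at a fractional position (x, y).

    f00, f10, f01, f11: brightness values f(0,0), f(1,0), f(0,1) and
    f(1,1) at the corners of the unit pixel block; 0 <= x, y < 1.
    """
    fx0 = f00 + (f10 - f00) * x   # f(x,0): along the lower edge
    fx1 = f01 + (f11 - f01) * x   # f(x,1): along the upper edge
    return fx0 + (fx1 - fx0) * y  # f(x,y): between the two edges

# Midway between corners of brightness 0 and 1 on each x-edge.
print(bilinear(0, 1, 0, 1, 0.5, 0.5))  # -> 0.5
```

Applying this to the four pixels surrounding each computed position C[i][j] fills in the brightness of the corrected image pixel (i,j).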
Figures 21 through 23 show actual images acquired and processed using the X-ray imaging apparatus of the present invention. Figure 21 is an image of the calibration grid. Figure 22 shows the images acquired by four imaging devices. It is clear that each image is distorted at the edges. Figure 23 shows the final image acquired by the apparatus of the present invention, where the distortions of the imaging devices are corrected and the four corrected images are then combined into the final image.