WO2002103065A2 - Method for segmentation of digital images - Google Patents

Method for segmentation of digital images

Info

Publication number
WO2002103065A2
WO2002103065A2 (PCT/IB2002/002349)
Authority
WO
WIPO (PCT)
Prior art keywords
intensity
values
threshold
image
gradient
Prior art date
Application number
PCT/IB2002/002349
Other languages
French (fr)
Other versions
WO2002103065A3 (en)
Inventor
Rafael Wiemker
Vladimir Pekar
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. and Philips Intellectual Property & Standards GmbH
Priority to US10/481,810 priority Critical patent/US20040175034A1/en
Priority to JP2003505384A priority patent/JP2004520923A/en
Priority to EP02735888A priority patent/EP1412541A2/en
Publication of WO2002103065A2 publication Critical patent/WO2002103065A2/en
Publication of WO2002103065A3 publication Critical patent/WO2002103065A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • G06T2207/30064Lung nodule

Abstract

The invention relates to a computationally efficient method for the automated detection of intensity transitions in 2D or 3D image data. Contrasting boundaries in the image are indicated as global or local maxima of a gradient integral function, which is calculated by applying a Laplace operator to the intensity values of each pixel or voxel of the image data set. Only one pass through the image data set is required if the gradient integral function is computed by means of a cumulative histogram technique. The detected intensity thresholds can advantageously be employed for the specification of rendering parameters for visualization purposes. The method of the invention is also well-suited for the rendering and measurement of lung nodules, as the detection of correct intensity thresholds turns out to be crucial for the reproducible and consistent interpretation of medical image data.

Description

Method for segmentation of digital images
The invention relates to a method for processing of digital images, wherein an automated segmentation is performed by determination of intensity threshold values, which separate at least one image object from the surrounding background of a digital image, said intensity threshold values being determined by evaluation of a gradient integral function. Furthermore, the invention relates to a computer program for carrying out this method and to a video graphics appliance, particularly for a medical imaging apparatus, which operates in accordance with the present invention.
Efficient visualization techniques are becoming more and more important which is particularly due to the increasing amount of two- and three-dimensional image data being routinely acquired and processed in many scientific and technological fields. Optimal visualization of image data is of high importance for medical applications as it generally refers to the direct rendering of a diagnostic image, generated for example by computer tomography (CT) or magnetic resonance imaging (MRI), to show the characteristics of the interior of a solid object when displayed on a two dimensional display. In medical imaging either a planar or a volume image of a region of interest of a patient is reconstructed from the X-ray beam projections (CT) or the magnetic resonance signals (MRI). The resulting images consist of image intensity values at each point of a two- or three-dimensional grid. These data sets of equidistant pixels or voxels can be processed and displayed by appropriate methods for indicating the boundaries of various types of tissue corresponding to the intensity changes in the image data.
In order to display boundaries of anatomical structures, it is of particular importance to detect transitions between different tissue types in the image data. In surface rendering of volume image data sets for example, surface representations of the anatomical structures of interest are generated by binary classification of the voxels, which is achieved by the application of intensity threshold values for each tissue type transition. In volume rendering, tissue type transitions are evaluated when selecting the shape of a transfer function which assigns visualization properties, such as opacity and color, to intensity values of the rendered image. One challenging problem in rendering of image data is the automated generation of data specific visualization parameters. Current visualization procedures widely involve human interaction, e.g. for the selection of appropriate transfer functions in volume rendering. In general, the user has to specify the required parameters of the respective visualization protocol manually. The selection of the optimal parameters is performed by visually inspecting the resulting images. It is possible to interactively find optimal intensity threshold values corresponding to tissue transitions in this way, but since the result has to be assessed by visual inspection of the rendered images, this is generally a time consuming process. The manual method is particularly disadvantageous if volume rendering is performed, since the rendering process itself is computationally extremely demanding. From the foregoing, it will be readily appreciated that there is a need for automated or at least semi-automated methods for the segmentation of digital images. Such a method is particularly advantageous in the field of medical imaging, since it immediately provides optimal threshold values for surface rendering and enables the automatic generation of opacity transfer functions for volume rendering.
A demand for automated image segmentation techniques is also due to the increasing importance of computer aided diagnosis (CAD), which is for example employed for the classification of lung nodules as either benign or malignant. The automated segmentation is necessary to enable the reproducible quantitative measurement of nodule properties, such as volume, eccentricity, growth etc. In comparison to manual segmentation of medical images, an automated segmentation method has the advantage of being much faster, thereby accelerating the work flow remarkably. It also delivers much more consistent and reliable results for the measurement of geometric properties in follow-up examinations and in patient-to-patient comparisons. Since lung cancer screening using computer tomography is increasingly becoming a routine method, there is a need for powerful tools for automated segmentation and visualization of lung nodules. Such tools should enable the radiologist to perform the segmentation and visualization tasks more or less in real time, and they should be implementable on a clinical image processing workstation.
A method for automated segmentation of digital images has for example been proposed by Zhao et al. ("Two-dimensional multi-criterion segmentation of pulmonary nodules on helical CT images", Zhao et al., Medical Physics, 26 (6), pp. 889-895, 1999). According to this known method, a series of intensity threshold values is first applied to the digital image. A binary image is generated for each of these thresholds by identifying all pixels with intensities being larger than the respective threshold intensity. Thereafter, the largest connected object is selected from the binary image, and the remaining image components are eliminated. In the next step, the boundaries of the object are traced, thereby calculating the mean intensity gradient strength at the object boundaries and the roundness of the object. These values depend on the respective intensity threshold. The computation is repeated for the series of threshold values, and finally the threshold, which corresponds to a large mean intensity gradient value and to an optimal roundness of the identified object, is selected.
The main drawback of this known method is that it takes a very long computation time. According to the above cited article, the proposed scheme takes several minutes to perform a standard segmentation task on a medical image processing workstation. A further drawback is that the known method is only applicable if a single largest object can be found in the image data set. This is the typical situation if, for example, the segmentation is performed for the classification of a nodule during computer aided diagnosis of lung cancer. In these cases, a limited region of interest can be pre-defined by the user making sure that the examined lung nodule is the largest object of the image.
One particular object of the present invention is to improve the above described known method by making it computationally more efficient.
Furthermore, the general object of the present invention is to provide a method for the segmentation of digital images which is applicable for the automated detection of characteristic intensity transitions in the image data.
The present invention provides a method for the processing of digital images of the type specified above, wherein the aforementioned problems and drawbacks are avoided by computing said gradient integral as a function of threshold intensity by the steps of: calculating a Laplacian for each point of said digital image, and adding up said Laplacians for all points with intensities being larger than said threshold intensity. The method of the invention enables the automated detection of intensity transitions representing, for example, the boundaries of anatomical structures in tomographic images. As in the above described known method, the task of detecting intensity transitions in the image data set is performed by the computation of an objective function. This is the gradient integral which is evaluated for the determination of optimal intensity threshold values. The gradient integral is computed very efficiently in accordance with the method of the present invention by making use of the divergence theorem. A standard segmentation task can be performed in less than a second, because only a single computation pass through the image data set is required.
In the image data set, the intensity value at position x is I(x). Each intensity threshold T_t generates a binary image consisting of pixels with intensity values being either larger or smaller than T_t. Every binary image has a set of boundaries Γ_t by which it is divided into regions with I(x) ≥ T_t and regions with I(x) < T_t.
The basic problem is to find a set Γ_t consisting of pixels or voxels with large intensity gradients. In three dimensions, the gradient operator ∇ is ∇ = (∂/∂x, ∂/∂y, ∂/∂z)^T. Large intensity gradients indicate image structures with highly contrasted boundaries. Hence, the objective function for assessing the correctness of a segmentation can be defined as the integral of the gradient magnitude g = |∇I| over the set of boundaries:
F(T_t) = ∫_{Γ_t} |∇I(x)| dγ
This integral can be computed for each threshold T_t by finding the partitioning boundaries and computing the gradient vectors at the corresponding points. A threshold T_t can be considered as optimal if the gradient integral F(T_t) takes a maximum value.
According to the present invention, the computation of the integral gradient function is performed by the approach which is described as follows: The divergence theorem states that an integral of a vector field g over the boundary surface Γ can be replaced by the volume integral of the divergence ∇·g over the volume Ω enclosed by this surface. It can thus easily be shown that the gradient integral function can be written as:
F(T) = ∫_Ω ∇²I dω
This is because the divergence of the gradient vector field is equal to the Laplace operator ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² applied to the intensity distribution of the image data. For the image data set consisting of discrete pixels or voxels, the correctness of the segmentation is computed by identifying all pixels or voxels with intensity values above the threshold T, and replacing the integral by adding up the respective Laplacians:
F(T) = ∑_{I(x) ≥ T} ∇²I(x)
In accordance with claim 2, the Laplacian ∇²I(x) can easily be calculated as the sum of the differences Δ = I(x) - I(x') between the intensities of the point x and its respective neighboring points x'.
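To make the neighbor-difference construction concrete, the following NumPy sketch computes this discrete Laplacian for a 2D or 3D array over the face neighborhood, with replicate padding at the image border; the function name, the padding choice and the use of NumPy are assumptions made for the example, while the sign convention Δ = I(x) - I(x') follows claim 2.

```python
import numpy as np

def laplacian_neighbor_differences(volume):
    """Sum of differences I(x) - I(x') over the face neighbours x' of every point x
    (sign convention of claim 2; border points are replicate-padded, so the image
    border itself contributes no difference)."""
    vol = np.asarray(volume, dtype=np.float64)
    padded = np.pad(vol, 1, mode="edge")
    lap = np.zeros_like(vol)
    for axis in range(vol.ndim):
        for offset in (-1, +1):
            sl = [slice(1, -1)] * vol.ndim
            sl[axis] = slice(1 + offset, padded.shape[axis] - 1 + offset)
            lap += vol - padded[tuple(sl)]   # Δ = I(x) - I(x') for this neighbour
    return lap
```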
With the method of the present invention it is advantageous if the adding up of said Laplacians is performed by computing a histogram of said Laplacians as a function of image intensity and by further adding up all histogram values corresponding to intensities being larger than said threshold intensity.
The result is the above gradient integral which is computed for a plurality of thresholds T_t at once. This scheme is particularly efficient, because only one pass through the image data set is required. At first, the histogram of Laplacians is computed. For this purpose, the Laplacians ∇²I(x) are calculated at each point x of the image. The histogram is then incremented at bin I(x) by the value of the respective Laplacian. After the Laplacian values of all pixels or voxels have been inserted into the histogram, the histogram values are accumulated such that cumulative histogram values are set as the sum of all histogram values with I ≥ T. This directly corresponds to the computation of the sum F(T) = ∑_{I(x) ≥ T} ∇²I(x) for the given threshold value T. Thus each cumulative histogram value gives a discrete approximation of the gradient integral over all pixels or voxels with I ≥ T.
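As an illustration of this single-pass scheme, the sketch below bins each voxel's Laplacian by its intensity and accumulates the bins from the highest intensity downwards, so that every cumulative bin approximates F(T) for the corresponding threshold. The bin count, the function names and the placeholder variable ct_volume are assumptions for the example, not prescribed by the patent.

```python
import numpy as np

def gradient_integral_function(volume, laplacian, n_bins=256):
    """Histogram of Laplacians as a function of image intensity, accumulated so that
    bin k holds the sum of Laplacians of all voxels with I >= T_k (one data pass)."""
    intensities = np.asarray(volume, dtype=np.float64).ravel()
    weights = np.asarray(laplacian, dtype=np.float64).ravel()
    edges = np.linspace(intensities.min(), intensities.max(), n_bins + 1)
    hist, _ = np.histogram(intensities, bins=edges, weights=weights)
    F = np.cumsum(hist[::-1])[::-1]      # accumulate bins with intensity >= T
    thresholds = edges[:-1]              # threshold associated with each cumulative bin
    return thresholds, F

# usage (ct_volume stands for any intensity array loaded elsewhere):
#   lap = laplacian_neighbor_differences(ct_volume)
#   T, F = gradient_integral_function(ct_volume, lap)
#   best_T = T[np.argmax(F)]             # threshold at the global maximum of F(T)
```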
With the method of the present invention some additional features of the segmented image can be computed, which are particularly useful for rendering of lung nodules and for quantitative measurement of their geometric properties. In this context, it is useful to further determine the intensity threshold values by evaluation of a "roundness function", which is computed in accordance with the method of claim 5. The volume of the image objects can obviously be determined by simply counting the number of pixels or voxels with I ≥ T. The difference between the numbers of positive and negative signs of the Laplacians ∇²I(x) taken for all positions x with I(x) ≥ T gives the number of boundary faces between the image objects and the surrounding background. The number of boundary faces is proportional to the total surface of the image objects. The "roundness" can be estimated by determining the ratio of the total volume and the total surface of the image objects. This volume-to-surface ratio takes a maximum if the image objects are mostly spherical.
Furthermore, a mean gradient function can be computed as the ratio of the gradient integral function and the respective number of surface points. For the automated segmentation of lung nodules, for example, the optimal threshold intensity value can be selected such that the mean gradient and the roundness are high at the same time.
For the computation of volume, surface, mean gradient and other functions of threshold intensity, it is advantageous to employ the above described technique of cumulative histograms as well. The histograms are set up as functions of image intensity, which always requires only a single pass through the image data set. The results can then be computed by accumulating the values of the corresponding bins of the histograms, which takes only a minimum amount of computation time.
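The same cumulative-histogram trick carries over to these additional features. The sketch below derives volume, surface, roundness and mean gradient curves from histograms built in a single pass; the bin layout and the absolute value applied to the sign difference are choices made for the example, not mandated by the text.

```python
import numpy as np

def threshold_features(volume, laplacian, n_bins=256):
    """Volume V(T), boundary-face count S(T), gradient integral F(T),
    roundness V(T)/S(T) and mean gradient F(T)/S(T) for all thresholds at once."""
    I = np.asarray(volume, dtype=np.float64).ravel()
    L = np.asarray(laplacian, dtype=np.float64).ravel()
    edges = np.linspace(I.min(), I.max(), n_bins + 1)

    def cumulate(weights=None):
        hist, _ = np.histogram(I, bins=edges, weights=weights)
        return np.cumsum(hist[::-1])[::-1]           # sum over bins with intensity >= T

    eps = 1e-12
    vol_T = cumulate()                               # number of voxels with I >= T
    pos_T = cumulate((L > 0).astype(np.float64))     # positive Laplacian signs
    neg_T = cumulate((L < 0).astype(np.float64))     # negative Laplacian signs
    surf_T = np.abs(pos_T - neg_T)                   # ~ number of boundary faces
    grad_T = cumulate(L)                             # gradient integral F(T)
    roundness_T = vol_T / (surf_T + eps)             # volume-to-surface ratio
    mean_grad_T = grad_T / (surf_T + eps)            # mean gradient G(T)
    return edges[:-1], vol_T, surf_T, grad_T, roundness_T, mean_grad_T
```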
Other features of the segmented image, which can be computed in accordance with the present invention, are for example the surface curvature and the surface fractality. For a voxel with a boundary face in x-direction, the curvature of this surface patch can be estimated as dC = |∂²I/∂y² + ∂²I/∂z²| (for the y- and z-directions, the curvature is dC = |∂²I/∂x² + ∂²I/∂z²| and dC = |∂²I/∂x² + ∂²I/∂y²|, respectively). The curvature integral of the whole surface of the image objects can advantageously be calculated by the above cumulative histogram technique, so that a discrete approximation of the surface curvature C(T) = ∑_{I(x) > T} dC(x) at threshold T is obtained. This technique can also be employed to compute the surface fractality by calculating the total surface area of the segmented image objects at different levels of subsampling of the image data. Thereafter, the fractal dimension of the surface at threshold T is assessed by linear regression of the logarithm of the surface area as a function of subsampling length. The computation of surface curvature and surface fractality as further criteria for evaluation of the most appropriate intensity threshold for the segmentation of the digital image takes only a minimum of additional computation time.
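For the fractality estimate, a minimal sketch of the subsampling-and-regression idea is given below. Counting mask transitions as boundary faces and using stride-based subsampling are simplifying assumptions made for the example; the patent itself only specifies the regression of the logarithm of the surface area against the subsampling length.

```python
import numpy as np

def surface_log_slope(volume, threshold, scales=(1, 2, 4, 8)):
    """Regress log(total surface area) of the thresholded object against
    log(subsampling length); the slope characterizes the surface fractality."""
    log_len, log_area = [], []
    for s in scales:
        mask = np.asarray(volume)[::s, ::s, ::s] >= threshold    # subsampled binary object
        faces = sum(np.count_nonzero(np.diff(mask.astype(np.int8), axis=a))
                    for a in range(mask.ndim))                   # boundary faces of the mask
        log_len.append(np.log(s))
        log_area.append(np.log(faces * s * s + 1e-12))           # each face covers s*s original units
    slope, _ = np.polyfit(log_len, log_area, 1)
    return slope
```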
The method of the present invention can advantageously be applied for rendering of volume image data sets. In accordance with claims 7-10, a transfer function is employed which assigns visualization properties to image intensity values. For the visualization of the volume image, this transfer function is automatically generated such that it assigns different visualization properties to those voxels of said volume image data set which are separated by the intensity threshold values being prescribed by the method of the present invention. The transfer function can for example be generated such that it assigns a high opacity to those voxels that have intensities being larger than the respective threshold intensity, while the remaining parts of the image appear transparent. In this way, a change in image opacity can automatically be correlated with the intensity transitions of the rendered volume image data set. A computer program adapted for carrying out the method of the present invention performs the processing of a volume image data set pursuant to claims 11-14. Such an algorithm can advantageously be implemented on any common computer hardware which is capable of standard computer graphics tasks. Especially image reconstruction and displaying units of medical imaging devices can easily be provided with a programming for carrying out the method of the present invention. The computer program can be provided for these devices on suitable data carriers as CD-ROM or diskette. Alternatively, it can also be downloaded by a user from an internet server.
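As one way to turn a detected threshold into such a transfer function, the sketch below builds a step-shaped opacity function. The function names and the hard step (rather than a smooth ramp) are illustrative assumptions; the -460 HU value is the body/air transition quoted for the abdomen example further below.

```python
import numpy as np

def step_opacity(threshold, opaque=1.0, transparent=0.0):
    """Opacity transfer function with a single step at the detected threshold:
    voxels with I >= threshold are rendered opaque, all others transparent."""
    def opacity(intensity):
        return np.where(np.asarray(intensity) >= threshold, opaque, transparent)
    return opacity

# e.g. make the whole body opaque against the surrounding air (threshold taken at the
# global maximum of the gradient integral, about -460 HU for the abdomen data set)
body_opacity = step_opacity(-460.0)
```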
It is also possible to incorporate the computer program of the present invention in dedicated graphics hardware components, as for example video cards for personal computers. This makes sense notably since a single CPU of a typical personal computer is usually not capable of carrying out volume rendering with interactive frame rates. The method of the present invention can for example be implemented into a volume rendering accelerator of a PCI video card for a conventional PC. Today's PCI hardware has the capacity and speed which is required for delivering interactive frame rates by use of the above described algorithm.
The following drawings disclose preferred embodiments of the present invention. It should be understood, however, that the drawings are designed for the purpose of illustration only and not as a definition of the limits of the invention. In the drawings
Fig.1 shows the application of the method of the present invention for detecting intensity transitions in a synthetic image data set;
Fig.2 shows the prescription of an opacity transfer function for volume rendering of an abdomen CT data set; Fig.3 shows the automated segmentation of a CT data set of a lung nodule by the method of the present invention.
Fig.1 shows an example of the application of the method of the present invention for detecting intensity transitions between different material types in an image data set. It will become apparent in the following that the method of the invention can advantageously be incorporated into the rendering software of an image processing workstation such that intensity thresholds can be selected either manually by a user or automatically by evaluation of at least one of the above quality functions. For volume rendering, for example, the user adjusts the shape of the opacity transfer function in accordance with the curve of the respective goodness function. In this way, the method of the invention assists the user with the interactive specification of rendering parameters.
Fig.1 shows an image 1 of a slice through a model data set. This artificial data set consists of a concentric arrangement of two different materials. As a model of a real CT data set, the image 1 shows a dark region 2 corresponding to soft tissue and a light region 3 corresponding to bone. Fig.1 further shows a diagrammatic representation 4 of the gradient integral function which is computed for the image 1 by the method of the present invention. The diagram 4 shows two clear maxima of the function F(T). These two maxima correspond to the transitions from background to soft tissue (left maximum) and from soft tissue to bone (right maximum). These two detected intensity transitions can be used for the manual or automated assignment of visualization properties to data voxels. For the volume rendering of the image data set, the diagram 4 further shows a curve 5 representing the opacity transfer function, which has a two-step shape, such that the bone tissue is made completely opaque while the soft tissue appears transparent.
Fig.2 shows the application of the method of the invention for the detection of intensity transitions in an abdomen CT data set. In Fig.2, three volume rendered images 6, 7, 8 are shown on the left. The respective opacity transfer functions 9, 10, 11, which are used for the rendering of the data set, are displayed next to the respective images on the right. In the diagrams, the opacity transfer functions are overlaid on top of the gradient integral F(T) of the CT data set. The gradient integral function F(T) shows well-pronounced peaks at the transitions air to skin, skin to muscle and soft tissue to bone. In the upper image, the opacity transfer function has a step at -460 HU (Hounsfield units), such that the complete body appears opaque while the surrounding air is made fully transparent. It can be seen in Fig.2 that the gradient integral takes its global maximum at this intensity value, thereby indicating the most dominant contrast of the data set. A local maximum of the gradient integral is found at -40 HU. This threshold is selected to visualize the skin to muscle transition in image 7 of Fig.2. The local maximum at +200 HU is used to separate the anatomical structures of the bones from the remaining soft tissue in the lower image 8.
Fig.3 shows the application of the method of the invention for the segmentation of a CT image of a lung nodule. When the radiologist finds a suspicious object on a CT image of the lung, he selects a volume of interest (VOI) closely around this object. The next step is the automated segmentation of the VOI in order to classify each voxel as either belonging to the background (the lung parenchyma) or to the foreground (the nodule). Again, the decisive parameter is the correct intensity threshold T , which is efficiently computed by the method of the present invention. Once the separating threshold is known, it can be utilized for rendering or for the measurement of nodule properties. Fig.3 shows an image 12 of a single nodule in a cube-shaped VOI. The dimensions of the cube are 30x30x30 mm3 (125000 voxels). The threshold for the rendering is chosen such that the mean gradient integral G(T) and the sphericity R(T) are high at the same time, which is obviously the case at a Hounsfield level of -200 HU. As described above, the mean gradient is the ratio of the gradient integral and the total surface of the object volume with I ≥ T . The gradient integral and the surface area are computed by the method of the present invention. R(T) is computed as the ratio of the volume of the image object and a further spherical volume. The latter volume is estimated as the volume of a sphere, wherein the radius of the sphere is taken as the square root of the surface area of the segmented image object. This sphericity function takes a maximum if the shape of the image object is mostly spherical.
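A possible way to combine the two criteria in code is sketched below, reusing the curves from the threshold_features sketch above. Computing the sphericity from a sphere whose radius is the square root of the surface area follows the text, while normalising the two curves and maximising their sum is merely one assumed way of making "both high at the same time" operational.

```python
import numpy as np

def select_nodule_threshold(thresholds, vol_T, surf_T, grad_T):
    """Pick the threshold at which the mean gradient G(T) and the sphericity R(T)
    are simultaneously high (here: maximum of the sum of the normalised curves)."""
    eps = 1e-12
    G = grad_T / (surf_T + eps)                            # mean gradient G(T)
    radius = np.sqrt(surf_T)                               # sphere radius = sqrt(surface area)
    R = vol_T / ((4.0 / 3.0) * np.pi * radius**3 + eps)    # sphericity R(T)
    score = G / (np.max(G) + eps) + R / (np.max(R) + eps)
    return thresholds[int(np.argmax(score))]
```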

Claims

CLAIMS:
1. Method for processing of digital images, wherein an automated segmentation is performed by determination of intensity threshold values, which separate at least one image object from the surrounding background of a digital image, said intensity threshold values being determined by evaluation of a gradient integral function, characterized in that said gradient integral is computed as a function of threshold intensity by the steps of: calculating a Laplacian for each point of said digital image, and adding up said Laplacians for all points with intensities being larger than said threshold intensity.
2. Method of claim 1 , characterized in that said Laplacian is calculated for each point by computing the sum of differences between the intensities of this point and its respective neighboring points.
3. Method of claim 1 , characterized in that said adding up of said Laplacians is performed by computing a histogram of said Laplacians as a function of image intensity and by further adding up all histogram values corresponding to intensities being larger than said threshold intensity.
4. Method of claim 1 , characterized in that the number of surface points of said image objects is determined by computing the difference between the numbers of positive and negative signs of said Laplacians for all points of said digital image with intensities being larger than said threshold intensity.
5. Method of claim 4, characterized in that said intensity threshold values are further determined by evaluation of a roundness function, wherein said roundness is computed as a function of threshold intensity by the steps of: calculating the volume of said image objects by determining the number of points of said digital image with intensities being larger than said threshold intensity, and computing the ratio of said volume and said number of surface points.
6. Method of claim 4, characterized in that the surface fractality of said image objects is determined by computing the number of surface points at different levels of spatial subsampling of the image data.
7. Method for rendering of a volume image data set on a two-dimensional display, wherein a transfer function is employed which assigns visualization properties to image intensity values, characterized in that said transfer function is automatically generated such that it assigns different visualization properties to those voxels of said volume image data set which are separated by intensity threshold values being computed in accordance with the method of claim 1.
8. Method of claim 7, characterized in that said intensity threshold values are selected such that said gradient integral function takes a maximum at these values.
9. Method for rendering of a pre-defined region of interest of a volume image data set on a two-dimensional display, wherein a transfer function is employed which assigns visualization properties to image intensity values, characterized in that said transfer function is automatically generated such that it assigns different visualization properties to those voxels of said volume image data set which are separated by an intensity threshold value being computed in accordance with the method of claim 5.
10. Method of claim 9, characterized in that said intensity threshold values are selected such that a mean gradient function, which is computed as the ratio of said gradient integral function and said number of surface points, and said roundness function are maximized simultaneously.
11. Computer program for carrying out the method of claim 1 , characterized in that the processing of a volume image data set comprises the steps of: calculating a Laplacian for each voxel, computing gradient integrals for a plurality of threshold intensity values such that each gradient integral is set as the sum of Laplacians of all voxels with intensities being larger than the respective threshold intensity, and selecting at least one of said plurality of threshold intensity values such that the corresponding gradient integral takes a maximum at this value.
12. Computer program for carrying out the method of claim 5, characterized in that the processing of a volume image data set comprises the steps of: calculating a Laplacian for each voxel, computing object volumes for said plurality of threshold intensity values such that each object volume is set as the number of voxels with intensities being larger than the respective threshold intensity, computing object surface values for said plurality of threshold intensity values such that each object surface value is set as the difference between the numbers of positive and negative signs of said Laplacians for all voxels with intensities being larger than the respective threshold intensity, and calculating a mean roundness by computing the ratios of said object volumes and said object surface values for each of said plurality of threshold intensity values.
13. Computer program of claim 12, characterized in that it further comprises the steps of: computing gradient integrals for a plurality of threshold intensity values such that each gradient integral is set as the sum of Laplacians of all voxels with intensities being larger than the respective threshold intensity, computing mean gradients by calculating the ratios of said gradient integrals and said object surface values for each of said plurality of threshold intensity values, and selecting at least one of said plurality of threshold intensity values such that the corresponding mean gradient and the corresponding roundness value take a maximum at this value.
14. Video graphics appliance, particularly for a medical imaging apparatus, with a program controlled processing element, characterized in that the graphics appliance has a programming which operates in accordance with the method of claim 1.
PCT/IB2002/002349 2001-06-20 2002-06-18 Method for segmentation of digital images WO2002103065A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/481,810 US20040175034A1 (en) 2001-06-20 2002-06-18 Method for segmentation of digital images
JP2003505384A JP2004520923A (en) 2001-06-20 2002-06-18 How to segment digital images
EP02735888A EP1412541A2 (en) 2001-06-20 2002-06-18 Method for segmentation of digital images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP01202391 2001-06-20
EP01202391.7 2001-06-20

Publications (2)

Publication Number Publication Date
WO2002103065A2 true WO2002103065A2 (en) 2002-12-27
WO2002103065A3 WO2002103065A3 (en) 2003-10-23

Family

ID=8180513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2002/002349 WO2002103065A2 (en) 2001-06-20 2002-06-18 Method for segmentation of digital images

Country Status (4)

Country Link
US (1) US20040175034A1 (en)
EP (1) EP1412541A2 (en)
JP (1) JP2004520923A (en)
WO (1) WO2002103065A2 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4062232B2 (en) * 2003-10-20 2008-03-19 株式会社日立製作所 X-ray CT apparatus and imaging method using X-ray CT apparatus
WO2005076186A1 (en) * 2004-02-09 2005-08-18 Institut De Cardiologie De Montreal Computation of a geometric parameter of a cardiac chamber from a cardiac tomography data set
US7751602B2 (en) 2004-11-18 2010-07-06 Mcgill University Systems and methods of classification utilizing intensity and spatial data
JP4325552B2 (en) * 2004-12-24 2009-09-02 セイコーエプソン株式会社 Image processing apparatus, image processing method, and image processing program
US7623250B2 (en) * 2005-02-04 2009-11-24 Stryker Leibinger Gmbh & Co. Kg. Enhanced shape characterization device and method
US20070019778A1 (en) * 2005-07-22 2007-01-25 Clouse Melvin E Voxel histogram analysis for measurement of plaque
US7873194B2 (en) 2006-10-25 2011-01-18 Rcadia Medical Imaging Ltd. Method and system for automatic analysis of blood vessel structures and pathologies in support of a triple rule-out procedure
US7983459B2 (en) 2006-10-25 2011-07-19 Rcadia Medical Imaging Ltd. Creating a blood vessel tree from imaging data
US7940977B2 (en) 2006-10-25 2011-05-10 Rcadia Medical Imaging Ltd. Method and system for automatic analysis of blood vessel structures to identify calcium or soft plaque pathologies
US7940970B2 (en) 2006-10-25 2011-05-10 Rcadia Medical Imaging, Ltd Method and system for automatic quality control used in computerized analysis of CT angiography
US7860283B2 (en) 2006-10-25 2010-12-28 Rcadia Medical Imaging Ltd. Method and system for the presentation of blood vessel structures and identified pathologies
US8126267B2 (en) * 2007-02-05 2012-02-28 Albany Medical College Methods and apparatuses for analyzing digital images to automatically select regions of interest thereof
US8803910B2 (en) * 2008-08-28 2014-08-12 Tomotherapy Incorporated System and method of contouring a target area
EP2598034B1 (en) 2010-07-26 2018-07-18 Kjaya, LLC Adaptive visualization for direct physician use
CN102496161B (en) * 2011-12-13 2013-10-16 浙江欧威科技有限公司 Method for extracting contour of image of printed circuit board (PCB)
US8948449B2 (en) * 2012-02-06 2015-02-03 GM Global Technology Operations LLC Selecting visible regions in nighttime images for performing clear path detection
EP2962309B1 (en) 2013-02-26 2022-02-16 Accuray, Inc. Electromagnetically actuated multi-leaf collimator
JP6253962B2 (en) 2013-11-28 2017-12-27 富士通株式会社 Information processing apparatus and method
JP2019536531A (en) 2016-11-08 2019-12-19 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Apparatus for detecting opacity in X-ray images
US10593099B2 (en) * 2017-11-14 2020-03-17 Siemens Healthcare Gmbh Transfer function determination in medical imaging
US10789725B2 (en) * 2018-04-22 2020-09-29 Cnoga Medical Ltd. BMI, body and other object measurements from camera view display
CN110060268A (en) * 2019-04-19 2019-07-26 哈尔滨理工大学 A kind of 3 d medical images edge extracting method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU705713B2 (en) * 1995-03-03 1999-05-27 Arch Development Corporation Method and system for the detection of lesions in medical images
FR2733336A1 (en) * 1995-04-20 1996-10-25 Philips Electronique Lab METHOD AND DEVICE FOR PROCESSING IMAGES FOR AUTOMATIC DETECTION OF OBJECTS IN DIGITAL IMAGES
DE19636949A1 (en) * 1996-09-11 1998-03-12 Siemens Ag Method for the detection of edges in an image signal

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5452367A (en) * 1993-11-29 1995-09-19 Arch Development Corporation Automated method and system for the segmentation of medical images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BLAFFERT T ET AL: "The Laplace integral for a watershed segmentation" PROCEEDINGS OF 7TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, vol. 3, 10 September 2000 (2000-09-10), pages 444-447, XP010529499 Vancouver, BC, Canada *
PEKAR V ET AL: "Fast detection of meaningful isosurfaces for volume data visualization" VISUALIZATION 2001;SAN DIEGO, CA, UNITED STATES OCT 21-26 2001, 2001, pages 223-227, XP002246128 Proc IEEE Visual Conf;Proceedings of the IEEE Visualization Conference 2001 *
ZHAO B ET AL: "TWO-DIMENSIONAL MULTI-CRITERION SEGMENTATION OF PULMONARY NODULES ON HELICAL CT IMAGES" MEDICAL PHYSICS, AMERICAN INSTITUTE OF PHYSICS. NEW YORK, US, vol. 26, no. 6, June 1999 (1999-06), pages 889-895, XP000898642 ISSN: 0094-2405 cited in the application *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8270687B2 (en) * 2003-04-08 2012-09-18 Hitachi Medical Corporation Apparatus and method of supporting diagnostic imaging for medical use
WO2004102458A2 (en) * 2003-05-08 2004-11-25 Siemens Corporate Research, Inc. Method and apparatus for automatic setting of rendering parameter for virtual endoscopy
WO2004102458A3 (en) * 2003-05-08 2005-07-07 Siemens Corp Res Inc Method and apparatus for automatic setting of rendering parameter for virtual endoscopy
US7417636B2 (en) 2003-05-08 2008-08-26 Siemens Medical Solutions Usa, Inc. Method and apparatus for automatic setting of rendering parameter for virtual endoscopy
US7515743B2 (en) * 2004-01-08 2009-04-07 Siemens Medical Solutions Usa, Inc. System and method for filtering a medical image
GB2414295A (en) * 2004-05-20 2005-11-23 Medicsight Plc Nodule detection
US7460701B2 (en) 2004-05-20 2008-12-02 Medicsight, Plc Nodule detection
GB2414295B (en) * 2004-05-20 2009-05-20 Medicsight Plc Nodule detection
US7697742B2 (en) 2004-06-23 2010-04-13 Medicsight Plc Lesion boundary detection
WO2007078258A1 (en) * 2006-01-06 2007-07-12 Agency For Science, Technology And Research Obtaining a threshold for partitioning a dataset based on class variance and contrast
RU2601212C2 (en) * 2011-10-11 2016-10-27 Конинклейке Филипс Н.В. Process of interactive segmentation fraction of lung lobes, taking ambiguity into account
KR101822105B1 (en) * 2015-11-05 2018-01-26 오스템임플란트 주식회사 Medical image processing method for diagnosising temporomandibular joint, apparatus, and recording medium thereof

Also Published As

Publication number Publication date
EP1412541A2 (en) 2004-04-28
JP2004520923A (en) 2004-07-15
WO2002103065A3 (en) 2003-10-23
US20040175034A1 (en) 2004-09-09

Similar Documents

Publication Publication Date Title
US20040175034A1 (en) Method for segmentation of digital images
EP1315125B1 (en) Image processing method and system for disease detection
Saad et al. Image segmentation for lung region in chest X-ray images using edge detection and morphology
Zoroofi et al. Automated segmentation of acetabulum and femoral head from 3-D CT images
Mesanovic et al. Automatic CT image segmentation of the lungs with region growing algorithm
EP2916737B1 (en) System and method for automated detection of lung nodules in medical images
US7512284B2 (en) Volumetric image enhancement system and method
US20050147297A1 (en) Unsupervised data segmentation
US8059900B2 (en) Method and apparatus to facilitate visualization and detection of anatomical shapes using post-processing of 3D shape filtering
CN104871207B (en) Image processing method and system
US7684602B2 (en) Method and system for local visualization for tubular structures
US8705821B2 (en) Method and apparatus for multimodal visualization of volume data sets
EP1755457A1 (en) Nodule detection
CN110675464A (en) Medical image processing method and device, server and storage medium
Bendtsen et al. X-ray computed tomography: semiautomated volumetric analysis of late-stage lung tumors as a basis for response assessments
Lo et al. Classification of lung nodules in diagnostic CT: an approach based on 3D vascular features, nodule density distribution, and shape features
Kawata et al. An approach for detecting blood vessel diseases from cone-beam CT image
Kayode et al. An explorative survey of image enhancement techniques used in mammography
US20100316267A1 (en) Segmentation
Ali et al. Automatic technique to produce 3D image for brain tumor of MRI images
Malu et al. An automated algorithm for lesion identification in dynamic contrast enhanced MRI
Chen Histogram partition and interval thresholding for volumetric breast tissue segmentation
Kubicek et al. Autonomous segmentation and modeling of brain pathological findings based on iterative segmentation from MR images
Oliveira et al. A region growing approach for pulmonary vessel tree segmentation using adaptive threshold
Mueller et al. Improved direct volume visualization of the coronary arteries using fused segmented regions

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WWE Wipo information: entry into national phase

Ref document number: 2003505384

Country of ref document: JP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2002735888

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 10481810

Country of ref document: US

WWP Wipo information: published in national office

Ref document number: 2002735888

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2002735888

Country of ref document: EP