WO2015051622A1 - Ultrasound fusion imaging method and ultrasound fusion imaging navigation system - Google Patents


Info

Publication number
WO2015051622A1
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasound
image
breathing
frame
images
Prior art date
Application number
PCT/CN2014/074451
Other languages
French (fr)
Chinese (zh)
Inventor
康锦刚
王广志
朱磊
张倩
丁辉
杨明雷
丛龙飞
Original Assignee
深圳迈瑞生物医疗电子股份有限公司
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳迈瑞生物医疗电子股份有限公司 and 清华大学
Priority to EP14852089.3A priority Critical patent/EP3056151B1/en
Publication of WO2015051622A1 publication Critical patent/WO2015051622A1/en
Priority to US15/094,821 priority patent/US10751030B2/en


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 — Measuring for diagnostic purposes; Identification of persons
    • A61B 5/08 — Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B 8/00 — Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 — Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0866 — involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A61B 8/42 — Details of probe positioning or probe attachment to the patient
    • A61B 8/4245 — involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
    • A61B 8/4254 — using sensors mounted on the probe
    • A61B 8/46 — Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/461 — Displaying means of special interest
    • A61B 8/463 — characterised by displaying multiple images or images and diagnostic data on one display
    • A61B 8/52 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 — involving processing of medical diagnostic data
    • A61B 8/5238 — for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B 8/5246 — combining images from the same or different imaging techniques, e.g. color Doppler and B-mode
    • A61B 8/5261 — combining images from different diagnostic modalities, e.g. ultrasound and X-ray
    • A61B 8/5269 — involving detection or reduction of artifacts
    • A61B 8/5276 — due to motion
    • A61B 8/5284 — involving retrospective matching to a physiological signal
    • A61B 90/00 — Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 — Image-producing devices or illumination devices not otherwise provided for
    • A61B 2090/364 — Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365 — augmented reality, i.e. correlating a live optical image with another image
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining
    • G16H 50/50 — for simulation or modelling of medical disorders

Definitions

  • The present application relates to ultrasound imaging techniques, and more particularly to a method for fusing ultrasound images with pre-acquired modal images, and to an ultrasound fusion imaging navigation system.
  • Imaging of a target subject in the clinic may use more than one imaging system, allowing medical personnel to obtain medical images in multiple modalities, such as computed tomography (CT) images, magnetic resonance (MR) images, and ultrasound images.
  • The principle of ultrasound fusion imaging navigation is to establish, through a spatial positioning device (usually a magnetic positioning sensor attached to the probe), the spatial correspondence between real-time ultrasound images and other modal data acquired in advance (such as CT or MR images), and to superimpose the ultrasound image on the corresponding slice of the modal data so that the two images are fused. The two images then jointly guide the diagnosis and treatment process, combining the high resolution of CT or MR with the real-time character of ultrasound and providing clinicians with more diagnostic information to improve treatment outcomes.
  • An important step is to register the ultrasound image with the modal data. Registration essentially places the positions of points (or planes) of the ultrasound image in the world coordinate system into one-to-one correspondence with the positions of points (or planes) of the modal data in the world coordinate system; accurately obtaining the position of a target point in the world coordinate system therefore has a great influence on registration accuracy.
  • Current registration technology is based on real-time ultrasound.
  • The medical staff obtain the ultrasound image used to provide the registration information by freezing the current frame.
  • Processing the real-time ultrasound image in this frame-by-frame manner has become customary.
  • To obtain an ultrasound image of a specific section at a certain breathing depth, the doctor often needs the patient to control his or her breathing well. This is especially true for abdominal fusion in patients who breathe abdominally, where displacement, rotation and deformation of the organ can cause large errors, so the patient may be required to hold a particular breathing state for a longer period of time. This places high demands on the doctors and patients who use ultrasound fusion imaging navigation.
  • As a result, the imaging success rate drops. Current methods for eliminating the effect of respiration mainly rely on the doctor manually determining the respiratory phase, or on adding a sensor for simple respiratory gating, but neither works well.

Summary of the invention
  • A method for fusing an ultrasound image with a pre-acquired modal image comprises a selecting step of selecting at least one frame of ultrasound image from at least one piece of pre-stored ultrasound video data according to an input instruction. The ultrasound video data includes ultrasound images obtained by acquiring the target object from at least one plane, together with position pointing information corresponding to each frame of ultrasound image; the position pointing information is generated by a position sensor fixed to the ultrasound probe during acquisition of the ultrasound images.
  • In one embodiment, a multi-frame ultrasound image is selected in the selecting step, and the method further includes: a breathing-model step of establishing a breathing model according to the ultrasound video data; and a breathing-correction step in which, before and/or during fusion of the selected multi-frame ultrasound image with the modal image, the multi-frame ultrasound image is corrected to the same breathing depth using the breathing model.
  • A method for fusing an ultrasound image with a pre-acquired modal image comprises a selecting step of selecting a multi-frame ultrasound image from at least one piece of ultrasound video data, the ultrasound video data comprising ultrasound images acquired of a target object from at least one plane together with position pointing information corresponding to each frame of ultrasound image, the position pointing information being obtained by the position sensor fixed on the ultrasound probe during acquisition of the ultrasound images;
  • An ultrasound fusion imaging navigation system includes: a probe and a position sensor fixed on the probe; an acquisition module configured to acquire the target object from at least one plane and obtain at least one piece of ultrasound video data containing information for registration, recording, for each frame of ultrasound image in each piece of ultrasound video data, the position pointing information generated by the position sensor according to the sensed movement of the ultrasound probe during acquisition; a playback module configured to play back the pre-stored ultrasound video data according to an input instruction; a selecting module configured to select at least one frame of ultrasound image from the played-back ultrasound video data according to an input selection instruction; a registration module configured to register the selected at least one frame of ultrasound image with the modal image, the registration using the position pointing information of the at least one frame of ultrasound image; and a fusion module configured to fuse the registered ultrasound image and modal image.
  • The above ultrasound fusion imaging method and ultrasound fusion imaging navigation system adopt a registration-fusion approach different from existing real-time ultrasound: a registration video of the target object is recorded before registration, and one or more frames of ultrasound image are then selected from it for registration, which makes it possible to eliminate the effect of breathing.
  • FIG. 1 is a schematic structural view of an ultrasonic fusion imaging navigation system according to an embodiment
  • FIG. 2 is a schematic flow chart of a method for fusing an ultrasonic image and a pre-acquired modal image according to an embodiment
  • FIG. 3 is a schematic diagram of spatial transformation of an embodiment
  • FIG. 4 is a schematic diagram of an embodiment in which each window displays an ultrasound image
  • FIG. 5 is a schematic flow chart of establishing a breathing model according to an embodiment
  • FIG. 6 is a schematic diagram showing changes in breathing depth with time according to an embodiment
  • FIG. 7 is a schematic diagram showing the displacement of a target organ in a three-direction of the world coordinate system with respect to a reference position as a function of respiratory depth by linear fitting according to an embodiment
  • FIG. 8 is a schematic flow chart of performing registration fusion after establishing a breathing model according to an embodiment.

Detailed description
  • A block diagram of an ultrasound fusion imaging navigation system is shown in FIG. 1 (excluding the dashed box):
  • the ultrasound probe 101 transmits ultrasound to the body examination site, and the received echo signal is processed by the ultrasound imaging module 102 to obtain an ultrasound image of the target organ.
  • The modal image data acquired in advance, such as a CT or MR image, is imported into the registration fusion module 105 before registration;
  • The position sensor 103 fixed on the probe 101 continuously provides position information as the probe moves; information on the spatial orientation of the probe's six degrees of freedom (including vertical, lateral, longitudinal, pitch, roll, and yaw) is obtained through the positioning controller 104;
  • The registration fusion module 105 uses the image information and position information to register and fuse the ultrasound image with the modal data; the display module 106 displays the fusion result.
  • The fusion method of the ultrasound image and the pre-acquired modal image provided in this embodiment is shown in FIG. 2 and includes the following steps S11 to S13:
  • The selecting step S11 selects at least one frame of ultrasound image from at least one piece of pre-stored ultrasound video data according to the input instruction.
  • The pre-stored ultrasound video data is obtained by acquiring a target object (a target organ such as the liver) in advance, yielding an ultrasound video containing registration information, i.e. a registration video. For each frame of ultrasound image, the position pointing information R_probe(t) of the position sensor fixed on the ultrasound probe is recorded at the same time; it is generated by the position sensor according to the sensed movement of the ultrasound probe during image acquisition. The content of the registration video therefore includes, in addition to the ultrasound images, the position pointing information of the position sensor.
  • The position sensor may be a position sensor based on electromagnetic induction, a position sensor based on an optical principle, or another type of position sensor based on an acoustic or similar principle.
  • In the following, a position sensor based on electromagnetic induction is taken as an example for description.
  • The input instruction may be an instruction input by an external user, or an instruction automatically triggered inside the system when registration fusion is performed.
  • The pre-stored ultrasound video may be played back and frames selected during playback; the input instruction and selection instruction received by the system are determined by the user's needs and can be implemented with commonly used related technology, as long as playback of the ultrasound video and selection of frame images are supported.
  • The pre-stored ultrasound video may be played frame by frame from beginning to end, or dragged to the frame position corresponding to the slice of interest by, for example, a progress bar or knob rotation.
  • Selection may also be made by preset conditions, for example pre-selecting certain frames of the ultrasound video, such as the first 20 frames.
  • In the registration step S12, the selected at least one frame of ultrasound image is registered with the modal image, and the system simultaneously uses the position pointing information corresponding to these ultrasound images in the registration.
  • the prerequisite for selecting a multi-frame ultrasound image for registration is that the ultrasound images of these frames have the same or similar respiratory states, or that these ultrasound images are acquired in the same or similar respiratory state.
  • If the ultrasound video is acquired while the patient holds his or her breath, the ultrasound images of all frames in the video have similar breathing states, and the multi-frame ultrasound images can be registered simultaneously and directly.
  • The registration of the ultrasound image with the modal image can be realized through the spatial transformation relationships of FIG. 3: a point in the ultrasound image is first transformed from the coordinate system of the ultrasound image to the coordinate system of the position sensor, then from the coordinate system of the position sensor to the world coordinate system, and finally from the world coordinate system to the coordinate system of the modal image.
  • the world coordinate system in this embodiment and other embodiments is a coordinate system used as a reference, and may be arbitrarily designated, for example, a magnetic field generator coordinate system. Of course, other coordinate systems may be employed as the reference coordinate system.
  • The spatial transformation relationship shown in FIG. 3 can be expressed in the form of a formula:

    X_second = P · R_probe(t) · A · X_US    (1)

  • Here X_US is a point in the ultrasound image coordinate system, A is the calibration matrix from the ultrasound image to the position sensor, R_probe(t) is the transformation from the position sensor to the world coordinate system, P is the registration matrix from the world coordinate system to the modal image, and X_second is the corresponding point in the modal image coordinate system.
  • During registration fusion the relative position of the position sensor and the probe is fixed, so the transformation matrix A is fixed and can be obtained by a calibration method, in combination with the position pointing information R_probe(t), before registration.
  • The transformation matrix R_probe(t) can be read directly from the positioning controller connected to the position sensor; it changes continuously as the probe moves, and reading it can be implemented with reference to commonly used related technology.
  • The transformation matrix P can also be called the registration matrix.
  • P can be calculated according to formula (1) by finding corresponding points or planes in the ultrasound image space and the modal image space.
  • In one example, certain points or regions of the body examination site are marked; their positions are obtained by imaging in the ultrasound image space, and the positions of the same points or regions are obtained by imaging in the modal image space, so that P can be obtained from formula (1).
  • In another example, after some points or regions of the ultrasound image space are transformed to the world coordinate system, some points or regions of the modal image are also transformed to the world coordinate system; an image matching method is then used to relate the positions of the points or regions imaged in the ultrasound image space to the positions of the corresponding points or regions in the modal image space, from which P can also be calculated.
  • When the transformation from one image space to the other is known, an inverse transform can be applied to obtain the coordinate transformation in the reverse direction.
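The chain of transformations above can be sketched in code. The following NumPy fragment is illustrative only: the function names are hypothetical, and the representation of A, R_probe and P as homogeneous 4x4 matrices is an assumption, not something the patent prescribes.

```python
import numpy as np

def to_homogeneous(points):
    """Append a 1 to each 3-D point so 4x4 transforms can be chained."""
    points = np.atleast_2d(points)
    ones = np.ones((points.shape[0], 1))
    return np.hstack([points, ones])

def ultrasound_to_modal(x_us, A, R_probe, P):
    """Map points from ultrasound-image space to modal-image space.

    A       : 4x4 calibration matrix (ultrasound image -> position sensor)
    R_probe : 4x4 sensor pose (position sensor -> world), read from the
              positioning controller and changing as the probe moves
    P       : 4x4 registration matrix (world -> modal image)
    """
    xh = to_homogeneous(x_us)          # (N, 4) homogeneous points
    M = P @ R_probe @ A                # composite transform of formula (1)
    return (M @ xh.T).T[:, :3]         # drop the homogeneous coordinate
```

The reverse mapping mentioned above would simply use `np.linalg.inv(M)` on the composite matrix.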
  • the fusion step S13 performs image fusion on the registered ultrasound image and the modal image.
  • Image fusion can be performed with existing image fusion processing methods, such as spatial-domain image fusion methods (e.g. the image pixel gray-value maximum (minimum) fusion method or the image pixel gray-value weighted fusion method) or transform-domain image fusion methods (e.g. the multi-resolution pyramid fusion method or Fourier-transform-based image fusion methods), which are not described in detail here.
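As a minimal illustration of the spatial-domain methods named above (the function names are hypothetical and the images are assumed to be registered gray-value arrays of equal shape):

```python
import numpy as np

def fuse_weighted(us_img, modal_img, alpha=0.5):
    """Pixel gray-value weighted fusion of two registered images.

    alpha is the weight of the ultrasound image; 1 - alpha weights the
    modal (CT/MR) image.
    """
    return alpha * us_img + (1.0 - alpha) * modal_img

def fuse_max(us_img, modal_img):
    """Pixel gray-value maximum fusion: keep the brighter pixel."""
    return np.maximum(us_img, modal_img)
```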
  • the registration and fusion of the ultrasound image and the modal image are achieved by the above steps, and the registration method employed here is different from the existing registration method.
  • the existing method is based on real-time ultrasound, which is performed by freezing one frame at a time.
  • the registration method of this embodiment is to record a registration video of the target object before registration, and select one or more frames of ultrasound images for registration by video playback.
  • The fusion method of an embodiment further includes a multi-frame display step: when a multi-frame ultrasound image is selected for registration in the registration step, the intersection lines and angles between these frames are displayed alongside the registered or fused frame ultrasound images. As shown in FIG. 4, the two boxes show the intersection line position 305 between different frame ultrasound images and the angle 307 between two frames. If the multi-frame registered or fused images are to be displayed, one of the frames may be selected as the reference frame; the displayed intersection line positions and angles are then those between each remaining frame and the reference frame.
  • An advantage of this embodiment is that the relative positional relationship of the frames can be more intuitively reflected, facilitating subsequent fine-tuning of the registration.
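The displayed intersection line and angle can be derived from the frames' poses. A sketch, under the assumption that each frame's pose is a 4x4 image-to-world matrix whose image plane is z = 0 in image coordinates (function names are hypothetical):

```python
import numpy as np

def frame_plane(transform):
    """Return (point, unit normal) of a frame's plane in world coordinates.

    transform: 4x4 matrix mapping image coordinates to world coordinates;
    the image plane is assumed to be z = 0 in image space, so the plane
    normal is the world direction of the image z-axis.
    """
    origin = (transform @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
    normal = transform[:3, 2]
    return origin, normal / np.linalg.norm(normal)

def plane_angle_and_line(t1, t2):
    """Angle (degrees) between two frame planes and the direction of
    their intersection line (like items 307 and 305 of FIG. 4)."""
    _, n1 = frame_plane(t1)
    _, n2 = frame_plane(t2)
    cosang = np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)
    angle = np.degrees(np.arccos(cosang))
    direction = np.cross(n1, n2)   # lies along the intersection line
    return angle, direction
```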
  • The respiratory control ability of many patients is usually poor.
  • In many cases the registration video is acquired, and registration and fusion are performed, while the patient breathes freely.
  • During free breathing the organ also undergoes relatively complicated movement, including rigid motions such as translation and rotation as well as non-rigid motions such as overall or local deformation caused by compression between organs. The organ movement caused by breathing therefore has a large effect on the fusion result, so it is necessary to reduce or eliminate the effect of breathing on registration and fusion.
  • This embodiment therefore proposes a method for registering and fusing an ultrasound image with a pre-acquired modal image: first, a breathing model describing the regular movement of the organ with breathing is established from the ultrasound video and the position information of the position sensor; the breathing model is then used to obtain a time-varying corrective spatial mapping that is applied during registration and fusion, so as to attenuate or eliminate respiratory effects.
  • With the correction, the transformation of formula (1) becomes:

    X_second = P · T · R_probe(t) · A · X_US    (2)

  • Here T is a spatial mapping used for correction; it can be a linear mapping, an affine mapping, or another form of nonlinear mapping, i.e. T can be defined as any continuous mapping from three-dimensional space to three-dimensional space.
  • the breathing movement during free breathing is relatively regular and can be approximated as a periodic movement.
  • the patient's abdominal epidermis moves mainly along the front and rear directions of the human body, which can be approximated as reciprocating motion.
  • For a target organ whose movement is mainly caused by breathing, the motion, like that of the abdominal epidermis, can also be approximated as periodic.
  • A linear model can therefore be used to describe the regular movement of the organ with breathing.
  • In the following, a linear model is taken as an example.
  • For non-rigid motion, the present embodiment can be extended in combination with algorithm formulas commonly used for non-rigid motion.
  • When the correction reduces to a pure translation by the organ displacement predicted by the breathing model, equation (2) can be expressed with T acting as a translation in the world coordinate system.
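A minimal sketch of such a translation-only correction map, as one illustrative reading of T in equation (2) (the function name and the homogeneous 4x4 representation are assumptions):

```python
import numpy as np

def correction_transform(displacement):
    """Build a translation-only correction map T for equation (2).

    displacement: model-predicted 3-D offset of the organ relative to
    the reference breathing depth, in world coordinates.  The map shifts
    points back by that offset, moving the organ to its reference pose.
    """
    T = np.eye(4)
    T[:3, 3] = -np.asarray(displacement, dtype=float)
    return T
```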
  • Based on the ultrasound fusion imaging navigation system of Embodiment 1, a respiration sensor 109 (the dotted-line portion of FIG. 1) is attached to the patient's abdominal epidermis to track how the abdominal surface moves with breathing.
  • the respiration sensor can be a position sensor based on electromagnetic induction or other type of position sensor.
  • Reference position: any point on the path along which the respiration sensor moves with the abdomen; for example, it may be the middle position of the sensor's movement path;
  • Breathing depth: the displacement of the respiration sensor relative to the reference position at the time corresponding to a frame of ultrasound image is called the breathing depth and is used to approximate the state of respiration. It can be represented by d(t) and obtained from the respiration position information R_resp(t) using commonly used methods for converting respiration sensor position information, which are not described in detail here;
  • Reference breathing depth: the breathing depth corresponding to the reference position is called the reference breathing depth and can be expressed as d_0.
  • Relative displacement with respect to the reference: the position of the target organ at the reference breathing depth d_0 is taken as the reference position; in the world coordinate system, the amount of movement of the target organ relative to this reference position at a different breathing depth is called the relative motion with respect to the reference.
  • This embodiment uses rigid motion as an example and ignores rotational motion, considering only this relative displacement, denoted W(d), which is used in the description below.
  • the method for establishing a breathing model of this embodiment includes the following steps S21 to S25:
  • Step S21: set a reference breathing depth and a reference position for the pre-stored ultrasound video.
  • During acquisition of the video, the position pointing information R_probe(t) of the position sensor fixed on the ultrasound probe and the respiration position information R_resp(t) of the respiration sensor fixed on the patient's abdominal epidermis are recorded.
  • The acquisition time covers one or more breathing cycles.
  • Respiratory motion exhibits periodic characteristics: the phase varies within each cycle, and the motion repeats from cycle to cycle, similar to the sinusoid-like waveform shown in FIG. 6, where the horizontal axis represents time (the video frame number t) and the vertical axis represents the breathing depth d(t); each repeated curve represents one breathing cycle, and the horizontal dotted line marks the given reference depth d_0.
  • In this embodiment the reference position is set to the middle position of the sensor's motion path.
  • The reference position may also be another position, such as the lowest or highest position of the sensor's motion path. Since the same breathing depth may correspond to either an inhalation state or an exhalation state, the respiratory state can be further distinguished into an expiratory phase and an inspiratory phase; a similar sinusoid-like waveform is then obtained for each.
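The breathing depth d(t) and the expiratory/inspiratory split described above can be sketched as follows. This is a simplified illustration: the principal-component estimate of the anterior-posterior axis and the function names are assumptions, not the patent's prescribed conversion of R_resp(t).

```python
import numpy as np

def breathing_depth(positions, axis=None):
    """Breathing depth d(t) from the respiration sensor's position trace.

    positions: (N, 3) sensor positions over time.  axis: unit vector of
    the dominant (anterior-posterior) motion direction; if None, it is
    estimated as the first principal component of the trace.
    Returns depths relative to the midpoint of the motion path, so the
    reference depth d_0 (middle position) maps to 0.
    """
    positions = np.asarray(positions, dtype=float)
    centered = positions - positions.mean(axis=0)
    if axis is None:
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        axis = vt[0]                       # dominant motion direction
    proj = centered @ axis                 # scalar motion along that axis
    d0 = 0.5 * (proj.min() + proj.max())   # midpoint of the path
    return proj - d0

def split_phases(depths):
    """Label each sample inspiration (+1) or expiration (-1) by the
    local slope of the depth curve."""
    slope = np.gradient(depths)
    return np.where(slope >= 0, 1, -1)
```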
  • Step S22: for each video segment, select a reference frame and obtain the motion displacement V_i(d(t)) of the target organ in each other frame relative to the reference frame.
  • The motion of the target organ in the other frames can be obtained by motion tracking, such as template matching.
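Template matching as mentioned for motion tracking can be sketched with normalised cross-correlation. This is a simplified, brute-force illustration (names hypothetical); a practical system might use a library routine such as OpenCV's matchTemplate instead.

```python
import numpy as np

def track_template(frame, template, top_left, search=10):
    """Locate `template` in `frame` near `top_left` by normalised
    cross-correlation, returning the best-matching top-left corner.

    frame, template: 2-D gray-scale arrays.  top_left: (row, col) of the
    template in the reference frame.  search: half-width of the search
    window around that position.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t) + 1e-12
    best, best_pos = -np.inf, tuple(top_left)
    r0, c0 = top_left
    for r in range(max(0, r0 - search), min(frame.shape[0] - th, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(frame.shape[1] - tw, c0 + search) + 1):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            score = (p * t).sum() / (np.linalg.norm(p) * tn + 1e-12)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

The in-plane displacement of the tracked region between the reference frame and another frame gives the observation V_i(d(t)) for that video.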
  • Step S23: convert the displacements to the same reference coordinate system to eliminate the influence of probe jitter.
  • proj_i(W(d(t))) is the projected component of the reference-frame relative displacement W(d(t)) on the plane corresponding to video i.
  • The motion displacement of the target organ in the other frames relative to the reference frame is the observed value of this projected component.
  • The projected component on the plane corresponding to the reference frame, proj_i(m(t) − m_0), serves as the general expression of the observed value V_i(d(t)) of proj_i(W(d(t))), namely:
  • V_i(d(t)) = proj_i(m(t) − m_0)  (6)
  • Step S24: the relative displacement of the reference frame relative to the reference position can be obtained by an optimization method through the following formula (7), that is, W(d) is the value of W at which the objective attains its minimum:
  • W(d) = argmin_W Σ_i ||V_i(d) − proj_i(W)||²  (7)
  • Equation (7) is solved at multiple breathing depths to obtain the displacement W(d) at each breathing depth.
  • Step S25: obtain the breathing model by fitting the different breathing depths and the corresponding reference-frame relative displacements W(d).
  • The "breathing model" refers to the law by which the displacement of the target organ varies with breathing depth.
  • The term "establishing a breathing model" refers to calculating or determining, based on existing ultrasound video data, the mathematical expression of the law by which the target organ displacement varies with breathing depth.
  • The error between the target organ motion relative to the reference frame, V_i(d), observed in plane i (i.e., video i) at a certain breathing depth d, and the projection proj_i(W(d)) of W(d) on that plane, is measured by the squared modulus of the difference of the two vectors, i.e., ||V_i(d) − proj_i(W(d))||².
  • Other embodiments may describe the magnitude of the error in other ways, such as ||V_i(d) − proj_i(W(d))|| and so on.
  • The three straight lines shown in FIG. 7 are schematic diagrams of the displacement of the target organ relative to the reference position, in the three directions of the world coordinate system, as a function of the breathing depth d, obtained by linear fitting.
  • Other embodiments may use other fitting methods, for example other types of curves such as a quadratic or cubic curve.
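  • The fitting of step S25 — one curve per world-coordinate axis, linear in the simplest case (cf. the three lines of FIG. 7) — can be sketched with a per-axis least-squares polynomial fit (the data below are synthetic; names are hypothetical):

```python
import numpy as np

def fit_breathing_model(depths, displacements, degree=1):
    """Fit displacement-vs-breathing-depth, one polynomial per axis of
    the world coordinate system.  Returns a callable W(d) -> 3-vector."""
    depths = np.asarray(depths, dtype=float)
    disp = np.asarray(displacements, dtype=float)     # shape (n, 3)
    coeffs = [np.polyfit(depths, disp[:, k], degree) for k in range(3)]
    return lambda d: np.array([np.polyval(c, d) for c in coeffs])

# Synthetic: displacement grows linearly with depth on each axis.
d = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
Wd = np.stack([2.0 * d, -1.0 * d, 0.3 * d], axis=1)   # W(d) samples
model = fit_breathing_model(d, Wd)
```

Raising `degree` gives the quadratic or cubic variants mentioned above without changing the interface.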
  • The breathing model is used for correction; that is, before registration, ultrasound images of different frames are corrected by the breathing model to the same breathing depth (i.e., the same breathing state), so as to eliminate or reduce respiratory effects.
  • The specific correction process is described below.
  • Let t be a certain frame among the frames to be registered selected by the doctor, and let d(t) be the breathing depth corresponding to that frame.
  • The relative displacement of the reference frame relative to the reference position at that breathing depth is W(d(t)).
  • T(W(d(t))) is the breathing correction matrix obtained according to the breathing model, with the breathing depth d(t) corresponding to ultrasound image frame t as the independent variable.
  • T(W(d(t))) may also be obtained by compensating for the motion law in one or more dimensions after some processing, such as a nonlinear transformation or assigning different weights to certain directions.
  • This embodiment uses the breathing model for correction before registration.
  • The established breathing model can also be used to correct for the effects of breathing in real time during the fusion process.
  • The principle of this correction is the same as that of correcting different frames to the same breathing depth during the aforementioned registration process.
  • After correction, the correspondence between X_us and X_sec in the fusion can be embodied by extending formula (3) into formula (10):
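  • If the breathing model is translation-only, the correction matrix T(W(d(t))) can be represented as a 4×4 homogeneous translation that removes the breathing displacement from a world-space point. A minimal sketch (the patent does not spell out formula (10), so the exact composition is an assumption):

```python
import numpy as np

def breath_correction_matrix(W):
    """Homogeneous 4x4 translation that removes the breathing
    displacement W = (wx, wy, wz) from a world-space point."""
    T = np.eye(4)
    T[:3, 3] = -np.asarray(W, dtype=float)   # subtract the displacement
    return T

# A world-space point displaced by breathing, then corrected back.
W = [2.0, -1.0, 0.5]                          # model output W(d(t))
p = np.array([10.0, 20.0, 30.0, 1.0])         # point at the reference depth
p_displaced = p + np.array([*W, 0.0])         # same point at depth d(t)
p_corrected = breath_correction_matrix(W) @ p_displaced
```

Inserting such a matrix between the sensor-to-world and world-to-modal transforms corrects each frame to the reference breathing state before fusion.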
  • FIG. 8 is a schematic diagram of the process of registration and fusion based on the ultrasound video, the position pointing information of the position sensor, and the position information of the respiratory sensor, in the case where a breathing model has been established. The process specifically includes:
  • Step S31: collect the ultrasound video.
  • Step S32: play back the video and select one or more frames of images for registration.
  • Step S33: judge whether breathing correction is performed on the registration frames. If so, step S34 is performed to correct the registration frames to the same breathing depth according to the breathing model, and the flow proceeds to step S35; if not, the flow proceeds directly to step S35, in which the selected images are registered.
  • Step S36: judge whether breathing correction is performed on the fusion result. If so, step S37 is performed to correct the fusion result to the same breathing depth according to the breathing model, and the flow proceeds to step S38; if not, step S38 is performed directly to display the fusion result.
  • The doctor can perform video playback and select one or more frames of ultrasound images for registration, and the system extracts the corresponding position sensor information for registration. If the selected multi-frame ultrasound images are not at the same breathing depth, the breathing model can be used to correct them to the same depth before registration. After registration is completed, if the fusion result is to be observed at a given breathing depth, the breathing model can also be used to correct it to that depth.
  • The above embodiment describes how to use a breathing model to correct multi-frame ultrasound images for registration to a certain breathing depth, and how to correct the fusion result to a certain breathing depth, assuming that the corrected target depth is the reference depth d_0 used when the breathing model was established.
  • Of course, the corrected target depth can be understood as any meaningful breathing depth d_1, in which case the breathing correction matrix is changed from T(W(d(t))) to T(W(d_1)) − T(W(d(t))).
  • The fusion method of an ultrasound image and a pre-acquired modal image provided by this embodiment includes the following steps:
  • a selecting step: a multi-frame ultrasound image is selected from the ultrasound video data, the ultrasound video data comprising ultrasound images obtained by acquiring the target object from at least one plane, and position pointing information and respiratory position information corresponding to each frame of ultrasound image, wherein the position pointing information is sensed by a position sensor fixed on the ultrasound probe during ultrasound image acquisition, and the respiratory position information is obtained by a respiratory sensor fixed on the target object sensing the breathing of the target object during ultrasound image acquisition;
  • a registration step: the multi-frame ultrasound image selected from the ultrasound video data is registered with the modal image; the registration method may adopt a common image registration algorithm rather than the registration method provided by the foregoing embodiment of the present application;
  • a fusion step: image fusion is performed on the registered multi-frame ultrasound image and the modal image, and during the fusion process the registered multi-frame ultrasound image is corrected to the same breathing depth using the breathing model, so that the fusion result can be observed at that breathing depth.
  • For the establishment of the breathing model and the method of correcting to the same breathing depth, reference may be made to the corresponding portions of the foregoing embodiment 2, which will not be repeated here.
  • a selecting step of selecting a multi-frame ultrasound image from at least one piece of ultrasound video data, wherein the ultrasound video data comprises ultrasound images obtained by acquiring the target object from at least one plane, and position pointing information corresponding to each frame of ultrasound image, the position pointing information being obtained by a position sensor fixed on the ultrasound probe during ultrasound image acquisition; meanwhile, the ultrasound video data further includes respiratory position information corresponding to each frame of ultrasound image, the respiratory position information being obtained during ultrasound image acquisition by a respiratory sensor fixed on the target object sensing the breathing of the target object;
  • the multi-frame ultrasound image to be registered is corrected to the same breathing depth by the breathing model.
  • The steps of establishing a breathing model include:
  • a relative motion calculation sub-step: for each piece of ultrasound video data, select one frame of ultrasound image corresponding to the reference breathing depth as the reference frame, and acquire the amount of motion, relative to the reference frame, of the target object in the other frames of ultrasound images; according to this amount of motion, calculate, in the same reference coordinate system, the relative motion amount with respect to the reference frame of the target object in the other frames of ultrasound images at different breathing depths;
  • a fitting modeling sub-step: fit the different breathing depths and their corresponding reference-frame relative motion amounts to obtain the breathing model.
  • Correcting the multi-frame ultrasound image to be registered to the same breathing depth comprises: a calculation sub-step: for any frame of ultrasound image in the multi-frame ultrasound image to be registered, combine the corresponding respiratory position information to obtain the breathing depth corresponding to that frame of ultrasound image, and obtain, according to the breathing model, the relative motion amount of the target object in that frame of ultrasound image with respect to the reference frame at the reference position;
  • a calibration sub-step: according to the breathing model and the reference-frame relative motion amounts corresponding to the multi-frame ultrasound image, correct the multi-frame ultrasound image to the same predetermined breathing depth.
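  • The calculation and calibration sub-steps above amount to: evaluate the fitted model at each frame's breathing depth, then shift each frame by the difference from the chosen target depth. A sketch (translation-only correction with a hypothetical linear model; all names are illustrative):

```python
import numpy as np

def correct_to_depth(positions, depths, model, target_depth=0.0):
    """Shift each frame's target position so all frames correspond to
    `target_depth`: p_corrected = p - W(d_frame) + W(d_target)."""
    W_target = model(target_depth)
    return np.array([p - model(d) + W_target
                     for p, d in zip(positions, depths)])

model = lambda d: np.array([2.0 * d, -1.0 * d, 0.3 * d])   # fitted W(d)
depths = np.array([0.2, 0.8, 1.4])                          # per-frame d(t)
base = np.array([5.0, 5.0, 5.0])          # organ position at depth 0
positions = np.array([base + model(d) for d in depths])
corrected = correct_to_depth(positions, depths, model, target_depth=0.0)
```

After correction, frames acquired at different breathing depths refer to the same organ position and can be registered together.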
  • The multi-frame ultrasound image to be registered in the registration step of this embodiment may be the multi-frame image selected when the breathing model is established, or may be a multi-frame ultrasound image obtained based on real-time ultrasound images.
  • The embodiments of the present application propose a new method of registration and fusion based on retrospective ultrasound video combined with position sensor information, while also establishing a breathing model for respiratory correction.
  • The patient is only required to breathe normally.
  • Ultrasound video covering more than one respiratory cycle is collected at several mutually different angles, one video per position, and the position and pointing information of the sensor on the ultrasound probe corresponding to each frame of ultrasound data is recorded.
  • Prior to registration, in the acquired ultrasound video containing the information required for registration, each frame of ultrasound data simultaneously records the position and pointing information of the sensor fixed on the ultrasound probe, and the position of the respiratory sensor fixed on the patient's abdomen.
  • The doctor is allowed to perform frame-by-frame or continuous video playback and search; one or more frames of data are selected for registration, and the system simultaneously extracts the corresponding ultrasound probe position sensor information for registration.
  • The physician can use the breathing model and respiratory sensor information to correct ultrasound data from different frames to the same breathing state.
  • The breathing model can also be used to correct the fusion result in real time according to the position information of the respiratory sensor, so as to eliminate or attenuate respiratory effects.
  • The method proposed in the embodiments of the present application, for registering and fusing ultrasound video with other modal data and performing respiratory correction of the registration data and the fusion result, is applicable not only to the liver but also to other abdominal organs such as the kidney and the prostate.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Abstract

The present application relates to an ultrasound fusion imaging method and an ultrasound fusion imaging navigation system. The ultrasound fusion imaging method comprises: a selection step, comprising selecting at least one ultrasound image from at least one previously stored piece of ultrasound video data according to an input instruction, wherein the ultrasound video data comprises an ultrasound image obtained by acquiring a target object from at least one plane and location-pointing information corresponding to the ultrasound image; a registration step, comprising registering the selected at least one ultrasound image with a modality image, wherein the registration process uses the location-pointing information of the at least one ultrasound image; and a fusion step, comprising image fusion of the registered ultrasound image with the modality image. The present invention differs from existing registration and fusion methods based on real-time ultrasound in that a registration video of the target object is recorded by scanning prior to registration, and one or more ultrasound images are then selected from it to perform registration.

Description

Ultrasound fusion imaging method and ultrasound fusion imaging navigation system

Technical field

The present application relates to ultrasound imaging technology, and in particular to a method for fusing an ultrasound image with a pre-acquired modal image, and to an ultrasound fusion imaging navigation system.

Background
Imaging of a target object in the clinic can use more than one imaging system, allowing medical personnel to obtain medical images of multiple modalities, such as computed tomography (CT) images, magnetic resonance (MR) images, and ultrasound images. The principle of ultrasound fusion imaging navigation is to establish, through a spatial positioning device (usually a magnetic positioning sensor attached to the probe), the spatial correspondence between real-time ultrasound images and other modal data acquired in advance (such as CT or MR images), and to superimpose the ultrasound image and the corresponding modal data slice for display, so as to fuse the two images and let them jointly guide diagnosis and treatment, giving full play to the high resolution of CT or MR and the real-time nature of ultrasound, thereby providing clinicians with more diagnostic information and improving treatment outcomes.
In ultrasound fusion imaging navigation, an important step is to register the ultrasound image with the modal data. In essence, this means establishing a one-to-one correspondence between the positions of points (or planes) in the ultrasound image in the world coordinate system and the positions of the corresponding points (or planes) in the modal data in the world coordinate system; accurately obtaining the position of a target point in the world coordinate system has a great influence on the registration accuracy.
The current registration technique is based on real-time ultrasound: medical staff obtain the ultrasound image used to provide registration information by freezing the current frame, and this frame-by-frame processing of real-time ultrasound images has become habitual. In addition, to obtain an ultrasound image of a specific slice at a certain breathing depth, the doctor often needs the patient to control breathing well and cooperate; in particular, when fusing abdominal organs of patients who breathe abdominally, the organ displacement, rotation, and deformation caused by respiratory motion introduce large errors, so the patient may be required to hold a particular breathing state for a long time. This places high demands on both doctors and patients using ultrasound fusion imaging navigation; if the patient's breathing control is poor or the doctor's operating experience is insufficient, satisfactory results often cannot be achieved, leading, for example, to low registration accuracy and a reduced success rate of fusion imaging. Current methods for eliminating respiratory effects mainly rely on the doctor manually judging the respiratory phase or adding a sensor for simple respiratory gating, with poor results.

Summary of the invention
Based on this, it is necessary to provide an ultrasound fusion imaging method and an ultrasound fusion imaging navigation system that effectively eliminate respiratory effects.
A method for fusing an ultrasound image with a pre-acquired modal image includes: a selecting step of selecting at least one frame of ultrasound image from at least one piece of pre-stored ultrasound video data according to an input instruction, the ultrasound video data including ultrasound images obtained by acquiring a target object from at least one plane and position pointing information corresponding to each frame of ultrasound image, the position pointing information being generated, during ultrasound image acquisition, by a position sensor fixed on the ultrasound probe according to the sensed movement of the probe; a registration step of registering the selected at least one frame of ultrasound image with the modal image, the position pointing information of the at least one frame of ultrasound image being used in the registration process; and a fusion step of performing image fusion on the registered ultrasound image and the modal image.
In one embodiment, a multi-frame ultrasound image is selected in the selecting step; the method further includes: a breathing model establishing step of establishing a breathing model according to the ultrasound video data; and a breathing correction step of correcting the multi-frame ultrasound image to the same breathing depth using the breathing model before registering the selected multi-frame ultrasound image with the modal image and/or during the fusion process.
A method for fusing an ultrasound image with a pre-acquired modal image includes: a selecting step of selecting a multi-frame ultrasound image from at least one piece of ultrasound video data, the ultrasound video data including ultrasound images obtained by acquiring a target object from at least one plane and position pointing information corresponding to each frame of ultrasound image, the position pointing information being sensed by a position sensor fixed on the ultrasound probe during ultrasound image acquisition;

a breathing model establishing step of establishing a breathing model according to the ultrasound video data;

a registration step of registering the multi-frame ultrasound image to be registered with the modal image;

a fusion step of performing image fusion on the registered ultrasound image and the modal image;

wherein, before the multi-frame ultrasound image to be registered is registered with the modal image and/or during the fusion process, the multi-frame ultrasound image to be registered is corrected to the same breathing depth using the breathing model.

An ultrasound fusion imaging navigation system includes: a probe and a position sensor fixed on the probe; an acquisition module configured to acquire the target object from at least one plane to obtain at least one piece of ultrasound video data containing registration information, and to record, for each frame of ultrasound image in each piece of ultrasound video data, position pointing information generated during ultrasound image acquisition by the position sensor according to the sensed movement of the ultrasound probe; a playback module configured to play back the pre-stored ultrasound video data according to an input instruction; a selection module configured to select at least one frame of ultrasound image from the played-back ultrasound video data according to an input selection instruction; a registration module configured to register the selected at least one frame of ultrasound image with the modal image, the position pointing information of the at least one frame of ultrasound image being used in the registration process; and a fusion module configured to perform image fusion on the registered ultrasound image and the modal image.
The above ultrasound fusion imaging method and ultrasound fusion imaging navigation system differ from existing registration and fusion methods based on real-time ultrasound: a registration video of the scanned target object is recorded before registration, and one or more frames of ultrasound images are then selected for registration, which effectively eliminates respiratory effects.
FIG. 1 is a schematic structural diagram of an ultrasound fusion imaging navigation system according to an embodiment;

FIG. 2 is a schematic flowchart of a method for fusing an ultrasound image with a pre-acquired modal image according to an embodiment;

FIG. 3 is a schematic diagram of spatial transformation according to an embodiment;

FIG. 4 is a schematic diagram of a display mode of multiple registration planes according to an embodiment, in which each window displays one ultrasound image;

FIG. 5 is a schematic flowchart of establishing a breathing model according to an embodiment;

FIG. 6 is a schematic diagram of breathing depth varying with time according to an embodiment;

FIG. 7 is a schematic diagram, according to an embodiment, of the displacement of a target organ relative to a reference position in the three directions of the world coordinate system as a function of breathing depth, obtained by linear fitting;

FIG. 8 is a schematic flowchart of registration and fusion after a breathing model is established, according to an embodiment.

Detailed description
The following description provides specific details for a full understanding. However, those skilled in the art should understand that the invention may be practiced without such details. In some instances, well-known structures and functions are not shown or described in detail in order to avoid unnecessarily obscuring the description of the embodiments.

Unless the context clearly requires otherwise, throughout this specification and the claims, the terms "include", "comprise", and the like are to be construed in an inclusive sense rather than an exclusive or exhaustive sense, that is, "including, but not limited to".

The present invention is further described in detail below through specific embodiments with reference to the accompanying drawings.
A block diagram of an ultrasound fusion imaging navigation system is shown in FIG. 1 (excluding the dashed box): the ultrasound probe 101 transmits ultrasound waves to the body examination site, and the received echo signals are processed by the ultrasound imaging module 102 to obtain an ultrasound image of the target organ; modal image data acquired in advance, such as CT or MR images, are imported into the registration fusion module 105 before registration; the position sensor 103 fixed on the probe 101 continuously provides position information as the probe moves, and the positioning controller 104 obtains the six-degree-of-freedom spatial orientation information of the probe (including vertical, lateral, longitudinal, pitch, roll, and sway); the registration fusion module 105 uses the image information and position information to register and fuse the ultrasound image with the modal data; the display module 106 is used to display the fusion result.
In conjunction with the ultrasound fusion imaging navigation system shown in FIG. 1, a method for fusing an ultrasound image with a pre-acquired modal image provided in this embodiment is shown in FIG. 2 and includes the following steps S11 to S13:
Selecting step S11: select at least one frame of ultrasound image from at least one piece of pre-stored ultrasound video data according to an input instruction.
The pre-stored ultrasound video data includes an ultrasound video containing registration information, i.e., a registration video, obtained by acquiring the target object (a target organ, such as the liver) in advance; for each frame of ultrasound image therein, the position pointing information R_probe(t) of the position sensor fixed on the ultrasound probe is recorded simultaneously. R_probe(t) is generated during ultrasound image acquisition by the position sensor according to the sensed movement of the ultrasound probe; that is, the registration video contains, in addition to the ultrasound image data, the position pointing information R_probe(t) of the position sensor. The position sensor may be based on electromagnetic induction, on optical principles, or on acoustic or other principles. In the embodiments of the present application, a position sensor based on electromagnetic induction is taken as an example.
In the selecting step, the input instruction may be an instruction input by an external user, or an instruction automatically triggered inside the system when registration and fusion are performed. In one implementation, the pre-stored ultrasound video may first be played back and the selection made during playback; the playback and selection instructions received by the system depend on the user's needs and can be implemented with common related techniques, as long as playback of the ultrasound video and selection of frame images are satisfied. In the playback of this step, the pre-stored ultrasound video may be played frame by frame from beginning to end, or dragged to the frame position corresponding to the slice of interest by means of, for example, a progress bar or a knob. In another implementation, the selection may also be made according to preset conditions, for example, presetting the selection of certain frames of the ultrasound video, such as the first 20 frames.
Registration step S12: register the selected at least one frame of ultrasound image with the modal image; the system simultaneously uses the position pointing information corresponding to these ultrasound images in the registration.
To improve the overall registration accuracy, two or more frames of ultrasound images may also be selected from the registration video for registration. Of course, the precondition for selecting multiple frames for registration is that the ultrasound images of these frames have the same or similar breathing states, i.e., were acquired in the same or similar breathing states. Generally, if the ultrasound video was acquired while the patient held his or her breath, the ultrasound images of all frames in the video have similar breathing states, and multiple frames can be used directly and simultaneously for registration.
In image registration, a spatial transformation is needed to map one image onto another so that points corresponding to the same spatial position in the two images are brought into one-to-one correspondence, thereby achieving correct fusion of information. Based on this, in the ultrasound fusion imaging navigation system, registration of the ultrasound image with the modal image can be achieved through the spatial transformation relationship of FIG. 3: a point in the ultrasound image is first transformed from the coordinate system of the ultrasound image to that of the position sensor, then from the coordinate system of the position sensor to the world coordinate system, and finally the world coordinate system is transformed to the coordinate system of the modal image. The world coordinate system in this and other embodiments is a coordinate system used as a reference and may be arbitrarily designated, for example, the magnetic field generator coordinate system; other coordinate systems may of course also be used as the reference coordinate system. The spatial transformation relationship shown in FIG. 3 can be expressed as:
X_sec = P · R_probe · A · X_us    (1)

where X_us is the coordinate of a point in the ultrasound image, X_sec is the coordinate of that point in the modal image, A is the transformation matrix from ultrasound image space to position sensor space, R_probe is the transformation matrix from position sensor space to the reference coordinate system (e.g., the magnetic field generator space), and P is the transformation matrix from the reference coordinate system to modal image space.

As for the transformation matrix A: since the position sensor is fixed on the probe, A remains constant as long as the imaging depth of the ultrasound probe is unchanged. A can therefore be obtained by calibration before registration, in combination with the position and orientation information R_probe; for details, refer to the existing techniques for transforming ultrasound image space to position sensor space, which are not described here.
The transformation matrix R_probe can be read directly from the positioning controller connected to the position sensor; it changes continuously as the probe moves, and can likewise be obtained with commonly used related techniques.
The transformation matrix P, which may also be called the registration matrix, can be computed from formula (1) by finding corresponding points or surfaces in ultrasound image space and modal image space. In one example, certain specific points or regions of the examined part of the body are marked; imaging in ultrasound image space yields the positions X_us of the marked points or regions, while imaging in modal image space yields their positions X_sec, and P is then obtained by solving formula (1). In another example, after some points or regions of the ultrasound image space are transformed into the world coordinate system, some points or regions of the modal image are also transformed into the world coordinate system; an image matching method then yields the positions X_us of the points or regions imaged in ultrasound space and the corresponding positions X_sec in modal image space, from which P can likewise be computed. Those of ordinary skill in the art will readily appreciate that the inverse transformation can be applied to obtain the reverse coordinate transformation from one image space to the other.
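The patent leaves the actual estimation of P from corresponding points to formula (1). When point correspondences between the two spaces are available, one common choice (not prescribed by the patent) is the least-squares rigid fit known as the Kabsch/Procrustes method. A minimal NumPy sketch, with the function name and implementation details being our illustrative assumptions:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping
    src points onto dst points -- the Kabsch/Procrustes method.
    src, dst: (N, 3) arrays of corresponding 3-D points.
    Returns a 4x4 homogeneous matrix, usable as the registration matrix P."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # enforce a proper rotation (det = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    P = np.eye(4)
    P[:3, :3], P[:3, 3] = R, t
    return P
```

In practice `src` would hold the marked points expressed in the world coordinate system (after applying R_probe · A) and `dst` the same points in modal image space.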
Fusion step S13: image fusion is performed on the registered ultrasound image and modal image.
The techniques used for image fusion can follow existing image fusion methods, for example spatial-domain methods such as pixel-wise gray-value maximum (or minimum) fusion and pixel-wise weighted gray-value fusion, or transform-domain methods such as multi-resolution pyramid fusion and Fourier-transform-based fusion, which are not detailed here.
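The two spatial-domain methods named above reduce to simple pixel-wise operations on co-registered images. A minimal sketch, assuming both images have already been resampled onto the same grid:

```python
import numpy as np

def fuse_weighted(us_img, modal_img, alpha=0.5):
    """Pixel-wise weighted gray-value fusion of two co-registered images;
    alpha is the weight given to the ultrasound image."""
    return alpha * us_img + (1.0 - alpha) * modal_img

def fuse_max(us_img, modal_img):
    """Pixel-wise gray-value maximum fusion."""
    return np.maximum(us_img, modal_img)
```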
The above steps thus achieve registration and fusion of the ultrasound image with the modal image. The registration method used here differs from existing methods: existing methods are based on real-time ultrasound and proceed by freezing one frame at a time, whereas the registration method of this embodiment records a registration video of the scanned target object before registration and selects one or more frames of ultrasound images for registration by video playback.
In addition to the above steps, the method of fusing an ultrasound image with a modal image in one embodiment further includes a multi-frame display step: after multiple frames of ultrasound images are selected and registered in the registration step, the intersection lines and angles between these frames are displayed along with the registered or fused frames. As shown in Fig. 4, the two boxes on the left and right display the position 305 of the intersection line between ultrasound images of different frames and the angle 307 between the two frames. If multiple registered or fused frames are to be displayed, one of them may be selected as a reference frame, and the displayed intersection line position and angle are then those between each remaining frame and the reference frame. The advantage of this embodiment is that the relative positional relationship of the frames is reflected more intuitively, facilitating subsequent fine-tuning of the registration.
When an ultrasound fusion imaging navigation system is applied, many patients have relatively poor breathing control, and the acquisition of the registration video and the fusion are performed while the patient breathes freely. During free breathing, however, the organs undergo rather complex motion, including rigid motion such as translation and rotation as well as non-rigid motion such as global or local deformation caused by compression between organs. Organ motion caused by breathing therefore has a large effect on the fusion result, and the influence of breathing on registration and fusion needs to be reduced or eliminated. On this basis, this embodiment proposes a method for registering and fusing an ultrasound image with a pre-acquired modal image: first, a breathing model describing how the organ moves with respiration is built from the ultrasound video and the position and orientation information of the position sensor; the breathing model is then used to obtain a time-varying correction spatial mapping, which is applied to the registration and fusion to reduce or eliminate the influence of breathing.
Here a correction is used to eliminate or attenuate the influence of respiratory motion, which can be expressed as a formula:
X_sec = P · T(R_probe · A · X_us)    (2)

where T is some spatial mapping used for the correction; it may be a linear mapping, an affine mapping, or another form of nonlinear mapping, i.e., T may be any continuous mapping defined in three-dimensional space.
In general, respiratory motion during free breathing is fairly regular and can be approximated as periodic. When breathing, the patient's abdominal surface moves mainly along the anterior-posterior direction of the body, which can be approximated as a back-and-forth reciprocating motion. For a target organ whose motion is caused mainly by breathing, the motion is similar to that of the abdominal surface and can likewise be approximated as periodic. Assuming that the motion of such a target organ (e.g., the liver) with breathing is rigid, a linear model can be used to describe how it moves with respiration. This embodiment is described using a linear model as an example; for other embodiments with non-rigid components, this embodiment can be extended straightforwardly by combining it with formulas commonly employed for non-rigid motion. For a linear model, all points in space share the same mapping, and formula (2) can be written as
X_sec = P · T · R_probe · A · X_us    (3)

in which case the spatial mapping T degenerates to a single matrix.
To build the breathing model, a breathing sensor 109 (the dashed portion shown in Fig. 1) is added to the ultrasound fusion imaging navigation system of Embodiment 1 and fixed on the patient's abdominal surface, so as to track how the position of the sensor on the abdominal surface moves with breathing. The breathing sensor may be a position sensor based on electromagnetic induction or another type of position sensor.
For convenience of the following description, the related terms are first explained as follows:
(1) Base position: any point on the path along which the breathing sensor moves with the abdomen; for example, it may be the midpoint of the sensor's motion path;
(2) Breathing depth: at the moment corresponding to a given frame of the ultrasound image, the displacement of the breathing sensor relative to the base position, used to approximately describe the breathing state. It is denoted d(t) and can be obtained from the breathing position information R_resp(t) using common methods for converting the position information of a breathing sensor, which are not detailed here;
(3) Base breathing depth: the breathing depth corresponding to the base position, denoted d_0;
(4) Relative motion in the reference frame: taking the position of the target organ at the base breathing depth d_0 as the base position, the amount of motion of the target organ relative to that base position at different breathing depths, expressed in the world coordinate system, is called the relative motion in the reference frame. This embodiment uses rigid motion as an example and, ignoring rotational motion, considers only the relative displacement in the reference frame, denoted W here.
As shown in Fig. 5, the method of building a breathing model in this embodiment includes the following steps S21 to S25:
Step S21: for the pre-stored ultrasound video, set the base breathing depth and base position.
The pre-stored ultrasound video is acquired from the abdomen in one or more mutually angled planes (say n planes, n being a positive integer), one segment of ultrasound video per plane, yielding n segments USV_i, i = 1, …, n. The frame number within a video is denoted t, so USV_i(t) denotes the t-th frame of the i-th ultrasound video. At the same time, for each acquired frame, the position and orientation information R_probe(t) of the position sensor fixed on the ultrasound probe and the breathing position information R_resp(t) of the breathing sensor fixed on the patient's abdominal surface are recorded. The acquisition time of each video segment covers one or more breathing cycles.
As analyzed above, respiratory motion is periodic: the phases of the motion differ within a cycle, while from cycle to cycle the process repeats, giving a waveform similar to the sinusoid shown in Fig. 6, in which the horizontal axis represents the breathing cycle, the vertical axis represents the breathing depth, and each repetition of the curve represents one breathing cycle. In Fig. 6, the horizontal dashed line is the given base position d_0, and the breathing depth varies with the video frame number t; here the base position is set to the midpoint of the sensor's motion path. Of course, the base position may also be another position, such as the lowest or highest position of the sensor's path. Since the same breathing depth may correspond either to an inhalation state or to an exhalation state, the breathing state can further be divided into an expiratory phase and an inspiratory phase, which similarly yields a sinusoid-like waveform.
Step S22: for each video segment, select a base frame and obtain the motion displacement V_i(d(t)) of the target object in every other frame relative to the base frame.
For each ultrasound video segment USV_i, with a frame corresponding to the base breathing depth d_0 selected as the base frame, the motion displacement V_i(d(t)) of the target organ in every other frame t relative to the base frame can be obtained by motion tracking, using common algorithms such as template matching.
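The patent does not specify the tracking algorithm beyond naming template matching. As a minimal sketch of that idea, the following exhaustive sum-of-squared-differences search locates a reference-frame patch in a later frame; a practical system would use a faster matcher (e.g., normalized cross-correlation over a restricted search window):

```python
import numpy as np

def track_template(template, frame):
    """Exhaustive template matching by sum of squared differences (SSD).
    Returns the (row, col) of the best-matching top-left corner in `frame`.
    The in-plane displacement is this position minus the template's
    position in the base frame."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_rc = np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            ssd = np.sum((frame[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_rc = ssd, (r, c)
    return best_rc
```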
Step S23: transform into the same reference coordinate system to eliminate the influence of probe jitter.
Since it is difficult to keep the probe absolutely still while the breathing correction video is acquired, the motion tracking needs to be confined to the plane of the base frame to eliminate the influence of probe jitter. In this step, suppose x_0 is the position of a point in the base frame and x(t) is the tracked position of that point in the t-th frame; let R_probe_0 and R_probe(t) denote the probe positions corresponding to the base frame and the t-th frame, respectively; and denote the points in the world coordinate system corresponding to x_0 and x(t) by m_0 and m(t). Then:
m_0 = R_probe_0 · A · x_0    (4)
m(t) = R_probe(t) · A · x(t)    (5)

Suppose the projection of the reference-frame relative displacement W(d(t)) of the target organ in another frame (i.e., a non-base frame) onto the plane of the base frame is denoted proj_i(W(d(t))). The motion displacement of the target organ in that frame relative to the base frame is then an observation of this projected component. Under the foregoing assumptions, the projection of m(t) − m_0 onto the plane of the base frame, proj_i(m(t) − m_0), is a general expression for the observation V_i(d(t)) of proj_i(W(d(t))), namely:
V_i(d(t)) = proj_i(m(t) − m_0)    (6)
By transforming the base frame and non-base frames into the same world coordinate system before projecting, the displacement error that probe jitter might introduce is eliminated. While acquiring each ultrasound video segment, the probe can be fixed with a device such as a probe clamp so that its position moves as little as possible and the patient's posture remains unchanged; the probe position can then be regarded as constant throughout the acquisition, R_probe(t) = R_probe_0, in which case V_i(d(t)) = x(t) − x_0.

Step S24: from the motion displacements V_i(d(t)) obtained in step S22, compute, in the same reference coordinate system, the reference-frame relative displacements of the target organ in the other frames at different breathing depths.
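The operator proj_i used above is an orthogonal projection of a 3-D world-space vector onto the plane of video i's base frame. Assuming e1 and e2 are orthonormal vectors spanning that plane in world coordinates (e.g., the in-plane axis directions of the base frame, obtainable from the rotation part of R_probe_0 · A), a sketch is:

```python
import numpy as np

def project_to_plane(v, e1, e2):
    """Orthogonal projection of 3-D vector v onto the plane spanned by
    orthonormal basis vectors e1, e2 (the in-plane directions of the
    base frame expressed in world coordinates)."""
    v, e1, e2 = (np.asarray(a, dtype=float) for a in (v, e1, e2))
    return (v @ e1) * e1 + (v @ e2) * e2
```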
The same base breathing depth d_0 is set for all video segments. At a given breathing depth d(t) = D, the reference-frame relative displacement of the target organ with respect to the base position can be obtained by optimization using the following formula (7), i.e., W(D) is the value at which the outer sum attains its minimum:

W(D) = argmin_W ( Σ_i Σ_{t: d(t)=D} || V_i(d(t)) − proj_i(W) ||² )    (7)

where argmin() is the function returning the argument that minimizes the expression inside its parentheses.
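When each proj_i is the linear projection onto the plane of video i, it can be written as a 3×3 matrix P_i = e1·e1ᵀ + e2·e2ᵀ, and the minimization in formula (7) becomes a linear least-squares problem solvable through the normal equations. A sketch under that assumption (the matrix formulation is ours, not stated in the patent):

```python
import numpy as np

def solve_reference_displacement(proj_mats, observations):
    """Solve eq. (7): W(D) = argmin_W sum_i ||V_i - P_i @ W||^2,
    where P_i is the 3x3 projection matrix of plane i and V_i the
    in-plane displacement observed in video i at breathing depth D.
    Normal equations: (sum_i P_i^T P_i) W = sum_i P_i^T V_i."""
    A = sum(P.T @ P for P in proj_mats)
    b = sum(P.T @ V for P, V in zip(proj_mats, observations))
    # lstsq handles the rank-deficient single-plane case gracefully
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With a single plane the out-of-plane component of W is unobservable, which is consistent with the patent's acquisition of videos from several mutually angled planes.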
Solving formula (7) at multiple breathing depths yields the displacement W(d) at each of those depths.

Step S25: the breathing model is obtained by fitting the different breathing depths and their corresponding reference-frame relative displacements.
In the embodiments of the present invention, the "breathing model" refers to the law by which the displacement of the target organ varies with breathing depth. "Building a breathing model" means calculating or determining this law from the available ultrasound video data, i.e., obtaining a mathematical expression for how the target organ's displacement varies with breathing depth.
In this step, taking d as the independent variable and fitting the (d, W(d)) point pairs in some manner yields the law by which the target organ's displacement varies with breathing depth, completing the construction of the breathing model.
In this embodiment, formula (7) uses the squared norm of the difference of two vectors to measure the error between the target organ motion V_i(d(t)) observed at a breathing depth D in plane i (i.e., video i) and the projection proj_i(W(D)) of the reference-frame relative displacement onto that plane, namely || V_i(d(t)) − proj_i(W(D)) ||². Other embodiments may describe the magnitude of this error in other ways, for example with the unsquared norm || V_i(d(t)) − proj_i(W(D)) ||.
The three straight lines shown in Fig. 7 illustrate, as obtained by linear fitting, how the displacement of the target organ relative to the base position along the three world-coordinate axes varies with the breathing depth d. Other embodiments may use other fitting methods, for example other types of curve such as a quadratic or cubic curve.

In this embodiment, once the breathing model has been obtained, correction with the breathing model is applied before the ultrasound image and the modal image are registered by the method of Embodiment 1. That is, before registration, the breathing model is used to correct ultrasound images of different frames to the same breathing depth (i.e., the same breathing state), thereby eliminating or attenuating the influence of breathing. The specific correction process is described below.
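The fitting of step S25 under the linear model of Fig. 7 can be sketched as one straight-line fit per world axis (a quadratic or cubic fit would simply change the `deg` argument; the function name and interface are our assumptions):

```python
import numpy as np

def fit_breathing_model(depths, displacements):
    """Fit one straight line W_k(d) = a_k*d + b_k per world axis, as in
    Fig. 7. depths: (N,) breathing depths; displacements: (N, 3) samples
    of W(d). Returns a callable mapping a breathing depth to the modelled
    3-D reference-frame relative displacement."""
    coeffs = [np.polyfit(depths, displacements[:, k], deg=1)
              for k in range(3)]
    return lambda d: np.array([np.polyval(c, d) for c in coeffs])
```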
Suppose t is one of the frames to be registered selected by the physician, with corresponding breathing depth d(t). According to the breathing model, the reference-frame relative displacement of the target organ with respect to the base position at this breathing depth is W(d(t)). Suppose x is a point in this frame, whose position in the world coordinate system is R_probe(t) · A · x. Then, from formula (3), after correction to the base breathing depth d_0 the position of this point becomes:
T(W(d(t))) · R_probe(t) · A · x    (8)

where T(W(d(t))) is the breathing correction matrix obtained from the breathing model, with the breathing depth d(t) corresponding to ultrasound image frame t as its argument. In this embodiment, T(W(d(t))) can be obtained by linearly compensating for the influence of breathing according to how the reference-frame relative displacement of the target organ varies with respiration along the three dimensions. Writing W(d(t)) = (W_x(d(t)), W_y(d(t)), W_z(d(t))), the breathing correction matrix in homogeneous coordinates is:
T(W(d(t))) =
| 1  0  0  −W_x(d(t)) |
| 0  1  0  −W_y(d(t)) |
| 0  0  1  −W_z(d(t)) |
| 0  0  0  1          |    (9)

where [−W_x(d(t)), −W_y(d(t)), −W_z(d(t))] is the translation vector.
In other embodiments, T(W(d(t))) may also be obtained by compensating after applying some processing to the motion law in one or more dimensions, such as a nonlinear transformation or different weights assigned to certain directions.
In this embodiment the breathing model is used for correction before registration. In other embodiments, the breathing correction may instead be performed during fusion, after registration is complete. Suppose the base breathing depth at registration is d_0; the established breathing model can then be used to correct the influence of breathing in real time during fusion. The principle of the correction is the same as that used to correct different frames to the same breathing depth in the registration process described above, and the correspondence between X_us and X_sec in the corrected fusion follows from formula (3) as formula (10):
X_sec = P · T(W(d(t))) · R_probe(t) · A · X_us    (10)
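Under the rigid linear model, formulas (9) and (10) compose into a single chain of homogeneous 4×4 matrix products. A minimal sketch, assuming all matrix arguments are already in homogeneous 4×4 form and x_us is a homogeneous point:

```python
import numpy as np

def breath_correction_matrix(w):
    """Eq. (9): homogeneous 4x4 matrix translating by -W(d(t))."""
    T = np.eye(4)
    T[:3, 3] = -np.asarray(w, dtype=float)
    return T

def map_us_to_modal(x_us, A, R_probe, P, w):
    """Eq. (10): X_sec = P . T(W(d(t))) . R_probe(t) . A . X_us,
    with every factor a 4x4 homogeneous matrix."""
    return P @ breath_correction_matrix(w) @ R_probe @ A @ x_us
```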
Fig. 8 is a schematic flowchart of registration and fusion based on the ultrasound video, the position and orientation information of the position sensor, and the position information of the breathing sensor, for the case in which a breathing model has been established. Specifically, it includes: step S31, acquire the ultrasound video; step S32, play back the video and select one or more frames of images for registration; step S33, decide whether to apply breathing correction to the registration frames — if yes, execute step S34, correcting the registration frames to the same breathing depth according to the breathing model, then go to step S35; if no, go directly to step S35, registering the selected images; step S36, decide whether to apply breathing correction to the fusion result — if yes, execute step S37, correcting the fusion result to the same breathing depth according to the breathing model, then go to step S38; if no, execute step S38, displaying the fusion result. During registration, the physician can play back the video, select one or more frames of ultrasound images for registration, and at the same time extract the position and orientation information of the position sensor for the registration. If the selected frames are not at the same breathing depth, the breathing model can be used to correct them to the same depth before registering. After registration is complete, if it is desired to observe the fusion result at a single breathing depth, the breathing model can likewise be used to correct the result to that depth.
The above embodiments, in describing how the breathing model is used to correct the multiple ultrasound frames for registration to a given breathing depth and how the fusion result is corrected to a given breathing depth, both assume that the target depth of the correction is the base depth d_0 used when the breathing model was built. The target depth of the correction may of course be any meaningful breathing depth d'; it will be understood that the breathing correction matrix then changes from T(W(d(t))) to T⁻¹(W(d')) · T(W(d(t))), i.e., the frame is first corrected to the base depth and then shifted to the target depth d'.

The method of fusing an ultrasound image with a pre-acquired modal image provided by this embodiment includes the following steps:
a playback step: playing, frame by frame, or playing back at least one segment of pre-stored ultrasound video data according to an input play command, the ultrasound video data including ultrasound images acquired of the target object from at least one plane, together with position and orientation information and breathing position information corresponding to each frame of ultrasound image, the position and orientation information being sensed during ultrasound image acquisition by a position sensor fixed on the ultrasound probe, and the breathing position information being obtained during ultrasound image acquisition by a breathing sensor fixed on the target object sensing the target object's breathing;
a registration step: selecting multiple frames of ultrasound images from the ultrasound video data and registering them with the modal image, where the registration may use a common image registration algorithm rather than the registration method provided by the foregoing embodiments of the present application;
a fusion step: performing image fusion on the registered multiple frames of ultrasound images and the modal image, in which, during the fusion process, the breathing model is used to correct the registered frames to the same breathing depth, so that the fusion result can be observed at a single breathing depth.
In the fusion step, for the construction of the breathing model and the method of correcting to the same breathing depth, reference may be made to the corresponding parts of Embodiment 2 above, which are not repeated here.
The method of fusing an ultrasound image with a pre-acquired modal image provided by this embodiment includes the following steps:
a selection step: selecting multiple frames of ultrasound images from at least one segment of ultrasound video data, the ultrasound video data including ultrasound images acquired of the target object from at least one plane, together with position and orientation information corresponding to each frame of ultrasound image, the position and orientation information being sensed during ultrasound image acquisition by a position sensor fixed on the ultrasound probe; the ultrasound video data further includes breathing position information corresponding to each frame of ultrasound image, obtained during ultrasound image acquisition by a breathing sensor fixed on the target object sensing the target object's breathing;
a breathing-model building step: building a breathing model from the ultrasound video data;
a registration step: registering the multiple frames of ultrasound images to be registered with the modal image;
a fusion step: performing image fusion on the registered ultrasound images and the modal image;
wherein, before the multiple frames of ultrasound images to be registered are registered with the modal image and/or during the fusion process, the breathing model is used to correct the frames to be registered to the same breathing depth.
In one implementation, the step of building the breathing model includes:
相对运动计算子步骤, 对于每一段超声视频数据, 选取基准呼吸深度对应的 一帧超声图像为基准帧, 获取其它帧超声图像对应的目标对象相对于基准帧的运 动量, 根据该运动量, 在同一个参考坐标系下, 计算其它帧超声图像对应的目标 对象在不同呼吸深度下的参考系相对运动量;  The relative motion calculation sub-step, for each piece of ultrasound video data, selecting one frame of the ultrasound image corresponding to the reference breathing depth as the reference frame, and acquiring the motion amount of the target object corresponding to the reference frame corresponding to the other frame ultrasound image, according to the amount of motion, in the same Calculating the relative motion amount of the reference frame at different breathing depths of the target object corresponding to the ultrasound image of the other frame under the reference coordinate system;
拟合建模子步骤, 对不同呼吸深度及其对应的参考系相对运动量进行拟合, 得到呼吸模型。  The fitting modeling substeps are performed to fit different respiratory depths and their corresponding reference frame relative motion amounts to obtain a breathing model.
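As an illustrative sketch of the fitting sub-step (not part of the claimed method; the polynomial form and all names are assumptions), the reference-system displacements of the frames can be fitted against their breathing depths with a per-axis least-squares polynomial, yielding a breathing model W(D):

```python
import numpy as np

def fit_breathing_model(depths, displacements, degree=2):
    """Fit a breathing model mapping breathing depth D -> 3-D displacement W(D).

    depths        : (N,) breathing depths read from the breathing sensor
    displacements : (N, 3) reference-system relative displacements of the
                    target object, one row per ultrasound frame
    Returns one fitted polynomial per spatial axis (x, y, z).
    """
    depths = np.asarray(depths, dtype=float)
    displacements = np.asarray(displacements, dtype=float)
    # Independent least-squares polynomial fit for each axis.
    return [np.polynomial.Polynomial.fit(depths, displacements[:, k], degree)
            for k in range(displacements.shape[1])]

def predict_displacement(model, depth):
    """Evaluate the fitted breathing model W(D) at breathing depth D."""
    return np.array([p(depth) for p in model])
```

A low polynomial degree keeps the model smooth over the roughly periodic breathing motion; other fits (e.g. piecewise linear) would serve the same role.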
In yet another implementation, correcting the multiple frames of ultrasound images to be registered to the same breathing depth includes:

a calculation sub-step: for any frame among the multiple frames of ultrasound images to be registered, combining its corresponding breathing position information to obtain the breathing depth of that frame, and obtaining from the breathing model the reference-system relative motion amount of the target object in that frame relative to the reference position;

a correction sub-step: correcting, according to the breathing model, the reference-system relative motion amounts corresponding to the multiple frames of ultrasound images to the same predetermined breathing depth.
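A minimal sketch of the correction sub-step, assuming a purely translational (rigid) correction and an already fitted breathing model `model_w` (illustrative names; the rotation factor of the full method is omitted here):

```python
import numpy as np

def breathing_correction(model_w, frame_depth, target_depth):
    """Translation that moves a frame acquired at frame_depth to the common
    predetermined breathing depth target_depth.

    model_w : callable D -> (3,) displacement W(D) of the target object at
              breathing depth D (the fitted breathing model)
    """
    return np.asarray(model_w(target_depth)) - np.asarray(model_w(frame_depth))

def correction_matrix(translation):
    """4x4 homogeneous breathing-correction matrix built from the per-axis
    translation (rotation left as identity in this translational sketch)."""
    t = np.eye(4)
    t[:3, 3] = translation
    return t
```

Applying `correction_matrix` to every frame's coordinates places all selected frames at the same predetermined breathing depth before registration.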
For details of the above steps and sub-steps, refer to the corresponding portions of Embodiment 2 above; they are not repeated here. The multiple frames of ultrasound images to be registered in the registration step of this embodiment may be the frames selected when establishing the breathing model, or may be multiple frames of ultrasound images obtained from real-time ultrasound images.
In summary, to address two problems of existing registration techniques — that registration based on real-time ultrasound often yields low registration accuracy and a low fusion success rate, and that existing methods for eliminating breathing effects work poorly — the embodiments of the present application propose a new method that performs registration and fusion based on retrospective ultrasound video combined with position sensor information, and additionally establishes a breathing model for breathing correction. For breathing correction, the patient is asked to breathe normally while ultrasound videos each covering more than one breathing cycle are acquired at several mutually angled positions, one video per position; for each frame of ultrasound data, the position and orientation of the sensor on the ultrasound probe and the position of the breathing sensor fixed on the patient's abdomen are recorded, and the patient's breathing model is then established from the video and sensor information. Before registration, each frame of the acquired ultrasound video containing the information needed for registration likewise records the position and orientation of the sensor fixed on the ultrasound probe and the position of the breathing sensor fixed on the patient's abdomen. During registration, the doctor may search the video frame by frame or by continuous playback and select one or more frames for registration; the system simultaneously extracts the corresponding ultrasound probe position sensor information for the registration. When multiple frames are selected for registration, the doctor can use the breathing model and the breathing sensor information to correct the ultrasound data of the different frames to the same breathing state. During fusion, the breathing model can also be used to correct the fusion result in real time according to the breathing sensor position information, so as to eliminate or attenuate the effect of breathing.
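The per-frame records described above — one ultrasound image plus the probe-mounted sensor pose and the abdominal breathing-sensor reading — might be represented as follows for the retrospective frame search (a sketch with assumed field names, not the system's actual data format):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class UltrasoundFrame:
    image: np.ndarray        # 2-D ultrasound image pixels
    probe_pose: np.ndarray   # 4x4 pose of the probe-mounted position sensor
    breath_pos: float        # breathing-sensor reading for this frame
    timestamp: float         # acquisition time in seconds

def frames_in_breath_window(frames, depth, tol):
    """Retrospective search: frames whose breathing-sensor reading lies
    within tol of a chosen depth, i.e. candidates for registering several
    frames in the same breathing state."""
    return [f for f in frames if abs(f.breath_pos - depth) <= tol]
```

Frames returned by such a search can then be registered together, or first corrected by the breathing model when their readings differ.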
The methods proposed in the embodiments of the present application — registering and fusing ultrasound video with data of other modalities, and applying breathing correction to the registration data and the fusion result — are applicable not only to the liver but also to the kidney, the prostate, and other abdominal organs.
Those of ordinary skill in the art will understand that all or part of the flow of the methods in the above embodiments can be accomplished by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flow of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM). However, this is not to be construed as limiting the scope of the patent of the present invention. It should be noted that those of ordinary skill in the art may make several modifications and improvements without departing from the concept of the present invention, all of which fall within the scope of protection of the present invention. The scope of protection of the patent for the present invention shall therefore be subject to the appended claims.

Claims

1. A method for fusing an ultrasound image with a pre-acquired modal image, comprising:

a selection step of selecting, according to an input instruction, at least one frame of ultrasound image from at least one segment of pre-stored ultrasound video data, the ultrasound video data comprising ultrasound images acquired of a target object from at least one plane and position and orientation information corresponding to each frame of ultrasound image, the position and orientation information being sensed by a position sensor fixed on an ultrasound probe during ultrasound image acquisition;

a registration step of registering the selected at least one frame of ultrasound image with the modal image, the registration using the position and orientation information of the at least one frame of ultrasound image; and

a fusion step of performing image fusion on the registered ultrasound image and the modal image.
2. The method of claim 1, wherein:

multiple frames of ultrasound images are selected in the selection step; and

the method further comprises:

a breathing-model establishment step of establishing a breathing model from the ultrasound video data; and

a breathing correction step of using the breathing model to correct the multiple frames of ultrasound images to the same breathing depth before registering the selected multiple frames of ultrasound images with the modal image and/or during the fusion process.
3. The method of claim 2, wherein:

the ultrasound video data further comprises breathing position information corresponding to each frame of ultrasound image, the breathing position information being obtained during ultrasound image acquisition by a breathing sensor fixed on the target object sensing the target object's breathing; and

the breathing-model establishment step comprises:

a relative-motion calculation sub-step: for each segment of ultrasound video data, selecting the frame of ultrasound image corresponding to a reference breathing depth as a reference frame, obtaining the amount of motion, relative to the reference frame, of the target object in the other frames of ultrasound images, and calculating from that amount of motion, in a common reference coordinate system, the reference-system relative motion amounts of the target object at different breathing depths for the other frames; and

a fitting sub-step: fitting the different breathing depths against their corresponding reference-system relative motion amounts to obtain the breathing model.
4. The method of claim 3, wherein the reference-system relative motion amounts of the target object at different breathing depths for the other frames of ultrasound images satisfy: at the same predetermined breathing depth, the error between the amount of motion of the target object relative to the reference frame and the reference-system relative motion amount is minimized.
5. The method of claim 4, wherein the amount of motion is a rigid motion amount, the reference-system relative motion amount is a displacement, and the reference-system relative motion amounts of the target object at different breathing depths for the other frames of ultrasound images are calculated by the following formula:

W(D) = argmin_{W(D)} Σ_{i=1}^{n} f( proj_i(W(D)), Mov_i(D) )

where D is the breathing depth, Mov_i(D) is the displacement of the target object relative to the reference frame, W(D) is the displacement of the target object at breathing depth D, proj_i(W(D)) is the projection component of W(D) onto the plane corresponding to the reference frame of the i-th segment, n is the number of segments of ultrasound video data, i = 1, ..., n, and f is the error function between proj_i(W(D)) and Mov_i(D).
6. The method of claim 3, wherein correcting the multiple frames of ultrasound images to the same breathing depth comprises:

a calculation sub-step: for any frame among the multiple frames of ultrasound images, combining its corresponding breathing position information to obtain the breathing depth of that frame, and obtaining from the breathing model the reference-system relative motion amount of the target object in that frame relative to a reference position; and

a correction sub-step: correcting, according to the breathing model, the reference-system relative motion amounts corresponding to the multiple frames of ultrasound images to the same predetermined breathing depth.
7. The method of claim 6, wherein the amount of motion is a rigid motion amount, the reference-system relative motion amount is a displacement, and the correction sub-step comprises:

obtaining, from the displacement, translation vectors of the target object in each dimension of the reference coordinate system;

determining a rotation factor according to the breathing model;

obtaining a breathing correction matrix from the translation vectors and the rotation factor; and

correcting, according to the breathing correction matrix, the reference-system relative motion amounts corresponding to the multiple frames of ultrasound images to the same predetermined breathing depth.
8. The method of claim 6, wherein correcting the reference-system relative motion amounts corresponding to the multiple frames of ultrasound images to the same predetermined breathing depth specifically comprises:

for any point on any frame among the multiple frames of ultrasound images, calculating its position in the reference coordinate system, and calculating P · T(R_probe · A · x_us) to obtain the position of that point at the predetermined breathing depth, where P is the transformation matrix from ultrasound image space to modal image space, T is the spatial mapping used for the correction, R_probe is the transformation matrix from position sensor space to the world coordinate system, A is the transformation matrix from ultrasound image space to position sensor space, and x_us is the coordinate of the point in ultrasound image space.
9. The method of claim 7, wherein, after the breathing correction step, the fusion step is performed according to the following formula:

X_sec = P · T(W(d(t))) · R_probe(t) · A · X_us

where X_us denotes the coordinates of a pixel in the ultrasound image, X_sec denotes the coordinates of that pixel in the modal image, P denotes the transformation matrix from ultrasound image space to modal image space, T(W(d(t))) denotes the breathing correction matrix, R_probe(t) denotes the transformation matrix from position sensor space to the world coordinate system at time t, and A denotes the transformation matrix from ultrasound image space to position sensor space.
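Outside the claim language, the transform chain of claim 9 can be sketched with 4×4 homogeneous matrices (illustrative code; the matrices P, T, R_probe and A are assumed to be supplied in homogeneous form):

```python
import numpy as np

def ultrasound_to_modal(x_us, P, T, R_probe, A):
    """X_sec = P @ T @ R_probe @ A @ X_us for one pixel coordinate.

    x_us    : (3,) pixel coordinate in ultrasound image space
    A       : 4x4, ultrasound image space -> position sensor space
    R_probe : 4x4, position sensor space -> world coordinate system
    T       : 4x4 breathing-correction matrix T(W(d(t)))
    P       : 4x4, corrected world space -> modal image space
    """
    xh = np.append(np.asarray(x_us, dtype=float), 1.0)  # homogeneous coordinate
    x_sec = P @ T @ R_probe @ A @ xh
    return x_sec[:3] / x_sec[3]                         # back to 3-D
```

With T set to identity the chain reduces to the uncorrected sensor-based mapping, which makes the role of the breathing correction matrix explicit.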
10. The method of claim 1, further comprising:

a multi-frame display step of displaying the registered or fused multiple frames of ultrasound images while displaying the intersection lines and included angles between the registered or fused multiple frames of ultrasound images.
11. A method for fusing an ultrasound image with an image of a pre-acquired modality, comprising:

a selection step of selecting multiple frames of ultrasound images from at least one segment of ultrasound video data, the ultrasound video data comprising ultrasound images acquired of a target object from at least one plane and position and orientation information corresponding to each frame of ultrasound image, the position and orientation information being sensed by a position sensor fixed on an ultrasound probe during ultrasound image acquisition;

a breathing-model establishment step of establishing a breathing model from the ultrasound video data;

a registration step of registering the multiple frames of ultrasound images to be registered with the modal image;

a fusion step of performing image fusion on the registered ultrasound images and the modal image;

wherein, before registering the multiple frames of ultrasound images to be registered with the modal image and/or during the fusion process, the breathing model is used to correct the multiple frames of ultrasound images to be registered to the same breathing depth.
12. The method of claim 11, wherein the ultrasound video data further comprises breathing position information corresponding to each frame of ultrasound image, the breathing position information being obtained during ultrasound image acquisition by a breathing sensor fixed on the target object sensing the target object's breathing; and

the breathing-model establishment step comprises:

a relative-motion calculation sub-step: for each segment of ultrasound video data, selecting the frame of ultrasound image corresponding to a reference breathing depth as a reference frame, obtaining the amount of motion, relative to the reference frame, of the target object in the other frames of ultrasound images, and calculating from that amount of motion, in a common reference coordinate system, the reference-system relative motion amounts of the target object at different breathing depths for the other frames; and

a fitting sub-step: fitting the different breathing depths against their corresponding reference-system relative motion amounts to obtain the breathing model.
13. The method of claim 12, wherein correcting the multiple frames of ultrasound images to be registered to the same breathing depth comprises:

a calculation sub-step: for any frame among the multiple frames of ultrasound images, combining its corresponding breathing position information to obtain the breathing depth of that frame, and obtaining from the breathing model the reference-system relative motion amount of the target object in that frame relative to a reference position; and

a correction sub-step: correcting, according to the breathing model, the reference-system relative motion amounts corresponding to the multiple frames of ultrasound images to the same predetermined breathing depth.
14. An ultrasound fusion imaging navigation system, comprising:

a probe and a position sensor fixed on the probe;

an acquisition module configured to acquire a target object from at least one plane to obtain at least one segment of ultrasound video data containing information for registration and, for each frame of ultrasound image in each segment of ultrasound video data, to record position and orientation information, the position and orientation information being sensed by the position sensor during ultrasound image acquisition;

a selection module configured to select at least one frame of ultrasound image from pre-stored ultrasound video data according to an input instruction;

a registration module configured to register the selected at least one frame of ultrasound image with the modal image, the registration using the position and orientation information of the at least one frame of ultrasound image; and

a fusion module configured to perform image fusion on the registered ultrasound image and the modal image.
15. The ultrasound fusion imaging navigation system of claim 14, further comprising:

a breathing sensor fixed on the target object; and

a breathing correction module configured to use a breathing model to correct the selected multiple frames of ultrasound images to the same breathing depth before they are registered with the modal image and/or during the fusion process;

wherein the acquisition module is further configured to record, for each frame of ultrasound image in each segment of ultrasound video data, breathing position information, the breathing position information being obtained by the breathing sensor sensing the target object's breathing during ultrasound image acquisition.
PCT/CN2014/074451 2013-10-09 2014-03-31 Ultrasound fusion imaging method and ultrasound fusion imaging navigation system WO2015051622A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP14852089.3A EP3056151B1 (en) 2013-10-09 2014-03-31 Ultrasound fusion imaging method and ultrasound fusion imaging navigation system
US15/094,821 US10751030B2 (en) 2013-10-09 2016-04-08 Ultrasound fusion imaging method and ultrasound fusion imaging navigation system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310468271.5A CN104574329B (en) 2013-10-09 2013-10-09 Ultrasonic fusion of imaging method, ultrasonic fusion of imaging navigation system
CN201310468271.5 2013-10-09

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/094,821 Continuation US10751030B2 (en) 2013-10-09 2016-04-08 Ultrasound fusion imaging method and ultrasound fusion imaging navigation system

Publications (1)

Publication Number Publication Date
WO2015051622A1 true WO2015051622A1 (en) 2015-04-16

Family

ID=52812492

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/074451 WO2015051622A1 (en) 2013-10-09 2014-03-31 Ultrasound fusion imaging method and ultrasound fusion imaging navigation system

Country Status (4)

Country Link
US (1) US10751030B2 (en)
EP (1) EP3056151B1 (en)
CN (1) CN104574329B (en)
WO (1) WO2015051622A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107548294A (en) * 2015-03-31 2018-01-05 皇家飞利浦有限公司 Medical imaging apparatus

Families Citing this family (29)

Publication number Priority date Publication date Assignee Title
CN105708492A (en) * 2015-12-31 2016-06-29 深圳市一体医疗科技有限公司 Method and system for fusing B ultrasonic imaging and microwave imaging
CN106934807B (en) * 2015-12-31 2022-03-01 深圳迈瑞生物医疗电子股份有限公司 Medical image analysis method and system and medical equipment
JP6620252B2 (en) * 2016-05-23 2019-12-11 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Correction of probe induced deformation in ultrasonic fusion imaging system
CN106373108A (en) * 2016-08-29 2017-02-01 王磊 Method and device for fusing real-time ultrasonic image and preoperative magnetic resonance image
CN106780643B (en) * 2016-11-21 2019-07-26 清华大学 Magnetic resonance repeatedly excites diffusion imaging to move antidote
CN106691504A (en) * 2016-11-29 2017-05-24 深圳开立生物医疗科技股份有限公司 User-defined section navigation method and device and ultrasonic equipment
EP3508132A1 (en) 2018-01-04 2019-07-10 Koninklijke Philips N.V. Ultrasound system and method for correcting motion-induced misalignment in image fusion
CN108694705B (en) * 2018-07-05 2020-12-11 浙江大学 Multi-frame image registration and fusion denoising method
CN108992084B (en) * 2018-09-07 2023-08-01 广东工业大学 Method for imaging by using combination of CT system and ultrasonic system and CT-ultrasonic inspection equipment
WO2020103103A1 (en) * 2018-11-22 2020-05-28 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic data processing method, ultrasonic device and storage medium
CN111292277B (en) * 2018-12-10 2021-02-09 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic fusion imaging method and ultrasonic fusion imaging navigation system
CN111292248B (en) * 2018-12-10 2023-12-19 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic fusion imaging method and ultrasonic fusion navigation system
CN109657593B (en) * 2018-12-12 2023-04-28 深圳职业技术学院 Road side information fusion method and system
CN109727223B (en) * 2018-12-27 2021-02-05 上海联影医疗科技股份有限公司 Automatic medical image fusion method and system
CN111179409B (en) * 2019-04-23 2024-04-02 艾瑞迈迪科技石家庄有限公司 Respiratory motion modeling method, device and system
CN110706357B (en) * 2019-10-10 2023-02-24 青岛大学附属医院 Navigation system
CN111127529B (en) * 2019-12-18 2024-02-02 浙江大华技术股份有限公司 Image registration method and device, storage medium and electronic device
CN113129342A (en) * 2019-12-31 2021-07-16 无锡祥生医疗科技股份有限公司 Multi-modal fusion imaging method, device and storage medium
US11497475B2 (en) * 2020-01-31 2022-11-15 Caption Health, Inc. Ultrasound image acquisition optimization according to different respiration modes
CN111327840A (en) * 2020-02-27 2020-06-23 努比亚技术有限公司 Multi-frame special-effect video acquisition method, terminal and computer readable storage medium
CN111358492A (en) * 2020-02-28 2020-07-03 深圳开立生物医疗科技股份有限公司 Four-dimensional contrast image generation method, device, equipment and storage medium
CN112184781A (en) * 2020-09-14 2021-01-05 中国科学院深圳先进技术研究院 Method, device and equipment for registering ultrasonic image and CT image
CN112386282B (en) * 2020-11-13 2022-08-26 声泰特(成都)科技有限公司 Ultrasonic automatic volume scanning imaging method and system
CN114073581B (en) * 2021-06-29 2022-07-12 成都科莱弗生命科技有限公司 Bronchus electromagnetic navigation system
CN113674393B (en) * 2021-07-12 2023-09-26 中国科学院深圳先进技术研究院 Method for constructing respiratory motion model and method for predicting unmarked respiratory motion
CN113768527B (en) * 2021-08-25 2023-10-13 中山大学 Real-time three-dimensional reconstruction device based on CT and ultrasonic image fusion and storage medium
CN114842239B (en) * 2022-04-02 2022-12-23 北京医准智能科技有限公司 Breast lesion attribute prediction method and device based on ultrasonic video
CN114973887B (en) * 2022-05-19 2023-04-18 北京大学深圳医院 Interactive display system for realizing ultrasonic image integration by combining multiple modules
CN116740219B (en) * 2023-08-14 2024-01-09 之江实验室 Three-dimensional photoacoustic tomography method, device, equipment and readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US20050180544A1 (en) * 2004-02-17 2005-08-18 Frank Sauer System and method for patient positioning for radiotherapy in the presence of respiratory motion
US20060262970A1 (en) * 2005-05-19 2006-11-23 Jan Boese Method and device for registering 2D projection images relative to a 3D image data record
CN103123721A (en) * 2011-11-17 2013-05-29 重庆海扶医疗科技股份有限公司 Method and device for reducing artifacts in image in real time
CN103230283A (en) * 2013-04-16 2013-08-07 清华大学 Method for optimizing ultrasonic probe imaging plane space position calibration

Family Cites Families (24)

Publication number Priority date Publication date Assignee Title
US6501981B1 (en) * 1999-03-16 2002-12-31 Accuray, Inc. Apparatus and method for compensating for respiratory and patient motions during treatment
US9572519B2 (en) * 1999-05-18 2017-02-21 Mediguide Ltd. Method and apparatus for invasive device tracking using organ timing signal generated from MPS sensors
DE10015826A1 (en) * 2000-03-30 2001-10-11 Siemens Ag Image generating system for medical surgery
WO2003039370A1 (en) * 2001-11-05 2003-05-15 Computerized Medical Systems, Inc. Apparatus and method for registration, guidance, and targeting of external beam radiation therapy
CN101669831B (en) * 2003-05-08 2013-09-25 株式会社日立医药 Reference image display method
DE102004011156A1 (en) * 2004-03-08 2005-10-06 Siemens Ag Method for endoluminal imaging with movement correction
US20060020204A1 (en) * 2004-07-01 2006-01-26 Bracco Imaging, S.P.A. System and method for three-dimensional space management and visualization of ultrasound data ("SonoDEX")
US8989349B2 (en) * 2004-09-30 2015-03-24 Accuray, Inc. Dynamic tracking of moving targets
US7713205B2 (en) * 2005-06-29 2010-05-11 Accuray Incorporated Dynamic tracking of soft tissue targets with ultrasound images, without using fiducial markers
US7467007B2 (en) * 2006-05-16 2008-12-16 Siemens Medical Solutions Usa, Inc. Respiratory gated image fusion of computed tomography 3D images and live fluoroscopy images
US8126239B2 (en) * 2006-10-20 2012-02-28 Siemens Aktiengesellschaft Registering 2D and 3D data using 3D ultrasound data
JP2010515472A (en) * 2006-11-27 2010-05-13 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ System and method for fusing real-time ultrasound images to pre-collected medical images
US9535145B2 (en) * 2007-11-09 2017-01-03 Koninklijke Philips N.V. MR-PET cyclic motion gating and correction
JP5896737B2 (en) * 2008-04-03 2016-03-30 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Respirometer, Respirometer operating method, and Respiratory computer program
US8131044B2 (en) * 2008-04-08 2012-03-06 General Electric Company Method and apparatus for determining the effectiveness of an image transformation process
US20090276245A1 (en) * 2008-05-05 2009-11-05 General Electric Company Automated healthcare image registration workflow
WO2010055816A1 (en) * 2008-11-14 2010-05-20 株式会社 日立メディコ Ultrasonographic device and method for generating standard image data for the ultrasonographic device
US8317705B2 (en) * 2008-12-10 2012-11-27 Tomtec Imaging Systems Gmbh Method for generating a motion-corrected 3D image of a cyclically moving object
DE102009030110A1 (en) * 2009-06-22 2010-12-23 Siemens Aktiengesellschaft Method for determining the ventilation of a lung
WO2012117381A1 (en) * 2011-03-03 2012-09-07 Koninklijke Philips Electronics N.V. System and method for automated initialization and registration of navigation system
US8831708B2 (en) * 2011-03-15 2014-09-09 Siemens Aktiengesellschaft Multi-modal medical imaging
US11109835B2 (en) * 2011-12-18 2021-09-07 Metritrack Llc Three dimensional mapping display system for diagnostic ultrasound machines
US20130172730A1 (en) * 2011-12-29 2013-07-04 Amit Cohen Motion-Compensated Image Fusion
US9451926B2 (en) * 2012-05-09 2016-09-27 University Of Washington Through Its Center For Commercialization Respiratory motion correction with internal-external motion correlation, and associated systems and methods


Non-Patent Citations (1)

Title
See also references of EP3056151A4 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN107548294A (en) * 2015-03-31 2018-01-05 皇家飞利浦有限公司 Medical imaging apparatus
CN107548294B (en) * 2015-03-31 2021-11-09 皇家飞利浦有限公司 Medical imaging apparatus

Also Published As

Publication number Publication date
CN104574329A (en) 2015-04-29
EP3056151A4 (en) 2017-06-14
US20170020489A1 (en) 2017-01-26
EP3056151B1 (en) 2020-08-19
CN104574329B (en) 2018-03-09
US10751030B2 (en) 2020-08-25
EP3056151A1 (en) 2016-08-17

Similar Documents

Publication Publication Date Title
WO2015051622A1 (en) Ultrasound fusion imaging method and ultrasound fusion imaging navigation system
KR101428005B1 (en) Method of motion compensation and phase-matched attenuation correction in pet imaging based on a few low-dose ct images
JP6745879B2 (en) System for tracking an ultrasound probe in a body part
JP6334821B2 (en) Guide system for positioning a patient for medical imaging
JP6316572B2 (en) Compensation of patient motion in an internal probe tracking system
US9265468B2 (en) Fluoroscopy-based surgical device tracking method
Mori et al. Hybrid bronchoscope tracking using a magnetic tracking sensor and image registration
JP2011502687A (en) Interventional navigation using 3D contrast ultrasound
US9445745B2 (en) Tool shape estimation
JP6620252B2 (en) Correction of probe induced deformation in ultrasonic fusion imaging system
US11488313B2 (en) Generating a motion-compensated image or video
KR20120111871A (en) Method and apparatus for creating medical image using 3d deformable model
JP6131161B2 (en) Image registration apparatus, method, program, and three-dimensional deformation model generation method
WO2018002004A1 (en) Intertial device tracking system and method of operation thereof
WO2019118462A1 (en) Systems, methods, and computer-readable media of estimating thoracic cavity movement during respiration
JP7262579B2 (en) Methods for medical device localization based on magnetic and impedance sensors
KR101993384B1 (en) Method, Apparatus and system for correcting medical image by patient's pose variation
JP2020519367A (en) Workflow, system and method for motion compensation in ultrasound procedures
JP6960921B2 (en) Providing projection dataset
Choi et al. X-ray and magnetic resonance imaging fusion for cardiac resynchronization therapy
JP6692817B2 (en) Method and system for calculating displacement of target object
JP2014212904A (en) Medical projection system
JP7258907B2 (en) Multi-modal imaging registration
Peressutti et al. A framework for automatic model-driven 2D echocardiography acquisition for robust respiratory motion estimation in image-guided cardiac interventions
Zhong Image guided navigation for minimally invasive surgery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14852089

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014852089

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014852089

Country of ref document: EP