WO2008004222A2 - Computer image-aided method and system for guiding instruments through hollow cavities - Google Patents


Info

Publication number
WO2008004222A2
Authority
WO
WIPO (PCT)
Application number
PCT/IL2007/000824
Other languages
French (fr)
Other versions
WO2008004222A3 (en)
Inventor
Tatsuo Igarashi
Shmuel Peleg
Original Assignee
Yissum Research Development Company Of The Hebrew University Of Jerusalem
Chiba University
Application filed by Yissum Research Development Company Of The Hebrew University Of Jerusalem, Chiba University filed Critical Yissum Research Development Company Of The Hebrew University Of Jerusalem
Publication of WO2008004222A2 publication Critical patent/WO2008004222A2/en
Publication of WO2008004222A3 publication Critical patent/WO2008004222A3/en

Classifications

    • A61B 1/042: Endoscopes combined with photographic or television appliances, characterised by a proximal camera, e.g. a CCD camera
    • A61B 1/3132: Endoscopes for introducing through surgical openings, e.g. for laparoscopy
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2051: Tracking techniques: electromagnetic tracking systems
    • A61B 2034/256: User interfaces for surgical systems having a database of accessory information, e.g. including context-sensitive help or scientific articles
    • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361: Image-producing devices, e.g. surgical cameras
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • A61B 2090/364: Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365: Augmented reality, i.e. correlating a live optical image with another image
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G16H 20/40: ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • H04N 7/185: Closed-circuit television [CCTV] systems for receiving images from a single remote source, from a mobile camera, e.g. for remote control

Definitions

  • This invention relates to image-aided diagnostic and guidance systems preparatory to and during the guiding of instruments through hollow tubes and cavities, particularly but not only for medical procedures.
  • Endoscopes are used to obtain fine and colorful raw information of the abdominal cavity and intraluminal cavity of the hollow organs, such as the stomach, colon, throat, trachea, ureter, urinary bladder, urethra, lachrymal duct, vagina, yolk sac, etc. Since the endoscope offers a narrow and magnified view, doctors insert and pull back, rotate and tilt the tip of the endoscope so as to try and observe the complete cavity of the bodily organ imaged by the endoscope. Endoscopic findings may be recorded using visual devices. Still cameras provide fine images, but require many shots to achieve accurate diagnosis.
  • Video cameras can record the whole area of the organ imaged by the endoscope for subsequent storage in suitable video format.
  • Recently, virtual endoscopy of the alimentary tract has been proposed, which reconstructs the 3D structure of hollow organs from CT or MRI images and displays an intra-luminal sight.
  • Since pixels of CT and MRI images contain information relating to intensity and 3D coordinates, it is possible to make an opened, "flattened" pictorial pathological specimen [4].
  • A probe with a video camera is inserted through a narrow opening into the patient's body.
  • The video from this camera helps the surgeon to control a surgical instrument inserted through another opening. See [5, 12, 13] for background.
  • The surgeon faces similar problems to those noted above in that it is difficult to determine the location of the area visible in the video camera, since the video camera produces a magnified view of a very small region that lacks three-dimensional (3-D) information.
  • The properties of the probe (endoscope) often provoke accidental situations, which may turn fatal. Such accidents may be caused by: disorientation relative to the anatomical structure or dissecting direction, injuries to adjacent organs caused during use of forceps outside the visible area, and inaccurate maneuvering owing to lack of spatial recognition.
  • WO05032355 (corresponding to 20070073103) in the name of Emaki, Inc. describes a luminal organ diagnosing device for displaying or printing a developed still image of continuous endoscope moving images of the inner wall of a luminal organ.
  • the device comprises developed still image creating means composed of pipe projection converting means for creating a development in the circumferential direction of the inner wall of a luminal organ for every frame of digital image data captured into the diagnosing device and mosaicing means for connecting strips of the frames of the development drawn by the pipe projection converting means and converting the connected strips into development still image data.
  • WO05077253 in the name of Osaka University discloses an endoscope for imaging the inside of a digestive organ.
  • an omni-directional camera that has a very wide field of view and is used for imaging the inside of a digestive organ, an illuminator, forceps, and a cleaning water jet orifice.
  • a probe-type endoscope has a receiver used for deducing the position and posture of the probe-type endoscope.
  • the image captured by the omni-directional camera is displayed on a display section of an image processing device connected to the probe-type endoscope.
  • Video images captured by the omni-directional camera are mosaiced to create a panoramic image of the inside of the digestive organ.
  • US2003045790 (Lewkowicz et al.) assigned to Given Imaging Ltd. discloses a method and system for positioning an in vivo device and for obtaining a three dimensional display of a body lumen by obtaining a plurality of in vivo images, generating position information corresponding to each in vivo image, and combining the plurality of in vivo images to construct a mosaic image. It is suggested to use image mosaic constructing techniques to display a panoramic view of a body lumen.
  • the in vivo device is an encapsulated miniature camera that may be swallowed and whose progress through the alimentary ducts can thus be monitored.
  • All three of the above references relate principally to the imaging of long tubular organs, along whose axis the camera is guided.
  • Because the camera is thus constrained to move inside a tubular organ, it is much more likely that the camera will image the area of interest and, in any case, significant up-down and side-to-side movement of the camera is usually impossible. These conditions do not apply when, for example, a laparoscope is used to illuminate and image a body cavity, where significant up-down and side-to-side movement of the camera is not only possible but mandatory in order to image the complete area of interest.
  • a laparoscope is inserted through one of these incisions for imaging the work area and projecting an image thereof on a display device for simultaneous view by members of the medical team.
  • the surgeon directs surgical instruments such as forceps through the other incision, while viewing the display device so as to obtain real-time feedback regarding the positioning and operation of the surgical instruments.
  • The instruments and the laparoscope or endoscope may not be directed through a common lumen or cavity, with the consequence that the surgical instrument may not always be in the field of view of the camera.
  • the displayed video images typically lack depth information, requiring the surgeon to estimate the distance of structures by moving the camera laterally or by physically probing the structures to gauge their depth. It has been proposed to use stereo endoscopes to address this drawback, but the surgeon is still limited to viewing only what is directly in front of the camera. It has been proposed [22] to use virtual reality systems to overcome some of these drawbacks. This may require the surgeon to use a head-mounted display for viewing the imaged area thus militating against simultaneous view of the cavity by other members of the surgical team.
  • a method for presenting a stabilized mosaic view of a cavity comprising: maneuvering an imaging device through a first access point for producing successive input video images that include features in an area of interest within said cavity; aligning said input video images to compensate for relative lateral camera motion between successive input video images so as to produce successive aligned video images; mosaicing the successive aligned video images so as to produce a mosaic video image; and displaying the mosaic video image on a display device.
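The align-then-mosaic pipeline of this method can be sketched in code. The following is a minimal pure-Python illustration, not the patent's implementation: lateral camera motion between successive frames is estimated by exhaustive search over small integer shifts that minimizes the sum of absolute differences (SAD). Frames are plain 2-D lists of grey levels, and all names are illustrative.

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized frames."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def shift_frame(frame, dy, dx, fill=0):
    """Return `frame` shifted by (dy, dx); uncovered pixels get `fill`."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

def estimate_translation(prev, cur, search=3):
    """Find the integer shift (dy, dx) of `cur` that best re-aligns it onto
    `prev`, i.e. compensates the lateral camera motion between the frames."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = sad(prev, shift_frame(cur, dy, dx))
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]
```

In a real system this role would be played by a robust image-registration method such as those cited in [6, 18-21]; the exhaustive SAD search merely makes the idea concrete.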
  • a system for presenting a stabilized mosaic view of a cavity comprises: an image generator for receiving and aligning successive input video images formed by maneuvering an imaging device through a first access point in said cavity to compensate for relative lateral camera motion between successive input video images so as to produce successive aligned video images and for mosaicing the successive aligned video images so as to produce a mosaic video image, and an auxiliary image display coupled to the image generator for displaying said mosaic image.
  • the mosaic video image is a panoramic image having a field of view that is wider than a field of view of the input video images.
  • the image generator generates a panoramic image. The benefit in this case is an increased field of view.
  • the mosaic video image has a field of view that is substantially equal to a field of view of the input video images and the image generated by the image generator will not then be panoramic.
  • the benefit in this case is a stabilization of the video for increasing the convenience of the surgeon, and for enabling better understanding of 3D structure.
  • the instrument is a surgical instrument and the cavity is a body cavity.
  • the invention proposes the use of a panoramic image generator that takes as input a video stream from a video camera attached to a surgical instrument such as an endoscope or laparoscope or from a CCD of such an instrument directly, and while the surgical instrument or the video camera scans an area of interest the panoramic generator stitches the video frames into a panoramic mosaic.
  • the panoramic mosaic may include 3-D information as known per se [14]. This technique may be used to display forceps entering the site from a region outside the field of vision when required.
  • the generated panoramic mosaic may be displayed on an auxiliary panoramic monitor located near the ordinary monitor displaying the video stream from the video camera with stabilization of image shake.
  • auxiliary panoramic display system can serve not only in medical procedures such as internal examination and surgery, but in any application where a probe having a narrow field of view is used.
  • Such applications include, for example, the inspection of sewer pipes, dermatological observation of the skin, and ophthalmological apparatus.
  • Fig. 1 is a schematic representation depicting a working environment for which the present invention is particularly beneficial;
  • Fig. 2 is a block diagram showing schematically a system according to an embodiment of the invention in conjunction with a conventional video system;
  • Fig. 3 is a block diagram showing schematically a detail of a panoramic image generator used in the system shown in Fig. 2;
  • Fig. 4 is a flow diagram showing principal operations carried out during use of the system shown in Fig. 2;
  • Fig. 5 shows pictorially a system for displaying panoramic and regular views for use in surgical procedures;
  • Fig. 6 is a block diagram showing schematically a detail of a computer-assisted diagnostic system used in the system shown in Fig. 2.
  • Fig. 1 is a pictorial representation of a working environment, designated generally as 1, for which the present invention is particularly beneficial.
  • the working environment 1 includes some sort of cavity 2 that may be a hollow tube such as a pipe or in the case of surgical applications a body lumen or a cavity such as an abdominal cavity.
  • the invention relates to an operation that is performed by an operator in an area of interest 3 within the cavity 2 using an instrument 4 that is maneuvered through a first access point 5 in the cavity 2.
  • the operator views an image of the area of interest 3 as viewed by a camera 6 that is maneuvered within the cavity 2 through a second access point 7, different from the first access point 5.
  • Although the invention will now be described with particular regard to surgical procedures, it should be understood that it is applicable to the general situation shown in Fig. 1. More generally, the invention relates to the situation where the instrument and the camera are not adjacent to each other. Thus, the invention also embraces the possibility that the cavity has a single access point through which both the instrument and the camera are maneuvered independently of one another. This, of course, is different from conventional endoscopy, where the camera is mounted at the end of the endoscope as described in above-referenced WO05077253, such that the camera and the instrument necessarily subtend the same line of sight with the area of interest.
  • Fig. 2 shows schematically a system 10 comprising a main video system 11 coupled to an auxiliary panoramic system 12.
  • the main video system 11 comprises a video camera 6 (shown in Fig. 1) that creates a video stream 13 that is fed to a video display 14.
  • the auxiliary panoramic system 12 comprises a panoramic image generator 15 that produces a panoramic image 16.
  • the panoramic image generator 15 is coupled to a computer-assisted diagnostic system 17, which is coupled to a panoramic image recording device 18 and to a printer 19 that is also coupled to the panoramic image generator 15 for printing the image produced thereby.
  • The computer-assisted diagnostic system 17 may be used to produce enhanced images or to combine a panoramic image with an external image produced by conventional imaging systems, as described later with reference to Fig. 6 of the drawings.
  • Fig. 3 is a block diagram showing schematically a detail of the panoramic image generator 15 according to an embodiment of the invention, which comprises an image acquisition unit 20.
  • This component takes frames from the incoming video stream, and stores them in computer memory.
  • Such video grabbing components are known per se.
  • the data stored by the image acquisition unit 20 is processed by an image motion computation module 21, which computes the motion of the camera or the video image and provides this information to a panoramic image stitching unit 22, which mosaics the image to form a composite panoramic view that is displayed on the auxiliary panoramic image display unit 16.
  • the image motion computation module 21 comprises an image based motion analysis unit 24, a motion tracking unit 25 and a probe motion measuring unit 26.
  • the image based motion analysis unit 24 uses the incoming frames from the video sequence, and using image analysis methods such as described in [6,18-21] determines the change between the camera positions for each video frame or the image displacement between frames.
  • the motion tracking unit 25 may be in the form of trackers mounted on the camera or on the probe holding the camera, for measuring its location using known tracking methods. Examples include trackers based on magnetic fields, or trackers based on gyroscopes such as are used in virtual reality systems [9, 10].
  • the probe measuring unit 26 may be an external device for measuring the motion of medical probes whose use in orthopedic surgery is known [7, 8]. Markers are attached to the part of a rigid probe that is outside the body of the patient.
  • a set of video cameras are located in the surgery room at appropriate positions, for tracking the markers.
  • The system can compute, to some accuracy, the positions of the markers and, based on knowledge of the structure of the probe, the position of the end of the probe inside the body can be computed as well.
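For a rigid straight probe, the tip computation described above reduces to vector extrapolation along the probe axis. The sketch below is a deliberate simplification under stated assumptions: two tracked markers lie on the probe axis outside the body, and the distance from the outer marker to the tip is known from the probe's structure.

```python
def probe_tip(marker_a, marker_b, tip_distance):
    """Extrapolate the probe tip position from two tracked markers on the
    rigid probe's axis.  `marker_b` is the marker nearer the entry point;
    the tip lies `tip_distance` beyond it along the axis.  Points are
    (x, y, z) tuples, distances in the same units as the coordinates."""
    d = [b - a for a, b in zip(marker_a, marker_b)]   # axis direction
    n = sum(c * c for c in d) ** 0.5                  # axis segment length
    u = [c / n for c in d]                            # unit direction vector
    return [b + tip_distance * c for b, c in zip(marker_b, u)]
```

The same computation, with markers of different colors or shapes, gives the positions of other tools such as forceps.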
  • the markers can be attached to the probe on which the video camera is mounted, and the position of the camera can thus be computed.
  • position is meant the spatial location and direction in 3D space.
  • the position of other surgical tools, such as forceps can be computed by attaching markers of different colors or shapes.
  • Fig. 4 is a flow diagram showing principal operations carried out during use of the system 10.
  • A method is provided for presenting a stabilized panoramic view of a cavity for guiding an instrument when a procedure is carried out in the cavity by an operator of the instrument. The operator maneuvers the instrument to an area of interest through a first access point of the cavity.
  • an assistant (or possibly the operator) maneuvers an imaging device through a second access point different from the first access point for producing successive video images that include features in the area of interest and the instrument.
  • the video images are aligned to compensate for relative lateral camera motion between successive video images so as to produce successive aligned video images.
  • Successive aligned video images are mosaiced so as to produce a panoramic video image, which is displayed for simultaneous viewing by the operator and the assistant.
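The mosaicing step can be illustrated as pasting each aligned frame into a canvas at its accumulated offset; the canvas spans the union of all frame footprints, so its field of view is wider than that of any single input frame. A minimal sketch (illustrative, not the patent's stitching unit; overlapping regions are simply overwritten by the most recent frame, with no blending):

```python
def build_mosaic(frames, offsets):
    """Paste equally sized frames into one canvas at their accumulated
    (dy, dx) offsets.  The canvas covers the union of all frame footprints;
    later frames overwrite earlier ones where they overlap."""
    h, w = len(frames[0]), len(frames[0][0])
    min_y = min(dy for dy, _ in offsets)
    min_x = min(dx for _, dx in offsets)
    H = h + max(dy for dy, _ in offsets) - min_y   # canvas height
    W = w + max(dx for _, dx in offsets) - min_x   # canvas width
    canvas = [[None] * W for _ in range(H)]
    for frame, (dy, dx) in zip(frames, offsets):
        oy, ox = dy - min_y, dx - min_x            # frame origin on canvas
        for y in range(h):
            for x in range(w):
                canvas[oy + y][ox + x] = frame[y][x]
    return canvas
```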
  • a particular feature of the present invention is that the imaging device and the instrument may both subtend different lines of sight with the area of interest from different perspectives, while nevertheless allowing the surgeon (or other operative in the case of non-medical applications) to see a panoramic view that is free of camera shake and that displays a wide area of interest that includes the instrument.
  • the effect of camera shake may be reduced by aligning successive video images after neutralizing camera motion. This may be done as described in US Patent 6,798,897 [18] and US 2006/0280334 [20] both of whose contents are incorporated herein by reference. Such an approach is particularly suited to image alignment of substantially static images.
  • The invention may also be used when an object of interest is dynamic, such as when operating on moving or pulsating organs. In this case, it may be more appropriate to employ the method described in US 2006/215934 [21], commonly assigned to one of the present applicants and sharing a common inventor, whose full contents are incorporated herein by reference.
  • the motion computation module 21 is shown as comprising several modules, it should be understood that the invention can use any image or camera motion analysis system, or any combination of systems to enhance each other, and does not depend on the particular method used for motion analysis. So, for example, any one of the sub-modules 24, 25 and 26 may be used on its own.
  • the panoramic image stitching unit 22 is an image mosaicing system, which uses the video frames together with the motion analysis information, and stitches the frames together into a panoramic mosaic image.
  • Image mosaicing systems are described in US Pat. No. 6,075,905 [11]. It will be appreciated that the panoramic image stitching unit 22 may be adapted to generate stereoscopic panoramic images using known techniques such as described, for example, in US Pat. No. 6,665,003 [15].
  • the panoramic image stitching unit 22 writes the generated mosaic into memory for display by the auxiliary panoramic image display unit 23 that is typically disposed near the video display 14 of the main system 11.
  • the system 10 may also have the following additional features.
  • the image motion computed for the input video frames may be used to generate a stabilized video.
  • the stabilized video can provide enhanced visualization for the physician by stabilizing fast movements so as to allow the practitioner to observe the area of interest precisely without vibration caused by shaking hands or heartbeat, etc.
  • During video stabilization, the field of view changes because of the vibrations of the camera: all images are moved to a common field of view (FOV), and most original images leave parts of the common FOV uncovered because they have moved.
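The relationship between stabilization and the mosaic can be sketched as follows: each frame is shifted back by its estimated global motion into the common FOV, and the pixels that this shift leaves uncovered are filled from the previously built mosaic. This is a hypothetical simplification using integer translations only:

```python
def stabilize(frame, motion, mosaic_patch):
    """Shift `frame` back by its estimated global `motion` (dy, dx) into the
    common field of view; pixels the shift leaves uncovered are taken from
    `mosaic_patch`, the portion of the mosaic covering the same FOV."""
    dy, dx = motion
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                row.append(frame[sy][sx])       # stabilized pixel
            else:
                row.append(mosaic_patch[y][x])  # uncovered: fill from mosaic
        out.append(row)
    return out
```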
  • The camera motion may be stabilized so as to create three-dimensional effects by using motion parallax when the camera is translated.
  • normally three dimensional vision is perceived by the brain as a result of the two eyes viewing an object from slightly different lines of sight.
  • the differences in the left and right eye views are interpreted by the brain as depth information.
  • Even in the absence of different left and right eye views, it is still possible to perceive depth information by moving one's head. This is done, for example, by people having only one eye, who are still able to perceive depth by virtue of repeated head movements, which generate parallax errors between successive images presented to the brain, allowing the brain to interpret these parallax errors as depth information. In the invention, although two cameras may be used, there is typically only a single camera that, at any instant of time, sees an object in the area of interest from a single line of sight.
  • depth information may be obtained by deliberately moving the camera from side to side, so as to generate successive frames of video data wherein the object of interest appears in successive frames with motion parallax that allows depth information to be computed.
  • This is particularly useful, for example, when the surgical instrument is moved relative to a static background enabling depth information of the surgical instrument to be computed allowing the surgical instrument to be displayed in 3-D, and enhancing the surgeon's sense of where the tip of the surgical instrument is located relative to the cavity.
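The depth computation enabled by motion parallax follows the usual stereo relation: for a camera translated laterally by a baseline b, a point imaged with disparity d pixels at focal length f pixels lies at depth z = f * b / d. A one-line sketch (the symbols are the standard ones, not taken from the patent):

```python
def depth_from_parallax(focal_px, baseline, disparity_px):
    """Depth of a point from the parallax (disparity, in pixels) it shows
    between two camera positions separated laterally by `baseline`.
    The result is in the same units as `baseline`."""
    if disparity_px <= 0:
        raise ValueError("point must show positive parallax")
    return focal_px * baseline / disparity_px
```

For example, with a focal length of 500 px, a deliberate 10 mm side-to-side camera movement, and an observed disparity of 25 px, the point lies at 200 mm.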
  • the stabilized video will stabilize for global motion only, and will not cancel motion parallax. Motion parallax in a stabilized video will give the surgeon 3D sense of the region.
  • The instrument may be a surgical instrument, e.g. forceps, and the cavity may be a lumen or a non-tubular cavity such as an abdominal cavity.
  • Fusion of the panoramic image with 3D-CT (computerized tomography) and 3D-MRI (magnetic resonance imaging) images, combined with other algorithms or devices such as enhancement of color information or of 3D structure, and a "flattened" panoramic picture, leads to automated detection of abnormal areas, enabling the establishment of a computer-aided diagnostic system for hollow organs.
  • a computer aided diagnostic system of CT images has been developed for diagnosis of lung cancer.
  • Enhancement of color information such as Narrow Band Imaging (Olympus Co.), and three dimensional information, and setting of proper cutoff level of color for delineating between protuberances and hollows, allows automated detection of the lesion.
  • the system 10 allows movement of the surgical instrument to be tracked. This tracking facilitates automatic guidance of the camera onto the area of surgery.
  • the "camera” refers to the view field of the endoscope, or to the tip of the endoscope.
  • the motion of the tools visible in the laparoscopic camera can be analyzed, and some motions of forceps can be considered as a cue signal to start previously assigned motion of a robotic system handling the camera.
  • The surgeon orders a predefined action from the tracking system by a special motion of the forceps, thus allowing seamless control of the tracking system without releasing the forceps. Tracking also enables the surgeon to comprehend the direction of the forceps, so that coarse detection of the site of the trocar through which the forceps were introduced is possible.
  • The "trocar" is a sharp-pointed surgical instrument, used with a cannula to puncture a body cavity such as the abdominal wall to introduce forceps.
  • In laparoscopic surgery, the surgeon must concentrate his attention on the operative field, and cannot watch the site of the trocar where the forceps are inserted. Since accidental injury may occur on changing the forceps, the surgeon must repeatedly pan the view from the operative field to the trocar site and back again. This may result in the surgeon losing the location of both sites.
  • the direction of both sites can be displayed. The system can indicate the direction of the trocar in the display even when the forceps do not appear in the display.
  • the tracking system may be configured to control other robotic devices by some allotted action of the forceps.
  • the motion tracking unit 25 is able to replace the control of some of the devices.
  • In conventional robotic surgical systems such as Da Vinci, panning of the endoscope and zooming in and out are performed via a switch operated by the surgeon's arm or foot, the operation of which can momentarily distract the surgeon.
  • the motion tracking unit 25 can obviate the need for such movement by reacting to different motions of the tools within the field of view of the imaging system. For example, shaking the forceps twice in the center of the display means zoom in, shaking them twice at the right side of the display means pan to the right, etc.
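One way such gesture commands could be realized is to classify the tool tip's recent trajectory, counting horizontal direction reversals to detect a "shake" and mapping the screen region to a command. The thresholds and command names below are illustrative assumptions, not part of the specification:

```python
def interpret_tool_gesture(xs, display_width):
    """Map a forceps-tip x-trajectory (a list of x positions from the
    tracker) to a camera command. A deliberate 'shake' is read as at
    least three reversals of the horizontal direction of motion; the
    command is then chosen from the screen third in which it occurred."""
    reversals = 0
    prev_dx = 0
    for a, b in zip(xs, xs[1:]):
        dx = b - a
        if dx and prev_dx and (dx > 0) != (prev_dx > 0):
            reversals += 1
        if dx:
            prev_dx = dx
    if reversals < 3:
        return None                      # ordinary tool motion, no command
    mean_x = sum(xs) / len(xs)
    if mean_x < display_width / 3:
        return "pan_left"
    if mean_x > 2 * display_width / 3:
        return "pan_right"
    return "zoom_in"
```

A production system would track the tool tip in 2D, debounce over time, and reject motions that resemble normal surgical maneuvers.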
  • the 3D information of the system 10 enables a fine fit between the panoramic image and previously captured 3D-CT and 3D-MRI images, whereby the 3D-CT and 3D-MRI images are overlaid accurately on the panoramic image using common landmarks.
  • This fitting can be applied in images of the abdominal cavity, and the hollow organs, such as throat, stomach, colon, urinary tract, etc. This can help in 3D navigation, and in anticipating organs not visible in the video images (like arteries, etc.).
  • 3D-CT and 3D-MRI depict architecture of organs, vessels, bone, and so on, in an abdominal cavity precisely.
  • such architecture cannot be seen directly in open surgery and laparoscopic surgery.
  • the surgeon can see fat that covers organs.
  • Display of a panoramic view shows a large field of view of the real image, allowing the surgeon to determine the proper dissecting plane. This becomes easier and safer when 3D images of the vessels, lymph nodes and critical organs are projected on the laparoscopic image.
  • Conventional methods require the respective axes of the 3D- CT image and of the laparoscopic image to be adapted by proper placement of the trocar.
  • Use of a panoramic picture makes the fusion simpler by establishing several landmarks at the operating theater.
  • the system 10 supplies both a magnified view and a panoramic view simultaneously, originating from a surgical instrument such as a conventional endoscope or laparoscope, with 3D information, and records both pictures together with medical information relevant to findings on the panoramic picture.
  • the system 10 contributes several benefits as follows.
  • the panoramic picture indicates location and range of expansion of the lesion.
  • the combination of still, magnified pictures and a panoramic picture provides visual information telling what lesions exist, where, and to what extent. It enables physicians to estimate and record characteristics of the lesions objectively. Since the visual information is easy for patients, co-medical staff, and doctors to understand, it contributes toward mutual sharing of correct and acceptable patient information.
  • the system can present a panoramic view of whole organs in the abdominal cavity in a single picture allowing anatomical structures of organs to be seen systematically.
  • Images may be processed so as to highlight features in the area of interest. This may include changing a display attribute of the features to be highlighted.
  • the lymph nodes can be stained to render them more visible as is known per se [16].
  • Combining the panoramic view with such a color-analyzing system or staining method enhances the efficacy of detecting lesions with faint color and helps to prevent such lesions being overlooked. It is also useful during lymph node dissection for understanding the anatomical structure of lymphatic tissues.
  • a panoramic view indicates the site from which the specimen was taken. It can thus provide evidence that the specimen was punched out correctly from the targeted lesion when panoramic pictures made before and after a biopsy are compared. Panoramic pictures are also informative for pathologists.
  • the system offers 3D information by motion parallax [14, 15] which enhances recognition and estimation of some lesions. Observation and recording of the lesion with 3D information is important because the shape of the lesion itself has diagnostic value, indicating whether the tumor is benign, or has aggressive character etc.
  • a panoramic view indicates the extent of surgical maneuvering inside the abdomen. It can therefore document the surgical situation chronologically when recorded from time to time during surgery. Previously, patients had inadequate information about how the surgery was carried out, apart from a verbal explanation accompanied by some segmental photographs and freehand drawings by surgeons, or via video tapes recording the whole surgical process, which require professional knowledge to understand.
  • the recorded panoramic still images displayed in chronological order provide a much more compact overview of milestone events carried out during the procedure and explain eloquently to the patient and family the surgical process and its quality. Such images may also be used as an educational aid to students.
  • Visualization of 3D structure enables surgeons to comprehend the spatial position of the surgical point and the tips of the forceps. Since surgical maneuvers require precise recognition of tissues, forceps, cutting devices and clips, it helps in carrying out reliable maneuvering.
  • the system according to the invention can merge CT and MRI images with panoramic images, after making an opened and "flattened" view of each image. This helps recognition of the extent of invasion and superficial expansion of the disease, and enables making an accurate plan for resection before surgery. It aids in recognition of hidden organs such as arteries, veins, lymph nodes, and retroperitoneal organs covered with thick fat tissue, which are difficult to detect by laparoscopic observation. Anatomical diagnosis of these organs is made before surgery by CT and MRI. Thus, fusion of CT, MRI and the panoramic view functions as a navigation system, and contributes toward safer surgery by avoiding sudden hemorrhage or injuries to adjacent organs.
  • the system according to the invention substitutes for conventional video records of surgery, with lower memory requirements and faster viewing time.
  • the system can be applied to endoscopic examination, and to microscopic examination aimed at tele-pathology and surgery.
  • the magnified view demonstrates the conventional endoscopic view.
  • the panoramic view demonstrates the whole scene, which affords precise identification of the location of any lesions inside the lumen.
  • a rigid scope is used as a laparoscope, cystoscope, ureteroscope, nephroscope, arthroscope, endoscope for mammary duct and lachrymal duct, etc.
  • the system 10 requires only video signals, whether analog or digital, from conventional apparatus. This means that the system 10 can directly employ conventional endoscopic apparatus and devices.
  • the system reduces shake of a laparoscopic or endoscopic image, and thereby reduces fatigue and boosts the concentration of the practitioner.
  • Fig. 5 shows pictorially a system 30 having two monitors 14, 16 for locating near the patient.
  • the monitor 14 displays a magnified view and the monitor 16 is part of the auxiliary panoramic system 12 described above with reference to Fig. 1 for displaying a panoramic view.
  • Two foot-operated switches are set near the doctor's foot.
  • One is a "freeze switch" 33 that freezes the panoramic view.
  • Another switch is a "record switch" 34 for recording the current view. For example, if the doctor finds a favorable panoramic view, he steps on the freeze switch to freeze the panoramic view, and then steps on the record switch to record the picture.
  • the recorded image is stored in a database 35 and the operation may be accompanied by a shutter sound vocalized by a loudspeaker 36 to provide audible feedback.
  • the photograph just taken may appear for a few seconds with information indicating "abnormal" findings, whereafter the panoramic view monitor returns to real-time mode automatically. This may be further improved by providing a call-back function. For example, if the doctor steps on the freeze switch to freeze the panoramic view, but the timing was bad, or the doctor wants to check a previous view again, the panoramic view monitor steps back several frames on each subsequent press of the freeze switch.
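The freeze and call-back behaviour could be backed by a small ring buffer of recent panoramic frames; in the sketch below the class name, buffer depth and step size are assumptions for illustration only:

```python
from collections import deque

class PanoramaFreezeBuffer:
    """Keeps the last `depth` panoramic frames so a 'freeze' can be
    stepped back through recent history (the call-back function)."""

    def __init__(self, depth=30):
        self._frames = deque(maxlen=depth)
        self._cursor = None              # None => live view

    def push(self, frame):
        """Record a new frame; ignored while the display is frozen."""
        if self._cursor is None:
            self._frames.append(frame)

    def freeze(self, step=5):
        """First press freezes on the newest frame; each further press
        steps `step` frames further back."""
        if not self._frames:
            return None
        if self._cursor is None:
            self._cursor = len(self._frames) - 1
        else:
            self._cursor = max(0, self._cursor - step)
        return self._frames[self._cursor]

    def unfreeze(self):
        """Return the monitor to real-time mode."""
        self._cursor = None
```

The real device would store image frames rather than opaque objects, but the switch logic is the same.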
  • the monitor 16 may display a panoramic image overlapping a transparent 3D-CT or 3D-MRI image. It is possible to toggle between the panoramic image, 3D-CT or 3D-MRI images, by stepping on a foot switch.
  • the doctor or an assistant may point to landmarks in the currently displayed image using a sterilized device such as a joystick or a touch panel on the display. Then, both images are automatically adapted to each other. By way of example, this may be done for a laparoscopic procedure as follows: 1. Before surgery, determine the site of laparoscopic insertion.
  • the computer-assisted diagnostic system 17 includes an image combiner 40 for overlaying the panoramic video image on a 3-dimensional image of the cavity, including the area of interest 3, produced by computerized tomography or magnetic resonance imaging.
  • the image combiner 40 includes a landmark processor 41 for flattening hollow organs and establishing common landmarks of each image.
  • a comparator 42 is coupled to the landmark processor for comparing corresponding frames of the different images and transforming image frames of one of the images so that the landmarks coincide.
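A minimal sketch of the comparator's job, assuming the landmark processor has already produced corresponding landmark lists for the two images: fit the transform that best superimposes the landmarks and apply it to one image. A real system would fit an affine or perspective transform; a pure translation keeps the illustration dependency-free:

```python
def landmark_translation(src_pts, dst_pts):
    """Least-squares translation mapping `src_pts` onto `dst_pts`.
    For a translation-only model the least-squares solution is simply
    the mean displacement between corresponding landmarks."""
    n = len(src_pts)
    tx = sum(d[0] - s[0] for s, d in zip(src_pts, dst_pts)) / n
    ty = sum(d[1] - s[1] for s, d in zip(src_pts, dst_pts)) / n
    return tx, ty

def apply_translation(pts, t):
    """Apply the estimated translation to a list of (x, y) points
    (standing in for warping a whole image frame)."""
    tx, ty = t
    return [(x + tx, y + ty) for x, y in pts]
```

With an affine or perspective model the same structure holds: estimate parameters from landmark correspondences, then warp one image so the landmarks coincide.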
  • 3D information is combined with the panoramic view, and is recorded as a picture.
  • 3D information is recognized as animation operated by a pointing device such as a mouse or joystick.
  • the scope can be controlled by a robotic arm, whereupon an electrical signal from a device such as a foot switch or a hand piece attached to the forceps induces a shaking motion of the robotic arm.
  • While it will be difficult to view the original video as displayed on the monitor 14, the monitor 16 will display a stabilized image, where the only motion will be motion parallax. This will provide 3D perception to the surgeon.
  • spontaneous turbulence (vibration) of the endoscope is cancelled. This reduces fatigue of the doctor, and allows the doctor to concentrate better on the examination, resulting in safer maneuvering.
  • automated indication of an "abnormal" lesion by morphological and optical analysis is applied to the panoramic view of the endoscope.
  • a system can also be adapted to display a panoramic view of microscope and ophthalmoscope images.
  • the motion tracking unit 25 may be adapted to control a surgery-assisting robotic system, such as a laparoscope-controlling robotic arm or a master-slave robotic surgical system, in which the laparoscope continues to display and magnify the associated surgical field automatically. This can be performed by tracking the surgical tools using any known computer vision tracking method.
  • the motion tracking unit 25 can be adapted to indicate the direction of the trocars in the periphery of the display.
  • the motion tracking unit 25 can also afford detection of lesions or lymph nodes, stained or enhanced by color analysis.
  • the recording modality must conform to the DICOM standard and other medical image transfer systems in order to send real-time pictures to other sections of the hospital or to a remote location.
  • video denotes any series of image frames that, when displayed at a sufficiently high rate, produces the effect of a time-varying image.
  • image frames are generated using a video camera and in real-time applications such as medical procedures this is probably mandatory.
  • the invention is not limited in the manner in which the image frames are formed and is equally applicable to the processing of image frames created in other ways, such as animation, still cameras adapted to capture repetitive frames, and so on. Such techniques may be employed in applications that do not require real-time processing of video frames that provide an instantaneous view of the imaged area.
  • the system may be a suitably programmed computer.
  • the invention contemplates a computer program being readable by a computer for executing the method of the invention.
  • the invention further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the method of the invention.

Abstract

Method and system for presenting a stabilized mosaic view of a cavity (2) wherein an imaging device (6) is maneuvered through a first access point (7) for producing successive input video images that include features in an area of interest (3) within the cavity and the input video images are aligned to compensate for relative lateral camera motion between successive input video images so as to produce successive aligned video images that are mosaiced to produce a mosaic video image that is displayed on a display device (16).

Description

Computer image-aided method and system for guiding instruments through hollow cavities
RELATED APPLICATIONS
This application claims benefit of provisional applications Ser. Nos. 60/806,481 filed Jul. 3, 2006 and 60/890,489 filed Feb. 18, 2007, whose contents are included herein by reference.
FIELD OF THE INVENTION
This invention relates to image-aided diagnostic and guidance systems preparatory to and during the guiding of instruments through hollow tubes and cavities, particularly but not only for medical procedures.
PRIOR ART Prior art references considered to be relevant as a background to the invention are listed below and their contents are incorporated herein by reference. Additional references are mentioned in the above-mentioned US provisional applications nos. 60/806,481 and 60/890,489 and their contents are incorporated herein by reference. Acknowledgement of the references herein is not to be inferred as meaning that these are in any way relevant to the patentability of the invention disclosed herein. Each reference is identified by a number enclosed in square brackets and accordingly the prior art will be referred to throughout the specification by numbers enclosed in square brackets. [1] A.P. Royster, H.M. Fenlon, P.D. Clarke, D.P. Nunes, J.T. Ferrucci. CT colonoscopy of colorectal neoplasms: two dimensional and three-dimensional virtual reality techniques with colonoscopic correlation. Am J Roentgenol. 1997 Nov; 169 (5): 1237-42.
[2] A. Laghi, C. Catalano, V. Panebianco, R. Iannaccone, S. Iori, R. Passariello. Optimization of the technique of virtual colonoscopy using a multislice spiral computerized tomography. Radiol Med (Torino) 2000 Dec; 100 (6): 459-64. [3] H.M. Fenlon, D.P. Nunes, P.C. Schroy 3rd, M.A. Barish, P.D. Clarke, J.T. Ferrucci. A comparison of virtual and conventional colonoscopy for the detection of colorectal polyps. N Engl J Med 1999 Nov 11; 341 (20): 1496-503.
[4] S. Haker, S. Angenent, A. Tannenbaum, R. Kikinis. Nondistorting Flattening Maps and the 3D Visualization of Colon CT Images, IEEE Trans Med Imaging, 2000; 19 (7): 665-7.
[5] Russell, K. M., T. J. Broderick, et al. (2001). "Laparoscopic telescope with alpha port and AESOP to view open surgical procedures." J Laparoendosc Adv Surg Tech A 11(4): 213-8. [6] US Patent 6,173,087 "Multi-view image registration with application to mosaicing and lens distortion correction", R. Kumar, H. Sawhney, and J. Bergen.
[7] Systems for visual tracking of medical instruments include "Optotrak" by NDI (http://www.ndigital.com/optotrak.html), "MicronTracker" by Claron Technology (http://www.clarontech.com/). [8] Passive/active visual trackers by Traxtal Technologies (http://www.traxtal.com/home.htm#/products/traxtal_products.htm).
[9] Tracking by magnetic field includes "Aurora" by NDI (http://www.ndigital.com/aurora.html).
[10] Magnetic tracking systems by Traxtal Technologies (http://www.traxtal.com/home.htm#/products/traxtal_products.htm).
[11] US Patent 6,075,905 "Method and apparatus for mosaic image construction" J. Herman, J. Bergen; S. Peleg; V. Paragano; D. Dixon, P. Burt H. Sawhney, G. Gendel, R. Kumar, and M. Brill.
[12] Chan, A. C, S. C. Chung, et al. (1997). "Comparison of two-dimensional vs three-dimensional camera systems in laparoscopic surgery" Surg Endosc 11(5):
438-40.
[13] Kourambas, J. and G. M. Preminger (2001). "Advances in camera, video, and imaging technologies in laparoscopy" Urol Clin North Am 28(1): 5-14.
[14] S. Peleg, M. Ben-Ezra, and Y. Pritch, OmniStereo: Panoramic Stereo Imaging, IEEE Trans. on PAMI, March 2001, pp. 279-290.
[15] US Patent 6,665,003 "System and method for generating and displaying panoramic images and movies", S. Peleg, M. Ben-Ezra, and Y. Pritch. [16] Bilchik AJ, Trocha SD (2003). "Lymphatic mapping and sentinel node analysis to optimize laparoscopic resection and staging of colorectal cancer: an update"
Cancer Control 10(3): 219-23.
[17] Y. Rosenberg and M. Werman "Representing local motion as a probability distribution matrix and object tracking", DARPA Image Understanding
Workshop, New Orleans, May 1997, pages 153-158. Morgan Kaufman. [18] US Patent 6,798,897 "Real time registration, motion detection and background replacement using discrete local motion estimation", Y. Rosenberg. [19] US Patent 5,999,662 "System for automatically aligning images to form a mosaic image", P. Burt, M. Irani, S. Hsu, P. Anandan, and M. Hansen.
[20] US 2006/0280334 Fast and robust motion computations using direct methods by
Alexander Rav-Acha. [21] US 2006/0215934 Online video registration of dynamic scenes using frame prediction by Alexander Rav-Acha, Shmuel Peleg and Yael Pritch. [22] Augmented Reality Visualization for Laparoscopic Surgery by Henry Fuchs et al. presented at First International Conference on Medical Image Computing and
Computer-Assisted Intervention (MICCAI) October 11-13, 1998.
BACKGROUND OF THE INVENTION
It is known to use video-imaging to guide endoscopes and laparoscopes through hollow tubes or through the abdominal cavity during medical procedures. Endoscopes are used to obtain fine and colorful raw information of the abdominal cavity and intraluminal cavity of the hollow organs, such as the stomach, colon, throat, trachea, ureter, urinary bladder, urethra, lachrymal duct, vagina, yolk sac, etc. Since the endoscope offers a narrow and magnified view, doctors insert and pull back, rotate and tilt the tip of the endoscope so as to try and observe the complete cavity of the bodily organ imaged by the endoscope. Endoscopic findings may be recorded using visual devices. Still cameras provide fine images, but require many shots to achieve accurate diagnosis. Video cameras can record the whole area of the organ imaged by the endoscope for subsequent storage in suitable video format. Recently virtual endoscopy of the alimentary tract has been proposed, which reconstructs the 3D structure of hollow organs from CT or MRI images and displays an intra-luminal sight that traces an
endoscopic view moving inside it [1-3]. Since pixels of CT and MRI images contain information relating to intensity and 3D coordinates, it is possible to make an opened, "flattened" pictorial pathological specimen [4].
Some of the problems facing the medical practitioner in using conventional endoscopes are to determine and record the location of lesions accurately, since endoscopes produce a magnified view of a very small region. It is known to take pictures of the intra-luminal cavity of the hollow organs, form a 3D image of the whole area of the cavity imaged by the endoscope, and then write down the findings in a chart or clinical record with pictures attached to it. This method of recording endoscopic findings harbors risks of misdiagnosis and may lead to inappropriate surgery owing to the following reasons:
1. Doctors easily find obvious lesions and take photographs of them. However, doctors may not record all of the areas where no apparent lesion was found, and may overlook very small lesions, which cannot be re-assessed until the day the next endoscopic examination is scheduled.
2. During surgery, a surgeon determines the site of resection by observing the organ from the "extra-luminal" space, without simultaneous observation of the intra-luminal cavity as achieved by endoscopic examination. Lack of accurate information to the surgeon about the location of the lesion always harbors a risk of inappropriate resection of the lesions.
In laparoscopic surgery, a probe with a video camera is inserted through a narrow opening into the patient's body. The video from this camera helps the surgeon to control a surgical instrument inserted through another opening. See [5, 12, 13] for background. The surgeon faces similar problems to those noted above in that it is difficult to determine the location of the area visible in the video camera, since the video camera produces a magnified view of a very small region that lacks three-dimensional (3-D) information. The properties of the probe (endoscope) often provoke some accidental situations, which may turn fatal. The following instances may cause such accidents: disorientation of the anatomical structure or dissecting direction, injuries to adjacent organs caused during use of forceps outside the visible area, and inaccurate maneuvering owing to lack of spatial recognition. In order to imagine the location covered by the field of view of the camera as well as the viewed organ and dissecting plane, the surgeon sometimes rotates around and pulls back the camera to scan the scene. Additionally, the scope is maneuvered manually by an assistant resulting in shaking of the image, which is apt to cause the surgeon fatigue. WO05032355 (corresponding to 20070073103) in the name of Emaki, Inc. describes a luminal organ diagnosing device for displaying or printing a developed still image of continuous endoscope moving images of the inner wall of a luminal organ. The device comprises developed still image creating means composed of pipe projection converting means for creating a development in the circumferential direction of the inner wall of a luminal organ for every frame of digital image data captured into the diagnosing device and mosaicing means for connecting strips of the frames of the development drawn by the pipe projection converting means and converting the connected strips into development still image data.
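The pipe projection described above amounts to resampling each video frame along concentric rings around the lumen center, one row per radius and one column per angle. The following is a minimal nearest-neighbour sketch of that idea; the function name, parameters and zero padding are illustrative assumptions, not the patented implementation:

```python
import math

def unwrap_ring(frame, cx, cy, r_in, r_out, n_angles):
    """Sample the annulus between radii r_in and r_out (exclusive) of a
    frame (a 2D list of pixel values), centred at (cx, cy), producing a
    'developed' strip: one row per radius, one column per angle."""
    h, w = len(frame), len(frame[0])
    strip = []
    for r in range(r_in, r_out):
        row = []
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                row.append(frame[y][x])
            else:
                row.append(0)            # pad samples falling off the frame
        strip.append(row)
    return strip
```

Successive strips from consecutive frames would then be connected (mosaiced) to form the developed still image of the inner wall, as the cited reference describes.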
WO05077253 in the name of Osaka University discloses an endoscope for imaging the inside of a digestive organ. At the end of the endoscope, there are provided an omni-directional camera that has a very wide field of view and is used for imaging the inside of a digestive organ, an illuminator, forceps, and a cleaning water jet orifice. A probe-type endoscope has a receiver used for deducing the position and posture of the probe-type endoscope. The image captured by the omni-directional camera is displayed on a display section of an image processing device connected to the probe-type endoscope. Video images captured by the omni-directional camera are mosaiced to create a panoramic image inside the digestive organ. This method requires omni-lighting, which may require that the endoscope be provided with an additional channel that adds to the bulk of the endoscope. US2003045790 (Lewkowicz et al.) assigned to Given Imaging Ltd. discloses a method and system for positioning an in vivo device and for obtaining a three-dimensional display of a body lumen by obtaining a plurality of in vivo images, generating position information corresponding to each in vivo image, and combining the plurality of in vivo images to construct a mosaic image. It is suggested to use image mosaic constructing techniques to display a panoramic view of a body lumen. The in vivo device is an encapsulated miniature camera that may be swallowed and whose progress through the alimentary ducts can thus be monitored. All three of the above references relate principally to the imaging of long tubular organs, along whose axis the camera is guided. When the camera is thus constrained to move inside a tubular organ, it is much more likely that the camera will image the area of interest and, in any case, significant up-down and side-to-side movement of the camera is usually impossible.
But these conditions do not apply when, for example, a laparoscope is used to illuminate and image a body cavity, where significant up-down and side-to-side movement of the camera is not only possible but mandatory in order to image the complete area of interest.
Thus, during surgical procedures, such as laparoscopy, several incisions are made in the patient's abdominal area. A laparoscope is inserted through one of these incisions for imaging the work area and projecting an image thereof on a display device for simultaneous view by members of the medical team. The surgeon directs surgical instruments such as forceps through the other incisions, while viewing the display device so as to obtain real-time feedback regarding the positioning and operation of the surgical instruments. Thus, in surgical procedures the instruments and the laparoscope or endoscope may not be directed through a common lumen or cavity, with the consequence that the surgical instrument may not always be in the field of view of the camera. Likewise, the position of the display device is such that the visual-motor axis is disrupted, as a result of which the surgeon's efficiency decreases. Other drawbacks of conventional laparoscopic systems relate to the limited field of view, the lack of good eye-hand coordination and the display of 2-D images. The camera is inevitably miniature with a small field of view that defines the surgeon's view. In order to augment the field of view, he must frequently readjust the camera, which requires much coordination with the assistant. Lack of coordination between the actual view and the desired view may be reduced by the surgeon guiding the camera, but he then has only one hand with which to operate. Robotically controlled laparoscopes have also been proposed, but they are bulky and expensive. The displayed video images typically lack depth information, requiring the surgeon to estimate the distance of structures by moving the camera laterally or by physically probing the structures to gauge their depth. It has been proposed to use stereo endoscopes to address this drawback, but the surgeon is still limited to viewing only what is directly in front of the camera.
It has been proposed [22] to use virtual reality systems to overcome some of these drawbacks. This may require the surgeon to use a head-mounted display for viewing the imaged area thus militating against simultaneous view of the cavity by other members of the surgical team. It would therefore be desirable to provide a system and method for imaging a body cavity during a surgical procedure that, on the one hand, allows conventional surgical techniques to be employed whereby the surgical instrument is not necessarily in the same line of sight as the imaging device and which facilitates two-hand operation by the surgeon, while on the other hand providing an augmented view.
SUMMARY OF THE INVENTION
According to a broad aspect of the invention, there is provided a method for presenting a stabilized mosaic view of a cavity, the method comprising: maneuvering an imaging device through a first access point for producing successive input video images that include features in an area of interest within said cavity; aligning said input video images to compensate for relative lateral camera motion between successive input video images so as to produce successive aligned video images; mosaicing the successive aligned video images so as to produce a mosaic video image; and displaying the mosaic video image on a display device.
A system according to the invention for presenting a stabilized mosaic view of a cavity comprises: an image generator for receiving and aligning successive input video images formed by maneuvering an imaging device through a first access point in said cavity to compensate for relative lateral camera motion between successive input video images so as to produce successive aligned video images and for mosaicing the successive aligned video images so as to produce a mosaic video image, and an auxiliary image display coupled to the image generator for displaying said mosaic image. In some embodiments of the invention, the mosaic video image is a panoramic image having a field of view that is wider than a field of view of the input video images. In this case the image generator generates a panoramic image. The benefit in this case is an increased field of view. In the more general case, the mosaic video image has a field of view that is substantially equal to a field of view of the input video images and the image generated by the image generator will not then be panoramic. The benefit in this case is a stabilization of the video for increasing the convenience of the surgeon, and for enabling better understanding of 3D structure.
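The align-then-mosaic flow summarized above can be sketched as follows, assuming the alignment stage has already reported each frame's accumulated (dx, dy) offset relative to the first frame; this is a minimal illustration in which later frames simply overwrite earlier ones where they overlap:

```python
def mosaic_frames(frames, offsets):
    """Paste already-aligned frames into one mosaic canvas.

    `frames` is a list of 2D pixel arrays (lists of rows); `offsets[i]`
    is frame i's accumulated (dx, dy) relative to frame 0, as the
    alignment stage would report. The canvas is a sparse dict keyed by
    global (x, y); later pixels overwrite earlier ones on overlap."""
    canvas = {}
    for frame, (dx, dy) in zip(frames, offsets):
        for y, row in enumerate(frame):
            for x, pixel in enumerate(row):
                canvas[(x + dx, y + dy)] = pixel
    return canvas
```

With every offset equal to (0, 0) the same loop degenerates to pure stabilization: the mosaic covers the same field of view as the input, matching the non-panoramic case described above.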
In one embodiment, the instrument is a surgical instrument and the cavity is a body cavity. In such an embodiment, the invention proposes the use of a panoramic image generator that takes as input a video stream from a video camera attached to a surgical instrument such as an endoscope or laparoscope or from a CCD of such an instrument directly, and while the surgical instrument or the video camera scans an area of interest the panoramic generator stitches the video frames into a panoramic mosaic. The panoramic mosaic may include 3-D information as known per se [14]. This technique may be used to display forceps entering the site from a region outside the field of vision when required. The generated panoramic mosaic may be displayed on an auxiliary panoramic monitor located near the ordinary monitor displaying the video stream from the video camera with stabilization of image shake. It is clear that such an auxiliary panoramic display system can serve not only in medical procedures such as internal examination and surgery, but in any application where a probe having a narrow field of view is used. One such application is, for example, the inspection of sewer pipes, dermatological observation of skin, and ophthalmological apparatus.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to understand the invention and to see how it may be carried out in practice, some embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Fig. 1 is a schematic representation depicting a working environment for which the present invention is particularly beneficial; Fig. 2 is a block diagram showing schematically a system according to an embodiment of the invention in conjunction with a conventional video system;
Fig. 3 is a block diagram showing schematically a detail of a panoramic image generator used in the system shown in Fig. 2; Fig. 4 is a flow diagram showing principal operations carried out during use of the system shown in Fig. 2;
Fig. 5 shows pictorially a system for displaying panoramic and regular views for use in surgical procedures; and
Fig. 6 is a block diagram showing schematically a detail of a computer-assisted diagnostic system used in the system shown in Fig. 2.
DETAILED DESCRIPTION OF EMBODIMENTS
In the following description of various embodiments of the invention, components that are common to different embodiments or serve an identical function will be identified by the same reference numerals. Fig. 1 is a pictorial representation depicting a working environment, designated generally as 1, for which the present invention is particularly beneficial. The working environment 1 includes some sort of cavity 2 that may be a hollow tube such as a pipe or, in the case of surgical applications, a body lumen or a cavity such as an abdominal cavity. The invention relates to an operation that is performed by an operator in an area of interest 3 within the cavity 2 using an instrument 4 that is maneuvered through a first access point 5 in the cavity 2. During this operation, the operator views an image of the area of interest 3 as viewed by a camera 6 that is maneuvered within the cavity 2 through a second access point 7, different from the first access point 5. Although the invention will now be described with particular regard to surgical procedures, it should be understood that the invention is applicable to the general situation shown in Fig. 1. More generally, the invention relates to the situation where the instrument and the camera are not adjacent to each other. Thus, the invention also embraces the possibility that the cavity has a single access point through which both the instrument and the camera are maneuvered independently of one another. This, of course, is different from conventional endoscopy, where the camera is mounted at the end of the endoscope as described in above-referenced WO05077253, such that the camera and the instrument necessarily subtend the same line of sight with the area of interest.
Fig. 2 shows schematically a system 10 comprising a main video system 11 coupled to an auxiliary panoramic system 12. The main video system 11 comprises a video camera 6 (shown in Fig. 1) that creates a video stream 13 that is fed to a video display 14. The auxiliary panoramic system 12 comprises a panoramic image generator 15 that produces a panoramic image 16. The panoramic image generator 15 is coupled to a computer-assisted diagnostic system 17, which is coupled to a panoramic image recording device 18 and to a printer 19 that is also coupled to the panoramic image generator 15 for printing the image produced thereby. The computer-assisted diagnostic system 17 may be used to produce enhanced images or to combine a panoramic image with an external image produced by conventional imaging systems, as described later with reference to Fig. 6 of the drawings.
Fig. 3 is a block diagram showing schematically a detail of the panoramic image generator 15 according to an embodiment of the invention, which comprises an image acquisition unit 20. This component takes frames from the incoming video stream and stores them in computer memory. Such video grabbing components are known per se. The data stored by the image acquisition unit 20 is processed by an image motion computation module 21, which computes the motion of the camera or the video image and provides this information to a panoramic image stitching unit 22, which mosaics the images to form a composite panoramic view that is displayed on the auxiliary panoramic image display unit 16. In an embodiment of the invention, the image motion computation module 21 comprises an image based motion analysis unit 24, a motion tracking unit 25 and a probe motion measuring unit 26. The image based motion analysis unit 24 uses the incoming frames from the video sequence and, using image analysis methods such as those described in [6,18-21], determines the change between the camera positions for each video frame or the image displacement between frames. The motion tracking unit 25 may be in the form of trackers mounted on the camera or on the probe holding the camera, for measuring its location using known tracking methods. Examples include trackers based on magnetic fields, or trackers based on gyroscopes such as are used in virtual reality systems [9, 10]. The probe motion measuring unit 26 may be an external device for measuring the motion of medical probes whose use in orthopedic surgery is known [7, 8]. Markers are attached to the part of a rigid probe that is outside the body of the patient. A set of video cameras is located in the surgery room at appropriate positions for tracking the markers.
Using image analysis, the system can compute the positions of the markers to some accuracy, and based on knowledge of the structure of the probe, the position of the end of the probe inside the body can be computed as well. In an embodiment of the present invention, the markers can be attached to the probe on which the video camera is mounted, and the position of the camera can thus be computed. By "position" is meant the spatial location and direction in 3D space. Also, the position of other surgical tools, such as forceps, can be computed by attaching markers of different colors or shapes.
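By way of a non-limiting illustration, the frame-to-frame displacement that the image based motion analysis unit 24 must recover can be sketched as an exhaustive search over small translations, scoring each candidate shift by the normalized sum of absolute differences (SAD) on the overlapping region. This is only a toy sketch on tiny grayscale arrays under simplifying assumptions; the patent itself relies on the methods of [6,18-21], and all names here are illustrative:

```python
def estimate_translation(prev, curr, max_shift=3):
    """Return the (dx, dy) shift that best aligns curr onto prev,
    found by brute-force search with a normalized SAD score."""
    h, w = len(prev), len(prev[0])
    best, best_score = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score, count = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        score += abs(prev[y][x] - curr[sy][sx])
                        count += 1
            score /= count  # normalize by the size of the overlap
            if score < best_score:
                best_score, best = score, (dx, dy)
    return best

# A frame containing a bright square, and the same scene after the
# square (i.e. the camera, inversely) has moved two pixels in x.
frame_a = [[255 if 2 <= x < 5 and 2 <= y < 5 else 0 for x in range(10)]
           for y in range(10)]
frame_b = [[255 if 4 <= x < 7 and 2 <= y < 5 else 0 for x in range(10)]
           for y in range(10)]
shift = estimate_translation(frame_a, frame_b)
```

Real systems would use faster techniques (phase correlation, feature tracking), but the recovered shift feeds the stitching unit in the same way.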
Fig. 4 is a flow diagram showing principal operations carried out during use of the system 10. Thus, there is shown a method for presenting a stabilized panoramic view of a cavity for guiding an instrument when carrying out a procedure in the cavity by an operator of the instrument. The operator maneuvers the instrument to an area of interest through a first access point of the cavity. At the same time, an assistant (or possibly the operator) maneuvers an imaging device through a second access point different from the first access point for producing successive video images that include features in the area of interest and the instrument. The video images are aligned to compensate for relative lateral camera motion between successive video images so as to produce successive aligned video images. Successive aligned video images are mosaiced so as to produce a panoramic video image, which is displayed for simultaneous viewing by the operator and the assistant.
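One simplified sketch of the mosaicing step just described (not the actual stitching algorithm of the patent) is to paste each aligned frame onto a common canvas at its accumulated global offset, keeping the first-written pixel where frames overlap; the canvas, variable names and first-write policy are illustrative assumptions:

```python
def mosaic(frames, offsets):
    """Paste each frame into a sparse canvas at its global (ox, oy)
    offset; on overlap the earlier frame's pixel is kept, so the
    mosaic simply grows outward as the camera moves."""
    canvas = {}
    for frame, (ox, oy) in zip(frames, offsets):
        for y, row in enumerate(frame):
            for x, val in enumerate(row):
                canvas.setdefault((x + ox, y + oy), val)
    return canvas

# Two aligned 2x2 frames; motion analysis found the second frame to
# sit one pixel to the right of the first, so the panorama is wider
# than either input frame.
pano = mosaic([[[1, 2], [3, 4]], [[2, 5], [4, 6]]], [(0, 0), (1, 0)])
```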
A particular feature of the present invention is that the imaging device and the instrument may both subtend different lines of sight with the area of interest from different perspectives, while nevertheless allowing the surgeon (or other operative in the case of non-medical applications) to see a panoramic view that is free of camera shake and that displays a wide area of interest that includes the instrument. The effect of camera shake may be reduced by aligning successive video images after neutralizing camera motion. This may be done as described in US Patent 6,798,897 [18] and US 2006/0280334 [20], both of whose contents are incorporated herein by reference. Such an approach is particularly suited to image alignment of substantially static images. However, the invention may also be used when an object of interest is dynamic, such as when operating on moving or pulsating organs. In this case, it may be more appropriate to employ the method described in US 2006/215934 [21], commonly assigned to one of the present applicants and sharing a common inventor, whose full contents are incorporated herein by reference.
Once camera movement is known, it is then possible to neutralize relative camera movement between at least two frames so as to produce a stabilized video, which when displayed is free of camera movement. In the present invention, this is used to eradicate the effect of camera shake. However, neutralizing relative camera movement between at least two frames may also be a precursor to subsequent image processing requiring a stabilized video sequence. Thus, for example, it is possible to compute one or more computed frames from at least two frames taking into account relative camera movement between the at least two frames. This may be done by combining portions of two or more frames for which relative camera movement is neutralized, so as to produce a mosaic containing parts of two or more video frames for which camera movement has been neutralized. It may also be done by assigning respective color values to pixels in the computed frame as a function of corresponding values of aligned pixels in two or more frames for which camera movement has been neutralized. Likewise, the relative camera movement may be applied to frames in a different sequence of image frames or to portions thereof. Frames or portions thereof in the sequence of frames may also be combined with a different sequence of frames.
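A minimal sketch of these two steps — undoing the measured camera movement and then computing a frame from the aligned pixels — could look as follows, with each pixel of the computed frame assigned the median of its aligned samples. The median rule and all names are illustrative assumptions, not taken from the patent:

```python
from statistics import median

def align_frames(frames, offsets):
    """Undo each frame's measured global offset so that all samples of
    the same scene point share one canvas coordinate."""
    aligned = {}
    for frame, (ox, oy) in zip(frames, offsets):
        for y, row in enumerate(frame):
            for x, val in enumerate(row):
                aligned.setdefault((x + ox, y + oy), []).append(val)
    return aligned

def computed_frame(aligned):
    """Assign each pixel the median of its aligned samples; once
    camera shake is neutralized this suppresses transient outliers."""
    return {pos: median(vals) for pos, vals in aligned.items()}

# Three shaky 1x2 frames of the same static scene [10, 20]; the
# second frame was captured one pixel to the right, so its measured
# offset is (-1, 0) and its pixels slide back into place.
aligned = align_frames([[[10, 20]], [[7, 10]], [[10, 20]]],
                       [(0, 0), (-1, 0), (0, 0)])
stable = computed_frame(aligned)
```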
Although in Fig. 3, the motion computation module 21 is shown as comprising several modules, it should be understood that the invention can use any image or camera motion analysis system, or any combination of systems to enhance each other, and does not depend on the particular method used for motion analysis. So, for example, any one of the sub-modules 24, 25 and 26 may be used on its own.
The panoramic image stitching unit 22 is an image mosaicing system, which uses the video frames together with the motion analysis information, and stitches the frames together into a panoramic mosaic image. Image mosaicing systems are described in US Pat. No. 6,075,905 [11]. It will be appreciated that the panoramic image stitching unit 22 may be adapted to generate stereoscopic panoramic images using known techniques such as described, for example, in US Pat. No. 6,665,003 [15]. The panoramic image stitching unit 22 writes the generated mosaic into memory for display by the auxiliary panoramic image display unit 23 that is typically disposed near the video display 14 of the main system 11. The system 10 may also have the following additional features. The image motion computed for the input video frames may be used to generate a stabilized video. The stabilized video can provide enhanced visualization for the physician by stabilizing fast movements so as to allow the practitioner to observe the area of interest precisely, without vibration caused by shaking hands or heartbeat, etc. In video stabilization, the field of view changes because of the vibrations of the camera. To stabilize the image, all images are moved to a common field of view (FOV), and because they move, most original images leave parts of the common FOV uncovered. Alternatively, the camera motion may be stabilized so as to create three-dimensional effects from motion parallax when the camera is translated. In this context it is to be noted that normally three-dimensional vision is perceived by the brain as a result of the two eyes viewing an object from slightly different lines of sight. The differences in the left and right eye views are interpreted by the brain as depth information. However, even in the absence of different left and right eye views, it is still possible to perceive depth information by moving one's head.
This is done, for example, by people having only one eye, who are still able to perceive depth by virtue of repeated head movements, which generate parallax errors between successive images presented to the brain, allowing the brain to interpret these parallax errors as depth information. In the invention, although two cameras may be used, there is typically only a single camera that, at any instant of time, sees an object in the area of interest from a single line of sight. However, even in this case, depth information may be obtained by deliberately moving the camera from side to side, so as to generate successive frames of video data wherein the object of interest appears in successive frames with motion parallax that allows depth information to be computed. This is particularly useful, for example, when the surgical instrument is moved relative to a static background, enabling depth information of the surgical instrument to be computed, allowing the surgical instrument to be displayed in 3-D, and enhancing the surgeon's sense of where the tip of the surgical instrument is located relative to the cavity. The stabilized video will stabilize for global motion only, and will not cancel motion parallax. Motion parallax in a stabilized video will give the surgeon a 3D sense of the region. Specifically, in making a panoramic picture, the surgical instrument (e.g. endoscope) moves in the body cavity in a way that gives rise to motion parallax. Thus, the enhancement of motion parallax enables three-dimensional effects to be created, and is useful to detect rugged, "abnormal" areas. The three-dimensional effect is also useful in laparoscopic surgery to perceive the spatial relation between organs and forceps, which enables easier performance of meticulous tasks such as suturing. The cavity may be a lumen or a non-tubular cavity such as an abdominal cavity.
Fusion of the panoramic image with 3D-CT (computerized tomography) and 3D-MRI (magnetic resonance imaging) images is possible by flattening the hollow organs in the 3D images and establishing the common landmarks of each image, such as the pyloric and cardiac rings in the stomach, the ureter orifice in the urinary bladder, etc. Once this is done, corresponding frames of the different images are compared and the image frames of one of the images are transformed so that the landmarks coincide. Such an opened and flattened three-dimensional image may be used to display the extent of spread and depth of infiltration of a cancerous lesion in a single picture, and facilitates more precise surgical planning prior to actual surgery. This fitting can also be applied to images of a non-tubular organ such as the abdominal cavity, thus providing a surgical navigation system that is applicable for both tubular and non-tubular body organs.
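The "opened and flattened" representation can be sketched, under the simplifying assumption of a roughly cylindrical lumen whose axis is the z-axis, by unwrapping each wall point so that arc length around the axis becomes one flat coordinate. This is a toy geometric model, not the patent's actual flattening procedure:

```python
from math import atan2, hypot

def flatten_point(x, y, z):
    """Map a point on the wall of a cylindrical lumen (axis = z-axis)
    to flattened coordinates: arc length around the axis becomes the
    horizontal coordinate, the axial position the vertical one."""
    r = hypot(x, y)       # radial distance of the wall point from the axis
    theta = atan2(y, x)   # angle around the axis
    return (r * theta, z)

# A wall point a quarter turn around a lumen of radius 2, at axial
# depth 5, lands a quarter circumference along the flattened sheet.
s, depth = flatten_point(0.0, 2.0, 5.0)
```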
Combining other algorithms or devices, such as enhancement of color information or 3D structure, with the "flattened" panoramic picture leads to automated detection of abnormal areas, enabling the establishment of a computer-aided diagnostic system for hollow organs. For example, a computer-aided diagnostic system for CT images has been developed for diagnosis of lung cancer. Enhancement of color information, such as Narrow Band Imaging (Olympus Co.), and of three-dimensional information, together with setting of a proper color cutoff level for delineating between protuberances and hollows, allows automated detection of the lesion.
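A crude stand-in for such a color cutoff, flagging pixels whose red/green ratio exceeds a threshold, might be sketched as follows; the ratio, the cutoff value and all names are illustrative assumptions, not taken from the patent or from Narrow Band Imaging:

```python
def flag_abnormal(pixels, redness_cutoff=1.5):
    """Return coordinates of pixels whose red/green ratio meets or
    exceeds the cutoff; such pixels would be highlighted for review."""
    flagged = []
    for (x, y), (r, g, b) in pixels.items():
        if g > 0 and r / g >= redness_cutoff:
            flagged.append((x, y))
    return sorted(flagged)

# One strongly reddened pixel among normal mucosa-like values.
suspect = flag_abnormal({(0, 0): (200, 100, 90),
                         (1, 0): (120, 110, 100)})
```

A real detector would of course combine color with the 3D shape cues discussed above rather than use a single fixed ratio.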
The system 10 allows movement of the surgical instrument to be tracked. This tracking facilitates automatic guidance of the camera onto the area of surgery. The "camera" here refers to the view field of the endoscope, or to the tip of the endoscope. The motion of the tools visible in the laparoscopic camera can be analyzed, and some motions of the forceps can be treated as a cue signal to start a previously assigned motion of a robotic system handling the camera. The surgeon orders some predefined action from the tracking system by some special motion of the forceps, thus allowing seamless control of the tracking system without releasing the forceps. The tracking also enables the surgeon to comprehend the direction of the forceps, and then coarse detection of the site of the trocar where the forceps was introduced is possible. The "trocar" is a sharp-pointed surgical instrument, used with a cannula to puncture a body cavity such as the abdominal wall to introduce forceps. In laparoscopic surgery, the surgeon must concentrate his attention on the operative field, and cannot watch the site of the trocar where the forceps are inserted. Since accidental injury may occur on changing the forceps, the surgeon must repeatedly pan the view from the operative field to the trocar site, then from the trocar site to the operative field. This may result in the surgeon losing the location of both sites. Using the tracking system according to the invention, the direction of both sites can be displayed. The system can indicate the direction of the trocar in the display even when the forceps does not appear in the display. Further, the tracking system may be configured to control other robotic devices by some allotted action of the forceps. Unlike conventional systems, where the light source, cutting devices, robotic arm for the endoscope, etc. are controlled by the surgeon and/or other surgical staff, the motion tracking unit 25 is able to replace the control of some of these devices.
In conventional robotic surgical systems such as Da Vinci, panning of the endoscope and zooming in and out are performed via a switch that is operated by the surgeon's arm or foot, the operation of which can momentarily distract the surgeon. The motion tracking unit 25 can obviate the need for such movement by reacting to different motions of the tools within the field of view of the imaging system. For example, shaking the forceps twice in the center of the display means zoom in, shaking it twice at the right side of the display means pan to the right, etc.
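The forceps-motion cue described above can be sketched as counting direction reversals in the tracked horizontal position of a tool tip, ignoring small jitter; a run of quick reversals could then be mapped to a command such as zoom in. The jitter threshold and the mapping to commands are illustrative assumptions:

```python
def count_reversals(xs, min_step=5):
    """Count direction reversals in a tool's tracked x-positions;
    steps smaller than min_step pixels are treated as hand jitter."""
    reversals, last_dir = 0, 0
    for a, b in zip(xs, xs[1:]):
        step = b - a
        if abs(step) < min_step:
            continue
        d = 1 if step > 0 else -1
        if last_dir and d != last_dir:
            reversals += 1
        last_dir = d
    return reversals

# A deliberate double shake produces several clean reversals,
# while ordinary hand jitter produces none.
shake = count_reversals([100, 130, 100, 130, 100])
jitter = count_reversals([100, 102, 99, 101, 100])
```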
Fusion of panoramic images and 3D-CT and 3D-MRI images is possible by defining for each image common landmarks such as lobes of lung and liver, branch of vessels etc. 3D-information of the system 10 enables a fine fit between the panoramic image and previously captured 3D-CT and 3D-MRI, whereby the 3D-CT and 3D-MRI images are overlaid on the panoramic image adequately using the common landmarks. This fitting can be applied in images of the abdominal cavity, and the hollow organs, such as throat, stomach, colon, urinary tract, etc. This can help in 3D navigation, and in anticipating organs not visible in the video images (like arteries, etc.).
3D-CT and 3D-MRI depict the architecture of organs, vessels, bone, and so on in an abdominal cavity precisely. However, such architecture cannot be seen directly in open surgery and laparoscopic surgery. In actual views, the surgeon sees fat that covers the organs. Display of a panoramic view shows a large field of view of the real image, allowing the surgeon to determine the proper dissecting plane. This becomes easier and safer when a 3D image of the vessels, lymph nodes and critical organs is projected on the laparoscopic image. Conventional methods require the respective axes of the 3D-CT image and of the laparoscopic image to be adapted by proper placement of the trocar. Use of a panoramic picture makes the fusion simpler by establishing several landmarks at the operating theater. Accuracy of the fusion depends on the number of the landmarks. Thus a panoramic picture makes it easy and accurate to fuse different images. In hollow organs, the shape of the opened and flattened image is simpler than the original, making it easier to fuse both pictures. The mosaiced three-dimensional image superimposed on the image seen by the camera improves diagnostic quality, showing the expansion and infiltration of the lesion.
Benefits
The system 10 simultaneously supplies both a magnified view and a panoramic view with 3D information, originating from a surgical instrument such as a conventional endoscope or laparoscope, and records both pictures with medical information relevant to findings on the panoramic picture. The system 10 provides several benefits, as follows.
1. The panoramic picture indicates the location and range of expansion of the lesion. The combination of still, magnified pictures and a panoramic picture provides visual information telling what, where, and to what extent lesions exist. It enables physicians to estimate and record characteristics of the lesions objectively. Since the visual information is easy to understand for patients, co-medical staff, and doctors, it contributes toward mutual sharing of correct and acceptable information about patients.
2. Since the panoramic picture is produced by gathering magnified pictures, it intrinsically resolves conventional difficulties in providing adequate illumination. Recent endoscopes provide a hyper-magnified view or spatial view under the illumination of a narrow band of visible frequencies [15]. In such a situation, lighting intensity is restricted, thus requiring that the endoscope be brought closer to the object. In known approaches, this tradeoff between adequate illumination and field of view severely restricts the field of view. However, since the invention creates a panoramic picture from an unlimited number of views, there is no drawback in imaging a narrow field of view, thus allowing maximum benefit of available illumination.
3. It enables physicians to recognize what organs they are actually managing or which way they incise in order to avoid excessive invasiveness and iatrogenic trauma for adjacent organs, thereby contributing toward safer laparoscopic surgery.
4. The system can present a panoramic view of whole organs in the abdominal cavity in a single picture allowing anatomical structures of organs to be seen systematically.
It is especially useful in surgical removal of a cancerous lesion with lymph node dissection around the lesion. In all applications of the invention, images may be processed so as to highlight features in the area of interest. This may include changing a display attribute of features to be highlighted. For example, in the above-mentioned surgical application, the lymph nodes can be stained to render them more visible as is known per se [16].
5. The combination of panoramic view and such a color analyzing system or staining method enhances the efficacy of detecting lesions with faint color and helps to prevent such lesions being overlooked. It is also useful for lymph node dissection for understanding the anatomical structure of lymphatic tissues.
6. In performing punch biopsy for a suspected diseased area, a panoramic view indicates the site from where the specimen was taken. So it can provide evidence that specimen was punched out correctly from the targeted lesion when panoramic pictures made before and after a biopsy are compared. Panoramic pictures are also informative for pathologists.
7. The system offers 3D information by motion parallax [14, 15] which enhances recognition and estimation of some lesions. Observation and recording of the lesion with 3D information is important because the shape of the lesion itself has diagnostic value, indicating whether the tumor is benign, or has aggressive character etc.
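Under a simplified pinhole-camera model, the depth information recoverable from motion parallax follows the classic relation depth = focal length × baseline / disparity: when the camera is translated sideways, nearer structures shift more in the image. The model and all numbers below are purely illustrative, not measurements from the patent:

```python
def depth_from_parallax(focal_px, baseline, disparity_px):
    """Depth of a point from the pixel disparity it shows between two
    views taken a known baseline apart (simple pinhole model).
    Depth is returned in the same unit as the baseline."""
    return focal_px * baseline / disparity_px

# With a 500-pixel focal length and a 4 mm sideways camera move, a
# structure shifting 10 px lies at 200 mm while background shifting
# only 2 px lies at 1000 mm; this difference is what the surgeon
# perceives as depth in the stabilized view.
near = depth_from_parallax(500, 4, 10)
far = depth_from_parallax(500, 4, 2)
```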
8. A panoramic view indicates the extent of surgical maneuvering inside the abdomen. So it can demonstrate the surgical situation chronologically when recorded from time to time during surgery. Previously, there has been inadequate information for patients about how the surgery was carried out, apart from a verbal explanation accompanied by some segmental photographs and freehand drawings by surgeons, or via video tapes recording the whole surgical process, which require professional knowledge to understand. The recorded panoramic still images displayed in chronological order provide a much more compact overview of milestone events during the procedure and explain eloquently to the patient and family the surgical process and its quality. Such images may also be used as an educational aid for students.
9. Visualization of 3D structure enables surgeons to comprehend the spatial position of the surgical point and the tips of the forceps. Since surgical maneuvers require precise recognition of tissues and forceps, cutting devices and clips, this helps in carrying out maneuvers reliably.
10. The system according to the invention can merge CT and MRI images and panoramic images, after making an opened and "flattened" view of each image. This helps recognition of the extent of invasion and superficial expansion of the disease, and enables an accurate resection plan to be made before surgery. It aids in recognition of hidden organs such as arteries, veins, lymph nodes, and retroperitoneal organs covered with thick fat tissue, which are difficult to detect by laparoscopic observation. Anatomical diagnosis of these organs is made before surgery by CT and MRI. Thus, fusion of CT, MRI and panoramic views functions as a navigation system, and contributes toward safer surgery by avoiding sudden hemorrhage or injuries to the adjacent organs.
11. In conventional endoscopic examination, doctors observe intra-luminal cavity and take photographs. In this situation, doctors are urged to diagnose lesions immediately during examination. The system according to the invention automatically indicates abnormal findings in the panoramic view. The detection of abnormal findings is achieved by algorithms designed to indicate steep alteration of color and/or 3D structure. Doctors can confirm their diagnosis of "no findings", which improves the specificity of examination.
12. In known laparoscopic systems, it can be tedious to review previous surgery, because surgeons must rewind videotape and play back again from the beginning.
Moreover, important scenes are recorded only sparsely. For this reason most doctors will not review previous surgery. Besides, the number of videotapes thus required would occupy lockers, bookshelves and warehouses of the hospital. The system according to the invention replaces conventional video records of surgery with records having lower memory requirements and faster viewing times.
The system can be applied to endoscopic examination, to microscopic examination aimed at tele-pathology, and to surgery. The magnified view demonstrates the conventional endoscopic view. At the same time, the panoramic view demonstrates the whole scene, which affords precise identification of the location of any lesions inside the lumen. A rigid scope is used as a laparoscope, cystoscope, ureteroscope, nephroscope, arthroscope, endoscope for the mammary duct and lachrymal duct, etc. Apart from controlling the robotic system, the system 10 requires only video signals, analog or digital, from conventional apparatus. This means that the system 10 can directly employ conventional endoscopic apparatus and devices. In addition, the system reduces shake of a laparoscopic or endoscopic image, and thereby contributes to reduced fatigue and boosts concentration of the practitioner.
Possible Implementations
Fig. 5 shows pictorially a system 30 having two monitors 14, 16 for locating near the patient. The monitor 14 displays a magnified view and the monitor 16 is part of the auxiliary panoramic system 12 described above with reference to Fig. 2 for displaying a panoramic view. Two foot-operated switches are set near the doctor's foot. One is a "freeze switch" 33 that freezes the panoramic view. The other is a "record switch" 34 for recording the current view. For example, if the doctor finds a favorable panoramic view, he steps on the freeze switch to freeze the panoramic view, and then steps on the record switch to take a picture. The recorded image is stored in a database 35 and the operation may be accompanied by a shutter sound emitted by a loudspeaker 36 to provide audible feedback. After taking a picture, the photograph just taken may appear for a few seconds with information indicating "abnormal" findings, whereafter the panoramic view monitor returns to real-time mode automatically. This may be further improved by providing a call-back function. For example, if the doctor steps on the freeze switch to freeze the panoramic view, but the timing was bad, or the doctor wants to check a previous view again, the panoramic view monitor calls back several frames upon subsequent stepping on the freeze switch.
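The freeze, record and call-back behaviour of the two foot switches can be modelled as a small state machine. This is an illustrative sketch only; the class and method names are not from the patent:

```python
class PanoramicConsole:
    """Toy model of the two foot switches: the first 'freeze' press
    latches the latest panoramic frame, each further press calls back
    one earlier frame, and 'record' stores the frozen frame in the
    database and returns the display to real-time mode."""

    def __init__(self, history):
        self.history = list(history)   # panoramic frames, newest last
        self.cursor = None             # None means live view
        self.database = []

    def press_freeze(self):
        if self.cursor is None:
            self.cursor = len(self.history) - 1   # freeze latest view
        elif self.cursor > 0:
            self.cursor -= 1                      # call back an earlier frame
        return self.history[self.cursor]

    def press_record(self):
        if self.cursor is not None:
            self.database.append(self.history[self.cursor])
            self.cursor = None                    # back to real-time mode

console = PanoramicConsole(["view1", "view2", "view3"])
frozen = console.press_freeze()    # latches the most recent view
earlier = console.press_freeze()   # timing was bad: call back one frame
console.press_record()             # store the called-back view
```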
In the system 30 shown in Fig. 5, the monitor 16 may display a panoramic image overlapping a transparent 3D-CT or 3D-MRI image. It is possible to toggle between the panoramic image and the 3D-CT or 3D-MRI images by stepping on a foot switch. The doctor or an assistant may point to landmarks in the currently displayed image using a sterilized device such as a joystick or a touch panel on the display. Then both images are automatically adapted to each other. By way of example, this may be done for a laparoscopic procedure as follows:
1. Before surgery, determine the site of laparoscopic insertion.
2. Prepare a transparent 3D-CT image which is reconstructed as a laparoscopic view.
3. During surgery, create a panoramic picture.
4. Display both images.
5. Establish common landmarks that appear in both images.
6. Construct similar polygons around the contours of the common landmarks in both images.
7. Transform the polygon in the 3D-CT image to match that of the panoramic image.
8. Overlap the two polygons.
The benefit of using a panoramic picture is that it is easier to find landmarks and form precise polygons. The above-described functionality can be realized by the computer-assisted diagnostic system 17 shown in Fig. 2, a detail of which will now be described with reference to Fig. 6. Thus, as shown in Fig. 6, the computer-assisted diagnostic system 17 includes an image combiner 40 for overlaying the panoramic video image on a 3-dimensional image of the cavity including the area of interest 3 produced by computerized tomography or magnetic resonance imaging. The image combiner 40 includes a landmark processor 41 for flattening hollow organs and establishing common landmarks of each image. A comparator 42 is coupled to the landmark processor for comparing corresponding frames of the different images and transforming image frames of one of the images so that the landmarks coincide.
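Steps 5-8 above can be sketched, for the special case of two landmark pairs and an axis-aligned scale-and-offset transform, as an exact fit followed by warping the CT polygon onto the panoramic image. A real system would fit a fuller transform to many landmarks by least squares; the functions and coordinates here are illustrative assumptions:

```python
def fit_axis(src_pair, dst_pair):
    """Solve u = a*x + b exactly from two corresponding coordinates
    (assumes the two source coordinates are distinct)."""
    (x0, x1), (u0, u1) = src_pair, dst_pair
    a = (u1 - u0) / (x1 - x0)
    return a, u0 - a * x0

def fit_landmark_transform(src, dst):
    """Per-axis scale and offset mapping two source landmarks onto
    the two corresponding destination landmarks."""
    ax, bx = fit_axis((src[0][0], src[1][0]), (dst[0][0], dst[1][0]))
    ay, by = fit_axis((src[0][1], src[1][1]), (dst[0][1], dst[1][1]))
    return lambda x, y: (ax * x + bx, ay * y + by)

# Two landmarks seen at (0,0) and (10,10) in the 3D-CT rendering
# appear at (5,5) and (25,25) in the panoramic picture; the fitted
# warp then carries a polygon drawn on the CT image onto the panorama.
warp = fit_landmark_transform([(0, 0), (10, 10)], [(5, 5), (25, 25)])
ct_polygon = [(0, 0), (10, 0), (10, 10)]
overlaid = [warp(x, y) for x, y in ct_polygon]
```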
When real time 3D information is required during examination, the doctor swings the tip of the fiberscope, or the scopist can shake the laparoscope gently. Alternatively, 3D information is combined with the panoramic view, and is recorded as a picture. In this picture, 3D information is presented as an animation operated by a pointing device such as a mouse or joystick. Alternatively, the scope can be controlled by a robotic arm, whereupon an electrical signal from some device such as a foot switch, a hand piece attached to the forceps, etc. induces a shaking motion of the robotic arm. While it will be difficult to view the original video as displayed on the monitor 14, the monitor 16 will display a stabilized image, where the only motion will be motion parallax. This will provide 3D perception to the surgeon. By such means, spontaneous turbulence (vibration) of the endoscope is cancelled. This reduces fatigue of the doctor, and allows the doctor to concentrate better on the examination, resulting in safer maneuvering.
In another implementation, automated indication of an "abnormal" lesion by morphological and optical analysis is applied to the panoramic view of the endoscope. Such a system can also be adapted to display a panoramic view of microscope and ophthalmoscope images. In another implementation, the motion tracking unit 25 may be adapted to control a surgery-assisting robotic system such as laparoscope control robotic arm and master-slave robotic surgical system, in which the laparoscope continues to display and magnifies the associated surgical field automatically. This can be performed by tracking the surgical tools using any known computer vision tracking method. Likewise, the motion tracking unit 25 can be adapted to indicate the direction of the trocars in the periphery of the display. The motion tracking unit 25 can also afford detection of lesions or lymph nodes, stained or enhanced by color analysis.
In medical applications of the invention, the recording modality must conform to the DICOM standard and other medical image transfer systems in order to send real-time pictures to other sections of the hospital or to remote locations.
Although the invention has been described with particular regard to surgical procedures, it should be understood that the invention is applicable to the general situation shown in Fig. 1 both in the case where the camera and instrument subtend different lines of sight with the area of interest and in the case where they subtend the same line of sight. Moreover, although the invention has been described with particular regard to procedures carried out by a tool in an imaged area of interest, the principles of the invention insofar as they relate to image stabilization are applicable also for diagnostic purposes where an area of interest in a cavity is imaged by a camera that is maneuvered through an access point in the cavity. In such case, there is no need for an instrument to be maneuvered through the same or a different access point in the cavity. Likewise, although subsidiary features and benefits of various embodiments have been described with particular reference to surgical procedures, it will be appreciated that many of these features are equally applicable to non-medical procedures and to this extent these embodiments are not intended as being limited to surgical procedures.
Within the context of the invention and the appended claims the term "video" denotes any series of image frames that when displayed at sufficiently high rate produces the effect of a time varying image. Typically, such image frames are generated using a video camera and in real-time applications such as medical procedures this is probably mandatory. However, the invention is not limited in the manner in which the image frames are formed and is equally applicable to the processing of image frames created in other ways, such as animation, still cameras adapted to capture repetitive frames, and so on. Such techniques may be employed in applications that do not require real-time processing of video frames that provide an instantaneous view of the imaged area.
It will also be understood that the system according to the invention may be a suitably programmed computer. Likewise, the invention contemplates a computer program being readable by a computer for executing the method of the invention. The invention further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the method of the invention.

Claims

CLAIMS:
1. A method for presenting a stabilized mosaic view of a cavity (2), the method comprising:
maneuvering an imaging device (6) through a first access point (7) for producing successive input video images that include features in an area of interest (3) within said cavity;
aligning said input video images to compensate for relative lateral camera motion between successive input video images so as to produce successive aligned video images;
mosaicing the successive aligned video images so as to produce a mosaic video image; and
displaying the mosaic video image on a display device (16).
2. The method according to claim 1, wherein the mosaic video image has a field of view that is substantially equal to a field of view of the input video images.
3. The method according to claim 1, wherein the mosaic video image is a panoramic image having a field of view that is wider than a field of view of the input video images.
4. The method according to any one of claims 1 to 3, when used for guiding an instrument (4) during a procedure carried out in an area of interest (3) of said cavity using said instrument, the method further comprising:
maneuvering the instrument within said cavity such that the instrument and the camera subtend different lines of sight with said area of interest (3);
whereby at least some of said successive video images include features in said area of interest and at least some of said successive video images include said instrument.
5. The method according to claim 4, wherein the instrument (4) is maneuvered through the first access point (7).
6. The method according to claim 4, wherein the instrument (4) is maneuvered through a second access point (5) that is different to the first access point (7).
7. The method according to any one of claims 4 to 6, wherein maneuvering the instrument includes subjecting the instrument to lateral movement so that the stabilized view shows relative depth of the instrument to the cavity.
8. The method according to any one of claims 1 to 7, wherein the cavity is a body lumen.
9. The method according to any one of claims 1 to 7, wherein the cavity is a non-tubular body organ.
10. The method according to any one of claims 1 to 9, wherein aligning said video images includes:
determining camera movement of a new frame relative to a sequence of frames of images; and
neutralizing relative camera movement between at least two frames.
11. The method according to any one of claims 1 to 10, further including:
overlaying the mosaic video image on a 3-dimensional image of the cavity including said area produced by computerized tomography or magnetic resonance imaging.
12. The method according to claim 11, wherein overlaying the panoramic video image includes:
flattening hollow organs and establishing common landmarks of each image; and
comparing corresponding frames of the different images and transforming image frames of one of the images so that the landmarks coincide.
13. The method according to any one of claims 1 to 12, further including:
tracking movement of the instrument for facilitating automatic guidance of the camera onto the area of interest.
14. The method according to claim 13, including displaying a direction of movement of the instrument even when the instrument does not appear in the display.
15. The method according to claim 13 or 14, including detecting a predefined action of the instrument and controlling operation of the camera or of other controllable devices in response thereto.
16. The method according to claim 15, wherein respective predefined movements of the instrument are used to control zoom, panning direction and other functions of the camera.
17. The method according to any one of claims 1 to 16, further including processing the video images so as to highlight features in the area of interest.
18. The method according to claim 17, wherein processing the video images includes changing a display attribute of said features.
19. The method according to any one of claims 1 to 18, further including recording panoramic images of the area of interest during said procedure so as to maintain a chronological overview of milestone events carried out during the procedure.
20. A computer program comprising computer program code means for performing any one of claims 1 to 19 when said program is run on a computer.
21. A computer program as claimed in claim 20 embodied on a computer readable medium.
22. A system (12) for presenting a stabilized mosaic view of a cavity (2), the system comprising:
an image generator (15) for receiving and aligning successive input video images formed by maneuvering an imaging device (6) through a first access point (7) in said cavity to compensate for relative lateral camera motion between successive video images so as to produce successive aligned video images and for mosaicing the successive aligned video images so as to produce a mosaic video image, and
an auxiliary image display (16) coupled to the image generator (15) for displaying said mosaic image.
23. The system according to claim 22, wherein the cavity is a body lumen.
24. The system according to claim 22, wherein the cavity is a non-tubular body organ.
25. The system according to any one of claims 22 to 24, further including:
a computer-assisted diagnostic system (17) coupled to the image generator (15), and
an image recording device (18) coupled to the computer-assisted diagnostic system (17).
26. The system according to claim 22 or 25, further including a printer (19) coupled to the image generator (15) for printing the image produced thereby.
27. The system according to any one of claims 22 to 26, wherein the image generator (15) includes:
an image motion computation module (21) for computing motion of the camera or the video image, and
an image stitching unit (22) coupled to the image motion computation module (21) for mosaicing the image to form a composite mosaic view.
28. The system according to any one of claims 22 to 27, wherein the mosaic video image has a field of view that is substantially equal to a field of view of the input video images.
29. The system according to any one of claims 22 to 27, wherein the mosaic video image is a panoramic image having a field of view that is wider than a field of view of the input video images.
30. The system according to claim 29, wherein the computer-assisted diagnostic system (17) includes an image combiner (40) for overlaying the panoramic video image on a 3-dimensional image of the cavity including said area produced by computerized tomography or magnetic resonance imaging.
31. The system according to claim 30, wherein the image combiner (40) includes:
a landmark processor (41) for flattening hollow organs and establishing common landmarks of each image; and
a comparator (42) coupled to the landmark processor for comparing corresponding frames of the different images and transforming image frames of one of the images so that the landmarks coincide.
32. The system according to any one of claims 22 to 31, further including:
a motion tracking unit (25) for tracking movement of the instrument for facilitating automatic guidance of the camera onto the area of interest.
33. The system according to claim 32, wherein the motion tracking unit (25) is adapted to display a direction of movement of the instrument even when the instrument does not appear in the display.
34. The system according to claim 32 or 33, wherein the motion tracking unit (25) is responsive to a predefined action of the instrument for controlling operation of the camera or of other controllable devices.
35. The system according to claim 34, wherein the motion tracking unit (25) is responsive to respective predefined movements of the instrument to control zoom, panning direction and other functions of the camera.
36. The system according to any one of claims 22 to 35, wherein the image generator (15) is adapted to process the video images so as to highlight features in the area of interest.
37. The system according to claim 36, wherein the image generator (15) is adapted to change a display attribute of said features.
38. The system according to any one of claims 22 to 37, further including a control device (34) for recording images of the area of interest during said procedure so as to maintain a chronological overview of milestone events carried out during the procedure.
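The landmark-based overlay recited in claims 12 and 31 — establishing common landmarks in two images and transforming one image so that the landmarks coincide — can be sketched as a least-squares fit of a geometric transform to matched landmark coordinates. The sketch below assumes a 2-D affine transform model, which the specification does not mandate, and the function names `fit_affine` and `apply_affine` are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform (A, t) with A @ p + t ~ q
    for matched landmark pairs p in `src` and q in `dst`.
    src, dst: (N, 2) arrays of landmark coordinates, N >= 3."""
    n = src.shape[0]
    # Each landmark pair contributes two linear equations in the six
    # affine parameters (a11, a12, a21, a22, tx, ty).
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src          # x-equations
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src          # y-equations
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)         # interleaved (x0, y0, x1, y1, ...)
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t

def apply_affine(points, A, t):
    """Map (N, 2) points through the fitted transform; for matched
    landmarks the result coincides with the `dst` coordinates."""
    return points @ A.T + t
```

Once (A, t) is fitted, applying `apply_affine` to the pixel coordinates of one image brings its landmarks into coincidence with the other image's landmarks, after which the two images can be compared and overlaid frame by frame as the claims describe.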
PCT/IL2007/000824 2006-07-03 2007-07-03 Computer image-aided method and system for guiding instruments through hollow cavities WO2008004222A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US80648106P 2006-07-03 2006-07-03
US60/806,481 2006-07-03
US89048907P 2007-02-18 2007-02-18
US60/890,489 2007-02-18

Publications (2)

Publication Number Publication Date
WO2008004222A2 true WO2008004222A2 (en) 2008-01-10
WO2008004222A3 WO2008004222A3 (en) 2008-06-19

Family

ID=38894984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2007/000824 WO2008004222A2 (en) 2006-07-03 2007-07-03 Computer image-aided method and system for guiding instruments through hollow cavities

Country Status (1)

Country Link
WO (1) WO2008004222A2 (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999662A (en) 1994-11-14 1999-12-07 Sarnoff Corporation System for automatically aligning images to form a mosaic image
US6075905A (en) 1996-07-17 2000-06-13 Sarnoff Corporation Method and apparatus for mosaic image construction
US6173087B1 (en) 1996-11-13 2001-01-09 Sarnoff Corporation Multi-view image registration with application to mosaicing and lens distortion correction
US6665003B1 1998-09-17 2003-12-16 Yissum Research Development Company Of The Hebrew University Of Jerusalem System and method for generating and displaying panoramic images and movies
US6798897B1 (en) 1999-09-05 2004-09-28 Protrack Ltd. Real time image registration, motion detection and background replacement using discrete local motion estimation
WO2005032355A1 (en) 2003-10-06 2005-04-14 Emaki Incorporated Luminal organ diagnosing device
US20060215934A1 (en) 2005-03-25 2006-09-28 Yissum Research Development Co of the Hebrew University of Jerusalem Israeli Co Online registration of dynamic scenes using video extrapolation
US20060280334A1 (en) 2005-05-25 2006-12-14 Yissum Research Development Company Of The Hebrew University Of Jerusalem Fast and robust motion computations using direct methods

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6702736B2 (en) * 1995-07-24 2004-03-09 David T. Chen Anatomical visualization system
US8229549B2 (en) * 2004-07-09 2012-07-24 Tyco Healthcare Group Lp Surgical imaging device
JP4372382B2 * 1999-08-20 2009-11-25 Hebrew University of Jerusalem System and method for correction mosaicing of images recorded by a moving camera
EP1620012B1 (en) * 2003-05-01 2012-04-18 Given Imaging Ltd. Panoramic field of view imaging device
CN100399978C * 2004-02-18 2008-07-09 Osaka University Endoscope system


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012034973A1 (en) * 2010-09-17 2012-03-22 Siemens Aktiengesellschaft Endoscopy procedure for producing a panoramic image from individual images which are recorded chronologically one after the other using a magnetically guided endoscopy capsule and endoscopy unit operating according to said procedure
US10758209B2 (en) 2012-03-09 2020-09-01 The Johns Hopkins University Photoacoustic tracking and registration in interventional ultrasound
CN104918534A * 2013-03-19 2015-09-16 Olympus Corporation Endoscope system and operation method of endoscope system
US9877086B2 (en) 2014-01-26 2018-01-23 BriefCam Ltd. Method and system for producing relevance sorted video summary
WO2016044624A1 (en) * 2014-09-17 2016-03-24 Taris Biomedical Llc Methods and systems for diagnostic mapping of bladder
CN106793939A * 2014-09-17 2017-05-31 Taris Biomedical LLC Methods and systems for diagnostic mapping of bladder
US10806346B2 (en) 2015-02-09 2020-10-20 The Johns Hopkins University Photoacoustic tracking and registration in interventional ultrasound
WO2018046092A1 (en) * 2016-09-09 2018-03-15 Siemens Aktiengesellschaft Method for operating an endoscope, and endoscope
EP3301639A1 (en) * 2016-09-28 2018-04-04 Fujifilm Corporation Image display device, image display method, and program
US10433709B2 (en) 2016-09-28 2019-10-08 Fujifilm Corporation Image display device, image display method, and program

Also Published As

Publication number Publication date
WO2008004222A3 (en) 2008-06-19

Similar Documents

Publication Publication Date Title
US10835344B2 (en) Display of preoperative and intraoperative images
US10733700B2 (en) System and method of providing real-time dynamic imagery of a medical procedure site using multiple modalities
EP3463032B1 (en) Image-based fusion of endoscopic image and ultrasound images
JP5380348B2 (en) System, method, apparatus, and program for supporting endoscopic observation
Bichlmeier et al. The virtual mirror: a new interaction paradigm for augmented reality environments
EP2838412B1 (en) Guidance tools to manually steer endoscope using pre-operative and intra-operative 3d images
JP5421828B2 (en) Endoscope observation support system, endoscope observation support device, operation method thereof, and program
US7967742B2 (en) Method for using variable direction of view endoscopy in conjunction with image guided surgical systems
US20130250081A1 (en) System and method for determining camera angles by using virtual planes derived from actual images
CN110709894B (en) Virtual shadow for enhanced depth perception
US20050054895A1 (en) Method for using variable direction of view endoscopy in conjunction with image guided surgical systems
KR20140112207A (en) Augmented reality imaging display system and surgical robot system comprising the same
WO2008004222A2 (en) Computer image-aided method and system for guiding instruments through hollow cavities
JP2006320722A (en) Method of expanding display range of 2d image of object region
Breedveld et al. Theoretical background and conceptual solution for depth perception and eye-hand coordination problems in laparoscopic surgery
WO2007115825A1 (en) Registration-free augmentation device and method
AU2018202682A1 (en) Endoscopic view of invasive procedures in narrow passages
De Paolis et al. Augmented reality in minimally invasive surgery
WO2015091226A1 (en) Laparoscopic view extended with x-ray vision
Vogt Real-Time Augmented Reality for Image-Guided Interventions
US11910995B2 (en) Instrument navigation in endoscopic surgery during obscured vision
CN109893257B (en) Integrated external-view mirror laparoscope system with color Doppler ultrasound function
JP2002017751A (en) Surgery navigation device
Eck et al. Display technologies
JP2005211529A (en) Operative technique supporting system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07766854

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07766854

Country of ref document: EP

Kind code of ref document: A2