WO2017098506A1 - Autonomic goals-based training and assessment system for laparoscopic surgery - Google Patents

Info

Publication number
WO2017098506A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameter
combination
group
additionally
surgical
Application number
PCT/IL2016/051307
Other languages
French (fr)
Other versions
WO2017098506A9 (en)
WO2017098506A8 (en)
Inventor
Motti FRIMER
Tal Nir
Gal ATAROT
Lior ALPERT
Original Assignee
M.S.T. Medical Surgery Technologies Ltd.
Application filed by M.S.T. Medical Surgery Technologies Ltd.
Priority to EP16872549.7A (patent EP3414753A4)
Publication of WO2017098506A1
Publication of WO2017098506A8
Publication of WO2017098506A9

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes, for medicine
    • G09B23/285 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes, for medicine, for injections, endoscopy, bronchoscopy, sigmoidoscopy, insertion of contraceptive devices or enemas
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 - ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images, for processing medical images, e.g. editing
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms

Definitions

  • the present invention generally pertains to a system and method for automated training and assessment of a surgical operator.
  • the assessment is complicated by the fact that the assessor or trainer has, at best, a restricted view of the actions of the operator.
  • the assessor or trainer has the same view as the operator - a display, usually 2-dimensional, of at least part of the surgical environment.
  • At least one communicable database configured to (i) store said at least one surgical procedure, said at least one surgical procedure being characterized by at least one stored parameter of at least one item, and (ii) store said at least one parameter in real time
  • said sensor is selected from a group consisting of: an electromagnetic sensor, an ultrasound sensor, an inertial sensor to sense the angular velocity and the acceleration of the tool or other item, an accelerometer, a motion sensor, an IMU, a sensor wearable by an operator, a sensor attachable to a surgical object, an RFID tag attachable to a surgical object, an infrared sensor, a gyrometer and a tachometer
  • at least one second parameter selected from a group consisting of: said at least one feature, said at least one predetermined feature and any combination thereof.
  • said at least one second parameter is selected from a group consisting of: time to execute said
  • At least one communicable database configured to (i) store said at least one surgical procedure; said at least one surgical procedure is characterized by at least one stored parameter of at least one item, said at least one stored parameter
  • said parameter is selected from a group consisting of: said at least one feature, said at least one partial score, said GOALS score, said skill level, a suggestion, an instruction, a distance, an angle, an area, a volume, a size scale, information on a medical history of a patient, and any combination thereof.
  • EOA - economy of area
  • EV - economy of volume
  • Fig. 1 schematically illustrates a GOALS score overlaid on an image of a part of a surgical environment
  • Fig. 2 schematically illustrates movement of a tip of a surgical tool
  • Fig. 3A schematically illustrates speed of the tool tip and Fig. 3B schematically illustrates acceleration of the tool tip during the procedure;
  • Fig. 4A schematically illustrates speed of the tool tip
  • Fig. 4B schematically illustrates acceleration of the tool tip
  • Fig. 4C schematically illustrates jerk of the tool tip during the first part of the procedure
  • Fig. 5 schematically illustrates overlaying instructions to an operator on an image of part of a surgical environment
  • Figs. 6, 7 and 8 schematically illustrate an embodiment of a method of automatically assessing or automatically training an operator.
  • the term "fixed point” hereinafter refers to a point in 3D space which is fixed relative to a known location.
  • the known location can be for non-limiting example, an insertion point, a known location in or on a patient, a known location in an environment around a patient (e.g., an attachment point of a robotic manipulator to an operating table, a hospital bed, or the walls of a room), or a known location in a manipulation system, a practice dummy, or a demonstrator. .
  • the term "item" hereinafter refers to any identifiable thing within a field of view of an imaging device.
  • An item can be something belonging to a body or something introduced into the body. Items also comprise things such as, for non-limiting example, shrapnel or parasites, and non-physical things such as fixed points.
  • the term "object" hereinafter refers to an item naturally found within a body cavity.
  • Non- limiting examples of an object include a blood vessel, an organ, a nerve, and a ligament, as well as an abnormality such as a lesion and a tumor.
  • the term "tool" hereinafter refers to an item mechanically introducible into a body cavity.
  • Non-limiting examples of a tool include a laparoscope, a light, a suction device, a grasper, a suture material, a needle, and a swab.
  • the term "surgical object" hereinafter refers to a surgical tool, a robotic manipulator or other maneuvering system configured to manipulate a surgical tool, at least a portion of a light source, and at least a portion of an ablator.
  • a principal operator such as, but not limited to, the surgeon carrying out the main parts of the procedure
  • an assistant such as, but not limited to, a nurse
  • an observer such as, but not limited to, a senior surgeon providing instruction to or assessing a principal operator.
  • An identifier for an operator can include, but is not limited to, a name, an ID number, a function and any combination thereof.
  • the term "identifiable unit" hereinafter refers to an identifiable purposive activity during a surgical operation, typically a minimal identifiable activity. Examples include, but are not limited to, movement of a needle and forceps to the site where a suture is to be made, making a knot in suture thread, activating fluid flow, and making an incision.
  • the term "surgical task" hereinafter refers to a connected series of at least one identifiable unit which comprises an identifiable activity.
  • surgical tasks that comprise more than one identifiable unit include, but are not limited to, making one suture, removing incised tissue from a surgical field, and clearing debris from a surgical field.
  • a non-limiting example of a surgical task that comprises a single identifiable unit is making an incision.
  • the term "complete procedure" hereinafter refers to a connected series of at least one surgical task which forms an independent unit. For non-limiting example, closing an incision with one or more sutures will be referred to as a complete procedure.
  • the term "procedure" hereinafter refers to at least a portion of a surgical operation, with the portion of the surgical operation including at least one identifiable unit.
  • a procedure can comprise tying the knot in a suture, making a single suture, or closing an incision with a series of sutures.
  • the system of the present invention can assist in training or assessing an operator by providing automated training and/or assessment of an operator.
  • the system preferably analyzes at least one image of at least part of a surgical environment, and by means of said analysis, assesses the operator.
  • the system of the present invention can comprise an advanced artificial intelligence (AI) system running on at least one processor which is capable of analyzing at least part of a scene in a field of view (FOV), as captured in real time by an imaging device and, from the analysis (and, in some embodiments, additional information from other sensors) forming an understanding of what is occurring. From this understanding, the system can derive at least one parameter, such as a metric as disclosed herein, and, from the at least one parameter, it can generate at least one feature, score the at least one feature, and, from the at least one feature, it can autonomically perform at least one of assessment and training of an operator.
  • AI advanced artificial intelligence
  • An example of assessment is summing of the at least one score for the at least one feature to generate a GOALS-based score.
  • capture of the at least one image of at least part of an FOV is carried out in real time, and storage of the at least one image is preferably carried out in real time; although a GOALS score can be generated in real time, at least one of the steps in the calculation of a GOALS score can be carried out off-line.
  • the analysis can comprise, for non-limiting example, at least one of:
  • An objective skills assessment system for evaluating the quality of a procedure, either as part of a training system or as part of an evaluation of an operator.
  • a procedure either in real time or recorded, can be observed, either in real time or off-line, to determine the skill level of the operator.
  • Recorded procedures can be compared with outcomes, so as to determine which variants of a procedure have the better outcomes, thereby improving training and skill levels, [i.e., Global Operative Assessment of Laparoscopic Skills (GOALS)]
  • GOALS Global Operative Assessment of Laparoscopic Skills
  • feedback can be given to an operator, by the intelligent system and, in some embodiments, additionally by a human advisor (Gesture/Task Classification).
  • the intelligent system advisor, the human advisor, if present, or both, can be local or remote.
  • Information can be aggregated from a number of videos of robotic and/or laparoscopic procedures. These data can be used for:
  • a recorded procedure can be edited so that a shorter portion, typically a surgical task or an identifiable unit, can be stored, viewed and any combination thereof. Viewing of a shorter portion can be useful to highlight good practice and to indicate areas of suboptimal practice.
  • a stored record of a procedure including an identifiable unit, a surgical task, a complete procedure and a whole operation, preferably in 3D, can become part of at least one "big data" analysis of any combination of the above, for an individual operator, for at least a portion of a hospital, for at least a portion of a medical center and any combination thereof.
  • a record of a procedure can be tagged with one or more of: an identifier of an operator, a type of procedure, a previous procedure, a parameter, a feature, a GOALS score, a skill level, an identifier for an operating room, a physical characteristic of an operating room (e.g., temperature, humidity, time and date of cleaning, cleaning procedure, cleaning materials, type of lighting), a date of a procedure, a time of a procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.
  • the system can calculate at least one performance metric and/or at least one parameter, derive at least one feature, and generate at least one score.
  • the result of the analysis can be an assessment of the skill level of an operator, determination of training for an operator, advice to an operator as to at least one component of at least one procedure, a warning to an operator, at least one outcome of at least one procedure, and any combination thereof.
  • the outcome can be selected from a group consisting of negligence, malpractice, surgical activity failure, and any combination thereof.
  • At least the database and preferably both a tracking subsystem and the database are in communication with at least one processor configured to analyze the spatiotemporal 3-dimensional surgical database. From the analysis, at least one first parameter of at least one item in a field of view is determined.
  • the at least one first parameter can be a 2D position, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 2D orientation, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 3D position of at least a portion of the at least one item; a 3D orientation of at least a portion of the at least one item; a 2D projection of a 3D position of at least a portion of the at least one item; a velocity of at least a portion of the at least one item; an acceleration of at least a portion of the at least one item; an angle of at least a portion of the at least one item; a state of at least a portion of the at least one item; and any combination thereof.
  • the movement of the at least one item can be selected from a group consisting of a maneuver of an item carried out by a robotic manipulator connected to the item, a movement of part of an item, a change in state of the item, and any combination thereof.
  • Non-limiting examples of movement of an item include displacing it, rotating it, zooming it, or, for an item with at least one bendable section, changing its articulation.
  • Non-limiting examples of movements of part of the item are opening or closing a grasper or retractor, or operating a suturing mechanism.
  • Non-limiting examples of a change in state of an item include: altering a lighting level, altering an amount of suction, altering an amount of fluid flow, altering a heating level in an ablator, altering a speed of lateral movement of at least one blade in a pair of scissors or other cutter, altering an amount of defogging, or altering an amount of smoke removal.
  • Procedures can be stored in a database in communication with the processor; images can also be stored in a database.
  • the stored procedures can be manually-executed procedures, automatically-executed procedures, autonomically-executed procedures and any combination thereof.
  • the first parameter can then be compared to a stored second parameter, which comprises at least one second parameter of at least one item.
  • the second parameter can be a 2D position, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 2D orientation, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 3D position of at least a portion of the at least one item; a 3D orientation of at least a portion of the at least one item; a 2D projection of a 3D position of at least a portion of the at least one item, a velocity of at least a portion of the at least one item, an acceleration of at least a portion of the at least one item, an angle of at least a portion of the at least one item, a state of at least a portion of said at least one item, and any combination thereof.
  • the stored procedure can be a procedure known in the art. It can be generated from a procedure executed by an experienced operator, by an average of procedures by at least one experienced operator, by an average of procedures executed by the operator being assessed, by a simulation of the procedure, or an average of simulations of the procedure, executed by at least one operator, by a simulation of a procedure generated by simulation software, and any combination thereof.
  • a non-limiting example of a procedure generated by a combination of methods is a series of sutures to close an incision, where the movement of the tools between sutures is generated by simulation software, using the known path of the incision to generate the optimum tool movements between sutures, and the movement of the tools during a suture is generated from an average of the movements made by three experienced surgeons when carrying out suturing.
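As a non-limiting illustration of the "average of procedures" idea above, the sketch below resamples several recorded expert tool paths to a common length and averages them point-wise into one stored reference path. The arc-length resampling approach and all names are assumptions of this sketch, not methods specified in the application.

```python
# Minimal sketch: build a stored reference tool path by averaging several
# expert demonstrations (assumed approach, not quoted from the application).
import numpy as np

def resample_path(path: np.ndarray, n: int) -> np.ndarray:
    """Resample an (m, 3) path to n points, parameterized by arc length."""
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative length
    s_new = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(s_new, s, path[:, k]) for k in range(3)])

def average_paths(paths: list, n: int = 200) -> np.ndarray:
    """Point-wise mean of expert paths after resampling to a common length."""
    return np.mean([resample_path(np.asarray(p), n) for p in paths], axis=0)
```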
  • Parameters can be constants or functions of time.
  • a plurality of parameters can also together comprise a parameter, a "set parameter".
  • the set parameter can comprise a plurality of parameters at a specific time, at a specific location or both; it can comprise a parameter at different times; and any combination thereof.
  • a non-limiting example of a plurality of parameters at a specific time comprises the position, orientation and speed of a tool at the beginning of a procedure.
  • a non-limiting example of a parameter at different times is the orientation of a grasper at the time a needle starts penetration of tissue, the orientation of the grasper after the needle exits the tissue, the orientation of the grasper at the time the grasper grasps the suture thread, and the orientation of the grasper at the time the suture thread is cut.
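A minimal sketch of one possible representation of a "set parameter": a group of named parameter values recorded either at one instant or at several named events, as in the grasper-orientation example above. The class and field names are illustrative assumptions.

```python
# Hypothetical container for a "set parameter": parameter values grouped by
# named event (e.g. grasper orientation at several suturing milestones).
from dataclasses import dataclass, field

@dataclass
class SetParameter:
    samples: dict = field(default_factory=dict)  # {event: {parameter: value}}

    def record(self, event: str, **params: float) -> None:
        self.samples.setdefault(event, {}).update(params)

sp = SetParameter()
sp.record("needle_enters_tissue", grasper_angle_deg=32.0)
sp.record("suture_thread_cut", grasper_angle_deg=118.0)
```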
  • the state of a surgical object includes general properties such as its position, orientation, speed, and acceleration. It also includes surgical object- specific properties, such as whether a gripper is open or closed. There are various mechanisms by which a control system can determine these properties. Some of these mechanisms are described hereinbelow.
  • a stored record of a procedure which can be an identifiable unit, a surgical task, a complete procedure, a whole operation, and any combination thereof can be used for training purposes. At least one outcome is known for a stored procedure, so at least one procedure with at least one best outcome can be shown to a student, allowing the student to observe an example of best practice. Augmented reality can be used to enhance a student's learning experience.
  • a stored record can be tagged with at least one identifier, to enhance and simplify searching at least one library or database comprising at least one stored procedure.
  • a procedure can be tagged with an identifier of at least one operator, with a type of procedure, with a characteristic of a patient and any combination thereof.
  • this could be used to determine the quality of outcome for appendectomies performed by Dr. Jones and, therefore, Dr. Jones' skill in performing appendectomies.
  • Tagging can be manual or automatic.
  • an identifier of an operator will be entered manually.
  • a critical point or a fixed point can be tagged manually or automatically.
  • manual tagging can be by an operator indicating, by word, by gesture, by touching a touchscreen and any combination thereof, that a given point, such as the current position of a tool, is to be tagged as a critical point or a fixed point.
  • automatic tagging can occur when a system identifies a point as a critical point or a fixed point.
  • the identifier can be selected from a group consisting of: an identifier of an operator, an identifier of an operating room, a physical characteristic of said operating room, start time of a surgical procedure, end time of a surgical procedure, duration of a surgical procedure, date of a surgical procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a surgical procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.
  • a physical characteristic of an operating room can be selected from a group consisting of: temperature, humidity, size, time of cleaning, date of cleaning, cleaning procedure, cleaning material, type of lighting and any combination thereof.
  • a physical characteristic of a patient can be selected from a group consisting of: age, height, weight, body mass index, health status, medical status, and any combination thereof.
  • a surgical database analysis can compile historical records of previous surgical events to generate predicted success rates of an operator.
  • a surgical database analysis can compile historical records of previous surgical events to generate predicted success rates of a type of surgical event.
  • a non-limiting example of a statistical analysis is a percentage of success and failure for an individual or for a type of surgical event.
  • a change in performance related to the equipment can be flagged and a procedure stopped or changed, or a correction applied to at least one movement of at least one surgical object to maintain a procedure within limits of safety. Applying a correction can be done automatically or upon request by an operator. If a correction is applied upon command, the system will provide an indication that such a correction needs to be applied.
  • the indication can be a visual signal, an audible signal, a tactile signal, and any combination thereof. In some embodiments, a warning, visual, audible, tactile and any combination thereof, can be provided when an automatic correction is applied.
  • the visual signal can be selected from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a letter and any combination thereof.
  • the audible signal can be selected from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
  • the tactile signal can be selected from a group consisting of a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof.
  • the tactile signal can be applied to a member of a group consisting of: a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.
  • an operator's performance can be monitored and warnings can be flagged up if the operator's performance falls below a predetermined level of safety.
  • the outcome of a procedure can have more than one aspect.
  • an outcome of a surgical procedure can be a successful aspect, a partially successful aspect, a partial failure in an aspect, a complete failure in an aspect, and any combination thereof.
  • aspects of an outcome include: amount of bleeding after completion of a procedure, amount of bleeding during a procedure, return of an abnormality such as a tumor, speed of healing, adhesions, patient discomfort and any combination thereof.
  • a successful aspect would constitute: minimal bleeding after completion of a procedure, minimal bleeding during a procedure, no return of the abnormality, rapid healing, no adhesions and minimal patient discomfort.
  • a partially successful aspect would constitute: some bleeding after completion of a procedure, some bleeding during a procedure, minimal return of the abnormality, moderately rapid healing, a few small adhesions and some patient discomfort.
  • a partial failure in the aspect would constitute: significant bleeding after completion of a procedure, significant bleeding during a procedure, return of a significant amount of the abnormality, slow healing, significant adhesions and significant patient discomfort.
  • complete failure in the aspect would constitute: serious or life- threatening bleeding after completion of a procedure, serious or life-threatening bleeding during a procedure, rapid return of the abnormality, very slow healing or failure to heal, serious adhesions and great patient discomfort. It is clear that an outcome can include any combination of aspects.
  • a procedure could have minimal bleeding, both during and after the procedure (successful) with a few adhesions (partial success), but significant patient discomfort (partial failure) and rapid return of the abnormality (complete failure).
  • the system can be in communication with other devices or systems.
  • the AI-based control software can control surgical objects such as, but not limited to, a robotic manipulator, an endoscope, a laparoscope, a surgical tool and any combination thereof.
  • the system can be in communication with an advanced imaging system, it can function as part of an integrated operating room and any combination thereof.
  • the system, which comprises AI-based software, can have full connectivity with a member of a group consisting of: digital documentation, PACS, navigation, other health IT systems, and any combination thereof.
  • an operator's abilities are assessed in general areas of expertise (also referred to as features).
  • the assessment is performed by skilled practitioners who make the assessment from a video of the procedure, typically showing the same scene as viewed by the operator.
  • the assessment is performed automatically and autonomously by the system, from analysis of at least one image of at least a portion of a field of view.
  • the procedure can be an identifiable unit (such as, but not limited to, approach of at least one tool to the site of a suture), a single activity (a surgical task), for non-limiting example, making a single suture or making a single incision, or it can include a plurality of activities (e.g., a complete procedure), for non-limiting example, closing an incision with a plurality of sutures or executing an entire cholecystectomy, from the first incision through dissection of the tissue, to suturing and final removal of the tools.
  • a GOALS assessment can be used for assessing the quality of an operator (for example, for accreditation), it can be used for training and any combination thereof.
  • the operator receives accreditation if the final score is greater than a predetermined value.
  • the system will indicate to an operator where improvements can be made.
  • the indicating can be real-time, during the procedure, or off-line, from recorded videos. In some embodiments, indications to the operator can additionally be made by a skilled operator.
  • Table I gives an example of how, for training purposes, a practitioner's abilities can be scored in a GOALS analysis.
  • Table II gives an example of how, for accreditation purposes, a practitioner's abilities can be scored in a GOALS analysis.
  • the accreditation assessment evaluates more advanced and more complex skills, such as leadership ability, than the training assessment.
  • one or more parameters are assessed. These parameters, together, comprise a feature. Each parameter can be scored; the combined parameter scores form a feature score. All of the feature scores can be combined to form an assessment of skill level. It should be noted that the individual scores for the features can be used to indicate areas of weakness or strength in an operator.
  • a knot-tying feature can involve parameters such as total time spent tying a knot, idle time during knot tying, search time, approach time taken to reach the site of the knot, speed of tool movement during knot tying, motion smoothness, bimanual dexterity, the length of the path followed by at least one tool, and the distance efficiency, which is a comparison of at least one actual path length with at least one comparable optimal path length.
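The scoring hierarchy just described (parameter scores combine into a feature score, and feature scores combine into an overall assessment) might be sketched as follows; equal-weight averaging within a feature and simple summation across features are assumptions of this sketch, not weightings given in the application.

```python
# Sketch of the parameter -> feature -> GOALS-style score aggregation.
def feature_score(parameter_scores: dict) -> float:
    """Combine per-parameter scores (e.g. knot-tying time, idle time,
    motion smoothness) into one feature score; equal weights assumed."""
    return sum(parameter_scores.values()) / len(parameter_scores)

def goals_score(features: dict):
    """Return per-feature part scores (cf. 110 in Fig. 1) and their total
    (cf. 120 in Fig. 1)."""
    parts = {name: feature_score(p) for name, p in features.items()}
    return parts, sum(parts.values())

parts, total = goals_score({
    "depth perception":   {"overshoot": 4.0, "plane accuracy": 3.0},
    "bimanual dexterity": {"hand balance": 5.0, "idle time": 4.0},
})
```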
  • Fig. 1 shows an exemplary display of the features of an exemplary GOALS analysis, with the part scores for each feature (110) and the total score for the assessment (120). These are shown overlaid on a display (100) of a part of a surgical environment.
  • Non-limiting examples of a parameter which can be used in an assessment include: tissue damage, pain caused to a patient, effectiveness of a procedure in rectifying a medical condition, long-term post-operative pain to the patient and any combination thereof.
  • an assessment would be carried out by at least one skilled professional.
  • an assessment can be carried out automatically, preferably using an artificial intelligence (AI)-based system.
  • At least one movement of at least one surgical object, manipulated by an operator, manipulated by the system and any combination thereof, a position of the surgical object, a force exerted by (and on) a surgical object and any combination thereof can be determined, from analysis of at least one image of at least a part of a surgical environment, by at least one tracking subsystem, by at least one sensor, and any combination thereof.
  • Image analysis can be used to determine the location of at least one patient feature such as, but not limited to, at least a portion of: an organ, a blood vessel, a nerve, a lesion, a tumor, tissue, a bone, a ligament and any combination thereof.
  • tool identification and tracking and image analysis can be used to determine the location of at least one substantially non-moving surgical object in the surgical environment such as, but not limited to, a surgical tool such as a swab; a non-tool item in a surgical environment such as a glass shard or a bomb fragment; and any combination thereof.
  • the totality of the data on location, orientation and movement of surgical objects, non-tool items and patient features provides spatiotemporal 3-dimensional data which characterize the surgical environment and at least one item within it.
  • the spatiotemporal 3-dimensional data can be stored in a database for later analysis.
  • Other measurable and storable data include: grasping force, torsion about a tool axis, Cartesian force, and any combination of these and the positions, orientations and movements disclosed above. The system can also record at least one image of the surgical environment, up to a plurality of images encompassing an entire procedure. The plurality of images can be synchronized with force and position data.
  • sufficient depth information is provided so that the position and orientation of at least one item in the field of view can be determined in true 3D, enabling accurate determination of distance between two items, relative angle between two items, angle between three items, area of at least one item, area between at least two items, volume of at least one item, volume encompassed by at least two items, and any combination thereof.
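For illustration, the true-3D measurements listed above reduce to simple vector arithmetic once items are represented by 3D points; the helpers below (with each item reduced to a single representative point, an assumption of this sketch) compute distance, the angle at the middle of three items, and the triangle area they span.

```python
# Illustrative true-3D geometry helpers over representative item points.
import numpy as np

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two items."""
    return float(np.linalg.norm(b - a))

def angle_deg(a: np.ndarray, vertex: np.ndarray, c: np.ndarray) -> float:
    """Angle (degrees) at `vertex` formed by items a and c."""
    u, v = a - vertex, c - vertex
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def triangle_area(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Area spanned by three items."""
    return 0.5 * float(np.linalg.norm(np.cross(b - a, c - a)))
```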
  • the 3D position and orientation of an item can be determined using data from multiple imaging devices, from at least one sensor attached to at least one surgical object, from at least one sensor attached to at least one manipulator, from "dead reckoning", from image analysis and any combination thereof.
  • an accurate determination can be made as to whether a surgical object's position, orientation, speed, acceleration, smoothness of motion and other parameters are correct. It is also possible to determine if a surgical object is accurately following a desired path, whether a collision can occur between two items, and whether the distance between two items is small enough that one or both can be activated.
  • An item that can be activated or deactivated based on distance information can include, but is not limited to, an ablator, a gripper, a fluid source, a light source, a pair of scissors, and any combination thereof.
  • activation of an ablator is best delayed until the ablator is close to the tissue to be ablated so that heating does not occur away from the tissue to be ablated, to minimize the possibility of damage to other tissue.
  • the ablator can be automatically activated when a distance between an ablator and the tissue to be ablated is less than a predetermined distance, so that there is no unnecessary heating of fluid or tissue away from the tissue to be ablated and so that ablation is carried out efficiently.
  • an ablator could be activated when the 2D distance was small, but the distance perpendicular to the 2D plane (upward) was still large. In this case, the operator (or the system, for autonomic ablation) could be unaware of this until it was observed that the ablator was heating fluid rather than ablating tissue. The operator (or the system, for autonomic ablation) would then have to move the ablator downward until ablation could occur, but would not have, nor could be given, information on how far downward to move. At this point, either the ablator could be deactivated and moved until it contacted the tissue, or the ablator could be left activated until ablation began. In either case, unwanted damage to the tissue is likely.
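The ablator example above comes down to gating activation on the full 3D tip-to-tissue distance rather than the in-plane 2D distance; a minimal sketch, with an assumed threshold value, follows.

```python
# Sketch of distance-gated ablator activation; threshold is an assumption.
import numpy as np

ACTIVATION_DISTANCE_MM = 2.0  # assumed "predetermined distance"

def should_activate_ablator(tip: np.ndarray, target: np.ndarray) -> bool:
    """Activate only when the true 3D distance is below threshold.
    Gating on the in-plane (2D) distance alone could fire while the tip
    is still far above the tissue, the failure mode described above."""
    return float(np.linalg.norm(target - tip)) < ACTIVATION_DISTANCE_MM
```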
  • Table III gives a non-limiting example of metrics which can be assessed. In a given embodiment, any combination of metrics can be used.
  • Fig. 2 shows, schematically, the 3D movements, over time, of the tip of a surgical tool during a procedure.
  • Fig. 3A shows the speed of the surgical tool tip during the procedure
  • Fig. 3B shows the acceleration of the surgical tool tip during the procedure.
  • the speed, acceleration and jerk for the first part of the procedure are shown in Figs. 4A, B and C, respectively. From these, the metrics of Table IV can be calculated.
  • Table IV shows exemplary means of calculating the metrics of Table III.
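As one plausible reading of how Table IV-style calculations can proceed from the tip trajectory of Figs. 2-4, the sketch below derives speed, acceleration and jerk by finite differences from uniformly sampled 3D tip positions and computes a few common motion metrics; the exact metric definitions here are common-usage assumptions, not quotations from the tables.

```python
# Sketch: motion metrics from an (n, 3) array of tip positions sampled
# every dt seconds (metric definitions are assumed, not from Table IV).
import numpy as np

def motion_metrics(pos: np.ndarray, dt: float) -> dict:
    vel = np.gradient(pos, dt, axis=0)    # velocity
    acc = np.gradient(vel, dt, axis=0)    # acceleration (cf. Figs. 3B, 4B)
    jerk = np.gradient(acc, dt, axis=0)   # jerk (cf. Fig. 4C)
    speed = np.linalg.norm(vel, axis=1)   # speed (cf. Figs. 3A, 4A)
    path_len = float(np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1)))
    straight = float(np.linalg.norm(pos[-1] - pos[0]))
    return {
        "task_time": dt * (len(pos) - 1),
        "path_length": path_len,
        "distance_efficiency": straight / path_len if path_len else 1.0,
        "mean_speed": float(speed.mean()),
        "speed_peaks": int(np.sum((speed[1:-1] > speed[:-2]) &
                                  (speed[1:-1] > speed[2:]))),
        "rms_jerk": float(np.sqrt((jerk ** 2).sum(axis=1).mean())),
    }
```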
  • Performing a task quickly typically means that it can be performed without external guidance. It does not necessarily mean that the task is being performed correctly.
  • Time is not necessarily a measure of ability; different surgeons can work at different speeds but attain the same quality of outcome.
  • Training for time teaches trainees to focus on working fast, rather than working carefully and accurately.
  • An overall time metric can be influenced by other aspects of the training scenario, for example, distracting factors or other differences between a practice scenario and an assessment scenario.
  • task completion time can be useful as a measure of trainee skill level when combined with other metrics.
  • significant correlation was found between experience and some of the position-based metrics and most of the force-based metrics, with the correlations being weaker for simpler tasks and stronger for more complex tasks.
  • the strongest correlation found for the position-based metrics was with the speed peaks and jerk metrics.
  • the strongest correlation for the force-based metrics was found for the integrals and derivatives of the forces.
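An analysis of the kind behind these correlation statements can be sketched as rank-correlating operator experience against each computed metric; the per-operator data layout below is assumed, and scipy's spearmanr supplies the rank correlation.

```python
# Sketch: Spearman rank correlation between experience and each metric.
from scipy.stats import spearmanr

def metric_correlations(experience, metrics: dict) -> dict:
    """experience: per-operator experience measure (e.g. case count);
    metrics: {name: per-operator metric values, in the same order}."""
    out = {}
    for name, values in metrics.items():
        rho, _p = spearmanr(experience, values)
        out[name] = float(rho)
    return out
```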
  • the system can determine a type of procedure being executed, such as, but not limited to, suturing, making an incision, clearing smoke or fluid from a surgical field, ablating, etc.
  • the type of procedure will be stored as a searchable identifier for the procedure. Determination of type of procedure can occur in real time, offline and any combination thereof.
  • the type of procedure is determined automatically and autonomously by the system. In less-preferred embodiments, the type of procedure is input to the system by an operator.
  • the system can determine an optimal variant of a procedure.
  • an optimal variant of a procedure is input into the system.
  • the outcome of the procedure can be input into the system, and the system can then assess the procedure in light of the outcome to determine, for non-limiting example, whether a different choice of procedure could have improved an outcome, which error(s) adversely affected the outcome and any combination thereof.
  • the system compares the procedure as carried out with an optimal procedure, and indicates to the operator being trained or assessed such items as: a preferred path for an incision, deviations from an optimal path, an optimal pressure on tissue for a tool, a more optimal pressure if a less-than-optimal pressure is being used, an optimal or more optimal position for lighting, suction or other auxiliary equipment, an optimal or more optimal temperature for ablation equipment, an optimal forward speed for a cutting instrument (in the direction of elongation of the cut), an optimal speed for lateral movement of a cutting blade (e.g., lateral movements of the blades of a pair of scissors), an optimal pressure for a cutting instrument, warnings such as, but not limited to, too close an approach to tissue, too deep an incision, too shallow an incision, too much or too little pressure being used, and any combination thereof.
  • An image captured by an imaging device can be a 2D image, a 3D image, a panoramic image, a high resolution image, a 3D image reconstructed from at least one 2D image, a stereo image, and any combination thereof.
  • Additional information can be obtained from an accelerometer, a motion sensor, an IMU, a sensor wearable by an operator, a sensor attachable to an item, an RFID tag attachable to an item, an ultrasound sensor, an infrared sensor, a CT image, an MRI image, an X-ray image, a gyrometer, a tachometer, a shaft encoder, a rotary encoder, a strain gauge and any combination thereof.
  • the sensor can be in mechanical communication with a surgical object, in electrical communication, and any combination thereof.
  • Electrical communication can be wired communication, wireless communication and any combination thereof.
  • the state of a surgical tool can include general properties such as its position, orientation, speed, and acceleration. It can also include tool-specific properties, such as whether a gripper is open or closed. There are various mechanisms by which these properties can be determined. Some of these mechanisms are described hereinbelow.
  • the state of a surgical object can include, but is not limited to, a lighting level, an amount of suction, an amount of fluid flow, a heating level in an ablator, a speed of lateral movement of at least one blade of a pair of scissors, a speed of movement of at least one blade of a cutter, an amount of defogging, an amount of smoke removal and any combination thereof.
  • Image-based tracking can identify at least one property of at least one surgical object, including a surgical object attached to a robotic manipulator, a surgical object controlled directly by an operator and a static surgical object.
  • Image-based tracking can also track items in an environment besides the surgical objects, as described herein.
  • image-based tracking can be used to avoid an obstacle, to provide an instruction (e.g., to an operator) to avoid an obstacle, to focus (for example, an endoscope) on a point of interest, to provide an instruction (e.g., to an operator) to focus on a point of interest, and any combination thereof.
  • the system can evaluate how an operator interacts with at least one item to ascertain the operator's intent or to identify a procedure that is currently being performed.
  • At least one item can be tracked by identifying, from at least one image which includes the item, at least one inherent, distinguishing characteristic of the item, for example its shape, color, texture, or movement.
  • an object can be modified to make it more recognizable in an image. For instance, a colored marker, a tracking pattern and any combination thereof can be affixed to at least one surgical object to aid in detection by a computer algorithm, to aid in identification by an operator and any combination thereof.
  • An imaging device can operate in the infrared (IR), in the visible, in the UV, and any combination thereof.
  • Surgical object position and orientation can be determined, for example, via geometrical equations using analysis of a projection of a surgical object into an image plane or via at least one Bayesian classifier for detecting a surgical object in at least one image; the search for a surgical object can be further restricted by means of computing the projection of a surgical object's insertion point into the body of a patient.
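For illustration, projecting the known 3D insertion point into the image plane, which the passage above uses to restrict the search for a surgical object, can be done with a standard pinhole model; the intrinsic parameters below are assumed example values.

```python
# Sketch: pinhole projection of a 3D point (camera coordinates) to pixels.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed fx, skew, cx
              [  0.0, 800.0, 240.0],   # assumed fy, cy
              [  0.0,   0.0,   1.0]])

def project(point_cam: np.ndarray):
    """Return (u, v) pixel coordinates of a 3D point in front of the camera."""
    uvw = K @ point_cam
    return float(uvw[0] / uvw[2]), float(uvw[1] / uvw[2])
```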
  • Determination of surgical object position and orientation in an image-based system can be rendered more difficult by an ambiguous image structure, an occlusion caused by blockage of the line of sight (e.g., by at least one other surgical object), blood, an organ, smoke caused by electro-dissection, and any combination thereof.
  • Particle filtering in the Hough space can be used to improve tracking in the presence of smoke, occlusion, motion blurring and any combination thereof.
  • a displayed image can be enhanced by a member of an enhancement group consisting of: a colored area, a patterned area, an overlay and any combination thereof.
  • the member of the enhancement group can indicate the presence of, enhance recognizability of, and any combination thereof at least one item selected from a group consisting of a blood vessel, an organ, a nerve, a lesion, a tool, blood, smoke, and any combination thereof.
  • An image can also be enhanced by an assessment score, by a suggestion, by an instruction, by a distance, by an angle, by an area, by a volume between items, by a volume, by a size scale, by information from a medical history, and any combination thereof.
  • a suggestion, an instruction and any combination thereof can be visual, audible and any combination thereof.
  • a visual overlay can include color, a line, an area, a volume, an arrow, a pattern, an image, and any combination thereof.
  • An audible overlay can include a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word, and any combination thereof. Words can be independent or can comprise a soundtrack to a plurality of images, such as a video.
  • Fig. 5 shows an example of visual advice to a trainee, for an incision in a liver (510).
  • the optimal path for the incision (520) is shown by a dashed line, which approximately bisects the right lobe.
  • the dotted line (530) indicates the actual path of the incision, which is not accurately following the optimal path (520).
  • the system provides instructions to the operator, as shown by the heavy arrow (540) which is overlaid on the screen image and indicates to the operator the direction in which the scalpel should move in order to return to the optimal path (a computation of such a correction arrow is sketched below).
  • the optimal path (520) can be overlaid on the screen, so that an operator need only follow an indicated marking to follow an optimal path.
  • a marking can be visual, audible and any combination thereof.
  • a visual marking can be a line, an arrow, a pattern, a color change, and any combination thereof.
  • An audible indicator can be a voice (e.g., left, right, up, down, forward, back, harder, softer, more light, ablate, cut, suture, etc.), a predetermined sound pattern (rising pitch for left, lowering pitch for right, etc.), and any combination thereof.
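A minimal sketch of the Fig. 5 guidance arrow (540): find the point of the stored optimal path (520) closest to the current scalpel tip and return the offset the operator should move along to rejoin it. Sampling the optimal path as a polyline of 3D points is an assumption of this sketch.

```python
# Sketch: correction vector from the tool tip back to the optimal path.
import numpy as np

def correction_vector(tip: np.ndarray, optimal_path: np.ndarray):
    """optimal_path: (n, 3) sampled points of the stored optimal path.
    Returns the offset to overlay as an arrow, and its length."""
    d = np.linalg.norm(optimal_path - tip, axis=1)
    offset = optimal_path[np.argmin(d)] - tip
    return offset, float(np.linalg.norm(offset))
```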
  • Figs. 6-8 show an exemplary embodiment of a method of automatically assessing or training an operator.
  • the first step is to acquire at least one image of a field of view of an imaging device (610).
  • the image is analyzed (620), as described herein, to identify, in 3D, position, orientation and movement of at least one surgical object and preferably all of the items in the field of view, and the relationship of at least one surgical object to the surgical environment (i.e., the organs, blood vessels, nerves, etc. of the patient) and to other surgical objects in the surgical environment.
  • the force exerted by or on at least one surgical object can also be acquired.
  • At least one of the metrics described herein can be determined. From the at least one metric and from the procedure being executed, with the procedure being determinable either from user input or from analysis of at least one image, at least one actual metric of at least one surgical object can be compared (630) with at least one stored metric for the same at least one surgical object in the same procedure, with the stored at least one metric providing an optimum metric in the procedure. If (640) the at least one actual metric is substantially the same as the at least one stored metric, then the actual movement is well executed (circle 2) and the method executes the steps associated with a well-executed movement (Fig. 7). If (640) the actual at least one metric is not substantially the same as the stored at least one metric, then the actual movement is not well executed (circle 3) and the method executes the steps associated with an ill-executed movement (Fig. 8).
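Decision step (640) might be sketched as below: each actual metric is compared with its stored optimum, and the movement is called well executed when all metrics agree within a tolerance. The per-metric relative tolerance is an assumed stand-in for the application's unspecified "substantially the same" test.

```python
# Sketch of decision (640): actual vs. stored metrics within a tolerance.
def well_executed(actual: dict, stored: dict, rel_tol: float = 0.15) -> bool:
    """True when every actual metric is within rel_tol of its stored value
    (rel_tol and the all-metrics rule are assumptions of this sketch)."""
    return all(
        abs(actual[k] - stored[k]) <= rel_tol * max(abs(stored[k]), 1e-9)
        for k in stored
    )
```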
  • Fig. 7 shows an exemplary embodiment of the method if the at least one actual metric is substantially the same as the at least one stored metric. If the at least one actual metric is substantially the same as the at least one stored metric, then the procedure is, at this point, well-executed (710). If (720) the system is assessing an operator, then (730) the assessment will show that, at this point in the procedure, the operator has an assessment of "good surgical technique". After assessment, the system checks (740) whether the procedure is complete. If it is not complete, the system (circle 1) acquires at least one new image and repeats the cycle. If the procedure being assessed is complete, the system creates (750) a cumulative assessment, such as a GOALS score, for the procedure just completed. If the surgical intervention comprises a number of procedures, at this point (not shown in Figs. 6-7) either a next procedure is identified and the cycle repeats, or, if the surgical intervention is complete, an overall assessment is made and the system terminates.
  • Fig. 8 shows an exemplary embodiment of the method if the at least one actual metric is not substantially the same as the stored at least one metric. If the at least one actual metric is not substantially the same as the at least one stored metric, then, at this point, the procedure is not well-executed (810). If (820) the system is assessing an operator, then (830) the assessment will show that, at this point in the procedure, the operator has an assessment of "poor surgical technique". After assessment, the system checks (850) whether the procedure is complete. If it is not complete, the system (circle 1) acquires at least one new image and repeats the cycle. If (850) the procedure is complete, at this point (not shown in Figs. 6, 8) either a next procedure is identified and the cycle repeats, or, if the surgical intervention is complete, the system terminates.
  • At least one indication can be given (840) for a means of correcting an error and bringing the procedure closer to or back to a more optimum procedure.
  • An indication can be visual, audible or both, as disclosed hereinabove.
  • an indication can be provided as an overlay on a display.
  • an assessment can include "poor technique", "moderate technique", and "good technique". Technique can also be assessed on a points scale, as in a GOALS analysis, with, for non-limiting example, very poor technique being 0 points, while very good technique is 5 points.
  • a single score is given for a procedure; in some embodiments, individual scores are given for the features, the component parts of a technique, for non- limiting example, as shown in Tables I and II above.
  • the system can simultaneously carry out both an assessment and training.
  • both assessment (Fig. 7) and training (Fig. 8) can be carried out.
  • for a surgical intervention comprising a plurality of procedures, training can be carried out, assessment can be carried out, both can be carried out, or neither can be carried out.
  • which of these is done for any given procedure is independent of what is done for any other procedure.
  • assessment and training can be carried out off-line. If assessment, training or both is off-line, then the "acquisition of at least one image" above is retrieval of at least one stored image from a database.
  • a best practice or optimal procedure can be compiled from at least one fragment of at least one procedure executed by at least one surgeon, it can be a computer- generated procedure and any combination thereof.
  • the plurality of fragments of a procedure can have been executed by a plurality of operators, thus combining the best parts of the different procedures to generate one best-practice procedure.
  • an object can be modified to make it more recognizable in an image.
  • a colored marker, a tracking pattern, an LED and any combination thereof can be affixed to at least one surgical object to aid in detection by a computer algorithm or to aid in identification by an operator.
  • a minimum of three non-collinear markers is necessary for determining six DOF, if the sole means of determining tool location and orientation is a marker.
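One standard way (assumed here, not quoted from the application) to recover the six-DOF pose from three or more non-collinear markers is the Kabsch algorithm: the rigid transform between the markers' known positions in the tool frame and their observed 3D positions is found via SVD.

```python
# Sketch: 6-DOF tool pose from >= 3 non-collinear markers (Kabsch/SVD).
import numpy as np

def marker_pose(tool_pts: np.ndarray, observed_pts: np.ndarray):
    """tool_pts, observed_pts: (n, 3), n >= 3, corresponding marker points.
    Returns R, t with observed ~= R @ tool + t."""
    cp, cq = tool_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (tool_pts - cp).T @ (observed_pts - cq)    # cross-covariance
    U, _S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```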
  • a tracking system comprising a modifier, as described above, can provide high accuracy and reliability, but it depends on a clear line of sight between a tracked tool and an imaging device.
  • the tracking subsystem can comprise at least one sensor (such as, for non-limiting example, a motion sensor) on at least one surgical object, at least one processor configured to determine movement of at least one surgical object by determining change in position of at least one robot arm, and any combination thereof.
  • the sensor is preferably in communication with at least one surgical object; the communication can be electrical or mechanical and it can be wired or wireless.
  • the sensor can be, for non-limiting example, an electromagnetic sensor; an ultrasound sensor; an inertial sensor to sense the angular velocity and the acceleration of the tool or other item; a gyroscope, an accelerometer, an IMU, an RFID tag and any combination thereof.
  • An infrared tracking system can be used, which can locate at least one object that has at least one infrared marker attached to it. The object being tracked does not require any wires, but a line of sight from the tracking system to the tracked objects must be kept clear.
  • a magnetic tracking system can also be used. At least one magnetic sensor is affixed to at least one surgical object, and a magnetic transmitter emits a field that at least one sensor can detect. However, the presence of objects in the operating room that affect or are affected by magnetic fields can interfere with tracking.
  • At least one IMU can be used.
  • An IMU incorporates a plurality of sensors, such as an accelerometer, a gyroscope, a magnetometer, a velocity sensor and any combination thereof to track at least one of orientation, position, velocity, angular velocity and acceleration of a surgical object.
  • An IMU can transmit data wirelessly and has a high update rate.
  • an IMU can experience increasing error over time (especially in position), and some types of IMU sensor can be sensitive to interference from other devices in an operating room.
  • a rectification algorithm can be applied in order to reduce the effects of error accumulation.
  • Kinematic tracking can be used to determine at least one property of at least one surgical tool maneuverable by a robotic manipulator.
  • a typical robotic manipulator comprises at least one jointed arm that manipulates at least one surgical tool on behalf of an operator.
  • a robot arm can also include at least one sensor (such as, but not limited to, an encoder, a potentiometer, a motion sensor, and an accelerometer) that can accurately determine the state of each joint in the arm. If the fixed properties of the physical structure of the robot arm are known (lengths of links, twists, etc.), they can be combined with the dynamic joint values to form a mathematical model of the robot arm. At least one property of a manipulated surgical object, such as a position and orientation of at least one portion of the surgical object, can be computed from this model.
  • Positional information resulting from kinematic tracking is generally expressed in terms of a coordinate system that is specific to the robot. Techniques well known in the art can be used to generate a transformation that maps between the coordinate system relative to the robot and a coordinate system relative to an imaging device imaging the FOV.
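As a compact planar illustration of kinematic tracking and the frame mapping just described, the sketch below chains per-joint homogeneous transforms (from encoder angles and known link lengths) to obtain the tip in robot coordinates, then applies an assumed fixed calibration transform into the imaging device's frame. All names and values are illustrative.

```python
# Planar (2D) sketch of kinematic tracking; all values are illustrative.
import numpy as np

def link_transform(theta: float, length: float) -> np.ndarray:
    """Homogeneous 2D transform: rotate by joint angle, advance one link."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1.0]])

def tip_in_camera(joint_angles, link_lengths, T_cam_robot: np.ndarray):
    """Chain the joints to the tip in robot coordinates, then map to the
    camera frame via an assumed fixed calibration transform."""
    T = np.eye(3)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ link_transform(theta, length)
    tip_robot = T @ np.array([0.0, 0.0, 1.0])   # homogeneous tip position
    return T_cam_robot @ tip_robot
```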
  • At least one LED can be used to measure distance between a surgical object and tissue, typically by reflecting from the tissue light emitted by an LED on a surgical object.
  • a surgical tool can have a marker attached near its handle (outside the body) and a colored patch near its tip (inside the body).
  • movement of the marker is tracked by an imaging device outside the body (outside-outside), while movement of the colored patch is tracked by an imaging device inside the body (inside-inside).
  • an EM emitter can be close to the tip of the surgical tool, while an EM sensor is attached to an operating table (inside-outside).
  • An electromagnetic (EM) tracking system can be used to locate at least one surgical object or another object of interest. By computing the position and orientation of at least one small electromagnetic receiver on a surgical object, a dynamic, preferably real-time measurement of the position and orientation of a surgical object can be found.
• At least one electromagnetic receiver is attached to at least one hand of at least one operator, tracking the changing position and orientation of at least one surgical object by tracking the movement of the at least one hand.
• Keeping a sensor in a stable position during the entire execution of a surgical procedure can be difficult.
• Movement of an operator's hand need not be directly related to movement of a surgical object.
  • An electromagnetic tracking system does not need a clear line of sight, but is strongly affected by ferromagnetic objects, such as a steel tool or electronic equipment in a clinical environment, which can seriously degrade tracking accuracy by affecting local magnetic fields. Moreover, the need for wires in systems of this type can interfere with the use of laparoscopic instruments.
• Combined methods can also be used, for non-limiting example, a combination of a passive optical marker and an EM sensor on a tool in order to minimize the effects of occasional blocking of the line-of-sight of the optical markers and distortion in the EM system.
• At least one force/torque sensor can be mounted on at least one surgical object. This exemplary combination can accurately measure position, orientation, velocity, acceleration, motion smoothness, and force applied by the surgical object, thereby enabling measurement and assessment of movement metrics such as those, for non-limiting example, listed in Table III.
  • Ultrasound can be used in much the same manner as optical tracking.
• Three or more emitters are mounted on a surgical object to be tracked. Each emitter generates a sonic signal that is detected by a receiver placed at a fixed known position in the environment. Based on at least one sonic signal generated by at least one emitter, the system can determine at least one position of at least one portion of at least one surgical object by triangulation. By combining three receivers, an ultrasound tracker can also determine orientation of at least a portion of at least one surgical object.
• Accuracy of an ultrasound tracker can suffer from the environment-dependent velocity of the sound waves, which varies with temperature, pressure and humidity. The loss of energy of an ultrasonic signal with distance also limits the range of tracking.
• Acoustic tracking requires line-of-sight, lack of which can affect the quality of the signal.
• The surgical tools comprise neither markers nor sensors, although at least one sensor can be used on at least one robotic manipulator.
• The system determines tool position and orientation via analysis of at least one image, preferably an image provided by a laparoscopic imaging device, of which at least a portion is displayable and is therefore visible to an operator, as described above.
• The system further comprises at least one restricting mechanism configured to restrict the movement of at least one surgical object.
• A warning can be provided by use of a restricting mechanism, by a visual signal, by an audible signal, by a tactile signal and any combination thereof.
• A visual signal can be a constant-color light, a changing-color light, a constant-brightness light, a varying-brightness light, a constant-size pattern, a changing-size pattern, a constant-shape pattern, a changing-shape pattern, and any combination thereof.
• An audible signal can be a constant-pitch sound, a changing-pitch sound, a constant-loudness sound, a varying-loudness sound, and any combination thereof.
• A tactile signal can be a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof.
• A tactile signal can be applied to a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.
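For non-limiting example, the kinematic-tracking bullets above can be made concrete with the following minimal sketch of forward kinematics; the planar two-link arm, the link lengths, and the rigid robot-to-camera transformation are illustrative assumptions of this sketch, not parameters of the disclosed system.

```python
import numpy as np

def forward_kinematics_planar(link_lengths, joint_angles):
    """Tool-tip position and orientation of a planar serial arm.

    A real manipulator model would also include link twists and offsets
    (e.g., Denavit-Hartenberg parameters); the planar two-link case is
    illustrative only.
    """
    x = y = theta = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle                  # accumulate joint rotations read from encoders
        x += length * np.cos(theta)     # advance along the current link
        y += length * np.sin(theta)
    return np.array([x, y]), theta

def robot_to_camera(point_robot, rotation, translation):
    """Map a point from robot coordinates into imaging-device coordinates
    using a known rigid transformation (rotation matrix and offset)."""
    return rotation @ point_robot + translation

# Illustrative use: links of 0.30 m and 0.25 m, joint angles in radians.
tip_xy, tip_heading = forward_kinematics_planar([0.30, 0.25], [0.4, -0.2])
```

Combining such a model with the robot-to-camera transformation discussed above allows a kinematically tracked tool tip to be expressed in the coordinate system of the imaging device.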

Abstract

The present invention provides a system for assessing a skill level for execution of at least one surgical procedure, comprising: a. at least one imaging device configured to provide at least one image in a field of view of a surgical environment; b. at least one processor in communication with said imaging device; c. at least one communicable database configured to (i) store said at least one surgical procedure; said at least one surgical procedure is characterized by at least one stored parameter of at least one item, Pstored; (ii) real-time store said at least one parameter Pitem, of said at least one item; wherein from comparison between said at least one parameter of said at least one item, Pitem, and at least one of said Pstored, said skill level is providable.

Description

AUTONOMIC GOALS-BASED TRAINING AND ASSESSMENT SYSTEM FOR
LAPAROSCOPIC SURGERY
FIELD OF THE INVENTION
The present invention generally pertains to a system and method for automated training and assessment of a surgical operator.
BACKGROUND OF THE INVENTION
Present systems of training and assessing surgeons and other medical operators require considerable and time-consuming input from skilled professionals and are subjective, since they depend on assessment of the operator's skill by a human being.
Furthermore, in laparoscopic surgery, the assessment is complicated by the fact that the assessor or trainer has, at best, a restricted view of the actions of the operator. In a best-case scenario, the assessor or trainer has the same view as the operator - a display, usually 2-dimensional, of at least part of the surgical environment.
However, unlike in conventional surgery, this does not allow the trainer or assessor to see the operator's actions from any other point of view, nor is there any easy or obvious way for a trainer to indicate to an operator, other than verbally, how to correct an action.
It is therefore a long felt need to provide a training or assessment system for laparoscopic surgery which does not require considerable and time-consuming input from a skilled professional.
SUMMARY OF THE INVENTION
It is an object of the present invention to disclose a system and method for automated training and assessment of a surgical operator.
It is another object of the present invention to disclose a system for assessing a skill level for execution of at least one surgical procedure, comprising: a. at least one imaging device configured to provide at least one image in a field of view of a surgical environment; b. at least one processor in communication with said imaging device; said at least one processor is configured to (i) analyze said at least one image from said at least one imaging device, (ii) identify from said at least one image at least one spatial position of at least one item; and, (iii) calculate from said at least one spatial position at least one parameter of said at least one item, Pitem; and, c. at least one communicable database configured to (i) store said at least one surgical procedure; said at least one surgical procedure is characterized by at least one stored parameter of at least one item, Pstored; (ii) real-time store said at least one parameter Pitem of said at least one item; wherein from comparison between said at least one parameter of said at least one item, Pitem, and at least one of said Pstored, said skill level is providable.
It is another object of the present invention to disclose the system as described above, wherein said item is selected from a group consisting of: at least one surgical tool, a light source, a blood vessel, an organ, a nerve, and a ligament, a lesion, a tumor, smoke, fluid flow, bleeding, a fixed point, a critical point, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said at least one stored parameter Pstored is derivable from at least one stored spatial position of said item.
It is another object of the present invention to disclose the system as described above, wherein at least one feature is definable according to at least one said parameter Pitem.
It is another object of the present invention to disclose the system as described above, wherein at least one predetermined feature is definable according to at least one said stored parameter Pstored.
It is another object of the present invention to disclose the system as described above, wherein comparison of said at least one feature to said at least one predetermined feature generates at least one partial score.
It is another object of the present invention to disclose the system as described above, wherein said at least one feature is selected from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said at least one predetermined feature is selected from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said at least one partial score is within a predetermined range.
It is another object of the present invention to disclose the system as described above, wherein a total score is calculable as a sum of said at least one partial score.
It is another object of the present invention to disclose the system as described above, wherein is displayable a member of a group consisting of said at least one partial score, said total score and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein a smaller difference between said at least one feature and said at least one predetermined feature results in said at least one partial score being closer to at least one preferred score.
It is another object of the present invention to disclose the system as described above, wherein said at least one preferred score is a maximum of said predetermined range.
It is another object of the present invention to disclose the system as described above, wherein a GOALS score is generatable from a sum of said at least one partial score.
It is another object of the present invention to disclose the system as described above, wherein said skill level is assessable from said GOALS score.
It is another object of the present invention to disclose the system as described above, wherein said skill level is selected from a group consisting of: poor, moderate, good and any combination thereof.
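For non-limiting example, the partial-score relationship described above can be sketched as follows; the linear penalty, the 0-5 range, and the tolerance are illustrative assumptions, the text requiring only that a smaller difference between a feature and its predetermined counterpart yield a partial score closer to the preferred score (here the maximum of the predetermined range).

```python
def partial_score(feature, predetermined, score_min=0.0, score_max=5.0, tolerance=1.0):
    """Map the difference between a measured feature and its predetermined
    value into a partial score within [score_min, score_max]; smaller
    differences yield scores closer to score_max, the preferred score."""
    fraction = min(abs(feature - predetermined) / tolerance, 1.0)  # saturate large errors
    return score_max - fraction * (score_max - score_min)
```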
It is another object of the present invention to disclose the system as described above, wherein said at least one stored parameter Pstored is generatable from at least one stored surgical procedure.
It is another object of the present invention to disclose the system as described above, wherein said at least one stored surgical procedure is generatable from a member of a group consisting of: a procedure executed by an experienced operator, an average of procedures by at least one experienced operator, an average of procedures executed by an operator being assessed, a simulation of a procedure, an average of simulations of a procedure executed by at least one operator, a simulation of a procedure generated by simulation software and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said skill level is providable from a second parameter derivable from a member of a group consisting of: a signal from a sensor, a forward kinematics calculation, an inverse kinematics calculation, a CT image, an MRI image, an X-ray image and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein at least one overlay on at least a portion of at least one said image of said field of view of said surgical environment is selected from a group consisting of: said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one partial score, said GOALS score, said skill level, a suggestion, an instruction, a distance, an angle, an area, a volume, a size scale, information on a medical history of a patient, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein communication between said at least one sensor and said at least one item is selected from a group consisting of: mechanical communication, wired communication, wireless communication and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said sensor is selected from a group consisting of an electromagnetic sensor; an ultrasound sensor; an inertial sensor to sense the angular velocity and the acceleration of the tool or other item; an accelerometer, a motion sensor, an IMU, a sensor wearable by an operator, a sensor attachable to a surgical object, an RFID tag attachable to a surgical object, an ultrasound sensor, an infrared sensor, gyro-meter, tachometer, shaft encoder, rotary encoder, strain gauge and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said database is configured to store as a function of time a member of a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said database is configured to store, for any given time t, a member of a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said comparison is performable in a manner selected from a group consisting of: in real time, between a stored said at least one parameter Pitem and said at least one stored parameter Pstored, between a stored said at least one parameter Pitem and a stored said at least one second parameter, and between a stored said at least one second parameter and said at least one stored parameter Pstored.
It is another object of the present invention to disclose the system as described above, wherein said surgical procedure is selected from a group consisting of an identifiable unit, a surgical task, a complete procedure, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said skill level is providable either in real time or off-line.
It is another object of the present invention to disclose the system as described above, wherein said at least one parameter Pitem is selected from a group consisting of: time to
execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said at least one stored parameter Pstored is selected from a group consisting of: time
to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said at least one second parameter is selected from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said system is additionally configured to indicate a correction to said surgical procedure.
It is another object of the present invention to disclose the system as described above, wherein said correction is determinable from a member of a group consisting of: a comparison between said at least one parameter Pitem and said at least one stored parameter Pstored, a comparison between said at least one parameter Pitem and said at least one second parameter, a comparison between said at least one second parameter and said at least one stored parameter Pstored, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said correction is indicatable by a member of a group consisting of a visual indication, an audible indication and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said visual indication is overlayable on said at least one image.
It is another object of the present invention to disclose the system as described above, wherein said visual indication is selected from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said audible indication is selected from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
It is another object of the present invention to disclose the system as described above, additionally comprising a restricting mechanism configured to restrict movement of at least one said surgical object.
It is another object of the present invention to disclose the system as described above, wherein said system is additionally configured to provide a warning. It is another object of the present invention to disclose the system as described above, wherein, if said difference between said at least one feature and said at least one predetermined feature is outside a predetermined range, said warning is providable.
It is another object of the present invention to disclose the system as described above, wherein said warning is indicatable by a member of a group consisting of said restricting mechanism, a visual signal, an audible signal, a tactile signal and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said visual signal is selected from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said audible signal is selected from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said tactile signal is selected from a group consisting of a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said tactile signal is applicable to a member of a group consisting of: a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said database is configured to store at least one record of at least one procedure, each said at least one record comprising a member of a group consisting of: said at least one image, said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one stored feature, said skill level, said GOALS score and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said at least one record is selectable based upon at least one identifier.
It is another object of the present invention to disclose the system as described above, wherein said at least one identifier is storable in said database and said at least one identifier is selected from a group consisting of: an identifier of an operator, an identifier of an operating room, a physical characteristic of said operating room, start time of a surgical procedure, end time of a surgical procedure, duration of a surgical procedure, date of a surgical procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a surgical procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said physical characteristic of said operating room is selected from a group consisting of: temperature, humidity, size, time of cleaning, date of cleaning, cleaning procedure, cleaning material, type of lighting and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said physical characteristic of said patient is selected from a group consisting of: age, height, weight, body mass index, health status, medical status, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said outcome of said surgical procedure is selected from a group consisting of: a successful aspect, a partially successful aspect, a partial failure in an aspect, a complete failure in an aspect, and any combination thereof.
It is another object of the present invention to disclose the system as described above, wherein said image is selected from a group consisting of: a 2D image, a 3D image, a panoramic image, a high resolution image, a 3D image reconstructed from at least one 2D image, a 3D stereo image and any combination thereof.
It is another object of the present invention to disclose a method for assessing a skill level for execution of at least one surgical procedure, comprising steps of: a. providing a system for assessing said skill level comprising: i. at least one imaging device configured to provide at least one image in a field of view of a surgical environment; ii. at least one processor in communication with said imaging device; said at least one processor is configured to (i) analyze at least one image from said at least one imaging device, (ii) identify from said at least one image at least one spatial position of at least one item; and, (iii) calculate from said at least one spatial position at least one parameter, Pitem, of said at least one item; and iii. at least one communicable database configured to (i) store said at least one surgical procedure; said at least one surgical procedure is characterized by at least one stored parameter of at least one item, Pstored, said at least one stored parameter Pstored derivable from said at least one spatial position of said item; (ii) real-time store said at least one spatial position parameter, Pitem, of said at least one item; b. acquiring, via said imaging device, at least one said image of a field of view; c. analyzing said at least one image and determining, from said analysis, said at least one spatial position of said at least one item; d. calculating from said at least one image said at least one parameter Pitem; e. storing said at least one parameter Pitem; and f. comparing said at least one parameter Pitem and said at least one stored parameter Pstored, thereby providing said skill level.
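For non-limiting example, comparison step f can be realized as a distance between time-aligned parameter series; the root-mean-square deviation below is one illustrative choice, the text leaving the comparison method open.

```python
import numpy as np

def compare_parameter_series(p_item, p_stored):
    """Root-mean-square deviation between a real-time parameter series
    (Pitem) and a stored reference series (Pstored), truncated to the
    shorter length so that samples are time-aligned."""
    p_item = np.asarray(p_item, dtype=float)
    p_stored = np.asarray(p_stored, dtype=float)
    n = min(len(p_item), len(p_stored))
    return float(np.sqrt(np.mean((p_item[:n] - p_stored[:n]) ** 2)))
```

A small deviation would then map to a higher partial score, for example via a scoring function such as the one sketched earlier.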
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said item from a group consisting of: at least one surgical tool, a light source, a blood vessel, an organ, a nerve, and a ligament, a lesion, a tumor, smoke, fluid flow, bleeding, a fixed point, a critical point, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of deriving said at least one stored parameter Pstored from at least one stored spatial position of said item.
It is another object of the present invention to disclose the method as described above, additionally comprising step of defining at least one feature according to at least one said parameter Pitem.
It is another object of the present invention to disclose the method as described above, additionally comprising step of defining at least one predetermined feature according to at least one said stored parameter Pstored.
It is another object of the present invention to disclose the method as described above, additionally comprising step of generating at least one partial score by comparing said at least one feature to said at least one predetermined feature. It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said at least one feature from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said at least one predetermined feature from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of generating said at least one partial score within a predetermined range.
It is another object of the present invention to disclose the method as described above, additionally comprising step of calculating a total score as a sum of said at least one partial score.
It is another object of the present invention to disclose the method as described above, additionally comprising step of displaying at least one member of a group consisting of said at least one partial score, said total score and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of a smaller difference between said at least one feature and said at least one predetermined feature resulting in said at least one partial score being closer to at least one preferred score.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said at least one preferred score to be a maximum of said predetermined range.
It is another object of the present invention to disclose the method as described above, additionally comprising step of generating a GOALS score from a sum of said at least one partial score.
It is another object of the present invention to disclose the method as described above, additionally comprising step of assessing said skill level from said GOALS score.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said skill level from a group consisting of: poor, moderate, good and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising a step of generating said at least one stored parameter Pstored from at least one stored surgical procedure.
It is another object of the present invention to disclose the method as described above, additionally comprising step of generating said at least one stored surgical procedure from a member of a group consisting of: a procedure executed by an experienced operator, an average of procedures by at least one experienced operator, an average of procedures executed by an operator being assessed, a simulation of a procedure, an average of simulations of a procedure executed by at least one operator, a simulation of a procedure generated by simulation software and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of providing said skill level from a second parameter derivable from a member of a group consisting of: a signal from a sensor, a forward kinematics calculation, an inverse kinematics calculation, a CT image, an MRI image, an X-ray image and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of overlaying on at least a portion of at least one said image of said field of view of said surgical environment a member of a group consisting of: said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one partial score, said GOALS score, said skill level, a suggestion, an instruction, a distance, an angle, an area, a volume, a size scale, information on a medical history of a patient, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of providing communication between said at least one sensor and said at least one item, selected from a group consisting of: mechanical communication, wired communication, wireless communication and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said sensor from a group consisting of an electromagnetic sensor; an ultrasound sensor; an inertial sensor to sense the angular velocity and the acceleration of the tool or other item; an accelerometer, a motion sensor, an IMU, a sensor wearable by an operator, a sensor attachable to a surgical object, an RFID tag attachable to a surgical object, an ultrasound sensor, an infrared sensor, gyro-meter, tachometer, shaft encoder, rotary encoder, strain gauge and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of storing as a function of time a member of a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of configuring said database to store, for any given time t, a member selected from a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of performing said comparison in a manner selected from a group consisting of: in real time, between a stored said at least one parameter Pitem and said at least one stored parameter Pstored, between a stored said at least one parameter Pitem and a stored said at least one second parameter, between a stored said at least one second parameter and said at least one stored parameter Pstored, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said surgical procedure from a group consisting of an identifiable unit, a surgical task, a complete procedure, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of providing said skill level either in real time or off-line.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said at least one parameter Pitem
from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said at least one stored parameter Pstored from a
group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof. It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said at least one second parameter from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of indicating a correction to said surgical procedure.
It is another object of the present invention to disclose the method as described above, additionally comprising step of determining said correction from a member of a group consisting of: a comparison between said at least one parameter Pitem and said at least one stored parameter Pstored, a comparison between said at least one parameter Pitem and said at least one second parameter, a comparison between said at least one second parameter and said at least one stored parameter Pstored, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of indicating said correction by a member of a group consisting of a visual indication, an audible indication and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of overlaying said visual indication on said at least one image.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said visual indication from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said audible indication from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising steps of providing a restricting mechanism and of restricting movement of at least one said surgical object.
It is another object of the present invention to disclose the method as described above, additionally comprising step of configuring said system to provide a warning.
It is another object of the present invention to disclose the method as described above, additionally comprising step of providing said warning if said difference between said at least one feature and said at least one predetermined feature is outside a predetermined range.
It is another object of the present invention to disclose the method as described above, additionally comprising step of indicating said warning by a member of a group consisting of said restricting mechanism, a visual signal, an audible signal, a tactile signal and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said visual signal from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said audible signal from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said tactile signal from a group consisting of a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of applying said tactile signal to a member of a group consisting of: a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of configuring said database to store at least one record of at least one procedure, each said at least one record comprising a member of a group consisting of: said at least one image, said at least one parameter Pitem,
said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one stored feature, said skill level, said GOALS score and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting at least one record based upon at least one identifier.
It is another object of the present invention to disclose the method as described above, additionally comprising steps of storing said at least one identifier in said database and of selecting said at least one identifier from a group consisting of: an identifier of an operator, an identifier of an operating room, a physical characteristic of said operating room, start time of a surgical procedure, end time of a surgical procedure, duration of a surgical procedure, date of a surgical procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a surgical procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said physical characteristic of said operating room from a group consisting of: temperature, humidity, size, time of cleaning, date of cleaning, cleaning procedure, cleaning material, type of lighting and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said physical characteristic of said patient from a group consisting of: age, height, weight, body mass index, health status, medical status, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said outcome of said surgical procedure from a group consisting of: a successful aspect, a partially successful aspect, a partial failure in an aspect, a complete failure in an aspect, and any combination thereof.
It is another object of the present invention to disclose the method as described above, additionally comprising step of selecting said image from a group consisting of: a 2D image, a 3D image, a panoramic image, a high resolution image, a 3D image reconstructed from at least one 2D image, a 3D stereo image and any combination thereof.
BRIEF DESCRIPTION OF THE FIGURES
In order to better understand the invention and its implementation in practice, a plurality of embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, wherein
Fig. 1 schematically illustrates a GOALS score overlaid on an image of a part of a surgical environment;
Fig. 2 schematically illustrates movement of a tip of a surgical tool;
Fig. 3A schematically illustrates speed of the tool tip and Fig. 3B schematically illustrates acceleration of the tool tip during the procedure;
Fig. 4A schematically illustrates speed of the tool tip, Fig. 4B schematically illustrates acceleration of the tool tip and Fig. 4C schematically illustrates jerk of the tool tip during the first part of the procedure;
Fig. 5 schematically illustrates overlaying instructions to an operator on an image of part of a surgical environment; and
Figs. 6, 7 and 8 schematically illustrate an embodiment of a method of automatically assessing or automatically training an operator.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The following description is provided, alongside all chapters of the present invention, so as to enable any person skilled in the art to make use of said invention and sets forth the best modes contemplated by the inventor of carrying out this invention. Various modifications, however, will remain apparent to those skilled in the art, since the generic principles of the present invention have been defined specifically to provide a means and method for automated training and assessment of a surgical operator.
The term "fixed point" hereinafter refers to a point in 3D space which is fixed relative to a known location. The known location can be for non-limiting example, an insertion point, a known location in or on a patient, a known location in an environment around a patient (e.g., an attachment point of a robotic manipulator to an operating table, a hospital bed, or the walls of a room), or a known location in a manipulation system, a practice dummy, or a demonstrator. .
The term "item" hereinafter refers to any identifiable thing within a field of view of an imaging device. An item can be something belonging to a body or something introduced into the body. Items also comprise things such as, for non-limiting example, shrapnel or parasites and non-physical things such as fixed points.
The term "object" hereinafter refers to an item naturally found within a body cavity. Non- limiting examples of an object include a blood vessel, an organ, a nerve, and a ligament, as well as an abnormality such as a lesion and a tumor.
The term "tool" or "surgical tool" hereinafter refers to an item mechanically introducible into a body cavity. Non-limiting examples of a tool include a laparoscope, a light, a suction device, a grasper, a suture material, a needle, and a swab.
The term "surgical object" hereinafter refers to a surgical tool, a robotic manipulator or other maneuvering system configured to manipulate a surgical tool, at least a portion of a light source, and at least a portion of an ablator.
The term "operator" hereinafter refers to any of: a principal operator such as, but not limited to, the surgeon carrying out the main parts of the procedure, an assistant such as, but not limited to, a nurse and an observer such as, but not limited to, a senior surgeon providing instruction to or assessing a principal operator. An identifier for an operator can include, but is not limited to, a name, an ID number, a function and any combination thereof.
The term "identifiable unit" hereinafter refers to an identifiable purposive activity during a surgical operation, typically a minimal identifiable activity. Examples include, but are not limited to, movement of a needle and forceps to the site where a suture is to be made, making a knot in suture thread, activating fluid flow, and making an incision.
The term "surgical task" hereinafter refers to a connected series of at least one identifiable unit which comprises an identifiable activity. Non-limiting examples of surgical tasks that comprise more than one identifiable unit include, but are not limited to, making one suture, removing incised tissue from a surgical field, and clearing debris from a surgical field. A non-limiting example of a surgical task that comprises a single identifiable unit is making an incision. The term "complete procedure" hereinafter refers to a connected series of at least one surgical task which forms an independent unit. For non-limiting example, closing an incision with one or more sutures will be referred to as a complete procedure.
The term "procedure" or "surgical procedure" hereinafter refers to at least a portion of a surgical operation, with the portion of the surgical operation including at least one identifiable unit. For non-limiting example, in increasing order of complexity, a procedure can comprise tying the knot in a suture, making a single suture, or closing an incision with a series of sutures.
The system of the present invention can assist in training or assessing an operator by providing automated training and/or assessment of an operator. The system preferably analyzes at least one image of at least part of a surgical environment and, by means of said analysis, assesses the operator.
The system of the present invention can comprise an advanced artificial intelligence (AI) system running on at least one processor which is capable of analyzing at least part of a scene in a field of view (FOV), as captured in real time by an imaging device and, from the analysis (and, in some embodiments, additional information from other sensors) forming an understanding of what is occurring. From this understanding, the system can derive at least one parameter, such as a metric as disclosed herein, and, from the at least one parameter, it can generate at least one feature, score the at least one feature, and, from the at least one feature, it can autonomically perform at least one of assessment and training of an operator.
An example of assessment is summing of the at least one score for the at least one feature to generate a GOALS-based score.
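For non-limiting example, such a summation can be sketched as follows; the five domain names follow the published GOALS instrument, while the cut-offs mapping a total to the skill levels mentioned herein (poor, moderate, good) are illustrative assumptions.

```python
def goals_assessment(feature_scores):
    """Sum per-feature partial scores into a GOALS-based total and map the
    total to a skill level; the thresholds here are illustrative only."""
    total = sum(feature_scores.values())
    level = "good" if total >= 20 else "moderate" if total >= 12 else "poor"
    return total, level

# Illustrative use with the five classic GOALS domains, each scored 1-5:
total, level = goals_assessment({
    "depth perception": 4,
    "bimanual dexterity": 3,
    "efficiency": 4,
    "tissue handling": 3,
    "autonomy": 5,
})
```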
Although capture of the at least one image of at least part of an FOV is carried out in real time and storage of the at least one image is preferably carried out in real time, and although a GOALS score can be generated in real time, at least one of the steps in the calculation of a GOALS score can be carried out off-line.
The analysis can comprise, for non-limiting example, at least one of:
• An objective skills assessment system for evaluating the quality of a procedure, either as part of a training system or as part of an evaluation of an operator. A procedure, either in real time or recorded, can be observed, either in real time or off-line, to determine the skill level of the operator.
o Recorded procedures can be compared with outcomes, so as to determine which variants of a procedure have the better outcomes, thereby improving training and skill levels [i.e., Global Operative Assessment of Laparoscopic Skills (GOALS)].
o In real time, feedback can be given to an operator by the intelligent system and, in some embodiments, additionally by a human advisor (Gesture/Task Classification). The intelligent system advisor, the human advisor, if present, or both, can be local or remote.
• Information can be aggregated from a number of videos of robotic and/or laparoscopic procedures. These data can be used for:
o Benchmarking - determining best practice.
o Training effectivity - showing examples of good practice vs. bad practice.
o Skills progress during training - determining improvements in practice over time and identifying areas where progress is not adequate.
o Standardization - ensuring that best-practice regimes are followed.
o Repeatability - ensuring that best-practice regimes are consistently achieved.
• A recorded procedure can be edited so that a shorter portion, typically a surgical task or an identifiable unit, can be stored, viewed or both. Viewing of a shorter portion can be useful to highlight good practice and to indicate areas of suboptimal practice.
A stored record of a procedure, including an identifiable unit, a surgical task, a complete procedure and a whole operation, preferably in 3D, can become part of at least one "big data" analysis of any combination of the above, for an individual operator, for at least a portion of a hospital, for at least a portion of a medical center and any combination thereof.
A record of a procedure can be tagged with one or more of: an identifier of an operator, a type of procedure, a previous procedure, a parameter, a feature, a GOALS score, a skill level, an identifier for an operating room, a physical characteristic of an operating room (e.g., temperature, humidity, time and date of cleaning, cleaning procedure, cleaning materials, type of lighting), a date of a procedure, a time of a procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.

From analysis of at least one image, preferably comprising a spatiotemporal 3-dimensional surgical database, the system can calculate at least one performance metric and/or at least one parameter, derive at least one feature, and generate at least one score. The result of the analysis can be an assessment of the skill level of an operator, determination of training for an operator, advice to an operator as to at least one component of at least one procedure, a warning to an operator, at least one outcome of at least one procedure, and any combination thereof. The outcome can be selected from a group consisting of negligence, malpractice, surgical activity failure, and any combination thereof.
At least the database and preferably both a tracking subsystem and the database are in communication with at least one processor configured to analyze the spatiotemporal 3-dimensional surgical database. From the analysis, at least one first parameter of at least one item in a field of view is determined. The at least one first parameter can be a 2D position, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 2D orientation, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 3D position of at least a portion of the at least one item; a 3D orientation of at least a portion of the at least one item; a 2D projection of a 3D position of at least a portion of the at least one item, a velocity of at least a portion of the at least one item, an acceleration of at least a portion of the at least one item, an angle of at least a portion of the at least one item, altering the state of at least a portion of the at least one item, and any combination thereof.
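For non-limiting example, velocity and acceleration parameters can be derived from a time series of 3D positions of an item by finite differencing; the sketch below (Python, with synthetic data standing in for tracked positions) shows one assumed way to do this:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 201)                      # timestamps [s]
pos = np.stack([np.sin(t), np.cos(t), 0.1 * t], 1)  # synthetic 3D tip positions [m]

vel = np.gradient(pos, t, axis=0)      # finite-difference velocity [m/s]
acc = np.gradient(vel, t, axis=0)      # acceleration [m/s^2]
speed = np.linalg.norm(vel, axis=1)    # scalar speed, usable as a motion metric
print(speed.max(), np.abs(acc).max())
```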
The movement of the at least one item can be selected from a group consisting of a maneuver of an item carried out by a robotic manipulator connected to the item, a movement of part of an item, a change in state of the item, and any combination thereof. Non-limiting examples of movement of an item include displacing it, rotating it, zooming it, or, for an item with at least one bendable section, changing its articulation. Non-limiting examples of movements of part of the item are opening or closing a grasper or retractor, or operating a suturing mechanism. Non-limiting examples of a change in state of an item include: altering a lighting level, altering an amount of suction, altering an amount of fluid flow, altering a heating level in an ablator, altering a speed of lateral movement of at least one blade in a pair of scissors or other cutter, altering an amount of defogging, or altering an amount of smoke removal.
Procedures can be stored in a database in communication with the processor; images can also be stored in a database. The stored procedures can be manually-executed procedures, automatically-executed procedures, autonomically-executed procedures and any combination thereof.
The at least one first parameter can then be compared to at least one stored second parameter of at least one item. The second parameter can be a 2D position, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 2D orientation, for non-limiting example, in the plane of the field of view, of at least a portion of the at least one item; a 3D position of at least a portion of the at least one item; a 3D orientation of at least a portion of the at least one item; a 2D projection of a 3D position of at least a portion of the at least one item, a velocity of at least a portion of the at least one item, an acceleration of at least a portion of the at least one item, an angle of at least a portion of the at least one item, a state of at least a portion of said at least one item, and any combination thereof.
The stored procedure can be a procedure known in the art. It can be generated from a procedure executed by an experienced operator, from an average of procedures executed by at least one experienced operator, from an average of procedures executed by the operator being assessed, from a simulation of the procedure executed by at least one operator, from an average of such simulations, from a simulation of a procedure generated by simulation software, and any combination thereof. A non-limiting example of a procedure generated by a combination of methods is a series of sutures to close an incision, where the movement of the tools between sutures is generated by simulation software, using the known path of the incision to generate the optimum tool movements between sutures, and the movement of the tools during a suture is generated from an average of the movements made by three experienced surgeons when carrying out suturing.
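For non-limiting example, a stored second parameter can be built by averaging several experienced operators' trajectories, and a first parameter compared against it; in the sketch below (Python), the resampling scheme, the random stand-in trajectories and the RMS comparison are illustrative assumptions rather than the disclosed method:

```python
import numpy as np

def resample(traj: np.ndarray, n: int = 100) -> np.ndarray:
    """Resample an (m, 3) trajectory to n points by linear interpolation."""
    s = np.linspace(0, 1, len(traj))
    s_new = np.linspace(0, 1, n)
    return np.stack([np.interp(s_new, s, traj[:, k]) for k in range(3)], axis=1)

# Stand-ins for three experienced operators' recorded tool paths.
experts = [np.cumsum(np.random.randn(50, 3) * 0.01, axis=0) for _ in range(3)]
reference = np.mean([resample(e) for e in experts], axis=0)   # stored second parameter

trainee = np.cumsum(np.random.randn(80, 3) * 0.01, axis=0)    # measured first parameter
rms = np.sqrt(np.mean(np.sum((resample(trainee) - reference) ** 2, axis=1)))
print(f"RMS deviation from reference path: {rms:.4f}")
```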
Parameters, both first and second parameters, can be constants or functions of time. A plurality of parameters can also be combined into a single parameter, a "set parameter". A set parameter can comprise a plurality of parameters at a specific time, at a specific location or both; it can comprise the same parameter at different times; and any combination thereof. A non-limiting example of a plurality of parameters at a specific time comprises the position, orientation and speed of a tool at the beginning of a procedure. A non-limiting example of a parameter at different times is the orientation of a grasper at the time a needle starts penetration of tissue, the orientation of the grasper after the needle exits the tissue, the orientation of the grasper at the time the grasper grasps the suture thread, and the orientation of the grasper at the time the suture thread is cut.

The state of a surgical object includes general properties such as its position, orientation, speed, and acceleration. It also includes surgical object-specific properties, such as whether a gripper is open or closed. There are various mechanisms by which a control system can determine these properties. Some of these mechanisms are described hereinbelow.
A stored record of a procedure, which can be an identifiable unit, a surgical task, a complete procedure, a whole operation, and any combination thereof, can be used for training purposes. At least one outcome is known for a stored procedure, so at least one procedure with at least one best outcome can be shown to a student, allowing the student to observe an example of best practice. Augmented reality can be used to enhance a student's learning experience.
A stored record can be tagged with at least one identifier, to enhance and simplify searching at least one library or database comprising at least one stored procedure. For non-limiting example, a procedure can be tagged with an identifier of at least one operator, with a type of procedure, with a characteristic of a patient and any combination thereof. For non-limiting example, this could be used to determine the quality of outcome for appendectomies performed by Dr. Jones and, therefore, Dr. Jones' skill in performing appendectomies.
Tagging, supplying an identifier, can be manual or automatic. For non-limiting example, typically, an identifier of an operator will be entered manually. In another non-limiting example, a critical point or a fixed point can be tagged manually or automatically. For non-limiting example, manual tagging can be by an operator indicating, by word, by gesture, by touching a touchscreen and any combination thereof, that a given point, such as the current position of a tool, is to be tagged as a critical point or a fixed point. For non-limiting example, automatic tagging can occur when a system identifies a point as a critical point or a fixed point.
The identifier can be selected from a group consisting of: an identifier of an operator, an identifier of an operating room, a physical characteristic of said operating room, start time of a surgical procedure, end time of a surgical procedure, duration of a surgical procedure, date of a surgical procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a surgical procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.
A physical characteristic of an operating room can be selected from a group consisting of: temperature, humidity, size, time of cleaning, date of cleaning, cleaning procedure, cleaning material, type of lighting and any combination thereof. A physical characteristic of a patient can be selected from a group consisting of: age, height, weight, body mass index, health status, medical status, and any combination thereof.
In some embodiments, a surgical database analysis can compile historical records of previous surgical events to generate predicted success rates of an operator.
In some embodiments, a surgical database analysis can compile historical records of previous surgical events to generate predicted success rates of a type of surgical event.
In some embodiments, a statistical analysis is a percentage of success and failure for an individual or type of surgical event.
Monitoring, both of normal motion and of changes in performance, can be used for safety monitoring. A change in performance related to the equipment can be flagged up and a procedure stopped or changed, or a correction applied to at least one movement of at least one surgical object to maintain a procedure within limits of safety. Applying a correction can be done automatically or upon request by an operator. If a correction is to be applied upon command, the system will provide an indication that such a correction needs to be applied. The indication can be a visual signal, an audible signal, a tactile signal, and any combination thereof. In some embodiments, a warning, visual, audible, tactile and any combination thereof, can be provided when an automatic correction is applied.
The visual signal can be selected from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a letter and any combination thereof.
The audible signal can be selected from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
The tactile signal can be selected from a group consisting of a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof.
The tactile signal can be applied to a member of a group consisting of: a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.
Similarly, an operator's performance can be monitored and warnings can be flagged up if the operator's performance falls below a predetermined level of safety.

The outcome of a procedure can have more than one aspect. For non-limiting example, an outcome of a surgical procedure can be a successful aspect, a partially successful aspect, a partial failure in an aspect, a complete failure in an aspect, and any combination thereof. Non-limiting examples of aspects of an outcome include: amount of bleeding after completion of a procedure, amount of bleeding during a procedure, return of an abnormality such as a tumor, speed of healing, adhesions, patient discomfort and any combination thereof.

For these exemplary aspects of an outcome, for each aspect, a successful aspect would constitute: minimal bleeding after completion of a procedure, minimal bleeding during a procedure, no return of the abnormality, rapid healing, no adhesions and minimal patient discomfort. For each aspect, a partially successful aspect would constitute: some bleeding after completion of a procedure, some bleeding during a procedure, minimal return of the abnormality, moderately rapid healing, a few small adhesions and some patient discomfort. For each aspect, a partial failure in the aspect would constitute: significant bleeding after completion of a procedure, significant bleeding during a procedure, return of a significant amount of the abnormality, slow healing, significant adhesions and significant patient discomfort. For each aspect, complete failure in the aspect would constitute: serious or life-threatening bleeding after completion of a procedure, serious or life-threatening bleeding during a procedure, rapid return of the abnormality, very slow healing or failure to heal, serious adhesions and great patient discomfort.

It is clear that an outcome can include any combination of aspects. For the exemplary aspects above, a procedure could have minimal bleeding, both during and after the procedure (successful), with a few adhesions (partial success), but significant patient discomfort (partial failure) and rapid return of the abnormality (complete failure).
In preferred embodiments, the system can be in communication with other devices or systems. In some embodiments, for non-limiting example, the AI-based control software can control surgical objects such as, but not limited to, a robotic manipulator, an endoscope, a laparoscope, a surgical tool and any combination thereof. In some embodiments, the system can be in communication with an advanced imaging system, it can function as part of an integrated operating room, and any combination thereof.
In some embodiments, the system, which comprises AI-based software, can have full connectivity with a member of a group consisting of: digital documentation, PACS, navigation, other health IT systems, and any combination thereof.
In a GOALS analysis, an operator's abilities are assessed in general areas of expertise (also referred to as features). Typically, in the prior art, the assessment is performed by skilled practitioners who make the assessment from a video of the procedure, typically showing the same scene as viewed by the operator.
In the system of the present invention, the assessment is performed automatically and autonomously by the system, from analysis of at least one image of at least a portion of a field of view.
The procedure can be an identifiable unit (such as, but not limited to, approach of at least one tool to the site of a suture), a single activity (a surgical task), for non-limiting example, making a single suture or making a single incision, or it can include a plurality of activities (e.g., a complete procedure), for non-limiting example, closing an incision with a plurality of sutures or executing an entire cholecystectomy, from the first incision through dissection of the tissue, to suturing and final removal of the tools.
A GOALS assessment can be used for assessing the quality of an operator, for example, for accreditation, it can be used for training and any combination thereof. In the first case, the operator receives accreditation if the final score is greater than a predetermined value. In the second case, the system will indicate to an operator where improvements can be made. The indicating can be real-time, during the procedure, or off-line, from recorded videos. In some embodiments, indications to the operator can additionally be made by a skilled operator.
In an exemplary GOALS analysis for training, six such areas of expertise (also referred to as features) are examined. These are:
• Depth perception - ability to accurately direct tools when using a screen image
• Bimanual dexterity - ability to effectively use both hands to execute a two-handed procedure (e.g., suturing).
• Efficiency - ability to minimize wasted time and/or effort.
• Tissue handling - ability to handle tissue without unnecessary damage.
• Autonomy - ability to carry out a procedure independently.
• Difficulty - ability to cope with conditions in a patient which make a procedure more difficult.
Table I gives an example of how, for training purposes, a practitioner's abilities can be scored in a GOALS analysis.
[Table I appears as an image in the original document and is not reproduced here.]
In an exemplary GOALS analysis for accreditation, four such features (areas of expertise) can be examined. These are:
• Progress of the operation
• Display of the operating field
• Operative techniques
• Suturing and knot-tying
Table II gives an example of how, for accreditation purposes, a practitioner's abilities can be scored in a GOALS analysis.
[Table II appears as an image in the original document and is not reproduced here.]
In assessing the operator's skill using these criteria, the manual dexterity of the operator is not independently assessed.
It can be seen that there is considerable overlap between the exemplary training assessment and the exemplary accreditation assessment, with the differences mainly due to the fact that the accreditation assessment evaluates more advanced and more complex skills, such as leadership ability, than the training assessment. In a typical GOALS-oriented assessment, whether used for accreditation or for training, one or more parameters are assessed. These parameters, together, comprise a feature. Each parameter can be scored; the combined parameter scores form a feature score. All of the feature scores can be combined to form an assessment of skill level. It should be noted that the individual scores for the features can be used to indicate areas of weakness or strength in an operator.
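For non-limiting example, the aggregation of parameter scores into feature scores, and of feature scores into a total, with low feature scores flagging weak areas, can be sketched as follows (Python; the parameter names, the averaging rule and the threshold are illustrative assumptions, not the disclosed rubric):

```python
# Hypothetical parameter scores (0-5) grouped by feature.
parameter_scores = {
    "knot_tying": {"total_time": 4, "idle_time": 3, "motion_smoothness": 2},
    "suturing":   {"path_length": 5, "bimanual_dexterity": 4},
}

# Feature score = mean of its parameter scores; total = sum of feature scores.
feature_scores = {f: sum(p.values()) / len(p) for f, p in parameter_scores.items()}
total = sum(feature_scores.values())
weak = [f for f, s in feature_scores.items() if s < 3.5]  # assumed weakness threshold
print(feature_scores, total, "needs work:", weak)
```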
A non-limiting example of parameters forming a feature is given by the knot-tying feature, which can involve parameters such as total time spent tying a knot, idle time during knot tying, search time, approach time taken to reach the site of the knot, speed of tool movement during knot tying, motion smoothness, bimanual dexterity, the length of the path followed by at least one tool, and the distance efficiency, which is a comparison of at least one actual path length with at least one comparable optimal path length.
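For non-limiting example, two of the listed knot-tying parameters, path length and distance efficiency, can be computed from sampled tool-tip positions as in the sketch below (Python; the sample points and the straight-line reference path are illustrative):

```python
import numpy as np

def path_length(points: np.ndarray) -> float:
    """Sum of segment lengths along an (n, 3) polyline of tip positions."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

actual = np.array([[0, 0, 0], [1, 0.5, 0], [2, -0.3, 0.2], [3, 0, 0]], float)
optimal = np.array([[0, 0, 0], [3, 0, 0]], float)  # straight-line reference

# Distance efficiency: optimal length / actual length; 1.0 means perfectly direct.
efficiency = path_length(optimal) / path_length(actual)
print(f"distance efficiency: {efficiency:.2f}")
```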
Fig. 1 shows an exemplary display of the features of an exemplary GOALS analysis, with the part scores for each feature (110) and the total score for the assessment (120). These are shown overlaid on a display (100) of a part of a surgical environment.
Other parameters which can be used in an assessment include, but are not limited to: tissue damage, pain caused to a patient, effectiveness of a procedure in rectifying a medical condition, long-term post-operative pain to the patient and any combination thereof.
In the prior art, an assessment would be carried out by at least one skilled professional. In the present system, an assessment can be carried out automatically, preferably using an artificial intelligence (AI)-based system. At least one movement of at least one surgical object, manipulated by an operator, manipulated by the system and any combination thereof, a position of the surgical object, a force exerted by (and on) a surgical object and any combination thereof can be determined, from analysis of at least one image of at least a part of a surgical environment, by at least one tracking subsystem, by at least one sensor, and any combination thereof. Image analysis can be used to determine the location of at least one patient feature such as, but not limited to, at least a portion of: an organ, a blood vessel, a nerve, a lesion, a tumor, tissue, a bone, a ligament and any combination thereof. In some embodiments, tool identification and tracking and image analysis can be used to determine the location of at least one substantially non-moving surgical object in the surgical environment such as, but not limited to, a surgical tool, such as a swab; a non-tool item in a surgical environment, such as a glass shard or a bomb fragment; and any combination thereof.

The totality of the data on location, orientation and movement of surgical objects, non-tool items and patient features provides spatiotemporal 3-dimensional data which characterize the surgical environment and at least one item within it. The spatiotemporal 3-dimensional data can be stored in a database for later analysis. Other measurable and storable data include: grasping force, torsion about a tool axis, Cartesian force, and any combination of these and the positions, orientations and movements disclosed above. The system can also record at least one image of the surgical environment, up to a plurality of images encompassing an entire procedure. The plurality of images can be synchronized with force and position data.
In preferred embodiments of the system, sufficient depth information is provided so that the position and orientation of at least one item in the field of view can be determined in true 3D, enabling accurate determination of distance between two items, relative angle between two items, angle between three items, area of at least one item, area between at least two items, volume of at least one item, volume encompassed by at least two items, and any combination thereof.
The 3D position and orientation of an item can be determined using data from multiple imaging devices, from at least one sensor attached to at least one surgical object, from at least one sensor attached to at least one manipulator, from "dead reckoning", from image analysis and any combination thereof.
From an accurate determination of distance and angle in 3D, an accurate determination can be made as to whether a surgical object's position, orientation, speed, acceleration, smoothness of motion and other parameters are correct. It is also possible to determine whether a surgical object is accurately following a desired path, whether a collision can occur between two items, and whether the distance between two items is small enough that one or both can be activated.
An item that can be activated or deactivated based on distance information can include, but is not limited to, an ablator, a gripper, a fluid source, a light source, a pair of scissors, and any combination thereof.
For non-limiting example, activation of an ablator is best delayed until the ablator is close to the tissue to be ablated so that heating does not occur away from the tissue to be ablated, to minimize the possibility of damage to other tissue. With 3D position information, the ablator can be automatically activated when a distance between an ablator and the tissue to be ablated is less than a predetermined distance, so that there is no unnecessary heating of fluid or tissue away from the tissue to be ablated and so that ablation is carried out efficiently.
If only 2D distance information is available, an ablator could be activated when the 2D distance was small, but the distance perpendicular to the 2D plane (upward) was still large. In this case, the operator (or the system, for autonomic ablation) could be ignorant of this until it was observed that the ablator was heating fluid rather than ablating tissue. The operator (or the system, for autonomic ablation) would then have to move the ablator downward until ablation could occur, but would not have, nor could be given, information on how far downward to move. At this point, either the ablator could be deactivated and moved until it contacted the tissue, or the ablator could be left activated until ablation began. In either case, unwanted damage to the tissue is likely.
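For non-limiting example, the 3D distance gating described above can be sketched as follows (Python; the 3 mm threshold and the coordinates are illustrative assumptions). The example shows a case where a 2D projection alone would wrongly permit activation while the full 3D distance does not:

```python
import numpy as np

# Hypothetical activation threshold; a real system would set this per tool and tissue.
ACTIVATION_DISTANCE_M = 0.003  # 3 mm

def ablator_enabled(tool_tip: np.ndarray, target: np.ndarray) -> bool:
    """Enable only when the full 3D distance (not a 2D projection) is small enough."""
    return float(np.linalg.norm(tool_tip - target)) < ACTIVATION_DISTANCE_M

tip = np.array([0.010, 0.020, 0.015])     # tool tip position [m]
tissue = np.array([0.010, 0.020, 0.010])  # target tissue position [m]
# The 2D (x, y) distance is zero, but the tip is still 5 mm above the tissue,
# so activation is correctly withheld.
print(ablator_enabled(tip, tissue))  # False
```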
In some embodiments of an AI-based training or assessment system where the motion of at least one surgical object is tracked, Table III gives a non-limiting example of metrics which can be assessed. In a given embodiment, any combination of metrics can be used.
Table III Motion-based Metrics for Assessing an Operator's Skill
[Table III appears as an image in the original document and is not reproduced here.]
Fig. 2 shows, schematically, the 3D movements, over time, of the tip of a surgical tool during a procedure. Fig. 3A shows the speed of the surgical tool tip during the procedure, while Fig. 3B shows the acceleration of the surgical tool tip during the procedure. The speed, acceleration and jerk for the first part of the procedure are shown in Figs. 4A, B and C, respectively. From these, the metrics of Table IV can be calculated.
Table IV shows exemplary means of calculating the metrics of Table III.
[Table IV appears as an image in the original document and is not reproduced here.]
In Table IV, t_start is the start time of the procedure, t_end is the end time, the motion has amplitude R and position vector r(t), T_left and T_right are, respectively, the time spent using the left hand and the time spent using the right hand, and F is a measured force.

It should be noted that, because time is easy to measure, in the prior art, task completion time is typically used as a performance metric. It is known that, for any activity, greater experience leads to faster performance. However, the following problems can arise when time is used as a performance metric:
1. Performing a task quickly typically means that it can be performed without external guidance. It does not necessarily mean that the task is being performed correctly.
2. A clear trade-off exists between speed and accuracy. Hence, performing a task faster is not necessarily better.
3. Time is not necessarily a measure of ability; different surgeons can work at different speeds but attain the same quality of outcome.
4. Depending on the specialty, working too fast can be a detriment to the quality of outcome. This is especially true for surgeons, such as thoracic surgeons, who work close to critical anatomic features.
5. Training for time teaches trainees to focus on working fast, rather than working carefully and accurately.
6. An overall time metric can be influenced by other aspects of the training scenario, for example, distracting factors or other differences between a practice scenario and an assessment scenario.
Nevertheless, task completion time can be useful as a measure of trainee skill level when combined with other metrics.
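For non-limiting example, completion time can be combined with position-based metrics such as the number of speed peaks and a jerk-based smoothness value; the sketch below (Python, on synthetic motion data) uses illustrative formulas of the kind listed in Table III:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 501)  # timestamps [s]
pos = np.stack([t + 0.05 * np.sin(8 * t), 0.2 * np.sin(t), np.zeros_like(t)], axis=1)

vel = np.gradient(pos, t, axis=0)
speed = np.linalg.norm(vel, axis=1)
jerk = np.gradient(np.gradient(vel, t, axis=0), t, axis=0)

completion_time = t[-1] - t[0]
# Local maxima of the speed profile; many peaks suggest hesitant, segmented motion.
speed_peaks = int(np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:])))
# Integrated squared jerk: lower values indicate smoother motion.
smoothness = float(np.sum(np.linalg.norm(jerk, axis=1) ** 2) * (t[1] - t[0]))
print(completion_time, speed_peaks, smoothness)
```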
In one example, significant correlation was found between experience and some of the position-based metrics and most of the force-based metrics, with the correlations being weaker for simpler tasks and stronger for more complex tasks. The strongest correlation for the position-based metrics was with the speed-peaks and jerk metrics. The strongest correlation for the force-based metrics was found for the integrals and derivatives of the forces.
In some embodiments, from at least one of a location and a movement, the system can determine a type of procedure being executed, such as, but not limited to, suturing, making an incision, clearing smoke or fluid from a surgical field, ablating, etc. Preferably, the type of procedure will be stored as a searchable identifier for the procedure. Determination of type of procedure can occur in real time, offline and any combination thereof. In preferred embodiments, the type of procedure is determined automatically and autonomously by the system. In less-preferred embodiments, the type of procedure is input to the system by an operator.
In preferred embodiments, the system can determine an optimal variant of a procedure. In less-preferred embodiments, an optimal variant of a procedure is input into the system.
If the assessment is carried out off-line, in some embodiments, the outcome of the procedure can be input into the system, and the system can then assess the procedure in light of the outcome to determine, for non-limiting example, whether a different choice of procedure could have improved an outcome, which error(s) adversely affected the outcome and any combination thereof.
In some embodiments, the system compares the procedure as carried out with an optimal procedure, and indicates to the operator being trained or assessed such items as: a preferred path for an incision, deviations from an optimal path, an optimal pressure on tissue for a tool, a more optimal pressure if a less-than optimal pressure is being used, an optimal or more optimal position for lighting, suction or other auxiliary equipment, an optimal or more optimal temperature for ablation equipment, an optimal forward speed for a cutting instrument (in the direction of elongation of the cut), an optimal speed for lateral movement of a cutting blade (e.g., lateral movements of the blades of a pair of scissors), an optimal pressure for a cutting instrument, warnings such as, but not limited to, too close an approach to tissue, too deep an incision, too shallow an incision, too much or too little pressure being used, and any combination thereof.
An image captured by an imaging device can be a 2D image, a 3D image, a panoramic image, a high resolution image, a 3D image reconstructed from at least one 2D image, a stereo image, and any combination thereof.
Additional information can be obtained from an accelerometer, a motion sensor, an IMU, a sensor wearable by an operator, a sensor attachable to an item, an RFID tag attachable to an item, an ultrasound sensor, an infrared sensor, a CT image, an MRI image, an X-ray image, a gyro-meter, a tachometer, a shaft encoder, a rotary encoder, a strain gauge and any combination thereof.
The sensor can be in mechanical communication with a surgical object, in electrical communication, and any combination thereof. Electrical communication can be wired communication, wireless communication and any combination thereof.
The state of a surgical tool can include general properties such as its position, orientation, speed, and acceleration. It can also include tool-specific properties, such as whether a gripper is open or closed. There are various mechanisms by which these properties can be determined. Some of these mechanisms are described hereinbelow.
The state of a surgical object can include, but is not limited to, a lighting level, an amount of suction, an amount of fluid flow, a heating level in an ablator, a speed of lateral movement of at least one blade of a pair of scissors, a speed of movement of at least one blade of a cutter, an amount of defogging, an amount of smoke removal and any combination thereof.
Image-based tracking can identify at least one property of at least one surgical object, including a surgical object attached to a robotic manipulator, a surgical object controlled directly by an operator and a static surgical object. Image-based tracking can also track items in an environment besides the surgical objects, as described herein. For non-limiting example, image-based tracking can be used to avoid an obstacle, to provide an instruction (e.g., to an operator) to avoid an obstacle, to focus (for example, an endoscope) on a point of interest, to provide an instruction (e.g., to an operator) to focus on a point of interest, and any combination thereof. In some embodiments, the system can evaluate how an operator interacts with at least one item to ascertain the operator's intent or to identify a procedure that is currently being performed.
Preferably, at least one item can be tracked by identifying, from at least one image which includes the item, at least one inherent, distinguishing characteristic of the item. For example, this could include the shape, color, texture, and movement of an item. To enhance tracking, an object can be modified to make it more recognizable in an image. For instance, a colored marker, a tracking pattern and any combination thereof can be affixed to at least one surgical object to aid in detection by a computer algorithm, to aid in identification by an operator and any combination thereof.
An imaging device can operate in the infrared (IR), in the visible, in the UV, and any combination thereof.
Surgical object position and orientation can be determined, for example, via geometrical equations using analysis of a projection of a surgical object into an image plane or via at least one Bayesian classifier for detecting a surgical object in at least one image; the search for a surgical object can be further restricted by means of computing the projection of a surgical object's insertion point into the body of a patient.
Determination of surgical object position and orientation in an image-based system can be rendered more difficult by an ambiguous image structure, an occlusion caused by blockage of the line of sight (e.g., by at least one other surgical object), blood, an organ, smoke caused by electro-dissection, and any combination thereof. Particle filtering in the Hough space can be used to improve tracking in the presence of smoke, occlusion, motion blurring and any combination thereof.
It should be noted that a displayed image can be enhanced by a member of an enhancement group consisting of: a colored area, a patterned area, an overlay and any combination thereof. The member of the enhancement group can indicate the presence of, enhance recognizability of, and any combination thereof, at least one item selected from a group consisting of a blood vessel, an organ, a nerve, a lesion, a tool, blood, smoke, and any combination thereof. An image can also be enhanced by an assessment score, by a suggestion, by an instruction, by a distance, by an angle, by an area, by a volume between items, by a volume, by a size scale, by information from a medical history, and any combination thereof. A suggestion, an instruction and any combination thereof can be visual, audible and any combination thereof. A visual overlay can include color, a line, an area, a volume, an arrow, a pattern, an image, and any combination thereof. An audible overlay can include a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word, and any combination thereof. Words can be independent or can comprise a soundtrack to a plurality of images, such as a video.
Fig. 5 shows an example of visual advice to a trainee, for an incision in a liver (510). The optimal path for the incision (520) is shown by a dashed line, which approximately bisects the right lobe. The dotted line (530) indicates the actual path of the incision, which is not accurately following the optimal path (520). In this exemplary embodiment, the system provides instructions to the operator, as shown by the heavy arrow (540) which is overlaid on the screen image and indicates to the operator the direction in which the scalpel should move in order to return to the optimal path. In some embodiments, the optimal path (520) can be overlaid on the screen, so that an operator need only follow an indicated marking to follow an optimal path. In some embodiments, similarly, a distance between sutures, or a location for a next suture, can be indicated. A marking can be visual, audible and any combination thereof. A visual marking can be a line, an arrow, a pattern, a color change, and any combination thereof. An audible indicator can be a voice (e.g., left, right, up, down, forward, back, harder, softer, more light, ablate, cut, suture, etc.), a predetermined sound pattern (rising pitch for left, lowering pitch for right, etc.), and any combination thereof.
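For non-limiting example, the direction of the correction arrow (540) can be computed as the unit vector from the current tool position toward the nearest point of the optimal path; the sketch below (Python, in 2D screen coordinates) is an assumed formulation, not necessarily how the system derives the overlay:

```python
import numpy as np

def correction_vector(current: np.ndarray, optimal_path: np.ndarray) -> np.ndarray:
    """Unit vector from the current position to the nearest sampled path point."""
    dists = np.linalg.norm(optimal_path - current, axis=1)
    nearest = optimal_path[int(np.argmin(dists))]
    v = nearest - current
    n = np.linalg.norm(v)
    return v / n if n > 1e-9 else v  # direction in which to draw the overlay arrow

path = np.stack([np.linspace(0, 10, 50), np.zeros(50)], axis=1)  # sampled optimal path
print(correction_vector(np.array([4.0, 1.5]), path))             # points back toward the path
```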
Figs. 6-8 show an exemplary embodiment of a method of automatically assessing or training an operator. In the method (Fig. 6), the first step is to acquire at least one image of a field of view of an imaging device (610). The image is analyzed (620), as described herein, to identify, in 3D, position, orientation and movement of at least one surgical object and preferably all of the items in the field of view, and the relationship of at least one surgical object to the surgical environment (i.e., the organs, blood vessels, nerves, etc. of the patient) and to other surgical objects in the surgical environment. In some embodiments, the force exerted by or on at least one surgical object can also be acquired. From at least one of the position, orientation, movement, force, a relationship between at least two items, and any combination thereof, at least one of the metrics described herein can be determined. From the at least one metric and from the procedure being executed, with the procedure being determinable either from user input or from analysis of at least one image, at least one actual metric of at least one surgical object can be compared (630) with at least one stored metric for the same at least one surgical object in the same procedure, with the stored at least one metric providing an optimum metric in the procedure. If (640) the at least one actual metric is substantially the same as the at least one stored metric, then the actual movement is well executed (circle 2) and the method executes the steps associated with a well-executed movement (Fig. 7). If (640) the actual at least one metric is not substantially the same as the stored at least one metric, then the actual movement is not well executed (circle 3) and the method executes the steps associated with an ill-executed movement (Fig. 8).
Fig. 7 shows an exemplary embodiment of the method if the at least one actual metric is substantially the same as the at least one stored metric. If the at least one actual metric is substantially the same as the at least one stored metric, then the procedure is, at this point, well-executed (710). If (720) the system is assessing an operator, then (730) the assessment will show that, at this point in the procedure, the operator has an assessment of "good surgical technique". After assessment, the system checks (740) whether the procedure is complete. If it is not complete, the system (circle 1) acquires at least one new image and repeats the cycle. If the procedure being assessed is complete, the system creates (750) a cumulative assessment, such as a GOALS score, for the procedure just completed. If the surgical intervention comprises a number of procedures, at this point (not shown in Figs. 6-7) either a next procedure is identified and the cycle repeats, or, if the surgical intervention is complete, an overall assessment is made and the system terminates.
If (720) the system is being used to train an operator, then no advice to the operator is needed at this point, so the system checks (740) whether the procedure is complete. If it is not complete, the system (circle 1) acquires at least one new image and repeats the cycle. If (740) the procedure is complete, then (not shown in Figs. 6-7) either a next procedure is identified and the cycle repeats, or, if the surgical intervention is complete, the system terminates.
Fig. 8 shows an exemplary embodiment of the method if the at least one actual metric is not substantially the same as the stored at least one metric. If the at least one actual metric is not substantially the same as the at least one stored metric, then, at this point, the procedure is not well-executed (810). If (820) the system is assessing an operator, then (830) the assessment will show that, at this point in the procedure, the operator has an assessment of "poor surgical technique". After assessment, the system checks (850) whether the procedure is complete. If it is not complete, the system (circle 1) acquires at least one new image and repeats the cycle. If (850) the procedure is complete, at this point (not shown in Figs. 6, 8) either a next procedure is identified and the cycle repeats, or, if the surgical intervention is complete, the system terminates.
If (820) the system is training an operator, then at least one indication can be given (840) for a means of correcting an error and bringing the procedure closer to or back to a more optimum procedure. An indication can be visual, audible or both, as disclosed hereinabove. As disclosed hereinabove, in preferred embodiments, an indication can be provided as an overlay on a display.
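For non-limiting example, the overall flow of Figs. 6-8 can be sketched as the loop below (Python). The helper functions, the metric dictionaries and the tolerance test standing in for "substantially the same" are all illustrative assumptions:

```python
def analyze(image):
    """Placeholder for steps 610/620: image analysis yielding a metric dict."""
    return image

def indicate_correction(actual, stored):
    """Placeholder for step 840: advise the trainee how to correct the movement."""
    print("advice: adjust toward", stored)

def run_procedure(images, stored_metrics, tolerance=0.1, assessing=True, training=True):
    part_scores = []
    for image in images:                          # acquire (or retrieve) image (610)
        actual = analyze(image)                   # derive actual metrics (620)
        well_executed = all(                      # compare to stored metrics (630/640)
            abs(actual[k] - v) <= tolerance * abs(v) for k, v in stored_metrics.items()
        )
        if assessing:                             # Fig. 7 / Fig. 8 assessment branch
            part_scores.append(1.0 if well_executed else 0.0)
        if training and not well_executed:        # Fig. 8 training branch (840)
            indicate_correction(actual, stored_metrics)
    return sum(part_scores)                       # cumulative assessment (750)

print(run_procedure([{"speed": 1.0}, {"speed": 1.4}], {"speed": 1.0}))
```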
In some embodiments, other levels of assessment can be provided. For example, an assessment can include "poor technique", "moderate technique", and "good technique". Technique can also be assessed on a points scale, as in a GOALS analysis, with, for non-limiting example, very poor technique being 0 points, while very good technique is 5 points. In some embodiments, a single score is given for a procedure; in some embodiments, individual scores are given for the features, the component parts of a technique, for non-limiting example, as shown in Tables I and II above.
It should be noted that the system can simultaneously carry out both an assessment and training. In that case, after acquisition (610) and analysis (620) of at least one image and comparison (630) of the at least one actual metric with the at least one stored metric, both assessment (Fig. 7) and training (Fig. 8) can be carried out.
For a surgical intervention comprising a plurality of procedures, for each procedure, training can be carried out, assessment can be carried out, both can be carried out, or neither can be carried out. In preferred embodiments, which of these is done for any given procedure is independent of what is done for any other procedure.
It should be noted that at least one of assessment and training can be carried out off-line. If assessment, training or both is off-line, then the "acquisition of at least one image" above is retrieval of at least one stored image from a database.
In some embodiments, a best practice or optimal procedure can be compiled from at least one fragment of at least one procedure executed by at least one surgeon, it can be a computer-generated procedure and any combination thereof. The plurality of fragments can have been executed by a plurality of operators, thus combining the best parts of the different procedures to generate one best-practice procedure.
To enhance tracking, an object can be modified to make it more recognizable in an image. For example, a colored marker, a tracking pattern, an LED and any combination thereof can be affixed to at least one surgical object to aid in detection by a computer algorithm or to aid in identification by an operator. A minimum of three non-collinear markers is necessary for determining six DOF, if the sole means of determining tool location and orientation is a marker. A tracking system comprising a modifier, as described above, can provide high accuracy and reliability, but it depends on a clear line of sight between a tracked tool and an imaging device.
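For non-limiting example, with three or more non-collinear markers whose positions are known in the tool's own frame, the tool's six-DOF pose can be recovered by a standard least-squares rigid alignment; the sketch below (Python) uses the Kabsch construction on synthetic observations, with illustrative marker geometry:

```python
import numpy as np

def rigid_pose(model: np.ndarray, observed: np.ndarray):
    """Least-squares rotation R and translation p such that observed ~ R @ model + p."""
    cm, co = model.mean(0), observed.mean(0)
    H = (model - cm).T @ (observed - co)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, co - R @ cm

model = np.array([[0, 0, 0], [0.05, 0, 0], [0, 0.03, 0]])  # markers in tool frame [m]
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
observed = model @ R_true.T + np.array([0.1, 0.2, 0.3])    # markers seen by the camera
R, p = rigid_pose(model, observed)
print(np.allclose(R, R_true), np.round(p, 3))
```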
The tracking subsystem can comprise at least one sensor (such as, for non-limiting example, a motion sensor) on at least one surgical object, at least one processor configured to determine movement of at least one surgical object by determining change in position of at least one robot arm, and any combination thereof.
The sensor is preferably in communication with at least one surgical object; the communication can be electrical or mechanical and it can be wired or wireless.
The sensor can be, for non-limiting example, an electromagnetic sensor; an ultrasound sensor; an inertial sensor to sense the angular velocity and the acceleration of the tool or other item; a gyroscope, an accelerometer, an IMU, an RFID tag and any combination thereof. An infrared tracking system can be used, which can locate at least one object that has at least one infrared marker attached to it. The object being tracked does not require any wires, but a line of sight from the tracking system to the tracked objects must be kept clear.
A magnetic tracking system can also be used. At least one magnetic sensor is affixed to at least one surgical object, and a magnetic transmitter emits a field that at least one sensor can detect. However, the presence of objects in the operating room that affect or are affected by magnetic fields can interfere with tracking.
At least one IMU can be used. An IMU incorporates a plurality of sensors, such as an accelerometer, a gyroscope, a magnetometer, a velocity sensor and any combination thereof, to track at least one of orientation, position, velocity, angular velocity and acceleration of a surgical object. An IMU can transmit data wirelessly and has a high update rate. However, an IMU can experience increasing error over time (especially in position), and some types of IMU sensor can be sensitive to interference from other devices in an operating room. A rectification algorithm can be applied in order to reduce the effects of error accumulation.
Kinematic tracking can be used to determine at least one property of at least one surgical tool maneuverable by a robotic manipulator. A typical robotic manipulator comprises at least one jointed arm that manipulates at least one surgical tool on behalf of an operator. A robot arm can also include at least one sensor (such as, but not limited to, an encoder, a potentiometer, a motion sensor, and an accelerometer) that can accurately determine the state of each joint in the arm. If the fixed properties of the physical structure of the robot arm are known (lengths of links, twists, etc.), they can be combined with the dynamic joint values to form a mathematical model of the robot arm. At least one property of a manipulated surgical object, such as a position and orientation of at least one portion of the surgical object, can be computed from this model.
Positional information resulting from kinematic tracking is generally expressed in terms of a coordinate system that is specific to the robot. Techniques well known in the art can be used to generate a transformation that maps between the coordinate system relative to the robot and a coordinate system relative to an imaging device imaging the FOV.
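For non-limiting example, kinematic tracking and the robot-to-camera mapping can be sketched with homogeneous transforms as below (Python). The two-joint planar arm, the link lengths and the calibration transform are illustrative assumptions, not a model of any particular manipulator:

```python
import numpy as np

def rot_z(q):
    """Homogeneous transform: rotation by q about the z axis."""
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans_x(a):
    """Homogeneous transform: translation by a along the x axis (link length)."""
    T = np.eye(4)
    T[0, 3] = a
    return T

q1, q2 = np.deg2rad(30), np.deg2rad(45)            # joint angles from encoders
T_robot_tip = rot_z(q1) @ trans_x(0.30) @ rot_z(q2) @ trans_x(0.25)

T_cam_robot = np.eye(4)                            # assumed pre-calibrated transform
T_cam_robot[:3, 3] = [0.5, 0.0, 1.0]               # robot origin in the camera frame
tip_in_camera = (T_cam_robot @ T_robot_tip)[:3, 3]
print(np.round(tip_in_camera, 3))
```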
At least one LED can be used to measure distance between a surgical object and tissue, typically by reflecting from the tissue light emitted by an LED on a surgical object.
Any or all of the tracking means above can be wholly or partially inside the body, outside the body and any combination thereof. For non-limiting example, a surgical tool can have a marker attached near its handle (outside the body) and a colored patch near its tip (inside the body). In this non-limiting example, movement of the marker is tracked by an imaging device outside the body (outside-outside), while movement of the colored patch is tracked by an imaging device inside the body (inside-inside). In another non-limiting example, an EM emitter can be close to the tip of the surgical tool, while an EM sensor is attached to an operating table (inside-outside). Other combinations will be obvious to one skilled in the art.
An electromagnetic (EM) tracking system can be used to locate at least one surgical object or another object of interest. By computing the position and orientation of at least one small electromagnetic receiver on a surgical object, a dynamic, preferably real-time measurement of the position and orientation of a surgical object can be found.
In some embodiments, at least one electromagnetic receiver is attached to at least one hand of at least one operator, tracking the changing position and orientation of at least one surgical object by tracking the movement of the at least one hand. However, keeping a sensor in a stable position during the entire execution of a surgical procedure can be difficult. Furthermore, movement of an operator's hand need not be directly related to movement of a surgical object.
An electromagnetic tracking system does not need a clear line of sight, but is strongly affected by ferromagnetic objects, such as a steel tool or electronic equipment in a clinical environment, which can seriously degrade tracking accuracy by affecting local magnetic fields. Moreover, the need for wires in systems of this type can interfere with the use of laparoscopic instruments.
Combined methods can also be used, for non-limiting example, a combination of a passive optical marker and an EM sensor on a tool in order to minimize the effects of occasional blocking of the line-of-sight of the optical markers and distortion in the EM system. In addition, at least one force/torque sensor can be mounted on at least one surgical object. This exemplary combination can accurately measure position, orientation, velocity, acceleration, motion smoothness, and force applied by the surgical object, thereby enabling measurement of and assessment of movement metrics such as those, for non-limiting example, listed in Table III.
Ultrasound can be used in much the same manner as optical tracking. Commonly, three or more emitters are mounted on a surgical object to be tracked. Each emitter generates a sonic signal that is detected by a receiver placed at a fixed known position in the environment. Based on at least one sonic signal generated by at least one emitter, the system can determine at least one position of at least one portion of at least one surgical object by triangulation. Combining three receivers, an ultrasound tracker can also determine orientation of at least a portion of at least one surgical object. However, accuracy of an ultrasound tracker can suffer from the environment-dependent velocity of the sound waves, which varies with temperature, pressure and humidity. The loss of energy of an ultrasonic signal with distance also limits the range of tracking. In addition, acoustic tracking requires line-of-sight, lack of which can affect the quality of the signal.
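For non-limiting example, the triangulation step can be posed as a nonlinear least-squares problem over the measured emitter-to-receiver ranges (range = time of flight times speed of sound); the sketch below (Python, using scipy) assumes an illustrative receiver geometry and noise-free ranges:

```python
import numpy as np
from scipy.optimize import least_squares

receivers = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)  # known [m]
true_emitter = np.array([0.3, 0.4, 0.2])
ranges = np.linalg.norm(receivers - true_emitter, axis=1)  # measured distances

# Residual: difference between predicted and measured ranges for a candidate position.
residual = lambda p: np.linalg.norm(receivers - p, axis=1) - ranges
estimate = least_squares(residual, x0=np.zeros(3)).x
print(np.round(estimate, 3))  # recovers the emitter position
```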
In preferred embodiments, the surgical tools comprise neither markers nor sensors, although at least one sensor can be used on at least one robotic manipulator. In such preferred embodiments, the system determines tool position and orientation via analysis of at least one image, preferably an image provided by a laparoscopic imaging device, of which at least a portion is displayable and is therefore visible to an operator, as described above.
In some embodiments, the system further comprises at least one restricting mechanism configured to restrict the movement of at least one surgical object.
In some embodiments, a warning can be provided by use of a restricting mechanism, by a visual signal, by an audible signal, by a tactile signal and any combination thereof.
A visual signal can be a constant-color light, a changing-color light, a constant-brightness light, a varying-brightness light, a constant-size pattern, a changing-size pattern, a constant-shape pattern, a changing-shape pattern, and any combination thereof.

An audible signal can be a constant-pitch sound, a changing-pitch sound, a constant-loudness sound, a varying-loudness sound, and any combination thereof.
A tactile signal can be a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof. A tactile signal can be applied to a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.

Claims

1. A system for assessing a skill level for execution of at least one surgical procedure, comprising:
a. at least one imaging device configured to provide at least one image in a field of view of a surgical environment;
b. at least one processor in communication with said imaging device; said at least one processor is configured to (i) analyze said at least one image from said at least one imaging device, (ii) identify from said at least one image at least one spatial position of at least one item; and, (iii) calculate from said at least one spatial position at least one parameter of said at least one item, Pitem; and,
c. at least one communicable database configured to (i) store said at least one surgical procedure, said at least one surgical procedure being characterized by at least one stored parameter of at least one item, Pstored; and (ii) real-time store said at least one parameter Pitem of said at least one item;
wherein, from comparison between said at least one parameter of said at least one item, Pitem, and at least one of said Pstored, said skill level is providable.
2. The system of claim 1, wherein said item is selected from a group consisting of: at least one surgical tool, a light source, a blood vessel, an organ, a nerve, a ligament, a lesion, a tumor, smoke, fluid flow, bleeding, a fixed point, a critical point, and any combination thereof.
3. The system of claim 1, wherein said at least one stored parameter Pstored is derivable from at least one stored spatial position of said item.
4. The system of claim 1, wherein at least one feature is definable according to at least one said parameter Pitem.
5. The system of claim 1, wherein at least one predetermined feature is definable according to at least one said stored parameter Pstored.
6. The system of claim 5, wherein comparison of said at least one feature to said at least one predetermined feature generates at least one partial score.
7. The system of claim 6, wherein said at least one feature is selected from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
8. The system of claim 6, wherein said at least one predetermined feature is selected from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
9. The system of claim 6, wherein said at least one partial score is within a predetermined range.
10. The system of claim 9, wherein a total score is calculable as a sum of said at least one partial score.
11. The system of claim 10, wherein a member of a group consisting of said at least one partial score, said total score and any combination thereof is displayable.
12. The system of claim 6, wherein a smaller difference between said at least one feature and said at least one predetermined feature results in said at least one partial score being closer to at least one preferred score.
13. The system of claim 12, wherein said at least one preferred score is a maximum of said predetermined range.
14. The system of claim 6, wherein a GOALS score is generatable from a sum of said at least one partial score.
15. The system of claim 14, wherein said skill level is assessable from said GOALS score.
16. The system of claim 14, wherein said skill level is selected from a group consisting of: poor, moderate, good and any combination thereof.
17. The system of claim 1, wherein said at least one stored parameter Pstored is generatable from at least one stored surgical procedure.
18. The system of claim 1, wherein said at least one stored surgical procedure is generatable from a member of a group consisting of: a procedure executed by an experienced operator, an average of procedures by at least one experienced operator, an average of procedures executed by an operator being assessed, a simulation of a procedure, an average of simulations of a procedure executed by at least one operator, a simulation of a procedure generated by simulation software and any combination thereof.
19. The system of claim 1, wherein said skill level is providable from a second parameter derivable from a member of a group consisting of: a signal from a sensor, a forward kinematics calculation, an inverse kinematics calculation, a CT image, an MRI image, an X-ray image and any combination thereof.
20. The system of claim 19, wherein at least one overlay on at least a portion of at least one said image of said field of view of said surgical environment is selected from a group consisting of: said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one partial score, said GOALS score, said skill level, a suggestion, an instruction, a distance, an angle, an area, a volume, a size scale, information on a medical history of a patient, and any combination thereof.
21. The system of claim 19, wherein communication between said at least one sensor and said at least one item is selected from a group consisting of: mechanical communication, wired communication, wireless communication and any combination thereof.
22. The system of claim 19, wherein said sensor is selected from a group consisting of: an electromagnetic sensor, an ultrasound sensor, an inertial sensor to sense the angular velocity and the acceleration of the tool or other item, an accelerometer, a motion sensor, an IMU, a sensor wearable by an operator, a sensor attachable to a surgical object, an RFID tag attachable to a surgical object, an infrared sensor, a gyro-meter, a tachometer, a shaft encoder, a rotary encoder, a strain gauge and any combination thereof.
23. The system of claim 19, wherein said database is configured to store as a function of time a member of a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
24. The system of claim 23, wherein said database is configured to store, for any given time t, a member of a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
25. The system of claim 24, wherein said comparison is performable in a manner selected from a group consisting of: in real time, between a stored said at least one parameter Pitem and said at least one stored parameter Pstored, between a stored said at least one parameter Pitem and said stored at least one second parameter, and between a stored said at least one second parameter and said at least one stored parameter Pstored.
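Claims 23-25 amount to a time-indexed parameter log that supports both real-time and offline comparison. The following is a minimal sketch under assumed names (the ParameterStore class and the "P_item:"/"P_stored:" key convention are inventions for the example); the claims do not fix any database schema.

```python
from collections import defaultdict

# Minimal sketch of the time-indexed storage of claims 23-25 under assumed
# names: each parameter stream is a list of (t, value) samples, so a value can
# be looked up for any given time t and compared in real time or offline.
class ParameterStore:
    def __init__(self) -> None:
        self._series = defaultdict(list)             # name -> [(t, value), ...]

    def record(self, name: str, t: float, value: float) -> None:
        self._series[name].append((t, value))

    def at(self, name: str, t: float) -> float:
        """Return the most recent sample recorded at or before time t."""
        samples = [v for ts, v in self._series[name] if ts <= t]
        if not samples:
            raise KeyError(f"no sample for {name!r} at t={t}")
        return samples[-1]

store = ParameterStore()
store.record("P_item:speed", 0.0, 12.0)              # measured during the procedure
store.record("P_stored:speed", 0.0, 10.5)            # reference from a stored procedure
# Offline comparison between the stored P_item and P_stored at t = 0:
print(store.at("P_item:speed", 0.0) - store.at("P_stored:speed", 0.0))
```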
26. The system of claim 1, wherein said surgical procedure is selected from a group consisting of an identifiable unit, a surgical task, a complete procedure, and any combination thereof.
27. The system of claim 1, wherein said skill level is providable either in real time or offline.
28. The system of claim 1, wherein said at least one parameter Pitem is selected from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
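Several of the listed candidates for Pitem are directly computable from the sampled spatial positions of a tool tip. The sketch below uses assumed, conventional formulations (not fixed by the claims): path length as summed displacements, speed statistics, motion smoothness as RMS jerk (the third derivative of position), and economy of volume as path length per bounding-box volume.

```python
import numpy as np

# Illustrative computation (assumed formulas, not taken from the claims) of a
# few of the listed P_item candidates from a sampled 3D tool-tip trajectory.
def motion_metrics(positions: np.ndarray, dt: float) -> dict:
    steps = np.diff(positions, axis=0)                  # (N-1, 3) displacements
    path_length = np.linalg.norm(steps, axis=1).sum()
    velocity = steps / dt
    speed = np.linalg.norm(velocity, axis=1)
    jerk = np.diff(velocity, n=2, axis=0) / dt**2       # third derivative of position
    smoothness = float(np.sqrt((np.linalg.norm(jerk, axis=1) ** 2).mean()))
    bbox_volume = np.prod(positions.max(axis=0) - positions.min(axis=0))
    return {
        "path_length": float(path_length),
        "mean_speed": float(speed.mean()),
        "max_speed": float(speed.max()),
        "rms_jerk": smoothness,          # lower is smoother
        "eov": float(path_length / bbox_volume) if bbox_volume > 0 else float("inf"),
    }

t = np.linspace(0, 1, 101)
positions = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # synthetic path
print(motion_metrics(positions, dt=t[1] - t[0]))
```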
29. The system of claim 1, wherein said at least one stored parameter Pstored is selected from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
30. The system of claim 1, wherein said at least one second parameter is selected from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
31. The system of claim 1, wherein said system is additionally configured to indicate a correction to said surgical procedure.
32. The system of claim 31, wherein said correction is determinable from a member of a group consisting of: a comparison between said at least one parameter Pitem and said at least one stored parameter Pstored, a comparison between said at least one parameter Pitem and said at least one second parameter, a comparison between said at least one second parameter and said at least one stored parameter Pstored, and any combination thereof.
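A hedged sketch of the comparison-driven correction of claims 31-32 follows: the sign and magnitude of the difference between a measured Pitem and the stored Pstored select the correction message. The tolerance and wording are illustrative assumptions, not values from the specification.

```python
from typing import Optional

# Hedged sketch of the comparison-driven correction in claims 31-32: the sign
# and size of the difference between a measured P_item and the stored P_stored
# select the correction message. Threshold and wording are assumed for the demo.
def suggest_correction(p_item: float, p_stored: float, name: str,
                       tolerance: float = 0.10) -> Optional[str]:
    relative = (p_item - p_stored) / p_stored
    if abs(relative) <= tolerance:
        return None                      # within tolerance: nothing to correct
    direction = "reduce" if relative > 0 else "increase"
    return f"{direction} {name} (off by {relative:+.0%} from the stored reference)"

print(suggest_correction(p_item=14.2, p_stored=10.0, name="tool speed"))
```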
33. The system of claim 31, wherein said correction is indicatable by a member of a group consisting of a visual indication, an audible indication and any combination thereof.
34. The system of claim 33, wherein said visual indication is overlayable on said at least one image.
35. The system of claim 33, wherein said visual indication is selected from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
36. The system of claim 33, wherein said audible indication is selected from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
37. The system of claim 1, additionally comprising a restricting mechanism configured to restrict movement of at least one said surgical object.
38. The system of claim 1, wherein said system is additionally configured to provide a warning.
39. The system of claim 38, wherein, if said difference between said at least one feature and said at least one predetermined feature is outside a predetermined range, said warning is providable.
40. The system of claim 38, wherein said warning is indicatable by a member of a group consisting of said restricting mechanism, a visual signal, an audible signal, a tactile signal and any combination thereof.
41. The system of claim 40, wherein said visual signal is selected from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
42. The system of claim 40, wherein said audible signal is selected from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
43. The system of claim 40, wherein said tactile signal is selected from a group consisting of a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof.
44. The system of claim 40, wherein said tactile signal is applicable to a member of a group consisting of: a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.
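The warning logic of claims 38-40 reduces to a range check followed by dispatch to the configured indication channels. A minimal sketch, with the allowed range and channel names assumed for the example:

```python
# Minimal sketch of the warning logic in claims 38-40, with the allowed range
# and channel names assumed for the example: a deviation outside the
# predetermined range triggers a warning on each configured channel.
def maybe_warn(feature: float, predetermined: float, allowed: float,
               channels=("visual", "audible", "tactile")) -> list[str]:
    deviation = abs(feature - predetermined)
    if deviation <= allowed:
        return []                        # inside the predetermined range: no warning
    return [f"{channel} warning: deviation {deviation:.2f} exceeds allowed {allowed:.2f}"
            for channel in channels]

for message in maybe_warn(feature=3.4, predetermined=2.0, allowed=1.0):
    print(message)
```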
45. The system of claim 1, wherein said database is configured to store at least one record of at least one procedure, each said at least one record comprising a member of a group consisting of: said at least one image, said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one stored feature, said skill level, said GOALS score and any combination thereof.
46. The system of claim 45, wherein said at least one record is selectable based upon at least one identifier.
47. The system of claim 46, wherein said at least one identifier is storable in said database and said at least one identifier is selected from a group consisting of: an identifier of an operator, an identifier of an operating room, a physical characteristic of said operating room, start time of a surgical procedure, end time of a surgical procedure, duration of a surgical procedure, date of a surgical procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a surgical procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.
48. The system of claim 47, wherein said physical characteristic of said operating room is selected from a group consisting of: temperature, humidity, size, time of cleaning, date of cleaning, cleaning procedure, cleaning material, type of lighting and any combination thereof.
49. The system of claim 47, wherein said physical characteristic of said patient is selected from a group consisting of: age, height, weight, body mass index, health status, medical status, and any combination thereof.
50. The system of claim 47, wherein said outcome of said surgical procedure is selected from a group consisting of: a successful aspect, a partially successful aspect, a partial failure in an aspect, a complete failure in an aspect, and any combination thereof.
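Claims 45-50 describe procedure records keyed by identifiers. One possible record layout and an identifier-based selection, as a sketch in which every field name is an assumption, not taken from the specification:

```python
from dataclasses import dataclass, field

# Sketch of one possible layout for the records of claims 45-50; every field
# name here is an assumption, not taken from the specification.
@dataclass
class ProcedureRecord:
    operator_id: str
    operating_room_id: str
    start_time: str                      # e.g. ISO 8601 timestamps
    end_time: str
    patient_id: str
    outcome: str                         # e.g. "successful aspect"
    goals_score: float
    extra_identifiers: dict = field(default_factory=dict)   # any further identifiers

records = [
    ProcedureRecord("op-17", "OR-3", "2016-12-06T09:00", "2016-12-06T10:45",
                    "pt-204", "successful aspect", 21.5),
]
# Selecting a record by identifier, as in claim 46:
print([r for r in records if r.operator_id == "op-17"])
```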
51. The system of claim 1, wherein said image is selected from a group consisting of: a 2D image, a 3D image, a panoramic image, a high resolution image, a 3D image reconstructed from at least one 2D image, a 3D stereo image and any combination thereof.
52. A method for assessing a skill level for execution of at least one surgical procedure, comprising steps of:
a. providing a system for assessing said skill level comprising:
i. at least one imaging device configured to provide at least one image in a field of view of a surgical environment;
ii. at least one processor in communication with said imaging device; said at least one processor is configured to (i) analyze at least one image from said at least one imaging device, (ii) identify from said at least one image at least one spatial position of at least one item; and, (iii) calculate from said at least one spatial position at least one parameter of said at least one item, Pitem; and
iii. at least one communicable database configured to (i) store said at least one surgical procedure; said at least one surgical procedure is characterized by at least one stored parameter of at least one item, Pstored, said at least one stored parameter derivable from said at least one spatial position of said item; (ii) real-time store said at least one spatial position parameter, Pitem, of said at least one item;
b. acquiring, via said imaging device, at least one said image of a field of view;
c. analyzing said at least one image and determining, from said analysis, said at least one spatial position of said at least one item;
d. calculating from said at least one image said at least one parameter Pitem;
e. storing said at least one parameter Pitem; and
f. comparing said at least one parameter Pitem and said at least one stored parameter Pstored,
thereby providing said skill level.
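Read end to end, steps b-f of claim 52 form a simple data flow. In the sketch below, the image analysis of steps b-c is a stub returning a synthetic tool position, since the specification's detector is not reproduced here; the choice of path length for Pitem, the reference Pstored, and the pass threshold are all assumptions for the demonstration.

```python
import numpy as np

# End-to-end sketch of steps b-f of claim 52. The image analysis of steps b-c
# is replaced by a stub returning a synthetic tool position; the choice of
# path length for P_item, the reference P_stored and the pass threshold are
# all assumed for the demo.
def locate_item(image: np.ndarray, k: int) -> np.ndarray:
    """Stub for steps b-c: identify the item's spatial position in frame k."""
    return np.array([0.40 + 0.001 * k, 0.18, 0.07])   # placeholder drift, metres

def run_assessment(frames: list, p_stored: float, tolerance: float) -> str:
    positions = np.array([locate_item(f, k) for k, f in enumerate(frames)])   # step c
    p_item = float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())  # step d
    stored = {"P_item": p_item}                                               # step e
    deviation = abs(stored["P_item"] - p_stored) / p_stored                   # step f
    return "good" if deviation <= tolerance else "needs improvement"

frames = [np.zeros((480, 640)) for _ in range(30)]    # dummy image stream
print(run_assessment(frames, p_stored=0.030, tolerance=0.2))
```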
53. The method of claim 52, additionally comprising step of selecting said item from a group consisting of: at least one surgical tool, a light source, a blood vessel, an organ, a nerve, a ligament, a lesion, a tumor, smoke, fluid flow, bleeding, a fixed point, a critical point, and any combination thereof.
54. The method of claim 52, additionally comprising step of deriving said at least one stored parameter Pstored from at least one stored spatial position of said item.
55. The method of claim 52, additionally comprising step of defining at least one feature according to at least one said parameter Pitem.
56. The method of claim 52, additionally comprising step of defining at least one predetermined feature according to at least one said stored parameter Pstored.
57. The method of claim 56, additionally comprising step of generating at least one partial score by comparing said at least one feature to said at least one predetermined feature.
58. The method of claim 57, additionally comprising step of selecting said at least one feature from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
59. The method of claim 57, additionally comprising step of selecting said at least one predetermined feature from a group consisting of: depth perception, bimanual dexterity, efficiency, tissue handling, autonomy, difficulty, smooth conduct of the operation, autonomy of the operator, leadership ability, cooperation with assistants, proper positioning of the access ports, display of the operating field in the center of the monitor, clear display of the target organ, proper use of the retractor, proper selection and appropriate use of surgical tool on dominant side, proper use of surgical tool on non-dominant side, proper methods of traction and tissue handling, appropriate and smooth use of the correct type of energy in tissue ablation, correct layer of tissue dissection, correct identification and proper coagulation or clipping of blood vessels, suturing, knot-tying, cutting, and any combination thereof.
60. The method of claim 57, additionally comprising step of generating said at least one partial score within a predetermined range.
61. The method of claim 60, additionally comprising step of calculating a total score as a sum of said at least one partial score.
62. The method of claim 61, additionally comprising step of displaying at least one member of a group consisting of said at least one partial score, said total score and any combination thereof.
63. The method of claim 57, wherein a smaller difference between said at least one feature and said at least one predetermined feature results in said at least one partial score being closer to at least one preferred score.
64. The method of claim 63, additionally comprising step of selecting said at least one preferred score to be a maximum of said predetermined range.
65. The method of claim 57, additionally comprising step of generating a GOALS score from a sum of said at least one partial score.
66. The method of claim 65, additionally comprising step of assessing said skill level from said GOALS score.
67. The method of claim 65, additionally comprising step of selecting said skill level from a group consisting of: poor, moderate, good and any combination thereof.
68. The method of claim 52, additionally comprising step of generating said at least one stored parameter Pstored from at least one stored surgical procedure.
69. The method of claim 68, additionally comprising step of generating said at least one stored surgical procedure from a member of a group consisting of: a procedure executed by an experienced operator, an average of procedures by at least one experienced operator, an average of procedures executed by an operator being assessed, a simulation of a procedure, an average of simulations of a procedure executed by at least one operator, a simulation of a procedure generated by simulation software and any combination thereof.
70. The method of claim 52, additionally comprising step of providing said skill level from a second parameter derivable from a member of a group consisting of: a signal from a sensor, a forward kinematics calculation, an inverse kinematics calculation, a CT image, an MRI image, an X-ray image and any combination thereof.
71. The method of claim 70, additionally comprising step of overlaying on at least a portion of at least one said image of said field of view of said surgical environment a member of a group consisting of: said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one partial score, said GOALS score, said skill level, a suggestion, an instruction, a distance, an angle, an area, a volume, a size scale, information on a medical history of a patient, and any combination thereof.
72. The method of claim 70, additionally comprising step of selecting communication between said at least one sensor and said at least one item from a group consisting of: mechanical communication, wired communication, wireless communication and any combination thereof.
73. The method of claim 70, additionally comprising step of selecting said sensor from a group consisting of: an electromagnetic sensor, an ultrasound sensor, an inertial sensor to sense the angular velocity and the acceleration of the tool or other item, an accelerometer, a motion sensor, an IMU, a sensor wearable by an operator, a sensor attachable to a surgical object, an RFID tag attachable to a surgical object, an infrared sensor, a gyro-meter, a tachometer, a shaft encoder, a rotary encoder, a strain gauge and any combination thereof.
74. The method of claim 70, additionally comprising step of storing as a function of time a member of a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
75. The method of claim 74, additionally comprising step of configuring said database to store, for any given time t, a member selected from a group consisting of said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one predetermined feature and any combination thereof.
76. The method of claim 75, additionally comprising step of performing said comparison in a manner selected from a group consisting of: in real time, between a stored said at least one parameter Pitem and said at least one stored parameter Pstored, between a stored said at least one parameter Pitem and a stored said at least one second parameter, between a stored said at least one second parameter and said at least one stored parameter Pstored, and any combination thereof.
77. The method of claim 52, additionally comprising step of selecting said surgical procedure from a group consisting of an identifiable unit, a surgical task, a complete procedure, and any combination thereof.
78. The method of claim 52, additionally comprising step of providing said skill level either in real time or off-line.
79. The method of claim 52, additionally comprising step of selecting said at least one parameter Pitem from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
80. The method of claim 52, additionally comprising step of selecting said at least one stored parameter Pstored from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
81. The method of claim 52, additionally comprising step of selecting said at least one second parameter from a group consisting of: time to execute said surgical procedure; accuracy of movement of at least one said surgical tool, accuracy of energy use; amount of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, accuracy of coordination between movement of said at least one surgical tool and movement of at least one second surgical tool, time, idle time, approach time, speed, maximum speed, speed profile, acceleration, motion smoothness, bimanual dexterity, search time, path length, distance efficiency, depth perception, transit profile, deviation on horizontal plane, deviation on vertical plane, response orientation, economy of area (EOA), economy of volume (EOV), number of movements, force range, interquartile force range, integral of the force, integral of the grasping force, integral of the Cartesian force, first derivative of the force, second derivative of the force, third derivative of the force, tissue handling, autonomy and any combination thereof.
82. The method of claim 52, additionally comprising step of indicating a correction to said surgical procedure.
83. The method of claim 82, additionally comprising step of determining said correction from a member of a group consisting of a comparison between said at least one parameter Pitem and said at least one stored parameter Pstored, a comparison between said at least one parameter Pitem and said at least one second parameter, a comparison between said at least one second parameter and said at least one stored parameter Pstored, and any combination thereof.
84. The method of claim 82, additionally comprising step of indicating said correction by a member of a group consisting of a visual indication, an audible indication and any combination thereof.
85. The method of claim 84, additionally comprising step of overlaying said visual indication on said at least one image.
86. The method of claim 84, additionally comprising step of selecting said visual indication from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
87. The method of claim 84, additionally comprising step of selecting said audible indication from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
88. The method of claim 52, additionally comprising steps of providing a restricting mechanism and of restricting movement of at least one said surgical object.
89. The method of claim 52, additionally comprising step of configuring said system to provide a warning.
90. The method of claim 89, additionally comprising step of providing said warning if said difference between said at least one feature and said at least one predetermined feature is outside a predetermined range.
91. The method of claim 89, additionally comprising step of indicating said warning by a member of a group consisting of said restricting mechanism, a visual signal, an audible signal, a tactile signal and any combination thereof.
92. The method of claim 91, additionally comprising step of selecting said visual signal from a group consisting of a constant-color pattern, a varying-color pattern, a constant-shape pattern, a varying-shape pattern, a constant-size pattern, a varying-size pattern, an arrow, a word and any combination thereof.
93. The method of claim 91, additionally comprising step of selecting said audible signal from a group consisting of a constant-pitch sound, a varying-pitch sound, a constant-loudness sound, a varying-loudness sound, a word and any combination thereof.
94. The method of claim 91, additionally comprising step of selecting said tactile signal from a group consisting of a vibration, a constant-pressure signal, a varying-pressure signal, a stationary signal, a moving signal and any combination thereof.
95. The method of claim 91, additionally comprising step of applying said tactile signal to a member of a group consisting of: a head, a neck, a torso, an arm, a wrist, a hand, a finger, a leg, an ankle, a toe and any combination thereof.
96. The method of claim 52, additionally comprising step of configuring said database to store at least one record of at least one procedure, each said at least one record comprising a member of a group consisting of: said at least one image, said at least one parameter Pitem, said at least one stored parameter Pstored, said at least one second parameter, said at least one feature, said at least one stored feature, said skill level, said GOALS score and any combination thereof.
97. The method of claim 96, additionally comprising step of selecting said at least one record based upon at least one identifier.
98. The method of claim 97, additionally comprising steps of storing said at least one identifier in said database and of selecting said at least one identifier from a group consisting of: an identifier of an operator, an identifier of an operating room, a physical characteristic of said operating room, start time of a surgical procedure, end time of a surgical procedure, duration of a surgical procedure, date of a surgical procedure, an identifier of a patient, a physical characteristic of a patient, an outcome of a surgical procedure, length of hospital stay for a patient, a readmission for a patient, and any combination thereof.
99. The method of claim 98, additionally comprising step of selecting said physical characteristic of said operating room from a group consisting of: temperature, humidity, size, time of cleaning, date of cleaning, cleaning procedure, cleaning material, type of lighting and any combination thereof.
100. The method of claim 98, additionally comprising step of selecting said physical characteristic of said patient from a group consisting of: age, height, weight, body mass index, health status, medical status, and any combination thereof.
101. The method of claim 98, additionally comprising step of selecting said outcome of said surgical procedure from a group consisting of: a successful aspect, a partially successful aspect, a partial failure in an aspect, a complete failure in an aspect, and any combination thereof.
102. The method of claim 52, additionally comprising step of selecting said image from a group consisting of: a 2D image, a 3D image, a panoramic image, a high resolution image, a 3D image reconstructed from at least one 2D image, a 3D stereo image and any combination thereof.
PCT/IL2016/051307 2015-12-07 2016-12-06 Autonomic goals-based training and assessment system for laparoscopic surgery WO2017098506A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP16872549.7A EP3414753A4 (en) 2015-12-07 2016-12-06 Autonomic goals-based training and assessment system for laparoscopic surgery

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US201562263749P 2015-12-07 2015-12-07
US62/263,749 2015-12-07
US201662290693P 2016-02-04 2016-02-04
US62/290,693 2016-02-04
US62/290,963 2016-02-04
US201662334464P 2016-05-11 2016-05-11
US62/334,464 2016-05-11
US201662336672P 2016-05-15 2016-05-15
US62/336,672 2016-05-15

Publications (3)

Publication Number Publication Date
WO2017098506A1 true WO2017098506A1 (en) 2017-06-15
WO2017098506A8 WO2017098506A8 (en) 2017-07-27
WO2017098506A9 WO2017098506A9 (en) 2017-11-16

Family

ID=59012730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2016/051307 WO2017098506A1 (en) 2015-12-07 2016-12-06 Autonomic goals-based training and assessment system for laparoscopic surgery

Country Status (2)

Country Link
EP (1) EP3414753A4 (en)
WO (1) WO2017098506A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116884570B (en) * 2023-09-06 2023-12-12 南京诺源医疗器械有限公司 Intraoperative real-time simulation curative effect evaluation system based on image processing


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050142525A1 (en) * 2003-03-10 2005-06-30 Stephane Cotin Surgical training system for laparoscopic procedures
WO2012060901A1 (en) * 2010-11-04 2012-05-10 The Johns Hopkins University System and method for the evaluation of or improvement of minimally invasive surgery skills

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999042978A1 (en) * 1998-02-19 1999-08-26 Boston Dynamics, Inc. Method and apparatus for surgical training and simulating surgery
US20060058919A1 (en) * 2004-08-31 2006-03-16 Andres Sommer Medical examination and treatment apparatus
US20070021738A1 (en) * 2005-06-06 2007-01-25 Intuitive Surgical Inc. Laparoscopic ultrasound robotic surgical system
US20100120006A1 (en) * 2006-09-15 2010-05-13 The Trustees Of Tufts College Dynamic Minimally Invasive Training and Testing Environments
US20100248200A1 (en) * 2008-09-26 2010-09-30 Ladak Hanif M System, Method and Computer Program for Virtual Reality Simulation for Medical Procedure Skills Training
WO2010097771A2 (en) * 2009-02-26 2010-09-02 Surgica Robotica S.P.A. Method and apparatus for surgical training
WO2012101286A1 (en) * 2011-01-28 2012-08-02 Virtual Proteins B.V. Insertion procedures in augmented reality
US20130224709A1 (en) * 2012-02-24 2013-08-29 Arizona Board Of Regents, On Behalf Of The University Of Arizona Portable Low Cost Computer Assisted Surgical Trainer and Assessment System
US20140127660A1 (en) * 2012-11-02 2014-05-08 Digital Surgicals Pte. Ltd. Apparatus, Method and System for Microsurgical Suture Training

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3414753A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10810907B2 (en) 2016-12-19 2020-10-20 National Board Of Medical Examiners Medical training and performance assessment instruments, methods, and systems
CN111788637A (en) * 2017-12-28 2020-10-16 爱惜康有限责任公司 Usage and technical analysis of surgeon/personnel performance relative to baseline to optimize device utilization and performance for both current and future procedures
EP3762853A4 (en) * 2018-03-08 2021-11-24 Duke University Electronic identification tagging systems, methods, applicators, and tapes for tracking and managing medical equipment and other objects
CN112702967A (en) * 2018-09-14 2021-04-23 柯惠Lp公司 Surgical robotic system and method of tracking use of surgical instruments thereof
US11918423B2 (en) 2018-10-30 2024-03-05 Corindus, Inc. System and method for navigating a device through a path to a target location
RU197549U1 (en) * 2018-11-27 2020-05-13 Федеральное государственное автономное образовательное учреждение высшего образования "Национальный исследовательский Томский государственный университет" (ТГУ, НИ ТГУ) HUMAN HAND MOVEMENT DYNAMIC CORRECTION
CN109659001A (en) * 2018-12-18 2019-04-19 延安大学 A kind of anti-cancer supervisory systems and method
CN109659001B (en) * 2018-12-18 2023-08-04 延安大学 Cancer prevention supervision system and method

Also Published As

Publication number Publication date
EP3414753A4 (en) 2019-11-27
WO2017098506A9 (en) 2017-11-16
WO2017098506A8 (en) 2017-07-27
EP3414753A1 (en) 2018-12-19

Similar Documents

Publication Publication Date Title
AU2019352792B2 (en) Indicator system
WO2017098506A1 (en) Autonomic goals-based training and assessment system for laparoscopic surgery
CN108472084B (en) Surgical system with training or assisting function
WO2017098505A1 (en) Autonomic system for determining critical points during laparoscopic surgery
US20210369354A1 (en) Navigational aid
EP3413774A1 (en) Database management for laparoscopic surgery
EP3414686A1 (en) Autonomic detection of malfunctioning in surgical tools
Bihlmaier et al. Learning dynamic spatial relations
US20240071243A1 (en) Training users using indexed to motion pictures
Bihlmaier et al. Endoscope robots and automated camera guidance
CN115551432A (en) Systems and methods for facilitating automated operation of devices in a surgical space
CN114845654A (en) Systems and methods for identifying and facilitating intended interaction with a target object in a surgical space
US20240029858A1 (en) Systems and methods for generating and evaluating a medical procedure
US20230302646A1 (en) Systems and methods for controlling and enhancing movement of a surgical robotic unit during surgery
WO2023178092A1 (en) Systems and methods for generating customized medical simulations
GB2608016A (en) Feature identification
GB2611972A (en) Feature identification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16872549

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016872549

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016872549

Country of ref document: EP

Effective date: 20180709